Friday, September 8, 2017

How Infrastructure Monitoring and Tooling are connected with Performance Engineering


Over the last five to six years, I have had the chance to work on a few applications in the CRM and financial close domains. While working as a QA Lead and Scrum Master, I also worked as a performance tester. Fortunately, on these applications our customers were willing to invest time in performance engineering and monitoring through different Application Performance Management (APM) systems targeting application performance and infrastructure monitoring. Beyond the APM systems, I was also able to set up CI environments and dashboards.

In this article, I would like to share my experience with application performance monitoring and how it connects with performance engineering. I will start with the main performance approaches and discuss how tools fit into each, so the rest of the discussion is clear.


Three Performance Engineering Approaches:

In performance engineering, we have three approaches that cover the entire life cycle: Reactive, Proactive and Predictive.


First: Reactive approach

When we are dealing with software applications, there are a lot of sudden performance concerns caused by various issues: requirements (SLAs) not identified properly, customer database problems, problems in the user groups or user personas, and user load increasing unexpectedly. This approach is needed, but it cannot be recommended as the best way of ensuring product performance. Hence we look to the second approach.


Second: Proactive approach

In this approach, we target a sustainable way of ensuring product performance. Nowadays, when we start a software product implementation, we need to think about the non-functional aspects more than ever, especially performance engineering, because issues discovered late can force us to change the entire application architecture in the final stages. Hence, starting performance assessments in the early stages of the life cycle is a must. In an agile environment, it should start from Sprint 0 and continue throughout all the sprints.

Furthermore, we need to focus more on continuous performance assessments and monitoring (at the unit and functional levels) rather than relying on manual assessments by developers or testers. With a continuous approach, we can eliminate the cost of poor quality and apply remedies more effectively and efficiently. Using a monitoring dashboard, we can see release- or commit-wise performance figures and detect issues as early as possible, which also increases visibility for the customers who care about application performance and improvements. For example, if we establish a CI environment that triggers in the nightly build, we can find performance issues easily by spotting trends in the graphs rather than waiting for a hardening sprint or system tests at the end of the release. As depicted in the following images, we can display figures on the dashboard:



  
The following figure is taken from a daily performance monitoring dashboard configured in Jenkins with the JMeter plugin. Every morning the job is triggered and passes all the data to the dashboard plugin.
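Alongside the dashboard, a nightly job like this can also gate on the results automatically. Here is a minimal sketch in Python, assuming JMeter's CSV results format (an `elapsed` column in milliseconds) and an illustrative 2000 ms average threshold; the threshold and sample data are assumptions, not values from the project:

```python
import csv
import io

def check_thresholds(jtl_csv: str, max_avg_ms: float = 2000.0) -> bool:
    """Return True when the average 'elapsed' time stays under the threshold."""
    reader = csv.DictReader(io.StringIO(jtl_csv))
    times = [float(row["elapsed"]) for row in reader]
    avg = sum(times) / len(times)
    print(f"average response time: {avg:.1f} ms")
    return avg <= max_avg_ms

# Three illustrative samples in JMeter's CSV results format.
sample = "timeStamp,elapsed,label\n1,1200,Login\n2,1800,Search\n3,900,Logout\n"
print(check_thresholds(sample))  # average 1300.0 ms -> True
```

A check like this can fail the nightly build as soon as a commit pushes the trend over the line, instead of waiting for someone to read the graphs.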



Some useful links for configuring the dashboard are as follows:




Third: Predictive approach

Now I am going to talk about the approach that most project teams, customers and stakeholders neglect. However, the name of this third approach gives you the whole idea behind it. Once our application is in production, it is not enough to deal with development and CI environments. We must work with the production figures and trends, analyzing current user behavior to identify usage patterns. In other words, we should be able to predict or forecast future loads and their impact (e.g. identifying seasonal peaks such as Black Friday).
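As a toy illustration of forecasting from production trends, the sketch below fits a least-squares line through a series of daily request counts and extrapolates one day ahead. The data is invented for illustration; real forecasting would account for seasonality, which a straight line cannot capture:

```python
def forecast_next(counts):
    """Fit a least-squares line through the series and predict the next value."""
    n = len(counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(counts) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept

# Illustrative daily request counts pulled from production monitoring.
daily_requests = [1000, 1100, 1250, 1400, 1500]
print(forecast_next(daily_requests))  # -> 1640.0
```

Even a crude trend line like this gives capacity planning something concrete to react to before the load arrives.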

As I mentioned in the first paragraph, there are a lot of APM (Application Performance Management) systems on the market to collect this valuable data and deliver a great digital customer experience. Real user interactions and critical transactions can be monitored and analyzed down to code level. We can categorize these as standalone applications and systems that come with the cloud platforms.

Following are some standalone APM systems available on the market (only a few):


















I had the chance to work with New Relic, and it has most of the features one would expect in such an application: very good visualization and data analysis, including finding the most cumbersome SQL queries. Following is a sample dashboard using New Relic Insights:



If we are using cloud platforms, we can use Application Insights with Azure and CloudWatch with AWS.


    


Both Azure and AWS have built-in solutions for keeping data and tracking metrics, with online alerting. These tools are there to give you the big picture of what is going on in your platform-as-a-service environment, like standing on Mount Everest and watching everything below. With Azure Application Insights, we can build its own dashboard or show the data through Power BI.

The first thing you see after you log in to the Azure portal is the dashboard. Here you can bring together the charts (server response time, page load time, errors, etc.) that are most important to you and to your stakeholders.

*Following are the areas of the dashboard:



  1. Navigate to specific resources such as your app in Application Insights: Use the left bar.
  2. Return to the current dashboard, or switch to other recent views: Use the drop-down menu at top left.
  3. Switch dashboards: Use the drop-down menu on the dashboard title.
  4. Create, edit, and share dashboards in the dashboard toolbar.
  5. Edit the dashboard: Hover over a tile and then use its top bar to move, customize, or remove it.
*Taken from https://docs.microsoft.com

If the Azure dashboard is not good enough for you, as I mentioned earlier, we can easily use the Power BI tool.

Power BI is a suite of business analytics tools to analyze and share data. Its dashboards provide a wide-angle view for business users, with their most important metrics in one place.

Some useful links for configuring Power BI:



Beyond the standard Power BI reports (adapter), we can also get the data via exported Analytics queries and Azure Stream Analytics. In this way, we can write any query we want and export it to Power BI. In our current project, we mostly work with Azure Stream Analytics, as it gives good control over the data.
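For the query route, the Application Insights REST API accepts Analytics (Kusto) queries over HTTP. The sketch below only builds the request URL; the app id is a placeholder, and a real call would also need an `x-api-key` header with an API key generated in the portal:

```python
import urllib.parse

# Hypothetical value -- replace with your own Application Insights app id.
APP_ID = "your-app-id"

def build_query_url(app_id: str, query: str) -> str:
    """Build an Application Insights REST API query URL for a Kusto query."""
    base = f"https://api.applicationinsights.io/v1/apps/{app_id}/query"
    return base + "?" + urllib.parse.urlencode({"query": query})

# Average server response time per hour, as an Analytics (Kusto) query.
url = build_query_url(APP_ID, "requests | summarize avg(duration) by bin(timestamp, 1h)")
print(url)
```

The JSON that comes back can then be fed into Power BI, or anywhere else, without going through the standard adapter.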


 


I hope you now understand the difference between these approaches and where we can use monitoring systems and dashboards to improve customer visibility, application performance and product quality.

Friday, January 6, 2017

Do not just follow the process, observe, improvise and adapt

During the last few years, working as a Scrum Master responsible for process activities, I learned a lot about ISO/Scrum rules and their behavior. As a starter, I tended to do exactly what the process tried to explain and implement. I tried to match each process item to the project workflow and made an extra effort to maintain process conformity. Sometimes I created documents and graphs that did not contribute much to the smooth progress of the project. However, I did not want to think differently, as I thought it would harm the process and the process marks would go down (NB: in our company, there are marks for process conformity).

But when a lot of issues arise from particular project behaviors and stakeholder requirements, we tend to apply different principles and methodologies, and exceptions become more realistic to what we are actually doing inside the project. In other words, we try to think out of the box and apply the necessary process changes to be more practical, visible and productive while still conforming to the process.

In the next few paragraphs, I will explain what those particular project behaviors and stakeholder requirements were, and how we changed the way we work in order to be both ISO-conformant and practical within an agile Scrum environment.


What are the particular project behaviors and stakeholder requirements?

In that period, from the pool of projects, ours was the most prioritized in the customer's stack, and we were pressured to release features very frequently. They did not give much support to the process work, since the mindset was toward release items and time to market. First they gave us ten big features and set a deadline, which is not well suited to an agile Scrum environment. Initially we tried to explain the process implications, but that was not given much consideration.

At that time we had two-week sprints, and we had to create a lot of artifacts to satisfy the project and process requirements for each sprint. We also did not have much clarity about the sprint outcome, and it did not tally with the deliverables, as we could not produce a shippable product within two weeks. But as product owners, they expected a release after every sprint.

Also, due to the big regression effort required before a release, we asked for a test automation suite several times, but it was not accepted, as they wanted features above all else (though that is not the right approach, and maybe they knew it too). As with test automation, we asked to do a thorough performance assessment, but the answer was the same.

While all this was happening, as a Scrum Master, I was also under pressure from both the process and product work. But one day I realized this was not going to work and would end up nowhere. So I started to create a process, a way of working, suited to the current situation that could still maintain process conformity. In other words, a way to simplify the process and work pattern to cater to the project's delivery needs and avoid extra work.


What did we do to become more realistic about process, improve visibility and avoid extra work?

We did a few things to make the change and those are as follows:

1.  Learn the process well

First of all, we went to the process head to learn the company process well, not only the documented points but the essence, one by one. What he said was that if we needed any changes, we should make them and add them to the exception document. Most importantly, he said we did not need to do exactly what the process document says, but rather do the things that bring the project what the process hopes to add.


2.  Learn different methodologies and apply them

In that period, we followed Scrum, but actually it was "Scrum, but...". Hence, we introduced a new but sustainable methodology called Scrumban, a combination of Scrum and Kanban. The application was in production, in a maintenance-plus-new-features phase, and we still needed some of the best Scrum practices such as daily stand-ups, grooming sessions, demos, retrospectives, etc. Scrumban gave us more flexibility in the ceremonies. We also changed the sprint length to one month, which minimized overhead: we no longer needed to start and close sprints, and create documents, every two weeks. Hence, we had more time for project work while doing the bare minimum to satisfy the process needs and product quality.

Following is a diagram I created to depict the full process of the application (start to release):



3.  Learn how to make the proper artifacts and improve customer visibility, which we really need.

Then we came up with proper artifacts: an exception document, release-related statistics, project metrics for all the releases, the progress status of the ten major features, and all the bugs added/resolved, all visible to every stakeholder. Stakeholders especially like to see current statistics, so we must make sure the visibility is there; if not, they will suddenly come and ask questions we have no clue about. I am glad to say that the CTO at the time looked at the progress status page frequently to understand the progress and releases, and he was always adding comments, so it was a living document. Proper artifacts like these add more value than traditional documents.

Finally, I hope this blog helps you understand the real meaning of "Do not just follow the process. Observe, improvise and adapt." Thank you.




Tuesday, November 11, 2014

How to create a file with exact amount of bytes to test the boundary value

Once I needed to create a file with an exact number of bytes (file size) to test the boundary values of a file upload feature in my project. The requirement was to be able to upload a file less than or equal to 100 MB, so I had to create files to test this scenario.

Creating a file smaller than 100 MB is very simple. But according to QA principles, we need to check the boundary values of that feature with file sizes of 99 MB, 100 MB and 101 MB, which is not easy, as we have to create three files with exact byte counts. This technique is called Boundary Value Analysis.

Then I found a nice command line that creates such files without wasting our valuable time.

All you need to do is create a .bat file with the following command and execute it.

fsutil file createnew filename length 

Ex: Let's say you need to create a 100 MB file (1024 * 1024 * 100 = 104,857,600 bytes):

fsutil file createnew 100MB_File 104857600
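If you are not on Windows, or want to script all three boundary files at once, the same result can be produced with a short Python sketch (file names here are illustrative):

```python
import os
import tempfile

MB = 1024 * 1024

def create_file(path: str, size_bytes: int) -> None:
    """Create a file of exactly size_bytes by seeking and writing one byte."""
    with open(path, "wb") as f:
        if size_bytes:
            f.seek(size_bytes - 1)
            f.write(b"\0")

# Create the three boundary files: 99 MB, 100 MB and 101 MB.
with tempfile.TemporaryDirectory() as d:
    for mb in (99, 100, 101):
        path = os.path.join(d, f"{mb}MB_File")
        create_file(path, mb * MB)
        print(path, os.path.getsize(path))
```

Like `fsutil`, the seek-and-write trick avoids actually writing 100 MB of data, so all three files appear instantly.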


Thursday, October 23, 2014

Usability Engineering Basics - Part 3 (First)

This time, I thought I would write a few words about usability engineering, as I promised some months back. If we need to surface the real usability issues our customers have, we need to conduct a practical usability test with appropriate infrastructure and tools. Then we can collect data to analyze and take the correct actions to solve any usability issues. This is called a Moderated Usability Test; let's discuss how we can plan and run one.

1. Planning a Moderated Usability Test

Planning the test should be the first part, and it is vital when conducting this kind of complex test. The testing scope will differ depending on the usability requirements.


What should you test?

First, as usability engineers, we need to decide which functionalities we are going to cover. We can collect valuable information from the BA, design and development teams and select features that are new, often used, error prone and especially critical. After that we should prioritize them and write scenarios that represent typical user workflows. Scenarios should be:
  • Small. Time is costly during usability testing.
  • To the point. The scenario should have a specific definition of done.
  • Realistic. The scenario should be a normal activity that the average user does.
  • In the user's context. Selected scenarios should relate to the user's context, and we need to understand the participant's connection with the system.

Following is an example for a scenario:
You need to add a contact with a first name, last name and contact image in your company's CRM web application. Add that information and click the "Save" button. Make sure to let me know when you are done.


Who is going to evaluate the application?

Who you choose to evaluate the application will have a massive effect on the outcome of the test. Imagine you're creating an application that reconciles accounts. Your customers are people who deal with account reconciliation. That is a huge group of people, so narrow your focus to a particular user profile.

While you're creating the user profiles, you may realize that you have two or more equally important subgroups, like people who administer the system (Business Admin) and people who use it (Normal User).

Test with the proper participants. The rule of thumb is to test with no more than five users, running as many small tests as you can afford.

According to the industry standards, the common curve for usability test is as follows:


Reading the curve: as soon as you collect data from the first three users, you have already captured about half of the usability issues. After you have insights from the fifth user, almost all of the usability issues have been captured, and there is no need to spend more time and money. In other words, as you add more and more users, you learn less and less, because you keep seeing the same things again and again.
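This curve is commonly modeled with the formula found = 1 - (1 - λ)^n, where λ is the probability that a single user reveals a given problem; Nielsen's often-cited estimate is λ ≈ 0.31, which is the assumption in this sketch:

```python
def issues_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Proportion of usability problems found by n users under a
    constant per-user discovery probability (Nielsen's model)."""
    return 1 - (1 - discovery_rate) ** n_users

# How the proportion of discovered issues grows with the number of users.
for n in (1, 3, 5, 15):
    print(n, round(issues_found(n), 2))
```

With λ = 0.31, five users already uncover roughly 85% of the problems, which is why additional participants yield diminishing returns.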


Who is going to Observe and how?

When talking about observation, we have two main observation types. The first is the 'obtrusive observation' method; the other is the 'unobtrusive observation' method, which concentrates on observing what the test user does while refraining as much as possible from influencing her/him by explaining the design or asking questions. In the unobtrusive observation method, we can have the following combination of observers:
  • At least 2 developers.
  • At least 1 tester.
  • And a BA.


Where Are You Going to Test?

Now we know what we are planning to test and who is going to evaluate the system, but we have another question: where are we going to conduct the test? The location can be as simple as a meeting room or as complex as a purpose-built facility. If we need to conduct an advanced usability test, we need video or audio recording equipment for analysis. Conduct formal tests in an environment that simulates normal use as much as possible. Morae is a good example of a usability tool we can use to run such tests, with advanced data analysis and presentation of results.


In coming articles, I'll continue this series and hope to address topics such as how to create a script, how to run the tests, and how to analyze the results and prepare the report.

Friday, August 1, 2014

An approach to build a team beyond Cross Functional

A few years back, I met one of my uncles, who lives in Australia. We had a nice chat about our families. After a while he asked me about my job. I said I am a software QA engineer. He was surprised and asked what my responsibilities are, and I listed them one by one. He then asked, "Other than the normal QA work, do you have other responsibilities, to help each other and to add value to your project?" I answered that I did not. He told me that he too was a QA person, in the pharmaceutical industry, and that besides his usual day-to-day responsibilities he works in other areas to help different people and to add value to the process and the team. He explained that the main advantage is that people in the team have knowledge of each other's work and can help out when a person is unable to attend to it (perhaps because they are on leave or sick).

In this article, I am not going to explain anything about product performance engineering, but rather how to build an 'Extended Cross-functional Team'.

First of all, what is a Cross-functional Team?

According to Wikipedia, "A cross-functional team is a team composed of at least three members from diverse functional entities working together towards a common goal. This team will have members with different functional experiences and abilities, and who will likely come from different departments within the organization."
In the software industry, we have the same experience while working on a project. For example, we may have an Oracle expert, an SQL DB expert, a security expert, a product performance expert, an automation expert, overall QA expert(s), overall dev expert(s) and domain expert(s). It seems like a nice cross-functional team. But say the automation expert is on leave for his/her wedding, and suddenly there is an issue in the automated suite. Do we have a resource in our team to fix it? The answer is no, as our test automation resource is on leave.
But if that team had a resource with basic knowledge of the test suite, he or she could have a look and fix it, or call the person who knows the entire process and find a solution as quickly as possible. That is the main reason we talk about an Extended Cross-functional Team, which goes beyond the normal cross-functional team.

How do we build an 'Extended Cross-functional Team'?

In order to build these extended teams, we need to target the individuals working on a project. After a lot of effort, experts came up with the T model, which we can use as a basic guide.

Model is as follows:


The main idea behind this model is expressed by the vertical and horizontal lines. The thick vertical line represents the core competences the resource has (in other words, a lot about a bit). The thin horizontal line represents his/her other capabilities, the areas of basic knowledge (in other words, a bit about a lot).

As a QA or Dev engineer, we have to select these according to the following two factors:
  1. The core area (the vertical line of the T model) is decided by the project requirements and skill set. Ex: Dev team members should have worked on the .Net / MS SQL platform; builds should be tested by QA members.
  2. Other areas (the horizontal line of the T model) are decided by the team or the individual. Ex: One member likes to learn performance testing and will be added as the backup for the performance engineering area; the team decides member A should work on the Oracle side in order to back up related areas (in this case the individual can disagree with the team decision and ask for a different area).

As a team, can we just do this?

Yes, but it is not efficient. We also have to come up with some statistics to address the gaps. In order to do so, we have to perform a gap analysis of the current competences and build up the resources to bridge the gap.

 

In this scenario, we can create a table as follows to see the areas we have to fill.

The color scheme is as follows:
  • Expert
  • Have good experience
  • Know about the area but need guidance
  • Do not know anything

Now you can see how the domain, tech, DB and non-functional areas are distributed among the team members. In particular, CS (Domain), .Net, C++, N JS (Tech) and ST (Nonfunctional) have a huge gap. The team lead can use this valuable data to create an action plan for filling the gap with existing team members and creating a perfect 'Extended Cross-functional Team'.
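The same gap analysis the table shows can be done programmatically. Below is a minimal sketch: the member names, areas and levels are all made up for illustration, with levels 3 = expert, 2 = good experience, 1 = needs guidance, 0 = no knowledge, mirroring the color scheme:

```python
# Hypothetical competence matrix (names and areas are illustrative).
# Levels: 3 = expert, 2 = good experience, 1 = needs guidance, 0 = none.
matrix = {
    "Alice": {"CS": 3, ".Net": 0, "Oracle": 2, "ST": 0},
    "Bob":   {"CS": 0, ".Net": 1, "Oracle": 0, "ST": 0},
    "Carol": {"CS": 1, ".Net": 0, "Oracle": 3, "ST": 1},
}

def gap_areas(matrix, min_level=2):
    """Areas where no team member reaches min_level -- the backup-training gaps."""
    areas = {area for skills in matrix.values() for area in skills}
    return sorted(area for area in areas
                  if all(skills.get(area, 0) < min_level for skills in matrix.values()))

print(gap_areas(matrix))  # -> ['.Net', 'ST']
```

The output is exactly the list the team lead needs to feed into the action plan: areas with no one at or above "good experience".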

Hope this will help you.