Friday, October 4, 2019

API Performance Testing Guideline using JMeter


In this document, we are going to walk through basic API load testing with Apache JMeter, an open-source tool, and learn how to apply load to an API.


1. Prerequisites

Java 8 or 9 for Apache JMeter 5.0

JMeter Tool:
First, download JMeter from the Apache JMeter website. Once you have downloaded the zip file, extract it to the location where you want to work with it. Then go to the /bin folder inside that extracted folder and run jmeter.bat (Windows) or ./jmeter.sh (Linux/macOS) to open Apache JMeter in GUI mode. Please note that it is advised not to use the GUI when running load tests; the GUI is recommended only for test creation and test debugging.


If that was successful, you should see the JMeter GUI as follows:



2. Let’s Start with a simple example

1. What is a test plan:

A Test Plan can be viewed as a container for running tests. It defines what to test and how to go about it. A proper test plan will include elements such as Thread Groups, Logic Controllers, Timers, Configuration Elements, etc.

Please note that a test plan must contain at least one Thread Group.

You can add or remove the above elements by right-clicking on the name of the Test Plan, selecting Add from the menu, and choosing whichever element you want to add.



First, we need to create the script. Follow these steps when doing JMeter scripting:


2. Adding a Thread Group

As we learned, a test plan needs at least one Thread Group in order to run the script, so let's first add a Thread Group to the Test Plan.

Right click on Test Plan -> Add ->Threads -> Thread Group



3. Adding an HTTP Request

Next, add an HTTP Request sampler under the Thread Group. As before, right-click on the Thread Group and select the menu items as shown in the picture.

For scripting purposes, you can keep the thread properties at "1". These need to be changed when doing the execution (discussed further under execution).



If successful, you will see your HTTP Request element listed under the Thread Group element as follows (here I have renamed it to "Get_Issue_Details").



4. Configure HTTP Request

Now let’s configure our HTTP Request sampler element with the relevant values for Protocol, Server Name, Port Number, HTTP Request Method, and Path, as follows.

REST APIs mainly use four request methods. Configure the request details based on which one you need:

  •          GET – Retrieve information about the REST API resource
  •          POST – Create a REST API resource
  •          PUT – Update a REST API Resource
  •          DELETE – Delete a REST API resource
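The four methods above can be sketched in Python using the standard library's urllib, without sending anything over the network. The host example.com, the issue ID, and the request bodies below are placeholders, not part of any real API:

```python
# Building (but not sending) the four main REST request types with urllib.
import json
import urllib.request

BASE = "https://example.com/rest/api/2/issue"

def build_request(method, url, body=None):
    """Build an HTTP request object for the given method and optional JSON body."""
    data = json.dumps(body).encode() if body is not None else None
    headers = {"Content-Type": "application/json"} if data else {}
    return urllib.request.Request(url, data=data, headers=headers, method=method)

get_req    = build_request("GET",    f"{BASE}/C-3023")                      # retrieve a resource
post_req   = build_request("POST",   BASE, {"summary": "New issue"})        # create a resource
put_req    = build_request("PUT",    f"{BASE}/C-3023", {"summary": "Edit"}) # update a resource
delete_req = build_request("DELETE", f"{BASE}/C-3023")                      # delete a resource

print(get_req.get_method(), get_req.full_url)
```

In JMeter, the same choice is made with the Method drop-down on the HTTP Request sampler.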


First we will look at the GET request. Add the main details highlighted in the screenshot below. You can also give the HTTP Request a meaningful transaction name, as mentioned above.

  •          Protocol
  •          Server Name/ IP
  •          Method
  •          Path





According to the above image, we will take the Jira API as our application, with the configuration below, but you can use your own APIs as well:
  •          Protocol - https
  •          Server Name/ IP – issu.cam.se (replace with the correct URL, since this is a dummy)
  •          Method - GET
  •          Path - /rest/api/2/issue/C-3023

NB: C-3023 is our dummy Jira issue ID
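How JMeter combines those fields into the full request URL can be shown with a small sketch, using the same dummy values (issu.cam.se is a placeholder host, not a real server):

```python
# Assembling the HTTP Request sampler fields into one URL.
protocol = "https"
server = "issu.cam.se"
port = None               # leave empty in JMeter to use the protocol default (443 for https)
path = "/rest/api/2/issue/C-3023"

url = f"{protocol}://{server}" + (f":{port}" if port else "") + path
print(url)  # https://issu.cam.se/rest/api/2/issue/C-3023
```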



5. Adding a Listener and running the test

To see the results, we should add a listener, as follows. Here we are adding 'View Results Tree'.




Now run the test and view the results in the View Results Tree. We can see that the request has failed due to a permission issue.

6. Adding an HTTP Authorization Manager

The HTTP Authorization Manager is a configuration element that can be added as follows. It specifies the username and password used to pass the authorization.



In this element, we will add the application's authorization parameters for your user. See below:



Again, run the test and view the results in the View Results Tree. Now we can see the request passes.
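For Basic authentication, what the Authorization Manager effectively does is attach an Authorization header containing the base64-encoded "username:password" pair. A minimal sketch (the credentials below are dummies):

```python
# Constructing the HTTP Basic auth header that the Authorization Manager sends.
import base64

def basic_auth_header(username, password):
    """Return the Authorization header for HTTP Basic auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("jira_user", "secret"))
```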









7. Adding a Response Assertion

Right-click the Thread Group, go to Assertions, and add a Response Assertion. Here we can check various parameters.

The main idea of the response assertion is to make sure that the response is what we expect. In our example, we verify that the response is for the exact issue we requested.



Now, add a pattern to test. Here I am adding the issue heading as a text response pattern.


Now the test should pass, as follows. (If you want to test the assertion itself, change the pattern to a wrong text and run the test; it should fail.)
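The "Contains" check that the Response Assertion performs boils down to a substring test on the response body. A sketch, with a made-up response body and pattern:

```python
# What a "Contains" Response Assertion does: pass if the pattern occurs
# in the response body, fail otherwise.
def assert_response_contains(response_body: str, pattern: str) -> bool:
    """Return True when the response body contains the expected pattern."""
    return pattern in response_body

body = '{"key": "C-3023", "fields": {"summary": "Fix login redirect"}}'
print(assert_response_contains(body, "Fix login redirect"))  # expected: True
print(assert_response_contains(body, "Wrong heading"))       # expected: False
```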



8. Adding a CSV Data Set Config for dynamic values

Next, identify whether there are any dynamic values, such as usernames, session IDs, or task IDs. Dynamic values can be supplied in different ways. If the value is user input, such as a username, you can proceed with parameterization. If the value is auto-generated by a previous response, you need to do correlation (this will be discussed later).

Now we will discuss how to do parameterization. There are two main methods:
  •          User Parameters
  •          CSV Data Set Config

In this example, we are using CSV Data configuration.


First, create your CSV file. The best practice is to save the CSV file in the same location as the JMeter script. In the CSV file, separate columns with commas and do not add any headings. In this first example, we have only one value.



Now go to the CSV Data Set Config and give the file name (file location). If you saved the CSV in the same location as the script, you can just give the file name; this makes it easier when moving the script.

Next, give the variable names in the same order as the columns in the CSV file. If there are multiple variables, they must also be separated by commas.


Then, replace the hard-coded values with the variable names using the ${Variable_Name} format. Here, we are replacing the issue ID in the path of the HTTP Request.
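Per iteration, the CSV Data Set Config reads one row, binds it to the variable name, and JMeter substitutes it into the path. A sketch of that behaviour; the inline data and the Issue_ID variable name are examples (in JMeter the data would sit in a CSV file next to the .jmx script):

```python
# Simulating CSV-driven parameterization: one column, no header row.
import csv
import io

csv_data = io.StringIO("C-3023\nC-3024\nC-3025\n")
path_template = "/rest/api/2/issue/${Issue_ID}"

paths = []
for row in csv.reader(csv_data):
    issue_id = row[0]  # column order matches the variable-name order
    paths.append(path_template.replace("${Issue_ID}", issue_id))

print(paths)
```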





9. Apply load

As mentioned at the beginning, now it is time to apply the load. Select the Thread Group and change the following parameters according to your requirements:
  • Number of Threads (users) – the number of virtual users to simulate (here 10)
  • Ramp-up Period – how long it takes to start all the users (here 20 seconds)
  • Loop Count – the repeat count (e.g. 10 users * 5 loops = 50 samples)
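The arithmetic behind those settings can be written out as a quick sketch: total samples are threads times loops, and with a linear ramp-up JMeter starts roughly one new thread every ramp_up / threads seconds:

```python
# Thread Group arithmetic for the example values above.
threads = 10   # Number of Threads (users)
ramp_up = 20   # Ramp-up Period, in seconds
loops = 5      # Loop Count

total_samples = threads * loops      # 10 * 5 = 50 samples
start_interval = ramp_up / threads   # a new user roughly every 2.0 seconds

print(total_samples, start_interval)
```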





Now run the test and collect the results as follows. Make sure to add the 'Summary Report' listener beforehand.




You can see the average response time, error rate, etc. You can add more listeners if needed.
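To show how the Summary Report's headline numbers are derived, here is a sketch over a made-up list of (elapsed_ms, success) samples rather than a real results file:

```python
# Deriving average response time and error rate from raw samples.
samples = [(120, True), (150, True), (95, True), (300, False), (135, True)]

elapsed = [ms for ms, _ in samples]
avg_response_ms = sum(elapsed) / len(elapsed)
error_rate = sum(1 for _, ok in samples if not ok) / len(samples)

print(f"Average: {avg_response_ms:.1f} ms, Error %: {error_rate:.1%}")
```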

Sunday, February 11, 2018

Why Quality Assurance Engineers should be UX Engineers

Software Quality Assurance has been a buzzword for quite some time, but as the agile transformation hit the ground, questions arose such as 'will it die out soon, or do we need a separate gang to assure quality?'. Still, it stands tall alongside the other areas of the software field, with ever more interesting methodologies and practices. Even so, as QAs we keep being questioned by various parties about the real need for a QA, or in other words, what we actually do other than testing functionality in the application and filing bugs.

I can remember, a few years back, one of my developer colleagues questioning a designation we had added to our QA track. He complained that we had taken an engineering title that belonged to them.

Whatever was said and done in the past, the future of QA depends on the changes we incorporate and how effective they are. In this article, I would like to emphasize one factor that can give us a boost and differentiation: how QAs can be UX engineers.

In our projects, we normally work directly with product owners and stakeholders to get the requirements. This is a better setup than having requirements come through a BA or another person dealing with the customers, and it gives us a very good chance to gain leverage over traditional mechanisms.


Let me start like this: according to my experience and knowledge, User Experience (UX) has two areas, which can be depicted as follows:



According to the diagram, the first area basically relates to the UI/UX engineers on the project. They need to make sure that the website is designed according to the users' expectations. For example, if we are creating a website for Scandinavian countries, those users have a different colour palette and unique thinking patterns that we need to read.

Beyond that, as the second area in the diagram shows, QAs play a big part in ensuring the functional and non-functional user experience. Why are the functional and non-functional areas related to user experience? What is the relationship? I would say that, even though it is not often highlighted, QA is all about ensuring functional and non-functional user expectations and improving the experience. Now let's look at how exactly this happens.


Functional User Experience:

In an application, the first priority should go to functionality. I will take 'Login to the system' as my user story in all the cases. Let's say the login functionality is not working properly: the user is directed to a subpage instead of the home page. Obviously, the user experience is ruined; the user is not satisfied, and there will be a lot of customer complaints. To fix this issue, we need to find the root cause and examine the quality assurance process to understand why it was not caught: did we cover it in the test case suite or not? If it is covered, why was it not executed? So the point is, whatever we call it, functionality directly affects UX.


Nonfunctional User Experience:

Even though the first priority should go to the functional areas, a lot of big problems come from the nonfunctional part of the application. For example, on my project we had to spend two weeks finding a performance problem. Fortunately we succeeded, but at a cost. Some experts say that nonfunctional requirements and problems are not clear enough; they are like the hidden part of the iceberg. Hence, user experience in nonfunctional areas is crucial, and we need to spend more time on it.





Security:

I will take the same example. Let's say that, due to a weak password validation mechanism, an unauthorized user entered the system and deleted some valuable information. Now some real users cannot access the site because their login data is missing from the DB. This has a big impact on the user experience (UX) and is really bad for the reputation of the application and the brand. Sometimes an entire business can collapse within a few days due to such incidents. Hence, all QA engineers need to concentrate on the security aspect and learn how to apply these tests in the testing process.


Performance:

At the peak time of day, if some regular customers cannot log in to the site due to a performance issue, it will lead to a big loss of reputation and business growth. This often happens with Sri Lankan government websites; since they are free of charge, we are not bothered much. But if it is a paid service, we will face a lot of problems from the users. Therefore, whether we call it Performance Engineering or something else, it impacts the user experience (UX). Hence, we should look at these nonfunctional areas from the beginning of the production cycle rather than waiting until the end.


I am not going to describe reliability and maintainability, but those are also big pieces of the UX puzzle. Therefore, when you get the requirements from customers or end users, we need to check the application's SLAs (Service Level Agreements), security background (who the user groups are), etc. I encourage you to keep a non-functional requirement gathering document that we can send to customers or end users to fill in with their requirements. If they are not technical enough to fill it in, it is better that we fill it in based on past experience and send it for validation. Following are a few sample questions for gathering application performance data:




Finally, what I have to tell you is that even though we have divided into several areas such as Quality Assurance, Security Engineering, Performance Engineering, etc., User Experience (UX) is the umbrella at the top. Hence, all QA engineers should think from the user's perspective first, before going into the technical separation mentioned above. In other words, we should be UX engineers rather than QA engineers.

Thursday, December 7, 2017

How to create front-end performance Test Cases and beyond

Front-end performance testing is crucial in today's performance testing and engineering domain. Nowadays, we are more or less all implementing and testing web-based applications. Hence, we need to look at client-side performance while we are taking care of the server side (back-end).

As quality assurance engineers, we are methodical. Whatever we have to test, we tend to create test cases in order to keep the test results visible and manageable. QAs also use these TCs to track the history of a feature and, in a performance test, to record the benchmark figures from before the optimization, so we can compare results and show them to our customers, who are keen on the ROI.

In this blog, I am going to describe and show you how to create initial front-end performance test cases that you can use for performance testing. The main motivation behind these test cases is to avoid doing this in an unprofessional, ad hoc way.


First of all, let's have a look at the following cycle, which describes all the areas, including mitigation.



1. Info/Data: First of all, you need to read articles, books, and blogs to gather the data and information needed. Without that knowledge, you cannot create any valuable scenarios.

2. Learn / R&D: Then, learn from them and do the R&D work to clarify things. Get the ideas behind them and properly structure your knowledge base.

3. Create TCs: Now you have the knowledge to create test cases, so create them according to your domain, SLAs, etc., while still following common practices.

4. Execute/Assess: Execute the test cases against the SUT (System Under Test) and update the TCs. Make sure to record all the performance values in the comment area, and take screenshots if necessary (I prefer to take screenshots).

5. Log: All findings should be logged after you have discussed them with your developers. This should be a collaborative approach, not a single-handed decision.

6. Optimize: Once you know there is an issue, and after you log it in your tracking tool (e.g. Jira), developers can work on it. Developers fix the issue, and then QAs test against the benchmark to check whether there is any improvement.

Based on my experience, do not create separate hardening sprints to fix these non-functional issues. Instead, if you would normally fix 10 functional issues in a sprint, take only 9 and pick one from the non-functional backlog.

7. Re-assess: After the improvements are in place, QAs can do another round of official testing to check the results. Make sure to update your test cases with the new figures so that all stakeholders can see them.

Likewise, the cycle should be executed regularly.


So let’s start describing front-end test cases. These are test cases I created based on my requirements, so feel free to create your own.

1. As the first part of my test cases, I chose to validate the front-end performance rules, as they are of the utmost importance, targeting web and mobile apps.

2. Next, I analyze content size, type, and global rate, as well as how the user perceives performance and timing. Here I used two very popular tools that are freely available.

3. As the third area, I chose to analyze the Critical Rendering Path (CRP), both through manual analysis and using the Lighthouse tool in the Chrome browser. Here we manually analyze the CRP to find issues, and Lighthouse also offers good options for checking the CRP.

I hope you got an idea about front-end testing, test case writing, and beyond. Make sure to create your own TCs according to your requirements, and please share them with us so we can learn from them too.


Friday, September 8, 2017

How Infrastructure Monitoring and Tooling are connected with Performance Engineering


In the last five to six years, I have had the chance to work on a few applications in the CRM and financial closure domains. While working as QA Lead and Scrum Master, I was also able to work as the performance tester. Fortunately, on these applications, our customers were willing to spend time on performance engineering and monitoring through different Application Performance Management systems targeting application performance and infrastructure monitoring. Beyond the APM systems, I was also able to set up CI environments and dashboards.

In this article, I would like to share my experience with application performance monitoring and how it is connected to Performance Engineering. First, I will present the main performance approaches and talk about how we can connect the tools, so that everyone can clearly understand what I am going to discuss.


Three Performance Engineering Approaches:

In Performance Engineering, there are three approaches that together cover the entire life cycle: Reactive, Proactive, and Predictive.


First: Reactive approach

When we are dealing with software applications, there are a lot of sudden performance concerns due to various issues: requirements not being identified properly (SLAs), customer database-related issues, problems with user groups or user personas, and the user load increasing unexpectedly. This approach is needed, but it cannot be considered the best way of ensuring product performance. Hence we look to the second approach.


Second: Proactive approach

Here, we target a sustainable way of ensuring product performance. Nowadays, when starting a software product implementation, we need to think about the non-functional aspects more than ever, especially performance engineering aspects, as neglecting them can force issues such as changing the entire application architecture at a late stage. Hence, starting performance assessments in the early stages of the life cycle is a must. In an agile environment, this should start from Sprint 0, and it should continue throughout all the sprints.

Furthermore, we need to focus more on continuous performance assessments and monitoring (at unit and functional level) rather than manual assessments by developers or testers. With a continuous approach, we can eliminate the poor cost of quality and its remedies more effectively and efficiently. Using a monitoring dashboard, we can see per-release or per-commit performance figures to detect issues as soon as possible, and it increases visibility for customers who care about application performance and improvements. For example, if we establish a CI environment that triggers on the nightly build, we can find performance issues easily by spotting trends in the graphs, rather than waiting for the hardening sprint or system tests at the end of the release. As depicted in the following images, we can display figures on the dashboard:
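The kind of check a nightly trend graph makes visible can be sketched as a simple regression flag: alert when the latest build's average response time exceeds the historical baseline by more than a threshold. The figures and the 20% threshold below are invented for illustration:

```python
# Flagging a performance regression against a baseline of nightly builds.
def is_regression(history_ms, latest_ms, threshold=0.20):
    """True when the latest average exceeds the historical mean by more than `threshold`."""
    baseline = sum(history_ms) / len(history_ms)
    return latest_ms > baseline * (1 + threshold)

nightly_averages = [210, 205, 215, 208]      # avg response time per build, in ms
print(is_regression(nightly_averages, 310))  # expected: True
print(is_regression(nightly_averages, 220))  # expected: False
```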



  
The following is a figure I took from the daily performance monitoring dashboard, configured in Jenkins with the JMeter plugin. Every morning it is triggered and passes all the data to the dashboard plugin.



Some useful links for configuring the dashboard are as follows:




Third: Predictive approach

Now I am going to talk about the approach that most project teams, customers, and stakeholders neglect. The name of this third approach gives you the whole idea behind it. When our application is in production, it is not enough to deal with development and CI environments. We must also work with the production figures and trends, analyzing current user behavior to identify usage patterns. In other words, we should be able to predict or forecast future loads and their impact (e.g. to identify seasons or dates such as Black Friday).
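As a toy sketch of the predictive idea, the next period's peak load can be forecast from recent production figures with a simple moving average (real APM tooling does far more sophisticated forecasting; the numbers below are purely illustrative):

```python
# Forecasting the next value as the mean of the most recent observations.
def moving_average_forecast(loads, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = loads[-window:]
    return sum(recent) / len(recent)

monthly_peak_users = [1200, 1350, 1500, 1480, 1620]
print(moving_average_forecast(monthly_peak_users))
```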

As I mentioned in the first paragraph, there are a lot of APM (Application Performance Management) systems on the market to capture this valuable data and deliver a great digital customer experience. Real user interactions and critical transactions can be monitored and analyzed down to code level. We can categorize these as standalone applications and systems that come with the cloud platforms.

Following are a few of the standalone APM systems available on the market.

I had a chance to work with the New Relic tool, and it has most of the features you would expect in such an application: very good visualization and data analysis, including finding the most cumbersome SQL queries, etc. Following is a sample dashboard which uses New Relic Insights:



If we are using cloud platforms, then we can use Application Insights with Azure and CloudWatch with AWS.


    


Both Azure and AWS have built-in solutions for keeping data and tracking metrics, with online alerting. These tools let you see the big picture of what is going on in your platform-as-a-service environment, like standing on Mount Everest and watching everything below. With Azure Application Insights, we can build its own dashboard or show the data through Power BI.

The first thing you see after logging in to the Azure portal is the dashboard. Here you can bring together the charts (server response time, page view load time, errors, etc.) that are most important to you and your stakeholders.

*Following are the areas of the dashboard:



  1. Navigate to specific resources such as your app in Application Insights: Use the left bar.
  2. Return to the current dashboard, or switch to other recent views: Use the drop-down menu at top left.
  3. Switch dashboards: Use the drop-down menu on the dashboard title.
  4. Create, edit, and share dashboards in the dashboard toolbar.
  5. Edit the dashboard: Hover over a tile and then use its top bar to move, customize, or remove it.
*Took from https://docs.microsoft.com

If the Azure dashboard is not good enough for you, as I mentioned earlier, we can easily use the Power BI tool.

Power BI is a suite of business analytics tools for analyzing and sharing data. Its dashboards provide a wide-angle view for business users, with their most important metrics in one place.

Some useful links for configuring Power BI:



Beyond the standard Power BI reports (adapter), we can also get the data via exported Analytics queries and Azure Stream Analytics. This way, we can write any query we want and export it to Power BI. In our current project, we mainly work with Azure Stream Analytics, as it gives good control over the data.


 


I hope you now understand the difference between these approaches, and where we can use monitoring systems and dashboards to improve customer visibility, application performance, and product quality.