Cathy Zhang, Author at Perficient Blogs https://blogs.perficient.com/author/cathyzhang/ Expert Digital Insights Thu, 18 Jan 2018 20:53:43 +0000 en-US hourly 1 https://blogs.perficient.com/files/favicon-194x194-1-150x150.png Cathy Zhang, Author at Perficient Blogs https://blogs.perficient.com/author/cathyzhang/ 32 32 30508587 QA Solution for SharePoint Upgrade https://blogs.perficient.com/2018/01/18/qa-solution-for-sharepoint-upgrade/ https://blogs.perficient.com/2018/01/18/qa-solution-for-sharepoint-upgrade/#respond Thu, 18 Jan 2018 20:53:43 +0000 https://blogs.perficient.com/delivery/?p=10150

1. Goals for testing upgrades

As SharePoint matures, SharePoint Online and SharePoint 2016 bring more new features and capabilities, and enterprises can provide more functionality to business users with less customization and at a lower cost. Whatever upgrade solution is chosen, the QA strategy needs to ensure, first, that sites are upgraded seamlessly, and then that new OOTB features can leverage existing customizations more efficiently and satisfy new business requirements.

To reach this goal, you should find out:

  • Customizations that you keep in your new environment, so that you can plan for how to deal with them during upgrades.
  • Customizations that you plan to replace with new OOTB features, so that you can plan for how to deal with retirement risks and functionality differences.
  • Authentication applications that you plan to integrate with SharePoint, so that you can plan for user permission and user picker verification.
  • Site Collection lists that will be migrated to SharePoint 2016 or kept in existing environments, so that you can plan the schedule for different site tests.
  • Performance requirement of the whole SharePoint farm.
  • Integrated systems that affect the functionality of SharePoint.
  • SharePoint Upgrade/Migration tools that will be applied, so that you can plan how to use these tools for QA work, like Sharegate.
  • Teams and related business owners that the SharePoint upgrade will involve or affect, so that you can plan how to collaborate with them during the upgrade and final cutover.

2. Steps to verify upgrades

After you set up the new environment for SharePoint, you can transfer all the settings and install customizations. The main steps are:

  • Identify customizations and install them on the farm.
  • Verify the upgraded and attached databases for the SharePoint farm.
  • Verify customized features, which is often the most complicated part.
  • Verify upgraded site collections and sites with list/libraries, pages and so on.
  • Verify integrations with the third-party applications.

Before the formal site collections and sites upgrade, perform a trial upgrade on a test farm first. Examine the results to determine the following:

  • Whether the service application data was upgraded as expected
  • Whether servers, web applications, services, and service applications are configured correctly and up and running
  • The appearance of upgraded sites
  • The time to allow for post-upgrade troubleshooting

2.1. Identify and verify customizations

Export all the solutions and customizations to a list and identify the source of each customization, if possible. For example, are they third-party add-ins or features that were customized in-house? Once you identify the source, you can check for an upgraded version for SharePoint 2016.

For customized features, identify the following points so that you can plan how to deal with them during the upgrade.

  • Solutions – legacy solutions deployed as-is carry less risk, because their assembly structure and resource dependencies are unchanged. If the legacy solutions need to be reorganized, it is important to define the classification rules and identify all the dependencies.
  • Functionalities of customized features and dependencies among them.
  • Review what's new and what has changed in the new version of SharePoint, and review the source code of customizations to adjust them accordingly if the changes are major.
  • Usage of customized features – this helps you understand their behavior in SharePoint 2013 and their business importance. Sometimes you will find that specific features are never used.
  • Potential risks of upgrading or decommissioning.

The customized features need to be verified in different sites to cover the business scenarios; they can then be verified with a smoke test in other upgraded sites.

2.2. Upgrade database and verification

You cannot achieve your testing goals unless you use your actual data. However, this might not always be a realistic option for initial testing. You can assemble a subset of data from representative sites in your environment. If you want to first test by using a subset of your data, be sure that the subset has the following characteristics:

  • The data subset contains sites that are typical of the sites that you support in your environment.
  • The size and complexity of the data subset closely resembles the actual size and complexity of your environment.

Verify upgrade status for the database:

  • Use the upgrade status page in central admin
  • Review the log files to look for errors and warnings
  • Validate the upgraded environment, including service applications, site collections, search and so on

2.3. Upgrade and verify site collections and sites

The issues with site collections and sites must be identified before you upgrade your production environment. Review your upgraded sites to fix any issues with content, integration, styles, and features. The following checklist covers the areas to verify.

  • Customized Features
  • Web Parts
  • Branding and styles
  • Timer jobs
  • PowerShell Scripts
  • Feature and Event Receivers
  • Logging and Analytics
  • User Permission Matrix
  • Key properties of Site Contents (Lists/Libraries)

To verify basic functionality before the site collection upgrade, you could create a new site collection by using a representative set of lists, libraries, Web Parts, and so on. Review the new site to make sure that the common, basic elements of your sites are working.

3. Testing Tools and Automated Testing

Commercial Tools

There are multiple commercial tools for SharePoint migration. Take Sharegate as an example: it can produce a user permission matrix, a list matrix, and so on, and testers can develop macro code in Excel to finish the matrix comparison. However, Sharegate requires a paid license.

Automated Testing with Scripts

If you don't have a budget for commercial tools, SharePoint also provides a client-side object model (CSOM) to retrieve user permissions and the properties of lists and libraries. The script can be executed on the .NET platform without a SharePoint environment. But if your SharePoint has been integrated with an SSO application such as Okta, you first need to obtain an authenticated context, using the PnP AuthenticationManager or a web client context.

Either way, automated testing with scripts or commercial tools is recommended to verify user permissions and list properties.
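Whichever tool produces the permission matrices, the comparison step itself is straightforward. Below is a minimal sketch in Java (all names are hypothetical; in practice the maps would be populated from a Sharegate export or a CSOM script): for each user, report the permissions present before the upgrade but missing afterwards.

```java
import java.util.*;

public class PermissionMatrixDiff {

    // Report, per user, the permissions present in the source matrix but missing from the target.
    static Map<String, Set<String>> missingInTarget(Map<String, Set<String>> source,
                                                    Map<String, Set<String>> target) {
        Map<String, Set<String>> diff = new TreeMap<>();
        for (Map.Entry<String, Set<String>> e : source.entrySet()) {
            Set<String> missing = new TreeSet<>(e.getValue());
            missing.removeAll(target.getOrDefault(e.getKey(), Collections.emptySet()));
            if (!missing.isEmpty()) {
                diff.put(e.getKey(), missing);
            }
        }
        return diff;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> before = new HashMap<>();
        before.put("alice", new HashSet<>(Arrays.asList("Read", "Contribute")));
        before.put("bob", new HashSet<>(Arrays.asList("Read")));

        Map<String, Set<String>> after = new HashMap<>();
        after.put("alice", new HashSet<>(Arrays.asList("Read")));
        after.put("bob", new HashSet<>(Arrays.asList("Read")));

        // alice lost "Contribute" during the upgrade; bob is unchanged.
        System.out.println(missingInTarget(before, after));
    }
}
```

An empty result means the target environment grants at least everything the source did; any entry is a regression to investigate.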

4. Conclusions

Generally, customized features are the most complex part, as they implement special and complicated business requirements that differ by enterprise. Site content verification can rely on automated testing tools or CSOM scripts. OOTB features are out of scope for verification, as they have already been verified by Microsoft.

]]>
https://blogs.perficient.com/2018/01/18/qa-solution-for-sharepoint-upgrade/feed/ 0 211004
Page Object and Page Factory Pattern Based UI Automated Testing https://blogs.perficient.com/2017/08/23/page-object-and-page-factory-pattern-based-ui-automated-testing/ https://blogs.perficient.com/2017/08/23/page-object-and-page-factory-pattern-based-ui-automated-testing/#respond Wed, 23 Aug 2017 05:39:10 +0000 http://blogs.perficient.com/delivery/?p=8991

1.      Introduction

The Page Object Model (POM) is a very important pattern in Selenium WebDriver. It also applies to most UI automated testing, even with other tools such as UFT and Appium. Page Factory is an inbuilt, optimized implementation of POM, and it can be used in many frameworks, such as data-driven, behavior-driven, modular, or keyword-driven ones. By integrating POM and Page Factory with the test case model, you can focus on how the code is structured to get the most benefit.

2.      Advantages

A page object offers the "services" that a page provides rather than exposing the details and UI structure of the page. It separates page operations from the complicated business logic in test cases. When a page changes, you only need to update the page object rather than tens or hundreds of test cases.

Page Factory can initialize elements annotated with @FindBy to create an object repository. With AjaxElementLocatorFactory it uses lazy loading: a WebElement is located only when it is used in an operation. In other words, the element is looked up again every time it is used, so you won't see StaleElementReferenceExceptions with PageFactory, although you may see them when using driver.findElement as in the example below.

Two elements are defined in the product page, one by PageFactory and one by findElement.

Then PageFactory is used to initialize the product page elements.

Then the two elements are used in methods. The page is refreshed after clicking the sorting link.
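The original code screenshots are not reproduced here, but the lazy-lookup behavior that PageFactory provides can be modeled in plain Java. This is not Selenium code: Element and the locator Supplier are simplified stand-ins for WebElement and AjaxElementLocatorFactory, used only to show why an eagerly cached element goes stale after a page refresh while a lazily re-located one does not.

```java
import java.util.function.Supplier;

// Simplified stand-in for a located element, tagged with the DOM version it came from.
class Element {
    final int domVersion;
    Element(int domVersion) { this.domVersion = domVersion; }
}

// Lazy proxy: re-runs the lookup on every use, like AjaxElementLocatorFactory.
class LazyElement {
    private final Supplier<Element> locator;
    LazyElement(Supplier<Element> locator) { this.locator = locator; }

    int click(int currentDomVersion) {
        Element e = locator.get();              // located again on every use
        if (e.domVersion != currentDomVersion) {
            throw new IllegalStateException("stale element"); // the risk with a cached element
        }
        return e.domVersion;
    }
}

public class LazyLookupDemo {
    static int domVersion = 1;

    public static void main(String[] args) {
        // Eager lookup: cached once, goes stale after the page refreshes.
        Element eager = new Element(domVersion);
        // Lazy lookup: resolved freshly each time, always matches the current DOM.
        LazyElement lazy = new LazyElement(() -> new Element(domVersion));

        domVersion = 2;                                      // simulate the refresh after sorting
        System.out.println(lazy.click(domVersion));          // fine: re-located against version 2
        System.out.println(eager.domVersion == domVersion);  // false: the cached element is stale
    }
}
```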

3.       Architecture

Besides PageObject, PageManager is an extended Page Factory concept: it contains more driver operations and composes the methods of every page. Add the TestCaseBase and Reporter components and we get the full Page Factory automated testing architecture. If we want to apply the BDD concept, more classes for steps and scenarios can be added.

3.1 PageObject

As shown above, PageFactory initializes every element defined in the PageObject up front instead of calling the findElement method for each one. This lets the tester focus on the operations instead of locating elements. For testing across platforms, different annotations are supported when defining elements in pages, i.e., @AndroidFindBy, @FindBy, and @iOSFindBy, which makes it easier to integrate multiple platforms into a unified testing harness. For each page in the application, we have a corresponding page class and encapsulate its operations as methods.

The term "page object" may be misleading, suggesting a whole physical page. Our practice shows it is better treated as a portlet that may be reused across pages, like a navigation bar.

3.2 PageManager

In the PageManager class, element methods such as Click and ClickByAction are redefined. We can enrich these operations with smart waiting, logging, and reporting functionality, and it also becomes very easy to upgrade or swap the driver.
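A minimal sketch of such a wrapper follows (plain Java; the Clickable interface is a hypothetical stand-in for a WebElement-like target): the customized click retries a few times, standing in for smart waiting, and logs each attempt.

```java
public class PageManager {

    // Hypothetical stand-in for a WebElement-like target.
    interface Clickable {
        void click() throws Exception;
    }

    private final StringBuilder log = new StringBuilder();

    // A customized click: simple retry as a stand-in for smart waiting, plus logging.
    public boolean click(Clickable element, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                element.click();
                log.append("click ok on attempt ").append(attempt).append('\n');
                return true;
            } catch (Exception e) {
                log.append("attempt ").append(attempt).append(" failed: ")
                   .append(e.getMessage()).append('\n');
            }
        }
        return false;
    }

    public String getLog() { return log.toString(); }

    public static void main(String[] args) {
        PageManager pm = new PageManager();
        int[] calls = {0};
        // Fails once (e.g. the element is not yet attached), then succeeds.
        boolean ok = pm.click(() -> {
            if (calls[0]++ == 0) throw new IllegalStateException("not ready");
        }, 3);
        System.out.println(ok);          // true
        System.out.print(pm.getLog());
    }
}
```

Because every test goes through this one method, adding reporting hooks or swapping the underlying driver touches only the PageManager, not the test cases.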

3.3 TestCase

The TestCaseBase class aggregates the driver, page objects, page manager, assertions, and reporting. Each test case defines pages and calls their methods to compose the steps, then adds assertions to verify the result.

3.4 Report

CustomAssertion and ComplexReportFactory are designed to integrate ExtentReports, so that we can have a dynamic, visual report and dashboard. They can be replaced with other reporting tools.

4.      Conclusion

The Page Object Model can organize elements across different platforms, and Page Factory can locate elements dynamically. The pattern was popularized by Selenium WebDriver, but it can be regarded as a general model for designing UI test automation.

]]>
https://blogs.perficient.com/2017/08/23/page-object-and-page-factory-pattern-based-ui-automated-testing/feed/ 0 210952
Enable Continuous Delivery with Layered Testing Approach https://blogs.perficient.com/2017/06/08/enable-continuous-delivery-with-layered-testing-approach/ https://blogs.perficient.com/2017/06/08/enable-continuous-delivery-with-layered-testing-approach/#respond Fri, 09 Jun 2017 02:15:16 +0000 http://blogs.perficient.com/delivery/?p=8094

1.      Automated Testing is a key success factor for “Continuous Delivery”

Continuous delivery takes the idea of continuous integration and advances it one step further. Automated testing is the critical middle step in realizing it: with continuous delivery, code changes are not only integrated on a regular basis, but also verified automatically, safely, and quickly in a sustainable way.

2.      Challenges of GUI Automated Testing

However, if you seek 100% Graphical User Interface (GUI) test automation to fulfill the needs of continuous delivery, several obstacles affect automated verification in the delivery pipeline. GUI test script development is often deferred to the next iteration, until new features are stable, because of the following difficulties.

  • Automated tests can only be started when development is done;
  • Automated tests need to be updated when the UI changes;
  • Business logic needs to be verified on multiple platforms, such as both desktop web and mobile web;
  • GUI testing needs to resolve page loading issues and OS/browser compatibility issues;
  • Test automation at the presentation layer is often expensive both to produce and to maintain over time.

3.      Layered Testing Pyramid

Microservice architecture is very popular because of its loose coupling: applications can integrate with and share services flexibly. Modern end-to-end business processes have rich and complex steps that happen below the GUI, within what is sometimes called the "business layer", through API calls and database interfaces. Testing should therefore also cover the quality of the smallest components and services, not only the GUI layer.

The layered testing pyramid should look like figure 2: business-logic acceptance testing underpins the GUI testing on desktop, mobile, and other devices. Apart from exploratory testing, unit tests, service tests, and GUI end-to-end system tests are all suggested to be automated for continuous delivery.

4.      Layered Automated Test

Let's introduce the layered automated test with a simple scenario from a training system. A trainee can register for a course using a mobile application, mobile browser, or desktop browser, but the registration window and the available seats for a course are limited. The business logic is implemented in the service layer as in figure 3.

The business logic of registration is exposed as RESTful services.

  • When a trainee registers for a course before the registration window, the service returns a failure code and message;
  • When a trainee registers for a course after the registration window, the service returns a failure code and message;
  • When no seat is available for the course, the service returns a failure code and message;
  • When seats are available and the trainee registers within the window, the service returns a success code and message.
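The four rules above can be sketched as a small service method. This is a simplified model, not the training system's actual code; the result codes and field names are hypothetical.

```java
import java.time.LocalDateTime;

public class RegistrationService {

    // Hypothetical result codes for the four scenarios above.
    static final String TOO_EARLY = "ERR_BEFORE_WINDOW";
    static final String TOO_LATE  = "ERR_AFTER_WINDOW";
    static final String FULL      = "ERR_NO_SEAT";
    static final String OK        = "OK";

    private final LocalDateTime windowStart;
    private final LocalDateTime windowEnd;
    private int availableSeats;

    RegistrationService(LocalDateTime start, LocalDateTime end, int seats) {
        this.windowStart = start;
        this.windowEnd = end;
        this.availableSeats = seats;
    }

    // The registration rules, one branch per acceptance criterion.
    String register(LocalDateTime now) {
        if (now.isBefore(windowStart)) return TOO_EARLY;
        if (now.isAfter(windowEnd))    return TOO_LATE;
        if (availableSeats <= 0)       return FULL;
        availableSeats--;
        return OK;
    }

    int seatsLeft() { return availableSeats; }

    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.of(2017, 6, 1, 9, 0);
        LocalDateTime end   = LocalDateTime.of(2017, 6, 30, 17, 0);
        RegistrationService svc = new RegistrationService(start, end, 1);

        System.out.println(svc.register(start.minusDays(1)));  // ERR_BEFORE_WINDOW
        System.out.println(svc.register(start.plusDays(1)));   // OK, takes the last seat
        System.out.println(svc.register(start.plusDays(2)));   // ERR_NO_SEAT
    }
}
```

Each branch maps directly to one service-level test case, which is why these four scenarios can be verified below the GUI.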

A unit test should test the smallest unit of code, such as a time comparison. Unit tests are usually written by developers, but QA testers should review the unit test code. Overall coverage of over 85% is suggested.

The service test can be automated to verify the course registration logic with different input parameters, following the verification points below.

  • Verify the returned codes and messages for the scenarios above.
  • Verify that the number of registered attendees is calculated correctly by course ID.
  • Verify that the seat count is calculated correctly by course ID.
  • When the trainee needs to pay for the course, the system needs to be integrated with payment services.

The responses of the course registration service and the course info service are shown in figure 4. Service integration with the payment system can then be verified if needed. A mocked service can be built to simulate the payment response when the real one is not available for testing; this helps the service test shift left in the overall process. Overall coverage of over 85% is suggested, and it should cover what the unit tests missed.
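A payment mock can be as simple as an interface with a canned implementation. The sketch below is illustrative (the gateway interface and names are hypothetical): the registration flow is tested against the stub before the real payment service is available.

```java
public class PaymentMockDemo {

    // The contract the registration flow depends on (hypothetical).
    interface PaymentGateway {
        boolean charge(String trainee, double amount);
    }

    // Canned stub: succeeds for any non-negative amount and records the calls.
    static class MockPaymentGateway implements PaymentGateway {
        int calls = 0;
        @Override
        public boolean charge(String trainee, double amount) {
            calls++;
            return amount >= 0;
        }
    }

    // The piece of registration logic under test: register only if payment succeeds.
    static String registerPaidCourse(String trainee, double fee, PaymentGateway gateway) {
        return gateway.charge(trainee, fee) ? "OK" : "ERR_PAYMENT";
    }

    public static void main(String[] args) {
        MockPaymentGateway mock = new MockPaymentGateway();
        System.out.println(registerPaidCourse("alice", 99.0, mock));  // OK
        System.out.println(registerPaidCourse("bob", -1.0, mock));    // ERR_PAYMENT
        System.out.println(mock.calls);                               // 2
    }
}
```

When the real payment service comes online, only the PaymentGateway implementation is swapped; the service tests themselves stay unchanged.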

What should be automated from the GUI? Once the business logic is accepted, the presentation-layer test verifies the integration between the UI and the services, and that user interaction works as expected at the presentation layer.

Click the register button and verify that the course is registered, with the correct seats and attendees displayed. Because the seat calculation is already verified at the service layer, and the four scenarios use the same web elements as in figure 5, one scenario is enough to cover them all. If the failure and success messages are displayed in the same text element, it is not necessary to verify the displayed message in the GUI for every scenario either; in this case, 75% of the test cases were removed from the GUI automated test. If no message is shown for the success scenario, another exceptional test scenario must be added to cover the message UI element, and 50% of the test cases were still removed from the GUI test.

In this way, when new features change the UI, testers maintain fewer GUI test cases; the service and unit tests are reused to verify that the business logic is still correct. Even if the business logic changes, maintaining service and unit test scripts is easier than maintaining GUI test scripts.

5.      Exploratory Testing

Exploratory testing is all about discovery, investigation, and learning, based on business and testing knowledge. The tester needs to know what is already covered by existing test cases, and then explore additional scenarios. We sometimes call exploratory testing "bug out", as it looks for defects hidden in complicated and uncommon scenarios. Improvements to layout, color, and requirements can also be identified during this phase. It uses testers' expertise and intelligence to validate the software and improve it.

6.      Conclusion

Layered testing reduces the dependencies of test script development, and shifts testing earlier by maximizing and reusing non-GUI tests. Business-logic defects can be detected before the whole feature is done.

It also minimizes the use of hard-to-maintain GUI tests, as business-logic verification moves to the service layer. Since service test scripts are easier to maintain and faster to execute, the layered approach reduces testing turnaround time and overall cost.

Last but not least, QA testers should be involved in the design of the presentation, business, and data layers, so that the application is more testable and extensible.

]]>
https://blogs.perficient.com/2017/06/08/enable-continuous-delivery-with-layered-testing-approach/feed/ 0 210911
Accurate Functional Test Based Code Coverage https://blogs.perficient.com/2017/03/08/accurate-functional-test-based-code-coverage/ https://blogs.perficient.com/2017/03/08/accurate-functional-test-based-code-coverage/#respond Thu, 09 Mar 2017 02:49:51 +0000 http://blogs.perficient.com/delivery/?p=7188

1.      Functional Test Coverage

Requirement coverage is commonly used to track test case design quality, but this traceability cannot tell how well the acceptance criteria are covered in code. After test case execution, how many branches or lines of code were executed? How do you improve test case or test data design with quantified indicators? Functional test execution is often black-box testing.

In this post, you will learn how to track the code coverage of a functional test, so that the test design and execution can be refined to reach an accurate functional test goal.

2.      Collect Code Coverage

Functional test code coverage can be accumulated during test sessions triggered from the interface and UI. A tester can get coverage of the target classes dynamically, check for missed scope at any time, and then review the test cases or test data to include more scenarios while removing redundant tests. Low code coverage indicates that the test scope cannot reach the quality goal, so coverage can be used as a quantified indicator to evaluate quality.

Popular tools for collecting integration test coverage include EMMA, JaCoCo, Sonar, Cobertura, dotCover (.NET), and so on. JaCoCo is a Java code coverage tool and will be used as our example, as it is feature-rich and actively maintained. JaCoCo instruments Java classes on the server at the bytecode level; its workflow is shown in figure 2.

Figure 2 Workflow of On-the-fly mode

On-the-fly and offline modes are available for instrumentation. On-the-fly mode is recommended because it is more convenient and puts no constraints on the classpath configuration; it can be used whenever a Java agent is allowed to run in the JVM. The agent can be configured in catalina.bat as below (it can also be attached via Maven). Once the agent is started, code coverage is recorded and saved to an EXEC file.

JAVA_OPTS="-javaagent:%CATALINA_HOME%/jacoco/lib/jacocoagent.jar=includes=*,output=tcpserver,address=10.2.1.122,port=8080"

3.      Code Coverage Analysis

An HTML report can be generated from the EXEC file by using the Maven plugin jacoco-maven-plugin.

Six indicators can be analyzed in the JaCoCo report: instructions (C0), branches (C1), lines, methods, types, and cyclomatic complexity, as shown below. Branches and complexity are the main indicators the tester needs to pay attention to. Cyclomatic complexity refers to the minimum number of paths that can, in (linear) combination, generate all possible paths through a method, so missed complexity also indicates that a module is not fully covered. Complexity often has lower coverage than the other indicators.

Although coverage counters are important, the tester needs to analyze the missed code together with the developers. For instance, four test cases were created for a resume reader user story. The BaseSheetReader class contains the key logic for realizing the functions; the overall coverage of this class is shown in figure 3. Checking the coverage of every method, the complexity and branch coverage of the method getSheetType are only 50% and 67%, as in figure 4. Uncovered lines and branches are highlighted in red and yellow, as in figure 5.

From the report, only 1/3 of the branches were covered during the functional test. The tester needs to compare against the acceptance criteria of the user story, and update the test steps to cover the different sheet types and the exception-handling scenarios. Test steps and test data have to be followed exactly during execution; otherwise the missed steps will still show up in the coverage report.

Figure 3 Overall Coverage of Class

Figure 4 Coverage of Methods

Figure 5 Code Analysis
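To make the branch analysis concrete, here is a hypothetical getSheetType-style method (not the project's actual code): it has three branches, so a functional test that only ever feeds it .xlsx files covers one of them, which is exactly the kind of gap the red and yellow highlighting reveals.

```java
public class SheetTypeDemo {

    // Three branches: a coverage tool like JaCoCo counts each one separately.
    static String getSheetType(String fileName) {
        if (fileName.endsWith(".xlsx")) {
            return "XSSF";          // branch 1: modern Excel format
        } else if (fileName.endsWith(".xls")) {
            return "HSSF";          // branch 2: legacy Excel format
        } else {
            return "UNSUPPORTED";   // branch 3: the exception scenario tests often miss
        }
    }

    public static void main(String[] args) {
        // A test run with only this input leaves 2/3 of the branches missed.
        System.out.println(getSheetType("resume.xlsx"));   // XSSF
        // Adding these inputs to the test data covers the remaining branches.
        System.out.println(getSheetType("resume.xls"));    // HSSF
        System.out.println(getSheetType("resume.txt"));    // UNSUPPORTED
    }
}
```

Updating the test data to include all three file types is the kind of refinement the coverage report drives.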

Unit test coverage can also be collected by JaCoCo, and the unit test and functional test EXEC files can be merged with jacoco-maven-plugin:merge. The tester can then focus on the branches missed by all tests and treat them as the highest priority.

4.      Conclusion

After implementing JaCoCo in the project, the tester can locate uncovered logic accurately, and it is easier to track the quality of the test design and execution. The coverage of the targeted packages has improved from 16% to 67% so far. Some best practices are summarized for reference:

1. The class files used for the report and those deployed in the application have to be generated from the same build.

2. Filter out packages and classes that should not be tracked.

3. Test coverage needs to be tracked from the first round of execution.

4. Code analysis is more important than the overall coverage indicators.

5. Testers and developers need to cooperate to finish the code analysis.

6. The traceability between acceptance criteria and code coverage needs to be analyzed together.

Reference

http://www.eclemma.org/jacoco/

https://wiki.jenkins-ci.org/display/JENKINS/JaCoCo+Plugin

http://www.jacoco.org/

 

]]>
https://blogs.perficient.com/2017/03/08/accurate-functional-test-based-code-coverage/feed/ 0 210872
Serenity and Cucumber Make Automation Testing Vivid https://blogs.perficient.com/2016/08/03/serenity-and-cucumber-make-automation-testing-be-vivid/ https://blogs.perficient.com/2016/08/03/serenity-and-cucumber-make-automation-testing-be-vivid/#respond Wed, 03 Aug 2016 06:25:12 +0000 http://blogs.perficient.com/delivery/?p=5602

Do product owners and business analysts complain about complex testing scripts when you work through acceptance criteria together?

Do you stress about maintaining the complex testing framework?

Do you write code to generate a better HTML report?

A good automated testing framework can be used by testers and business people without programming skills, and it is easier to maintain. Behavior-driven development (BDD) can help achieve this.

Cucumber is a popular BDD tool, but users have to define the backend framework and the reporting is not very user-friendly.

Serenity is another open source BDD testing framework and it can be used together with Cucumber.

Do product owners and business analysts complain about complex testing scripts when you work through acceptance criteria together?

Cucumber organizes testing steps and verification points in feature files. The scenarios and features can be grouped by high-level concepts, such as epics. In Agile terms, the user story might look like this:

“As a buyer, I want to be able to add the searched products to a cart so that I can buy them together.”

Cucumber describes requirements with epics, features, scenarios, and steps, where the steps are expressed as Given, When, and Then. The details following the step keywords are defined by users, so the scenarios are easy for testers and business people to read. Test cases can be delivered directly as features, as below.

Feature: AddtoCart

As a user, I want to search a product and add it to my cart.

Scenario Outline: User can search out a product and add it to the cart successfully

Given I can search out product '<keywords>'

Then I can add the product to the cart with '<quantity>'

And I can see '<quantity>' of products in the cart

Are you stressing about maintaining the complex testing framework?

Serenity BDD helps you write cleaner, more maintainable automated acceptance and regression tests faster. Serenity BDD provides a PageObject base class that hides the WebDriver logic inside page objects. The JUnit Serenity integration provides special support for Serenity page objects; in particular, Serenity will automatically instantiate any PageObject fields in your JUnit test. Serenity also has a ScenarioSteps class, the parent of every steps class, which is used in step definitions. The web driver is started at the beginning of a scenario and closed at its end automatically. Common web driver methods are already defined in the Serenity PageObject.

The testing code focuses on business logic without exposing webdriver management. The web driver properties can also be configured.

Are you writing code to generate a better HTML report?

Serenity also uses the test results to produce illustrated, narrative reports that document and describe what your application does and how it works. The Serenity Maven plugin can aggregate the feature reports together.

The user can see the testing result of every feature, scenario, and step. The test data and a screenshot of every step are also listed in the report.

The statistics of the overall testing result are also generated automatically.

This should resolve many of your concerns. In following posts, we will discuss webdriver configuration, test data management, distributed testing, CI integration, and useful annotations.

 

]]>
https://blogs.perficient.com/2016/08/03/serenity-and-cucumber-make-automation-testing-be-vivid/feed/ 0 210802
How to Get an HTTP Cookie for Web Service Authorization in SoapUI https://blogs.perficient.com/2015/07/31/get-http-cookie-for-web-service-authorization-in-soapui/ https://blogs.perficient.com/2015/07/31/get-http-cookie-for-web-service-authorization-in-soapui/#respond Fri, 31 Jul 2015 07:19:27 +0000 http://blogs.perficient.com/delivery/?p=3851

A web service may need credentials to allow a client to make a request call to the report server. The authorization method depends on the security settings for your report server. SoapUI is a popular web service testing tool, and testers need to send authentication information in SoapUI to the server before testing target requests.

Authorization Types

SoapUI provides a UI function to supply credentials for basic, NTLM, and OAuth 2.0 authorization. Testers just select the authorization type and enter their username, password, and domain, and the request passes the authorization information to the server. But some servers validate a session ID or cookies rather than a username. In this blog post, we will discuss how to get the HTTP cookie as a credential.

Get Credentials from HTTP Cookie

Here is a sample where the user has to log in to a site and then receives a cookie in the response header. When the user then invokes the target web service request, the request carries the cookie to pass authorization on the report server. We can accomplish this in three steps:

1. Identify the HTTP request for authorization in the browser

Find the login form's submit URL using Firebug or the Chrome developer tools. The HTML source code is similar to:

<form data-ajax="false" name="loginForm" action="/Login/cws/processlogin.htm" method="POST">

Get the parameters for this POST operation as below.

2. Create an HTTP request in SoapUI with parameters.

The endpoint URL should be https://localhost/Login/cws/processlogin.htm. The parameters should match the three fields we captured in step 1.

3. Pass the cookie to the next target step.

We can see the cookie is created in the headers of the login http request.

A Groovy script can help fetch the SSO cookie. The script is as follows:

def setCookie = testRunner.testCase.testSteps["login"].testRequest.response.responseHeaders["Set-Cookie"]

// Extract the SSOCookie name/value pair from the Set-Cookie header
def re = /(SSOCookie=.*,)/
def matcher = (setCookie =~ re)
def cookie = matcher[0][0]

// Replace the request headers of the "ship" step with just the cookie
def headers = [:]
headers.put("Cookie", cookie)
testRunner.testCase.testSteps["ship"].testRequest.requestHeaders = headers

The cookie has now been added to the header of the "ship" test step, and communication with the report server can succeed.

Reference

http://www.soapui.org/

 

]]>
https://blogs.perficient.com/2015/07/31/get-http-cookie-for-web-service-authorization-in-soapui/feed/ 0 210728
Generate Performance Testing Report with JMeter-Plugins and Ant https://blogs.perficient.com/2015/04/08/generate-performance-testing-report-with-jmeter-plugins-and-ant/ https://blogs.perficient.com/2015/04/08/generate-performance-testing-report-with-jmeter-plugins-and-ant/#comments Thu, 09 Apr 2015 03:02:35 +0000 http://blogs.perficient.com/delivery/?p=3659

JMeter is a popular open source performance testing tool. However, its lack of comprehensive reporting functionality affects performance test result analysis, e.g., server performance monitoring, hits per second, response codes over time, and so on. This post introduces how to create more analysis reports in both JMeter GUI mode and non-GUI mode.

Generate JMeter-Plugins Report in JMeter GUI Mode

The JMeter-Plugins Standard and Extras packages resolve JMeter's limitations in reporting and data analysis. If scripts are executed in JMeter GUI mode, it is convenient to add the plugins' listeners.

1. Download JMeterPlugins-Standard-1.2.1.zip and JMeterPlugins-Extras-1.2.1.zip from http://jmeter-plugins.org/downloads/all/.

2. Unzip them and copy the *.jar files to %JMETER_HOME%\lib\ext.

3. Restart JMeter.

4. Right-click Test Plan -> Listener -> jp@gc-xxx. More than twenty listeners/reports are now available.


Generate JMeter-Plugins Report with Ant in Non-GUI Mode

Jenkins and Ant are often used in JMeter performance testing to schedule and trigger tests. In this setup, listeners added in JMeter GUI mode cannot be used directly; however, all test results can be generated from the .jtl result log files. JMeter-Plugins provides CMDRunner.jar, which can be called from the command line to generate reports from JTL files.

1. Make sure JMeter-Plugins is installed correctly, as in GUI mode.

2. Update jmeter.properties so that the required data is saved in the JTL file.

First, change the output format to XML, because the JTL file is parsed as XML. Saving thread_counts is advised, because parts of the reports are based on thread counts.

jmeter.save.saveservice.output_format=xml (the default is csv)

Second, set the values to true for the data you want to save:

jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.assertion_results=true
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data.on_error=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=true
jmeter.save.saveservice.assertions=true
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.responseHeaders=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.encoding=true
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.filename=true
jmeter.save.saveservice.hostname=true
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=true
jmeter.save.saveservice.idle_time=true

3. Execute the JMX scripts with Ant. The core target in build.xml can be as below:

<target name="test">
  <taskdef name="jmeter" classname="org.programmerplanet.ant.taskdefs.jmeter.JMeterTask" />
  <!-- Execute JMX scripts and create the JTL file -->
  <jmeter jmeterhome="${jmeter.home}" resultlog="${jmeter.result.jtlName}">
    <testplans dir="${basedir}\test" includes="*.jmx" />
  </jmeter>
</target>

4. A result log in JTL format is created at the same time.

5. Add a target in build.xml to generate reports with the plugins.

Command line: java -jar $CMDRunnerPath/CMDRunner.jar --tool Reporter --generate-png TransactionsPerSecond.png --input-jtl ${jmeter.result.jtlName} --plugin-type TransactionsPerSecond

The same function realized in build.xml:

<target name="runTransactionsPerSecond">
  <exec executable="cmd" failonerror="true">
    <arg value="/c"/>
    <arg value="java"/>
    <arg value="-jar"/>
    <arg value="${jmeter.home}\lib\ext\CMDRunner.jar"/>
    <arg value="--tool"/>
    <arg value="Reporter"/>
    <arg value="--generate-png"/>
    <arg value="${generate-png}"/>
    <arg value="--input-jtl"/>
    <arg value="${jmeter.result.jtlName}"/>
    <arg value="--plugin-type"/>
    <arg value="TransactionsPerSecond"/>
  </exec>
</target>

6. Run build.xml with Ant. You will find that the .png files have been created in the target folder.

Because Ant can be called from Jenkins, the reports can easily be created and checked in the Jenkins project workspace.

 

Reference

http://www.jmeter-plugins.org/wiki/Start/

http://jmeter.apache.org/usermanual/index.html

 

 

]]>
https://blogs.perficient.com/2015/04/08/generate-performance-testing-report-with-jmeter-plugins-and-ant/feed/ 3 210712