JMeter Testing for a Datapower ESB Implementation – Part 1

Introduction

When considering testing a Datapower implementation, the first tool generally mentioned is SoapUI. While it is a good tool for a particular aspect of testing, you may need to expand your testing capabilities to cover a broader set of concerns. In this blog I'd like to consider an architectural scenario in which I will need to cover a range of architectural patterns.

The Architectural Components

Datapower deployed as a single component in the architecture provides very little in terms of the need for a testing solution. For this example I'll consider the following architecture: a Datapower XI52 deployed as an ESB, WebSphere MQ for protocol mediation, and LDAP for authentication and authorization. The client will use RESTful calls, with transformation between XML and JSON. An FTP server has also been added to the scenario. Oh yes, the web service is SOAP-based. Finally, I have two Datapower application domains: what I'll call DEV-DEV, deployed on port 5400, and DEV, deployed on port 5420. This could also be DEV to QA or staging. This basic architectural configuration will cover the following Datapower patterns:

  • MQ to MQ
  • HTTP to MQ
  • MQ to HTTP
  • Datapower LDAP integration
  • Datapower FTP integration
  • Many-to-many transformation
  • SOAP web service integration
  • RESTful to SOAP integration

The Test Plan

I'm in the early development phase of my project, and I need to set up some unit tests:

1. I want to place a RESTful GET call to the appliance and evaluate the response
2. I want to PUT and GET messages from queues
3. I want to be able to either a) use or b) bypass the Datapower ESB to get a file from the FTP server
4. I want to call the Active Directory and get back the DN
5. I want to call a web service operation using HTTP transport
6. I'd like to do some preliminary load testing

The other requirement: I'd like to do most if not all of my unit testing from a single environment, and to be able to switch between the Datapower application domains. And to add a typical twist, I have no budget for an advanced testing suite. Methinks this is not too far off from a real-world scenario!

The Test Tool

This is where I turn to Apache JMeter. Apache JMeter™ is an open source desktop tool built as a 100% pure Java application. It provides functional behavior testing and is also designed to perform load and performance testing. It was originally designed for testing web applications but has since expanded to other test functions. http://jmeter.apache.org/

Test Plan Configurations

My project has a formal testing strategy, so I will offer my JMeter test plan as a starting point for QA. Generally, I set up the Test Plan using a {Project Name} Unit Test with {Version}. As the project matures I will move the test plan into CM and baseline it for each release.

[Screenshot: Test Plan configuration]

Another good practice is to set up the test plan with naming conventions that reflect the architectural design. The next task is to set up a test results view using JMeter's predefined Listener components. Here I have chosen to include a Results Tree and an Aggregate Report, which I will use for initial load testing results. Depending on the availability of the components in the architecture, this test configuration will capture metrics for a performance baseline. The listener widgets are found in the Add > Listener context menu, reached by right-clicking the root test plan; this is the general case for each of the test widgets available in JMeter. The tool is also context aware in terms of the features available to the test plan and test threads.

Next I'll use a JMeter User Defined Variables widget, a nice feature that enables me to easily switch between application domains. JMeter also allows you to add variables that can be inserted as part of the test thread execution, as well as adding metadata to the test plan. For example, in this configuration I have set up a Service Name variable which can be used to configure port-type attributes for SOAP calls. This capability is not limited to this scenario alone; as you will see, I will create a range of variables for configurations throughout the plan and test cases.

[Screenshot: User Defined Variables]

Perhaps the key takeaway here is the use of variable categories. For example, the Datapower Base Appliance variables hold the default settings for the appliance IP address, port, and protocol, configured as property settings: Appliance: ${__P(dp.test.address,192.170.25.xx)}, Port: ${__P(dp.test.port,5400)}, and Protocol: ${__P(dp.test.protocol,http)}. I can then set up overrides for each application domain. For example, this configuration has an additional integration test environment and a QA environment, which have override values for dp.test.address and dp.test.port. This is where I will switch between domains, by simply enabling the User Defined Variables that point to the environment I want.
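One benefit of the ${__P(...)} pattern is that any default can be overridden from the command line without editing the plan. A minimal sketch, assuming the plan is saved as DatapowerESB-UnitTest.jmx (the file name and values here are illustrative):

jmeter -t DatapowerESB-UnitTest.jmx -Jdp.test.address=192.170.25.xx -Jdp.test.port=5420 -Jdp.test.protocol=http

The -J flag sets a local JMeter property for that run, so the same plan can be pointed at either application domain from a script or a CI job.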

[Screenshot: User Defined Variables with environment overrides]

Also, because these variables are global, they can be used across any of the test threads in your project plan. Another nice capability is that a User Defined Variable can be configured to generate custom values such as GUIDs, which can be a handy feature. Here is a seed JavaScript algorithm for generating a multi-part GUID with random numbers and characters:

var chars = '0123456789abcdef'.split('');
var uuid = [], rnd = Math.random, r;
uuid[8] = uuid[13] = uuid[18] = uuid[23] = '-';   // dash positions
uuid[14] = '4';                                   // version 4 marker
for (var i = 0; i < 36; i++) {
  if (!uuid[i]) {
    r = 0 | rnd() * 16;
    // position 19 carries the variant bits (8, 9, a, or b)
    uuid[i] = chars[(i == 19) ? (r & 0x3) | 0x8 : r & 0xf];
  }
}
uuid.join('');
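To make the value available to the samplers, one route (an assumption on my part; JMeter offers several scripting hooks) is to run this snippet in a JSR223 PreProcessor and publish the result as a variable:

vars.put("guid", uuid.join(''));

after which ${guid} can be referenced anywhere in the thread, for example as a correlation ID on each request.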

I will also set up a load-configuration User Defined Variables element. This is used for creating various conditions for preliminary load testing by changing the number of test threads and loops. Finally, I have set up an HTTP Authorization Manager for basic authorization against an HTTP server, should it be needed in the course of testing.
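As a sketch of one way to wire the load settings up (the property names load.threads and load.loops are my own placeholders, not from the original plan), in the Thread Group set:

Number of Threads (users): ${__P(load.threads,1)}
Loop Count: ${__P(load.loops,1)}

and override at run time with jmeter -Jload.threads=5 -Jload.loops=10. Properties are resolved before the threads are created, which makes the ${__P(...)} function a safer choice in Thread Group fields than a plain user variable.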

Test Plan – Testing Thread Groups

Once the foundational configurations have been completed, I will then set up a series of test threads for each architectural component. I've configured four Thread Groups, named to reflect the components of the solution architecture.

[Screenshot: Top-level view of the test plan Thread Groups]

The LDAP Test Case

The first Thread Group provides a test for the LDAP services. For the most part this test thread is taken from the JMeter documentation and has been modified for this solution. I will use this test for checking the availability of the LDAP server, as well as to query the Active Directory for the DNs or other LDAP attributes that I may be interested in.

[Screenshot: LDAP test run]

This is the Test Result view for a series of LDAP test queries. There are several things to point out.

  • Take note that several items in the test case are greyed out. This means that these tests are disabled for this test run.
  • The LDAP Thread Group has been enabled. Note that several of the test cases have XPath Assertions, which I can use for a deeper evaluation of a successful test case.
  • The individual test results view has a View Tree and three tab views: “Sampler result”, “Request”, and “Response data”. The View Tree provides feedback on each.

The Tree View Output indicates the outcome of the test case.

  1. Basic Request using various filters
  2. 2. Search Test, 2.1 Search Test, and 2.2 Search Test indicate that something in the test case has failed.
  3. In this case I have highlighted the Response Code 800 exception that was returned from the AD-LDAP query.
  4. However, the Response also returned a DN, based on the test parameter “Search with filter” (sAMAccountName=adminuser)

The Compare Test “Passed”; a view of the Response data tab provides the following XML response:

<ldapanswer>
<operation>
<opertype>compare</opertype>
<comparedn>cn=dparch, ou=DatapowerESB</comparedn>
<comparefilter>sAMAccountName=dparch</comparefilter>
</operation>
<responsecode>0</responsecode>
<responsemessage>Success</responsemessage>
</ldapanswer>

The MQ Test Case

The next test case covers MQ, tested as a JMS Point-to-Point exchange and also via a custom extension for sending and receiving messages from MQ. In the most basic scenario, JMeter is set up to PUT an XML payload message to a queue that has been configured using the JMS Context and the JNDI properties supplied by the MQ administrator.

In the JMS Point-to-Point scenario, the ESB has been configured as a web service proxy. In this case the test results must be viewed as part of the ESB's transaction history. Additionally, by using IBM MQ Explorer you can also check the current queue depth if problems have been encountered at the message consumer end-point. While this is a simple solution, it has its advantages: you can run a series of P2P test cases by cutting and pasting messages built in your development environment into the content window, which are then placed on the queue. While this involves the use of other development components, the test case is repeatable throughout the development and testing lifecycle.
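As a rough sketch of that configuration (the JNDI names and bindings path are my own placeholders, not the project's actual values), the JNDI section of the JMS Point-to-Point sampler for an IBM MQ file-based context would look something like:

QueueConnection Factory: jms/DEVQCF
JNDI name Request queue: jms/DEV.REQUEST.QUEUE
Initial Context Factory: com.sun.jndi.fscontext.RefFSContextFactory
Provider URL: file:///opt/mq/jndi-bindings

The .bindings file behind the Provider URL is generated by the MQ administrator with the JMSAdmin tool, and the IBM MQ client jars must be on JMeter's classpath (for example, dropped into jmeter/lib).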

[Screenshot: JMS Point-to-Point sampler]

In this second scenario, the ESB has again been set up as a web service proxy. However, the architecture calls for protocol mediation between the client, the ESB, and the service end-point, which is a typical event-driven architectural solution.

In this test case JMeter will use a custom Java Request extension. The JMeter source code package provides a fairly simple set of reference class implementations, which can be used to extend the tool's capability to operate as a provider/consumer of messages on the queues. While this approach is a bit more sophisticated, in that it requires some Java development, it provides excellent added value in terms of a reusable end-to-end testing capability. The screenshot shows the use of a custom SendMessage class that has been set up to read in a set of parameters for the send queue; there is a corresponding GetMessage class for the response queue. A sketch of such a class appears after the parameter list below.

In addition to the MQ setup, there are 3 parameter values which enhance this configuration:

  • Service Name = Service identifier for the target end-point.
  • ClientName = Operation request identifier.
  • BaseDir = Location of the Message payload data file.

Using these parameters, JMeter takes the Service Name value as the name of the message file and reads the data content from that file. For example, the SOAP message that was used in the P2P content window can now be placed in a file, read by a Java extension class, and then PUT on the send queue. A GET operation can then be used to read the response queue. This approach can provide flexibility in terms of test automation and load testing.
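Here is a minimal sketch of what such a Java Request extension can look like. Only the parameter names come from the plan described above; the class body, queue names, and defaults are my own illustration, and the IBM MQ client jars must be on JMeter's classpath.

import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SendMessage extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        // These appear as editable fields in the Java Request sampler GUI.
        Arguments args = new Arguments();
        args.addArgument("QueueManager", "QM1");
        args.addArgument("SendQueue", "DEV.REQUEST.QUEUE");
        args.addArgument("ServiceName", "MyService");
        args.addArgument("BaseDir", "/tests/payloads/");
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext ctx) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            // The payload file is named after the ServiceName parameter.
            String file = ctx.getParameter("BaseDir") + ctx.getParameter("ServiceName") + ".xml";
            byte[] payload = Files.readAllBytes(Paths.get(file));

            // Bindings-mode connection; a client connection would set
            // MQEnvironment host/port/channel before this call.
            MQQueueManager qmgr = new MQQueueManager(ctx.getParameter("QueueManager"));
            MQQueue queue = qmgr.accessQueue(ctx.getParameter("SendQueue"), CMQC.MQOO_OUTPUT);
            MQMessage msg = new MQMessage();
            msg.write(payload);
            queue.put(msg, new MQPutMessageOptions());
            queue.close();
            qmgr.disconnect();

            result.setResponseMessage("Send was successful");
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setResponseMessage(e.toString());
            result.setSuccessful(false);
        }
        result.sampleEnd();
        return result;
    }
}

A corresponding GetMessage class would open the response queue with CMQC.MQOO_INPUT_AS_Q_DEF and read the reply into the sample result for the assertions discussed below.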

[Screenshot: business-to-service test configuration]

For this type of test case the result is rather straightforward; the Response tab would simply read: Send was successful. However, because this is an end-to-end test, the GET on the response queue provides the round trip. I can then evaluate the Request against the expected Response. In this example the response queue is reporting a failure. While not shown here, the reply could be a SOAP fault message or some other fault returned by the queue manager. Thus I will add test Assertions to evaluate the reply message using an XPath expression; the exact approach depends on the goal of the test case.
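As one example of such an assertion (a sketch, not the plan's actual expression), an XPath Assertion along the lines of

boolean(//*[local-name()='Fault'])

matches whenever a SOAP Fault element is present in the reply, regardless of namespace prefix. Depending on the goal, you can invert the assertion so the sampler is marked failed when a fault comes back, or assert on a specific business field instead.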

[Screenshot: event-driven test results]

This is also where I will turn to the Aggregate Report, changing the load-configuration User Defined Variable values to apply various load factors. In this example I have set JMeter to 5 threads with 10 loops through the test case, for 50 samples in total.

[Screenshot: Aggregate Report]

The results are recorded in milliseconds:

  • Average – The average time of a set of results
  • Median – The median is the time in the middle of a set of results. 50% of the samples took no more than this time; the remainder took at least as long.
  • 90% Line – 90% of the samples took no more than this time; the remaining samples took at least as long (the 90th percentile)
  • Min – The shortest time for the samples with the same label
  • Max – The longest time for the samples with the same label
  • Error % – Percent of requests with errors
  • Throughput – the Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
  • Kb/sec – The throughput measured in Kilobytes per second

Conclusion

In part 1 I've tested two components of my architectural scenario. I've also introduced the use of the extension capabilities to cover an end-to-end test case. In part 2 I will build out the test case for RESTful testing, as well as the FTP server test case. In part 3 I will cover how to use the Java extension capability of JMeter to enhance my testing capability, as well as provide a reusable test plan for QA and some preliminary integration performance testing.
