Test Data Management DevOps Best Practices

DevOps pushes organizations to accelerate innovation, leveraging automation in the face of rapidly changing customer tastes and expectations. Some organizations apply Test Data Management (TDM) in DevOps: the practice of creating targeted, right-sized databases rather than cloning entire production environments. This makes it easier to stand up test environments with realistic data sets before going to production.

Our work with clients has enabled us to create best practices around test data management. We cover some of the tools we use in the four sections below.

Data at rest:

Data at rest addresses providing large, cleaned and scrubbed (SPI/PII) data sets for lower-environment refreshes, partners, and testing. This is where CA Test Data Manager and IBM Optim normally play, though a combination of open source (OSS) tools, scripts, and vendor-specific backup/restore tools can also provide "canned" starting points. The value of tools like TDM and Optim lies in how they are used: when data scrubbing and freshness of data matter, both can analyze how the data is accessed to produce intelligent, meaningful subsets, yielding cheap shallow copies of very large data sets rather than naive full database copies.
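The subset-and-scrub idea can be sketched in a few lines. This is a minimal illustration, not a TDM/Optim workflow: the rows, column names, and masking rules are all hypothetical stand-ins for a real production table and policy. Deterministic hashing is used so that masked values still correlate across tables.

```python
import hashlib

# Hypothetical "production" rows; in practice these come from a real table.
production_rows = [
    {"id": 1, "email": "alice@example.com", "ssn": "123-45-6789", "region": "US"},
    {"id": 2, "email": "bob@example.com",   "ssn": "987-65-4321", "region": "EU"},
    {"id": 3, "email": "carol@example.com", "ssn": "555-12-3456", "region": "US"},
]

def scrub(row):
    """Mask SPI/PII fields deterministically so joins still correlate."""
    masked = dict(row)
    # A deterministic hash preserves referential integrity across tables.
    digest = hashlib.sha256(row["email"].encode()).hexdigest()[:8]
    masked["email"] = f"user_{digest}@test.invalid"
    masked["ssn"] = "XXX-XX-" + hashlib.sha256(row["ssn"].encode()).hexdigest()[:4]
    return masked

def subset(rows, predicate):
    """Pull only the targeted slice instead of cloning production wholesale."""
    return [scrub(r) for r in rows if predicate(r)]

# Right-sized data set: only the US rows, with PII scrubbed.
us_test_data = subset(production_rows, lambda r: r["region"] == "US")
print(us_test_data)
```

The commercial tools add what this sketch lacks: schema discovery, cross-table referential analysis, and usage-based subsetting.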

Data in flight:

Data in flight is normally handled by a test driver, performance tool, or simulator, but it still needs to have or generate data that flows through the system. That data usually must be correlated, and it either comes from a static data set or is synthesized from service contracts such as SOAP/XSD. These are normally server-based tools rather than client test tools: they act as man-in-the-middle interceptors, which enables TDM in a non-traditional way, since they can "trap" data flows at various points of a standard API/ESB call by pretending to be the real service. In effect, they provide test data management by virtualizing individual services. For CA this is LISA/ITKO; IBM GreenHat/Rational Test Virtualization Server, HP Service Virtualization, Parasoft, and MuleSoft all have offerings of varying depth here. Depending on needs and budget, this can be as simple as building bespoke stubs for services (mocking) or as involved as a Center of Excellence model for Service Virtualization and API Management (the two tend to go hand in hand).
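At the "bespoke stub" end of that spectrum, a virtualized service is just a process that answers like the real one would. The sketch below, using only the Python standard library, stands up a stub with a canned response; the endpoint path and payload are hypothetical, and real service virtualization tools add recording, correlation, and protocol support far beyond this.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for the real backend (hypothetical endpoint).
CANNED = {
    "/api/customer/42": {"id": 42, "name": "Test Customer", "tier": "gold"},
}

class VirtualService(BaseHTTPRequestHandler):
    """Man-in-the-middle style stub: answers as the real service would."""
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port; run the stub on a background thread.
server = HTTPServer(("127.0.0.1", 0), VirtualService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer under test calls the stub exactly as it would the real service.
url = f"http://127.0.0.1:{server.server_port}/api/customer/42"
with urllib.request.urlopen(url) as resp:
    customer = json.loads(resp.read())
print(customer)
server.shutdown()
```

Pointing the application under test at the stub's URL instead of the real ESB endpoint is what makes the interception transparent to the consumer.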

Programmatic/Replay/Record Tester drivers:

Probably the most common set of tools for end-state functional and integrated testing is API/UI testing of web, green-screen, and console applications. These require someone to program tests, or to record and maintain them manually, and then assemble individual recordings into suites that can be used for functional, regression, and performance testing. The TDM strategy matters here as well: test data has to be correlated with the backend systems so that automated tests produce real requests and responses that validate properly. Our GDC team has built a large amount of collateral in this space around Selenium, reporting, and the popular test drivers most clients already have for functional, performance, service/API, and unit testing.
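The correlation point above can be illustrated with a data-driven suite: keep the expected request/response pairs outside the test logic so a TDM refresh can regenerate them without touching the suite. This sketch uses Python's `unittest`; the `price_quote` function and its discount table are a hypothetical stand-in for a real system under test.

```python
import unittest

def price_quote(customer_tier, amount):
    """Hypothetical stand-in for the system under test (a pricing API)."""
    discounts = {"gold": 0.10, "silver": 0.05, "bronze": 0.0}
    return round(amount * (1 - discounts[customer_tier]), 2)

# Test data correlated with expected backend behavior, kept separate from
# the test logic so a TDM refresh can regenerate it independently.
CASES = [
    ("gold", 100.0, 90.0),
    ("silver", 100.0, 95.0),
    ("bronze", 100.0, 100.0),
]

class QuoteRegressionSuite(unittest.TestCase):
    def test_quotes_match_expected(self):
        # subTest reports each data row's failure individually.
        for tier, amount, expected in CASES:
            with self.subTest(tier=tier):
                self.assertEqual(price_quote(tier, amount), expected)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

The same shape works for Selenium UI suites: the recordings stay stable while the correlated data set is refreshed alongside each deployment.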

Orchestration Strategy:

Where this comes together for most teams is the actual orchestration of TDM, as this provides the mechanism for implementing the strategy. The most logical place to do TDM is around deployments: this is where you will be dropping new code changes, refreshing data sources and services, and then triggering automation to validate that the changes did not break the system. This is most commonly where the DevOps team is engaged. We provide the technical enablers, such as automation hook points, gating criteria, storage of lifecycle metadata, and promotion processes, that glue the process together in a meaningful way and get you to a push-button orchestrated workflow. We see lots of clients using Jenkins, TeamCity, or TFS Build to handle the CI side of the equation well, but they have much more success when they adopt purpose-built deployment tools such as CA Automic, XebiaLabs XL Deploy, Octopus Deploy, IBM UrbanCode, or Spinnaker to gain traction in non-trivial environments (i.e., where more than one or two services are deployed at a time).
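The deploy, refresh, validate, promote flow described above can be sketched as a simple gated pipeline. Every stage function here is a hypothetical stand-in for a real pipeline step (a Jenkins job, a deployment tool task, a test suite run); the point is the gating criteria, where any failed stage stops promotion.

```python
# Sketch of a push-button TDM orchestration: deploy, refresh data,
# run automation, and gate promotion on the results.

def deploy_build(version):
    """Stand-in for a deployment-tool step (e.g., UrbanCode, Spinnaker)."""
    print(f"deploying {version}")
    return True

def refresh_test_data():
    """Stand-in for a TDM step that refreshes the scrubbed data subset."""
    print("refreshing scrubbed data subset")
    return True

def run_automated_suite():
    """Stand-in for triggering the regression suite."""
    print("running regression suite")
    return {"passed": 42, "failed": 0}

def promote(version):
    print(f"promoting {version} to next environment")

def pipeline(version):
    """Gating criteria: any failed stage stops promotion."""
    if not deploy_build(version):
        return "deploy-failed"
    if not refresh_test_data():
        return "data-refresh-failed"
    results = run_automated_suite()
    if results["failed"] > 0:
        return "tests-failed"
    promote(version)
    return "promoted"

status = pipeline("1.4.2")
print(status)
```

In practice each stage is a hook point in the CI/CD tool rather than a Python function, and the stage results become the lifecycle metadata stored against the release.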

We are happy to pull our team together to dig deeper, but we are pragmatic about the tools we work with. If you have nothing today and have budget for tools, the IBM and CA offerings are very good. The biggest challenge we see in most corporations is not the tool that is selected, but how the practice of TDM is implemented and whether the shared ownership model is, or is not, executed effectively and strategically.

See our work at www.perficient.com for more information, and download our guide for more best practices on DevOps success in your organization.

About the Author

DevOps Architect with 10+ years in change and configuration management, agile methodologies, build automation, release engineering, infrastructure provisioning, and development best practices. Provides guidance across Perficient on DevOps practices, and leads and designs solutions for engagements in the IBM, Cloud, and DevOps practices.
