The software development outsourcing industry is in the middle of a performance revolution. That dedication to performance is especially visible in the American nearshore region, where US-based clients and their customers increasingly demand higher-quality software that performs reliably at scale.
As demand for software grows, so too does the number of users and the amount of data they produce. Applications need to handle higher loads without crashing, and businesses need the ability to draw value from ever-larger amounts of data.
Adding even more complexity is the fact that applications are more intricate than ever and rarely come with clear performance goals, leaving performance teams to piece together how the application is expected to behave. On top of that, the performance team has to carry out testing on a regular basis, because the source code is always changing and performance results change right along with it. All of these issues put more pressure on software development outsourcing companies to solve the challenges of performance testing.
In this article, we’ll explore some of the ways that Perficient Latin America approaches continuous performance testing on a massive scale, and share how our performance engineers are solving challenges along the way.
Establishing a Baseline for Future Testing
Performance measurements are numerical and data-driven, so engineers first need a metric baseline that shows how the app is performing at the start of a project. Performance metrics should be based on customer expectations and clear business requirements, as these determine how the app should be performing in the future.
If there are no clear requirements or customer expectations yet, it’s necessary to run performance tests against the pre-existing version of the code to see how it behaves and find any breakpoints. Those results then serve as the initial baseline against which future performance tests are compared.
These first data points might include average response time, request rate, processor usage, memory use, user satisfaction, and many more. It is important to create this baseline to ensure that the app can be scaled and subjected to massive performance testing.
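As an illustration, a first baseline run doesn’t need heavyweight tooling. The sketch below is a minimal example, assuming a hypothetical `/api/health` endpoint and hypothetical request counts, that uses only the Python standard library to fire a batch of concurrent requests and record two of the data points mentioned above: average response time and request rate.

```python
# Minimal baseline sketch (hypothetical endpoint and numbers, standard library only).
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://staging.example.com/api/health"  # hypothetical endpoint
REQUESTS = 200
CONCURRENCY = 20

def timed_request(_):
    """Issue one GET request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(BASE_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    durations = list(pool.map(timed_request, range(REQUESTS)))
elapsed = time.perf_counter() - start

# First data points for the baseline: average response time, p95, and request rate.
print(f"avg response time: {statistics.mean(durations) * 1000:.1f} ms")
print(f"p95 response time: {sorted(durations)[int(len(durations) * 0.95)] * 1000:.1f} ms")
print(f"request rate:      {REQUESTS / elapsed:.1f} req/s")
```

In practice these numbers would be recorded alongside server-side measurements such as processor and memory usage, so later runs have something concrete to compare against.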
If there is no existing code, then the baseline can be based on whatever information the customer has given to the team, such as an estimated user base or daily transactions. This knowledge is essential for guiding the next stages of development and testing, where functionality will be created and the load tested accordingly.
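For example, using purely hypothetical numbers: if a client expects around 10 million transactions per day and experience suggests roughly 20% of that traffic lands in the busiest hour, the target load works out to about 2 million requests over 3,600 seconds, or roughly 550 requests per second. That figure becomes the starting point for designing the load tests.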
Scaling to Massive Performance Testing
Certain applications can draw thousands, if not millions, of users every day. These users might all interact differently with the interface, so software development outsourcing companies need the ability to replicate all of those scenarios, thousands of times, and compare results with previous tests to uncover breakpoints.
At Perficient Latin America, across our nearshore software outsourcing engagements, we have built a continuous performance testing approach into our continuous integration pipeline, allowing us to handle these heavy loads and stress-test on a massive scale.
Our performance tests run automatically against multiple new versions, several times per day. They are closely tied to the software development lifecycle, which allows us to push changes to the repository every day and get valuable feedback that helps us vastly improve the code and the application’s performance.
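Conceptually, the gate in the pipeline can be as simple as comparing the metrics from the latest run against the stored baseline and failing the build when a regression exceeds an agreed tolerance. The sketch below is an illustration of that idea, not our actual pipeline code; it assumes hypothetical `baseline.json` and `latest_run.json` files containing the kinds of metrics described earlier.

```python
# Illustrative CI gate: compare the latest run against the baseline (hypothetical files).
import json
import sys

TOLERANCE = 0.10  # fail the build if a metric degrades by more than 10%

# Metrics where a higher value is worse (times, resource usage) vs. better (throughput).
HIGHER_IS_WORSE = {"avg_response_time_ms", "p95_response_time_ms", "cpu_percent", "memory_mb"}
HIGHER_IS_BETTER = {"requests_per_second"}

baseline = json.load(open("baseline.json"))   # e.g. {"avg_response_time_ms": 180, ...}
latest = json.load(open("latest_run.json"))

failures = []
for metric, base_value in baseline.items():
    current = latest.get(metric)
    if current is None or base_value == 0:
        continue
    change = (current - base_value) / base_value
    regressed = (metric in HIGHER_IS_WORSE and change > TOLERANCE) or \
                (metric in HIGHER_IS_BETTER and change < -TOLERANCE)
    if regressed:
        failures.append(f"{metric}: baseline {base_value}, latest {current} ({change:+.1%})")

if failures:
    print("Performance regression detected:")
    print("\n".join(failures))
    sys.exit(1)  # non-zero exit fails the CI step
print("Performance within tolerance of baseline.")
```

Storing one such snapshot per build also gives the team a day-to-day history, which is what makes trends and breakpoints easier to spot over time.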
Across all teams, performance engineering must be seen as a benefit to the process, not an overhead. When teams work closely with the performance engineers, it’s easier to communicate results and make sure findings are acted upon. Performance teams should be free to talk to developers, explain where the software is failing, and spell out what the consequences of failure might be.
Overcoming Massive Challenges
Massive performance testing on demand can create a lot of issues. There may even be cases where an error occurs and it is almost impossible to spot where the code failed. This is why creating a baseline and measuring the results of every test is so important.
Once massive performance testing is performed regularly, it becomes much easier to compare results on a daily basis and find performance improvements more quickly. Even so, that brings its own challenges of data storage and data analysis, which we cover extensively in another article.
Unlike QA testing, performance tests take a long time and demand significant resources, so they have to be planned and executed with a high degree of accuracy. This means engineers need to understand, and then maintain, a careful balance between time and resources.
Depending on the number of scenarios being tested, it may not be possible to complete all tests in one day. When the software development team follows DevOps or Agile principles, this can be an issue because of the daily iterations those methodologies require. In these cases, we’ve found it helps to focus on the scenarios most aligned with the business goals, which reduces the time spent testing less important ones.
In some nearshore software development projects, clients might not give much thought to performance until the very end. When they can see how many transactions the app can handle, or how many resources are being used, those metrics are a huge eye-opener and can help guide their next business decisions. That, as we see it, is the real value of mastering massive performance testing.
—
If you’re looking to find the real value in performance testing, we’d love to chat with you. Call us today.