There’s more to performance engineering than load injection and response-time validation. Simulating production volumes against your changed applications is a good start, and there are plenty of tools for doing it, but it won’t uncover all your critical performance hotspots. Read this post to find out how to modernize your performance-engineering approach and close the remaining gaps this year.
Shift performance engineering to the left
Unit testing is great for validating core functions early in development because your developers can write test cases alongside their code. Whenever your engineers complete a new feature, they can run their unit-level tests to make sure the implementation hasn’t introduced new bugs into existing components. As a rule of thumb, create a unit-level test for each core function, and whenever a defect is detected, add a unit-test case that validates the problematic function.
Do your testing early! Reuse your unit-level tests for early performance validation: select the most important core functions, calculate their expected request volumes, and create a scaled-down workload appropriate for your development environment. Schedule a regular execution of this unit-level performance test, ideally as part of your CI/CD build pipeline, so the performance of every new build is validated automatically. A minimal sketch follows the tool list below.
Some tools for your CI/CD pipeline:
JUnit and NUnit unit-testing frameworks
Jenkins scripted pipeline
Jenkins plugins for load-injection tools
Jenkins plugins for automated performance validation
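As an illustration, here is a minimal sketch of a unit-level performance check with JUnit 5. The OrderService class, its searchOrders method, and the 200 ms budget are hypothetical placeholders; substitute one of your own core functions and a threshold derived from your expected request volumes.

import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;

import java.time.Duration;
import org.junit.jupiter.api.RepeatedTest;

class OrderSearchPerformanceTest {

    // Hypothetical core function under test; replace with one of your own.
    private final OrderService orderService = new OrderService();

    // Repeat the call to smooth out one-off outliers; fail the build if a
    // single invocation exceeds the scaled-down 200 ms budget.
    @RepeatedTest(25)
    void searchOrdersStaysWithinBudget() {
        assertTimeoutPreemptively(Duration.ofMillis(200),
                () -> orderService.searchOrders("open"));
    }
}

Wired into a Jenkins pipeline stage, a failing check like this stops a slow build before it ever reaches your test environment.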
UX testing
Performance means more than fast backend services. By limiting performance testing to your API or service layer, you risk a poor user experience. Frameworks such as AngularJS and React bring more and more functionality to the client layer. Browser-side caching, content-delivery networks, content compression, image sizing and many other web-development best practices all contribute to giving your end users a great experience.
The one thing your customers care about is the end-to-end performance of the business functions they use most often; the response times of your internal services are of no interest to them. That’s why your performance-engineering approach needs to reflect the importance of user-experience testing by extending performance testing to the client layer.
Full browser-based testing and front-end profiling tools will help you identify and fix the glitches in your client layer.
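As a rough sketch of what client-layer measurement can look like, the snippet below uses Selenium WebDriver and the browser’s Navigation Timing API to capture the page-load time a user actually perceives. The checkout URL and the 3-second budget are assumptions for illustration only.

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ClientTimingCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com/checkout"); // hypothetical page under test

            // Navigation Timing API: elapsed ms from navigation start to load event end
            long loadTimeMs = (Long) ((JavascriptExecutor) driver).executeScript(
                "return window.performance.timing.loadEventEnd"
                + " - window.performance.timing.navigationStart;");

            System.out.println("Full page load: " + loadTimeMs + " ms");
            if (loadTimeMs > 3000) {
                throw new AssertionError("Client-side load time exceeded 3 s budget: " + loadTimeMs + " ms");
            }
        } finally {
            driver.quit();
        }
    }
}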
Know your workload and sizing constraints
When performance requirements are unavailable or inaccurate, or when the sizing of your dev and test stages is not comparable to your production environment, all your performance-engineering activities become risky. Your test results will also be doubted if you can’t justify the user and request volumes you simulate in your load and performance tests.
Don’t blindly simulate the agreed workload. Question it, and start with a comparison of your test and production environments. Review all the gaps with your application and development teams, and discuss their impact on your performance-engineering strategy. You can also create a heat map that highlights all the dependencies on surrounding services, but we’ll look at this in another blog post.
After you’ve addressed every gap in the infrastructure, proceed with a qualitative and quantitative analysis of your workload and usage figures. A workload that is either too high or too low puts your performance tests at risk. You can extract request volumes and usage trends from your production systems, or use Little’s Law to validate the given requirements. Accurate usage figures improve the predictability of your test results and lend much more credibility to the issues you identify.
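As a worked example of Little’s Law (concurrent users N equal throughput X multiplied by response time R plus think time Z), the figures below are illustrative assumptions, not measured values; plug in your own production numbers.

public class WorkloadCheck {
    public static void main(String[] args) {
        // Little's Law: N = X * (R + Z)
        double throughputPerSecond = 120.0; // assumed peak requests per second
        double responseTimeSeconds = 1.5;   // assumed average response time
        double thinkTimeSeconds = 8.5;      // assumed user think time between requests

        double concurrentUsers = throughputPerSecond * (responseTimeSeconds + thinkTimeSeconds);
        // 120 * (1.5 + 8.5) = 1200 concurrent virtual users
        System.out.printf("Required concurrent users: %.0f%n", concurrentUsers);
    }
}

If the requirement handed to you calls for a wildly different user count at the same throughput, that’s your cue to question it.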
Users’ perceived response times
You’ve tested the performance of the new application, but some of your customers are still not satisfied. The slow response times they’re experiencing mean they abandon your new application before they’ve really used it. Your engineers confirm that they can’t reproduce the problem, and all the business processes are within the agreed boundaries. I expect something like this has already happened to you.
In most of these cases there are two root causes. The first is related to application design: if the client-server communication of your application is chatty, response times for customers abroad will be much higher than for those close to your application’s hosting location.
The second is network quality. The response time of your application will not be acceptable for customers on a slow or high-latency connection. A network-virtualization solution and full browser-based testing can help close this gap; ideally, your performance-testing strategy should reflect the geographical locations of your users. A back-of-the-envelope estimate follows the tool list below.
Effective tools in this category are:
full browser-based testing tools such as LoadView or SilkPerformer
network-virtualization tools such as WanBridge
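To make the first root cause concrete, here is a back-of-the-envelope calculation of how chattiness and network latency multiply. The request count and round-trip times are assumptions for illustration, not measurements.

public class LatencyImpact {
    public static void main(String[] args) {
        int sequentialRequests = 30;  // round trips a chatty page needs before it is usable
        double rttLocalMs = 15;       // users close to the hosting location
        double rttRemoteMs = 150;     // users abroad or on a weak connection

        System.out.printf("Local user network wait:  %.2f s%n", sequentialRequests * rttLocalMs / 1000);
        System.out.printf("Remote user network wait: %.2f s%n", sequentialRequests * rttRemoteMs / 1000);
        // 30 round trips * 150 ms = 4.5 s of pure network wait, before any server processing.
    }
}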
Contact me or my team with any questions on how to make performance engineering part of your software-development process.
Happy performance engineering!