With the rise of mobile devices, user experience has become more critical than ever. We carry the internet in our pockets, browse our favorite sites multiple times a day, receive push notifications from our loved ones, and buy products online whenever something catches our eye. This usage pattern benefits online retailers, but it is a challenge for IT service providers.
Life cycle integration and proactivity are the two most important aspects of the performance value chain. If you don't uncover reliability hotspots soon after they are introduced, you risk delays, firefighting in production, and massive rework. Once a slow or unresponsive app reaches your user community, there is a very high chance that your revenue will decline. My strategy for avoiding such performance nightmares is rooted in proactivity: early and continuous performance validation throughout the development pipeline on the one hand, and shift-right, ongoing performance monitoring in production on the other.
Load Testing
Shift left means finding problem spots earlier in the life cycle. Launching new products quickly is a top business priority, yet defective applications hold us back and consume more developer resources than expected, which delays subsequent sprints. Automation is a powerful strategy for catching errors shortly after they are introduced. This holds for functional checks, and it holds equally for validating non-functional requirements: vulnerability scans, load tests, spike tests, and long-duration tests become more effective the earlier you execute them in your development chain. Ideally, you integrate these validations of agreed NFRs into your development pipeline so that the quality of a new build is assessed immediately after deployment and deviations are escalated promptly to your development teams. Performance testing is your chance to find critical hotspots early in your DevOps chain, which protects your investment.
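To make the pipeline integration concrete, here is a minimal, hypothetical sketch of an NFR quality gate that could run right after a build is deployed. All names (`measure_latency`, `p95`, `gate`, `NFR_P95_MS`) are illustrative assumptions, not a specific tool's API; real load tests would use a dedicated tool, but the gating logic looks roughly like this:

```python
# Hypothetical NFR gate: time repeated requests against a freshly deployed
# build and fail the pipeline step when the agreed percentile threshold
# (an example NFR) is violated.
import time


def p95(samples_ms):
    """Return the 95th-percentile latency from a list of samples (ms)."""
    ordered = sorted(samples_ms)
    index = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[index]


def measure_latency(fetch, runs=20):
    """Time repeated calls to `fetch` (e.g. an HTTP request to the new
    build) and return the observed latencies in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch()
        samples.append((time.perf_counter() - start) * 1000)
    return samples


NFR_P95_MS = 3000  # example agreed NFR: 95% of requests under 3 seconds


def gate(samples_ms, threshold_ms=NFR_P95_MS):
    """Return False (fail the build) when the NFR is violated."""
    return p95(samples_ms) <= threshold_ms
```

In a pipeline, `fetch` would call the application under test; the gate's boolean result decides whether the build proceeds or the deviation is escalated to the development team.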
Monitoring
Excellent user experience is critical to good business and happy customers. Think about your last negative experience on one of your formerly favorite shopping sites. After encountering hanging, unresponsive, or slow-loading pages, you won't use it anymore. This behavior is typical of customers in general: people abandon applications once they experience response times of 3 seconds or more, and the chance they never return is high.
Did this ring your alarm bells? Monitoring availability and response times in production is essential to identify and escalate slowdowns before your customers are affected.
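As an illustration of the active-monitoring idea described later in the takeaway, here is a minimal sketch of a synthetic probe. The names (`probe`, the 3-second threshold constant) are assumptions drawn from this article, not the API of any real monitoring product:

```python
# Illustrative active-monitoring probe: issue one synthetic request and
# report whether the application is unavailable or slower than the
# threshold, so the slowdown can be escalated before real users complain.
import time

SLOW_MS = 3000  # users tend to abandon pages slower than ~3 seconds


def probe(request, threshold_ms=SLOW_MS):
    """Run one synthetic request; return (ok, latency_ms, reason)."""
    start = time.perf_counter()
    try:
        request()  # e.g. an HTTP GET against a key business transaction
    except Exception as exc:
        return (False, None, f"unavailable: {exc}")
    latency_ms = (time.perf_counter() - start) * 1000
    if latency_ms > threshold_ms:
        return (False, latency_ms, "slow response")
    return (True, latency_ms, "ok")
```

A scheduler would run such a probe every few minutes against key transactions and page the support team whenever `ok` is False.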
Combining Both
We've learned that the point in time at which we identify hotspots can make our lives as support engineers or developers much more comfortable: it gives us more time to implement a fix and results in fewer tickets from angry customers. Combining load testing with monitoring lets you understand how the new system behaves under production-like load conditions. At the same time, you can gauge how powerful your monitoring solution is. A powerful monitoring platform makes the search for the needle in the haystack much easier.
Takeaway
Performance Testing: Simulating current and projected user volumes on your application under test, with test execution on pre-production stages. The objective is to find and fix critical design, coding, or reliability issues that would slow down your business applications.
Monitoring: We distinguish between active and passive monitoring. Active monitoring probes the application with synthetic single-user requests. Passive monitoring collects performance metrics while your real users navigate through your applications. Monitoring aims to identify hotspots before your customers or end users are affected and to escalate those issues to the support teams in charge.
Happy Performance Engineering!