New technology pops up almost overnight, and the complexity of our applications keeps increasing. Virtualization, artificial intelligence, microservices, and machine learning are just the beginning, and there is much more to come. Some people argue that performance is the most important feature. I entirely agree, because slow-loading applications break the customer’s attention. Once users have mentally left their intended activity due to poor speed, the chance is high that they will abandon the service altogether. In the worst case, they spend their money on your competitor’s website.
The reasons for low speed
Indeed, there are good reasons why so many applications struggle and deliver an unsatisfying user experience. Organizations often ignore non-functional aspects in their development chain. Software designers build services that fulfill the features documented in the requirement papers they are given. Today’s development cycles are highly feature-driven because the customer wants to use the new product soon. QA staff test the functionality of the new application, and in the best case they simulate load and measure response times a few weeks before deployment to production. In the final go/no-go meeting, the project team decides that performance issues are low priority and gives its go for deployment. Then the nightmare continues in production: business customers complain that response times are unacceptable and refuse to use the new product, while operations teams are puzzled because they can’t reproduce the issue. Their log-file and system-resource-based monitoring confirms that there is no issue at all.
The problems involved
I assume that you’ve experienced the scenario above in one of your past projects. Obviously, there are several problems involved.
1. No non-functional requirements: Customers know what they want from a feature perspective, but there is no line in their specification about how fast or how reliably the new service should be delivered.
2. Late involvement: Starting performance engineering late in the chain leads to high risk and delays because identified issues often result in a redesign of the whole application.
3. Firefighting: Analyzing bottlenecks in production is a nasty task. Pressure from business and management is high, and teams start protecting their own domains. At the end of the day, you can’t identify the root cause and start adding more hardware in the hope of fixing the issue.
4. Loss of revenue and reputation: According to recent research, the response times users will tolerate continue to shrink. The majority of customers expect load times below two seconds, and if a service can’t meet their speed requirements, they spend their money on faster, more reliable websites. Speed also affects a company’s reputation: no matter how fancy an application might be, if it can’t deliver its functionality quickly, it reflects badly on your firm.
A state-of-the-art performance engineering approach
Don’t wait for the next performance bottleneck. Start your engines and replace reactive behavior with early error detection. This practice will save you from massive failure-correction exercises. Review your current development processes and calculate your performance engineering maturity level; areas with a maturity level below 1.5 require urgent attention.
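To make the maturity idea concrete, here is a minimal sketch of how such a self-assessment could be scored, assuming a hypothetical 0–3 rating per practice. The area names, practices, and example scores are illustrative only; the 1.5 threshold mirrors the one mentioned above.

```python
# Minimal sketch of a maturity-level calculation (hypothetical scoring model).
from statistics import mean

# Each area maps to ratings (0 = absent, 3 = fully established) that a team
# might assign to its own practices during a process review.
ratings = {
    "requirements": [1, 2, 0],   # e.g. non-functional specs, SLAs, workload models
    "testing":      [2, 2, 1],   # e.g. load tests, CI performance gates, test data
    "production":   [1, 0, 1],   # e.g. monitoring, capacity reviews, alerting
}

THRESHOLD = 1.5  # areas scoring below this value need urgent attention

for area, scores in ratings.items():
    level = mean(scores)
    flag = "URGENT" if level < THRESHOLD else "ok"
    print(f"{area:12s} maturity {level:.2f}  [{flag}]")
```

Running this flags "requirements" and "production" as urgent in the sample data; the value of the exercise lies in agreeing on the practices and ratings with your teams, not in the arithmetic itself.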
Performance is not a one-off exercise. You can design a highly scalable and responsive application and demonstrate in your testing stages that the new system easily handles current and future load patterns, but if you don’t continue with periodic performance reviews post-production, your formerly fast services can slow down sharply.
Even more important, make sure your teams share performance metrics across the whole organization. Let developers dive into the response-time, throughput, and error figures collected in production. This practice motivates your teams and enables continuous improvement.
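As a rough illustration of the figures worth sharing, here is a small sketch that derives a percentile response time, throughput, and error rate from raw request samples. The record format, time window, and numbers are assumptions made for this example.

```python
# Minimal sketch: turn raw production request samples into the response-time,
# throughput, and error figures mentioned above (illustrative data only).
from statistics import quantiles

# Hypothetical request log: (duration in seconds, HTTP status) over one window.
WINDOW_SECONDS = 60
requests = [(0.42, 200), (1.80, 200), (0.35, 500), (2.40, 200), (0.90, 200)]

durations = [d for d, _ in requests]
errors = sum(1 for _, status in requests if status >= 500)

p95 = quantiles(durations, n=100)[94]        # 95th-percentile response time
throughput = len(requests) / WINDOW_SECONDS  # requests per second
error_rate = errors / len(requests)          # fraction of failed requests

print(f"p95 response time: {p95:.2f}s")
print(f"throughput:        {throughput:.2f} req/s")
print(f"error rate:        {error_rate:.1%}")
```

Publishing a handful of figures like these on a shared dashboard is usually enough to start the conversation between developers, testers, and operations.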