Josef Mayrhofer

6 Pitfalls that waste your Performance Engineering budget

Updated: Feb 17, 2022

Many companies are cutting their IT costs. Outsourcing, out-tasking, and eliminating redundancies are some of the measures organizations use to reduce spending on digital services. Competition is fierce, and in the long term, only those businesses that manage to transform their culture from reactive firefighting to continuous improvement are likely to survive. In this post, I will shine a light on performance engineering and show you where money is typically wasted in this area.

Just to be clear, as a performance advocate, I fully believe that tuning applications for speed and stability is one of your best investments. In my experience, however, many software companies are still spending too much money on their performance testing or monitoring activities without satisfactory results. Over the last few years, I have found that six pitfalls often kill performance engineering budgets:

  1. Late involvement

  2. Outdated tools

  3. Wrong simulation approach

  4. Cosmetic bug fixes

  5. Don’t share metrics

  6. Fire and forget

All of these topics are equally important. Avoid them all, and your chances are high that your next performance engineering assignment will be a success. Read my guidance below to get a better understanding of how you can save money on your next performance engineering task.

Late Involvement

Business analysts, developers, and requirements engineers completely ignore non-functional aspects in their design decisions. Then, a few weeks before go-live, a handful of randomly selected use cases are tested under concurrent load.
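One way to counter this pitfall is to write performance budgets as automated tests that run with every build. Below is a minimal sketch in Python using pytest and requests; the endpoint, base URL, and the 500 ms budget are hypothetical examples, not values from this post.

```python
# perf_budget_test.py - a minimal sketch of an early performance gate.
# Assumptions: the service exposes GET /api/search and the team agreed
# on a 500 ms response-time budget; both are hypothetical examples.
import time

import requests

BASE_URL = "http://localhost:8080"   # hypothetical test environment
BUDGET_SECONDS = 0.5                 # agreed non-functional requirement


def test_search_stays_within_budget():
    start = time.perf_counter()
    response = requests.get(f"{BASE_URL}/api/search", params={"q": "demo"}, timeout=5)
    elapsed = time.perf_counter() - start

    assert response.status_code == 200
    # Fail the build as soon as the budget is exceeded, instead of
    # discovering the regression a few weeks before go-live.
    assert elapsed <= BUDGET_SECONDS, f"search took {elapsed:.3f}s, budget is {BUDGET_SECONDS}s"
```

Run under pytest in the CI pipeline, a check like this turns a non-functional requirement into a gate that fails long before go-live.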

Outdated Tools

There is no budget for professional load testing and monitoring suites, so teams assume that home-brewed load simulation tools will do the job. In the best case, an open-source load testing platform with limited simulation techniques is available. Sometimes engineers spend days implementing just a single load testing script, and each minor change in the application under test results in a big re-scripting effort.
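For contrast, a well-chosen open-source tool can keep scripting effort low. Here is a minimal sketch of a load test with Locust (an open-source load testing tool) in Python; the endpoints, credentials, and task weights are hypothetical placeholders that only illustrate how little code a maintainable scenario can take.

```python
# locustfile.py - a minimal sketch of a load test scenario with Locust.
# The /login, /search, and /checkout endpoints are hypothetical examples.
from locust import HttpUser, task, between


class ShopUser(HttpUser):
    # Simulated users pause 1-5 seconds between actions, like real visitors.
    wait_time = between(1, 5)

    def on_start(self):
        # Each simulated user logs in once at the start of its session.
        self.client.post("/login", json={"user": "demo", "password": "demo"})

    @task(3)  # searching happens three times as often as checkout
    def search(self):
        self.client.get("/search", params={"q": "shoes"})

    @task(1)
    def checkout(self):
        self.client.post("/checkout")
```

You can run it with, for example, locust -f locustfile.py --host https://test.example.com and control the user count and ramp-up in the web UI, so a minor change in the application usually means editing a few lines rather than re-scripting everything.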

Wrong Simulation Approach

Both too high and too low a load volume come with high risk. Nobody has a clue how to derive the appropriate user and transaction volumes for the new application. On top of that, the test environment and data volumes are far from realistic. Due to the lack of non-functional requirements, engineers build the load pattern on assumptions.
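One way to replace pure guesswork is Little's Law, which relates the number of concurrent users to throughput and the time each user spends per iteration. Here is a back-of-the-envelope sketch in Python; all numbers are made-up examples for illustration.

```python
# Little's Law: concurrent_users = arrival_rate * time_in_system
# All numbers below are hypothetical examples, not values from this post.

peak_transactions_per_hour = 36_000                  # e.g. from forecasts or access logs
arrival_rate = peak_transactions_per_hour / 3600.0   # = 10 transactions per second

response_time = 2.0   # seconds the system works on each transaction
think_time = 8.0      # seconds a user pauses between transactions

# Each user keeps the system "occupied" for response time plus think time.
concurrent_users = arrival_rate * (response_time + think_time)
print(f"Simulate roughly {concurrent_users:.0f} concurrent users")  # -> 100
```

Even a rough calculation like this anchors the load pattern in real traffic data instead of pure assumptions.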

Cosmetic Bug Fixes

Fixing nasty performance issues can be a tough job. You have identified the cause, but there is simply not enough time to fix the problem properly, so as a short-term fix, the team ramps up the hardware.

Don’t Share Metrics

There are strict gates between Dev, QA, and Ops. Implementation teams often lose days struggling to reproduce errors. "It works in my environment" is one of the most frequently used arguments, a direct consequence of this lack of insight and metrics sharing across the organization.
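Sharing the same metrics across Dev, QA, and Ops does not have to be expensive. As one illustration, here is a minimal sketch that exposes request timings with prometheus_client, the official Python client for Prometheus; the metric name, label, and port are hypothetical examples.

```python
# metrics_demo.py - a minimal sketch of metrics every team can look at.
# Metric name, label values, and port are hypothetical examples.
import random
import time

from prometheus_client import Histogram, start_http_server

REQUEST_SECONDS = Histogram(
    "app_request_seconds",
    "Time spent handling a request",
    ["endpoint"],
)


@REQUEST_SECONDS.labels(endpoint="/search").time()
def handle_search():
    # Stand-in for real work; replace with the actual request handler.
    time.sleep(random.uniform(0.05, 0.3))


if __name__ == "__main__":
    # Expose metrics at http://localhost:8000/metrics for Dev, QA, and Ops alike.
    start_http_server(8000)
    while True:
        handle_search()
```

When every team reads the same timing histograms, "it works in my environment" turns into a comparison of two concrete sets of numbers.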

Fire and Forget

Your performance testing team successfully simulated the expected load on the new application. The production launch went fine, and your new users are fully satisfied. But the Ops team has no performance monitoring solution in place, and from their perspective, continuous performance monitoring after go-live is a waste of money.
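A post-go-live monitor does not need to be a big investment to start with. Here is a minimal sketch of a synthetic check in Python using requests; the URL, threshold, and interval are hypothetical placeholders, and in practice you would feed the result into an alerting system rather than print it.

```python
# synthetic_check.py - a minimal sketch of continuous post-go-live monitoring.
# URL, threshold, and interval are hypothetical examples.
import time

import requests

URL = "https://www.example.com/health"   # hypothetical health endpoint
THRESHOLD_SECONDS = 1.0                  # alert if slower than this
INTERVAL_SECONDS = 60                    # probe once per minute

while True:
    start = time.perf_counter()
    try:
        response = requests.get(URL, timeout=10)
        elapsed = time.perf_counter() - start
        if response.status_code != 200 or elapsed > THRESHOLD_SECONDS:
            # In a real setup this would page Ops or push to a monitoring tool.
            print(f"ALERT: status={response.status_code}, time={elapsed:.2f}s")
        else:
            print(f"OK: {elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"ALERT: probe failed: {exc}")
    time.sleep(INTERVAL_SECONDS)
```

A probe this simple already catches the slow degradation that fire-and-forget teams only hear about from their users.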

I highly recommend avoiding all of these pitfalls in your next performance assignment; doing so is a solid step toward IT cost reduction and a better user experience. Keep up the good work!
