Let's consider a scenario: while testing a product, the number of defects has reached an all-time high, and your teams are in firefighting mode. Does this sound familiar? Sometimes test engineers simply can't test all changes before they are deployed to production.
I assume all of us working in the testing industry have experienced such situations. What lessons have you learned from these disasters? What are the possible ways out of such complex situations? In this blog post, I will share the experience I have gathered in recent automation projects and offer some ideas for overcoming such nightmares.
The Pitfalls
There are various reasons why teams end up in such situations, and each of them makes the testing process tedious:
Complex test cases: Record and replay is a nice feature, but it often produces quick-and-dirty test cases that are essentially garbage. Clicking through your application is easy, and the recording engine captures every click, so it's tempting to navigate through every available link, menu, and submenu; the result is a bloated, fragile test case.
Reinventing the wheel: Some of us have run into the limits of existing solutions and decided to implement our own automation tools. New ideas are more than welcome, but don't reinvent the wheel; doing so will most likely turn into a massive programming effort.
Accountability: Who is in charge of your automation projects? There are often separate functional, performance, monitoring, and security teams, and all of them deal with automating repetitive tasks. Sure, they have different objectives, but the nature of their job is the same: automating manual work.
The Solution
To counter these pitfalls and work efficiently, consider the following solutions:
Script once, reuse many times: Why should each of your teams code the same scripts independently? It's true that they pursue different goals, but a load-testing script is often good enough for regression testing, and vice versa. The best products on the market let you reuse your scripts across the entire value stream, from automated testing through load testing, monitoring, and even security testing.
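As a minimal sketch of this idea, the same scenario function can drive both a regression run and a concurrent load run. Everything here is hypothetical: `check_status`, `FakeClient`, and the paths are illustrative stand-ins, not a real tool's API.

```python
from concurrent.futures import ThreadPoolExecutor

def check_status(client, path):
    """One reusable scenario step: request a path and verify the response."""
    status = client.get(path)
    assert status == 200, f"unexpected status {status} for {path}"
    return status

def run_regression(client, paths):
    """Regression use: run the scenario once per critical path."""
    return [check_status(client, p) for p in paths]

def run_load(client, path, users=10):
    """Load-testing use: run the *same* scenario concurrently."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(lambda _: check_status(client, path), range(users)))

class FakeClient:
    """Stand-in HTTP client so the sketch runs without a real server."""
    def get(self, path):
        return 200
```

The point is that the scenario logic is written once; only the harness around it changes per team.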
Keep it simple: As mentioned earlier, the complexity of automated test cases is a major showstopper when it comes to efficiency. Don't try to automate complex end-to-end scenarios containing hundreds of steps. Efficiency-wise, it's better to focus your automation on interfaces and on critical use cases that do not involve many satellite systems.
Shift left: Robust automation requires robust applications. The interfaces and APIs of a system under test are implemented first, and that's where your automated testing should start. You will create robust scripts for components that do not change often, and when a change does happen, such API-based scripts are easy to adjust. Shifting left helps you find and fix issues earlier in the lifecycle.
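A component-level API check along these lines might look like the sketch below. The `create_order` function and its payload shape are assumptions made up for illustration; in a real project it would call the actual service endpoint.

```python
def create_order(payload):
    """Stand-in for the real API call; the contract shown here is hypothetical."""
    if "item" not in payload:
        return {"status": 400, "error": "item required"}
    return {"status": 201, "id": 1, "item": payload["item"]}

def test_create_order_ok():
    # Exercises the API contract directly, with no UI involved.
    resp = create_order({"item": "book"})
    assert resp["status"] == 201

def test_create_order_missing_item():
    # Validation rules can be tested long before any UI exists.
    resp = create_order({})
    assert resp["status"] == 400

test_create_order_ok()
test_create_order_missing_item()
```

Because these checks target the API contract rather than screen elements, they survive UI redesigns and only need adjusting when the contract itself changes.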
Enablement: Don't stick to your existing toolset. Technology moves fast, and newer solutions on the market have already left traditional products behind in ease of use, flexibility, and cost. Block some time in your weekly agenda for research to keep yourself and your toolbox up to date.
I hope the solutions above are helpful to you. If you have any further questions, you are more than welcome to contact us. Performetriks is always there to assist you with your automation, performance, monitoring, and security challenges.
Keep doing the excellent work! Happy Testing!