A worrying number of defects have cropped up late in your project. Although your teams are fighting hard, the test engineers can't test all the changes before deployment to the production environment. Does this scenario sound familiar?
Most of us in the testing industry have experienced such situations. The question is: What have you learned from such disasters? Here, I’ll share my experiences from recent automation projects, and give you some ideas on how to avoid such scenarios and the sunk costs that result.
The pitfalls
Complex test cases: Record and replay is a nice feature, but it often leads to quick-and-dirty test cases that are little better than garbage. Because the recording engine captures every click as you navigate your application, it is tempting to wander through every available link, menu, and submenu, and you end up with sprawling, brittle scripts that break at the first UI change and are painful to maintain.
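To make the pitfall concrete, here is a minimal sketch in Python with Selenium, against a hypothetical shop page (the URL and element identifiers are invented for illustration). It contrasts the kind of locator a recorder typically captures with one you would write and maintain deliberately:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com")  # hypothetical application under test

# What a recorder typically captures: an absolute XPath tied to the current
# page layout, which breaks as soon as the markup changes.
driver.find_element(
    By.XPATH, "/html/body/div[2]/div[1]/nav/ul/li[3]/a"
).click()

# What a maintainable script uses instead: a stable, semantic locator.
driver.find_element(By.ID, "menu-orders").click()

driver.quit()
```

Recorded scripts tend to be full of the first kind of locator, multiplied across hundreds of steps, which is why they age so badly.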
Reinventing the wheel: Realizing the limitations of our existing solutions, some of us decided to build an automation tool of our own. While new ideas are always welcome, you don't need to reinvent the wheel; a home-grown tool usually turns into a massive programming project in its own right.
Accountability: Who's in charge of your automation projects? Often there are functional, performance, monitoring, and security teams, all working to automate repetitive tasks. Sure, they have different objectives, but they're all aiming to automate the same manual work, and without clear ownership much of that effort is duplicated.
The solution
Script once and reuse it many times: Why should all your teams code the same scripts multiple times? True, they all have different purposes when coding, but a load-testing script is also valid for regression testing, and vice versa. In fact, the best products on the market let you reuse your scripts across the entire value stream, from automated testing through load testing, monitoring, and even security testing.
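As an illustration, here is a minimal sketch in Python, assuming a hypothetical /api/login endpoint; the choice of pytest and Locust is mine, not a recommendation from any particular vendor. The same helper serves both a regression test and a load-test scenario:

```python
import requests
from locust import HttpUser, task

BASE_URL = "https://shop.example.com"  # hypothetical system under test


def login(session, user, password):
    """One reusable test step: authenticate against the (hypothetical) API."""
    resp = session.post(f"{BASE_URL}/api/login",
                        json={"user": user, "password": password})
    resp.raise_for_status()
    return resp


# Functional/regression check (run with pytest) reuses the step.
def test_login_returns_token():
    with requests.Session() as session:
        assert "token" in login(session, "alice", "secret").json()


# Load-test scenario (run with locust) reuses the very same step;
# Locust's self.client subclasses requests.Session, so the helper works unchanged.
class LoginUser(HttpUser):
    host = BASE_URL

    @task
    def login_task(self):
        login(self.client, "alice", "secret")
```

Keeping shared steps like login in one module means every team maintains the logic in a single place, whatever their objective.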
Keep it simple: As already mentioned, the complexity of automated test cases can be a massive showstopper when it comes to efficiency. Don't try to automate complex end-to-end scenarios that require hundreds of steps! For efficiency, focus your automation instead on interfaces and critical use cases that don't involve many satellite systems.
Shift left: Robust automation requires robust applications. The interfaces and APIs of a system under test are usually implemented first, and that's where you should start your automated testing, too. You can create robust scripts for components that won't change often, and if an interface does change, such API-based scripts are easy to adjust. Shifting left helps you find and fix issues earlier in the lifecycle.
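For instance, a shift-left check can target the API contract directly, long before any UI exists. The endpoint, fields, and expected values below are invented for illustration; the point is the shape of the test, not the specifics:

```python
import requests

BASE_URL = "https://shop.example.com/api"  # hypothetical API under test


def test_create_order_returns_id_and_total():
    # Exercise one critical use case at the interface level: creating an order.
    payload = {"customer": "alice", "items": [{"sku": "ABC-1", "qty": 2}]}
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    # Contract checks: status code and the fields downstream teams rely on.
    assert resp.status_code == 201
    body = resp.json()
    assert "order_id" in body
    assert body["total"] > 0
```

If a field name or the API version changes, the adjustment is a one-line edit, which is exactly what makes API-level scripts sturdier than long recorded UI flows.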
Enablement: Don't just stick to your existing tool set and hope for the best. Technology is advancing, and new SaaS-based solutions have already left traditional on-premises products behind in ease of use, flexibility, and cost. Why not reserve some time in your weekly agenda for research to keep yourself and your toolbox up to date?
Recommended tooling: Try EveryStep from dotcom-monitor. The first entirely cloud-based tool on the market, it supports all teams working in the automation pipeline and enables full browser-based testing at the same time. You can use the basic edition for free and save your scripts locally on your machine. What's more, a powerful load-testing and performance-monitoring solution is already integrated, and a QA edition is coming soon. Read more about this solution at https://www.everystep-automation.com/. You can use it straight away for your automation tasks and upgrade anytime to the load-testing or monitoring edition.
We at Performetriks are always here to assist you in all your automation, performance, monitoring, and security challenges.
Keep up the excellent work! Happy testing!