Test smart: how to apply automation and stay sane?


Over the last decade, automation has made its way into the QA field. Now, with so many tools on offer, applying it wisely has become a challenge that needs to be handled elegantly.

While travelling around Tuscany, I visited the Leonardo da Vinci Museum. There, I was impressed by the sheer number of projects Leonardo produced. One of the most striking objects that caught my eye was an automatic gold-beating machine designed for the textile manufacturers of Florence.

The urge to increase productivity through automation has been with humanity for a long time. Or maybe Leonardo was simply far ahead of his time?

Today, automation is a trend that extends well beyond manufacturing and has become a must-have in digital environments. Development (and testing) gets more streamlined with automated CI/CD pipelines adopted by agile teams. In the QA field, new roles of Test Automation Engineers keep popping up. Automation has a huge impact on the QA industry, and with AI-driven tools developing quickly, there is probably more to come…

A funny owl is looking at the cuckoo clock and thinking: “Oh no, Cuckoo is… automated!”

Less than 10 years ago, there was huge hype around automation in QA: it seemed that by adopting the CI/CD approach, teams could automate anything, and that applied to testing as well. I remember the vivid discussions in the QA community about the future of the craft. Frankly speaking, there was a lot of pessimism in the air; somehow, the Testers felt especially vulnerable to the role reshuffling brought by automation.

Shorter release cycles indeed required more automation effort. Testing was supposed to be done faster in the so-called “fast-paced” environment; otherwise, it was perceived as the bottleneck of the development process. Agile teams shifted to automating repetitive tests, e.g. end-to-end checks, to free up time for more demanding (but still manual) testing methods, e.g. exploratory testing. The most popular automation frameworks were Selenium and Cypress, and both required some level of coding skills… That was how things worked until the 2020s.
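Just to give a taste of that coding, here is a minimal sketch of such a scripted end-to-end check written with Selenium’s Python bindings. The URL, element IDs and credentials are hypothetical and only illustrate the style of work.

```python
# A minimal sketch of a scripted end-to-end check (Selenium, Python bindings).
# The URL, element IDs and credentials are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # needs a local Chrome browser and driver available
try:
    driver.get("https://shop.example.com/login")                 # hypothetical login page
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    # Fail loudly if the expected greeting never appears on the landing page
    assert "Welcome" in driver.page_source, "Login flow did not reach the welcome page"
finally:
    driver.quit()
```

Even a tiny check like this needs someone to write it, wire it into the pipeline and keep the selectors up to date, which is exactly the effort the newer tools promise to remove.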

A few years later, AI-driven automation tools joined the party, some requiring no coding skills at all (bye-bye, good old Selenium and Cypress buddies!). If test automation used to eat up lots of time and energy, now you can automate tests simply by recording them with the help of AI-based tools.

While automation tools (including AI-driven ones) are perfect for covering functional tests based on defined requirements, it is still questionable how a machine (tool) could evaluate quality attributes such as usability. It is also not clear whether existing automation tools are sensitive enough to test how accessible the product is for various groups of users with special needs. Industry experts still apply manual testing techniques to these areas.

Besides, automated tests have other constraints. If your team has less and less time for testing and relies on a list of automated checks to give the green light to ship the product, this can create a false sense of safety about overall quality. What if your tests checked features A and B, but features C and D were neglected? That is exactly how new bugs fly into the room.

Anyhow, human testing is crucial for test cases that require intuition, creativity and critical thinking. Human testers detect real-world issues and edge cases that automated tests might miss. Moreover, they can use judgment to ensure the product aligns with user expectations and local regulations. They are also more flexible in situations that demand quick exploration of the product’s areas.

As Kate Dames notes, human testing has both strengths and limitations:

“Humans are good at spotting discrepancies. They are also good at figuring out why something isn’t working or where the error lies. Technology, however, is much better at repeating the same work over and over and over and over again. Where humans get bored and distracted and accustomed to what is in front of them, technology thrives when able to repeat the same thing over and over again.”

Human Testers have obvious advantages, but machines can make the testing flow easier. Even so, automation should not be perceived as a remedy for quality. In my opinion, successful businesses will maintain a balance between manual and automated testing.

Teams need to be careful when planning their automation efforts. Ideally, you should gather your team and discuss which tests are worth automating. Without a clear automation strategy, you might fall into the trap of automating tests just for nicer statistics. It may impress your stakeholders, but it also adds an unnecessary burden on the team.

I like the idea shared by Iryna Suprun. She says:

“Do not automate something just for a sake of automation, or to reach some automation % that somebody said you must have. You must not, especially if resources are scarce. The only thing that you must — is to deliver the software that your customers enjoy using and be profitable as a company. When you decided that you really need to automate something — automate it on the lowest possible level. Do not add end-to-end tests where the unit test is enough.”

And this idea resonates a lot with the popular test automation pyramid concept. According to it, the lower the layer, the more automated tests it should contain.
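As a toy illustration of “automate at the lowest possible level”, here is a hypothetical business rule verified with a plain unit test. The function, names and numbers are made up; the point is that nothing here needs a browser, test data in a database or a running environment.

```python
# A hypothetical business rule checked at the unit level: no browser, no pipeline,
# just the function and its expected behaviour. Names and numbers are illustrative.
def apply_discount(total: float, percent: float) -> float:
    """Reduce an order total by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0   # regular discount
    assert apply_discount(99.99, 0) == 99.99    # no discount leaves the total intact
```

An end-to-end test could verify the same rule by clicking through the whole checkout, but it would be slower, more fragile and harder to maintain.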

As for visualising it, I prefer the idea of a reversed test pyramid, the so-called bug filter introduced by Noah Sussman. The unit tests catch bugs at an early stage, while the integration and UI (end-to-end) tests catch them later.

Bug filter catches the bugs at the earliest stage of three — unit layer. There are unit, integration, and UI layers.

This may sound a bit idealistic, but why not strive for a better (smarter) way of automating? The earlier bugs are caught (for instance, at the unit-test stage), the less rework is needed later.

And don’t be surprised if your test coverage includes only 20% of automated end-to-end tests. This might be perfectly normal in a specific situation. Once, I worked on a project where automation was limited due to dependencies between multiple product components. So, only the test cases that could reasonably be automated were automated; otherwise, it would have been a waste of time and effort.

It is also crucial for a team to define which areas of the product are riskier and focus test automation efforts on them. For e-commerce products, for instance, one of the most vital areas is the order flow. The tests that cover such a flow are perfect candidates for automation.
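As a rough sketch of how such a prioritisation could look, a team might score each product area by business impact and how often it changes, and automate the top of the list first. The areas and numbers below are purely illustrative.

```python
# An illustrative way to rank product areas by risk before deciding what to automate.
# The areas and scores are made up; a real team would agree on them together.
areas = {
    "order flow":          {"impact": 5, "change_frequency": 4},
    "wishlist":            {"impact": 2, "change_frequency": 2},
    "admin report export": {"impact": 3, "change_frequency": 1},
}

def risk_score(area: dict) -> int:
    # The simplest possible model: how much a failure hurts times how often the area changes
    return area["impact"] * area["change_frequency"]

for name, data in sorted(areas.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: risk score {risk_score(data)}")
```

The exact scoring model matters less than the conversation it forces: the team makes the risk trade-offs explicit instead of automating whatever happens to be easiest.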

So let’s be wise when estimating the resources for automation. It is worth arranging a brainstorming session with the whole team to decide which efforts will give you the most value. The precious time of Developers and QA Engineers needs to go into the tooling and processes that match the needs of the whole team.

Besides the dilemma of what to automate, we should keep in mind that the results of automated test runs still need to be monitored by humans. Even if a notification comes from the pipeline, it goes to a human (a QA Engineer or Developer) who intervenes and confirms whether a bug or regression has occurred.

There is also the question of who maintains the automated tests when changes are applied to the product. For instance, some tests might become redundant, or some features might have been updated; this should be reflected in the automated test suite. I would not rely on AI here, even if an AI-driven tool claims to be “self-healing” and “self-manageable”. A human is still the one who should orchestrate it.

A funny owl is looking at the screen that displays: “100% of tests passed!”

Thus, it is important to remember that there is always someone standing behind the automation and keeping an eye on the automated tests. Managers should take this into account when distributing the test automation workload.

It is hard to predict what will happen to the QA craft in 10 or 20 years. Yet, according to some industry insiders, automation (as the current trend in QA) might reach another level in the coming years. This next milestone is autonomous testing (which reminds me a little of autonomous driving; hello E. M.). The idea is that AI-driven bots will do all the testing instead of humans… But wait, are we ready for that?

To my mind, autonomous testing without human engagement is possible, but it will take decades to implement properly. Although some tools are already being developed for this purpose, they still lack human empathy, intuition, critical thinking and creativity.

Will AI technology be able to incorporate these values in the future? Let’s see… Anyhow, I am sure that QA people will still contribute to the quality of digital products tomorrow.


