TL;DR
The Testival concept is to have two keynote talks alongside open sessions chosen by the audience. This year we had Titus Fortner and Gaspar Nagy.
After the welcoming words from Zabok mayor Ivan Hanzek and his deputy Valentina Durek, we started with the first keynote.
Keynote #1 by Titus Fortner: "Coping With The Testing Transitions"
Titus is a software developer in test at SauceLabs, and in the open-source community he is the Watir project lead. In this presentation, he explained what a testing transition means for SauceLabs clients and which problems they usually run into during that transition.
SauceLabs’ main business case is to enable its clients to do efficient browser automation. He started with the question:
What does the money want?
The main goal of a company with a web application is to keep customers on its site as long as possible. To do that, it needs to keep the bugs out of that site. A simple and powerful line of reasoning, but how do we do that?
As software testers, we need to think about what the benefits of software testing are for software development. Titus gave a brief overview of the progression of software development. It all started with waterfall: a separation of development phases, a big list of specific test cases, and testers blindly following those instructions during manual test execution. Bugs were found late in the project, and the cost was very high.
Manual test execution was hard to scale. He brought up the famous analogy from The Mythical Man-Month: nine pregnant women still cannot deliver one baby in one month.
Agile was the software development idea that should resolve those issues. In the software testing transition, the main issue is Humans vs. Computers: let's automate test execution (my note here: the testing activity cannot be automated, only the test script execution). Titus warned us that we have to be careful with the automation effort. Checks that are close to the source code (usually called unit tests) are the fastest to execute, and they point directly to the line of source code where the error happened. Integration tests are still connected to the source code but are slower than "unit tests." UI tests are the most time-consuming, and they can only point to the UI location of a failure, with no connection to the source code.
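To make that difference concrete, here is a minimal unit-level check (my own illustration, not from the talk; the `discount` function is hypothetical). It needs no browser, runs in milliseconds, and a failure points straight at the offending line of source code:

```python
# Hypothetical function under test (illustration only, not from the talk).
def discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Unit-level checks: fast, close to the source code, and when one fails
# the traceback names the exact line where the error happened.
assert discount(100, 20) == 80.0
assert discount(50, 0) == 50.0
print("unit checks passed")
```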
The most frequent problem with automation efforts is that UI check scripts use scenario-based software testing techniques. The advice is to break long scenario tests into atomic feature tests that can run in parallel. And the AHA moment is:
Do not try to automate scenario testing flows on any level!
When you have automated checks, you have a new problem on the table: how to maintain them efficiently?
Titus gave a reasonable explanation: when you have a cheap process for creating automated checks, you get higher maintenance costs. Examples of cheap creation:
- Record and playback, e.g., Selenium IDE
- Visual coding tools, e.g., MABL
- Machine learning
The main question in maintenance is: who responds to a failure?
Reasonable properties of automated checks:
- Well-known states
- Actions have results
- Atomic, short, and autonomous actions
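Put together, a check with those properties might look like this sketch (my own illustration; the shop API and all names are invented, not from the talk):

```python
# Hypothetical atomic check (illustration only; the shop API is invented).
class FakeShop:
    """Minimal stand-in for an application under test."""
    def __init__(self):
        self.user = None
        self.cart = []

    def login(self, user):
        # Well-known state: a fresh, logged-in session with an empty cart.
        self.user = user
        self.cart = []

    def add_to_cart(self, item):
        self.cart.append(item)
        return len(self.cart)  # every action has an observable result

def test_add_to_cart():
    shop = FakeShop()            # autonomous: builds its own state
    shop.login("tester")         # well-known starting state
    count = shop.add_to_cart("book")
    assert count == 1            # atomic: checks exactly one feature

test_add_to_cart()
print("atomic check passed")
```

Because each such check builds its own state and tests one feature, many of them can run in parallel, unlike one long scenario where every step depends on the previous one.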
Reasonable features of the testing framework:
- Abstractions
- Global solutions
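"Abstractions" here usually means something like the Page Object pattern: test code talks to a page class, and only that class knows the locators, so a UI change is fixed in one place. A minimal sketch with a fake driver (my illustration, not Titus' code; a real version would wrap a Watir or Selenium driver):

```python
# Page Object sketch with a fake driver (illustration only).
class FakeDriver:
    """Stand-in for a real browser driver."""
    def __init__(self):
        self.fields = {}

    def type(self, locator, text):
        self.fields[locator] = text

class LoginPage:
    """Abstraction: only this class knows the page's locators."""
    USERNAME = "#username"   # if the UI changes, fix the locator here once
    PASSWORD = "#password"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)

driver = FakeDriver()
LoginPage(driver).log_in("tester", "secret")
assert driver.fields["#username"] == "tester"
print("page object sketch ok")
```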
The biggest mistake in the first testing transition is to try to write automated checks for the test cases that you already have.
The second step in the testing transition is continuous delivery.
We have two important metrics to take care of: MTTF, mean time to failure, and MTTR, mean time to recovery. The biggest mistake in continuous delivery is to try to automate what you currently do manually. In order to have a low MTTR, you need real-time production monitoring. Root cause analysis should be done on the staging environment.
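Both metrics are simple averages over incident history. A back-of-the-envelope sketch (the numbers are mine, purely illustrative):

```python
# Back-of-the-envelope MTTF/MTTR from an invented incident history.
# Each tuple: (hours of uptime before the failure, hours to recover).
incidents = [(120, 2), (200, 1), (160, 3)]

uptimes = [up for up, _ in incidents]
recoveries = [rec for _, rec in incidents]

mttf = sum(uptimes) / len(uptimes)        # mean time to failure
mttr = sum(recoveries) / len(recoveries)  # mean time to recovery

print(f"MTTF: {mttf:.1f} h, MTTR: {mttr:.1f} h")
```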
In Titus' conclusion, the software testing transition towards "Automation" should be automated checks plus continuous delivery, with real-time production monitoring and root cause analysis on the staging environment.