This post is about my learning experience at the Testival #56 meetup.
This time we gathered in the ground-floor lobby of HUB385. The host and sponsor of this meetup was Repsly; many thanks for supporting the Testival community.
The MC of this meetup was Kreso Linke. We started with the usual Testival introductions for 25 participants. I stated my current testing problem: a Python FactoryBoy factory for one model class with a foreign-key relation to other model classes refused to create a model object for an unexpected reason.
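To illustrate the kind of factory involved, here is a minimal plain-Python sketch of the pattern (not FactoryBoy itself, and not my actual code; all class and field names are hypothetical): a factory for a model with a foreign relation must also build the related object, or creation fails.

```python
# Plain-Python sketch of the factory pattern with a foreign relation.
# In FactoryBoy this role is played by SubFactory; here the idea is
# shown with hand-written factory functions. Names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Author:
    name: str

@dataclass
class Book:
    title: str
    author: Author  # foreign relation: a Book cannot exist without an Author

def author_factory(name: str = "Default Author") -> Author:
    return Author(name=name)

def book_factory(title: str = "Default Title",
                 author: Optional[Author] = None) -> Book:
    # Build the related object first; forgetting this step is a typical
    # reason such a factory "refuses" to create the object.
    return Book(title=title, author=author or author_factory())

book = book_factory(title="Load Testing 101")
print(book.author.name)  # the related Author was created automatically
```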
Niko Mađar (Lemax) – Load Testing Web Applications
Niko first gave a general introduction to testing types, performance testing being one of them; load testing is just one subtype of performance testing. With load testing, we try to determine how an application behaves with a number of concurrent users under typical and peak loads. Who needs to do load testing? Anyone who has an SLA with a customer, or who would like to know the bottlenecks in their application flow.
We have several steps in load testing:
- plan – answers the questions about application architecture, available hardware and software, and resources (testers, tools, workstations, network)
- model – scripts in the selected tool, test data generators, monitoring, environment setup, the time window, ramp-up time
- execute – run the scripts, monitor the application, and use the application the way a customer would while the script runs
- analyze – what can we conclude from the collected data?
- report – write a report for managers based on the analysis conclusions
Then he presented one real-world example. The application under test was a monolithic application with a SOAP API. The tool was JMeter, run on a single workstation. Test data was generated by the JMeter script. The script had a ramp-up time with stepped load: the load was increased over time. The report was generated using JMeter's reporting feature.
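A stepped ramp-up like the one described can be sketched in a few lines; here is a toy version using only Python's standard library, with a stubbed request function standing in for the real SOAP calls (all names and numbers are illustrative, this is not the actual JMeter script):

```python
# Minimal sketch of a stepped load: the number of concurrent "users"
# grows step by step, as in the JMeter scenario described above.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request() -> float:
    """Pretend to call the application and return the response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real network latency
    return time.perf_counter() - start

def run_step(concurrent_users: int, requests_per_user: int) -> list:
    """One load step: N workers each firing a batch of requests."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(fake_request)
                   for _ in range(concurrent_users * requests_per_user)]
        return [f.result() for f in futures]

# Stepped ramp-up: 5 -> 10 -> 20 concurrent users.
for users in (5, 10, 20):
    timings = run_step(users, requests_per_user=2)
    print(f"{users:>2} users: {len(timings)} requests, "
          f"avg {sum(timings) / len(timings):.3f}s")
```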
Recommendations and tips (the best part of every Testival)
A workstation running JMeter could create a load of 500–700 concurrent users. It had one CPU with 4 cores and 8 GB of memory. To run the script, use the tool's CLI, not the GUI: you get much better control over the script run. Data analysis should be done with dedicated scripts after the test execution, to avoid the Schrödinger's cat effect (observing the run while it executes skews the results). And do not chain requests into complex scenarios just to get more realistic user throughput; always aim the load test at realistic application usage.
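The post-run analysis he recommends can be as simple as a small script over the results file. A sketch, assuming a JMeter-style CSV with an `elapsed` column in milliseconds (the column name matches JMeter's default JTL output; everything else here is illustrative):

```python
# Analyze results after the run (never during it), e.g. from a
# JMeter-style CSV results file with an "elapsed" column in ms.
import csv
import io

def percentile(values: list, pct: float) -> float:
    """Nearest-rank percentile, good enough for a quick report."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

def summarize(jtl_csv: str) -> dict:
    elapsed = [float(row["elapsed"])
               for row in csv.DictReader(io.StringIO(jtl_csv))]
    return {
        "requests": len(elapsed),
        "avg_ms": sum(elapsed) / len(elapsed),
        "p95_ms": percentile(elapsed, 95),
    }

# A tiny stand-in results file; a real run would read results.jtl from disk.
sample = "timeStamp,elapsed,label\n" + "\n".join(
    f"{1000 + i},{ms},login" for i, ms in enumerate([120, 80, 95, 110, 400])
)
print(summarize(sample))  # → {'requests': 5, 'avg_ms': 161.0, 'p95_ms': 400.0}
```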
Željko Filipin (https://filipin.eu/) – Software Testing Anti-patterns
Zeljko talked about software checking antipatterns. He read about them in the following blog post, as part of the Wikimedia book club. Every antipattern came with a ukulele chord played by Zeljko, participants' experiences, and Zeljko's explanation.
#1 No Test Strategy
In the post After Testing Mission Goes Testing Strategy, you can find out what a test strategy is. It is the second step in testing, and every testing effort should have one.
#2 Bad reputation of software testing is based on ignorance
"We do not need software testing because we could not get 100% code coverage" is one example of such ignorance.
#3 Running tests (checks) manually
Ruby has a great gem, guard-test, that runs only the required checks on file save.
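The same idea can be sketched with nothing but the Python standard library: poll file modification times and rerun only the checks for files that changed. This is a toy sketch of what guard-test automates, not its implementation:

```python
# Toy sketch of the guard-test idea: detect which source files changed
# since the last look, so only their checks need to rerun.
import os

def snapshot(paths: list) -> dict:
    """Record each file's modification time."""
    return {p: os.path.getmtime(p) for p in paths if os.path.exists(p)}

def changed_files(before: dict, after: dict) -> list:
    """Files that are new or have a newer mtime than in the last snapshot."""
    return [p for p, mtime in after.items() if before.get(p) != mtime]

# A watcher loop would call snapshot() periodically and run the check
# that corresponds to each changed source file, e.g.:
#   for src in changed_files(old, new):
#       run_check(f"tests/test_{os.path.basename(src)}")
before = {"app.py": 1.0, "util.py": 2.0}
after = {"app.py": 1.0, "util.py": 3.0, "new.py": 1.0}
print(changed_files(before, after))  # → ['util.py', 'new.py']
```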
#4 Testing is only done by testers
Testing is a whole-team effort.
#5 Flaky or slow tests (checks)
The blog post says you should delete them. But in testing, a flaky or slow test tells us an important story about the program under test.
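One way to listen to that story instead of deleting the check is to measure its flakiness. A minimal sketch (the rerun count, the deliberately flaky check, and the function names are my own illustration, not anything from the talk):

```python
# Rerun a suspect check several times and report its pass rate instead
# of deleting it; a low-but-nonzero rate is the "story" worth digging
# into (timing issues, shared state, a real race in the product).
def flakiness(check, runs: int = 21) -> float:
    """Fraction of runs in which the check passed."""
    passed = 0
    for _ in range(runs):
        try:
            check()
            passed += 1
        except AssertionError:
            pass
    return passed / runs

# A deliberately flaky check: it fails on every third call.
calls = {"n": 0}
def sometimes_fails():
    calls["n"] += 1
    assert calls["n"] % 3 != 0

rate = flakiness(sometimes_fails)
print(f"pass rate: {rate:.0%}")  # → pass rate: 67%
```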
Ivan Lalic presented the results of a survey about software testers' salaries in Croatia.
Niko explained how he found the root cause of one load-test result that spiked compared to the others. The cause was a database lock on a table.