
Advantages of Grinder load testing framework


In this post, I will make the case for Grinder's advantages. I was prompted to write it because I have seen many false claims that Grinder is the worst load testing tool.

First fact: Grinder is a load testing framework, not a tool. It is a framework that enables you to tailor a load testing tool to your needs.

In computer programming, a software framework is an abstraction in which software providing generic functionality can be selectively changed by additional user-written code, thus providing application-specific software. [source]

What are the advantages of a framework?

“Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime”

On the other hand, JMeter is a tool:

“The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance.”

Software testers usually prefer JMeter over Grinder because with JMeter they can run their first load test without any programming knowledge. Let me be clear: that is not a bad decision, as long as it is appropriate for your project. JMeter offers rudimentary “visual programming”, but if your load test scenario involves logic, you will need to use Java.

Grinder's scripting language is Jython, a Python implementation that runs on the Java Virtual Machine. That means you get Java multithreading, and you can call any library written in Java. Grinder's learning curve is steeper, but you can ease it by asking your developers for help. Also, by writing your first load testing script you will learn not only Grinder but also your product (for example, the business data involved in HTTP interactions).
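To illustrate how small such a script can be, here is a minimal sketch of a Grinder 3 test script in Jython. It runs inside Grinder's worker process, not as standalone Python, and the URL and test name are placeholders:

```python
# Minimal Grinder 3 test script sketch (Jython, executed by the Grinder
# worker process -- not runnable as a standalone Python program).
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

# Wrap an HTTPRequest in a Test so Grinder records statistics for it.
test1 = Test(1, "Front page")  # placeholder test name
request = test1.wrap(HTTPRequest())

class TestRunner:
    # Grinder instantiates one TestRunner per worker thread and invokes
    # __call__ once per run, so every thread executes this scenario.
    def __call__(self):
        result = request.GET("http://localhost:8080/")  # placeholder URL
        grinder.logger.info("Status: %s" % result.statusCode)
```

The thread and run counts come from `grinder.properties`, so the same script scales from one user to hundreds without changes.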

I once heard that Grinder is obsolete and that the project is dead. That is wrong. Grinder does not pump out new features because the current framework features are enough to load test any modern application.

While learning Grinder, you will also learn about multithreaded programming. Grinder abstracts this away, but you do need to use the framework's facilities. For example, to get thread-safe client-side logging, use Grinder's logging method; do not reinvent the wheel and write your own.
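To see why this matters, here is a plain-Python analogy (not Grinder itself) using the standard library's thread-safe `logging` module: four threads write one hundred lines each through a single logger, and no line is lost or garbled, which is exactly what a hand-rolled per-thread file writer tends to get wrong.

```python
import logging
import threading

# Python's logging module serializes handler calls with a lock, which
# mirrors the advice above: use the framework's logger (grinder.logger
# in Grinder) instead of hand-rolled file writes from multiple threads.
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(self.format(record))

logger = logging.getLogger("loadtest")
logger.setLevel(logging.INFO)
logger.addHandler(ListHandler())

def worker(thread_id):
    for i in range(100):
        logger.info("thread %d iteration %d", thread_id, i)

threads = [threading.Thread(target=worker, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(records))  # 400 -- every line from every thread arrived
```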

Out of the box, Grinder has only basic reporting, which is often pointed out as a disadvantage. In combination with Grinder Analyzer, this issue is resolved too.

Grinder is not a hammer, but it is also no worse than JMeter or some fancy paid tools.

This post was first published at zagorski software tester blog.


Could you guess what I discovered with a load test?


In this post I will describe how I discovered important product information using a load test. Nobody involved in the product expected that we would find out this information.

For one of my clients I performed a load test with the goal of discovering how many concurrent users could get satisfactory response times in the production environment.

I coded the load test script for the agreed scenario using the Grinder load testing framework, and I created the test result reports using Grinder Analyzer. Both tools are open source. Every measured request was an HTTP POST request with simple HTTP authentication. The data format was JSON.
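As a sketch of what such a measured request could look like, here is how one can be constructed in plain Python. The endpoint, credentials and payload below are made up for illustration; the real ones from the engagement are of course not shown:

```python
import base64
import json
import urllib.request

# Hypothetical payload and credentials, for illustration only.
payload = json.dumps({"transactionDate": "2012-01-01", "amount": 100}).encode("utf-8")
credentials = base64.b64encode(b"user:secret").decode("ascii")

# An HTTP POST with a JSON body and simple (basic) HTTP authentication.
req = urllib.request.Request(
    "http://example.test/api/transactions",  # hypothetical endpoint
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + credentials,
    },
    method="POST",
)

print(req.get_method())  # POST
```

In the actual test the equivalent request was issued through Grinder's HTTPRequest class, so that response times were recorded per request.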

In order to prepare the load test, I held a meeting with all the important stakeholders: the project lead, test lead and developer lead. This is important, especially when you plan to run a load test on the production environment. I presented my test strategy to them and requested feedback.

We agreed on a user scenario whose most important requirement was a users-to-requests-per-second ratio of 1. The second important thing was to agree on a time window for executing the load test. With that information, I was ready to run the load test on the production environment.
I first verified the test script and the test report by running the load test on the test environment.
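The ratio of 1 translates into simple pacing arithmetic: each simulated user issues one request per second, so after each request it sleeps for whatever remains of that second. A minimal sketch (function name is mine, not from the original script):

```python
# With a users-to-requests-per-second ratio of 1, one iteration per user
# must take one second in total: response time plus sleep time.
def sleep_ms(response_time_ms, pacing_ms=1000):
    """Milliseconds to sleep so one iteration takes pacing_ms in total."""
    return max(0, pacing_ms - response_time_ms)

print(sleep_ms(250))   # 750 -- fast response, long think time
print(sleep_ms(1200))  # 0 -- the response alone already blew the budget
```

Note that a slow server silently lowers the achieved request rate, because a user cannot sleep a negative amount of time.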

I ran the test from my office in the early hours, and when I arrived at the client site I immediately got questions from the developer lead.

“What do you know about transactions with dates from the past? I got a call from person xy!”
“Those are transactions generated by the load test; I put that date in the test data.”

So the first piece of revealed information was that the transaction date was taken from the incoming data, not generated by the application in real time. That is bad, because you must never trust user data.

“But who is person xy?” That was asked by the developer lead, not me. “Why does he have access to our production environment?” I thought of Monty Python's sketch: nobody expects the Spanish Inquisition.
We had revealed a second important piece of information: who has access to the production site and a business interest in the production data.

Because of person xy's request, the developer lead decided to delete all the data generated by the load test. This is not a good pattern, because a second run of the load test should face a database with more data than the run before it, and that could reveal further interesting information (such as missing database indexes) about the production site.


Investigation on ssh connection timeout



This month I was assigned to a new project. I have to load test the part of the system that handles web services. I wrote a little client framework in Jython for sending HL7v2 web service messages, and I am using Grinder as the load testing framework. The runtime is Java Virtual Machine version 7.
Our testing environment consists of VMware instances, and I am using a 64-bit Red Hat VMware instance to produce HL7v2 traffic with Grinder scripts.


As some test runs last for hours, I need an idle SSH connection to survive for 24 hours. When I left a PuTTY session idle, I got a PuTTY error with the text
“Server unexpectedly closed network connection”.

Investigation in tester mode

I first asked the colleague who installed the VMware instance for help. He replied that he had no time for the investigation right now.
As I am using PuTTY as my SSH client, I browsed PuTTY's connection properties. I know about the TCP keep-alive option, so I enabled “Enable TCP keepalives” and set “Seconds between keepalives” to 5 seconds.
I started another SSH session, but after some time I got the same error.
I asked another colleague for advice, and he pointed me at the SSH server configuration in /etc/ssh/sshd_config. I set the following properties:
TCPKeepAlive yes
ClientAliveInterval 5 (same as in PuTTY)
Then I restarted the sshd process (the SSH server).
Same error again.
I examined the PuTTY event log. The following messages caught my attention:

2012-09-28 16:17:51 Initiating key re-exchange (timeout)

2012-09-28 16:17:51 Server unexpectedly closed network connection

The timestamps were exactly one hour after the start of the session. I googled the first message and was pointed to the following PuTTY property:
Max minutes before rekey
I changed that value from 60 to 1 and, after one minute, reproduced my error message.
I then set “Max minutes before rekey” to 0 and started a new PuTTY session to verify whether this solved my problem. The session was still up in the morning, which satisfied my requirement for the testing environment.


With this post I wanted to point out how a tester should behave when encountering an issue. Your responsibility as a tester is to find a proper solution, not a dirty hack. Ask for help; you are not alone, and helping you is part of your colleagues' job description. I would also like to point out that this is a solution that works in my context. For example, if a lot of data flowed through the PuTTY session, this solution would not work, because I did not set the PuTTY parameter “Max data before rekey” to zero (unlimited).


Twitter tester ping


After the first Zagreb STC meeting, I mentioned that if anyone has any question about testing, I am willing to help via my Skype account (karlo.smid). I picked up the idea from Michael Bolton. One month after the meeting, I held my first Skype session. Here is the chat transcript:

Wednesday, December 14, 2011

[8:55:08 PM CEST] Karlo Smid: Hi!
[8:55:12 PM CEST] Hrvoje: Hi!
[8:55:23 PM CEST] Karlo Smid: We can start.
[8:55:32 PM CEST] Hrvoje: Ok
[8:55:38 PM CEST] Karlo Smid: Describe the application technology to me.
[8:56:01 PM CEST] Hrvoje: Java application running on linux server.
[8:56:04 PM CEST] Hrvoje: With mysql database.
[8:56:40 PM CEST] Hrvoje: Is that what you meant by technology?
[8:56:43 PM CEST] Hrvoje: Could we switch to call?
[8:56:57 PM CEST] Karlo Smid: Ok, so basic application architecture is application server + database.
[8:57:58 PM CEST] Karlo Smid: I would like to continue with the Skype chat, so I will have an archive of this session.
[8:58:05 PM CEST] Hrvoje: Ok.
[8:58:37 PM CEST] Karlo Smid: Do you know the approximate number of application users?
[8:59:16 PM CEST] Hrvoje: I am trying to reconstruct testing network, this application will be used for performance and functional testing.
[8:59:54 PM CEST] Hrvoje: In the production environment, the application gets a few tens of requests per second. This is peak traffic.
[9:00:16 PM CEST] Hrvoje: Average is 2-3 requests/sec.
[9:00:32 PM CEST] Karlo Smid: Web service requests?
[9:01:16 PM CEST] Hrvoje: No, the most frequent protocols are: HTTP, SOAP, UCP, SMPP.
[9:01:26 PM CEST] Karlo Smid: For test network you mean testing environment?
[9:01:45 PM CEST] Hrvoje: For our customer our application is a black box; there is no interface at all.
[9:01:49 PM CEST] Hrvoje: Yes.
[9:02:39 PM CEST] Hrvoje: What’s bothering me is that they are trying to introduce virtual machines instead of real hardware. I am not sure how application will operate on virtual machines. I do not know how to justify order for real hardware servers.
[9:03:24 PM CEST] Karlo Smid: You finished development of the application?
[9:03:32 PM CEST] Hrvoje: Yes.
[9:03:58 PM CEST] Hrvoje: We used virtual servers for functional tests, and there was no problem with that.
[9:04:00 PM CEST] Karlo Smid: Do you have any testing server for your application?
[9:04:05 PM CEST] Hrvoje: Yes.
[9:04:21 PM CEST] Hrvoje: 20 for production and 2 to 3 for testing.
[9:05:05 PM CEST] Hrvoje: Application is running in production environment.
[9:05:15 PM CEST] Karlo Smid: How complicated is it to simulate application data traffic? I assume you have done that for one user, because you finished functional testing.
[9:05:49 PM CEST] Karlo Smid: Ok, that means you can record live application traffic.
[9:05:55 PM CEST] Hrvoje: Yes.
[9:06:42 PM CEST] Karlo Smid: Which application server do you use in production?
[9:07:27 PM CEST] Hrvoje: tomcat
[9:08:33 PM CEST] Karlo Smid: Lets define your problem. You have real linux hardware in production, and the customer wants to replace it with virtual servers. You do not know whether virtual servers will be able to handle real data traffic?
[9:09:06 PM CEST] Hrvoje: This is not the problem, but it is very close to my problem.
[9:09:23 PM CEST] Karlo Smid: What is your problem?
[9:11:16 PM CEST] Hrvoje: We want to create new test environment for our core application and 5-10 supporting applications. Management approved virtual technology for this test environment.
[9:12:35 PM CEST] Karlo Smid: Have you ever simulated live traffic with some load test tool?
[9:12:43 PM CEST] Hrvoje: I would like real hardware for that new testing environment, but I do not know how to argue for that to my management.
[9:13:11 PM CEST] Hrvoje: We used JMeter and internally developed applications.
[9:15:37 PM CEST] Karlo Smid: I am using virtual servers (Solaris 10 zones), mostly for functional testing. The zones do not have any restrictions (quotas) on the shared hardware, so we run load tests on them.
[9:16:33 PM CEST] Karlo Smid: For application functionality, virtual environment and OS on the actual hardware are the same.
[9:17:26 PM CEST] Karlo Smid: Of course, OS patch levels, java virtual machine version and settings, OS settings must be the same.
[9:18:14 PM CEST] Karlo Smid: You just have to dimension host hardware for the needed number of virtual machines.
[9:19:09 PM CEST] Karlo Smid: What is virtualization technology?
[9:19:19 PM CEST] Hrvoje: It is very hard to simulate real traffic, because there is great number of traffic use case possibilities.
[9:19:36 PM CEST] Hrvoje: I think that it is VMware.
[9:21:10 PM CEST] Karlo Smid: For the virtualization host machine, you also need enough disk space, not just a big amount of physical memory. The reason is the swap memory configuration on the disk drive.
[9:22:05 PM CEST] Karlo Smid: Real traffic consists of great number of different requests?
[9:23:20 PM CEST] Hrvoje: Yes, every mobile operator in every country has different settings for SMS/MMS services.
[9:23:45 PM CEST] Hrvoje: Requests differ in application and database routing.
[9:24:54 PM CEST] Hrvoje: Requests have common parameters, but request handling depends on service type.
[9:25:35 PM CEST] Karlo Smid: Have you heard of
[9:25:43 PM CEST] Hrvoje: No
[9:27:01 PM CEST] Karlo Smid: This is a useful technique when you have a great number of test cases. It reduces the number of test cases while keeping test coverage.
[9:27:36 PM CEST] Karlo Smid: There is tool for that
[9:29:24 PM CEST] Karlo Smid: Your goal is to predict the application behavior for those request combinations?
[9:30:22 PM CEST] Hrvoje: Yes, but for load test, not for functional test.
[9:31:02 PM CEST] Hrvoje: Our system has two user roles: operator and customer.
[9:31:28 PM CEST] Karlo Smid: Could you record production application SQL queries?
[9:32:01 PM CEST] Hrvoje: Yes
[9:32:32 PM CEST] Hrvoje: From log files or from source code, but there is a lot of them.
[9:32:49 PM CEST] Hrvoje: And I do not see the purpose of using grep on log files.
[9:32:59 PM CEST] Hrvoje: I think that grep would be hard to do.
[9:33:16 PM CEST] Karlo Smid: Your application has already been deployed in production. What do you want to measure using load test on new testing environment?
[9:34:01 PM CEST] Hrvoje:  For example, we have a new customer and that new customer wants to have 70 SMS/sec. in our application.
[9:34:25 PM CEST] Hrvoje: I do not know whether our application is able to handle that peak traffic, and how long it will be able to handle it.
[9:35:26 PM CEST] Karlo Smid: Ok, here is what I would do.
[9:35:53 PM CEST] Hrvoje: I need arguments for real testing hardware, not virtualization solution. Or arguments for the virtualization solution.
[9:36:41 PM CEST] Karlo Smid: 1. Talk with your developers. Do they know which indexes are needed for their SQL queries?
[9:37:23 PM CEST] Karlo Smid: 2. Database has to be BIG, production data scale. What is the number of records in your production system?
[9:37:35 PM CEST] Hrvoje: Yes.
[9:38:13 PM CEST] Karlo Smid: Could you replicate production database in your test environment? Do you have that storage capacity?
[9:38:55 PM CEST] Karlo Smid: What is the deployment of the Java Enterprise application? war or ear archive?
[9:39:38 PM CEST] Hrvoje: I can replicate production database.
[9:39:52 PM CEST] Hrvoje: Several jar archives.
[9:41:08 PM CEST] Karlo Smid: Have you ever used jconsole (it is part of standard JDK)? This is java jmx client for monitoring java virtual machine (jvm) parameters.
[9:41:45 PM CEST] Hrvoje: No, is this similar to htop?
[9:41:56 PM CEST] Hrvoje: Or munin?
[9:43:42 PM CEST] Karlo Smid: No, it monitors jvm, in your case Tomcat instance. jmx port should be activated on Tomcat instance.
[9:45:16 PM CEST] Karlo Smid: Have you ever tuned the jvm heap parameters? Garbage collector parameters? These are all important jvm parameters regarding performance. If you have strong hardware and default settings for those jvm parameters, your hardware is not used to its full potential.
[9:46:05 PM CEST] Hrvoje: Yes.
[9:46:35 PM CEST] Hrvoje: System architect and developers are taking care of those parameters.
[9:46:41 PM CEST] Karlo Smid: Using jconsole you will be able to determine whether those parameters are properly set.
[9:47:02 PM CEST] Hrvoje: What is the format of results?
[9:47:50 PM CEST] Karlo Smid: Regarding the traffic simulation, I would write client for one case of user traffic. I prefer Grinder.
[9:49:40 PM CEST] Karlo Smid: Then I would increase the client traffic in steps of 25 concurrent clients. For that you also need hardware (a linux server with a quad core and 16GB of RAM could easily produce 70 requests per second.)
[9:50:25 PM CEST] Karlo Smid: Using jconsole I would monitor the server jvm. Grinder gives you system response times out of the box.
[9:52:20 PM CEST] Karlo Smid: Have you ever got java out of memory exception in production system?
[9:52:31 PM CEST] Hrvoje: Sometimes.
[9:54:37 PM CEST] Karlo Smid: This is an indication that you have a Java memory leak in your application, or that you should tweak the heap memory settings.
[9:55:33 PM CEST] Karlo Smid: Have I helped you so far?
[9:55:45 PM CEST] Hrvoje: What is your opinion about jmeter??
[9:56:08 PM CEST] Hrvoje: Yes, you gave me enough input for investigation/thinking.
[9:57:46 PM CEST] Karlo Smid: I am not fond of tools that offer GUI programming because they are not flexible enough. Grinder is a Java load testing framework. You should try grinder, you or your developers. I can help you overcome the initial learning curve.
[9:59:05 PM CEST] Karlo Smid: Why grinder? Because it gives you an API for concurrent programming. You do not have to worry about deadlocks or concurrent file logging. You can easily use any Java API. You can use its out-of-the-box HTTPRequest API class.
[10:00:07 PM CEST] Hrvoje: Thank you for your time and willingness.
[10:01:19 PM CEST] Karlo Smid: See you on the next meeting, I will give grinder presentation.

So we talked about virtualization and performance testing. From the chat transcript you can see the difference between load, peak and duration performance tests. We mentioned the open source load testing tools JMeter and Grinder, and I explained why Grinder is the better tool. I also gave a load test plan for the application depending on its context.
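As a concrete illustration of the jconsole advice from the chat: jconsole attaches to a JVM over JMX, which for a Tomcat instance means starting the JVM with remote JMX enabled. A sketch of typical options follows; the port number and heap sizes are illustrative only, and an unauthenticated JMX port should only ever be opened on a closed test network:

```shell
# Illustrative Tomcat JVM options: a fixed-size heap plus a remote JMX
# port for jconsole. Values are examples only; tune them per workload.
export CATALINA_OPTS="-Xms2g -Xmx2g \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```

With this in place, jconsole connects to host:9010 and shows heap usage, garbage collection activity and thread counts while the load test runs.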


Notes on Software Testers Speak Up Meeting #1


Yesterday, we held Software Testers Speak Up Meeting #1. Zeljko Filipin wrote about that event on his blog.
I would like to share some of my notes.
There were 16 people from various big companies (200+ employees), and one tester from a very small company (Zeljko).
For me it was a great event. How do I know it was a great event? I use my own heuristic:
if the discussions at an event last for three hours without a break, and people start topics in a meaningful flow, then it is a great event.
I put some topics on the whiteboard just to get us started.
We started discussion with question: “Why did you become a tester?”
The first issue emerged: testers from big companies have one common problem, they all have to write documentation. I asked two questions: who reads that documentation, and do they mean Word documents by “documentation”?
Which documentation should (and must) be produced depends on the company process (not only the software development process). But remember: when a tester is writing a Word document, he is not doing his actual job, which is testing.
A question about requirements specifications emerged next. We all agreed that no one has ever received a satisfactory requirements specification.
The testing process was the next topic, and CMMI came along with that discussion. I briefly explained the testing process in my company and how we use Mantis as a supporting tool for that process.
How testers cope with deadlines provoked counter-questions: “What is a deadline?” and “What should be done by that deadline?” On the statement “QA is testing” I explained that testers are not the QA police. The whole organization should take care of quality; testers only provide information about product quality.
Testers from big companies all agreed that their software development process does not include unit testing, TDD or code reviews.

Pizza time arrived, and I discussed JMeter. I introduced Grinder by explaining how I am currently using it to test the capacity of our new MQ broker configuration with an Oracle database.

In the end, whiteboard looked like this.

Our plan is to hold SpeakUp meeting #2 in two months, this time with some presentations.
