What is happening in our application?

TL;DR

In this post, I will try to explain why I, as a tester, am often expected to create meaningful application error messages.

So, you are using some web application, and you get something like this (thanks to Evil Tester):

[Screenshot: a generic browser error page]

And that error message tells you nothing about your action or input. At least they put in a disclaimer: “That is all we know”.

While testing an application, I find a lot of such uninformative error messages. Why do I find them and developers do not? Because I know how to create reasonable scenarios that trigger those errors. Coming up with such scenarios and test data is not something that interests the average developer. In my experience, they find it a boring and uncreative job. But I think that finding alternative flows by exploring the API that you use is a very creative and important job.

Developers just have a different definition of DONE. Or they are just lazy.

The first line of defense is: “a user would never do that.” But you took the BBST Bug Advocacy course, and you have already uncornered your test scenarios, making them more probable in real life.

After I talked to the project leader and provided him with information about the low-quality error messages, I put extra work on my own shoulders: “Karlo, since you have already found those error messages, you will create the meaningful ones.”

The problem here is that I can provide error messages only for the scenarios that I covered. But the low-level mechanics of the application, which include inter-module cooperation based on API contracts, are the developer's responsibility.

 


Selenium webdriver Net::ReadTimeout (Net::ReadTimeout) exception

TL;DR

In this post I will explain how I resolved this exception in the context of Selenium WebDriver testing in headless mode, using Xvfb on Linux.

You finally managed to set up Linux and the Xvfb server, your Selenium WebDriver settings are correct, and you successfully ran your first scenario in headless mode! An astonishing achievement, considering that you used only open source technology.

You have your Cucumber feature files with scenarios, written according to every best BDD practice. However, one feature file has more scenarios than the others; the feature it tests is rather complicated.

You set up a Jenkins job, and the next morning you find out that only that feature file has failures. You examine the execution HTML log and find this cryptic exception:

[Screenshot: Net::ReadTimeout exception in the execution log]

A network timeout!? But the site that you are testing is up and online.

What is going on under the hood of Selenium WebDriver tests?

Selenium WebDriver is an HTTP server. It accepts HTTP connections according to the WebDriver protocol, which is HTTP based. Your language's Selenium WebDriver bindings start the server for you when you create a driver, and your test script sends WebDriver protocol commands to that server. Simple as that.
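Under the legacy JSON Wire Protocol, for example, navigating to a page is just an HTTP POST to the session's /url endpoint. Here is a minimal Ruby sketch of the request a binding would build; the session id is made up for illustration:

```ruby
require "json"

# A made-up session id; in reality the binding obtains one by
# POSTing to /session when the driver is created.
session_id = "abc123"

# The endpoint and body for a "navigate to URL" command.
path = "/session/#{session_id}/url"
payload = JSON.generate({ "url" => "http://example.com" })

puts "POST #{path} HTTP/1.1"
puts payload
```

Every driver call in your test script ultimately turns into a request like this, which is why the failure surfaces as a Net::ReadTimeout rather than a browser error.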

The exception above means that your Selenium WebDriver server is not responding. Why? In the context of a headless Xvfb Linux test run, this can have various causes.

Solution.

I noticed that the exception always appeared on the same scenario, so I split the long Cucumber feature file in two. If your feature file is properly written according to BDD practices, this split should not affect your test suite.

My heuristic is that the Selenium WebDriver server may stay up only for a limited time, and that the long feature file hit that limit.


Meet Blitzy: not so simple HTTP load tester in Elixir

TL;DR

In this post I present Blitzy, a not-so-simple HTTP load tester in Elixir. This is my open source contribution to the original Blitzy, a simple HTTP load tester in Elixir.

My journey with Elixir started six months ago. With a proper book from The Pragmatic Bookshelf, it was a fun experience. I made myself do all the exercises, though in the end I skipped the macro exercises. How could I test my practical Elixir knowledge? The best way is to contribute to an Elixir open source project.

For one of my clients, I needed to do load, duration, and concurrency testing. I had previously used The Grinder, a Java load testing framework. Driven by pure curiosity, I searched for a load testing tool written in Elixir, and I found Blitzy.

It has a simple feature set: load testing of a simple HTTP GET. By exploring the code and using Programming Elixir 1.3 as a reference, I found out that I could read and build Elixir project code. So why not add the features to Blitzy that I need for my current project?

My first contribution was to update the dependencies and do some refactoring, because some libraries had changed their interfaces. This was needed just to build Blitzy with its current set of features.

In the closing session of CAST 2016, there was a discussion about how we can help enhance software testing. My contribution: let's create a software testing tool that is easy to extend, develop, and maintain. Another reason to contribute to Blitzy.

The power of Blitzy is that it is written in Elixir, which runs on the Erlang virtual machine. The downside of Erlang is its rather cryptic language: it feels as if you are writing mathematical formulas and proofs, not a program. But there is Elixir, which resolves that issue.

The Blitzy codebase is small, because it uses the Erlang virtual machine's out-of-the-box concurrency and code distribution features.

I remember that most of the code in The Grinder dealt with exactly those features, distribution and concurrency.

Check out the GitHub repository, and happy load testing.

I wrote tests for Blitzy; line coverage is currently more than 80%, but I have not yet written the most important tests, the ones that include data. How about that, metrics preachers? The important unit tests are on the to-do list.


Technology junkie

TL;DR

In this blog post I will describe my TV upgrade, which should last for the next 10 years.

This summer, the UEFA Euro football competition took place in France. The heuristic is that in such a year you can buy new TV models at a significant discount. I bought my last TV in 2008, and it was a plasma. TV technology has changed dramatically in those eight years, and I did not know where to start. My Christmas started early this year. I asked my friend, who is a real TV junkie, for help. He needed just one input: my budget. With my budget of 1500 EUR, he started his research. Believe me, it is not an easy task. After a few pointers and one week, he hit the jackpot, an LG-49UH8507, within my budget.

This is my short review, wearing a tester's hat. Basically, I bought a computer system, and every computer needs an operating system.

WebOS 3.0

You can find the technical specification here. From a user perspective it is very fast and reliable. The UX is pretty good; menus and actions are straightforward. I enabled auto upgrade, which runs in sleep mode. When I turn the TV on, I get a notification in the upper right corner about the upgrade.

Network

WiFi and Ethernet.

I use Ethernet for streaming services. A big surprise came when my iPhone 6 offered to connect to the TV while I was watching YouTube. It is called screen sharing.

Remote

It has a magic mouse feature, meaning a mouse pointer that I steer with the remote. The remote's UX is not so good, because I mostly use it in a dark room and I cannot identify the button I need by its position or texture.

Display

UltraHD, which means 4K resolution. This is Netflix's Daredevil in 4K (click for larger image):

[Screenshot: Daredevil in 4K]

Entertainment and education

Ninety percent of my viewing time is Netflix or HBO GO. Until the last update, Netflix had a stability issue: after I exited the application and reentered it, it always crashed. HBO GO has streaming issues with new items, but it is stable. Netflix has a much better UX for continuing to watch where I previously stopped, and no streaming issues at all, even with 4K items.

There is also a YouTube player, so I can educate myself from the comfort of my sofa.

I am very satisfied with my new gadget. For the same price, I also got an LG G3 smartphone, which was a big surprise! And the salesperson was very knowledgeable about the product; he confirmed all the technical specs that I already knew thanks to my friend.


Cleanup order in selenium headless test

TL;DR

In this post I describe, using a real example, how to apply an important question for every software tester: “Did I do it in the proper order?”

For one client I created an environment for running a Selenium WebDriver test suite in headless mode with the Firefox browser. I use a Docker image that contains the latest Jenkins, the Xvfb server, and the Firefox browser. I was not able to start Chrome with Xvfb, because Chrome has much stricter security. More about that environment in upcoming posts.

When I ran the test suite, I got the following cryptic exception:

Failure/Error: Unable to find matching line from backtrace
     Selenium::WebDriver::Error::UnknownError:
       Failed to decode response from marionette
     # ./spec/spec_helper.rb:93:in `block (2 levels) in <top (required)>'

The line that triggered the exception was:

@driver.quit

With the help of this excellent blog post, I found out how to ask the right question: “Did I call the different API methods in the proper order?”

I had first closed the headless driver (which is the Xvfb server), and then tried to close the Selenium WebDriver, which by then had lost the connection to its X server (Xvfb).

After I first closed the Selenium WebDriver, and only then closed the headless Xvfb server, the test suite worked as expected.
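The principle generalizes: release resources in the reverse order you acquired them. Here is a minimal Ruby sketch of that ordering, with a hypothetical Resource class standing in for the real driver and Xvfb server:

```ruby
# A stand-in for anything that is started and must later be stopped.
class Resource
  def initialize(name, log)
    @name = name
    @log = log
    @log << "start #{name}"
  end

  def stop
    @log << "stop #{@name}"
  end
end

log = []
xvfb   = Resource.new("xvfb", log)    # started first
driver = Resource.new("driver", log)  # started second, depends on xvfb

# Tear down in reverse order of startup: the driver still needs its
# X server alive while it shuts down.
driver.stop
xvfb.stop

puts log.inspect
```

In a real suite the same ordering usually lives in an after-hook: quit the driver first, destroy the headless display last.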

When you investigate what went wrong, use this simple question:

“Did I use proper order?”

 


Report on Testival #26 meetup

TL;DR

This post is about why we moved from the Software Testing Club meetup group to the Testival meetup group. The main theme of the meetup was browser automation.

First of all, I would like to thank Rosie Sherry, who let us use the Software Testing Club meetup group for Zagreb meetups for six years.

The Software Testing Club has changed in a good way in these six years. Zeljko and I also did not know the real purpose of a meetup, which is building a local community. The Testival meetup group will definitely change that, because now we have free local meetup visibility. As proof of concept, this time we had 20+ attendees, many more than usual.

Our host and sponsor was Repsly, in the HUB385 startup coworking space. A great venue. At the meetup, we had two talks.

Kresimir Linke from Repsly gave a talk: Test Automation of Push Notifications Using Ranorex.

[Photo from the talk]

He also demonstrated a complex end-to-end scenario that involved several users, a web browser, and two mobile devices, all automated using the Ranorex tool.

The second talk was by Ana Prpic: Introduction to WebdriverIO.

[Photo from the talk]

It is another JavaScript implementation of Selenium WebDriver. Most importantly, Ana presented the whole ecosystem of JavaScript tools that enables you to put WebdriverIO tests into a continuous integration pipeline.

The meetup was also visited by NSoft software testers. They are from Mostar, and we discussed with them how to start a software tester community in their town.

[Photo with the NSoft testers]

My meetup takeaways:

  • a casting device for presenting a mobile device's screen
  • how to test an email GUI
  • the html id attribute and security compliance
  • the webdrivercss library for visual comparison
  • CircleCI can run Selenium tests in headless mode

After that, Zeljko introduced the 5-minute talk format with his talk:

Why you should not attend testing conferences?

Another talk was about the software testing pyramid, and I talked about open session conferences (with Testival as an example).

[Photo from the 5-minute talks]


How to measure upload speed of web application upload form

TL;DR

In this post I will explain how we measured the upload speed of a web application's upload form using the curl tool.

One of the web application's features is uploading a set of files (several hundred MB in size) to the web application.

Testing from our office location through various paths (VPN, no VPN, office network, public network), but always over a local Internet connection, showed satisfactory upload speed.

But a client on another continent had poor upload speeds. A developer asked me how to measure the upload speed from his browser to the web application. Here is how.

First, copy the upload POST request of your web application using Chrome developer tools: in the Network tab, find that request and copy it as a curl request.

Paste that curl request into your favorite shell (Windows developers usually use Git Bash) and at the end of the curl command add this:

-o output.txt

That's it. You will get something like this (click for larger picture):

 

[Screenshot: curl progress output for the upload]
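curl's progress output includes the total time and bytes sent. If you want to convert a file size and elapsed time into an upload speed yourself, a small Ruby helper will do; the 200 MB and 160-second figures below are made-up example numbers, not measurements from this project:

```ruby
# Convert an upload of `bytes` bytes that took `seconds` seconds
# into megabits per second.
def upload_speed_mbit(bytes, seconds)
  (bytes * 8.0 / 1_000_000 / seconds).round(2)
end

# e.g. a 200 MB file that took 160 seconds to upload:
puts upload_speed_mbit(200 * 1_000_000, 160)  # => 10.0
```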

 

 

 


Search in project for method usage using only bash script

TL;DR

This post is about a simple bash shell script that finds all files that use a particular method.

In my previous post, Product moving parts as a source for test strategy, I described how I use GitHub pull requests to discover which parts of the application changed, in order to create a regression test strategy.

The code that changed could be some helper method, or .css and .jpg assets that are used in various places in the code base. And those places are not part of the pull request. So I need an automated way to find all the places where that helper method is used.

For that purpose I use a simple bash script. You need to know the loop programming concept and the Unix utilities cat and grep.

Here is bash script:

#!/bin/bash
# For each item in the list, search the codebase recursively,
# skipping cache and manifest files.
for i in $(cat pull_request_items.txt)
do
  echo "$i"
  grep -H -r "$i" * | grep -v cache | grep -v manifest
done

And the content of pull_request_items.txt:

our_overview_gettingstarted.png
tour_overview_lessonhighlights.png
tour_overview_originals.png

The script reads items from the text file, and each row's value is searched for in the project codebase using the grep utility. The search is recursive through all subfolders.

The output lists the files that contain the searched items.

The manual part was to copy/paste from the pull request into the pull_request_items.txt file and do some editing to remove unimportant pull request information.

Why not use some fancy editor like Sublime? Because the presented utilities are installed on almost every Linux machine in the world, and I can use this script out of the box.


Outlook broke important business feature

TL;DR

This post is about a broken Outlook business feature: carbon copy recipients.

In e-mail, the abbreviation CC indicates those who are to receive a copy of a message addressed primarily to another (CC is the abbreviation of carbon copy). The list of recipients in copy is visible to all other recipients of the message. In common usage, the To field recipients are the primary audience of the message, CC field recipients are others to whom the author wishes to send the message publicly, and BCC field recipients are the others to whom the message is sent.[source]

The carbon copy feature in email is a de facto standard business rule. If you are implementing an email client, you should implement the CC feature in that way.

And here is the Microsoft Outlook 2016 CC feature:

[Screenshots: composing an email with to: and cc: recipients]

And here is the received email:

[Screenshot: the received email]

You cannot distinguish the to: recipients from the cc: recipients.

Business impact.

I received such an email and replied to the project manager to explain who was responsible for the subject of his email. He replied (not very pleasantly, but that is a different issue) with the to: field highlighted in the message conversation automatically attached to the reply, saying that I should know the cc: rule. So in a message reply you can distinguish between the to: and cc: fields, but not in the original mail.

Can I move to another email client? The answer is no, because Outlook is a mandatory tool in the client's organization and I am obligated to use it.

Update after Facebook feedback (Thanks Vanja and Igor!)

This issue is not consistent with Outlook's history, because in previous Outlook versions this worked. Also, Gmail is not an adequate alternative email client for this feature, because Gmail also does not show the cc field.


Value of UI automation tests

TL;DR

After the series of posts about How to kill UI regression testing, maybe you got the impression that I am against UI browser automation test scripts. In this post, I will share my thoughts on the value of UI automation tests.

First, some context. Here are the requirements for excellent UI automation tests:

  • a language that enables you to write less code, something like Ruby
  • a Selenium WebDriver implementation in that language
  • a framework just above Selenium WebDriver, for example watir-webdriver
  • a Page Object pattern implementation
  • a test framework, something like RSpec, for writing awesome asserts
  • a Gherkin implementation, something like Cucumber. No, I have not changed the opinion stated here, but we have a different context.
  • a continuous integration tool, something like Jenkins
  • a headless setup, something like Xvfb or Sauce Labs
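To illustrate the Page Object pattern from the list above, here is a minimal sketch in plain Ruby. LoginPage and StubDriver are hypothetical names; in a real suite the driver would be a Selenium or Watir browser instance, not a stub:

```ruby
# A stand-in for a real browser driver, so the sketch is self-contained.
class StubDriver
  attr_reader :visited, :filled

  def initialize
    @filled = {}
  end

  def goto(url)
    @visited = url
  end

  def fill(field, text)
    @filled[field] = text
  end
end

# The page object: it hides locators and navigation behind
# business-readable methods.
class LoginPage
  URL = "https://example.com/login"

  def initialize(driver)
    @driver = driver
  end

  def open
    @driver.goto(URL)
    self
  end

  def login(user, password)
    @driver.fill(:username, user)
    @driver.fill(:password, password)
    self
  end
end

driver = StubDriver.new
LoginPage.new(driver).open.login("karlo", "secret")
puts driver.visited
```

The test script then reads as a business scenario, which is exactly what makes such suites work as executable documentation.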

What is the value of UI automation test scripts? They are executable documentation for your product. They help you answer questions about complicated business scenarios, for example Facebook's “who can contact me” setting.

Your testers will need the skill of writing DRY tests on top of those requirements. Then, when a new tester comes on board, he can quickly learn the application features by reading those test scripts.

Those tests would run on Jenkins, and the run should be triggered manually. The tests will tell you whether some automated feature has changed. It is not wise to base your regression testing strategy on those results.

Because those checks, or asserts, verify only what is coded, nothing else. And the UI is for human users, not for test scripts. So a human should always do smart regression testing, using their brain, eyes, ears, and fingers.


A blog that makes software testing interesting and exciting.