It seems to me that almost everybody in the IT quality assurance sphere has some experience with automated tests. So, not only because my first automated test script came alive more than 15 years ago, I’ve decided to contribute to that popular topic too. Don’t worry, I won’t bore you with any kind of nostalgic tale from IT prehistory; instead, I’ll tell you a fresh story from the spring of 2020.
You can find the original article in Czech here.
Nature of the Project
An agile team of standard size (around 8 people) received the task of developing an application that helps speed up the verification of an employee’s health status at the workplace entrance.
A typical scenario is as follows: early in the morning, the employee receives an SMS with a link to a questionnaire indicating the possibility of infection with the Covid-19 virus. Before the employee enters the workplace, an authorized doorman checks on their mobile phone whether the employee’s questionnaire has been filled in and measures the employee’s temperature, which they also enter into the application.
If the application does not flag a risk of infection, the employee happily enters their workplace.
The application’s design is primarily tailored to mobile phones. The questionnaire wizard presents screens with single questions: depending on the user’s answer on the first screen, it directs them to the next one. The application looks very similar to this indicative test.
Technologies and Tools applied
As part of the analysis, Figma was used for the graphical description of the screens, along with diagrams at draw.io, where a decision tree for routing the user between screens was captured.
Project management took place on a Jira Kanban board, following the Scrumban methodology. Tests were managed by the Jira plugin TM4J. Naturally, all the code was stored in versioned repositories (GitLab).
Scrumban, as the name suggests, is a hybrid of the Scrum and Kanban methodologies, intended to ease Scrum of being locked into sprints, which became too rigid for some teams. You can find more, e.g., in this article about the Scrumban methodology.
GUI automation was implemented with the traditional combination of Robot Framework and the SeleniumLibrary, while API tests were implemented using Karate.
At the time, the Karate framework was the only open-source tool combining API test automation, mocking and performance testing in one framework. It uses Gherkin syntax for writing tests, which is ideal from the BDD point of view, even for non-programmers. Not only does it have effective JSON & XML validations, but it also offers parallel test execution. If you want to know more, have a look at the Karate project home page.
There were many ways in which the user could be routed between screens (in other words, the application’s decision tree was very extensive). Therefore, it was clear from the beginning that manual testing would not be sufficient, so the emphasis was put on automated GUI tests, which needed to meet two criteria: they had to be developed simultaneously with the application and cover all possible paths through the decision tree.
The application’s service architecture called for automated API tests, so API tests were implemented in parallel with the delivery of the individual APIs. At the same time, the goal was to cover the whole process through API calls, i.e. to simulate a complete use case in which the frontend application collects answers from the user’s questionnaire and sends them via API for further processing, ending with a recommendation on whether the employee may enter their workplace or not.
Tests were designed with easy maintenance and flexibility in mind, in view of possible later modifications. Although the “Record & Replay” technique is still popular nowadays, it was not used at all. The application’s organization into individual screens implied the use of the POM design pattern, supported by atomic design principles.
Page Object Model (POM) is a design pattern extensively used in the area of test automation. It significantly contributes to the efficiency of test maintenance, mainly by avoiding duplication in the code (see, for example, the article Page Object Model design pattern). As nothing is perfect, POM also has certain drawbacks, but they can be mitigated by the atomic approach — you can find inspiration in the article Why You Should Modularise Your Automation Code.
Furthermore, creating tests for the many variants of passing through the decision tree had to be easy and fast: firstly because of the previously mentioned flexibility, and secondly because tests were also prepared by an analyst without the technical knowledge necessary for scripting.
The whole solution thus consisted of three layers:
- Test Cases: passes through the decision tree, i.e. questionnaire screens with data filled in, arranged in the expected sequence.
- Screens: implementation of all screens, i.e. the logic of filling them in.
- Shared support libraries: supporting technical components that facilitate the implementation of screens (scripting of certain shared UI elements, necessary supporting communication directly on the API, etc.) and enable specific test configurations (browser type, headless mode for running in CI, etc.)
Besides the other benefits implied by the POM approach, this concept brought one more significant advantage: test design could proceed virtually independently of the screens’ implementation. It was enough to know each screen’s name and data structure, which were already defined in the screen design proposal (see Figma above).
In the case of the API, the situation was simpler. Each API was covered by an elementary test. These elementary tests were also reused in E2E scenarios covering a few happy-day use cases.
The main purpose of the API tests was to check the overall condition of the application to confirm it was testable; such tests are sometimes referred to as a “health check”. In other words, a smooth run of the API tests was a prerequisite for running the GUI tests.
The advantage of the API tests was that executing the complete test set was very fast (a matter of seconds), so it was no problem to run the API tests very often, e.g. every time a developer changed the code.
Test design and implementation
The implementation of the GUI tests is built on the Robot Framework platform, supported by the SeleniumLibrary and REST libraries.
Test cases: each test case represents one pass through the application’s decision tree. A typical test case thus consists, in addition to the opening and closing directives (Intro and Closure), of only a list of screens. For example, in the test case Symptom 1 | Below 40 | worsening, the keyword pass symptoms fever says: “On the screen with symptom questions, select that you have a fever and move on.”
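To illustrate, such a test case might look roughly like the following Robot Framework sketch. Apart from the test case name and the fever keyword mentioned above, all keyword names and file paths here are hypothetical:

```robotframework
*** Settings ***
# screen keywords live in the shared screens repository (illustrative path)
Resource    ../screens/screens.robot

*** Test Cases ***
Symptom 1 | Below 40 | Worsening
    Intro
    Pass Symptoms Fever          # select "fever" on the symptoms screen
    Pass Temperature Below 40    # hypothetical screen keyword
    Pass Condition Worsening     # hypothetical screen keyword
    Closure
```

The test case is just a readable sequence of screens, which is what allowed a non-scripting analyst to assemble new variants quickly.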
Screens: screens are stored in a repository (one place in the project’s directory structure).
Shared support libraries: shared technical components are located separately. These are typically various “tweaks” that we bring as experience and tips from other projects, but also project-specific tools.
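A screen keyword from the screens repository could then look like this sketch (the element locators and names are illustrative, not taken from the project):

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Keywords ***
Pass Symptoms Fever
    # readiness check before interacting with the screen
    Wait Until Element Is Enabled    id:symptom-fever
    Click Element    id:symptom-fever
    Click Element    id:next-button
```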
The API tests’ implementation is based on the Karate framework, which uses Cucumber-style keywords. An example of a single test for a particular API is trivial:
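The original listing is not reproduced here; a minimal Karate test of this kind, assuming a hypothetical endpoint and payload, looks roughly like:

```gherkin
Feature: Submit questionnaire

  Background:
    * url baseUrl

  Scenario: Submit a filled questionnaire
    Given path 'questionnaire'
    And request { employeeId: '12345', fever: true, temperature: 37.2 }
    When method post
    Then status 200
    And match response == { id: '#notnull', riskDetected: '#boolean' }
```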
Even in the API tests, the intention was to keep elementary tests in one repository. You can also find the questionnaire data from the example above in this E2E flow:
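Such an E2E flow can be sketched by chaining the elementary tests with Karate’s call; the feature names and response fields below are hypothetical:

```gherkin
Feature: E2E happy day flow

  Scenario: Employee is allowed to enter
    # reuse the elementary test that submits the questionnaire
    * def created = call read('createQuestionnaire.feature') { employeeId: '12345' }
    # the doorman enters the measured temperature
    * def checked = call read('recordTemperature.feature') { id: '#(created.response.id)', temperature: 36.6 }
    * match checked.response.entryAllowed == true
```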
Calling tests from the repository is accomplished by using:
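In Karate, a feature from elsewhere in the repository is typically invoked like this (the path is illustrative):

```gherkin
* def result = call read('classpath:questionnaire/createQuestionnaire.feature')
```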
If we want to call a test in the Background section, which in Cucumber contains the steps valid for each Scenario, we use:
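A sketch of such a Background, assuming a hypothetical authentication feature that returns a `token`:

```gherkin
Background:
  * url baseUrl
  * def auth = callonce read('classpath:common/authenticate.feature')
  * header Authorization = 'Bearer ' + auth.token
```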
The callonce operation ensures that the actual call takes place only the first time; the result is cached for further calls within the other Scenarios in the Feature. See the documentation for more details.
Overall, it can be stated that Karate has very good documentation, especially considering that it is open source.
In the example code, the line implementing the Slack integration is also worth noting:
```gherkin
And eval if (validateSToken.responseStatus != 200) karate.call('sendResultSlack.feature')
```
Project Highlights & Downfalls
What was ok
- Being part of a well-tuned team
Tools and methodologies can be great, but at the end of the day, it turns out that it is all about people. Whenever there is a team of similarly motivated people who enjoy the work, are professionals (they understand their work and are able to ask the right question when they can’t figure something out) and work for the team (they try to help and communicate effectively), it is a very nice experience. Only then can those tools and methodologies, if properly selected and used, really increase the overall effectiveness of the team.
- API tests
I have been promoting the Karate framework among testers and implementing it on projects for some time now, as demonstrated by my year-old publication on TestStack ;] (unfortunately in Czech only). Apart from its technical advantages, it is interesting for its non-GUI concept. Eventually, you’ll find that clicking and filling in boxes in recently popular tools (Postman, Insomnia, SoapUI, etc.) simply slows things down. Additionally, since the tests are written purely in code, Karate allows the use of a version control system (in our case the proven Git), which makes it easier to collaborate flexibly within a test team, to share tests transparently (typically with a development team), and to connect them to CI/CD effectively and reliably.
If you are wondering why Karate and not RestAssured, check out this comprehensive comparison.
What hurt a little
The main unpleasant surprise (and probably the only one) of the project was the toothlessness of the Selenium library for Robot Framework against the JS framework React.
Selenium for Robot Framework contains a number of practical keywords that simulate user actions on a website. However, it must be used with caution, as React and JS frameworks in general rely primarily on the user’s basic actions: keystrokes and mouse clicks (or mouse down/up). Thus, for example, keywords that select an item from a list are virtually unusable.
Another problem is the complexity of a JS application as such. Sometimes we cannot tell from the visual components whether the application is still processing something in the background (e.g. an asynchronous AJAX call).
In order for the tests to be stable, I recommend keeping a low profile with the choice of keywords that simulate user actions. This is what worked for me:
- stick to just a few keywords to simulate user interactions
- do not underestimate screen/component readiness checks (I truly recommend using them generously):
```robotframework
Wait Until Element Is Enabled
Wait Until Element Is Not Visible
Wait Until Page Contains
Location Should Contain
```
- wrap more sophisticated logic into user keywords.
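For example, selecting an item from a React dropdown can be wrapped into a user keyword that relies only on clicks and readiness checks; the locators here are hypothetical:

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Keywords ***
Select Answer From Dropdown
    [Arguments]    ${answer}
    # wait until the dropdown is interactable, then open it with a plain click
    Wait Until Element Is Enabled    css:.dropdown-toggle
    Click Element    css:.dropdown-toggle
    # wait for the options to render before clicking the desired item
    Wait Until Page Contains    ${answer}
    Click Element    xpath://li[normalize-space()='${answer}']
```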
A hint of a solution is the robotframework-react project, though it is unfortunately not very active. So far, a single keyword is implemented, indicating whether the application is fully up and running.
However, this topic deserves a separate article ;-]
About the author
Viktor Terinek started his professional journey in IT quality assurance more than 15 years ago. He now works as a Test Consultant/Architect at Tesena, the only Czech IT company focusing solely on quality assurance and testing services.