Recipes for test automation (Part 4) – When chaos reigns in the kitchen

Everybody knows what it’s like in a poorly run restaurant. At the table, everyone gets their food at different times, the schnitzel has the consistency of a shoe sole or we get something we didn’t order at all. The chef in the kitchen is completely overwhelmed and cannot cope with the flood of orders and constant changes to the recipe.

Software testing is not all that different. Let’s see the tester as a chef trying out a new recipe: the new recipe is our test object, which the chef checks by cooking it. The test team cannot keep up with the flood of changes. Tests are unnecessarily duplicated, or they are forgotten or overlooked. Errors go undetected and may then make it into production. The chaos is complete and the quality is poor. What to do in this case? Urge the chef on, automate the chaos, or simply hire more testers? No! Because what would happen then?

Support and urge the staff to get things done?

Since the chef is already on the verge of collapsing due to the chaos, urging everyone on will only lead to a short-term improvement followed by a knockout. This does not lead to a long-term optimization of the situation.

Automate the chaos (introduce test automation)?

Since automation involves an initial additional effort and it is not clear where to start in this chaos, this would only result in even more chaos and overload in the kitchen, reducing the quality even further.

Simply hire more staff?

As the saying goes: “Too many cooks spoil the broth.” So simply providing the chef with additional assistants does not necessarily mean that all issues are solved. It should not be underestimated here that the new staff must first be trained. This, in turn, can lead to delays in workflows. This definitely needs to be carefully planned, as it will otherwise result in even more kitchen chaos.

So what’s the solution?

First of all, we have to analyze why there is chaos in the kitchen and what causes it. It often turns out that there are bottlenecks in unexpected places. For example, a waiter writes the orders illegibly on a piece of paper, so it is unclear what was ordered for each table. This means that the chef (tester) constantly has to make inquiries about the orders. In this comparison, we consider the waiter as an analyst and the order placed by the waiter as a partial requirement. So even in testing (in the kitchen), the problems can already be present in the recorded requirements, and the tester must constantly ask what a requirement might mean.

Likewise, the chef might only go looking for the ingredients and start preparations once the order is placed; in other words, the tester only creates the test cases once they receive the finished product.

Also, it is important that communication in the kitchen runs smoothly. Not only in the kitchen, though, but also with the waiter, the patron and the creator of the recipe, communication must run smoothly. In the test, this means that communication must be ongoing not only with the test team, but also with the analyst, the product owner and the developer.

Another problem could be that the distance between stove and sink is too far. For our testers, this means that their test tool is simply too slow and takes too much time to create or perform a test case.

Consequently, the work process must be examined more closely:

  • Starting situation
  • Communication
  • Work steps
  • Tools used
  • Documentation
  • etc.

The analysis can be used to identify shortcomings and take appropriate measures. To put it briefly, this analysis with appropriate measures must become a regular process. That’s right: I’m talking about retrospective analysis at regular intervals. It is also important that a retrospective analysis of this kind not only identifies the issues, but also defines the measures to be implemented and reviewed in the next retrospective analysis. If only an analysis of problems is made and no measures are taken, then nothing will change.

For test automation, too, it is important that the work processes are optimized first, or the automation will not be successful. In other words, the schnitzel turns black if cooked without oil, regardless of whether the cooking is automated or not.

Unfortunately, there is no one-size-fits-all formula that works in every project. However, there are some “best practices” in the form of suggestions and as an impetus for improving the project. For an initial introduction to a regular improvement process, you are welcome to contact us and conduct the first retrospective analysis with one of our experienced consultants.

Please have a look at my other articles on test automation:

Recipes for Test Automation (Part 1) – Soup
Recipes for Test Automation (Part 2) – Data Salad
Recipes for Test Automation (Part 3) – What should a proper (test) recipe look like?

Selecting test automation tools – criteria and methods

The first article presented the general challenges and influencing factors that arise when selecting test automation tools, based on the results of various interviews. It showed that no standard method exists for selecting test automation tools, although there are some initial approaches involving checklists. The purpose of the thesis was therefore to find a simple, productive approach that would support the search for appropriate test automation tools on the basis of a criteria catalogue, taking account of the project application.

The basic requirement in the development of a criteria catalogue is to determine which criteria are actually relevant in the selection of a tool. I will look at this in the second part of the blog series.


Relevance of the selection criteria

There is extensive discussion in the literature on suitable criteria for determining the quality of software products and, by extension, of test automation tools. The criteria identified for the criteria catalogue were largely developed on the basis of literature research. The sources used are presented briefly below:

The ISO 25010 list of software quality characteristics provides a useful checklist when deciding whether or not to test each criterion. Similar lists of quality characteristics can be found elsewhere (Spillner/Linz 2019, p. 27). In each case, the authors provide guidance that can be used to determine whether the individual quality characteristics have been fulfilled for a project. In the field of test automation, there are lists of criteria in Baumgartner et al. 2021, Lemke/Röttger 2021, Knott 2016 and others. These criteria relate to such factors as the supported technologies, options for test case description and modularisation, target group, integration into the tool landscape and costs. However, the objective here is simply to identify relevant criteria, not to evaluate them. There are additional criteria from the expert interviews and the analysis of the ZDI projects.

The selection criteria identified in this work therefore take findings and experience from existing papers on the subjects of quality, test automation and the creation and review of requirements for test automation tools, and combine them into one approach with practical relevance.

Figure 1: Sources of the criteria catalogue

Classification of criteria

In the selection of test automation tools, influencing factors such as experience, costs or interest groups were mentioned. For many, costs were the key factor. During the course of this work, it was found that criteria such as integration or compliance are defined differently depending on the role, but are included more or less as standard in the list of criteria; they are unchangeable, regardless of the application undergoing testing. A small proportion of criteria, however, vary depending on the application being tested. Here is a scenario to illustrate the problem: in the medical technology industry, a new, web-based application is being developed – the frontend with Angular 8 and NodeJS and the backend with Java microservices. The test automation tool to be selected must primarily be appropriate for the framework conditions specified by the web application undergoing testing. Before an automation tool can be used, the interface technologies of the application must be examined. In practice, test automation tools have specific characteristics and are therefore used for different technologies. Some tend to specialise in web testing while others are more at home in desktop testing. Whether it is a web application or a mobile application, there are always certain expectations that apply to the test automation tool. This means the most critical factor is the test object, or the system under test (SuT). This forms the basis for selecting the tool (Baumgartner et al. 2021, p. 45). In summary, the criteria can be classified into two types: a standard proportion and a variable proportion.

Figure 2: The two types of criteria

The standard criteria are not dependent on the test object. Based on the quality criteria, they have been grouped into four categories: features, system, usability and provider-related criteria. By contrast, the variable criteria are dependent on the SuT. They may include the supported application types, operating systems, interface technologies, browsers and devices.

Variable selection strategy

Variable selection means choosing which variable criteria to include in the list of criteria. To support this selection, a target model with AND/OR decomposition based on GORE (Goal Oriented Requirements Engineering) and the work of Lapouchnian (2005) and Liaskos et al. (2010) was introduced in the course of my work. This approach had proven effective at recording exact alternatives or variabilities (Mylopoulos/Yu 1994, pp. 159-168) so that alternative designs can be evaluated during the analysis process (Mylopoulos et al. 2001, pp. 92-96). The targets and requirements are linked via AND/OR decomposition: an AND decomposition expresses that all of the relevant targets or requirements must be fulfilled, whereas an OR link means that fulfilling one of them is sufficient. In this process, the initial expectations for the tool are formulated in explicit terms and irrelevant requirements are avoided.
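
The AND/OR decomposition described above can be sketched in a few lines of code. The goal names below are invented for illustration and are not taken from the article’s actual target model:

```python
# Minimal sketch of an AND/OR goal decomposition. Leaf goals are marked
# satisfied directly; inner goals derive satisfaction from their subgoals.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    decomposition: str = "AND"   # how this goal links its subgoals: "AND" or "OR"
    subgoals: List["Goal"] = field(default_factory=list)
    satisfied: bool = False      # only meaningful for leaf goals

    def is_satisfied(self) -> bool:
        if not self.subgoals:
            return self.satisfied
        results = [g.is_satisfied() for g in self.subgoals]
        return all(results) if self.decomposition == "AND" else any(results)

# Example: the tool must support web UI automation AND (Chrome OR Firefox).
browsers = Goal("Browser support", "OR", [
    Goal("Chrome", satisfied=True),
    Goal("Firefox", satisfied=False),
])
root = Goal("Web application testing", "AND", [
    Goal("Web UI automation", satisfied=True),
    browsers,
])
print(root.is_satisfied())  # True: the OR branch is met by Chrome alone
```

The OR branch makes the variability explicit: either browser alone is enough, so a tool is not rejected just because it lacks one of the two.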

Figure 3: Simplified target model for sample project XY 

Approach to tool selection

Based on the types of criteria identified (Spillner et al. 2011), this work designs a structured approach to selecting the appropriate test automation tool. The new approach can be divided into five stages:

  1. Develop the project requirements
  2. Identify the variable criteria using the AND/OR target model
  3. Weight the categories and criteria
  4. Evaluate the different tool solutions
  5. Evaluation and selection
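
Steps 3 to 5 boil down to a weighted scoring of the criteria catalogue. The following sketch uses invented category weights and tool scores purely for illustration; the real catalogue defines its own categories and scale:

```python
# Hedged sketch of weighted tool evaluation. Weights and scores are
# invented for illustration, not taken from the criteria catalogue.

weights = {"features": 0.3, "system": 0.2, "usability": 0.2,
           "variable": 0.3}                      # step 3: weight the categories

tools = {                                        # step 4: raw scores (0-10)
    "Tool A": {"features": 8, "system": 6, "usability": 7, "variable": 9},
    "Tool B": {"features": 9, "system": 7, "usability": 5, "variable": 4},
}

def weighted_score(scores):
    """Sum of category score times category weight."""
    return sum(weights[c] * s for c, s in scores.items())

# Step 5: rank the candidates and pick the best one.
ranking = sorted(tools, key=lambda t: weighted_score(tools[t]), reverse=True)
best = ranking[0]
```

Here Tool A wins despite its weaker feature score, because the heavily weighted variable criteria (driven by the SuT) favour it.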

The next blog article looks at how the criteria catalogue has been structured on the basis of the approach outlined above. We will also show how the variable criteria have been included in the catalogue and present the results of the validation of the criteria catalogue.

This article was technically supported by:

Kay Grebenstein

Kay Grebenstein works as a tester and agile QA coach for ZEISS Digital Innovation, Dresden. Over the last few years, he has ensured quality and tested software in projects in various specialist domains (telecommunications, industry, mail order, energy, …). He shares his experiences at conferences, meetups and in publications of various kinds.


Estimation of testing cost – “Haggling like at the bazaar?”

When an external testing service provider and its client agree on the scope of the next testing project, they use an estimation of testing cost. In most cases, however, preparing an estimation of testing cost that is as accurate as possible and accepted by all parties involved is a challenge for two reasons: the timing of the preparation and the parameters defined for the specific assessment of the test scenarios or test cases. Most readers are probably wondering what the above headline has to do with this problem. It is the result of a realization based on my experience preparing estimations of testing cost in the past. Time and time again, I experienced the same fundamental discussions and amendments of the estimation of testing cost I prepared. I will return to this later.

First of all, I will describe the basis of an estimation of testing cost.

Basis of an estimation of testing cost

There are several ways for the testing service provider to bill the client for testing services. So far, I have used two of them in projects where I prepared estimations of testing cost: Firstly, on a T&M (time and material) basis, and secondly, based on the complexity of the test scenarios to be executed.

With the first option, the testing service is billed based on the time spent on the tests (worker days). For this purpose, we estimate the number of days or hours and the number of testers required for the execution of the tests.

Since this type of estimate of the test effort usually refers to a specified period of time, there is no allocation to specific scenarios or test cases. Consequently, the client usually accepts this estimate in its entirety and awards the contract accordingly.
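
For the T&M variant, the estimate is simple arithmetic; the number of testers, the period and the daily rate below are invented for illustration:

```python
# Hedged sketch of a T&M (time and material) estimate:
# worker days = testers × days, billed at a daily rate.

testers = 3
days = 10             # estimated duration of the test execution
daily_rate = 800.0    # invented rate per worker day

total_worker_days = testers * days
total_cost = total_worker_days * daily_rate  # 30 worker days → 24000.0
```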

Figure 1: Successful negotiation

The second option referred to in this blog post is billing on the basis of the test scenarios and the effort required by means of a complexity model. In a first step, we identify the necessary test scenarios based on the test activities to be performed. In a second step, we determine the corresponding test cases and allocate them to the respective scenario. Ideally, these scenarios represent a business process and therefore comprise several systems within the company. Such a process could, for example, cover the following systems:

  • Account management software (in-house use by the client)
  • Archiving (in-house use by the client)
  • Accounting software (in-house use by the client)
  • Device management (in-house use by the client)
  • Client portal (external use by clients)
  • Device usage (external use by clients)
  • Additional tests in databases

Consequently, different business processes usually entail different levels of effort required for the execution of the tests. In addition to the trivial consideration of the effort of testing itself, we have to take into account that the overall effort not only comprises the execution of the tests as such, but the prerequisites to be fulfilled (e.g. certain data constellations) as well.

Furthermore, the test activities can increase if the client requires additional support during testing. Due to the coordination and the different availabilities of the company’s employees, the necessary amount of time can increase. All of these aspects directly affect the effort required for testing.

Determining the complexity 

In order to determine the effort and calculate the price of a scenario for testing, we have to determine the level of complexity of the test tasks to be performed.

For this purpose, I used different methods in past projects.

With the first, “simple” option, I classified the complexity into three classes (small scenario, medium scenario or large scenario). But even there, there were exceptions for certain scenarios agreed with the client where the effort was rated higher because additional external service providers were involved in the tests.

In order to determine the complexity more accurately, I developed and introduced a more detailed version with a broader range of complexity categories. In this enhanced version, the classification ranges from “1” for a very small scenario to “9” for a highly complex scenario involving external service providers.

Figure 2: Coordination between client and service provider

The following three segments within an individual scenario were defined to determine the category of the scenario:

  • Effort for the test preparations
  • Effort for testing
  • Effort for the test specification

Each of these segments is then given points depending on the time required. The sum of the points in all three segments is converted by means of a conversion key, resulting in the complexity of the scenario.
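
The conversion described above can be sketched as follows. The article only describes the mechanism, so the segment points and the conversion key thresholds below are invented for illustration:

```python
# Hedged sketch of the points-to-category conversion for a scenario.
# Points per segment and the conversion key are hypothetical values.

def complexity_category(preparation: int, testing: int, specification: int) -> int:
    """Sum the points of the three segments and map them to category 1-9."""
    total = preparation + testing + specification
    # Hypothetical conversion key: roughly 3 points per category step,
    # clamped to the range 1 (very small) to 9 (highly complex).
    return min(9, max(1, (total + 2) // 3))

# A scenario with little preparation but heavy testing effort:
category = complexity_category(preparation=2, testing=7, specification=3)  # → 4
```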

Problems with the estimation of testing cost

We will now describe problems that may occur with the estimation of testing cost, or circumstances that often cause the client to start negotiating the categories of the scenarios after the estimate has been prepared.

From the client’s point of view, this is usually about the cost of the tests, which the client wants to keep as low as possible without reducing the scope of the tests.

Since the estimation of testing cost usually has to be prepared long before the tests are to be performed, precise specifications are not yet available in most cases. In the best case, first drafts of the technical specifications have already been prepared. However, they often do not yet include the entire system environment that is affected by the software adjustment. The missing documents have a negative effect on the following aspects.

With estimations of testing cost, the test effort of a scenario is determined based on previous test runs or, in the case of new topics, it is estimated and categorized accordingly. Since the effort in the individual segments (e.g. effort for testing) is estimated based on the time required, discussions about the estimated time spans and the corresponding category of a scenario are common. After reviewing the estimate, the client frequently believes that it is possible to perform the tests of a scenario in less time. Often, they will get their way, and the estimation of testing cost is amended – first successful haggling.

Such minor deviations can be compensated by experienced testers who know the systems involved like the back of their hand and can therefore perform the tests more quickly. Inexperienced testers who do not have the necessary expertise for one or several of the systems involved in the scenario will need more time, causing the first discrepancy between the estimated and the actual effort.

Another discrepancy arises if software is modified across several systems, causing the work required to meet the testing prerequisites to increase. This can, for example, happen if certain data constellations can no longer be readily generated in the legacy system, e.g. if write permissions on a database are eliminated. Since software adjustments are done at regular intervals (semi-annually) in this project, scenarios representing the same business process repeatedly emerge after some time. These previous estimates are used for comparison. Consequently, small additional efforts such as the additional work required to fulfill the testing prerequisites are allocated to a lower category as well. The estimation of testing cost is amended – second successful haggling.

Sometimes, the category a scenario has been allocated to in an estimation of testing cost may also be too low. This can happen, for example, when the software includes entirely new functions for which empirical values do not yet exist. Consequently, the scenario has to be allocated to a higher category in the next estimate. The client checks this change in category and, having little experience in this respect, uses the previous estimate as a basis. In this case, the estimation of testing cost is not amended – third successful haggling.

This kind of negotiation does not happen with every scenario in every cost estimate, but it happens every so often with individual scenarios in every estimate.

Conclusion

In my opinion, an estimation of testing cost based on time expenditure alone should only be used internally by the test service provider in order to determine the complexity. Estimating the effort required for the preparation, the testing, etc. based on time can only work if the necessary parameters are quantifiable and defined by both parties. Discrepancies already arise when test cases are executed and recorded by different testers: one tester records his results very thoroughly, with additional screenshots, while another records them just sufficiently by copying and pasting from the system. This alone can cause a discrepancy of up to +/- 30 minutes.

An estimation of testing cost requires the use of evaluation criteria. These criteria should be comprehensible and assessable for both the client and the test service provider. Furthermore, a certain basis of mutual trust is necessary in order to be able to prepare an estimate that is convincing and acceptable for both parties. While the estimate of some scenarios may be lower, others may be allocated to a higher category. This way, the estimation of testing cost as a whole will be well balanced.

Once you have found a common basis, working on an estimation of testing cost becomes much easier. When both parties are in agreement about the estimated effort, subsequent amendments – “haggling like at the bazaar” – are for the most part minimized.

Distributed working: Remote Mob Testing

Following the post on remote pair testing, now I would like to address the topic of remote mob testing. In many companies, employees work together, but in different locations, even within the project team. If you still want to use the benefits of mob testing, you can use the distributed version.

First of all, you need to understand the concept of mob testing. It is a collaborative and exploratory method that can, for example, be used to share testing know-how among the teams, or to automate testing together with the team, or to use new technologies.

Roles in mob testing

Mob testing has four different roles:

  • The driver is responsible for implementing the test ideas. They follow the navigator’s specifications, and they must not interfere with the test idea.
  • The navigator, on the other hand, is the person who has the test idea and provides the specifications in this respect. They explain to the driver how and, most importantly, why, the driver is to implement the idea.
  • The mob members observe what is happening and help the navigator develop the ideas. They can also ask questions.
  • The facilitator is the only role that does not rotate. They manage the schedule, record comments and anomalies, and act as moderator and arbitrator. They are responsible for preventing unnecessary discussions and promoting useful conversations.
Figure 1: Roles in mob testing

The roles of driver, navigator and mob member rotate regularly, e.g. when a test idea has been implemented, or alternatively after several minutes. For time-based rotation, the pertinent literature specifies a standard rotation cycle of four minutes. This cycle can be adjusted as necessary. The goal is not to interrupt the current navigator in the middle of a test idea. For remote mob testing, some sources say that remote implementation works better if the rotation time is set to 10 minutes in order to avoid inopportune interruptions.

The remote concept in mob testing

For the remote approach, a smaller group is advisable. Four to six people is a suitable size to ensure a clear structure where constructive discussions are possible. The duration can be set to 1.5 hours.

For meaningful remote mob testing to be possible, you first need to choose a suitable communication tool. MS Teams or Skype, for example, work well in this context. There are different options for executing the rotation, and consequently, for the implementation by the driver.

Practical examples

Below, I am going to present two approaches that have worked well for me. It is advisable to define a sequence for the rotation during the preparation to ensure a smooth rotation. For this purpose, you can number the participants.
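
Numbering the participants and rotating through them, as suggested above, could be sketched like this (the names are placeholders):

```python
# Sketch of a rotation sequence for remote mob testing. The facilitator
# stays fixed; driver and navigator rotate through the numbered list,
# and everyone else is a mob member for that round.

from itertools import islice, cycle

participants = ["1: Ada", "2: Ben", "3: Cleo", "4: Dan"]  # placeholder names

def rotations(people, rounds):
    """Yield (driver, navigator) pairs for the given number of rounds."""
    ring = cycle(range(len(people)))
    for i in islice(ring, rounds):
        yield people[i], people[(i + 1) % len(people)]

schedule = list(rotations(participants, rounds=4))
# Round 1: driver "1: Ada", navigator "2: Ben"; round 2 shifts by one, etc.
```

Posting such a schedule in the meeting chat at the start of the session makes each handover unambiguous, whichever of the two transfer approaches below is used.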

Approach #1: Transfer of mouse control

In this case, the facilitator shares their screen, and the driver requests control of the mouse. The mouse control is passed on when it is the next driver’s turn.

This approach works well when you are working on a test object for which you need the previous driver’s state in order to be able to continue working in an expedient manner. Personally, this is my favorite approach because it is uncomplicated and quick.

Approach #2: Transfer of screen transmission

With this approach, each driver shares their screen when it is their turn. However, this method is suitable only if it is possible to implement further ideas on the test object, irrespective of the previous test, because the last state of the previous driver is no longer available for the next screen transmission. Another option in this approach is giving several people access to the object, e.g. a user for all the participants.

Conclusion

Mob testing itself is simple to do, and, like pair testing, it is a very lightweight testing method. Likewise, the two approaches that I have presented in this post (transfer of mouse control vs. screen transmission) are uncomplicated and simple to implement. Both methods have worked well and facilitate distributed collaboration.

Furthermore, there are some tools available for frequent use and/or the joint development of automated tests, for example. If you want to explore the test object and perform distributed tests quickly and straightforwardly, the two methods presented above are highly suitable.

Distributed working: Remote Pair Testing

At ZEISS Digital Innovation, the distributed approach has been put into practice for a long time. Especially during the corona pandemic, this approach is more in demand than ever. The good news: work can continue from home. But remote work is not only possible from home, but also across different locations and offices.

Classic pairing is already used very successfully in agile software development. It is an efficient method for solving complex tasks together and achieving the best possible result with the knowledge of two people. It is also an excellent way to distribute knowledge: through extensive communication of thoughts and ideas, both participants end up with a similar level of know-how. In this article, I would like to show how this distributed cooperation can succeed.

Presentation of the method: Pair Testing

Pairing involves dividing the pair into two roles. On one side, there is the driver, who implements his test idea and communicates his thoughts to the navigator. He explains everything he does as transparently as possible. This enables the navigator to understand the driver’s approaches and steps.

On the other side, there is the navigator. He checks the driver’s inputs and also communicates his thoughts about them. In this way, new solutions can be pointed out, and the navigator can clear up any ambiguities by asking questions. Thus, both learn from each other.

The roles change regularly so that everyone gets the chance to experience the application and implement their ideas. The change takes place after a completed test idea or after a few minutes. This is also called the rotation of the roles.

Figure 1: driver and navigator

Remote work: Technical requirements

For both parties to be able to work together remotely, suitable conference software is required, e.g. MS Teams or Skype. This allows the test object to be shared via screen sharing. There are two possibilities for the working process:

  • On the one hand, mouse control can be requested alternately for the role rotation. However, this can slow down the work.
  • Alternatively, the screen sharing can be switched according to the role rotation. This disturbs the process less, but both participants then require access to the test object. Likewise, a train of thought cannot be continued directly, since the application is in a different state after the change.

If you follow the approach of changing roles after a few minutes, any stopwatch function (e.g. a mobile phone clock) can be used to keep time. However, this can lead to problems if you are interrupted in the middle of a test idea and the new driver has to follow up on it. It is therefore worthwhile to perform the rotation once a test idea is completed.

Pair Testing: General Requirements

There are other aspects to be considered in order to make distributed working a success.

The tasks for the Pair Testing Session should be large and complex enough to make solving them in pairs worthwhile. It is therefore important to prepare the session well and to set appropriate tasks. This content can be, for example, stories that are to be tested.

Focused cooperation requires a lot of concentration. It is therefore important to take coordinated breaks to regain energy. Simple tasks, on the other hand, can be solved quickly and effectively alone. It is therefore advisable to create time slots in which Pair Testing Sessions are held for the prepared content. In practice, this can mean, for example, spending half a day testing together in pairs and the other half working alone.

Summary

Pair Testing is a lightweight method that anyone can use without complications. With the right technical support, it is also easy to implement in remote work. This way, we can learn from each other and support each other in complicated tasks, despite long distances. Furthermore, working together helps to prevent remote alienation.

Remote Alienation – It can be avoided!

With digitization and the associated integration of IT and software components, demand in one specific IT discipline is growing not just linearly but exponentially – in quality assurance! Scarce office space, an increasingly difficult recruiting situation and the lack of innovation in the image of quality assurance clearly exacerbate the problem.

At first glance, the use of external QA consultants appears to be the solution, but at second glance it also presents some hurdles:

  • Due to new legislation (the employee leasing law), using one or more external test experts has consequences under labour law: proximity to the customer and integration into the customer’s test processes constitute employee leasing, with the resulting organisational and legal consequences.
  • If the test team is to be deployed directly at the customer’s premises, it is also necessary to make office space and the corresponding technology available to all those involved
  • The necessary budget must be provided for an external test team. The on-site activity of the external test team or external test experts increases the budget
  • Due to the current situation on the labour market, it is becoming increasingly difficult to find employees for internal positions as well as for external service providers, who are prepared to relocate to the project location or are willing to travel

The solution here is an external test-centre, which supervises the necessary tests remotely. A remote deployment offers the customer the following advantages:

  • Lower costs for the test service, as travel and accommodation costs are eliminated
  • Higher scalability of the test personnel, as the service provider can optimize its utilization at one location and thus guarantee the customer greater planning reliability in the provision of resources
  • By concentrating personnel at fewer locations, more know-how carriers and experts are available for special assignments
  • Centralization keeps the test teams stable, which reduces the fluctuation of acquired domain knowledge
Figure 1: Collaboration at different locations

In addition, the service provider can open the test-centre at a location with a strong and stable supply of employees. In this way, it can find suitable employees and retain them for longer.

However, the use of a remote test team/test centre – i.e. the physical separation from the customer – creates a new problem, so-called remote alienation. It manifests itself in the following symptoms:

  • Speed disadvantage due to technical and communication hurdles
  • “Faulty” coordination leads to deviating test results (from customer expectations)
  • A lack of transparency leads to a loss of trust in the test results for the customer
  • Due to the physical separation, the testers have limited opportunity to collect the customer’s domain knowledge, which can lead to incorrect test procedures despite good test expertise

To counteract remote alienation, we have developed an approach that has the advantages of a remote test centre or distributed test teams, but tries to keep their disadvantages as small as possible.

The approach consists of optimizing our customer-oriented test procedure for remote or distributed test deployment and using ETEO, our working concept for distributed teams already established in software development projects. This involves setting up a small on-site office at the customer’s site as well as workstations at our sites (remote office).

The process model of our customer-oriented test procedure consists of three phases: planning, transition, and actual service support. In the planning phase, the feasibility is checked, and the contractual matters are regulated. 

Figure 2: Process model against remote alienation

The transition focuses both on proximity to the customer and on the remote approach.

In the first step, our IT experts and those of the customer are involved. Together they identify the technical framework conditions such as remote and system accesses as well as authorizations and connect the “on-site office” with the “remote office”.

At the same time, the customer’s test managers coordinate with our test task force (our test manager and test analysts). The tasks include getting to know the customer’s domain and organization, organizing the setup of the remote team (TM) and setting up the “on-site office” and the “remote office”.

Basically, the testers work remotely from their home location, but as part of the transition, a minimum six-week training period takes place at the customer’s site. On the one hand this enables the testers to get to know the customer and his processes intensively and on the other hand it also allows the customer to get to know the test team. The basis of the training is an explicit training plan, which is created by the test manager and reviewed by the customer.

During the actual service support, additional tools are used to counter remote alienation. The first tool is the test task force, which consists of our test manager and the test analysts. Despite the remote focus of the project, a significant part of these analysts are located at the customer's site. Various scenarios are used: the test manager is at the customer's site at least two days a week and supervises the team remotely the rest of the time. In peak times or on important occasions, the test manager stays at the customer's site longer. A similar setup is conceivable for the test analysts. However, it is also possible for the test analysts to alternate between the customer's premises and the remote office (e.g. on a weekly basis). The test analysts largely require the same domain knowledge so that they can stand in for each other in an emergency. By visiting the remote office, the test analysts automatically transfer their knowledge to the testers. Test analysts can also be on-site at the customer's premises for longer periods, or even simultaneously, during peak periods. This concept can save 25 % in travel expenses.

A second aspect in reducing remote alienation is the use of ETEO. ETEO – Ein Team Ein Office (German for One Team One Office) is our innovative concept for distributed cooperation and a modern form of collaboration where all participants can work together on a scenario from different locations. 

Figure 3: Distributed collaboration at two different locations

Through the targeted use of technology, methods and personnel trained for this purpose, a procedural model has been created that reduces travel costs to a minimum, and which sees transparency over all cycles of the test project as its top priority.

Figure 4: Distributed Daily

Part of the ETEO concept is a short distributed daily meeting, the so-called “test thing”.

The aim of the daily meeting is to report the status, name impediments and plan the day. Thanks to the technology of the ETEO concept, all parties involved can participate on site and in the remote office. An electronic task board (eteoBoard) is used to coordinate the daily planning.

At the remote locations, a periodic knowledge transfer (e.g. every 14 days) for all team members takes place. In addition to exchanging and building up knowledge, these meetings also serve to conduct retrospectives to document any problems that may arise and to improve the testing process.

Figure 5: test thing

In addition, other tools are also used. With regular status reports by the test manager, transparency is achieved not only for the customer but also for the entire team. This transparency is enhanced by the creation of a dashboard, which, based on the test management tool used, displays important key figures for all those involved, in agreement with the customer.

In almost all industries facing IT challenges, but especially in QA, the necessary scaling has become the number one problem. Reinforcement by external resources is not always possible and often brings new challenges of its own. One solution is well-rehearsed teams that turn highly qualified quality assessment of your software into a continuous service.

With the described approach, it is possible to access the resource pool of scalable locations, such as Leipzig, Dresden, or Miskolc, and escape remote alienation by using the appropriate tools and procedures.

Simple Case History of the QA Strategy for Agile Teams

We would like to present our agile visualization tool, the QA battle plan, a tool that allows agile development teams to recognize and eliminate typical QA issues and their effects.

Like a wrong architectural approach or using the wrong programming language, the wrong testing and quality assurance strategy can result in adverse effects during the course of a project. In the best case, it only causes delays or additional expenses. In the worst case, the tests done prove to be insufficient, and severe deviations occur repeatedly when the application is used.

Introduction

Agile development teams notice issues and document their effects in retrospectives, but they are often unable to identify the root cause, and therefore cannot solve the problem because they lack QA expertise. In such cases, the teams need the support of an agile QA coach. This coach is characterized on the one hand by his knowledge of agile working methods, and on the other hand by his experience in agile quality assurance.

The first step in the agile QA coach’s work is recording the status quo of the testing methods of the agile development team. For this purpose, he will use the QA battle plan, e.g. within the framework of a workshop. The QA battle plan provides a visual aid to the development teams which they can use to assess the planning aspects of quality assurance. Throughout the project, the QA battle plan can also be used as a reference for the current procedure and as a basis for potential improvements.

Anti-patterns

In addition, the QA battle plan makes it possible to study the case history of the current testing method. By means of this visualization, the agile QA coach can reveal certain anti-pattern symptoms in the quality assurance and testing process, and discuss them directly with the team. In software development, an anti-pattern is an approach that is detrimental or harmful to a project’s or an organization’s success.

I will describe several anti-patterns below. In addition to the defining characteristics, I will present their respective effects. As a contrast to each anti-pattern, the corresponding patterns—good and proven problem-solving approaches—will also be presented.

The “It’ll be just fine” Anti-pattern

This anti-pattern is characterized by the complete absence of testing or any other quality assurance measures. This can have severe consequences for the project and the product. The team cannot make any statement regarding the quality of their deliverables, and consequently, they do not, strictly speaking, have a product that is ready for delivery. Errors occur upon use by the end user, repeatedly distracting the team from the development process because they have to analyze and rectify these so-called incidents, which is time-consuming and costly.

No testing:
  • There are no tests
Effect:
  • No quality statement
  • Testing is done in the user’s environment
Solution:
  • “Leave quickly”
  • Introduce QA

The solution is simple: Test! The sooner deviations are discovered, the easier it is to remove them. In addition, quality assurance measures such as code reviews and static code analysis are constructive measures for consistent improvement.
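As a minimal illustration of "just test", here is a tiny automated check written alongside the code it protects. The function name and values are hypothetical, purely for illustration:

```python
# Hypothetical example: a tiny function with an automated check.
# A deviation here is found at development time, not by the end user.

def net_price(gross: float, vat_rate: float = 0.19) -> float:
    """Remove VAT from a gross price (rounded to cents)."""
    return round(gross / (1 + vat_rate), 2)

def test_net_price():
    assert net_price(119.0) == 100.0
    assert net_price(0.0) == 0.0

test_net_price()
print("all checks passed")
```

Even a single check like this gives the team a first quality statement and catches regressions early.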

The Dysfunctional Test Anti-pattern

ISO 25010 specifies eight quality characteristics for software: functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability. When new software is implemented, the focus is most often on functional suitability, but other characteristics such as security and usability play an important role today as well. The higher the priority of the other quality characteristics, the more likely a non-functional test should be scheduled for them.

Therefore, the first question to be asked at the start of a software project is: Which quality characteristics does the development, and therefore the quality assurance, focus on? To facilitate the introduction of this topic to the teams, we use the QA octant. The QA octant contains the quality characteristics for software systems according to ISO 25010. These characteristics also point to the necessary types of tests that result from the set weighting of the different functional and non-functional characteristics.

Functional tests only:
  • There are only functional tests
Effect:
  • No quality statement about non-functional characteristics
  • “It works as required, but…”
Solution:
  • Discuss important quality characteristics with the client
  • Non-functional test types
  • Start with the QA octant

The Attack of the Development Warriors Anti-pattern

Many agile development teams—especially teams that consist only of developers—only rely on development-related tests for their QA measures. Usually, they only use unit and component tests for this purpose. Such tests can easily be written with the same development tool and promptly be integrated in the development process. The possibility to obtain information about the coverage of the tests with respect to the code by means of code coverage tools is particularly convenient in this context. Certainty is quickly achieved if the code coverage tool reports 100% test coverage. But the devil is in the detail, or in this case, in the complexity. This method would be sufficient for simple applications, but with more complex applications, problems arise.

With highly complex applications, errors may occur despite good unit test coverage, and such errors can only be discovered with extensive system and end-to-end tests. For such extensive tests, the team needs advanced QA know-how. Testers or trained developers have to address the complexity of the application at higher levels of testing in order to be able to make an appropriate quality statement.

The attack of the development warriors:
  • Development-related tests only
  • No tester on the team
  • 100% code coverage
Effect:
  • No end-to-end test
  • Bugs occur with complex features
  • Quick
Solution:
  • Include a tester in the team
  • Test at higher levels of testing
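The coverage trap of this anti-pattern can be sketched in a few lines. The example below is purely hypothetical: both units pass their unit tests with full line coverage, yet only an end-to-end check exposes the integration bug:

```python
# Hypothetical illustration: both units are fully covered by unit tests,
# yet the end-to-end combination still fails.

def format_amount(cents: int) -> str:
    """Format cents as a decimal string, e.g. 1050 -> '10.50'."""
    return f"{cents // 100}.{cents % 100:02d}"

def parse_amount(text: str) -> int:
    """Parse a decimal string back to cents -- but expects a comma!"""
    euros, cents = text.split(",")  # German-style input
    return int(euros) * 100 + int(cents)

# Unit tests: each function passes in isolation (100% line coverage).
assert format_amount(1050) == "10.50"
assert parse_amount("10,50") == 1050

# End-to-end test: the integration reveals the mismatch
# that unit tests and the coverage report never showed.
try:
    parse_amount(format_amount(1050))
    print("round trip ok")
except ValueError:
    print("end-to-end test found an integration bug")
```

The coverage tool would report 100% for both functions, which is exactly why a tester testing at higher levels is needed.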

The “Spanish Option” Anti-pattern

The time between a function being coded and it being integrated into the target system is becoming ever shorter. Consequently, the time available for comprehensive testing is becoming shorter as well. For agile projects with fixed iterations, i.e. sprints, another problem arises: the number of functions to be tested increases with every sprint.

Plain standard manual tests cannot handle this. Therefore, testers and developers should work together to develop a test automation strategy.

Manual Work:
  • There are only manual tests
Effect:
  • Delayed feedback in case of errors
  • Testers are overburdened
Solution:
  • Everybody shoulders part of the QA work
  • Test at all levels of testing
  • Introduce automation

The Automated Regression Gap Anti-pattern

A project without any manual tests would be the other extreme. Even though this means a high degree of integration into the CI/CD processes and quick feedback in case of errors, it also causes avoidable problems. A high degree of test automation requires great effort—both in developing the tests and in maintaining them. The more complex the application and the more sophisticated the technologies used, the higher the probability that test runs are aborted due to problems during execution, or that extensive reviews of test deviations are required. Furthermore, most automated tests only test the regression. Consequently, automated tests do not find new errors, but only verify the functioning of the old features.

Therefore, automation should always be used with common sense, and parallel manual and, if necessary, explorative tests should be used to discover new deviations.

100% test automation:
  • There are only automated tests
Effect:
  • Very high effort
  • Everybody is overexerted
  • Build stops due to problems
Solution:
  • Automate with common sense
  • Manual testing makes sense

The Test Singularity Anti-pattern

Tests of different types and at different levels each have a different focus. Consequently, they each have different requirements regarding the test environment, such as stability, test data, resources, etc. Development-related test environments are frequently updated to new versions to test the development progress. Higher levels of testing or other types of tests require a steadier version status over a longer period of time.

To avoid possibly compromising the tests due to changes in the software status or a modified version, a separate test environment should be provided for each type of test.

One Test Environment:
  • There is only one test environment
Effect:
  • No test focus possible
  • Compromised tests
  • No production-related tests
Solution:
  • Several test environments
  • By level of testing or test focus
  • “One test environment per type of test”
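A minimal sketch of the "one test environment per type of test" rule, with hypothetical environment names and redeployment rhythms:

```python
# Sketch (all names hypothetical): each test type gets its own
# environment with its own stability requirements.

TEST_ENVIRONMENTS = {
    "unit/component": {"host": "dev-ci", "redeploy": "on every commit"},
    "system": {"host": "test-sys", "redeploy": "nightly"},
    "end-to-end": {"host": "test-e2e", "redeploy": "per release candidate"},
    "performance": {"host": "perf-lab", "redeploy": "on demand, frozen version"},
}

def environment_for(test_type: str) -> str:
    """Return the dedicated host for a test type, never a shared one."""
    return TEST_ENVIRONMENTS[test_type]["host"]

print(environment_for("system"))  # -> test-sys
```

Development-related environments can then be updated frequently, while the higher levels of testing keep the steadier version status they need.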

The “Manual” Building Anti-pattern

State-of-the-art development depends on fast delivery, and modern quality assurance depends on the integration of the automated tests into the build process and the automated distribution of the current software status to the various (test) environments. These functions cannot be provided without a build or CI/CD tool.

If there are still tasks to be done relating to the provision of a CI/CD process in a project, they can be marked as “to do” on the board.

No Build Tool:
  • There is no build tool
Effect:
  • No CI/CD
  • Slow delivery
  • Delayed test results
  • Dependent on resources
Solution:
  • Introduce CI/CD
  • Highlight gaps on the board

The Early Adopter Anti-pattern

New technologies usually involve new tools, and new versions involve new features. But introducing new tools and updating to new versions also entail a certain risk. It is advisable to proceed with care, and not to change the parts/tools of the projects all at once.

Early Adopter:
  • Always the latest of everything…
Effect:
  • Challenging training
  • Deficiencies in skills
  • New problems
Solution:
  • No big bang
  • Old tools are familiar
  • Highlight deficiencies in skills on the board

The QA Navigation Board workshop

The QA Navigation Board provides a visual aid to agile development teams which they can use to assess the planning aspects of quality assurance at an early stage. During the project, the QA Navigation Board can also be used as a reference for the current procedure and as a basis for potential improvements.

The QA Navigation Board is developed within the framework of a workshop run by an agile QA coach. The duration of the workshop should not exceed 1.5 hours.

Preparation

All the parties involved in the agile project should be invited:

  • Development team (developers, testers)
  • Scrum master
  • Product owner
  • Other stakeholders and shareholders

The QA Navigation Board is affixed to a bulletin board or wall. In addition, each participant receives a printout of the QA Octant as a worksheet.

Step 1:

Presentation of the QA Navigation Board and the objectives of the workshop by the host (agile QA coach), and introduction of the participants.

Step 2:

Brief presentation of the QA Octant and the quality characteristics. The goal is for all the participants to be able to complete the worksheet and to understand the quality characteristics so that they do not talk at cross purposes later.

Furthermore, the participants agree on the dimensions of the QA Octant: which labels are to be given to the intervals of the diagram (1, 2, 3 or S, M, L, XL, etc.)? Then the worksheets are handed out and completed within 5 to 10 minutes by each participant, whose name is indicated on the worksheet (cf. blog post: How to use the QA Octant).

Step 3:

At the end of this time, the host collects the worksheets and puts them up on a bulletin board or wall.

The host then goes through each of the quality characteristics. For this purpose, he identifies the common denominator (average) of each characteristic and discusses the greatest deviations with the respective persons (cf. planning poker). Once the team reaches a consensus regarding the value of a characteristic, the host documents this value.
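This step can be sketched in a few lines of Python. The names and scores below are hypothetical: the host averages each characteristic across the worksheets and flags participants whose score deviates most, similar to planning poker:

```python
# Sketch of the consensus step (names and scores are invented):
# average each characteristic and flag the largest deviations.

worksheets = {
    "Alice": {"security": 3, "usability": 1, "performance": 2},
    "Bob":   {"security": 3, "usability": 3, "performance": 2},
    "Carol": {"security": 2, "usability": 1, "performance": 2},
}

def consensus(sheets, threshold=1):
    characteristics = next(iter(sheets.values())).keys()
    result = {}
    for c in characteristics:
        scores = [s[c] for s in sheets.values()]
        avg = sum(scores) / len(scores)
        # Participants whose score deviates strongly are discussed first.
        outliers = [name for name, s in sheets.items()
                    if abs(s[c] - avg) >= threshold]
        result[c] = (round(avg, 1), outliers)
    return result

for char, (avg, outliers) in consensus(worksheets).items():
    note = f" - discuss with {', '.join(outliers)}" if outliers else ""
    print(f"{char}: average {avg}{note}")
```

The documented consensus value per characteristic is then what the team carries into the next step.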

Step 4:

Based on the valuation of the quality characteristics, the participants then deduce the necessary types of tests. The higher the value of a quality characteristic, the more likely it requires testing by means of an appropriate test procedure. The team then places the types of tests determined in the test pyramid of the QA Navigation Board.
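One possible sketch of this deduction, with hypothetical thresholds and a hypothetical mapping from characteristics to test types:

```python
# Sketch (thresholds and mapping invented for illustration):
# the higher the agreed value, the more likely a dedicated test type.

OCTANT_TO_TEST_TYPE = {
    "functional suitability": "functional/system test",
    "performance efficiency": "performance test",
    "compatibility": "cross-browser/platform test",
    "usability": "usability test",
    "security": "security/penetration test",
}

def derive_test_types(octant_values, threshold=2):
    """octant_values: characteristic -> agreed weight (e.g. 1..4)."""
    return [OCTANT_TO_TEST_TYPE[c]
            for c, value in octant_values.items()
            if value >= threshold and c in OCTANT_TO_TEST_TYPE]

agreed = {"functional suitability": 4, "security": 3,
          "performance efficiency": 1, "usability": 2}
print(derive_test_types(agreed))
```

The resulting test types are what the team places in the test pyramid of the QA Navigation Board.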

Step 5:

Once all types of tests have been determined and placed, the necessary test resources and other test artifacts can be placed on the QA Navigation Board. A checklist can help in this respect (cf. blog post: The QA Map or “How to complete the QA Navigation Board”).

Step 6:

When the team has mostly completed the QA Navigation Board, it is put up in an appropriate place in the team room. The host concludes the workshop and points out that the QA Navigation Board can be updated and further developed by the team, and also used in retrospectives.

Recipes for Test Automation (Part 3)

In my previous two posts, “Ingredients and appliances for test automation, and who is the chef” and “Testomatoes on data salad with stressing”, I described the prerequisites for test automation and the challenges with respect to test data that have to be met in order to successfully implement automated processes. Now we have to ask ourselves: what is a recipe, i.e. a test case for test automation, supposed to look like?

Figure 1: Recipe for test automation

Let us first take a look at a typical recipe. Generally, it consists of two parts: the list of ingredients (test data) and a description of the sequence in which the ingredients are to be used. The description contains both the steps required to prepare the recipe and the names of the ingredients from the list of ingredients. Recipes are more or less detailed, depending on the person for whom they are intended. Recipes for a trained chef are often much less detailed because the chef already knows certain work processes, i.e. they do not need to be described in detail. Recipes for a private household or even a novice in the kitchen have to look different. The same is true for test cases. For a tester with corresponding domain knowledge regarding the domain-driven design of their application, the test cases can be less detailed. But what about automation? Let us compare a baker with a bread-making machine. All the baker needs for a recipe is the instruction “Bake a rye bread”. The bread machine needs a precise recipe description, i.e. the sequence in which the ingredients have to be put into the machine, which program and temperature have to be selected, etc.

In quality assurance, however, where we have more than one recipe or one test case, we want to make the work easier for ourselves. Like in industrial kitchens, we make preparations that will make our work easier later. In the kitchen, the salad garnish, for example, is used for various dishes; similarly, reusable test case modules are created for test cases. For this purpose, several test steps are grouped together and stored as reusable test step blocks. This method can be used both in manual testing and in test automation. Here, too, the difference is in the level of detail: while a low level of detail may be sufficient for manual testing, automation will always require the highest level of detail.
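The idea of reusable test step blocks can be sketched as follows. All step names are hypothetical, like a salad garnish reused across dishes:

```python
# Sketch: reusable test step blocks composed into test cases.
# Each block returns a list of detailed steps (names invented).

def login(user):
    return ["open login page", f"enter user '{user}'", "submit"]

def add_to_cart(item):
    return [f"search '{item}'", f"add '{item}' to cart"]

def checkout():
    return ["open cart", "confirm order"]

# Two test cases sharing the same step blocks:
test_case_order = login("alice") + add_to_cart("book") + checkout()
test_case_browse = login("bob") + add_to_cart("pen")

for step in test_case_order:
    print(step)
```

A change to the login procedure then only has to be made once, in the block, rather than in every test case that uses it.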

Figure 2: Baking bread

vs.

Figure 3: Creating test cases

From this point of view, test automation is in fact the world’s worst cook. It would even burn water if we didn’t tell it to remove the pot from the stove when the water is bubbling. But then, why do we even use test automation? Well, test automation has some important benefits: A cook can forget an ingredient or deviate from the recipe. Consequently, the dish comes out different every time. The automation does not forget anything, and it always sticks to the sequence prescribed in the recipe. The greatest advantage of test automation, however, is the speed at which it can run the test cases. Furthermore, the cook needs a break every now and then. If we imagine such automation in the kitchen, we would get a kind of field kitchen that processes all kinds of recipes in seconds, and accurately places the result on the plate.

That makes test automation sound very tempting, but you should always keep an eye on the cost-benefit ratio. The work involved in feeding the automation with perfectly designed test cases (recipes) is often underestimated: If I have a birthday party with ten guests once a year, a cooking machine probably won’t pay off. But if I have an event business that provides à la carte food to a wedding party every day, such automation is definitely worth considering.

How to use the QA Octant?

In my blog post “The QA Navigation Board – What do you mean, we have to test that?”, I introduced the QA Navigation Board. Now, I would like to share our experience using the QA Octant contained in this QA Navigation Board to identify the necessary types of tests.

One of the questions asked at the start of a software project is: Which quality characteristics does the development, and therefore the quality assurance, focus on? To facilitate the introduction of this topic to the teams, we use the QA Octant. The QA Octant contains the quality characteristics for software systems according to ISO 25010. These characteristics also point to the necessary types of tests that result from the set weighting of the different functional and non-functional characteristics.

Depending on how much the implemented requirements affect the quality characteristics, it is necessary to check these characteristics by means of a corresponding type of test. Apps with a high data throughput for example require efficiency tests, whereas web shops should be tested for compatibility in various browsers. Thanks to the simple visualization and weighting of the different quality characteristics, the QA Octant can be used for planning.

QA octant from the QA Navigation Board

The team asks the product owner or the department: “How important is each of the quality characteristics?” The goal of this round of questions is to visualize a ranking in the weighting of the different characteristics. Most of the respondents will not really differentiate between the quality characteristics, or rather they will answer: “Everything is important!”

It is now up to the team and the host of the meeting to clarify the question to the point that such a differentiation is possible. Different questioning techniques can be used for this purpose.

Differentiation is for example possible by delimiting the area of application. If an HTML-based technical application is used in a company network, and the IT compliance regulations specify one browser and one operating system version, the aspect of compatibility and the associated tests can be ranked lower. If, by contrast, a large number of different combinations of platforms are used, extensive testing has to be planned.

For further differentiation, you can for example use a negative questioning technique: “What happens if, for example, usability is reduced?” Using the example of an application for monthly invoicing, we assume that a negative effect on the usability increases the time it takes to issue an invoice from two to four hours. Since the application is only used once every month, this “delay” would be acceptable, and usability can be ranked lower in the QA Octant.

This questioning technique can be expanded by prioritizing by means of risk assessment. “What happens, or which consequences arise if, for example, the security characteristic is lowered?” The answers result from the following aspects:

  • What financial impact would a failure of the application have if the focus on this characteristic was reduced?
  • How many users would be affected by a failure of the application if the focus on this characteristic was reduced?
  • Would a failure of the application cause danger to life and limb if the focus on this characteristic was reduced?
  • Would a failure of the application affect the company’s reputation if the focus on this characteristic was reduced?
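A hypothetical sketch of such a risk scoring, with invented weights and answers, shows how the four questions can be turned into a ranking:

```python
# Illustrative sketch (weights and answers hypothetical): turning the
# four risk questions into a simple score for ranking a characteristic.

RISK_QUESTIONS = [
    "financial impact of a failure",
    "number of affected users",
    "danger to life and limb",
    "damage to the company's reputation",
]

def risk_score(answers):
    """answers: one value from 0 (none) to 3 (severe) per question."""
    assert len(answers) == len(RISK_QUESTIONS)
    return sum(answers)

security = risk_score([3, 2, 0, 3])   # e.g. an online payment service
usability = risk_score([1, 1, 0, 1])  # e.g. the monthly invoicing tool

# The higher the score, the higher the characteristic ranks in the QA Octant.
print(f"security: {security}, usability: {usability}")
```

The absolute numbers matter less than the relative order they produce among the quality characteristics.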

If results and findings are available with respect to one or several of the quality characteristics, you can compare them to the open quality characteristics and proceed similarly to the complexity comparison for the planning or estimation.

Asking the right questions produces an overview of the quality characteristics. Thanks to the simple visualization and weighting of the different quality characteristics, the QA Octant can be used for planning the types of tests.

The result is not always the most important part of the QA Octant: “the journey is the destination” applies here as well. Through the weighting exercise in the team, together with the PO and/or the department, differing opinions become more discernible, and all the parties involved develop a better understanding.