The QA Navigation Board provides agile development teams with a visual aid they can use to assess the planning aspects of quality assurance at an early stage. During the project, the QA Navigation Board can also serve as a reference for the current procedure and as a basis for potential improvements.
The QA Navigation Board is developed within the framework of a workshop run by an agile QA coach. The duration of the workshop should not exceed 1.5 hours.
Preparation
All the parties involved in the agile project should be invited:
Development team (developers, testers)
Scrum master
Product owner
Other stakeholders
The QA Navigation Board is affixed to a bulletin board or wall. In addition, each participant receives a printout of the QA Octant as a worksheet.
Step 1:
Presentation of the QA Navigation Board and the objectives of the workshop by the host (agile QA coach), and introduction of the participants.
Step 2:
Brief presentation of the QA Octant and the quality characteristics. The goal is for all the participants to be able to complete the worksheet and to understand the quality characteristics so that they do not talk at cross purposes later.
Furthermore, the participants agree on the dimensions of the QA Octant: how should the intervals of the diagram be labeled (1, 2, 3 or S, M, L, XL, etc.)? Then the worksheets are handed out, and each participant completes one, labeled with their name, within 5 to 10 minutes (cf. blog post: How to use the QA Octant).
Step 3:
At the end of this time, the host collects the worksheets and puts them up on a bulletin board or wall.
The host then goes through each of the quality characteristics: for each one, the host identifies the common denominator (the average) and discusses the largest deviations with the respective participants (cf. planning poker). Once the team reaches a consensus on the value of a characteristic, the host documents it.
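The averaging and deviation step can be sketched in a few lines of code. This is a minimal illustration, not part of the original method: the participant names, characteristics, 1-to-4 scale, and the deviation threshold are all hypothetical assumptions.

```python
from statistics import mean

# Hypothetical worksheet scores (1 = low priority, 4 = XL) per participant,
# one value per quality characteristic from the QA Octant.
worksheets = {
    "Alice": {"security": 4, "usability": 2, "performance": 3},
    "Bob":   {"security": 3, "usability": 4, "performance": 3},
    "Carol": {"security": 4, "usability": 1, "performance": 2},
}

def summarize(worksheets, threshold=1):
    """Average each characteristic and flag large individual deviations
    that the host should discuss with the respective participants."""
    characteristics = next(iter(worksheets.values())).keys()
    summary = {}
    for c in characteristics:
        avg = mean(w[c] for w in worksheets.values())
        outliers = {name: w[c] for name, w in worksheets.items()
                    if abs(w[c] - avg) > threshold}
        summary[c] = (round(avg, 1), outliers)
    return summary

for c, (avg, outliers) in summarize(worksheets).items():
    print(c, avg, outliers or "consensus")
```

With these sample scores, usability would be flagged for discussion (Bob rated it 4, Carol 1), while security and performance are close enough to count as consensus.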
Step 4:
Based on the valuation of the quality characteristics, the participants then deduce the necessary types of tests. The higher the value of a quality characteristic, the more likely it requires testing by means of an appropriate test procedure. The team then places the types of tests determined in the test pyramid of the QA Navigation Board.
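The deduction rule "the higher the value, the more likely a dedicated test procedure is needed" can be expressed as a simple lookup over the agreed weights. The mapping below is purely illustrative; in the workshop the team agrees its own mapping and cutoff.

```python
# Illustrative mapping from quality characteristics to test types;
# the actual mapping and cutoff are agreed by the team in the workshop.
TEST_TYPES = {
    "performance efficiency": "performance/load test",
    "compatibility": "cross-browser/compatibility test",
    "security": "security/penetration test",
    "usability": "usability test",
    "functional suitability": "functional test",
}

def derive_test_types(weights, cutoff=3):
    """Return test types for characteristics weighted at or above the
    cutoff, highest-weighted first."""
    ranked = sorted(weights.items(), key=lambda kv: -kv[1])
    return [TEST_TYPES[c] for c, w in ranked
            if w >= cutoff and c in TEST_TYPES]

weights = {"security": 4, "compatibility": 3, "usability": 2}
print(derive_test_types(weights))
```

Here security (weight 4) and compatibility (weight 3) clear the cutoff, so penetration and cross-browser tests would be placed in the test pyramid, while usability (weight 2) would not get a dedicated test type.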
Step 5:
Once all types of tests have been determined and placed, the necessary test resources and other test artifacts can be placed on the QA Navigation Board. A checklist can help in this respect (cf. blog post: The QA Map or “How to complete the QA Navigation Board”).
Step 6:
When the team has mostly completed the QA Navigation Board, it is put up in an appropriate place in the team room. The host concludes the workshop and points out that the QA Navigation Board can be updated and further developed by the team, and also used in retrospectives.
The QA Navigation Board provides a visual aid to the development teams which they can use to assess the planning aspects of quality assurance at an early stage. During the project duration, the QA Navigation Board can also be used as a reference for the current procedure and as a basis for potential improvements. But how should the types of tests and other test artifacts be placed on the QA Navigation Board?
To answer the question, “How and where do we want to test?”, the team would have to comb through the entire development process to find and document test and QA aspects. The development process can be different for every project, which could quickly make this issue highly complex (Fig. 1).
Figure 1: Development and QA process
Again, to facilitate the introduction of this topic to the teams, we have developed the QA Map. The QA Map gives the team a practical tool to plan and document the measures required for optimal testability of the projects. The objective is to determine all QA-relevant issues for the teams and development projects, using a playful approach and at an early stage.
Figure 2: The QA Map
After defining all the key test areas by means of the QA Octant and determining the necessary types of tests, all aspects of the test strategy, such as types of tests, resources and tools, can be visualized, discussed, and prioritized.
A good practice that has emerged from past workshops is to use two tools to steer the completion of the QA Map: a competent host who leads the workshop in the right direction, and a checklist. The checklist comprises questions intended to prompt suggestions during the workshop and to help complete the various parts of the QA Map. These questions are listed below, allocated to the respective field to be completed.
Requirements
What are the requirements?
Do the requirements support the preparation of the test case?
Can requirements and tests be linked?
Test / Code
Where do we place the tests?
Do we have the necessary skills?
Repository
Where do we store the test artifacts?
Are there different artifacts?
Test Management
How do we plan our tests?
How do we document our tests?
How do we report? And to whom?
Automation
How much test automation is required?
Do we need additional tools?
Do we need test data?
Build
How often do we want to build and test?
How do we want to integrate QA?
Do we want to test maintainability?
Test Environments
Do we have an adequate environment for every test?
Will we get in each other’s way?
Figure 3: Example 1 of a completed QA Navigation Board
Figure 4: Example 2 of a completed QA Navigation Board
Once all types of tests have been selected and the team has started to place the other test artifacts (e.g. tools, environments), the host can withdraw. The team should put up the final picture in the team room as an eye-catcher. This way, the QA Navigation Board plan can be used as a reference for the current procedure and as a basis for potential improvements.
One of the questions asked at the start of a software project is: Which quality characteristics does the development, and therefore the quality assurance, focus on? To facilitate the introduction of this topic to the teams, we use the QA Octant. The QA Octant contains the quality characteristics for software systems according to ISO 25010. These characteristics also point to the necessary types of tests that result from the set weighting of the different functional and non-functional characteristics.
Depending on how much the implemented requirements affect the quality characteristics, it is necessary to check these characteristics by means of a corresponding type of test. Apps with a high data throughput for example require efficiency tests, whereas web shops should be tested for compatibility in various browsers. Thanks to the simple visualization and weighting of the different quality characteristics, the QA Octant can be used for planning.
The team asks the product owner or the department: “How important is each of the quality characteristics?” The goal of this round of questions is to visualize a ranking in the weighting of the different characteristics. Most of the respondents will not really differentiate between the quality characteristics, or rather they will answer: “Everything is important!”
It is now up to the team and the host of the meeting to clarify the question to the point that such a differentiation is possible. Different questioning techniques can be used for this purpose.
Differentiation is for example possible by delimiting the area of application. If an HTML-based technical application is used in a company network, and the IT compliance regulations specify one browser and one operating system version, the aspect of compatibility and the associated tests can be ranked lower. If, by contrast, a large number of different combinations of platforms are used, extensive testing has to be planned.
For further differentiation, you can for example use a negative questioning technique: “What happens if, for example, usability is reduced?” Using the example of an application for monthly invoicing, we assume that a negative effect on the usability increases the time it takes to issue an invoice from two to four hours. Since the application is only used once every month, this “delay” would be acceptable, and usability can be ranked lower in the QA Octant.
This questioning technique can be expanded by prioritizing by means of risk assessment. “What happens, or which consequences arise if, for example, the security characteristic is lowered?” The answers result from the following aspects:
What financial impact would a failure of the application have if the focus on this characteristic was reduced?
How many users would be affected by a failure of the application if the focus on this characteristic was reduced?
Would a failure of the application cause danger to life and limb if the focus on this characteristic was reduced?
Would a failure of the application affect the company’s reputation if the focus on this characteristic was reduced?
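The four risk aspects above can be combined into a simple additive score to support the ranking. This is a minimal sketch under assumed 0-to-3 rating scales and example values; teams would calibrate their own aspects and scales.

```python
# Hypothetical risk aspects, each rated 0 (none) to 3 (severe).
# A higher total score argues for ranking the characteristic
# higher in the QA Octant.
ASPECTS = ("financial_impact", "users_affected",
           "danger_to_life", "reputation")

def risk_score(assessment):
    """Sum the 0-3 ratings given for each risk aspect."""
    return sum(assessment.get(a, 0) for a in ASPECTS)

# Example assessments for two quality characteristics (invented values).
security = {"financial_impact": 3, "users_affected": 2,
            "danger_to_life": 0, "reputation": 3}
usability = {"financial_impact": 1, "users_affected": 1,
             "danger_to_life": 0, "reputation": 0}

print(risk_score(security))   # 8
print(risk_score(usability))  # 2
```

In this invented example, lowering the security characteristic carries a far higher risk score than lowering usability, so security would be ranked higher in the QA Octant.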
If results and findings are already available for one or several of the quality characteristics, you can compare the open quality characteristics against them and proceed much like a complexity comparison in planning or estimation.
Asking the right questions produces an overview of the quality characteristics. Thanks to the simple visualization and weighting of the different quality characteristics, the QA Octant can be used for planning the types of tests.
The result is not always the most important part of the QA Octant: “the journey is the destination” as well. Because the weighting is done in the team, together with the PO and/or the department, differing opinions become more discernible, and all the parties involved develop a better understanding.
In development projects, most clients primarily focus on thoughts of functionality and added value. Consequently, QA and testability are neglected in the planning stage. The team then encounters obstacles in the testing stage that can be avoided if the QA tasks are planned with some forethought. For the planning of the advanced testing stages, testers already have an adequate procedure: a detailed test concept that documents the test objectives and defines corresponding measures and a schedule.
Figure 1: Aspects of the test strategy / topics of a test concept
However, this level of detail is not suitable for agile projects and development teams. Nevertheless, the team should consider most of the aspects that are specified in the test concept before starting a project. This is why we have developed a tool that enables the teams to take all the measures required for optimal testability in software projects into account. This tool covers the questions “What needs to be tested?” and “How and where do we want to test?”
To answer the first question, “What needs to be tested?”, in regard to software products, specifying the quality characteristics for the requirements to be fulfilled is decisive. The different quality characteristics are provided in ISO 25010 “Systems and software Quality Requirements and Evaluation (SQuaRE)” (Fig. 2).
Figure 2: Quality criteria according to ISO 25010
Depending on how much the implemented requirements affect the quality characteristics, it is necessary to check these characteristics by means of a corresponding type of test. Apps with a high data throughput for example require efficiency tests, whereas web shops should be tested for compatibility in various browsers.
To facilitate the introduction of this topic to the teams, we use the QA Octant. The QA Octant contains the quality characteristics for software systems according to ISO 25010. These characteristics also point to the necessary types of tests that result from the set weighting of the different functional and non-functional characteristics (Fig. 3).
Figure 3: The QA octant with weighted quality criteria
Thanks to the simple visualization and weighting of the different quality characteristics, the QA Octant can be used for planning. It allows product owners to keep track of the relevant requirements, and the team can classify the requirements according to the quality characteristics together with the product owner. Due to the weighting in the team, different opinions are more discernible, and the agreed classification can be clearly documented. The result then allows for the necessary types of tests to be deduced.
To answer the second question, “How and where do we want to test?”, the team would have to comb through the entire development process to find and document test and QA aspects. The development process can be different for every project, which could quickly make this issue highly complex (Fig. 4).
Figure 4: Development and QA process
Again, to facilitate the introduction of this topic to the teams, we have developed the QA Map. The QA Map gives the team a practical tool to plan and document the measures required for optimal testability of the projects. The objective is to determine all QA-relevant issues for the teams and development projects, using a playful approach and at an early stage. All aspects of the test strategy, such as types of tests and tools, can be visualized, discussed and prioritized together in planning rounds. In addition to the planning, the QA Map with its eye-catching representation also serves as a reminder, or a quick introduction to the team’s test strategy.
Put together, the octant and the map form the QA Navigation Board, which can be put up as a picture on the wall (Fig. 5).
Figure 5: The QA navigation board (with octant and map) as a mural
The QA Navigation Board provides a visual aid to the development teams, by means of which they can assess the planning aspects of quality assurance at an early stage. During the project term, the QA Navigation Board can also be used as a reference for the current procedure and as a basis for potential improvements.
Digitalization and Industry 4.0 entail new requirements for processes and software systems in all company divisions and business areas. Companies that outsource the development of their software or purchase it from third parties face an additional challenge. Given the interconnected way companies operate, the different systems of various manufacturers have to exchange ever more data. Despite the tests by the internal and external development teams, who validate the software at the various development-related test levels before handing it over to the client, and despite the subsequent approval by way of departmental testing, errors occur when the individual components interact. A test center with a focus on comprehensive integration tests could solve this problem, but it has to meet specific requirements to be successful.
Critical errors that become evident only in live operation mean negative publicity both for the product and the companies involved. To prevent this, testing is a fundamental, integral part of modern software development. Only a sufficient number of tests and prompt feedback of the results allow for the quality and maturity of the product to be appropriately documented and confirmed. In the course of large software development projects, the number of new and/or upgraded functions is often in the hundreds. Development teams use component, integration and system tests to test the software before it is handed over to the client. The department approves the delivered software by way of the acceptance test (see Figure 1: Test pyramid).
Figure 1: test pyramid
Companies have several information systems for different tasks such as logistics, accounting, sales, etc., all of which were built using a wide variety of technologies. These information systems are already exchanging data today. The requirements of digitalization and Industry 4.0 amplify these effects: new requirements, such as increased networking throughout the entire value chain, lead to more interfaces between the information systems, or extensions of existing ones. Thus, the overall system becomes more and more complex, as does the life cycle of the software: dependencies have to be taken into account from the identification of the requirements through to the testing.
Figure 2: Challenges in testing due to digitalization and industry 4.0
The effort required for the integrative tests increases enormously, in particular for companies that have their software developed by various service providers. In most constellations, the software systems are developed by several third-party software manufacturers and/or possibly an in-house IT organization. The providers themselves perform more or less in-depth component, integration and system tests, and verify the quality for the individual information system they create. The departments are now responsible for testing the interconnected information systems as they interact (see Figure 2: Challenges in testing due to digitalization and Industry 4.0).
The worst problems, or errors with a high risk, occur in the interaction of the information systems. However, most companies fail to perform the necessary comprehensive, integrative tests, or the testing done is insufficient, resulting in an insufficient quality statement and errors in live operation. This has various reasons: a comprehensive test at the development level is impossible due to the organizational and geographical separation of the service providers involved, and performing the necessary tests for each release is too costly in time and resources for the expert users or testers from the department. Furthermore, the employees tasked with these tests often lack the experience and know-how needed for optimal test planning and for covering all the requirements of integration tests. The physical distance between the respective testers in the various departments further impedes consultation and knowledge-building.
For the company to be successful, it is therefore becoming ever more important to outsource the necessary tests to dedicated testers, significantly increasing both the degree of coverage achieved in testing and the frequency of testing (regression). A possible solution is a test center that oversees the comprehensive integration test that takes place after the tests of the service providers and before the acceptance test of the department (see Figure 3: Comprehensive integration test by a test center). The test center verifies that the information systems interact correctly, and the department ultimately focuses on the approval of the requirements it specified.
Figure 3: Comprehensive integration test by a test center
A test team or test center of dedicated and trained testers has several advantages:
The quality of the information systems is the dedicated test team’s primary objective.
The test results are collected and communicated to the parties involved in an objective manner.
There is a test manager who focuses on quality issues and who is responsible for the management of the test group.
The test manager coordinates with the technical and development departments, determines the requirements to be tested, coordinates the test team, integrates the testers from the departments, communicates with the project management, and documents the results in test reports.
However, there are also disadvantages to an in-house test team or test center: longer release cycles or delays in the provision of the software to be tested can cause the workload to fluctuate. The in-house test team or test center continuously generates costs, but does not always have enough work. On the other hand, in the case of peaks in the testing work, the team may be unable to cover them, or only with great difficulty.
Companies that already use service providers for the development of their software can also call on an external provider offering integration testing as a service for the test center. Using a test center does not merely mean outsourcing the testing. A test center based on a test service agreement is a solution where the responsibilities, duties and settlement terms are customized for the individual client.
The third-party test team or test center is as independent as possible from the software development, highly specialized, and due to the nature of the service, adaptable to the client’s testing requirements. This resolves the above-mentioned disadvantages of an in-house test team or test center, and allows the company to focus on its processes.
In order for a test center to be able to optimally respond to the client’s wishes, certain prerequisites need to be fulfilled. The test center must not be a detached organizational unit, but has to establish open channels of communication and information with all the parties involved. Proximity to the client is of particular importance. Based on our experience, the test team should preferably be located on the client’s premises or at a distance of no more than 5 to 10 minutes on foot. This ensures knowledge transfer and target-oriented coordination with the departments.
The service managers and test managers are responsible for coordinating with the client and/or the department. The service manager agrees the planning of the test services with the client. This includes defining the content of the test services and the responsibilities of the test center. As every client has different requirements and processes, the assumption of the testing requires individual coordination with each client, and an individual transition. If the transition, and thus the assumption of the testing, has been successful, the test manager agrees the testing period, the operative test content and the test cases with the department and/or the client’s test coordination for each test release. But the communication is not limited to these two roles. The test experts in the test center and the department need to be in immediate, close contact in order to create, adapt and review the test cases in the best possible way, and to coordinate when deviations are discovered.
The result of the tests largely depends on the know-how of the testers, which has to comprise at least three aspects: Firstly, the technical know-how regarding the applications to be tested, and secondly, comprehensive knowledge of the testing methods. This ensures that optimal coverage of the requirements is achieved, both in technical terms and with respect to the definition of test cases. Thirdly, the testers also need to know the way the developers work. This enables them to better identify and analyze errors and communicate them to the software developers in the best possible way.
Figure 4: Coordination tasks of the service and test manager
In the other direction, the test center has to exchange information with the third-party providers and the development teams. The objective of this coordination is, for example, to compare the content of the tests already performed with the downstream integration tests in order to identify any gaps or redundancies in the testing. Furthermore, the delivery of new software releases to the test systems is planned, and the analysis and follow-up of deviations are discussed with the client.
In addition to the planning and execution of the test activities of the comprehensive integration test, the test center can also take over the technical support of the testing in the company. This includes, for example, the development and maintenance of the test infrastructure and test environments, and the development of comprehensive test data management. It is important that all the software systems to be tested are installed in the integration test environments, ensuring that the entire business process can be comprehensively tested.
An additional aspect of the test center is the continuous optimization of the test processes. This includes not only the optimization of operations that have already been established, but also the introduction and operation of test automation, the resolution of current interdependencies within and between the test stages, and the early review of the state of the delivered software by way of so-called pre-integration tests.
For this purpose, additional test environments besides the test environments for the comprehensive integration tests are created. The service providers provide pre-release versions of the software in the pre-integration environments, giving the test center’s pre-integration team the opportunity to perform tests with interaction with the other applications at an early stage. Thus, the pre-integration tests help to identify possible deviations between the different information systems of the various providers more quickly.
For companies that have their software developed by various providers and still have complex, interconnected system environments, an external test center offers a quick and in-depth quality statement regarding all the software systems. The objective of the external test center is the establishment of an integrative test process that includes not only the interconnection of the test systems, but an interconnected test organization and interconnected test processes as well. This way, the test center responds to the companies’ requirements regarding a more integrative test focus and flexibility through scalability, communication, and concentrated testing expertise.