Mobile Testing – Do I Have to Reinvent the Wheel?

Most of us cannot imagine a day without our smartphone anymore, using apps to surf the web, listen to music, or play games. Consequently, besides countless developers, numerous testers now work on mobile devices. The question is: do testing methods change because of these new platforms?

In the beginning…

About two years ago, I had the opportunity to immerse myself in the world of mobile testing within the framework of a client project. A small team at Saxonia Systems (since 03/2020 ZEISS Digital Innovation) had started developing an iOS app. Even then, an item on my software tester agenda was: “One day, you’ll do something with apps.” And since I had always had an affinity for Apple and their high quality standards (and I own a variety of their products), I did not hesitate to accept.

From then on, I was supposed to accept the development team’s sprint result for the client every two weeks. The onboarding was quickly completed. Within hours, I had the names of the contacts within the development team, an iPad on my desk, and ready access to JIRA.

And now: Start testing!

The test case specifications were very similar to what I was used to. Over the course of the development of the app, the client introduced the JIRA plug-in Xray. Anyone who has worked with JIRA before and knows test case specification from other tools will quickly get the hang of this plug-in. Since I had worked with several test management tools and with JIRA before, I conquered the Xray learning curve quickly, and soon enough I had specified my first test cases for the acceptance test.

According to the specifications, the acceptance environment was always to be equipped with the latest iOS version, which reduced the number of iOS version combinations and simplified the tests. Until then, I had had to pay attention to which operating system and which service pack were installed in order to ensure that the software was supported. In the mobile sector, and with iOS in particular, the users’ quick readiness to update makes the acceptance test somewhat easier because the test artifact only has to work with the current iOS version at all times.

New challenges

Now, how to transfer the sprint result to my iPad? In my previous projects for this client, all of which were restricted to desktop applications, I found an installer of the software to be tested on a shared drive every morning. I installed it on my virtual machine and was ready to start testing.

Now I was supposed to install an unpublished app on my iPad, but how? I contacted the Saxonia development team, and they told me what to do: All I had to do was give them the Apple ID used on my iPad and install the iOS application TestFlight. TestFlight is an online service by Apple that allows developers to make their apps available to a select group of users for testing prior to the official App Store release.

The development team added my Apple ID to the pool of app testers. I accepted the subsequent invitation in my electronic mailbox, and promptly, the latest version of the app was available to me in TestFlight. One click on “Install”, and the test artifact was downloaded and installed on my iPad. From then on, TestFlight automatically notified me by push email whenever a new version was available for testing. Gone were the times when I had to look for a new build for installation on the shared drive every morning. One glance at my iPad, one click, and I was ready to go. The provision of the test artifact was much more convenient than I was used to from other projects.

Here we go!

The day of the first acceptance arrived, and I was very excited to get to work at last. I could not have been better prepared: The specified test cases were ready for execution, the iPad’s battery was fully charged, the latest app version was installed, and a soft drink was at hand. So let’s get going!

But what was that? I had only just executed the first test steps when something deviated from my test case. I had found a deviation from the acceptance criterion. Consequently, I created a bug from the test execution, clearly recorded all the steps for reproduction, and was going to add a screenshot for better visualization. And that was where I faced a problem: How do you get a screenshot of the iPad into the bug report?

On the PC, that was simple: Capture the screen, save it, and attach it to the ticket. But how do you create a screenshot on the iPad? And how do you get it into JIRA? Being an experienced iOS user, I quickly found out how to create a screenshot, and soon I had one on my iPad. But then I had to think about how to transfer it to the PC. I considered the following options:

  • Send the screenshot to myself by email
  • Upload the screenshot to an online storage space and download it to the PC
  • Use the data cable and connect the iPad to the PC

I chose the data cable, and from then on, I diligently transferred my screenshots to JIRA.

With mobile testing, documenting bug reports (with screenshots) was different than with desktop or web applications. Back then, this meant that bug reporting was more arduous. Today, I work with a MacBook, and I can easily share and transfer screenshots of mobile terminals by means of Apple’s AirDrop.

I was able to complete the acceptance test without further deviations from the target state, and I was happy to see a lot of green test cases. The development team took the bug report into account in the next sprint. The screenshot that had been so difficult to document was much appreciated and helped correct the deviation. Thus, it was worth the effort.

Done!

It was easy for me to reach a conclusion after the first mobile acceptance test. Thanks to my previous project experience, and being trained in the art of software testing, I found my way around in the world of mobile testing quickly. There are always challenges when new technologies are being discovered—but that does not mean you have to reinvent the wheel. Tried and tested processes and methods can be used without difficulty. My affinity for mobile applications and devices certainly gave me an edge in exploring this new world, but I can only encourage every tester to get involved in this exciting field.

Today, I am no longer working on the acceptance side, but I have become an established member of the development team, and, in addition to the acceptance test of the stories, I am also responsible for the management of a variety of test devices and their various operating systems, test data management, and automated UI tests. I will tell you about the challenges in these fields in the next post.

QA Navigation Board – What do you mean, we have to test that?

In development projects, most clients primarily focus on functionality and added value. Consequently, QA and testability are neglected in the planning stage. The team then encounters obstacles in the testing stage that could have been avoided if the QA tasks had been planned with some forethought. For the planning of the advanced testing stages, testers already have an adequate procedure: a detailed test concept that documents the test objectives and defines corresponding measures and a schedule.

Figure 1: Aspects of the test strategy | Topics of a test concept

However, this level of detail is not suitable for agile projects and development teams. Nevertheless, the team should consider most of the aspects that are specified in the test concept before starting a project. This is why we have developed a tool that enables the teams to take all the measures required for optimal testability in software projects into account. This tool covers the questions “What needs to be tested?” and “How and where do we want to test?”

To answer the first question, “What needs to be tested?”, it is decisive to specify the quality characteristics that the software product’s requirements have to fulfill. The different quality characteristics are defined in ISO 25010, “Systems and software Quality Requirements and Evaluation (SQuaRE)” (Fig. 2).

Figure 2: Quality criteria according to ISO 25010

Depending on how much the implemented requirements affect the quality characteristics, it is necessary to check these characteristics by means of a corresponding type of test. Apps with a high data throughput, for example, require efficiency tests, whereas web shops should be tested for compatibility in various browsers.

To facilitate the introduction of this topic to the teams, we use the QA Octant. The QA Octant contains the quality characteristics for software systems according to ISO 25010. These characteristics also point to the necessary types of tests that result from the set weighting of the different functional and non-functional characteristics (Fig. 3).

Figure 3: The QA octant with weighted quality criteria

Thanks to the simple visualization and weighting of the different quality characteristics, the QA Octant can be used for planning. It allows product owners to keep track of the relevant requirements, and the team can classify the requirements according to the quality characteristics together with the product owner. Due to the weighting in the team, different opinions are more discernible, and the agreed classification can be clearly documented. The result then allows for the necessary types of tests to be deduced.
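The deduction of test types from the team’s weighting can be sketched in code. The following is a hypothetical illustration: the characteristic names follow ISO 25010, but the weights, the threshold, and the mapping from characteristic to test type are purely illustrative assumptions, not part of the standard or of the QA Octant itself.

```typescript
// Hypothetical sketch: deriving suggested test types from a weighted QA Octant.
// The eight characteristic names follow ISO 25010; weights (0-3), threshold,
// and the characteristic-to-test-type mapping are illustrative assumptions.
type Weighting = Record<string, number>;

const testTypeFor: Record<string, string> = {
  'functional suitability': 'functional testing',
  'performance efficiency': 'performance/load testing',
  'compatibility': 'cross-browser/interoperability testing',
  'usability': 'usability testing',
  'reliability': 'recovery/failover testing',
  'security': 'security testing',
  'maintainability': 'static analysis / code review',
  'portability': 'installation testing',
};

function suggestedTestTypes(weights: Weighting, threshold: number = 2): string[] {
  // Keep only the characteristics the team weighted at or above the threshold
  // and map each one to its corresponding type of test.
  return Object.keys(weights)
    .filter((characteristic) => weights[characteristic] >= threshold)
    .map((characteristic) => testTypeFor[characteristic]);
}

// Example: a web shop where compatibility and security were weighted highest
const webShop: Weighting = {
  'functional suitability': 3,
  'performance efficiency': 1,
  'compatibility': 3,
  'security': 2,
};
console.log(suggestedTestTypes(webShop));
// ['functional testing', 'cross-browser/interoperability testing', 'security testing']
```

In practice the weighting itself is done collaboratively on the board; a sketch like this only shows how mechanically the test types follow once the weights are agreed.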

To answer the second question, “How and where do we want to test?”, the team would have to comb through the entire development process to find and document test and QA aspects. The development process can be different for every project, which could quickly make this issue highly complex (Fig. 4).

Figure 4: Development and QA process

Again, to facilitate the introduction of this topic to the teams, we have developed the QA Map. The QA Map gives the team a practical tool to plan and document the measures required for optimal testability of the projects. The objective is to determine all QA-relevant issues for the teams and development projects, using a playful approach and at an early stage. All aspects of the test strategy, such as types of tests and tools, can be visualized, discussed and prioritized together in planning rounds. In addition to the planning, the QA Map with its eye-catching representation also serves as a reminder, or a quick introduction to the team’s test strategy.

Put together, the octant and the map form the QA Navigation Board, which can be put up as a picture on the wall (Fig. 5).

Figure 5: The QA navigation board (with octant and map) as a mural

The QA Navigation Board provides a visual aid to the development teams, by means of which they can assess the planning aspects of quality assurance at an early stage. During the project term, the QA Navigation Board can also be used as a reference for the current procedure and as a basis for potential improvements.

Good luck testing!

Protractor – Automated Testing with Angular

Critical errors that become evident only in live operation constitute a major financial risk, and they also mean negative publicity both for the product and the companies involved. This is why testing is a fundamental, integral part of modern software development. High test coverage and prompt feedback of the test results allow for the quality and maturity of the product to be appropriately documented and confirmed.

Using test automation tools constitutes a solution that enables quick execution of such tests and meets the requirements of modern development projects. These tools work according to the principle of tool-based collection of information via the graphic interface of the product to be tested, which enables the automated execution of scripted interactions, and, as a result, the assessment of the respective application.

Test automation tools ensure quick and continuous feedback regarding the quality of the software to be tested. But there are some points that have to be observed when using them. There are various tools available on the market which differ in how they are integrated into the development and testing process, or in which technologies they support. The efficient use of a test automation solution depends primarily on the engine used to control the graphic interface. This engine has to optimally support the technology to be tested. Development projects using “new” technologies such as Angular2 in particular face the problem that the available and familiar tools are not always as up to date as the technology they are used on.

The Clintr project and testing with Protractor

We use Angular2 as the development framework for our current software development project, Clintr, and we wanted a high degree of automated test coverage from the start. Clintr is a web application that alerts service providers to prospective customers in their contact network. For this purpose, it uses and analyzes data from the XING API to derive a demand for services in companies according to defined criteria in a fully automated manner. If a demand for services in a company has been identified, Clintr searches the service provider’s network of contacts (e.g. XING or CRM systems) for contact paths to the prospective customer. The back-end consists of Spring Boot-based microservices, with Kubernetes as the container cluster manager, while Angular (>2) is used in the front-end. In order to be able to release new versions of the application at a high frequency, a continuous delivery pipeline into the Google Cloud was established, both for the test and the production environment.

Because we used Angular2, we chose the test automation tool Protractor. Protractor is based on Selenium and the WebDriver framework. As usual, the interface tests are run in the browser, simulating the behavior of a user working with the application. Since Protractor was written specifically for Angular, it can access all the Angular elements without limitation. Furthermore, additional waiting constructs such as “sleeps” or “waits” are not necessary because Protractor recognizes the state the components are in and whether they are available for the intended interaction.

How to

For the execution, you need the Angular CLI and Node.js. Then the interface tests (end-to-end, or e2e) can be created in the project. To prepare for a local test run, you use the console to switch to the project directory and enter “ng serve”. After entering “ng e2e”, the test cases are run against the application on localhost.
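The `ng e2e` command reads a Protractor configuration file. For reference, a typical protractor.conf.js as generated by the Angular CLI looks roughly like the following sketch; the spec path, baseUrl, and reporter setup are illustrative and depend on the individual project:

```javascript
// protractor.conf.js - a typical Angular CLI-generated configuration (sketch;
// paths and baseUrl are illustrative and depend on the project layout)
const { SpecReporter } = require('jasmine-spec-reporter');

exports.config = {
  allScriptsTimeout: 11000,
  // which files count as test cases - the *.e2e-spec.ts files described below
  specs: ['./e2e/**/*.e2e-spec.ts'],
  capabilities: { browserName: 'chrome' },
  directConnect: true,
  // the application served locally via "ng serve"
  baseUrl: 'http://localhost:4200/',
  framework: 'jasmine',
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 30000,
  },
  onPrepare() {
    // compile the TypeScript specs on the fly
    require('ts-node').register({ project: 'e2e/tsconfig.e2e.json' });
    jasmine.getEnv().addReporter(new SpecReporter());
  },
};
```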

The end-to-end tests consist of TypeScript files with the extension .e2e-spec.ts, .po.ts, or just .ts. The test cases are described in the .e2e-spec.ts files; only tests contained in these files are executed. The following example shows the header of a .e2e-spec.ts file:

    import { browser, by, ElementFinder } from 'protractor';
    import { ResultPage } from './result-list.po';
    import { CommonTabActions } from './common-tab-actions';
    import { SearchPage } from './search.po';
    import { AppPage } from './app.po';
    import { CardPageObject } from './card.po';
    import * as webdriver from 'selenium-webdriver';
    import ModulePromise = webdriver.promise;
    import Promise = webdriver.promise.Promise;

    describe('Result list', function () {

      let app: AppPage;
      let result: ResultPage;
      let common: CommonTabActions;
      let search: SearchPage;

      beforeEach(() => {
        app = new AppPage();
        result = new ResultPage();
        common = new CommonTabActions();
        search = new SearchPage();
        result.navigateTo();
      });

Like the other file types, it starts with the imports. Then the test cases start with describe; the string in parentheses specifies which area is to be tested. Below that, the individual .po.ts page objects required for the subsequent tests are declared and instantiated. The beforeEach function allows preconditions for the tests to be defined. For the purpose of reusability, tests can also be exported to modules (see the following code example):

    it('should display the correct background-image when accessing the page', require('./background'));
    it('should send me to the impressum page', require('./impressum'));
    it('should send me to the privacy-policy page', require('./privacy-policy'));
     
    it('should open the search page after clicking clintr logo', require('./logo'));

The following code shows typical e2e tests: first, the expected result is specified, and then the test is executed. Keep in mind that the e2e tests in the .e2e-spec.ts only call the methods of the .po.ts files and then wait for the result to be returned. The executing methods belong in the .po.ts.

    it('should still show the elements of the searchbar', () => {
      expect(result.isSearchFieldDisplayed()).toBe(true);
      expect(result.isSearchButtonDisplayed()).toBe(true);
    });

    it('should show the correct Search Term', () => {
      expect(result.getSearchTerm()).toBe(result.searchTerm);
    });

The following code example shows the .po.ts relating to the .e2e-spec.ts above. Not every .e2e-spec.ts necessarily has its own .po.ts, and vice versa. A .po.ts can, for example, contain tab-related actions such as switching or closing tabs. And as long as a .e2e-spec.ts only uses methods from other .po.ts files, it does not need its own .po.ts. As mentioned above, the .po.ts starts with the imports, and then the class (ResultPage in the example) is created.

When called, the navigateTo method causes the test to navigate to the specified page. Since the test is not supposed to do this directly in the present case, it navigates to the Search page first. There, a search term is entered and the search is started. Thus, the test arrives at the result_list page, where the tests are subsequently run.

    import { element, by, ElementFinder, browser } from 'protractor';
    import { SearchPage } from './search.po';
    import * as webdriver from 'selenium-webdriver';
    import { CardPageObject } from './card.po';
    import ModulePromise = webdriver.promise;
    import Promise = webdriver.promise.Promise;

    export class ResultPage {

      public searchTerm: string = 'test';

      search: SearchPage;

      navigateTo(): Promise<void> {
        this.search = new SearchPage();
        return this.search.navigateTo()
          .then(() => this.search.setTextInSearchField(this.searchTerm))
          .then(() => this.search.clickSearchButton());
      }

Each of the following three methods queries an element of the page. The first two methods have a union type return value: they can return either a boolean or a Promise<boolean>, i.e. either a Boolean or a promise of one. When using a Promise as the return value, it should always be followed by a then; otherwise, asynchronous errors may occur.

    isSearchButtonDisplayed(): Promise<boolean> | boolean {
      return element(by.name('searchButton')).isDisplayed();
    }

    isSearchFieldDisplayed(): Promise<boolean> | boolean {
      return element(by.name('searchInputField')).isDisplayed();
    }

    getSearchTerm(): Promise<string> {
      return element(by.name('searchInputField')).getAttribute('value');
    }

Example

An implementation example of a test case in Clintr is the test of the link to the legal notice. First, the test is supposed to click on the link. Then, it is supposed to switch to the newly opened tab and confirm that the URL contains /legal-notice. Lastly, it is supposed to close the tab. This test was initially created only for the home page.

    it('should send me to the impressum page', () => {
      impressum.clickImpressumLink();
      common.switchToAnotherTab(1);
      expect(browser.getCurrentUrl()).toContain('/legal-notice');
      common.closeSelectedTab(1);
    });

As, according to the acceptance criteria, the legal notice has to be accessible from every subpage, the test was later adopted into all the other specs. To keep the code clear, it was decided to export this test into a module (impressum.ts).

    import { browser } from 'protractor';
    import { AppPage } from './app.po';
    import { CommonTabActions } from './common-tab-actions';
     
    module.exports = () => {
     let common: CommonTabActions = new CommonTabActions();
     new AppPage().clickImpressumLink().then(() => {
     common.switchToAnotherTab(1);
     expect(browser.getCurrentUrl()).toContain('/legal-notice');
     common.closeSelectedTab(1);
     });
    };

It is used in the e2e-spec.ts this way:

    it('should send me to the impressum page', require('./impressum'));

Particularities, notes & problems

Certain predefined functions can be used in each e2e-spec.ts, e.g. beforeEach, beforeAll or afterEach and afterAll. As the names suggest, the code contained in one of these functions is executed before or after each test, or before or after all the tests. In our example, each test is supposed to start with its own page view. Accordingly, the navigateTo method can, for example, be placed in the beforeEach function. afterEach can, for example, be used to close tabs that were opened during the tests.

Each test starts with the word it. If you add an x before this word, i.e. xit, the test will be skipped in the test run. But, contrary to a commented-out test, the test run will report that one or more tests were skipped. If you write a test case with f, i.e. fit, only those tests starting with fit will be taken into account in the test run. This is expedient when you have a large number of test cases and only want to run some of them.

When working with Promises, which some methods return, you should keep in mind that incorrect handling can cause asynchronous errors. Many events, such as pressing a button or querying whether an element is displayed, return such a Promise. Even opening a page returns a Promise<void>. To avoid errors, each Promise that entails further actions, such as pressing a button or outputting a resulting value, should explicitly be followed by a then. For example:

    pressButton().then(() => {
      giveMeTheCreatedValue();
    });
    // If this value is again a Promise that should trigger something,
    // the whole thing looks like this:
    pressButton().then(() => {
      giveMeTheCreatedValue().then(() => {
        doSomethingElse();
      });
    });
    // or slightly shorter
    pressButton()
      .then(giveMeTheCreatedValue)
      .then(doSomethingElse);


Conclusion

Protractor is highly suitable for the automation of interface tests in a software development project using Angular2. The documentation on the project website is very detailed and comprehensive. And since Protractor is based on Selenium, the tests can easily be integrated into the build process.

The Life of a Tester

Sometimes, during your daily project routine, you ask yourself, what am I doing here, and do the others on the team know what I’m actually doing? As a tester, I have asked myself this question before, and I have put together some of my thoughts. In the following post, you will find some anecdotes from my life as a tester.

Daily

Another day, another daily, and I, as the tester, wonder how to favorably present the specification of test cases today as work done, current work, and future work. Usually, I say “I specified test cases for user story xyz and will continue doing that”, or something to that effect. Rather monotonous, and often cause for a condescending smile, but this is in fact the main task during the first days of a sprint. After all, as the tester, I am the one laying the foundation for the proper testing of the current user stories.

Things get more exciting when I report that I am now running test cases as well, proud to have uncovered an error here and there. But pride is really the wrong word in this context. How confidently I present such a finding depends on the developers, the current mood in the team, the priority of the error, and the tester’s (i.e. my) self-confidence. Sometimes I just kind of mumble the word “bug” and then let someone else take the floor. But why be so reticent, really?

As a tester, I am responsible for pointing out if the software does not work as defined. And that is a good thing. How are we supposed to deliver high-quality software if we do not accept that mistakes happen, and that they cause stressful situations or conflicts? Have courage—the error does not disappear if you do not address it (on the contrary, it gets worse the longer you wait to fix it), and it can easily be forgotten in the jumble of the daily development work if it is simply pinned to the task board between the major user stories.

Regression testing/test automation

Speaking of the task board: This is a good tool for keeping an eye on the user stories, the tasks they contain, and the progress made during the current sprint. If you take the time to stop and consider the work in the team with the help of the task board over a period of weeks or even months, you could come to the following conclusion: The developers work with shiny new tools, while the testers dig through the dirt of the past, gazing at all the glossy new things. That does not mean that I as a tester am not aware that the developers also have to adapt existing code and fix bugs from time to time. But as a tester, I have a very different perspective. Despite all the automated tests that our developers write, I have to keep track of everything that is affected by any change, i.e. I keep looking back at the relics of the past while the programmer forges ahead.

And this is where the greatest challenge in my everyday work comes in: regression testing. Although this can be automated as well, the tools for that are not always kind to me. If I do not use them from the start, I am continuously busy chasing the changes made by the developers. But even if I use them from the start, I have to use them on the UI level to fulfill my tasks, which is tedious and, unfortunately, prone to errors as well. Some thoughts on this:

Test automation tools suggest that I can automate testing simply by recording test steps. Unfortunately, my initial enthusiasm is replaced with the realization that this is not enough because my test records usually do not persist for more than a day. So let us try something new: This time, we program, leaving the recording tools behind. Immediately, new obstacles appear. I am not good at programming, and now I have to deal with some cryptic things. And the next idea is born: Use the developers’ experience. Again, there are limits to this idea because it usually only works as long as the developer has time and is willing to do this. Which is understandable, considering that the developer wants to create something new.

So I am torn: Do I write automated tests, or do I do everything manually? I definitely need to plan for regression tests. Whether they turn out to be automated, manual, or exploratory depends on a wide variety of parameters. I appreciate the value of automated test cases that can be run in every sprint: greater test coverage, less manual testing of defined workflows. But for me, test case automation by itself is no cure-all because it cannot replace my experience as a tester, and the corresponding ability to “think outside the box”.

Conclusion

In any case, testing as such—be it manual testing, regression testing or test automation—should be acknowledged within the team, and everybody should work together on the quality assurance. Otherwise, if their work is not taken seriously, the tester will quickly start to feel like an outsider rather than a part of the team.

The Agile Test Manager – An Oxymoron? (Part 3)

In the first and second part, we successfully completed the agile transition of the test manager’s responsibilities, and we proved that small scrum teams no longer require a test manager. Several scrum teams working together on the same product pose another challenge. Can the previous observations made with respect to a small scrum project with a single team also be adopted for large projects?

Do we need a test manager for large scrum projects?

In many companies that use scrum, several teams work together on the development of the same product. There have already been some thoughts regarding scaled scrum on the company level and how it is possible to comply with agile principles even as the level of complexity increases. In this context, Boris Gloger identifies two problematic areas: “compliance with the scaled scrum framework” and “scaling of the requirements process”. To solve this, he introduces additional roles: the company scrum master and the company product owner, who assist all the scrum masters and product owners of the project. Both roles coordinate with their counterparts in the individual scrum teams, and they manage the scrum tools on the company level, such as the company product backlog and the company scrum board.

Can this concept be adopted for testing as well, i.e. do we need something like a company test owner or a company QA master? The purpose of this new role would be the coordination of and responsibility for the overall, integrative testing process, establishing the necessary common test structures, and filling the backlog with QA-related stories, tasks and incidents.

Coordination meeting on QA issues
Figure 1: Coordination meeting on QA issues

For projects with a manageable number (2 to 8) of scrum teams, this kind of coordination can be done within the framework of the general coordination meeting, the scrum of scrums. If the coordination becomes more test-specific and time-consuming, it is advisable to establish a separate test meeting as required. Each team sends a team member with the necessary test know-how to this meeting. If the number of teams is higher, or if coordination between several projects is required with respect to QA issues, the guilds can again be used for this purpose. The guilds collect examples of best practices that they provide to all the projects, or they appoint coaches who bring agile testing methods to new projects. The guild masters agree on important decisions and moderate the scrum teams wherever it is necessary to define common rules and solutions.

We can conclude that agile companies no longer need a test manager, even for large projects. However, this is possible only by way of a complete agile transition of the test manager’s responsibilities because each of the test manager’s responsibilities—in particular with respect to communication and coordination—has to be matched with an agile tool.

The Agile Test Manager – An Oxymoron? (Part 2)

In the first part, we successfully completed the agile transition of the test manager’s operative responsibilities, and we proved that small scrum teams no longer require a test manager. But will this work for the test manager’s strategic responsibilities, too? And who assumes the test manager’s strategic responsibilities?

Who assumes the test manager’s strategic responsibilities in agile companies?

A test manager’s strategic area of responsibility includes quality management (QM), which, according to ISO 8402, comprises “all activities of the overall management function that determine the quality policy, objectives and responsibilities and implement them by means such as quality planning, quality control, quality assurance and quality improvement within the quality system”.

Figure 1: Responsibilities of the test manager (according to the ISTQB)

Irrespective of whether a company uses traditional or agile methods in development, the company’s management is responsible for the quality management. The scrum team cannot, should not and must not assume this responsibility. At first glance, nothing in the organization and the division of responsibilities changes. At second glance, however, a problem arises for agile companies. In the traditional world, the test manager used to act as the interface between the operative and the strategic level. They transmitted information between the management and the operative level, passed strategic specifications and methods on to the test teams, or forwarded developments and improvements to the management.

But how does information get transmitted from the strategic to the operative level and vice versa in an agile environment? The answer: The agile transition does not stop with the allocation of the test manager’s operative responsibilities to the scrum team. The solution for the agile transition of the “interface” between the two levels is similar to the agile transition of the test manager’s operative responsibilities. New concepts and tools are available for the current corporate communication tasks.

One of the new concepts is the so-called guild. Guilds (called competence teams in our company) are a tool that puts and keeps knowledge and information management on track. They are organized in a matrix structure alongside the normal company structure. The objective of the guilds is to pool the company’s experts, offer the staff a platform to exchange knowledge or carry out training programs, and agree on general project decisions such as the development of test environments or rules for code quality. Depending on the company’s goals, the structure and composition of the guilds can differ: For example, guilds can be divided according to competence areas such as Java development, .NET development, testing or process analysis. Or entire scrum teams can be aggregated in guilds, working together on a specific subject in the project, such as QA, database connection, GUI, or interfaces (see Figure 2).

Figure 2: Example of classification and structure of the guilds by competence area

The guilds work according to the following pattern: The primary role that each employee assumes in their daily work (e.g. developer, tester [QAlas], product owner and scrum master) is determined. With this role, they are allocated to a corresponding guild. Within the guild, the members exchange information relating to their areas of responsibility, carry out in-house training seminars, collect the available knowledge in portals, or discuss best practices in individual projects. A guild master is responsible for the moderation and coordination within the guild. In accordance with the motto “primus inter pares”, they do not have more rights than their colleagues in the guild, and they are elected from among the guild members. The guild master is the principal contact for the members of their own guild, the other guild masters and the management. This is necessary because the guilds engage in active interdisciplinary exchange, and they are also supposed to forward feedback from the operative level—such as new developments or technologies—to sales, or suggestions regarding training seminars to HR.

In the last part, we will look at what happens when several scrum teams work together on the same project and the effort required to coordinate them increases for the field of QA as well.

The Agile Test Manager – An Oxymoron? (Part 1)

Some time ago, a colleague of mine asked me whether we still needed a test manager in an agile development process like scrum. My first response was no because the scrum framework only knows three roles: product owner, development team, and scrum master. Accordingly, the scrum team—i.e. the three scrum roles mentioned above—does not provide for a test manager. But on second thought, I wondered who among the scrum team was supposed to assume the test manager’s responsibilities in and around the sprint.

Studies such as the 2014 ASQF Industry Report [1] and the 2011 Standish Chaos Report show that agile methods have already become a permanent fixture in companies. Furthermore, the Standish Chaos Report shows that projects using agile processes are more likely to be successful than “conventional projects”. The Agile Manifesto, co-signed by Ken Schwaber and Jeff Sutherland, was the basis for this development. It defines basic principles and specifications that uncover “better ways of developing software by [the parties involved in the process] doing it and helping others do it” [2].

Scrum process and parties involved
Figure 1: Scrum process and parties involved

The key principles from the Agile Manifesto are:

  • Individuals and interactions are more important than processes and tools.
  • Working software is more important than comprehensive documentation.
  • Client collaboration is more important than contract negotiation.
  • Responding to change is more important than following a plan.

Companies that switch to agile methods of development have a competitive advantage over companies using traditional methods. However, transitioning the processes to development methods such as scrum—also called agile transition—presents a great challenge. Agility is not achieved simply by dividing the development milestones into sprints and appointing a product owner (see Figure 1). Comprehensive changes in the organization are required to achieve an agile way of working and living.

The challenge of the agile transition becomes very clear when you look at the example of the test manager. The question: If we do not need a test manager in scrum, who assumes their responsibilities? The product owner? The scrum master? The team?

According to the Scrum Alliance, the product owner is the person responsible for developing the product accurately and on time. The product owner fills and refines the product backlog, and ensures that everybody knows what it contains and what is given which priority. Consequently, they are usually closest to the “business side” of the project. Scrum requires the development team to be a cross-functional group, pooling all the skills required for the development of the product. The team organizes itself, i.e. it independently chooses the content to be implemented in the sprint and takes care of the planning, management, and implementation. The scrum master is the “pilot” guiding the team through the depths of the scrum framework, helping the rest of the scrum team to comply with scrum principles. Another task of the scrum master is removing obstacles that hamper the team’s progress.

Even after studying the scrum roles, we do not know who assumes the test manager’s responsibilities, or how they are allocated. To answer these questions, we must first determine the test manager’s responsibilities in the traditional testing and quality assurance process. According to the International Software Testing Qualifications Board (ISTQB), the certification board for testers, the test manager’s role and responsibilities include more than just supervising the test project. They manage the testing department or the test team, and thus, the resources for the tests. They prepare reports; escalate to development, technical department and project management; assess test projects; ensure compliance with the company’s quality processes; procure the testing tools for the organization; and review the test plans and the test cases.

The responsibilities can be allocated to two fields: strategic and operative (see Figure 2). The operative level includes the planning and conceptual design of the test cases and tests, monitoring the execution of the tests, and the communication within the project. The strategic level includes the quality management tasks.

Responsibilities of the test manager (according to the ISTQB)
Figure 2: Responsibilities of the test manager (according to the ISTQB)

The operative responsibilities cannot be assumed by the scrum master or the product owner. The product owner does not get involved in the implementation, nor the scrum master in the development. In agile development, the testing tasks are assumed by the team. Following the definition and allocation of the test manager’s responsibilities, we can examine how they can be integrated into the agile process. However, there is a problem: It is not possible to allocate them to a specific person because within the scrum framework, all tasks are distributed across the agile team.

The solution to the problem lies within the scrum framework itself. It provides a comprehensive package of tools and artifacts that can be matched with the test manager’s responsibilities. We did a complete agile transition of the responsibilities in our scrum teams, and we found that scrum provides a tool or artifact for every one of the test manager’s responsibilities.

Agile transition of the test manager – test coordination
Figure 3: Agile transition of the test manager – test coordination

For example, the test strategy and the paramount quality characteristics are considered during the planning of the sprint and the backlog grooming. The pass-fail criteria, i.e. the criteria that determine whether a sprint has been successful or the test has been completed, are defined in the definition of done (see Figure 3).

Agile transition of the test manager – test implementation
Figure 4: Agile transition of the test manager – test implementation

In the sprint review, the implementation of the specifications is verified and validated (see Figure 4).

Agile transition of the test manager – test coordination
Figure 5: Agile transition of the test manager – test coordination

In addition, the stories and their representation on the scrum board and in the backlogs facilitate documentation of the progress and the assessment of the quality level (see Figure 5).

Accordingly, there is an agile tool or artifact for each of the test manager’s responsibilities. And thus, complete agile transition of the test manager’s responsibilities is achieved. Provided that the scrum team has the testing know-how required for the implementation of the upcoming development and testing tasks, small projects do not require a test manager. If this know-how is not yet available, the team has to be coached accordingly.

In the next part, we will examine who can assume the responsibilities on the strategic level, and what happens in projects where several scrum teams work on the same product together.

Test-driven Development

Using agile methods alone does not guarantee better software because the agile context poses a major challenge for many traditional methods of quality assurance. In a system that changes frequently, how can we permanently ensure that components deemed functional now do not fall back into an undefined state due to future improvements?

While the waterfall model provides for testing only at the end of the project, and the frequently used V-model (see Figure 1) also has a clearly defined chronological sequence, tests in an agile environment have to be run so often that they should always be executed under the exact same conditions and with the least possible effort. Furthermore, they should be ready as soon as possible after the implementation of the function to be tested. This is the only way they can keep up with the continuous changes.

V-model and test-driven development
Figure 1: The V-model and test-driven development

Test-driven Development

The biggest problem with traditional testing in this context is the fact that it is usually done downstream. It is only done when there is something to test. This seems to be trivial at first glance because you would think that if you want to test something, you need a corresponding test object.

Even though the test cases and the relevant data can be derived from the specifications, without an actual test object, the execution seemingly does not produce an analyzable result. However, this dogma does not apply to automated tests. Whereas continual manual testing would be far too expensive in the long run, automated tests can safely be written and executed shortly before the actual implementation, so that they temporarily test only a shell of the future test object, such as the one provided by an interface specification.

In this case, the test is a kind of automated specification whose requirements are not fulfilled at first (red area in Figure 2). During the implementation stage, the developer can check their code against it, getting immediate feedback on whether their work meets the requirements. Once this status is achieved (green area), the state of the source code is assured, and the developer can then restructure it (yellow area). In the case of new requirements or fundamental changes, the tests will fail again (red area again) and must be made to pass again (green area again).
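A minimal, hypothetical example of this cycle in Python (the leap-year rule is chosen purely for illustration): the test is written first as an executable specification, and on its own it would fail. The implementation beneath it then turns the test green, after which refactoring can happen under its protection.

```python
# Red phase: the test exists before the implementation and acts as an
# automated specification. Executed alone, it would fail.
def test_leap_year_rules():
    assert is_leap_year(2024)      # years divisible by 4 are leap years
    assert not is_leap_year(1900)  # ... except full centuries
    assert is_leap_year(2000)      # ... unless divisible by 400

# Green phase: the simplest implementation that satisfies the test.
# Refactoring (yellow phase) is now safe, because any regression
# immediately turns the test red again.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```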

This general sequence of actions is then repeated at ever finer granularity. The result is a very high test coverage that allows errors to be found early on and simplifies their correction. Furthermore, interfaces and their implementation are defined more thoroughly than with traditional methods. This is what enables the high level of adaptability of the source code, which is essential for agile software projects.

Sequence of the test-driven method
Figure 2: Sequence of the test-driven method

In addition, one of the most important characteristics of automated tests is their legibility. Basically, they can be understood to be not only a test of the functionality, but also a kind of documentation of it. Because of how close they are to the functionality they test, and because of their clear structure, they describe the behavior of the code under test in a way that normal written text cannot. And because their expected result is defined independently of the test object, they automatically tell the observer when they are no longer up to date.

Behavior-driven development

Component tests are clearly legible only for the group of people who understand the respective programming language and who know the test framework used. Outsiders, such as test managers, project managers or even end users, gain very little from the granularity and the level of detail.

Another weakness becomes evident when you consider the development process as a whole. Tests created with test-driven development are excellent for verifying parts of the software based on their specifications, but not for validating them against the acceptance criteria. Yet it is precisely these criteria that need to be fulfilled, which is why they, too, should be turned into tests before the implementation.

This is why the so-called behavior-driven or acceptance test-driven development evolved from the well-known test-driven development. With this type of development, the tests and specifications merge thanks to the use of certain tools and keywords, allowing the test results to be analyzed in plain language. For this purpose, we turn away from describing individual functions and towards detailing the expected behavior. This way, the definition of the individual test cases no longer requires background knowledge of the program’s architecture, so that even non-developers can understand it. Often, this goes so far that we no longer speak of test cases, but of specifications.
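As a sketch of what such a behavior description can look like, the following hypothetical example phrases a test in given/when/then style. The `Cart` class and its methods are invented for this illustration; tools such as Cucumber or behave go further and move the wording into plain-text feature files, but even in plain Python the intent stays readable for non-developers.

```python
# Minimal shopping-cart model, invented for this example.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    @property
    def total(self) -> float:
        return sum(price for _, price in self.items)

# The test reads as a behavior specification rather than a unit test.
def test_adding_an_item_increases_the_total():
    # Given an empty shopping cart
    cart = Cart()
    # When the customer adds a book costing 12.90
    cart.add("book", 12.90)
    # Then the cart total equals the price of the book
    assert cart.total == 12.90
```

Note that the given/when/then comments, not the implementation details, carry the meaning of the test; a reader can validate the expected behavior without knowing how `Cart` is built.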

Behavior-driven development and test-driven development are not to be understood to be rivals, however. The former primarily serves to validate the software and test the system integration, while the latter serves to verify and thus ensure the functional correctness. Consequently, it makes sense to combine the two.

Effort

Due to the fact that developers are expected to write the tests in addition to the actual product development, test-driven development involves greater effort than traditional methods at first. An adequate infrastructure needs to be established, the use of additional tools needs to be learned, and additional code that does not immediately result in usable features needs to be written. This may be discouraging at first, but in fact it only illustrates the effort that will otherwise be required much later over the course of the project. Instead of the effort involved in long testing and bug-fixing phases near the end of the project, a large part of the quality assurance is prepared at a time when the project team still has quite a lot of room to maneuver. The work required decreases as the project progresses because the infrastructure usually has to be established only once, and then only needs to be adjusted here and there.

Effort of test-driven development
Figure 3: When is TDD profitable?

Conclusion

Using the methods presented in this post constitutes a significant break with the traditional methods of software development in some ways, but it facilitates a kind of quality assurance and documentation that is difficult to achieve otherwise. Basically, the software itself reveals its current status without the need to maintain external resources that can quickly become obsolete. By combining the different approaches, you can ensure a high level of quality and a comprehensive understanding of the project across all levels of abstraction.