To achieve our strategic goals, we work on Strategic Initiatives (SI) throughout the year, each focusing on a defined topic. In October 2023, the Strategic Initiative Sustainability was launched to ensure that we can establish a strong position in this area. Sustainability has become increasingly important in all areas of life and society, and as such, it is becoming more important for us at ZEISS Digital Innovation (ZDI). Especially for younger and future generations, sustainability will play a significant role, even in the choice of employer. Our partner companies and customers will increasingly ask how seriously we take sustainability and what we are doing in this area.
What are the goals of the SI?
Our SI Sustainability is divided into two teams. The first team focuses on the aspect of business operations, while the second delves into sustainable software development.
The Business Operations team aims to develop a dashboard for our employees to get updates on ZDI’s ecological footprint metrics. Furthermore, ZDI’s carbon emissions in business operations will be analyzed to identify areas for improvement.
The Sustainable Software team aims to improve sustainability in our core business of software development. The planned steps are a state-of-the-art analysis of sustainability practices in the industry, a data-driven analysis of our own software emissions, and the derivation of a concept for improvement. Two communication initiatives will be developed for all employees, and at least two exchange sessions will be established between the different ZEISS units to discuss the concept of sustainable software development.
What has been done so far?
In this section, we will look at what has been done so far in both workstreams, starting with Business Operations.
In Sprint 0, we first focused on the various metrics used in sustainability reporting. We came across the Greenhouse Gas (GHG) Protocol and its three scopes. Scope 1 (Burn) covers all emissions that are directly caused by ZDI and can be controlled, such as fuel for company cars or heating and cooling systems. Scope 2 (Buy) includes all emissions that are “bought” by ZDI, such as electricity. Scope 3 (Beyond) includes all emissions that ZDI has no direct control over, such as business travel or employee commuting.
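The three scopes can be illustrated with a small sketch that sorts emission sources into GHG categories. The source names follow the examples above, but all tonne values are invented for illustration only and do not reflect ZDI's actual footprint:

```python
# Sketch: grouping emission sources into the three GHG Protocol scopes.
# The tonne values below are invented for illustration.
EMISSION_SOURCES = [
    ("company cars (fuel)", "scope 1", 120.0),
    ("heating and cooling systems", "scope 1", 80.0),
    ("purchased electricity", "scope 2", 210.0),
    ("business travel", "scope 3", 95.0),
    ("employee commuting", "scope 3", 310.0),
]

def totals_by_scope(sources):
    """Sum emissions (in tonnes CO2e) per GHG scope."""
    totals = {}
    for name, scope, tonnes in sources:
        totals[scope] = totals.get(scope, 0.0) + tonnes
    return totals

totals = totals_by_scope(EMISSION_SOURCES)
print(totals)  # {'scope 1': 200.0, 'scope 2': 210.0, 'scope 3': 405.0}
```

A breakdown of this kind is the natural starting point for the dashboard mentioned above, since each dashboard tile corresponds to one scope total.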
In the next step, we contacted the strategic Key Group Program (KGP) Sustainability at ZEISS. In several meetings, we discussed what is being recorded at ZEISS and what might happen in the future, and whether we at ZDI can contribute. At the same time, we looked at what is already being recorded and prepared within ZDI.
As part of Sprint 0 of the Sustainable Software workstream, we extensively researched the concepts of sustainability and resource efficiency in software development. We found that there are currently no standardized guidelines and no ‘gold standard’ in this area. From the ZEISS Industrial Quality & Research (IQR) segment’s report on sustainable development, we received a calculation table that allows us to analyze the sustainability of a software project.
In Sprint 1, we started analyzing our own software based on the calculation table from IQR. Furthermore, together with the co-innovation center Smart Systems Hub and various partners from industry and academia, we prepared a Thin[gk]athon to dive deep into ideas for improving sustainable and resource-efficient development.
Image 1: Co-innovation for sustainable software development
Image 2: Participants of the Thin[gk]athon 2024
What are the next steps?
As part of Sprint 2, our SI had an important goal in mind: to conduct a Thin[gk]athon. This event took place in June with remarkable success at the Impact Hub in Dresden. Participating teams had three days to develop innovative solutions on how to minimize CO2 emissions during the development of digital solutions. The results were impressive, and three teams were named as winners: “Carbon Cutter,” “Cooler Climate Coders,” and “Proekspert AS.”
Currently, the Workstream Sustainable Software is analyzing and evaluating the results of the Thin[gk]athon as part of Sprint 2.
At the same time, our colleagues in the Workstream Business Operations are working on new guidelines for business travel. To make the emissions from heating, air conditioning, and electricity consumption transparent, we are creating a Power BI dashboard. We have also selected a project to calculate the emissions produced. Additionally, together with our colleagues in Marketing, we are producing content to highlight our efforts towards sustainability. We remain committed and continuously work towards making our contribution to a sustainable future.
The final step of this SI is to determine how to embed the topic of sustainability in the long term. Several areas require attention: how we can incorporate this theme into our customer communication, and how we can integrate sustainable and resource-efficient software development practices into our current workflow while adapting to future developments in this field.
We are staying on top of the issue and will continue to help shape sustainability in software development.
In any given healthcare setting, such as doctor’s offices or hospitals, one encounters a wide array of medical systems and software solutions, each with its own workflows, functions, and user interfaces (UIs). In this landscape of healthcare technology, where medical software is becoming increasingly sophisticated, the incorporation of User Experience (UX) Design is crucial to provide real value to both patients and healthcare professionals.
The user-centric approach according to ISO 9241 creates solutions that are not only technically proficient but also support users in the unpredictable nature of real-world medical practice. It ensures both pragmatic qualities like safety and usability for efficient and effective goal achievement, as well as hedonic aspects like user satisfaction. Particularly in the field of medical technology, the safety aspect is of paramount importance and is precisely defined in regulatory standards such as IEC 62366-1 and the EU’s Medical Device Regulations (MDR).
This article delves into the multifaceted discipline of UX design in the medical field. We explore the significance of UX design in the development of medical software and how the process-oriented approach of user-centered design can be successfully implemented. We demonstrate how its application can sustainably enhance treatment outcomes and standards of care.
Importance of UX design in medical software development
The role of UX design in developing medical software is crucial for creating thoughtful and user-friendly solutions. These solutions offer numerous advantages, from streamlined workflows for healthcare professionals and enhanced patient safety and comfort to cost savings and increased sales. UX design goes far beyond the mere pixel-perfect design of graphical UIs and focuses on the development of systems specifically tailored to the unique demands and contexts within the healthcare industry. It enables companies to position themselves in a competitive market and differentiate themselves from competitors.
Here are five key reasons why UX design is particularly important in the development of medical software solutions:
Increase in efficiency
A good interaction design promotes a natural dialog between software and user. Interaction elements are designed to be quickly accessible and comprehensible, enabling efficient operation, which means that medical professionals can use more time for value-adding tasks. In other words: when users aren’t wrestling with complex UIs or incomprehensible processes, they can better focus on direct patient care, thus enhancing the quality of care delivery.
Prevention and reduction of errors
In the medical field, errors can have serious, sometimes fatal, consequences. Medical software manufacturers are legally obliged to eliminate or at least minimize risks to ensure their systems can be used safely. By understanding the users’ needs and mental models, UX designers can integrate features into the software that enhance error tolerance, minimizing the likelihood of mistakes during operation. Concepts that intuitively guide users, offering error-proofing features and fail-safe mechanisms, are crucial in this domain, where there is no margin for error. The IEC 62366-1 standard outlines a detailed usability engineering process to ensure that risks associated with the use of medical software are minimized.
Reduction of cognitive load
Reducing cognitive load allows users to work more naturally, making important information easily accessible and presenting it in a way that aligns with clinical workflows. In the fast-paced, high-stress medical environment, a user-centered approach serves as a crucial strategy to mitigate the risk of cognitive overload. This not only contributes to the overall safety during interaction but also fosters user trust and confidence in the systems.
Reduced cognitive load helps to avoid errors in the hectic and distracting medical environment.
Reduction of training effort
User-centered medical software reduces the training effort required for users to become proficient with the system. Intuitive design aligns with the users’ expectations and prior experiences, accelerating the learning process and integration into practice. This aspect is especially crucial in healthcare, where time is a particularly scarce and valuable resource.
Improvement of consistency
Effective UX design considers not only individual systems and workflows but the entire existing clinical infrastructure. The goal is to gather and present information in a consistent manner across all systems and workflows. A design system like “Beyond” by ZEISS, combined with tools such as the ZUi Web Component Library that provides ready-to-use software components, ensures consistent interaction patterns and a cohesive appearance. This approach reduces the effort required for design, implementation, and maintenance for medical system manufacturers, and leads to lower training costs and enhanced efficiency for users.
Integration of UX design in medical software development
In the development of medical software, the discipline of UX design is inextricably linked with the process-oriented approach of user-centered design (UCD). This approach aims to create seamless interactions between users and medical software by involving end users in the development process. The needs and limitations of humans, as well as the context of use, are at the core of the design process. This is particularly challenging in the complex medical field, where the spectrum of users ranges from caregivers and doctors to service staff and patients. Ultimately, medical software should adapt to the user, not the other way around.
The iterative nature of UCD, aligned with an agile software development approach, underscores the need for multiple cycles to refine software solutions. In alignment with ISO 9241-210, the UCD process comprises four key phases:
The user-centered design (UCD) process adapted according to ISO 9241-210.
Understand and specify context of use
In this phase, future users and their tasks, activities, and goals are analyzed. The objective is to ensure that the final medical software meets the users’ expectations and needs and supports them in achieving their objectives. Additionally, a context analysis should be conducted to understand the specific work environment and conditions in healthcare settings. Results may include user personas (user profiles), task descriptions, or scenarios. Information is gathered through direct engagement with prospective users such as patients, doctors, and other healthcare providers. Typically, methods such as surveys, interviews, and observations are employed.
Specify the user requirements
During this stage of the process, concrete requirements and design recommendations are derived from insights gathered in the previous phase. Both functional and non-functional requirements are captured, tailored to the specific needs of medical institutions. The goal is to bridge the gap between abstract findings from the data collection and the practical implementation, ensuring that user needs and expectations are effectively translated into specific and actionable requirements for the medical software solution.
Produce design solutions to meet user requirements
In this step of the process, the identified requirements are translated into concrete interaction and UI concepts that will ultimately be implemented in the software. This includes creating user flows, wireframes, prototypes, and detailed UI designs, as well as collaborating with the development team. Working with prototypes, ranging from simple low-fidelity representations to sophisticated high-fidelity versions, allows UX designers to iterate efficiently through various solution approaches and design variations.
Evaluate designs against requirements
In this phase, the functionality and performance of created wireframes, prototypes, or software solutions are assessed to ensure they meet the defined usage requirements. Continuous feedback enables comprehensive evaluation and ensures that user needs and expectations are met. Formative evaluation involves assessments by experts using various qualitative and quantitative methods, such as heuristic evaluations, cognitive walkthroughs, and usability testing in real medical environments. Summative acceptance tests conclude the evaluation process.
Wireframes and prototypes enable UX designers to continuously evaluate different solution variants and design concepts.
Closing thoughts
UX design plays a crucial role in the development of medical software, harmonizing technological advancements with the core values of healthcare. Its focus is on creating software for real people in real situations, aiming not only to meet pragmatic requirements but ultimately to foster positive user experiences. This can significantly enhance treatment outcomes and elevate standards of care.
The integration of UX design in medical software development is an investment in healthcare quality. As medical software becomes increasingly embedded across all sectors of healthcare in the future, the commitment to UX design will determine the success or failure of medical systems.
Dominik Tim Schlackl holds a master’s degree in User Experience Design and works as a Consultant User Experience Design for ZEISS Digital Innovation in the field of medical technology with a focus on ophthalmology. His expertise lies in usability engineering, interaction design, and the application of the user-centered design (UCD) process to design human-machine interfaces that meet both pragmatic and hedonic requirements of specific user groups. His core competencies include wireframing and prototyping processes, user interface design, qualitative user and context analysis, as well as quantitative UX evaluations.
This blog post addresses the high standards of security and compliance that we have to meet in every software project. Trained security engineers are responsible for ensuring that we achieve this within any given project. An especially persistent challenge they face is dealing with the countless dependencies present in software projects, and getting them – and their variety of versions – under control.
Figure 1: An excerpt from the dependency graph of an npm package, taken from npmgraph.js.org/?q=mocha
Challenges in software projects
For some time now, large-scale software projects have consisted of smaller components that can each be reused to serve their particular purpose. Components whose features are not intended to be kept confidential are increasingly being published in the form of free and open-source software – or FOSS for short – which is freely licensed for reuse.
To assess and prevent security vulnerabilities, it is vital that we have a complete overview of all the third-party libraries we are integrating, as any of our imported modules may be associated with multiple dependencies. This can result in the overall number of dependencies that we are aware of stretching into the thousands – making it difficult to maintain a clear picture of licences and security vulnerabilities among the various versions.
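To illustrate how quickly the numbers grow, here is a minimal sketch in Python. The nested tree below is invented and stands in for what a package manager's lock file (such as npm's package-lock.json) would record; in a real project, the tree would be parsed from that file rather than written by hand:

```python
# Sketch: counting unique transitive dependencies in a nested
# dependency tree. The example tree and package names are invented
# for illustration; a real project would parse its lock file instead.

def collect_dependencies(tree, seen=None):
    """Recursively collect every unique name@version in the tree."""
    if seen is None:
        seen = set()
    for name, info in tree.items():
        key = f"{name}@{info['version']}"
        if key in seen:
            continue  # already counted; also guards against cycles
        seen.add(key)
        collect_dependencies(info.get("dependencies", {}), seen)
    return seen

# Two direct imports fan out into a larger set of components:
example_tree = {
    "web-framework": {"version": "4.2.0", "dependencies": {
        "http-parser": {"version": "1.0.3", "dependencies": {}},
        "logger": {"version": "2.1.0", "dependencies": {
            "ansi-colors": {"version": "3.0.0", "dependencies": {}},
        }},
    }},
    "test-runner": {"version": "9.1.1", "dependencies": {
        "logger": {"version": "2.1.0", "dependencies": {
            "ansi-colors": {"version": "3.0.0", "dependencies": {}},
        }},
    }},
}

deps = collect_dependencies(example_tree)
print(len(deps))  # 2 direct imports already pull in 5 distinct components
```

In real projects the same walk easily yields thousands of entries, which is exactly why automated tooling is needed to keep track of licences and vulnerable versions.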
Based on reports of incidents in recent years, such as supply chain attacks and dependency hijacking, there is no mistaking the significant impact that issues like these can have. For an interesting meta-analysis of breaches of this kind, we would recommend Ax Sharma’s article “What Constitutes a Software Supply Chain Attack” (https://blog.sonatype.com/what-constitutes-a-software-supply-chain-attack). Here, we’re going to delve deeper into how to handle components in both large-scale and small-scale software projects, working from the perspective of a security engineer.
FOSS scanning tool solutions
Over time, a number of projects have emerged to tackle the issues associated with identifying FOSS components. Today, there are programs available for creating bills of materials (BOMs) and overviews of security risks, and we have tried these out ourselves.
There are also large catalogues such as Node Package Manager (npm), containing detailed information about the components available in any given case.
Open-source components of this kind might be free to use, but they still involve a certain amount of work, particularly in cases where they are being used in major and long-term software projects.
To perform our own evaluations, we have combined the OWASP Dependency-Check (DC) tool with the OSS Review Toolkit (ORT) to create a solution that identifies security problems and checks that licensing conditions are being adhered to. Compared with commercial solutions such as Black Duck, these tools provide a free, open option for gaining an overview of FOSS components in projects and evaluating the risks associated with them.
That said, our experience has shown that these tools also involve additional work in the form of configuration and ongoing reviews (in other words, re-running scans in order to identify new security issues).
What security engineers are responsible for
Our guidelines for ensuring secure development and using open-source tools outline the processes we require and the goals that our security engineers have to keep in mind when they are approaching a project. Below is probably the most important part of those guidelines:
It is our responsibility to ensure that the following so-called Essential FOSS Requirements are fulfilled:
All included FOSS components have been identified and the fitness for purpose has been confirmed.
All licenses of the included FOSS have been identified, reviewed and compatibility to the final product/service offering has been verified. Any FOSS without a (valid) license has been removed.
All license obligations have been fulfilled.
All FOSS are continuously – before and after release – monitored for security vulnerabilities. Any relevant vulnerability is mitigated during the whole lifecycle.
The FOSS Disclosure Statement is available to the user.
The Bill of Material is available internally.
For that it must be ensured that
the relevant FOSS roles are determined and nominated.
the executing development and procurement staff is properly trained and staffed.
These guidelines form the basis for developing mandatory training, equipping subject matter experts with the right knowledge and putting quality control measures in place.
The processes involved
Investigation prior to integration (licences and operational risks such as update frequency)
Update monitoring (operational risks)
Let’s say that a new function needs to be built into a software project. In many cases, developers will already be aware of FOSS tools that could help introduce the function.
Wherever feasible, every developer involved in the project should know how to handle package managers and the potential implications of using them, so that they can correctly interpret the results produced by tools or analyses. As an example, developers need to be able to visualise how many parts are involved in a top-level dependency, or evaluate various dependencies associated with the same function in order to maintain security in any future development work. In other words, they must be able to assess operational risks. We are increasingly seeing projects that aim to keep the number of dependencies low. This needs to be taken into account when selecting components so that, wherever possible, additional dependencies only provide the functions that are really needed.
Before integration, the security engineer also has to check potential imports for any security vulnerabilities and verify that they have a compatible licence. An equally important job is reviewing the operational risks, involving aspects such as the following:
How up-to-date the import is
Whether it is actively maintained or has a keenly involved community
Whether the update cycle is agile enough to deal with any security vulnerabilities that crop up
How important secure handling of dependencies is considered to be
Whether the number of additional dependencies is reasonable and whether it is reduced where possible
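The checklist above can be turned into a small automated screening step. The following Python sketch evaluates a package's metadata against the criteria; the field names, threshold values, and sample package are all our own invented example, since in practice the data would come from the package registry and the thresholds from project policy:

```python
# Sketch: screening a package's metadata against the operational-risk
# checklist. Fields, thresholds, and the sample package are invented;
# real metadata would come from the registry (e.g. npm) and the
# thresholds from the project's own security policy.
from datetime import date

def operational_risk_flags(meta, today=date(2021, 9, 1)):
    """Return human-readable risk flags for one package."""
    flags = []
    if (today - meta["last_release"]).days > 365:
        flags.append("no release in over a year")
    if meta["maintainers"] < 2:
        flags.append("single maintainer")
    if meta["open_security_issues"] > 0:
        flags.append(f"{meta['open_security_issues']} open security issue(s)")
    if meta["transitive_dependencies"] > 50:
        flags.append("large transitive dependency footprint")
    return flags

stale_package = {
    "last_release": date(2020, 3, 1),
    "maintainers": 1,
    "open_security_issues": 2,
    "transitive_dependencies": 120,
}
print(operational_risk_flags(stale_package))
```

A package that trips several of these flags is not automatically ruled out, but it warrants a closer manual review by the security engineer before integration.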
During the development process and while operation is taking place further down the line, the project team also has to be notified whenever new security vulnerabilities are identified or closed. This may involve periodic scans or a database with security vulnerability alerts. Periodic scans have the advantage of running more independently than a database, which requires hardware and alerts to be provided. However, alerts are among the benefits offered by software composition analysis solutions such as Black Duck.
As the number of clearly licensed FOSS components rises, the amount of time that needs to be invested in curating them manually is becoming comparatively low. The work that does need to be done may involve declaring a licence – and adding easy-to-find, well-formatted copyright details to components, as these have often been given highly unusual formats or left out altogether in older components. Cases in which no licence details are provided should never be misconstrued as carte blanche invitations to proceed – without a licence, a component must not be used without the author’s consent.
Example of a security vulnerability
An example of a complex security vulnerability was published in CVE-2021-32796. The module creating the issue, xmldom, is indirectly integrated via two additional dependencies in our example project here.
Black Duck shows the following security warning related to the module:
Figure 2: Black Duck example summarising a vulnerability
This gives a security engineer enough information to make a broad assessment of the implications of this vulnerability. The advisory also notes that a patch is available in version 0.7.0.
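Locating where such an indirect dependency enters the project can also be automated. The sketch below finds every chain from a top-level dependency down to the vulnerable module; the tree and package names (other than xmldom) are invented for illustration, and in an npm project the command `npm ls xmldom` answers the same question:

```python
# Sketch: finding all paths through a dependency tree that lead to a
# vulnerable module. The tree and the names around "xmldom" are
# invented; `npm ls <package>` does this for a real npm project.

def paths_to(tree, target, prefix=()):
    """Yield every chain of package names ending at `target`."""
    for name, children in tree.items():
        path = prefix + (name,)
        if name == target:
            yield path
        yield from paths_to(children, target, path)

example_tree = {
    "pdf-exporter": {"xml-toolkit": {"xmldom": {}}},
    "config-loader": {"xmldom": {}},
    "test-runner": {},
}

for path in paths_to(example_tree, "xmldom"):
    print(" -> ".join(path))
```

Knowing every inclusion path matters because each top-level dependency that pulls in the vulnerable module may need its own update or replacement before the vulnerability is fully mitigated.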
The importance of having enough lead time for running updates and replacing components
Before new releases were published under @xmldom/xmldom, we had time to check how much work would be involved if we were to do without this dependency.
To benefit from this kind of time in a project, it is useful to gain an overview of potential issues right at the development stage, and ensure that there is enough of a time buffer leading up to the point at which the product is published.
This makes it easier for developers to evaluate workarounds for problematic software libraries, whether they are affected by security vulnerabilities, incompatible licences or other operational risks.
Summary
This post has provided an overview of the project work we do involving the large variety of open-source software out there, and has outlined what security engineers need to do when handling open-source software. By using the very latest tools, we are able to maintain control over a whole range of dependencies and establish the transparency and security we need. Dependencies need to be evaluated by a trained team before they are integrated and then monitored throughout the software lifecycle, with the team responding to any issues that may arise.
The “Tester Teatime” is a post format on this blog that takes up topics testers deal with on a daily basis. Certain problems and themes recur again and again, so we want to create a basis for explaining such phenomena and finding solutions to them. We also want to stimulate discussion and new ways of thinking. In testing, we can learn a lot from each other by observing our daily work!
Moderator: Welcome to the first tester teatime! In an interview with testers from ZEISS Digital Innovation (ZDI), we will discuss exciting topics.
Let us now turn to today’s subject. We are talking to Sandra Wolf (SW), a tester at ZDI. What is the myth of “historical growth” all about?
SW: Well-versed testers will probably roll their eyes involuntarily at the title of this article, because what sounds so steeped in history is, for many, one of the most common answers they encounter in everyday work. As soon as a deviation is found in the software that cannot or should not be corrected immediately, this phrase is very popular. Examples of such deviations are poor user-friendliness and persistent inconsistency in the software under test. If the testers draw attention to a possible improvement, team members from development, project management, and other areas are quick to reply that this particular anomaly has “grown historically” and that there is therefore no need for action. Often, the quality-driven testers are left puzzled and frustrated – after all, they are not shown any perspective for action in line with their role. What, then, is the meaning of the phrase “historical growth”? And how can it be handled professionally and effectively? That is what this post is about.
Image: Hendrik Lösch and Sandra Wolf during the interview at the Tester Teatime.
Moderator: Okay, that sounds interesting. What effects does this so-called “historical growth” have on the project and its software?
SW: Many project participants are not aware of the significance of the problem. “Historical growth” is an avoidance tactic: in fact, it hides the fear of renewing the respective software – a renewal that inevitably draws closer the longer it is ignored. Especially the software components that have been expanded over the years develop more and more interactions, which means that the maintenance effort and costs continue to rise. The different extensions may no longer fit together and have neither uniform operation nor a uniform design. The quality of the software thus decreases, and customer satisfaction with it. At this point, innovative further development becomes increasingly difficult or stops altogether. Looking at all these facts, it quickly becomes clear that “historical growth” goes beyond mere words and can have serious consequences for the company, the project, and the software itself. Something must therefore be done, and a rethink must take place.
Moderator: How can the project challenge “historical growth”?
SW: The fear of many project members is that the only way to solve the problem of “historical growth” is to scrap the software and rebuild it completely. Everyone can imagine that this seldom makes sense; a complete renewal of the software would also be very costly. The question, then, is how this problem can be solved. Here I would like to point out that it is not the tester’s task to eliminate or control the “historical growth” itself. As testers, we can only provide the impetus for change – and we should definitely do so, as we act as an important interface between development, the technical department, and project management. Our perspective on the whole process is invaluable here: we have a comprehensive overview of all steps of the process, as we act at the end of it.
Moderator: What might a solution for “historical growth” look like?
SW: Given this wide-ranging problem, it is advisable to advance the solution in smaller steps. First of all, contact should be sought with a suitable contact person, e.g. the project manager, since a tester can never decide on such overarching project topics alone. In conversation with this person, attention can then be drawn to the consequences and risks of “historical growth”. The next step should then be defined jointly by the entire team. Perhaps it is not necessary to renew the entire software; the renewal can instead be limited to certain parts of the application. Here, too, testers can contribute important knowledge from their daily work. The development history of the entire project must be considered, including which sub-areas arose first. The aim of the discussion is to draw attention to the problem and thus provide the impetus for changes that restore the quality of the software. The form in which this happens – for example, whether external companies are involved – must also be decided. For us as testers, this is an opportunity to contribute to the overall quality of the software under test. After all, the most important thing is that the far-reaching theme of “historical growth” is no longer used as an excuse but is finally addressed in the project.
Moderator: A solution can only be found together. In this context, we are also interested in the perspective of a developer. We now talk to Hendrik Lösch (HL), software architect at ZDI. What solutions do you see for “historical growth”?
HL: I myself would not call it “historical growth”, because that implies that the growth lies in the past. I prefer to speak more generally of “software evolution”. Growth is always there; it only becomes problematic when the structures of the software mutate unintentionally. We also refer to these mutations as technical debt, because they require future investments in restructuring and thus tie up funds that cannot be used for value-adding measures. The longer you delay these restructurings, the harder they will be to carry out. This is probably where the notion of “historical growth” comes from: the software has grown over time, and no one knows exactly why anymore. To solve this problem, a thorough analysis of the structures is required, in which experienced software architects determine the actual architecture. A target architecture is then created, and a path is sketched to get from one to the other. With the Health Check, we at ZDI offer a suitable procedure for this.
Moderator: Thank you, Sandra and Hendrik, for your insights. We can sum up that “historical growth” as a problem is something of a myth: growth is always there, but a lack of structure can make it problematic. Addressing it requires not only action but also concrete strategies, and ZDI already offers these in its portfolio today.
In the following articles, we will take up further problems from the everyday life of testers and discuss possible solutions.
While observing social distancing, video conferencing with friends has replaced going down to the pub. During one of these calls, the subject of how coronavirus influences our work life popped up very quickly.
The consensus was that the ability to work was greatly impacted and that for some it had even disappeared completely. Their employers were simply not able to supply the necessary infrastructure. And even once the infrastructure was eventually set up, reality set in: it is complicated! Who may talk? Camera on or off? Please mute your microphone, the kid is screaming in the background! You want to talk? Please unmute your microphone! The connection is bad, I only catch every second word… The list is endless.
People simply are not used to working like this, not least because communication is more than exchanging words. During phone and video conferences, interpreting body posture, facial expressions, and gestures is far harder. That is a new situation, which requires great discipline and a change in behaviour by all participants!
The discussions carry on, alternating between reviews of video tools and complaints about how working in this situation is impossible. Meanwhile, I am frantically searching for something to add to the discussion. But no matter how hard I search, I only find one answer to the question “What has changed in your work situation since COVID-19?”: “Nothing, really.”
Admittedly, that is a slight exaggeration! The commute, the shared lunch break with colleagues, the chat at the coffee machine – all of this currently does not take place, and these are drastic changes and real losses. For people like me, whose home office doubles as a day-care and where both parents work full time, distributed work can become quite a challenge.
But even in this situation the starting point is the same for everyone. So, you only have the choice to accept it and make the best of it. This shows how important it is to have gathered experience with working in distributed settings.
ZEISS Digital Innovation (formerly Saxonia Systems AG) has been following the basic principles of working in distributed teams for years. Not least to ensure a healthy work-life balance for employees, it is important for companies to ensure that services can be provided from practically anywhere. This keeps travel expenses low and ensures that the necessary infrastructure does not have to be provided by the customer.
Figure 1: Distributed project work from a home office
During the five years that I have been working for ZEISS Digital Innovation, I have only encountered one project that united the complete development team at one site. My current project, which started in September 2019, is extremely distributed: the six project members from ZEISS Digital Innovation are split between five sites, and the project stakeholders are at two sites. The distribution therefore existed pre-COVID-19 and was identified early on as a project risk to be addressed.
In my view this was achieved with a simple sounding solution: direct communication.
The developers often used pair programming, not only to ensure knowledge transfer, but also to get to know the programming style of the other developers. Apart from this, code reviews are part of our standard procedure: they are not only a quality assurance measure, but also distribute knowledge throughout the team. Besides the daily meeting for all project members, we have also established a “DEV-Daily”, in which the developers can focus mainly on technical issues.
Business requirements are mapped to technical solutions during regular refinement meetings. Ideas are collected, checked, adapted, discarded and created in no-holds-barred discussions. This creates a space of creativity and free development that all team members can contribute to, and who in turn feel valued and included. Even though the team is distributed, a real team spirit has developed!
The establishment of this open communication culture has led to more intensive and more frequent communication within the team, be it one-on-one or in a larger group. One must always cater to the preferences of the individual team members. Some are early birds: at 8 o’clock they are fully in their flow, bursting with ideas to present to their colleagues; others can barely open their eyes at that time in the morning. Some need planned meetings to have time for preparation; others easily hop between subjects, and you can call them anytime.
To be honest, simply communicating more is not enough to work efficiently in a distributed environment. Rather, an elaborate concept is necessary that takes all the aspects and challenges of distributed work into account. This is especially important for teams with little experience of distributed work.
Several years ago, ZEISS Digital Innovation already received attention with its ETEO concept. ETEO (“Ein Team Ein Office” – One Team One Office) is a framework for distributed work in teams: it gives project teams a guide as to how distributed work can be achieved. A permanent video link between the sites and a digital work board for task assignment enable the teams to work spread out across different sites, just as if they were at one site together. The video link is not a requirement; it is just one building block from ZEISS Digital Innovation’s toolkit for distributed work. There are specially trained employees (ETEO coaches) who train and coach the team members in techniques for distributed work, both during ramp-up and throughout the project. It often leads to an “aha moment” with our customers when they see that distributed work can succeed without any loss of performance.
Figure 2: ETEO (“Ein Team Ein Office” – One Team One Office) – Our Collaboration Model
In general, the rule applies: the tool is nothing if you do not know how to use it. By this I mean that distributed work can only succeed with the discipline and the right mindset of everyone involved. One thing remains to be said: you have to commit yourself without reservations and simply try the tools and techniques. And of course, practice makes perfect. The longer a team works in a distributed environment, the more clearly the best practices for distributed work emerge in the individual team setting. One must not forget that there is no such thing as the ultimate concept for distributed work: interactions within projects are not between roles and responsibilities but between people, and they are all different.
Each project is different, and so are the requirements for distributed work. But the basic principles remain the same. This can be compared with building a house: no matter how it looks, how many floors it has or what type of roof, it always rests on a foundation.
The foundations of distributed work at ZEISS Digital Innovation are sound.
Personas in a nutshell – What are personas, and why do we need them?
In the field of agile, distributed software development, and in a B2B context in particular, the client is often the only contact for the development team. This is usually because the end users are not available: they are either located elsewhere, separate from the development team, or do not have time. When users are not close at hand, personas can help compensate for the lack of contact.
Personas are prototypical users representing a user group and their essential characteristics and needs with regard to the planned product. They consolidate the results of the context analysis, but they cannot replace user research. They are designed as posters, as “life-like” and realistic as possible, because otherwise the persona is useless. In general, personas are developed in the early project stages (analysis and planning) and are used for target/actual comparisons throughout the entire development cycle. This enables the design and development team to focus on the users’ requirements. Personas serve as a reference for decisions in discussions, and by prompting questions like “Would persona X understand this?”, they give priority to the user. Personas help the product owner in particular to evaluate ideas for new features.
More often than not, it takes more than one persona to cover the prospective users of the software. If there are too many, possibly even contradictory, personas, it is advisable to classify them into primary and secondary personas. This helps to keep the focus on the target user group and prioritize the decision alternatives.
Which elements are needed to create a good persona?
Name: A realistic name that identifies the persona
Age: Allows for conclusions regarding the persona’s attitude
Image: An image, either a photograph or a drawing, to make the persona more realistic
Personal information: Age, gender, professional education, knowledge and skills, marital status, children, …
Type of persona: Associated user group / archetype (e.g. teenager, housewife, pensioner)
Profession: Function, responsibilities and duties
Technical know-how: General computer skills; technical affinity; knowledge of related products, previous systems or competitors’ products
Character: Individual preferences and values of the persona, e.g. are they open to new ideas?
Behavioral patterns and approaches: Values, fears, desires and preferences
Objectives: Key tasks to be handled with the new application
Needs & expectations: Requirements regarding the handling of the product
Frustration: Typical nuisances with products in general, i.e. issues that have to be avoided at all costs
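For teams that keep their personas in a tool or wiki rather than only on a poster, the element list above maps naturally onto a simple data structure. The following sketch is purely illustrative; the `Persona` class and the example persona “Anna Becker” are hypothetical and not part of the ZDI template:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A prototypical user; the fields mirror the element list above."""
    name: str                      # realistic name identifying the persona
    age: int
    image: str                     # path or URL to a photo or drawing
    personal_info: str             # gender, education, marital status, ...
    archetype: str                 # associated user group, e.g. "pensioner"
    profession: str                # function, responsibilities and duties
    technical_know_how: str
    character: str
    behavioral_patterns: str       # values, fears, desires and preferences
    objectives: list = field(default_factory=list)
    needs_and_expectations: list = field(default_factory=list)
    frustrations: list = field(default_factory=list)

# Hypothetical primary persona for a B2B reporting application
anna = Persona(
    name="Anna Becker",
    age=42,
    image="personas/anna.png",
    personal_info="married, two children, business degree",
    archetype="department head",
    profession="Head of Controlling, approves monthly reports",
    technical_know_how="confident with Excel, little experience with BI tools",
    character="pragmatic, open to new ideas if they save time",
    behavioral_patterns="prefers planned meetings, dislikes surprises",
    objectives=["compile the monthly report in under an hour"],
    needs_and_expectations=["clear error messages", "export to Excel"],
    frustrations=["hidden settings", "unexplained loading times"],
)
```

A structure like this makes it easy to classify personas into primary and secondary groups programmatically and to keep them versioned alongside the project documentation.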
Persona Template
The Usability Team at Saxonia Systems AG (since 03/2020: ZEISS Digital Innovation) has developed a template for creating a persona based on the best practices and their own experience. The template is available as a PDF free of charge: DOWNLOAD
Template for creating a persona
The Persona Template allows you to customize your software even more precisely to the requirements of its prospective users, and never to lose sight of the most important aspect—because we all know: “The user is always right.”
Digitalization and Industry 4.0 entail new requirements for processes and software systems in all company divisions and business areas. Companies that outsource the development of their software or purchase it from third parties face an additional challenge: given the interconnected nature of business operations, the systems of different manufacturers have to exchange ever more data. Despite the tests by the internal and external development teams, who validate the software at the various development-related levels of testing before handing it over to the client, and despite the subsequent approval by way of departmental testing, errors occur when the individual components interact. A test center with a focus on comprehensive integration tests can solve this problem, but it has to meet specific requirements to be successful.
Critical errors that become evident only in live operation mean negative publicity both for the product and the companies involved. To prevent this, testing is a fundamental, integral part of modern software development. Only a sufficient number of tests and prompt feedback of the results allow for the quality and maturity of the product to be appropriately documented and confirmed. In the course of large software development projects, the number of new and/or upgraded functions is often in the hundreds. Development teams use component, integration and system tests to test the software before it is handed over to the client. The department approves the delivered software by way of the acceptance test (see Figure 1: Test pyramid).
Figure 1: Test pyramid
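The division of labor between the test levels in the pyramid can be illustrated with a minimal sketch. All names here are hypothetical: a small price-calculation component, a second component that depends on it, a component test that checks one unit in isolation, and an integration test that checks their interaction.

```python
def net_to_gross(net: float, vat_rate: float = 0.19) -> float:
    """Component under test: converts a net price to a gross price."""
    return round(net * (1 + vat_rate), 2)

class InvoiceService:
    """Second component: builds an invoice line using the price component."""
    def line_total(self, net_price: float, quantity: int) -> float:
        return round(net_to_gross(net_price) * quantity, 2)

# Component test: one unit in isolation (bottom of the pyramid).
assert net_to_gross(100.0) == 119.0

# Integration test: the two components interacting (next level up).
assert InvoiceService().line_total(100.0, 3) == 357.0
```

System and acceptance tests would then exercise the application end-to-end and against the department’s requirements, respectively, which is exactly the region of the pyramid where the comprehensive integration tests discussed below are located.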
Companies run several information systems for different tasks such as logistics, accounting and sales, built using a wide variety of technologies. These information systems already exchange data today. The requirements of digitalization and Industry 4.0 amplify this effect: new requirements such as increased networking throughout the entire value chain lead to more interfaces, or extensions of existing interfaces, between the information systems. Thus, the overall system becomes more and more complex, as does the life cycle of the software: dependencies have to be taken into account from the identification of the requirements through to testing.
Figure 2: Challenges in testing due to digitalization and Industry 4.0
The effort required for integrative tests increases enormously, in particular for companies that have their software developed by various service providers. In most setups, the software systems are developed by several third-party software manufacturers and/or an in-house IT organization. The providers themselves perform more or less in-depth component, integration and system tests, and verify the quality of the individual information system they create. The departments are then left responsible for testing the interconnected information systems as they interact (see Figure 2: Challenges in testing due to digitalization and Industry 4.0).
The worst problems, or errors with a high risk, occur in the interaction of the information systems. However, most companies fail to perform the necessary comprehensive, integrative tests, or the testing done is insufficient, resulting in a weak quality statement and errors in live operation. There are various reasons for this: a comprehensive test at the development level is impossible due to the organizational and geographical separation of the service providers involved, and performing the necessary tests for each release is impossible for the expert users or testers from the department because it costs too much time and too many resources. Furthermore, the employees tasked with these tests often lack the experience and the know-how necessary for optimal test planning and for covering all the requirements of integration testing. The physical distance between the respective testers in the various departments further impedes consultation and knowledge-building.
For the company to be successful, it is therefore becoming ever more important to outsource the necessary tests to dedicated testers, significantly increasing both the degree of coverage achieved in testing and the frequency of testing (regression). A possible solution is a test center that oversees the comprehensive integration test that takes place after the tests of the service providers and before the acceptance test of the department (see Figure 3: Comprehensive integration test by a test center). The test center verifies that the information systems interact correctly, and the department ultimately focuses on the approval of the requirements it specified.
Figure 3: Comprehensive integration test by a test center
A test team or test center of dedicated and trained testers has several advantages:
The quality of the information systems is the dedicated test team’s primary objective.
The test results are collected and communicated to the parties involved in an objective manner.
There is a test manager who focuses on quality issues and who is responsible for the management of the test group.
The test manager coordinates with the technical and development departments, determines the requirements to be tested, coordinates the test team, integrates the testers from the departments, communicates with the project management, and documents the results in test reports.
However, there are also disadvantages to an in-house test team or test center: Longer release cycles or delays in the provision of the software to be tested can cause the workload to fluctuate. The in-house test team or test center continuously generates costs, but does not always have enough work. On the other hand, in the case of peaks in the testing work, the team may not, or only with great difficulty, be able to cover them.
Companies that already use service providers for the development of their software can also call on an external provider offering integration testing as a service for the test center. Using a test center does not merely mean outsourcing the testing. A test center based on a test service agreement is a solution where the responsibilities, duties and settlement terms are customized for the individual client.
The third-party test team or test center is as independent as possible from the software development, highly specialized, and due to the nature of the service, adaptable to the client’s testing requirements. This resolves the above-mentioned disadvantages of an in-house test team or test center, and allows the company to focus on its processes.
In order for a test center to be able to optimally respond to the client’s wishes, certain prerequisites need to be fulfilled. The test center must not be a detached organizational unit, but has to establish open channels of communication and information with all the parties involved. Proximity to the client is of particular importance. Based on our experience, the test team should preferably be located on the client’s premises or at a distance of no more than 5 to 10 minutes on foot. This ensures knowledge transfer and target-oriented coordination with the departments.
The service managers and test managers are responsible for coordinating with the client and/or the department. The service manager agrees the planning of the test services with the client. This includes defining the content of the test services and the responsibilities of the test center. As every client has different requirements and processes, the assumption of the testing requires individual coordination with each client, and an individual transition. Once the transition, and thus the assumption of the testing, has been completed successfully, the test manager agrees the testing period, the operative test content and the test cases with the department and/or the client’s test coordination for each test release. But the communication is not limited to these two roles. The test experts in the test center and the department need to be in immediate, close contact in order to create, adapt and review the test cases in the best possible way, and to coordinate when deviations are discovered.
The result of the tests largely depends on the know-how of the testers, which has to comprise at least three aspects: Firstly, the technical know-how regarding the applications to be tested, and secondly, comprehensive knowledge of the testing methods. This ensures that optimal coverage of the requirements is achieved, both in technical terms and with respect to the definition of test cases. Thirdly, the testers also need to know the way the developers work. This enables them to better identify and analyze errors and communicate them to the software developers in the best possible way.
Figure 4: Coordination tasks of the service and test manager
In the other direction, the test center has to exchange information with the third-party providers and the development teams. The objective of such coordination is, for example, comparing the content of the tests already done to the downstream integration tests in order to identify any gaps or redundancies in the testing. Furthermore, the delivery of new software releases to the test systems is planned, and the analysis and follow-up of deviations are discussed with the client.
In addition to the planning and execution of the test activities of the comprehensive integration test, the test center can also take over the technical support of the testing in the company. This includes, for example, the development and maintenance of the test infrastructure and test environments, and the development of comprehensive test data management. It is important that all the software systems to be tested are installed in the integration test environments, ensuring that the entire business process can be comprehensively tested.
An additional aspect of the test center is the continuous optimization of the test processes. This includes not only the optimization of operations that have already been established, but also the introduction and operation of test automation, the dissolution of current interdependencies within and between the test stages, and the early review of the providers’ development state by way of so-called pre-integration tests.
For this purpose, additional test environments besides the test environments for the comprehensive integration tests are created. The service providers provide pre-release versions of the software in the pre-integration environments, giving the test center’s pre-integration team the opportunity to perform tests with interaction with the other applications at an early stage. Thus, the pre-integration tests help to identify possible deviations between the different information systems of the various providers more quickly.
For companies that have their software developed by various providers and still have complex, interconnected system environments, an external test center offers a quick and in-depth quality statement regarding all the software systems. The objective of the external test center is the establishment of an integrative test process that includes not only the interconnection of the test systems, but an interconnected test organization and interconnected test processes as well. This way, the test center responds to the companies’ requirements regarding a more integrative test focus and flexibility through scalability, communication, and concentrated testing expertise.