Recap of the Thin[gk]athon “Manufacturing-X – Dataspace Adoption” 

How can artificial intelligence (AI) models be trained across organizational boundaries without disclosing sensitive data or intellectual property? 

This question was at the core of one of the challenges addressed at the Thin[gk]athon “Manufacturing-X – Dataspace Adoption”, which took place from 27 to 29 January 2026 at the SAP Innovation Hub in Munich. 

The co-innovation format initiated by the Smart Systems Hub brings together companies, research institutions, and technology partners to develop prototypes for concrete industrial challenges within a short time frame. The Thin[gk]athon focused on data spaces in the context of Manufacturing-X and on the technical realization of cross-company, data-sovereign collaboration. 


The Challenge: Joint AI Training While Preserving Data Sovereignty 

In industrial practice, data is typically distributed across different companies, production sites, and heterogeneous system landscapes. Centralizing such data is often neither feasible nor desirable, due to data protection requirements, intellectual property concerns, or competitive considerations. 

At the same time, locally available datasets are frequently insufficient to develop robust and generalizable AI models. Rare fault patterns or specific operating conditions occur only sporadically and are therefore difficult to model when considered in isolation. 

This creates a fundamental tension between the need for collaboration and the requirement for data sovereignty. Addressing this tension was the core objective of the challenge. The goal was to enable multiple companies to jointly train an AI model without exchanging training data and without establishing centralized data storage. 

Technical Approach: Federated Learning within a Data Space Architecture 

The selected solution approach is based on federated learning. Training data remains entirely with the respective participants and is processed locally within their existing IT infrastructures. Only model parameters, such as weights or update information, are exchanged between participants. These parameters do not allow direct inference of the underlying raw data. 

The exchange of model artifacts is realized using the Eclipse Dataspace Connector (EDC). The EDC provides the technical foundation for policy-based, sovereign data exchange within a data space architecture. It enables the definition and enforcement of usage policies, access rights, and contractual rules governing data and artifact exchange. 

Within three days, a complete end-to-end prototype was implemented. The prototype demonstrated the full workflow: local training at multiple participants, controlled exchange of model parameters via the data space, and aggregation into a shared global model. This provided a concrete proof of the technical feasibility of combining AI-based methods with Manufacturing-X-compliant data spaces. 
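
For illustration, the following minimal sketch shows the federated-averaging idea behind this workflow: each participant performs a local training step on private data, and only the resulting parameter vectors are aggregated into a global model. It is a simplified stand-in written for this article, not the code of the Thin[gk]athon prototype; the linear model and all names are assumptions.

```python
# Minimal federated-averaging sketch (illustrative only, not the event prototype).
# Each participant trains locally; only parameter vectors leave the site.
import numpy as np

def local_training_step(weights: np.ndarray, features: np.ndarray,
                        labels: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One local gradient step of a linear model; raw data never leaves this function."""
    predictions = features @ weights
    gradient = features.T @ (predictions - labels) / len(labels)
    return weights - lr * gradient

def federated_average(participant_weights: list[np.ndarray],
                      sample_counts: list[int]) -> np.ndarray:
    """Aggregate local models into a global model, weighted by local dataset size."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(participant_weights, sample_counts))

# Simulated round with three participants, each holding private data.
rng = np.random.default_rng(42)
global_weights = np.zeros(5)
local_data = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]

for round_no in range(10):
    local_updates = [local_training_step(global_weights, X, y) for X, y in local_data]
    global_weights = federated_average(local_updates, [len(y) for _, y in local_data])
```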


Demonstration Based on an Industrial Use Case 

The technical approach was demonstrated using an example from rotor and turbine condition monitoring. Publicly available NASA datasets were used, representing different operating conditions and fault classes. 

Individual turbines were modeled as separate training instances, each representing a distinct company within a federated learning setup. The respective datasets were deployed on different hardware environments to realistically reflect separated enterprise infrastructures. 

Each instance performed local training on its own compute environment. After each training cycle, the resulting model parameters were made available via the Eclipse Dataspace Connector, aggregated centrally, and subsequently redistributed to the participating instances. 

Another key element of the Thin[gk]athon was a live demonstration environment provided by the Smart Systems Hub using robotic systems. In a laboratory setup, it was shown how data from real industrial assets can be collected, integrated into a data space, and utilized for different application scenarios. In addition, data from a real test bench was integrated to demonstrate that the data exchange mechanism also functions with live operational data. While the data volume was not sufficient for training purposes, it effectively illustrated the end-to-end data flow, including integration into an application and a dedicated training dashboard. 


Interdisciplinary Collaboration as a Key Success Factor 

In parallel with this challenge, additional teams worked on further topics within the Manufacturing-X ecosystem, including: 

  • Supply chain transparency with a focus on interoperable data exchange 
    This challenge investigated how supply chain information can be structured and exchanged transparently across organizational boundaries. Topics included data models, access control concepts, and the integration of existing enterprise systems into a data space architecture. 
  • Cross-company calculation of Product Carbon Footprints (PCF) 
    This team focused on capturing and sharing emissions-relevant data to enable consistent and traceable CO₂ calculations across company boundaries, while protecting sensitive production information. 
  • Battery Product Passport and structured provision of product-related information 
    The focus was on enabling interoperable access to product data across the entire lifecycle. Architecture concepts for a data-space-compliant implementation were developed. 

All teams were interdisciplinary, combining software engineers, data scientists, system architects, and domain experts. They were supported by mentors who provided guidance on both technical and methodological aspects. 

On the final day, all teams presented their results to an expert jury from industry and technology. Within less than three days, functional prototypes, robust architecture concepts, and concrete demonstrators were created, clearly illustrating the potential of data-space-based collaboration. 


Conclusion

The Thin[gk]athon “Manufacturing-X – Dataspace Adoption” demonstrated in a practical and tangible manner how data-space-enabled collaboration can be implemented in industrial contexts. The parallel challenges highlighted the significant potential of interoperable data spaces for future industrial value creation. 

The clear organizational and methodological framework provided by the Smart Systems Hub enabled teams to focus quickly and effectively on their respective challenges. The SAP Innovation Hub offered an inspiring environment that facilitated direct exchange with experts and in-depth technical discussions. This was complemented by hands-on demonstrations providing concrete insights into real-world use cases. 

For companies and professionals engaged in Manufacturing-X and industrial data spaces, the Thin[gk]athon offers a valuable opportunity to validate their own questions under realistic conditions and to prototype technical solutions. Further events are already planned by the Smart Systems Hub. 

Further Information 

Smart Systems Hub Dresden – Organization and co-innovation formats around Manufacturing-X 
https://www.smart-systems-hub.de/en 

SAP Innovation Center / SAP Innovation Hub – Host and supporter of the Thin[gk]athon 
https://www.sap.com/germany/about/munich.html 

Eclipse Dataspace Connector – Open-source technology for sovereign data exchange in data spaces 
https://projects.eclipse.org/projects/technology.dataspaceconnector 

Manufacturing-X – Initiative for interoperable industrial data spaces 
https://factory-x.org/manufacturing-x/ 

Central data platform: The key to future-proof production processes

Where smart approaches and networking are concerned, modern industrial production requires classical production flow control to be expanded and digitalized. To do so, we must connect and integrate all levels of the automation pyramid with the aid of digital solutions and data processing systems.

Currently, there are numerous new technologies, such as artificial intelligence, digital twins and augmented reality, whose significance for the smart production of the future is growing steadily. To use these innovative methods, they need to be linked to existing systems, although this has only been possible to a limited extent so far. For example, there is as yet no standardized approach to providing data for the use of artificial intelligence or for creating digital twins. Novel use cases, such as predictive maintenance, also require individual access to the required data.
New technologies and their applications can only be implemented successfully through close cooperation between departments and with a clear integration strategy.

Feasibility in brownfield

Figure 1: Access and provision of data in a new, third dimension

Most digital transformation projects take place in brownfield production environments. This means that the production facilities are already in operation and, from an economic point of view, there is a need to find solutions that can be integrated with the existing machinery and software systems.

The development towards smart production requires new exchange channels that open up a third dimension of data flow and allow existing data to be made available centrally. It is economically inefficient to implement these new channels from scratch in every new project. Consequently, a generic approach should be taken, in which data is obtained from the respective production systems and made homogeneously accessible regardless of the individual use case. A central data platform, on which all existing production information is made accessible, is the basis of a flexible and scalable path for the further development and optimization of production processes.
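
As an illustration of such a generic approach, the following sketch maps readings from heterogeneous sources onto one common record format before they reach the central platform. All source types, field names, and the `PlatformRecord` structure are assumptions made for this example, not a specific product interface.

```python
# Illustrative sketch of a generic ingestion layer: heterogeneous sources are mapped
# onto one common message format before being written to the central data platform.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class PlatformRecord:
    source: str          # e.g. "plc/line1", "mes", "erp"
    signal: str          # logical signal name
    value: Any
    timestamp: datetime

def from_plc(raw: dict) -> PlatformRecord:
    """Map a raw PLC tag reading onto the common record format."""
    return PlatformRecord(source=f"plc/{raw['station']}", signal=raw['tag'],
                          value=raw['val'], timestamp=datetime.now(timezone.utc))

def from_mes(raw: dict) -> PlatformRecord:
    """Map an MES order event onto the common record format."""
    return PlatformRecord(source="mes", signal=raw['event'],
                          value=raw['order_id'], timestamp=datetime.now(timezone.utc))

ADAPTERS: dict[str, Callable[[dict], PlatformRecord]] = {"plc": from_plc, "mes": from_mes}

def ingest(source_type: str, payload: dict) -> PlatformRecord:
    """Use-case-agnostic entry point: any source lands in the same format."""
    return ADAPTERS[source_type](payload)

record = ingest("plc", {"station": "line1", "tag": "motor_current", "val": 3.7})
```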

Advantages of a central data platform

  • Democratized provision of existing machine data from the brownfield
  • Fast integration of new technologies
  • Implementation of innovative use cases
  • Simple implementation of data transparency and data governance
  • Access to historical and real-time data
  • Scalable application development
  • Increased efficiency and quality in data transfer

Challenges of data processing

Defined interfaces are available for individual systems, allowing data to be read easily, e.g. from ERP or MES systems. The situation is different with a SCADA system, however, since its structure is very heterogeneous and domain-specific. Its interfaces and those of the subordinate machine control systems (PLCs) are not uniformly defined. There are also no uniform industry standards for direct access at sensor level, since this kind of use case has not yet been addressed by machine or sensor manufacturers. However, direct access to the sensor system is worthwhile, as sensors deliver much valuable data beyond their actual functions, which usually remains unexploited.

Use case

Our example shows a classical inductive sensor, where usually only the “sensor on/off” signal is used. The following functions are already implemented by the manufacturer and could also be evaluated (see the sketch after this list):
  • Switch mode
  • Switching cycle counter, reset counter
  • Operating hours counter
  • Absorption (analog measurement of the electric field)
  • Internal temperature
  • Device information
  • Application-specific identifier, system identifier, location code
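
As a rough illustration of how such extended sensor values could be consumed, the following sketch polls them from an edge gateway that is assumed to expose the sensor's process data as JSON over HTTP (for example via an IO-Link master or similar gateway). The URL and field names are purely hypothetical.

```python
# Hedged sketch: poll extended values of the inductive sensor via an assumed
# HTTP/JSON interface of an edge gateway. URL and fields are illustrative only.
import time
import requests

SENSOR_URL = "http://edge-gateway.local/sensors/prox_07/processdata"  # hypothetical

def read_extended_values() -> dict:
    response = requests.get(SENSOR_URL, timeout=2)
    response.raise_for_status()
    return response.json()

while True:
    values = read_extended_values()
    # Values beyond the plain on/off signal, as listed above.
    print(f"switching_cycles={values.get('switching_cycles')}, "
          f"operating_hours={values.get('operating_hours')}, "
          f"internal_temperature={values.get('internal_temperature')}")
    time.sleep(10)
```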

Regardless of the existing exchange channel, there are challenges related to data at all levels. Examples include data protection, the creation of data silos, the processing of mass data, the interaction between humans and machines, and absent or non-standardized communication channels.

Given the highly heterogeneous infrastructure in production, solutions that are individually adapted to the existing conditions can provide a remedy and address the specific challenges at the individual levels. Data governance, data security and cybersecurity must also be taken into account.

Figure 2: Challenges in technology and communication

Holistic approach

Figure 3: Integration of all production-relevant data

It is not only production flow control data that is relevant for the optimal linkage of information and the associated benefits, such as efficient use of resources and increased productivity and quality. Companies have to take a large number of parameters into account, including deployment and maintenance planning, warehousing, availability of personnel and much more. The logical linking of this data can usually only be done manually; having this information available in digital form could save a great deal of time.


Due to the complexity of this topic and the strongly differing requirements of individual production environments, it is clear that standard solutions are insufficient for paving the way to unrestricted data availability, and thus to new technologies. It is therefore important to start by looking at the use cases that create the most added value.

The vision for more efficiency, flexibility and quality

Comprehensive plant-to-plant communication aimed at improving production processes and identifying the causes underlying quality issues can be realized using a central data platform. It allows the data provided to be exchanged across facilities via standardized interfaces. This fully automated exchange of information has many benefits for production. Production planning and control can react flexibly to information from suppliers and customers. Real-time data allow bottlenecks and problems to be identified and rectified more quickly. Quality deviations can also be traced back to their cause across facilities and recurring problems can be avoided through early anomaly detection. The exchange of data also reduces transportation and logistics costs. Moreover, direct communication between the facilities improves cooperation: The exchange of knowledge and experience can give rise to new ideas and innovations that further improve production.

Cross-factory communication in semiconductor and automotive production
Figure 4: Data transparency in wafer production from front-end to back-end (Semiconductor)
Figure 5: Data integration for more efficient and future-proof communications between suppliers and manufacturers (Automotive)

Maturity of the data platform

As described above, the availability of data is the foundation for future technologies. This access is provided by a central data platform. The real added value is created when the collected data is put to good use in production. For this purpose, applications must be linked to the respective platform.

One future scenario describes a standardized data storage system that is accessed by all applications across different production facilities. By using the data platform, the applications can exchange data, rendering other storage locations obsolete.
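
The following sketch illustrates this scenario in a deliberately simplified form: two applications read and write through one shared platform interface instead of maintaining their own data stores. The class and signal names are assumptions for illustration only.

```python
# Illustrative sketch: two applications share one platform interface instead of
# each maintaining its own storage location.
class DataPlatform:
    """Stand-in for a central data platform with a standardized read/write API."""
    def __init__(self):
        self._store: dict[str, list] = {}

    def write(self, signal: str, value) -> None:
        self._store.setdefault(signal, []).append(value)

    def read(self, signal: str) -> list:
        return self._store.get(signal, [])

platform = DataPlatform()

# Application A: condition monitoring writes vibration data.
platform.write("press_01/vibration_rms", 0.42)

# Application B: quality analytics reads the same data without a second data store.
vibration_history = platform.read("press_01/vibration_rms")
```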


With regard to the decision about transforming to a central data platform, we recommend taking an iterative approach and continuing to develop communication channels and systems at an appropriate pace. The advantage of customized software development is that it evolves in line with the requirements and needs of the company in question, always maintaining the necessary balance between evolution and revolution. In the first step, we therefore usually start with data engineering. However, we also consider future use cases in our architecture and take these into account in the continued development.

Conclusion

Merging data from different layers of the automation pyramid and other data silos onto a homogeneous platform allows companies to fully democratize and transform their data. Data management rules help to ensure the quality and security of the data. A cloud-based approach offers many benefits, such as scalability and flexibility. The utilization of a central data platform lets companies use their data more effectively and exploit the data’s full potential.

More information in our white paper: Industrial Data Platform

Cyber-physical systems as a pillar of Industry 4.0

What is it?

A cyber-physical system (CPS) is used to control a physical-technical process and, for this purpose, combines electronics, complex software and network communication, e.g. via the Internet. One characteristic feature is that all elements make an inseparable contribution to the functioning of the system. For this reason, it would be wrong to consider any device with some software and a network connection to be a CPS.

Especially in manufacturing, CPSs are often mechatronic systems, e.g. interconnected robots. Embedded systems form the core of these systems; they are interconnected by networks and supplemented by central software systems, e.g. in the cloud.

Due to their interconnection, cyber-physical systems can also be used to automatically control infrastructures that are located far apart from each other or span a large number of locations. Until now, these could only be automated to a limited extent. Some examples of this are decentrally controlled power grids, logistics processes and distributed production processes.

Thanks to their automation, digitalization and interconnection, CPS provide a high degree of flexibility and autonomy in manufacturing. This enables matrix production systems, which support a wide range of variants in both large and small quantities [1].

So far, no standardized definition has been established, as the term is used broadly and non-specifically and is sometimes used to market utopian-futuristic concepts [2].

Where did this term originate?

In recent years, innovations in the fields of IT, network technology, electronics, etc. have made complex, automated and interconnected control systems possible. Academic disciplines such as control engineering and information technology offered no suitable concept for the new mix of technical processes, complex data and software. As a result, a new concept with a suitable name was needed.

The term is closely related to the Internet of Things (IoT). Moreover, cyber-physical systems make up the technical core of many innovations that bear the label “smart” in their name: Smart Home, Smart City, Smart Grid etc.

Features of CPS

As mentioned above, there is no generally recognized definition. However, the following characteristics can be distilled from the multitude of definitions (a minimal sketch follows after the list):

  • At its core there is a physical or technical process.
  • There are sensors and models to digitally record the status of the process.
  • There is complex software to allow for a (partially) automatic decision to be made based on the status. While human intervention is possible, it is not absolutely required.
  • There are technical means for implementing the selected decision.
  • All elements of the system are interconnected in order to exchange information.
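
The sketch below illustrates these characteristics as a minimal sense-decide-act loop. The sensor and actuator are simulated and the thresholds are arbitrary; a real CPS would additionally exchange information with other systems over a network.

```python
# Minimal sense-decide-act sketch of the characteristics listed above (simulated).
import random
import time

def read_temperature() -> float:
    """Sensor: digitally record the state of the physical process (simulated here)."""
    return 60.0 + random.uniform(-15.0, 15.0)

def decide(temperature: float) -> str:
    """Software: derive a decision from the recorded state, without human intervention."""
    return "cool_down" if temperature > 70.0 else "continue"

def actuate(action: str) -> None:
    """Actuator: implement the selected decision in the physical process (simulated)."""
    print(f"actuator command: {action}")

for _ in range(5):
    state = read_temperature()
    actuate(decide(state))
    time.sleep(1)
```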

One CPS design model is the layer model according to [2].

Figure 1: Layer model for the internal structure of cyber-physical systems

Examples of cyber-physical systems

  • Self-controlled manufacturing machines and processes (Smart Factory)
  • Decentralized control of power generation and consumption (Smart Grids)
  • Household automation (Smart Home)
  • Traffic control in real time, via centralized or decentralized control with traffic management systems or apps (element of the Smart City)

Example of an industrial cyber-physical system

This example shows a manufacturing machine that can operate largely autonomously thanks to software and interconnection, thereby minimizing idle times, downtimes and maintenance times. Let us assume, as an example, that we are dealing with a machine tool for cutting.

Interconnected elements of the system:

  • Machine tool with
    • QR code camera for workpiece identification
    • RFID reader for tool identification
    • Automatic inventory monitoring
    • Wear detection and maintenance prediction
  • Central IT system for design data and tool parameters (CAM)
  • MES/ERP system

The manufacturing machine of our example is capable of identifying the workpiece and the tool. The common technologies RFID or QR code can be used for this purpose. A central IT system manages design and specification data, e.g. a computer-aided manufacturing system (CAM) for CNC machines. The manufacturing machine retrieves all the data required for processing from the central system using the ID of workpiece and tool. As a result, there is no need to enter parameters manually as the data is processed digitally throughout. The identification allows the physical layer and data layer of a cyber-physical system to be linked.
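
A possible shape of this identification step is sketched below: the machine resolves the scanned workpiece and tool IDs against a central parameter service and receives the machining data. The service URL, endpoint, and response fields are assumptions for illustration, not a real CAM interface.

```python
# Hedged sketch of the identification step: resolve workpiece and tool IDs against a
# central system and retrieve the machining parameters. URL and fields are assumed.
import requests

CAM_SERVICE = "http://cam.example.local/api/v1"  # hypothetical central IT system

def fetch_machining_parameters(workpiece_id: str, tool_id: str) -> dict:
    """Retrieve all data required for processing using the workpiece and tool IDs."""
    response = requests.get(
        f"{CAM_SERVICE}/parameters",
        params={"workpiece": workpiece_id, "tool": tool_id},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()

# IDs as read from the QR code camera and the RFID reader.
params = fetch_machining_parameters(workpiece_id="WP-4711", tool_id="T-0815")
print(params.get("spindle_speed_rpm"), params.get("feed_rate_mm_min"))
```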

The digitized data for workpieces, machines and other manufacturing elements can be grouped under the term digital twin, which was presented in the blog article “Digital twins: a central pillar of Industry 4.0” by Marco Grafe.

The set-up tools and the material and resource inventories available in the machine are checked on the basis of the design and specification data. The machine notifies personnel if necessary. By performing this validation before processing begins, rejects can be avoided and utilization increased.

The machine monitors its status (in operation, idle, failure) and reports the status digitally to a central system that records utilization and other operating indicators. These types of status monitoring functions are typically integrated into a Manufacturing Execution System (MES) and are now in widespread use. In our example, the machine is also able to measure its own wear and tear in order to predict and report maintenance requirements, thereby increasing its autonomy. These functions are known as predictive maintenance. All these measures improve machine availability and make maintenance and work planning easier.
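
The following simplified sketch illustrates the self-monitoring idea: the machine tracks a wear indicator and estimates the remaining operating hours until a maintenance limit is reached. The linear wear model and all thresholds are illustrative assumptions.

```python
# Simplified sketch of self-monitoring: estimate remaining life from a wear indicator.
# Thresholds and the linear wear model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MachineStatus:
    state: str              # "in operation", "idle", "failure"
    wear_indicator: float   # 0.0 = new, 1.0 = worn out
    wear_rate_per_hour: float

def remaining_hours(status: MachineStatus, limit: float = 0.9) -> float:
    """Estimate operating hours until the wear limit is reached (linear model)."""
    if status.wear_rate_per_hour <= 0:
        return float("inf")
    return max(0.0, (limit - status.wear_indicator) / status.wear_rate_per_hour)

status = MachineStatus(state="in operation", wear_indicator=0.72, wear_rate_per_hour=0.002)
if remaining_hours(status) < 100:
    print("schedule maintenance at the next planned downtime")
```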

Through the use of electronics and software, our fictitious manufacturing machine is capable of working largely autonomously. The role of humans is reduced to feeding, set-up, troubleshooting and maintenance; humans only support the machine in the manufacturing process.

References

[1] Forschungsbeirat Industrie 4.0, „Expertise: Umsetzung von cyber-physischen Matrixproduktionssystemen,“ acatech – Deutsche Akademie der Technikwissenschaften, München, 2022.

[2] P. H. J. Nardelli, Cyber-physical systems: theory, methodology, and applications, Hoboken, New Jersey: Wiley, 2022.

[3] P. V. Krishna, V. Saritha and H. P. Sultana, Challenges, Opportunities, and Dimensions of Cyber-Physical Systems, Hershey, Pennsylvania: IGI Global, 2015.

[4] P. Marwedel, Eingebettete Systeme: Grundlagen Eingebetteter Systeme in Cyber-Physikalischen Systemen, Wiesbaden: Springer Vieweg, 2021.

Digital twins: a central pillar of Industry 4.0

Computers and automation technology began to gain a foothold in the production industry in the 1970s, making it possible to set up flexible mass production options in locations away from physical assembly lines. With the advent of this technology, machines were optimised to ensure maximum workpiece throughput. This process, which continued into the 2000s, is generally known as the third industrial revolution.

Industry 4.0 – as it was coined by the German government and others in the 2010s – has different aims, however. Since machine uptimes and entire production lines in many industries are already being optimised as much as they possibly can, attention is now turning to the methods that can be used to optimise downtimes – a term that is primarily used to refer to points in production when machines are at a standstill or are producing reject parts.

Two major approaches to optimising downtimes

100% capacity utilisation as a goal for every machine

If my production machines aren’t doing what they were purchased and installed for, they are not being productive and are not adding any value. There are two types of downtime to consider: planned and unplanned. To ensure that no time is wasted even when downtimes are taking place as planned, machines can be deployed flexibly at several points in the production chain – resulting in more of their capacity being used overall. A good example of this is a six-axis robot with a gripper that can be moved from one workstation to another as needed, working on the basis of where it can be put to good use at that point in time. This approach uses the concept of changeable production. Unplanned downtimes, meanwhile, usually happen as a result of component or assembly failures within a machine. In these cases, machine maintenance needs to be scheduled so that it is only performed at the times when planned maintenance is due to take place anyway (as part of predictive maintenance measures).

Reducing rejects

Another major approach involves reducing the amount of rejects produced: this also makes it easier to perform the process monitoring stages that take place after a production cycle (or perhaps even during it). Process control builds on the process monitoring stage, feeding the findings it gains back into the production cycle (process) in order to improve the outcomes of the next cycle. It is easy to visualise how this could take place in a single milling cycle, for example: after a circle outline has been milled, the diameter of the circle is measured and then evaluated. If there are any deviations, the milling programme can then be adapted for the next part undergoing the process. More complex approaches aim to manage process monitoring across multiple steps – or perhaps even multiple steps distributed across multiple subcontractors.
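
To make the milling example concrete, the sketch below compares the measured diameter with the target value and feeds a damped correction of the tool offset into the next cycle. The tolerance, gain, and measurement values are invented for illustration.

```python
# Worked sketch of the milling example: measure, compare with the target diameter,
# and correct the tool offset for the next part. All values are illustrative.
TARGET_DIAMETER_MM = 50.000
TOLERANCE_MM = 0.010

def corrected_offset(current_offset_mm: float, measured_mm: float,
                     gain: float = 0.8) -> float:
    """Adjust the tool offset for the next part based on the measured deviation."""
    deviation = measured_mm - TARGET_DIAMETER_MM
    if abs(deviation) <= TOLERANCE_MM:
        return current_offset_mm  # within tolerance, no correction
    # Damped correction to avoid over-steering the process.
    return current_offset_mm - gain * deviation

offset = 0.0
for measured in (50.032, 50.008, 49.996):  # measurements from consecutive cycles
    offset = corrected_offset(offset, measured)
    print(f"measured {measured:.3f} mm -> offset for next part {offset:+.3f} mm")
```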

Data: the key to more efficient planning

Improved planning and control over processes are important elements in both of these approaches to optimising production. To improve planning, it is vital to have a significantly increased pool of data that is highly varied and needs to be analysed in specific ways, but the sheer volume and level of detail involved in this kind of data far exceed the capacity of humans to perform the analyses. Instead, complex software solutions are needed to derive added value from data.

Depending on the level of maturity of both the data and the analyses, software can be a useful tool – and is increasingly being called upon – in supporting the work of humans in production environments. The level of input that software has can range all the way up to fully autonomous production. The data analytics maturity model shown in Figure 1 can help you map out what is happening in your own production scenario. It provides an overview of the relationship between data maturity and the potential impact of software on the development process.

Figure 1: Data analytics maturity model. Reproduced with permission from [1].

At the lowest level of the model (Descriptive), the data only provides information about events on a machine or production line that have happened in the past. A lot of human interaction, diagnostics and – ultimately – human decisions are required to initiate the necessary machine actions and put them into practice. The more mature the data (at the Diagnostic and Predictive levels), the less human interaction is needed. At the highest level (Prescriptive), which is what useful systems aim to achieve, it is possible for software to plan and execute every production process fully autonomously.

Aspects of data in digital twins

As soon as data of any kind starts to be collected, questions concerning how it needs to be sorted and organised quickly arise. Let’s take the example of a simple element or component as shown in Figure 2. Live data is produced cyclically while this component is being operated within a production machine; however, data of other kinds – such as the bill of materials and technical drawings – may be more pertinent during other stages in the component’s life (during its manufacture, for example). If the component is not actually present but you still want to collect data about it, you can draw on a 3D model and a functioning behaviour model that can be used to simulate statuses and functions.

Figure 2: Various aspects of data shown using the example of the Zeiss Stylus straight M5.

On encountering the term “digital twin”, most people’s minds simply jump straight to a 3D model that has been augmented with behaviour data and can be used for simulation purposes. However, this is only one side of the story in an Industry 4.0 context.

Germany’s Plattform Industrie 4.0 initiative defines a digital twin as follows:

A digital twin is a digital representation of a product* that is sufficient for fulfilling the requirements of a range of application cases.

Under this definition, a digital twin is even considered to exist in the specific scenario I outlined above – which indicates that there are lots of different ways of defining what a digital twin is, depending on your perspective and application. The most important thing is that you and the party you are working with agree on what constitutes a digital twin.

Digital twin: type and instance

To gain a better understanding of the digital twin concept, we need to look at two different states in which it can exist: type and instance.

A digital twin type describes all the generic properties of all product instances. The best comparison in a software development context is a class. Digital twin types are mostly used in the life cycle stages that take place before a product is manufactured – that is, during the engineering phase. Digital twin types are often linked to test environments in which efforts are made to optimise the properties of the digital twin, so that an improved version of the actual product can then be manufactured later on.

A digital twin instance describes the digital twin of exactly one specific product, and is linked explicitly with that product. There is only one of this digital twin instance anywhere in the world. However, it is also completely possible for this instance of a digital twin to represent just one specific aspect of the actual product – which means that multiple digital twins, all representing different aspects, can exist alongside each other. The closest comparison for this in a software development context is an object. Digital twin instances are usually encountered in the context of operating actual products. In many cases, digital twin instances are derived from types (in a similar way to objects being derived from classes in software development).
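
The class/object analogy can be made explicit in a few lines of code. The sketch below is purely illustrative: the twin type carries the generic properties of the series, while each instance is tied to exactly one physical machine via its serial number.

```python
# Sketch of the type/instance analogy: the digital twin type corresponds to a class,
# a digital twin instance to an object linked to one specific real asset.
class MillingMachineTwinType:
    """Digital twin type: generic properties shared by all machines of this series."""
    max_spindle_speed_rpm = 18_000
    axis_count = 5

class MillingMachineTwinInstance(MillingMachineTwinType):
    """Digital twin instance: exactly one physical machine, identified by serial number."""
    def __init__(self, serial_number: str):
        self.serial_number = serial_number
        self.operating_hours = 0.0   # instance data collected during operation

    def log_operation(self, hours: float) -> None:
        self.operating_hours += hours

# One instance per real machine; the type exists before any machine is built.
machine_twin = MillingMachineTwinInstance(serial_number="MM-2023-0042")
machine_twin.log_operation(7.5)
```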

Digital twins in the manufacturing process

In the context of Industry 4.0 and, therefore, the manufacturing process, it is important to be constantly aware of whether a digital twin is referring to the production machine or the workpiece (product) to avoid any misunderstandings. Both can have a digital twin, but the applications in which each is involved are very different.

Figure 3: Digital twin of a machine and a workpiece. Both scenarios are possible.

If the process of planning the actual production work were the most important aspect of the scenarios I am presenting here, I would be more likely to deal with digital twins of my production machines. If my focus were on augmenting data for my workpiece, and therefore for my product, I would be more likely to turn to a digital twin for the product. There is no clear distinction in how these aspects are used: both approaches can be used for me and my production work, but may also be beneficial for the users of my products.

Categorising digital twins according to the flow of information

To gain a better understanding of how a digital model evolves into a digital twin, it is possible to consider aspects relating to the flow of information that leads from a real-life object to a digital object [2].

Today, digital models of objects are already an industry standard. A 3D model of a component provides a succinct example of this: it is augmented by information from a 3D CAD program, for example, and can be used to visualise pertinent scenarios (such as collision testing using other 3D models). When a digital model is cyclically and automatically enhanced with data during the production process, the result is what is known as a digital shadow. A simple example of this is an operating hours counter for an object: the counter is automatically triggered by the actual object and the data is stored in the digital object. Analyses would continue to be conducted manually in this case. Now and again, the term digital footprint is also used to mean the same thing as a digital shadow. If the digital shadow then automatically feeds information back to the actual object and affects how it works, the result is a digital twin. In Plattform Industrie 4.0 contexts, this is also referred to as a cyber-physical system, in which the digital twin and the real twin are linked to one another via data streams and have an impact on one another.

Figure 4: Definition of digital model, shadow and twin based on flows of information [2].
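
A compact way to see the three stages is to look at which direction data flows automatically, as in the following illustrative sketch: the automatic update of the operating hours counter turns the model into a shadow, and the feedback that limits the asset's load turns the shadow into a twin. All names and the load rule are assumptions for this example.

```python
# Sketch of the three stages based on the direction of information flow (simulated).
class DigitalObject:
    """Digital model: a static description; automatic data flows make it a shadow/twin."""
    def __init__(self):
        self.operating_hours = 0.0   # automatically updated -> digital shadow
        self.max_load_percent = 100  # fed back to the asset -> digital twin

class RealAsset:
    def __init__(self, digital_object: DigitalObject):
        self.digital_object = digital_object
        self.load_percent = 100

    def run_for(self, hours: float) -> None:
        # Real object -> digital object: automatic update makes this a digital shadow.
        self.digital_object.operating_hours += hours

    def apply_feedback(self) -> None:
        # Digital object -> real object: the feedback loop makes this a digital twin.
        if self.digital_object.operating_hours > 1000:
            self.digital_object.max_load_percent = 80
        self.load_percent = self.digital_object.max_load_percent

asset = RealAsset(DigitalObject())
asset.run_for(1200)
asset.apply_feedback()
print(asset.load_percent)  # reduced load derived from the digital twin
```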

It is only really useful to break things down based on flows of information in the case of digital twin instances. The same breakdown is not used for digital twin types because the actual object simply does not exist. If a digital twin encompasses multiple aspects, a breakdown of this kind should be applied to each separate aspect, since different definitions are applied to the various aspects.

From digital twin to improved production planning

The definitions and categorisations introduced up to this point should be enough to establish a common language for describing digital twins. While it is not possible to come up with a single definition that covers digital twins, we do not actually need one either.

So why do we need digital twins anyway?

Considering the essential role that data and evaluations play in improving production efficiency, there are some steps that can be taken to achieve good results:

  1. Centralise all data collected to date
    All the data that a machine, for example, has accumulated up to a certain point is currently stored according to aspect in most companies’ systems. For instance, maintenance data is compiled in an Excel sheet that the maintenance manager possesses, but quality control data concerning the workpieces is kept in a CAQ database. There are no logical links between the two aspects of data even though they could have a direct or indirect relationship with one another. But it also takes some effort to assess whether there actually is a relationship between them. The only way to identify relationships (with the help of software) is to store the data in a central location with logical links (see the sketch after this list). As a result, it may then be possible to generate added value from the data.
  2. Use standardised interfaces
    When data is stored centrally, it is useful for it to be accessible via standardised interfaces. Once this has been established, it is very easy to program automatic flows of information, in turn making it easier to manage the transition from simple model to cyber-physical system. The resulting digital twin of a component or a machine forms the basis for subsequent analyses.
  3. Create business logic
    Once all the conditions for automated data analyses have been put in place, it is easier to use software (business logic) that assists in making better decisions – or is able to make decisions all on its own. This is where the added value that we are aiming for comes in.

While stages 1 and 2 create only a little added value, or none at all, they form the basis for stage 3.
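
The sketch referenced in step 1 above illustrates stages 1 and 2 in a deliberately minimal form: maintenance and quality data are stored centrally, logically linked via a common machine ID, and queried through one standardized interface. The schema and field names are assumptions for illustration.

```python
# Minimal sketch of stages 1 and 2: central storage with logical links and one
# standardized query interface. Schema and values are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE maintenance (machine_id TEXT, performed_at TEXT, activity TEXT);
    CREATE TABLE quality (machine_id TEXT, measured_at TEXT, reject_rate REAL);
""")
db.execute("INSERT INTO maintenance VALUES ('M01', '2023-03-01', 'spindle bearing replaced')")
db.execute("INSERT INTO quality VALUES ('M01', '2023-02-20', 4.2)")
db.execute("INSERT INTO quality VALUES ('M01', '2023-03-02', 0.8)")

# Standardized access: one query relates both aspects via the common machine ID,
# the precondition for any automated analysis (stage 3).
rows = db.execute("""
    SELECT q.measured_at, q.reject_rate, m.activity
    FROM quality q JOIN maintenance m ON q.machine_id = m.machine_id
    WHERE q.machine_id = 'M01'
""").fetchall()
print(rows)
```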

While I always advocate for changing or improving production processes instead of applications, it is clear that there is a certain amount of fundamental work that has to be put in first. Creating digital twins is an essential stepping stone on the road to future success – and therefore an important pillar in an Industry 4.0 context.

Practical solutions

Plattform Industrie 4.0’s Asset Administration Shell concept is a useful tool that allows you to start on an extremely small scale but then benefit from agile expansion. The concept predefines general interfaces, with specific interfaces then able to be added later. It provides a basis for creating a standardised digital twin for any given component. Whether you choose to use this concept as a starting point or program an information model that is all your own is up to you – however, the advantage of using widespread standards is that data may be interoperable, something that is particularly useful in customer/supplier relationships. We can also expect to see the market introduce reusable software that is able to handle these exact standards. As of 2022, the Asset Administration Shell concept is a suitable tool for creating digital twins in industry contexts – and this is improving all the time. Now, the task is to use it in complex projects.
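
As a rough impression of what such a standardized digital twin could look like, the following sketch shows a simplified, AAS-inspired structure for a component. It is not the normative Asset Administration Shell metamodel, and all identifiers and submodel names are invented; in practice, an AAS SDK or toolchain would be used instead of a plain data structure.

```python
# Illustrative sketch only: a simplified, AAS-inspired structure for a component twin.
# Identifiers and submodel names are assumptions, not the normative AAS metamodel.
component_shell = {
    "id": "https://example.com/aas/stylus-m5-0042",        # hypothetical identifier
    "asset": {"globalAssetId": "https://example.com/assets/stylus-m5-0042"},
    "submodels": [
        {
            "idShort": "TechnicalData",
            "elements": {"length_mm": 50.0, "thread": "M5", "material": "steel"},
        },
        {
            "idShort": "OperationalData",
            "elements": {"operating_hours": 1320.5, "switching_cycles": 84210},
        },
    ],
}

# A generic, predefined interface would let consumers discover submodels by idShort.
technical = next(s for s in component_shell["submodels"] if s["idShort"] == "TechnicalData")
print(technical["elements"]["thread"])
```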

Sources:

[1] J. Hagerty: 2017 Planning Guide for Data and Analytics. Gartner 2016

[2] W. Kritzinger, M. Karner, G. Traar, J. Henjes and W. Sihn: 2018 Digital Twin in manufacturing: A categorical literature review and classification.

*The original definition uses the word asset rather than product. The word I have chosen here is simpler, even if it does not cover all bases.