IoT (and more) with Azure Digital Twins

As Industry 4.0 concepts mature, we take a look at the Azure flavour of digital twins. A digital twin is a digital representation of a real-world thing's (or person's) properties, either in real time (for control and predictive maintenance) or in simulation, to learn and test behaviours before actual deployment. As such, Azure Digital Twins is closely related to the Azure IoT services; however, it can do a bit more, as we will see below.


How are models created and deployed?

Azure Digital Twins relies on the Digital Twins Definition Language (DTDL), which follows JavaScript Object Notation for Linked Data (JSON-LD), making it language-agnostic and compatible with established ontology standards. The root structure is declared as an interface, which can contain telemetry, properties, relationships, and components. Telemetry is event-based data (like temperature readings), properties hold state values (e.g., a name or an aggregate consumption), relationships describe connections between twins (for example, a floor contains a room), and components are other models referenced in an interface by ID (e.g., a phone has a camera component).
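To make this concrete, a minimal hand-written DTDL interface for a room could look roughly like the following sketch (the DTMI identifiers, names, and referenced models are illustrative and not taken from Microsoft's sample ontologies):

{
  "@id": "dtmi:com:example:Room;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Room",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Property", "name": "energyConsumption", "schema": "double", "writable": true },
    { "@type": "Relationship", "name": "contains", "target": "dtmi:com:example:Sensor;1" },
    { "@type": "Component", "name": "thermostat", "schema": "dtmi:com:example:Thermostat;1" }
  ]
}

Note that telemetry represents a stream of events and is not stored on the twin itself, whereas property values are part of the twin's state and can be queried later.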

Models support inheritance, so one can already think of these models as plain (POCO-like) classes. Indeed, from these models, instances are created which live in Azure Digital Twins. The logical question arises: if these are somehow classes, what about methods? This is where serverless Azure Functions fit in very well, as all events from Digital Twins can be captured and processed with Azure Functions. Hence, Azure Digital Twins, paired with Azure Functions, creates a powerful serverless infrastructure that can implement very complex event-driven scenarios by utilizing the provided REST API for model and data manipulation. The price for this flexibility is a rather steep learning curve, and one must write the functions for data input and output from scratch.

JSON models can be created by hand or, even easier, adapted from the sample ontologies (prefabricated domain models) that Microsoft provides in Excel, which can be extended or adapted. Using Azure Digital Twins Explorer (currently in preview in the Azure portal), these models can be uploaded to Azure, with instance and relationship creation already automated. Underneath Azure Digital Twins Explorer is the REST API, so all of this can also be done programmatically.
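Programmatically, the same steps can be scripted with the .NET SDK (Azure.DigitalTwins.Core). The following is a minimal sketch, e.g. as a top-level program; the instance URL, file name, model IDs, and twin IDs are placeholders, and authentication is assumed to happen via Azure.Identity:

using System;
using System.IO;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Connect to the Azure Digital Twins instance (placeholder URL)
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance>.api.<region>.digitaltwins.azure.net"),
    new DefaultAzureCredential());

// Upload a DTDL model, e.g. the Room interface sketched above
string roomModel = File.ReadAllText("Room.json");
await client.CreateModelsAsync(new[] { roomModel });

// Create a twin instance of that model
var room = new BasicDigitalTwin
{
    Id = "Room-Dresden-101",
    Metadata = { ModelId = "dtmi:com:example:Room;1" },
    Contents = { ["energyConsumption"] = 0.0 }
};
await client.CreateOrReplaceDigitalTwinAsync(room.Id, room);

// Create a relationship ("floor contains room")
var rel = new BasicRelationship
{
    Id = "floor1-contains-room101",
    SourceId = "Floor-Dresden-1",
    TargetId = room.Id,
    Name = "contains"
};
await client.CreateOrReplaceRelationshipAsync(rel.SourceId, rel.Id, rel);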

In our sample smart building implementation (depicted in Figure 1) we created and uploaded models (shown on the left) and instances with relationships (shown in the graph on the right). There is a company model instance for ZEISS Digital Innovation (ZDI), which has two buildings, Dresden and Munich, each containing floors, rooms, and elevators.

Figure 1: Modeling

How does data come into the system?

In our smart building implementation (depicted in Figure 2) we utilize IoT Hub to collect sensor data from rooms (temperature, energy consumption, number of people in the rooms, etc.), as well as OPC UA data converted from the elevators.

Figure 2: Architecture

Normally, IoT Hub integrates with Time Series Insights out of the box with a couple of clicks, but a few functions are necessary to intercept this data with Digital Twins. The first function reacts to IoT Hub events via Event Grid and propagates the updates to Digital Twins, which can then trigger other functions, for example to calculate and update the aggregate energy consumption in a room and propagate it to all parents. All these changes in Digital Twins are streamed to an Event Hub in an update-patch format that is not readable by Time Series Insights. Here comes another function, which converts these patch changes and streams them to a second Event Hub to which Time Series Insights can subscribe and save the data. Sounds over-engineered? It is! As mentioned, there is a lot of heavy lifting that needs to be done from scratch, but once familiar with the concepts, the prize is the flexibility to implement almost any scenario: from vertical hierarchies with data propagation (such as heat consumption aggregations) to horizontal interactions between twins based on relationships (as when one elevator talks to and influences the other elevators' performance in the same building based on an AI model).
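To make the first step of this chain more tangible, the function that forwards a device reading to the corresponding twin could look roughly like the following sketch. The instance URL comes from an assumed app setting, the twin ID is assumed to equal the IoT Hub device ID, the JSON body layout and property name are ours, and error handling is omitted:

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure;
using Azure.DigitalTwins.Core;
using Azure.Identity;
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class IoTHubToTwinFunction
{
    // Created once per function instance and reused across invocations
    private static readonly DigitalTwinsClient Client = new DigitalTwinsClient(
        new Uri(Environment.GetEnvironmentVariable("ADT_SERVICE_URL")),
        new DefaultAzureCredential());

    [FunctionName("IoTHubToTwin")]
    public static async Task Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        // IoT Hub telemetry routed through Event Grid
        JsonElement msg = JsonDocument.Parse(eventGridEvent.Data.ToString()).RootElement;
        string deviceId = msg.GetProperty("systemProperties")
                             .GetProperty("iothub-connection-device-id").GetString();

        // Assumes the device sends a JSON-encoded body containing an "energyConsumption" value
        double consumption = msg.GetProperty("body").GetProperty("energyConsumption").GetDouble();

        // Patch the twin that corresponds to the device
        var patch = new JsonPatchDocument();
        patch.AppendReplace("/energyConsumption", consumption);
        await Client.UpdateDigitalTwinAsync(deviceId, patch);

        log.LogInformation("Updated twin {TwinId} with consumption {Value}", deviceId, consumption);
    }
}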

Another powerful feature is that we can stream and mix data from virtually any source into Digital Twins to extend their use for business intelligence. From ERP and accounting systems to sensors and OPC UA servers, data can be fed in and cross-referenced in real time to create useful information streams – from tea bags that will run out in the kitchen on a snowy winter day to whether the monetary costs of elevator maintenance are proportional to the number of trips in a year.

How are the data analyzed and reported?

In many industrial systems, and thanks to increasingly cheap storage, all telemetry data usually land in a time series store for analytics and archiving.

However, data alarms, averages, and aggregations can be a real asset for real-time reporting. Digital Twins offers a full REST API through which twins can be queried based on relationships, hierarchies, or values. These APIs can also be composed and exposed to third parties via API Management for real-time calls.
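For example, all rooms of a given floor with a high energy consumption could be retrieved through the SDK roughly like this (a sketch reusing the DigitalTwinsClient and the illustrative models from above; JsonElement comes from System.Text.Json, and the same query can be sent to the REST API's query endpoint):

string query =
    "SELECT room FROM DIGITALTWINS floor " +
    "JOIN room RELATED floor.contains " +
    "WHERE floor.$dtId = 'Floor-Dresden-1' AND room.energyConsumption > 100";

// Each result row is a JSON object with a "room" entry holding the projected twin
await foreach (JsonElement row in client.QueryAsync<JsonElement>(query))
{
    JsonElement room = row.GetProperty("room");
    Console.WriteLine($"{room.GetProperty("$dtId").GetString()}: " +
                      $"{room.GetProperty("energyConsumption").GetDouble()}");
}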

Another way is to utilize Time Series Insights for comprehensive analysis of complete data sets, or to use time series dumps to create interactive reports in Power BI.

Both real-time and historical reporting have their place; which one is optimal should be determined based on the concrete scenario.

Summary

Azure Digital Twins offers language-agnostic modeling that can accept a myriad of data types and supports various ontologies. In conjunction with serverless functions, very complex and powerful interactive solutions can be produced. However, this flexibility comes at the cost of implementing the data flows manually using events and functions. For this reason, it is reasonable to expect that Microsoft (or the open-source community) will provide middleware with generic functions and libraries for standard data flows in the future.

Patient care of the future – Digital Health Solutions with Azure Health Data Services

Since the beginning of the Covid-19 pandemic, the healthcare sector has been under enormous pressure. Demographic development, changes in the spectrum of diseases, legal regulations, cost pressure and a shortage of specialists, combined with the increasing demands of patients, present healthcare organisations with a number of challenges. Here, digitalisation and the use of modern technologies such as artificial intelligence and machine learning offer numerous opportunities for increasing efficiency, reducing errors and thus improving patient treatment.

Figure 1: Digital Health Solutions with Azure Health Data Services for optimal and future-proof patient care

Use of medical data as the basis for optimised patient care

The basis for the use of these technologies, and for future-oriented predictive and preventive care, is medical data. Such data can already be found everywhere today. However, most healthcare professionals and the medical devices in use still store it on premises, resulting in millions of isolated medical data sets. In order to get a comprehensive overview of a patient's medical history, to create treatment plans for patient-centred therapy based on it, and to derive overarching insights from these data sets, organisations need to integrate and synchronise health data from different sources.

To support the development of healthcare ecosystems, the major global public cloud providers (Microsoft Azure, Amazon Web Services and Google Cloud Platform) are increasingly offering special SaaS and PaaS services for the healthcare sector that can provide companies with a basis for their own solutions. Through our experience at ZEISS Digital Innovation as an implementation partner of Carl Zeiss Meditec AG and of customers outside the ZEISS Group, we recognised early on that Microsoft offers a particularly powerful healthcare portfolio and continues to expand it strongly. This became clear again at this year's Ignite.

ZEISS Digital Innovation (right) at Ignite 2021, talking about how to get long-term value from healthcare data with Microsoft Cloud for Healthcare.

Medical data platforms based on Azure Health Data Services

One possibility for building such a medical data platform as the basis of an ecosystem is the use of Azure Health Data Services. With the help of these services, the storage, access and processing of medical data can be made interoperable and secure. Thousands of medical devices can be connected to each other, and the data they generate can be used by numerous applications in a scalable and robust manner. As Azure Health Data Services are PaaS solutions, they can be used out of the box and are fully developed, managed and operated by Microsoft. They are highly available with little effort, designed for security, and compliant with regulatory requirements. This significantly reduces the implementation effort and thus also the costs.

Carl Zeiss Meditec AG also relies on Azure Health Data Services to develop its digital, data-driven ecosystem. The ZEISS Medical Ecosystem, developed together with ZEISS Digital Innovation, connects devices and clinical systems with applications via a central data platform, creating added value at various levels to optimise clinical patient management.

The DICOM service within Azure Health Data Services is used here as the central interface for device connection. As DICOM is an open standard for storing and exchanging information in medical image data management, the majority of medical devices that generate image data communicate using the DICOM protocol. Through an extensible connectivity solution based on Azure IoT Edge, these devices can connect directly to the data platform in Azure using the DICOM standard. This allows a wide range of devices that have been in use with customers for years to be integrated into the ecosystem. This increases acceptance and ensures that more data can flow into the cloud and be further processed to enable clinical use cases and develop new procedures.

Azure API for FHIR® serves as the central data hub of the platform. All data of the ecosystem are stored there in a structured way and linked with each other in order to make them centrally findable and available to the applications. HL7® FHIR® (Fast Healthcare Interoperability Resources) offers a standardised and comprehensive data model for healthcare data. Not only can it be used to implement simple and robust interfaces to one’s own applications, but it also ensures interoperability with third-party systems such as EMR systems (Electronic Medical Record), hospital information systems or the electronic patient record. The data from the medical devices, historical measurement data from local PACS solutions and information from other clinical systems are automatically processed, structured and aggregated centrally in Azure API for FHIR® after upload. This is a key factor in collecting more valuable data for clinical use cases and providing customers with a seamlessly integrated ecosystem.
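To give an idea of what these standardised interfaces look like: an application can retrieve a patient and the associated observations from Azure API for FHIR® with plain FHIR REST searches such as the following sketch (the service name and IDs are placeholders, and each request must carry an Azure AD bearer token in the Authorization header):

GET https://<your-fhir-service>.azurehealthcareapis.com/Patient/<patient-id>
GET https://<your-fhir-service>.azurehealthcareapis.com/Observation?subject=Patient/<patient-id>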

Figure 2: Building a medical data platform with Azure Health Data Services

Successful collaboration between ZEISS Digital Innovation and Microsoft

As early adopters of Azure Health Data Services, our development teams at ZEISS Digital Innovation work closely with the Azure Health Data Services product group at Microsoft headquarters in Redmond, USA, helping to shape the services for the benefit of our customers. In regular co-creation sessions between the ZEISS Digital Innovation and Microsoft teams, the solution design for features currently being implemented in Azure Health Data Services is discussed. In this way, we can ensure that even the most complex use cases currently known are taken into account.

We are working very closely with ZEISS Digital Innovation to shape Azure’s next generation health services alongside their customer needs. Their strong background in the development of digital medical products for their customers is a core asset in our collaboration and enables the development of innovative solutions for the healthcare sector.

Steven Borg (Director, Medical Imaging at Microsoft)

You too can benefit from our know-how and contact us. Together, we will develop the vision for your innovative solution and support you during implementation.

This post was written by:

Elisa Kunze

Elisa Kunze has been working at ZEISS Digital Innovation since 2013. In her various sales and marketing activities she has supported many different projects, teams and companies across various sectors. Today she works as a key account manager for clients in the health sector, supporting them in implementing their project visions.


Serverless Computing: An Overview of Amazon’s and Microsoft’s Services

With the serverless model, application components such as databases or data processing components are provided and operated by the cloud service provider, automatically and on demand. The cloud user is responsible for configuring these resources, e.g. with their own code or application-specific parameters, and for combining them.

The costs incurred depend on the capacities used, and scaling takes place automatically based on the load. The cloud service provider is responsible for the provision, scaling, maintenance, high availability and management of the resources.

Serverless computing is particularly convenient for workloads that are difficult to anticipate or are short-lived, for automation tasks, or for prototypes. Serverless computing is less suitable for resource-intensive, long-term, and predictable tasks because in this case, the costs can be significantly higher than with self-managed execution environments.

Building blocks

Within the framework of a "serverless computing" Advent calendar, we compared the cloud services of AWS and Azure. The daily windows were published under the hashtag #ZEISSDigitalInnovationGoesServerless.

Category      Service                     AWS                            Azure
COMPUTE       Serverless Function         AWS Lambda                     Azure Functions
COMPUTE       Serverless Containers       AWS Fargate (Amazon ECS/EKS)   Azure Container Instances / AKS
INTEGRATION   API Management              Amazon API Gateway             Azure API Management
INTEGRATION   Pub/Sub Messaging           Amazon SNS                     Azure Event Grid
INTEGRATION   Message Queues              Amazon SQS                     Azure Service Bus
INTEGRATION   Workflow Engine             AWS Step Functions             Azure Logic Apps
INTEGRATION   GraphQL API                 AWS AppSync                    Azure Functions with Apollo Server
STORAGE       Object Storage              Amazon S3                      Azure Storage Account
DATA          NoSQL Database              Amazon DynamoDB                Azure Table Storage
DATA          Relational Database         Amazon Aurora Serverless       Azure SQL Database Serverless
SECURITY      Identity Provider           Amazon Cognito                 Azure Active Directory B2C
SECURITY      Key Management              AWS KMS                        Azure Key Vault
SECURITY      Web Application Firewall    AWS WAF                        Azure Web Application Firewall
NETWORK       Content Delivery Network    Amazon CloudFront              Azure CDN
NETWORK       Load Balancer               Application Load Balancer      Azure Application Gateway
NETWORK       Domain Name Service         Amazon Route 53                Azure DNS
ANALYTICS     Data Stream                 Amazon Kinesis                 Azure Stream Analytics
ANALYTICS     ETL Service                 AWS Glue                       Azure Data Factory
ANALYTICS     Storage Query Service       Amazon Athena                  Azure Data Lake Analytics

We compiled an overview of the above-mentioned services and their characteristics, including some exemplary reference architectures, on a poster (an English version will follow). This overview offers a simple introduction to the topic of serverless architecture.

Figure 1: Preview poster “Serverless Computing”

We will gladly send you the poster in original size (1000 x 700 mm). Simply send us an e-mail with your address to info.digitalinnovation@zeiss.com. Please note our privacy policy.

Best practices for serverless functions

Each function should be responsible for a single task (single responsibility principle): this improves maintainability and reusability, and memory, access rights, and timeout settings can then be configured specifically for each function.

As the allotted memory of a Lambda function is increased, its CPU and network capacity increase as well. The optimal ratio between execution time and costs should be determined by benchmarking.

A function should not synchronously call another function. The wait causes unnecessary costs and increased coupling. Instead, asynchronous processing should be used, e.g. with message queues.

The deployment package of each function should be as small as possible, and large external libraries should be avoided; this improves the cold start time. Recurring initializations of dependencies should be executed outside of the handler function so that they only run once, at cold start. It is advisable to define operational parameters by means of a function's environment variables; this improves reusability.
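A minimal Azure Functions sketch illustrating the last two points (initialization outside the handler, configuration via an environment variable) as well as the queue-based decoupling mentioned above; the queue name, environment variable, and target endpoint are illustrative, and the same ideas apply to AWS Lambda:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrderFunction
{
    // Initialized once per function instance (at cold start), then reused across invocations
    private static readonly HttpClient Http = new HttpClient();
    private static readonly string ApiBaseUrl =
        Environment.GetEnvironmentVariable("API_BASE_URL");

    [FunctionName("ProcessOrder")]
    public static async Task Run([QueueTrigger("orders")] string orderJson, ILogger log)
    {
        // Only per-invocation work happens inside the handler
        var response = await Http.PostAsync(
            $"{ApiBaseUrl}/orders",
            new StringContent(orderJson, Encoding.UTF8, "application/json"));

        log.LogInformation("Order forwarded with status {StatusCode}", response.StatusCode);
    }
}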

The rights to access other cloud resources should be defined individually for each function and as restrictively as possible. Stateful database connections should be avoided; use service APIs instead.

Look Me in the Eyes – Application Monitoring with Azure Application Insights

With Application Insights, Microsoft provides an application monitoring service for development and DevOps. It can log virtually everything, from response times and request rates to errors and exceptions, page impressions, users, and user sessions.

Monitoring is not restricted to websites, either. Application Insights can also be used with web services and in the back end. It can even monitor desktop applications. The data can then be analyzed and interpreted in various ways (see Figure 1).

Figure 1: Possible applications of Application Insights (Source: https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview)

Logging

As a starting point, you need an Azure subscription with an Application Insights instance. Once the latter has been set up, you will find the so-called Instrumentation Key in the overview, which functions as a connection string.

As soon as the instance has been provided, you can immediately start the implementation. In terms of programming, you are in no way limited to Azure resources or .Net. Microsoft supports a wide variety of languages and platforms.

We will use a small .NET Core console application as an example. All you have to do is integrate the NuGet package Microsoft.ApplicationInsights, and you can get started.

First, you create a Telemetry Client. Simply insert the corresponding Instrumentation Key from your own Application Insights instance, and just like that, the application is ready for the first log entries.


  • Trace generates a simple trace log entry with a corresponding message and appropriate severity level.
  • Events are appropriate for structured logs that can contain both text and numerical values.
  • Metrics, on the other hand, are numerical values only, and therefore used mainly to log periodic events.
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

static void Main(string[] args)
{
    Console.WriteLine("Look me in the eyes");

    var config = TelemetryConfiguration.CreateDefault();
    config.InstrumentationKey = "INSTRUMENTATIONKEY";
    var tc = new TelemetryClient(config);

    // Track traces
    tc.TrackTrace("BlogTrace", SeverityLevel.Information);

    // Track custom events
    var et = new EventTelemetry();
    et.Name = "BlogEvent";
    et.Properties.Add("Source", "console");
    et.Properties.Add("Context", "Look me in the eyes");
    tc.TrackEvent(et);

    // Track custom metric
    var mt = new MetricTelemetry();
    mt.Name = "BlogMetric";
    mt.Sum = new Random().Next(1, 100);
    tc.TrackMetric(mt);

    // Flush buffered telemetry before the console app exits
    tc.Flush();
}

As a side note, keep in mind that log entries appear in Application Insights with a delay of up to five minutes.

Interaction with NLog

Application Insights can also be integrated into an existing NLog configuration in a few simple steps.

You have to install the NuGet package Microsoft.ApplicationInsights.NLogTarget, and then add the following entries to the NLog configuration:

  • Add Extensions with reference to the Application Insights Assembly
  • New target of the Application Insights Target type (again specifying your own instrumentation key)
  • New rule targeting the Application Insights Target
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      throwConfigExceptions="true">

  <extensions>
    <add assembly="Microsoft.ApplicationInsights.NLogTarget" />
  </extensions>

  <targets>
    <target name="logfile" xsi:type="File" fileName="log.txt" />
    <target name="logconsole" xsi:type="Console" />
    <target xsi:type="ApplicationInsightsTarget" name="aiTarget">
      <instrumentationKey>INSTRUMENTATIONKEY</instrumentationKey>
      <contextproperty name="threadid" layout="${threadid}" />
    </target>
  </targets>

  <rules>
    <logger name="*" minlevel="Info" writeTo="logconsole" />
    <logger name="*" minlevel="Debug" writeTo="logfile" />
    <logger name="*" minlevel="Trace" writeTo="aiTarget" />
  </rules>
</nlog>

Analysis

The data are then analyzed by means of the Application Insights portal. All the logs can subsequently be found in the respective table under Logs (see Figure 2).

Figure 2: Analyses with Application Insights

The trace logs created in the console application can be found in the traces table. Queries are phrased using the Kusto Query Language (KQL). The traces from the example above can be requested using the following query:

traces
| where message contains "BlogTrace"

The logged metrics can also be represented as a line chart using the following query (see Figure 3):

customMetrics
| where timestamp >= ago(12h)
| where name contains "Blog"
| render timechart 
Figure 3: Graphic representation of the logged metrics

Dashboards & alert rules

To identify irregularities at an early stage, you can create customized dashboards and alert rules. In the case of the metrics used above, you can pin the chart to an enabled dashboard. This can be continued with additional queries as desired until all the required information is compiled in an overview.

The following dashboard shows the metric of the console application. It also contains examples of information regarding server requests, incorrect queries, response times, and performance and availability (see Figure 4).

Figure 4: Information on the dashboard at a glance

If an anomaly occurs at a time when you do not have an eye on the dashboard, it is also possible to be alerted immediately by email or text message by means of alert rules.

Individual alert rules can be created and managed in the Alerts menu in the Application Insights portal. An alert rule consists of a condition (signal logic) and an action group.

For the condition, you select a signal, e.g. a metric, and define a threshold: “traces greater than 80”. If you get more than 80 trace entries during a defined period of time, the alert is triggered.

The action group defines precisely what is to be done in the case of an alert. Here, you can have simple notices sent to specified persons by email or text message, or program more complex actions by means of runbooks, Azure Functions, logic apps or webhooks (see Figure 5).

Figure 5: Various types of action in an action group

REST API

If it is necessary for the data to be processed outside of Application Insights as well, they can be requested by means of a REST API.

The URL for API requests consists of a base part and the desired operation. Operations are metrics, events or query. In addition, an API key has to be submitted as the "X-API-Key" HTTP header:

https://api.applicationinsights.io/v1/apps/{app-id}/{operation}/[path]?[parameters]

The app ID can be found in the settings under API Access.

Figure 6: URL for API requests (Source: https://dev.applicationinsights.io/quickstart)

Continuing with the metrics described above, this is the API request with a query operation for the total number of entries during the last 24 hours:

https://api.applicationinsights.io/v1/apps/{app-id}/query?query=customMetrics | where timestamp >= ago(24h) | where name contains "Blog" | summarize count()

The result is returned in JSON format:

{
    "tables": [
        {
            "name": "PrimaryResult",
            "columns": [
                {
                    "name": "count_",
                    "type": "long"
                }
            ],
            "rows": [
                [
                    13
                ]
            ]
        }
    ]
}
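Programmatically, such a request can be issued with any HTTP client. A minimal C# sketch, with the app ID and API key as placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var appId = "APP-ID";
        var query = Uri.EscapeDataString(
            "customMetrics | where timestamp >= ago(24h) | where name contains \"Blog\" | summarize count()");

        using var http = new HttpClient();
        // The API key created under "API Access" is passed as the X-API-Key header
        http.DefaultRequestHeaders.Add("x-api-key", "API-KEY");

        var json = await http.GetStringAsync(
            $"https://api.applicationinsights.io/v1/apps/{appId}/query?query={query}");
        Console.WriteLine(json);
    }
}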

Conclusion

As this example shows, centralized logging can easily be set up and managed with Application Insights. In addition to the quick and simple integration, the managed infrastructure is an advantage: there is no need to deal with hosting, and if the load increases, Application Insights scales automatically.

Cloud Special Day – OOP 2019

Not just start-ups but also large, well-established companies increasingly rely on cloud-based solutions to digitalize their supply chain. But which technical possibilities do cloud platforms like Amazon Web Services and Microsoft Azure offer for the development of critical applications, e.g. in the medical context? We at Saxonia Systems (since 03/2020 ZEISS Digital Innovation) addressed exactly these questions during our Cloud Special Day at OOP 2019 in Munich, together with, among others, Carl Zeiss Meditec AG.


Presentation 1

Safe and compliant: how to build a medical cloud platform

In their first contribution, Thorsten Bischoff (Carl Zeiss Meditec AG) and Dirk Barchmann (Saxonia Systems AG) offered an insight into the development of an internationally launched mobile application: it enables doctors to synchronise information concerning patients and surgeries between the doctor's office and a remote surgical suite using a cloud platform based on Amazon Web Services (AWS). The main focus was placed on the security of the data to be transmitted and stored (encryption in transit / at rest), whereby a large number of industry norms and legal regulations in the various target countries had to be observed and fulfilled. Examples include the GDPR in Europe, the HIPAA Privacy, Security and Transactions Rules in the USA, and the internationally recognised ISO 27001 standard. Central building blocks are cloud services that have already been reviewed and certified, such as the AWS Key Management Service (KMS) for encrypting data or the Amazon Simple Storage Service (S3) for storing it. Not only technical questions had to be clarified: organisational and procedural adjustments were also made because of the usage of cloud services in order to achieve the necessary certifications.


Presentation 2

Pre- and post-processing of cataract surgeries in the cloud

In the following presentation, Rainer Scheubeck (Carl Zeiss Meditec AG) and Alexander Casall (Saxonia Systems AG) reported how a solution used for the preparation, planning and post-processing of eye surgeries is developed and brought into production in the cloud.

In addition to the possibility of rolling out updates and maintaining data (e.g. master data) centrally, the dynamic scalability of the application was an argument for a cloud-based solution. To ensure the sustainability and extensibility of the application, its components run as Docker containers on the cluster management solution Kubernetes. The cloud-native services used, such as Azure Kubernetes Service and Azure Cosmos DB, are connected via standardised, widely adopted interfaces. This way, the application runs relatively independently of the public cloud provider, and the chosen provider can be changed with little effort. Because this application is a medical product, and its development and distribution are therefore regulated by several institutions, special emphasis was placed on infrastructure and test automation during conception and development.


Presentation 3

Explore new ways with the cloud

In the third talk, Dr. Andreas Zeidler (Carl Zeiss Meditec AG) and Leo Lindhorst (Saxonia Systems AG) presented the current findings of an R&D project at Carl Zeiss Meditec AG. Its goal is to validate how existing on-premises solutions can be migrated to AWS. The background is the rising demand from physicians for computation-intensive analyses, for which the established on-premises infrastructure is too weak. For validation, multiple minimal products are being developed on the basis of the prototype of a medical cloud platform in order to explore the different approaches to and challenges of a cloud migration. In this process, modern concepts and technologies such as data lakes and serverless architectures are used.


Presentation 4

Private Cloud – An alternative?

For those cases where the transformation to a public cloud infrastructure is not possible, the private cloud can be an alternative. In the last presentation of the day, Günther Buchner (Saxonia Systems AG) shared his experiences regarding the introduction of an OpenStack-based private cloud infrastructure in a large-scale enterprise. This is an infrastructure run centrally by the company on its own hardware for different parties within the organisation. It has cloud properties such as on-demand, scalable provisioning and billing of resources, high availability, and central implementation of cross-cutting functionalities. In the described scenario, OpenStack was used as the base technology.

OpenStack provides the Infrastructure as a Service (IaaS) layer on which, in the presented project, Cloud Foundry runs as the Platform as a Service (PaaS). Analogous to the public cloud, the costs are allocated to the departments that use the private cloud services based on a cost key. Although the introduction of such a complex IT infrastructure involves considerable effort, it also offers a number of advantages: for example, companies can increase agility and flexibility in software development with cloud technologies without being dependent on public cloud providers or having to deal with the more complex data protection situation with providers outside their own organization.