Look Me in the Eyes – Application Monitoring with Azure Application Insights

With Application Insights, Microsoft provides an application monitoring service for development and DevOps. It can log virtually everything: response times and request rates, errors and exceptions, page impressions, users, and user sessions.

Monitoring is not restricted to websites, either. Application Insights can also be used with web services and in the back end. It can even monitor desktop applications. The data can then be analyzed and interpreted in various ways (see Figure 1).

Figure 1: Possible applications of Application Insights (Source: https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview)

Logging

As a starting point, you need an Azure subscription with an Application Insights instance. Once the latter has been set up, you will find the so-called Instrumentation Key in the overview, which functions as a connection string.

As soon as the instance has been provisioned, you can start the implementation immediately. In terms of programming, you are in no way limited to Azure resources or .NET; Microsoft supports a wide variety of languages and platforms.

We will use a small .NET Core console application as an example. All you have to do is add the NuGet package Microsoft.ApplicationInsights, and you can get started.

First, you create a TelemetryClient. Simply insert the Instrumentation Key from your own Application Insights instance, and just like that, the application is ready for its first log entries. Three tracking methods are relevant here:

  • Trace generates a simple trace log entry with a corresponding message and appropriate severity level.
  • Events are appropriate for structured logs that can contain both text and numerical values.
  • Metrics, on the other hand, are numerical values only, and therefore used mainly to log periodic events.

The following console application demonstrates all three calls:
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Look me in the eyes");

        // Configure the TelemetryClient with the Instrumentation Key of your instance
        var config = TelemetryConfiguration.CreateDefault();
        config.InstrumentationKey = "INSTRUMENTATIONKEY";
        var tc = new TelemetryClient(config);

        // Track traces
        tc.TrackTrace("BlogTrace", SeverityLevel.Information);

        // Track custom events
        var et = new EventTelemetry();
        et.Name = "BlogEvent";
        et.Properties.Add("Source", "console");
        et.Properties.Add("Context", "Look me in the eyes");
        tc.TrackEvent(et);

        // Track custom metric
        var mt = new MetricTelemetry();
        mt.Name = "BlogMetric";
        mt.Sum = new Random().Next(1, 100);
        tc.TrackMetric(mt);

        // Send any buffered telemetry before the application exits
        tc.Flush();
    }
}

As a side note, keep in mind that log entries appear in Application Insights with a delay of up to five minutes.

Interaction with NLog

Application Insights can also be integrated into an existing NLog configuration in a few simple steps.

You have to install the NuGet package Microsoft.ApplicationInsights.NLogTarget and then add the following entries to the NLog configuration:

  • Add Extensions with reference to the Application Insights Assembly
  • New target of the Application Insights Target type (again specifying your own instrumentation key)
  • New rule targeting the Application Insights Target
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      throwConfigExceptions="true">

  <extensions>
    <add assembly="Microsoft.ApplicationInsights.NLogTarget" />
  </extensions>

  <targets>
    <target name="logfile" xsi:type="File" fileName="log.txt" />
    <target name="logconsole" xsi:type="Console" />
    <target xsi:type="ApplicationInsightsTarget" name="aiTarget">
      <instrumentationKey>INSTRUMENTATIONKEY</instrumentationKey>
      <contextproperty name="threadid" layout="${threadid}" />
    </target>
  </targets>

  <rules>
    <logger name="*" minlevel="Info" writeTo="logconsole" />
    <logger name="*" minlevel="Debug" writeTo="logfile" />
    <logger name="*" minlevel="Trace" writeTo="aiTarget" />
  </rules>
</nlog>
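
Once this configuration is in place, the usual NLog calls also end up in Application Insights. A minimal sketch of what that might look like in code (the messages are arbitrary examples):

using NLog;

class Program
{
    // Standard NLog logger; targets and rules come from the configuration shown above
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    static void Main(string[] args)
    {
        // With minlevel="Trace" on aiTarget, this entry is sent to Application Insights only
        Logger.Trace("NLog trace entry");

        // Info entries additionally reach the console and the log file
        Logger.Info("NLog info entry");

        // Flush and close all targets before the process exits
        LogManager.Shutdown();
    }
}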

Analysis

The data are then analyzed by means of the Application Insights portal. All the logs can subsequently be found in the respective table under Logs (see Figure 2).

Figure 2: Analyses with Application Insights

The trace logs created in the console application can be found in the traces table. Queries are phrased using the Kusto Query Language (KQL). The traces from the example above can be requested using the following query:

traces
| where message contains "BlogTrace"

The logged metrics can also be represented as a line chart using the following query (see Figure 3):

customMetrics
| where timestamp >= ago(12h)
| where name contains "Blog"
| render timechart 
Figure 3: Graphic representation of the logged metrics

Dashboards & alert rules

To identify irregularities at an early stage, you can create customized dashboards and alert rules. In the case of the metrics used above, you can pin the chart to a shared dashboard. You can continue adding further queries until all the required information is compiled in a single overview.

The following dashboard shows the metric of the console application. It also contains examples of information on server requests, failed requests, response times, performance, and availability (see Figure 4).

Figure 4: Information on the dashboard at a glance

If an anomaly occurs at a time when you are not watching the dashboard, alert rules can also notify you immediately by email or text message.

Individual alert rules can be created and managed in the Alerts menu of the Application Insights portal. An alert rule consists of a signal logic (condition) and an action group.

For the condition, you select a signal, e.g. a metric, and define a threshold, for example “traces greater than 80”: if more than 80 trace entries arrive during the defined period, the alert is triggered.

The action group defines precisely what is to be done in the case of an alert. Here, you can have simple notifications sent to specified persons by email or text message, or program more complex actions by means of runbooks, Azure Functions, logic apps, or webhooks (see Figure 5).

Figure 5: Various types of action in an action group

REST API

If the data also needs to be processed outside of Application Insights, it can be requested by means of a REST API.

The URL for API requests consists of a base part and the required operation. Available operations are metrics, events, and query. In addition, an API key has to be sent in the "X-API-Key" HTTP header:

https://api.applicationinsights.io/v1/apps/{app-id}/{operation}/[path]?[parameters]

The app ID can be found in the settings under API Access.

Figure 6: URL for API requests (Source: https://dev.applicationinsights.io/quickstart)

Continuing with the metrics described above, this is the API request with a query operation for the total number of entries during the last 24 hours:

https://api.applicationinsights.io/v1/apps/{app-id}/query?query=customMetrics | where timestamp >= ago(24h) | where name contains "Blog" | summarize count()


The result is returned in JSON format:

{
    "tables": [
        {
            "name": "PrimaryResult",
            "columns": [
                {
                    "name": "count_",
                    "type": "long"
                }
            ],
            "rows": [
                [
                    13
                ]
            ]
        }
    ]
}
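
If you want to issue such a request from code rather than from the browser, a minimal C# sketch could look like the following; the app ID and API key are placeholders, and the query is the same one used above:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ApiDemo
{
    static async Task Main()
    {
        const string appId = "APP-ID";    // from API Access in the settings
        const string apiKey = "API-KEY";  // generated API key

        // The KQL query is passed URL-encoded as the "query" parameter
        var query = Uri.EscapeDataString(
            "customMetrics | where timestamp >= ago(24h) | where name contains \"Blog\" | summarize count()");
        var url = $"https://api.applicationinsights.io/v1/apps/{appId}/query?query={query}";

        using (var client = new HttpClient())
        {
            // The API key is sent in the X-API-Key header
            client.DefaultRequestHeaders.Add("x-api-key", apiKey);
            var json = await client.GetStringAsync(url);
            Console.WriteLine(json); // JSON result as shown above
        }
    }
}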

Conclusion

As this example shows, centralized logging can easily be set up and managed with Application Insights. In addition to the quick and simple integration, the managed infrastructure is an advantage: there is no need to deal with hosting, and if the load increases, Application Insights scales automatically.

Mocks in the test environment (Part 1)

The complexity and interconnection of the software applications used in companies have increased dramatically in recent years. When releases are rolled out today, many of the new functions involve the exchange of data with other applications. We also notice this in software testing: the focus has expanded from pure system testing to integration testing, and this is where mocks come into play.

We work in a test centre and carry out comprehensive integration tests for our customers on their complex application infrastructure. To do so, we build the test environments to match the production environment. So why do we need mocks if the integration test environment already contains all the software components?

This objection is partly valid, but each staging level has a different testing focus. Mocks are used less in system testing, since that is where the software functions themselves are tested. They are used more often in integration testing: since it is not always possible to test the complete message flow, communication at the interface is tested instead.

If three components that sit directly one after the other in the message flow are tested together in the integration test, it cannot be guaranteed that the procedure runs without problems when all three software components are used. An error in one component could distort the message flow and be compensated further along, so the error is effectively masked. We therefore place mocks at the respective ends of the components to obtain well-defined inputs and outputs.

To explain the described problems and peculiarities of a mock, we use a REST service as the application under test. The REST service should be able to handle a GET and a POST command. If our component now requested personal data from the mock with a GET command, we could configure the mock to return standardized answers, or to perform simple calculations on POST commands.

Using Microsoft Visual Studio, you can quickly create a Web API project in C# that provides the basic functionality of a REST service. You only have to adapt the controller methods, and you have a working REST API available.

File > New > Project > Visual C# > Web > Web Application

If you look at the Web API controllers, you can see that certain URLs call functions within the API. In this example, we use a small Web API that serves as a hero database.

[Route("api/hero/get")]
[HttpGet]
public Iherolist Get()
{
    Console.WriteLine("Return of all heroes of the database"); 
    return heroRepository.GetAllHeroes();
    
}

[Route("api/hero/add")]
[HttpPost]
public string AddHeroDetails([FromBody]Hero hero)
{
    //AddHeroToDatabase(hero);
    return "The hero with the name: " + hero.Name + hero.Klasse + " has been added";
}
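
The controller snippet refers to a Hero model, an IHeroList return type, and a heroRepository that are not shown here. Based on the JSON responses further below, they could look roughly like this; the property names and the repository comment are assumptions:

using System.Collections.Generic;

// A possible shape for the hero model; property names follow the JSON fields used below
public class Hero
{
    public string Name { get; set; }
    public string Class { get; set; }
    public int Age { get; set; }
    public int Level { get; set; }
}

// A possible shape for the result type returned by Get(); with camelCase JSON
// serialization its property appears as "theList" in the response
public interface IHeroList
{
    List<Hero> TheList { get; set; }
}

// heroRepository is assumed to be a field of the controller that implements
// GetAllHeroes() and returns an IHeroList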

The route describes the URL path that calls the HttpGet (GET command) or HttpPost (POST command) method.

Example: http://localhost/api/hero/add

Once you have the REST API up and running, you can use an API tool such as Postman or a browser to send different REST commands to the URL. When a POST command is sent to the REST API, the service accepts it and recognizes from the URL that the function AddHeroDetails() should be called. The function takes the posted data and adds it to its database. As a response, it returns the function's return value, in this case the confirmation that the desired hero has been added.

POST command:

POST /api/hero/add HTTP/1.1
Host: localhost:53521
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: b169b96f-682d-165d-640f-dbcafe52789e
{ "name":"Maria", "class":"lord", "age":68, "level":57 }

Response:

The hero with the name: Maria and class lord has been added

We have now added the heroine Maria to the database with our POST command. The stored heroes can now be retrieved with the GET command. Here is an example of the GET command sent to the REST API, together with the corresponding response:

Request:

GET /api/hero/get HTTP/1.1
Host: localhost:53521
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: b3f19b01-11cf-85f1-100f-2cf175a990d9

Response:

{"theList":
[
    {"name":"Lukas","class":"healer","age":"25","level":"12"};
    {"name":"Martin","class":"warrior","age":"30","level":"13"};
    {"name":"Gustav","class":"thief","age":"18","level":"19"};
    {"name":"Maria","class":"lord","age":"68","level":"57"};
]
}

In the response, you can see that the heroine Maria has been added to the list and can now be retrieved at any time.

Now we know how the REST service works and which input leads to which output. With this information, we can start building a mock; I will deal with that topic in the next part.

That was “Mocks in the Test Environment Part 1” … Part 2 follows 🙂