GraalVM has now been on the market for over two years. It promises two main benefits: better runtime properties and the integration of multiple programming languages.
This blog post will focus on performance, but it will not primarily examine whether, and to what extent, a particular program runs faster on the Graal JDK than on a conventional JDK. After all, specific measured values and relative comparisons depend on more than just the program being tested, and they have little general applicability. Moreover, they are merely snapshots: both GraalVM and, to take one example, OpenJDK are under continuous development, so the measured values will keep changing too. Instead, this blog post looks mainly at the following questions: Why should GraalVM deliver much better performance? What makes it different from conventional JDKs? This will allow us to evaluate whether all programs run faster, whether no appreciable improvement can be expected, or whether a performance increase is only to be expected in certain application scenarios. And ultimately, whether this means "conventional" Java is too slow.
The development of compilers
The performance of a Java program is fundamentally determined by the compiler, and here too our key concern is what makes GraalVM different. So first, let's get an understanding of compilers.
In the early days of programming, there were no compilers at all – the machine code was programmed directly. This was confusing and difficult to understand, and it soon led to the development of assembly language. In principle, however, this was still a direct mapping of the machine code; the only difference was that alphabetic mnemonics were now used instead of binary or hexadecimal opcodes. We cannot yet speak of a programming language and compiler here, at least not within the scope of this blog post.
Over time, it became necessary to develop more and more complicated programs, and assembly code became increasingly impracticable. For this reason, the first high-level programming languages were developed in the 1950s. These required a compiler to translate the source text into machine code.
The first of these was the classic AOT (ahead-of-time) compiler. The source text is analysed (syntax analysis) and transformed into an internal tree structure (the syntax tree), from which machine code is generated (code generation). This produces a binary file that can then be executed directly.
As an alternative to AOT compilation, a program can also be executed by an interpreter. In this case, the source text is read in and executed by the interpreter line by line. The actual operations (e.g. addition, comparison, program output) are then performed by the interpreter.
Compared to the interpreter, the AOT compiler has the benefit that the programs are executed much faster. However, the generated binary files are system-dependent. What’s more, the interpreter has better error analysis capabilities, since it has access to runtime information, for example.
Java, interpreters and JIT compilers
When the Java programming language was being designed, one of the goals was for it to be architecture-neutral and portable. For this reason, Java source code was translated into platform-independent bytecode right from the outset, which could then be interpreted by a runtime environment, the JRE (Java Runtime Environment). Applets, for example, could be executed on a Windows PC, Mac or Unix workstation without any adaptation, as long as the JRE – regardless of the applet – was already installed on the workstation.
It is worth noting that this idea of a combined solution – AOT up to the bytecode, then interpretation at runtime – does not come from the world of Java. Pascal, for example, was using p-Code as long ago as the 1970s. [1]
When Java technology was released in 1995 [2], this platform independence initially went hand-in-hand with major drawbacks when it came to performance. Many of the programming languages that were relevant at the time, such as "C", compile their source code directly into machine code (AOT). This can be run natively on the target system, meaning performance is much better than with interpretation of the bytecode. At that time, many IT specialists had come to the conclusion that "Java is slow" – and they were right.
However, “high performance” became an additional goal in the process of designing the Java programming language, which is why the JIT (just-in-time) compiler was introduced in 1998 [2]. This significantly reduced the losses in performance due to pure interpretation.
In JIT compilation, the bytecode is initially interpreted at program start, but the system continuously analyses which parts of the program are executed and how often. The frequently executed parts are then translated into machine code at runtime. From then on, these parts are no longer interpreted; the native machine code is executed instead. Execution time is thus initially "invested" in compilation so that execution time can be saved on each subsequent call.
JIT compilation therefore represents a middle ground between AOT compilation and interpretation. Platform independence is retained since the machine code is only generated at runtime. And as the frequently used parts of the program are executed as native machine code after a certain warm-up time, the performance is approximately as good as with AOT compilation. As a general rule, the more frequently individual parts of the program are executed, the more the other, interpreted parts of the program can be disregarded in the performance analysis. And this applies above all to frequently run loops or long-running server applications with methods that are called continuously.
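The warm-up effect can be observed with a small experiment (a minimal sketch, assuming nothing beyond a standard JDK; the iteration counts are arbitrary, and the absolute numbers vary greatly by machine and JVM version):

package de.zeiss.zdi.graal;

public class WarmUpDemo {

    // A small, self-contained unit of work that the JVM can identify as "hot".
    private static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        // The same call is timed repeatedly: the first rounds typically run
        // interpreted, while later rounds run as JIT-compiled machine code
        // and are noticeably faster.
        for (int round = 1; round <= 10; round++) {
            long start = System.nanoTime();
            work(1_000_000);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("round " + round + ": " + micros + " µs");
        }
    }
}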
Runtime optimisations
With the mechanisms we have looked at so far, the JIT compiler was platform-independent but could not achieve the execution time of the AOT compiler, let alone exceed it. At the time the JIT compiler was integrated into the JDK, it was by no means certain that this would be enough to see it triumph.
However, the JIT compiler does have one significant advantage over the AOT compiler: It is not solely reliant on the static source code analysis, and it can monitor the program directly at runtime. As the vast majority of programs behave differently depending on inputs and/or environment states, the JIT compiler can make optimisations with much greater precision at runtime.
One major benefit in this context is speculative optimisation, where assumptions are made that are true in most cases. To ensure that the program still works correctly in all cases, each assumption is protected by a "guard". For instance, the JVM assumes that polymorphism is hardly ever exercised while a productive program is running. Polymorphism is obviously a good idea in general, but in practice it mostly serves testing or the decoupling of a codebase – usually to enable use by various programs or to allow for future extension. During the runtime of one specific productive program – and that is the scope of the JVM – a given call site rarely sees more than one type. The problem is that a call via an interface takes a relatively long time to find the appropriate method implementation for the object at hand, which is why method calls are traced. If, for example, the method "java.util.List.add(…)" is called multiple times on an object of type "java.util.ArrayList", the JVM makes a note of this. For subsequent calls of "List::add", it speculates that the receiver is again an ArrayList. The speculation is protected by a guard that checks whether the object actually is of type ArrayList. This is usually the case, and the previously resolved method is then called directly via the "noted" reference.
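In code, the speculation applies to a call site like the following (a minimal sketch to make the idea concrete; the guard itself is inserted internally by the JVM and is not visible in Java source):

package de.zeiss.zdi.graal;

import java.util.ArrayList;
import java.util.List;

public class CallSiteDemo {

    // Virtual call via the List interface: after many iterations with the
    // same receiver type, the JVM notes that "list" has always been an
    // ArrayList, caches the resolved target method and keeps only a cheap
    // type check (the guard). If a different List implementation ever
    // arrives here, the JVM falls back to the generic lookup (deoptimisation).
    static void fill(List<String> list) {
        for (int i = 0; i < 1_000_000; i++) {
            list.add("entry-" + i);
        }
    }

    public static void main(String[] args) {
        fill(new ArrayList<>()); // in practice, this call site stays monomorphic
    }
}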
Over two decades have passed since the JIT compiler was integrated into the JDK. During this time, a great many runtime optimisations have been integrated. The polymorphism speculation presented here is just one small example intended to illustrate the fact that a large number of optimisations have been devised – in addition to the compilation of machine code – that only work at runtime in a complex language like Java. If, for example, an instance is generated using reflection, it is difficult or even impossible for an AOT compiler to identify the specific type and implement speculative optimisation. The speed advantages of the current JIT compilers are therefore primarily based on the fact that they can monitor the program during execution, identify regular operations and ultimately integrate shortcuts.
GraalVM
GraalVM is a JDK from Oracle based on OpenJDK. It offers a virtual machine and a host of developer tools – but the same is true of other JDKs too. So why is GraalVM attracting much more attention than the other JDKs?
Firstly, GraalVM provides the GraalVM compiler, which is written in Java. Beyond that, the entire JVM is to be rewritten in Java. The last section showed that current JVMs owe their high performance primarily to the many optimisations added over the decades, which now add up. These optimisations are mainly Java-specific and are mostly developed by people with a Java background. If the execution environment is implemented in Java rather than C++, optimisations can be contributed without any knowledge of C++. Over the medium and long term, this puts the developer community on a broader footing.
Another exciting aspect of GraalVM is that it supports more than just Java-based languages. The “Truffle language implementation framework” is a starting point for developing domain-specific languages (DSL). The GraalVM compiler supports languages developed with the Truffle framework, meaning these can also be executed in GraalVM and enjoy all the corresponding benefits. Certain languages, such as JavaScript, Python or Ruby, are already supported by GraalVM as standard. Since all Truffle languages can be executed jointly and simultaneously in GraalVM, it is referred to as a polyglot VM.
LLVM-based languages are also supported. LLVM is a framework project for optimising compilers [4][5]. The LLVM project provides compiler components and technologies for external compiler developments, and also offers compilers for many programming languages, such as C/C++ or Fortran. The LLVM runtime is another component of GraalVM, which can be used to execute LLVM-based languages in GraalVM on the basis of the Truffle framework. However, we will not go into the polyglot aspect any further, as it deserves its own blog post.
GraalVM native image
The innovation that is most relevant to this blog post is native image technology. Native image is a GraalVM developer tool that generates an executable file from bytecode, aiming at better performance and reduced main memory usage at runtime. But we have just said that Java has been getting faster, with the JIT compiler translating all frequently executed (i.e. relevant) parts of the program into native machine code, monitoring the program during execution and continuously applying runtime optimisations. So this might lead us to ask: What exactly could be improved here by orders of magnitude?
The answer is incredibly simple: the start time. Even with JIT compilers, the bytecode is initially interpreted. Firstly, the program start is usually not a frequently executed part of the program. Secondly, these parts typically need to run a couple of times before the JIT compiler recognises them as worthwhile translation targets. Runtime optimisations behave the same way: the program first needs to be observed at runtime before the appropriate optimisations can be recognised and applied. A further complication is that all required objects and their classes, including the complete inheritance hierarchy, need to be initialised at start-up.
Since we now have an idea of “what”, we are interested in knowing “how”: How can our program be made to start faster?
When generating the native image, the bytecode first undergoes a highly extensive static analysis. Amongst other things, this determines which parts of the code can actually be executed at runtime. This covers not only the classes provided by the user but the entire classpath, including the Java class libraries provided by the JVM. Only the identified source code fragments are included in the native image, so the scope is significantly reduced at this stage. However, this rests on a "closed-world assumption": as soon as anything is loaded dynamically at runtime, the native image tool has a problem, because it cannot recognise that these parts of the code can also be executed and are thus required. Hardly anything beyond a simple HelloWorld program works this way, so when creating the native image you can – and must – give the tool additional information about everything that can be called dynamically.
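A tiny example of the kind of code that defeats this static analysis (a sketch; the class name could just as well come from a configuration file or user input):

package de.zeiss.zdi.graal;

public class DynamicLoading {
    public static void main(String[] args) throws Exception {
        // The class name is only known at runtime, so the static analysis
        // cannot see which class (and which of its dependencies) must be
        // part of the image - it has to be declared in the configuration.
        String className = System.getProperty("handler.class", "java.util.ArrayList");
        Object handler = Class.forName(className).getDeclaredConstructor().newInstance();
        System.out.println("Loaded: " + handler.getClass().getName());
    }
}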
Following the static analysis, the first element that increases the start-up speed is implemented: Since the JIT compiler would start with interpretation, an AOT compiler is used to create machine code. The generated native image is, as the name suggests, machine code that can be executed natively. However, this means that platform independence is lost.
In addition to the natively compiled program, the Substrate VM is included in the native image. This is a stripped-down VM containing only the components required to execute the native image, like thread scheduling or garbage collection. The Substrate VM has its own limitations, with no support provided for a security manager, for example.
An additional increase in start-up speed is achieved by initialising the native image in advance, during creation: after compilation, the program is run up to the point where the key initialisations have completed but no external input has to be processed yet. A disk image of this started state is created and included in the native image.
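A static initialiser illustrates the principle (a sketch; with native-image, whether a class is initialised at build time or at runtime can be steered via the options --initialize-at-build-time and --initialize-at-run-time):

package de.zeiss.zdi.graal;

import java.util.HashMap;
import java.util.Map;

public class StartupConfig {

    // If this class is initialised at build time, the finished map lands in
    // the image heap, and this work is not repeated at every program start.
    static final Map<String, String> SETTINGS = parseDefaults();

    private static Map<String, String> parseDefaults() {
        Map<String, String> settings = new HashMap<>();
        settings.put("mode", "fast"); // stands in for expensive parsing/setup work
        return settings;
    }

    public static void main(String[] args) {
        System.out.println(SETTINGS.get("mode"));
    }
}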
We have looked at the "what" and the "how", and now we turn to a rather critical "why": The AOT compiler has been known for half a century, and Java has now existed for a quarter of a century. Particularly in the early days of Java, various AOT approaches were tried, but none ever became established. Why should it be different this time? Why is a reduced start time suddenly of interest, when it comes with certain disadvantages? Why are highly optimised response times in operation over consecutive days or weeks suddenly less important?
The answer can be found in cloud computing, where services are provided in a different form. Previously, services mostly ran in an application container that was up day and night and in which the program had long since been fully optimised. Usually, the application container was not shut down even when it was used sparingly (e.g. depending on the time of day). By contrast, service infrastructure in the cloud can easily be shut down when not in use, freeing capacity. At the next call, the infrastructure is started up again and the call is served. This means that programs in the cloud may go through a cold start for each call rather than running continuously. As a consequence, the start time suddenly matters a great deal. And as even more Java programs can be expected to run in the cloud rather than in an application container in future, the focus on start time is likely to increase.
Hands on: HelloWorld
After all of that theory, let’s take a look at the JDK in operation. First, we will use the HelloWorld class shown in listing 1.
package de.zeiss.zdi.graal;

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Moin!");
    }
}
Listing 1
Here is the classic variant: We are on a Linux VM and an OpenJDK is installed:
> java --version
openjdk 11.0.11 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)
With this setup, we compile the HelloWorld class (javac) and execute the generated bytecode on a JVM:
time java -cp target/classes de.zeiss.zdi.graal.HelloWorld
This gives us the following output:
Moin!

real    0m0.055s
user    0m0.059s
sys     0m0.010s
The total of the two lines “user” and “sys” is relevant to the evaluation here. This is the computing time required to execute the program – in this case, approx. 69 ms.
One note on the 55 ms: from start to end, the HelloWorld program required 55 ms of "real time" (the time perceived by the user), which is less than the 69 ms of computing time. This is because the Linux system has multiple processors. For our measurements, however, we will analyse the computing time used by the system: firstly, it is less dependent on the number of processors executing the program, and secondly, in the cloud, for example, this is the time the application operator has to pay for.
Now we are curious about GraalVM, which is available to download from its website [3]. The Enterprise version (“free for evaluation and development”) is suitable for our evaluation, as most of the performance optimisations are only found here.
The installation is very well documented for Linux and works with virtually no problems. GraalVM is then available for use as JDK.
> java --version
java version "11.0.12" 2021-07-20 LTS
Java(TM) SE Runtime Environment GraalVM EE 21.2.0.1 (build 11.0.12+8-LTS-jvmci-21.2-b08)
Java HotSpot(TM) 64-Bit Server VM GraalVM EE 21.2.0.1 (build 11.0.12+8-LTS-jvmci-21.2-b08, mixed mode, sharing)
We can now compile and execute our HelloWorld program in the same manner with the GraalJDK (javac). This gives us the following output:
Moin!

real    0m0.084s
user    0m0.099s
sys     0m0.017s
Interestingly, the JVM of the GraalJDK needs almost 70% more computing time to execute our HelloWorld example as bytecode. However, the significant performance benefits promised by GraalVM primarily relate to the use of native image technology, rather than to the execution of bytecode.
Native image (the developer tool) is not contained in the GraalVM download, but the command-line tool "gu" (GraalVM Updater) exists for this purpose: it loads, manages and updates additional components (in our setup, "gu install native-image" was sufficient). Here too, the GraalVM documentation provides very good support. Once the developer tool is installed, we can generate the native image from the bytecode. For a program as trivial as our HelloWorld example, a single command with the fully qualified class name as the argument is enough:
cd ~/dev/prj/graal-eval/target/classes
native-image de.zeiss.zdi.graal.HelloWorld
Creating the HelloWorld native image requires a good three minutes of computing time, and the executable program is approx. 12 MB in size. At first glance, we might compare the size with the bytecode: HelloWorld.class is only 565 bytes. However, the native image contains not only the compiled class but also all relevant parts of the Java class library and the Substrate VM. As a rough estimate, the native image is only 10% of the size of a JRE.
But let's return to our native image, which we have now managed to create. Executing it gives the following output:
time ./de.zeiss.zdi.graal.helloworld
Moin!

real    0m0.004s
user    0m0.003s
sys     0m0.001s
For now, let us simply note this as a considerable speed gain.
Hands on: HelloScript
One of the features of GraalVM that is highlighted again and again is that it is a polyglot VM rather than just a Java VM. For this reason, we will expand our HelloWorld program with a short digression into the world of JavaScript. The relevant source code is shown in Listing 2. The key point here is the required transition from the world of Java to the world of JavaScript.
package de.zeiss.zdi.graal;

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class HelloScriptEngine {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine jsEngine = new ScriptEngineManager().getEngineByName("javascript");
        System.out.print("Hello ");
        jsEngine.eval("print('JavaScript!')");
    }
}
Listing 2
Alongside this universal JavaScript integration via javax.script.ScriptEngine, we also want to try out the Graal-specific JavaScript integration using org.graalvm.polyglot.Context. The source text is shown in Listing 3.
package de.zeiss.zdi.graal;

import org.graalvm.polyglot.Context;

public class HelloScriptPolyglot {
    public static void main(String[] args) {
        System.out.print("Hello ");
        try (Context context = Context.create()) {
            context.eval("js", "print('JavaScript!')");
        }
    }
}
Listing 3
The two HelloScript programs are compiled to bytecode in the same way as the HelloWorld program. When the native images are created, the developer tool must be told that the world of JavaScript is used. This is done with the following calls:
cd ~/dev/prj/graal-eval/target/classes
native-image --language:js de.zeiss.zdi.graal.HelloScriptEngine
native-image --language:js de.zeiss.zdi.graal.HelloScriptPolyglot
The bytecode can then be executed on the VMs, or the native images can be run directly. Since HelloScriptPolyglot is Graal-specific, it cannot simply be executed on the OpenJDK.
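For illustration, the calls follow the same pattern as for HelloWorld (a sketch; the executable names assume the lowercased default naming seen in the HelloWorld example):

time java -cp target/classes de.zeiss.zdi.graal.HelloScriptEngine
time ./de.zeiss.zdi.graal.helloscriptengine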
A look at the measured values
Each of the three scenarios was executed as bytecode on the OpenJDK, bytecode on the GraalJDK and as a native image. The average program execution times are listed in Table 1.
|                   | HelloWorld | HelloScriptEngine | HelloScriptPolyglot |
| Bytecode OpenJDK  | 69 ms      | 1321 ms           | X                   |
| Bytecode GraalJDK | 116 ms     | 2889 ms           | 2775 ms             |
| Native image      | 4 ms       | 13 ms             | 11 ms               |
Table 1: Average program execution times (X = not executable on the OpenJDK)
At first glance, we can see that the execution as a native image is much faster in all three scenarios than the conventional bytecode execution.
At second glance, however, we notice that bytecode execution on the GraalJDK requires considerably more computing time than on the OpenJDK: nearly 70% more in the HelloWorld example, and over 100% more in the HelloScriptEngine example. Oracle does not advertise this, but in general it is not a big problem, since faster bytecode execution is hardly the motivation for using GraalVM. Nevertheless, we should keep this fact in mind when determining the speed-up achieved by the native image, because GraalVM must be installed in order to create one. If we then run "java -jar …" for a comparative measurement, the bytecode is executed by GraalVM. Since it is most likely that OpenJDK has been used in production until now, we should use that as the baseline instead – and against it, the speed-up is "only" just over half as high (for HelloWorld: 69 ms / 4 ms ≈ 17×, compared with 116 ms / 4 ms ≈ 29× against the GraalJDK).
Things to consider
If we want to achieve the promised performance gains, it is not enough to simply install GraalVM instead of a conventional JDK. During bytecode execution, it was not possible to achieve any performance gains – at least with our examples and setup. This is only possible if we use a native image, but we must keep in mind that this has several disadvantages when compared with bytecode execution.
- In the native image, the Substrate VM is used as the JVM. This comes with certain restrictions. Aside from the fact that not all features are currently implemented, some things are not even on the agenda, such as a security manager.
- We should also keep in mind the duration of the build process: with the native image, the start time does not simply disappear – the computing time is "simply" shifted from execution time to build time. In our environment, creating the HelloWorld example took more than three minutes, and each HelloScript program took more than 20 minutes (HelloScriptEngine: 1291 s, HelloScriptPolyglot: 1251 s).
- The biggest challenge, however, is the "closed-world assumption". When the native image is created, a static code analysis is run, and only the parts of the code that are reached are compiled into the native image. This works for our HelloWorld program, but the JavaScript examples already required command-line parameters. Classes loaded via reflection are only recognised if the fully qualified class name is hard-wired in the source code. This causes problems with any technology that uses dynamic class loading in any form, including JNDI and JMX.
The parts of the program that are loaded dynamically can (and must) be explicitly specified when the native image is created. This applies to all parts of the program – from the actual project code through all the libraries used, right up to those of the JRE. Since this configuration is a real challenge for "genuine" programs, tools are provided without which it would hardly work in practice. For example, the tracing agent monitors a program executed as bytecode, detects all reflective accesses and generates a JSON configuration from them, which can then be used when creating the native image.
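A tracing-agent run could look like this (a sketch; the paths are placeholders, and native-image automatically picks up configuration files from META-INF/native-image on the classpath):

java -agentlib:native-image-agent=config-output-dir=target/classes/META-INF/native-image -cp target/classes de.zeiss.zdi.graal.HelloWorld

An excerpt from a reflect-config.json generated this way might look like this (illustrative; the class name is hypothetical):

[
  {
    "name": "de.zeiss.zdi.graal.SomeReflectivelyUsedClass",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]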
In practice, the build pipeline would therefore initially create the bytecode variant. All automated tests can then be run with this bytecode variant, and the tracing agent detects the reflective access operations. Assuming that every program path is really executed in this process, the native image can then be generated in a further build step. This takes us directly to the next point: When working with native image technology, the build process becomes longer and more complex overall.
In summary, this means that some things are impossible or nearly impossible with native image technology (e.g. the security manager). Many other things generally work but require extensive configuration. Tool support is available for this and is developing extremely dynamically; the hope is that the tools will compensate for the additional work (aside from the build duration). However, the build process also becomes more complex and thus more susceptible to errors.
Pitfalls in Windows
Finally, we will take a look at the Windows platform, which is now also supported. In preparation for this blog post, the "GraalVM Enterprise 20.3.0" and "GraalVM Enterprise 21.0.0.2" versions were tested on a Windows system. Unfortunately, the relevant documentation was still a little lacking here, and the tooling does not mesh quite as smoothly as in the Linux environment, so there were some obstacles that did not appear on Linux. For instance, creating a native image failed when the underlying bytecode had been generated by a different JDK (in this case, the OpenJDK). The error message that appeared was not very helpful either, as it gave no indication of the actual cause.
native-image de.zeiss.zdi.graal.HelloWorld
[de.zeiss.zdi.graal.helloworld:20764]    classlist:     947.02 ms,  0.96 GB
[de.zeiss.zdi.graal.helloworld:20764]        (cap):   3,629.54 ms,  0.96 GB
[de.zeiss.zdi.graal.helloworld:20764]        setup:   5,005.98 ms,  0.96 GB
Error: Error compiling query code (in C:\Users\xyz\AppData\Local\Temp\SVM-13344835136940746442\JNIHeaderDirectives.c). Compiler command ''C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29333\bin\HostX64\x64\cl.exe' /WX /W4 /wd4244 /wd4245 /wd4800 /wd4804 /wd4214 '-IC:\Program Files\Java\graalvm-ee-java11-21.0.0.2\include\win32' '/FeC:\Users\xyz\AppData\Local\Temp\SVM-13344835136940746442\JNIHeaderDirectives.exe' 'C:\Users\xyz\AppData\Local\Temp\SVM-13344835136940746442\JNIHeaderDirectives.c' ' output included error: [JNIHeaderDirectives.c, Microsoft (R) Incremental Linker Version 14.28.29337.0, Copyright (C) Microsoft Corporation. All rights reserved., , /out:C:\Users\xyz\AppData\Local\Temp\SVM-13344835136940746442\JNIHeaderDirectives.exe , JNIHeaderDirectives.obj , LINK : fatal error LNK1104: Datei "C:\Users\xyz\AppData\Local\Temp\SVM-13344835136940746442\JNIHeaderDirectives.exe" kann nicht geöffnet werden.]
Error: Use -H:+ReportExceptionStackTraces to print stacktrace of underlying exception
Error: Image build request failed with exit status 1
There was another pitfall when it came to operating across drives, as it is unfortunately not possible in Windows to install GraalVM on one drive (in this case, C:\Program Files) and execute it on another drive (in this case, D:\dev\prj\…):
native-image de.zeiss.zdi.graal.HelloWorld
[de.zeiss.zdi.graal.helloworld:10660]    classlist:   3,074.80 ms,  0.96 GB
[de.zeiss.zdi.graal.helloworld:10660]        setup:     314.93 ms,  0.96 GB
Fatal error: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: 'other' has different root
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[…]
In addition, it was not possible to identify any performance benefits with the native image in the Windows environment. At present, Windows support (both the GraalVM toolchain itself and the generated native images) is thus rather experimental.
Summary
This blog post has looked primarily at how the start time for Java programs can be massively improved with GraalVM native image technology. It has shown what approaches and technologies GraalVM uses to do this. The results are backed up by measurements from example programs. However, certain challenges were mentioned that arise when native image technology is used.
Virtually no performance improvements can be expected for longer-running programs since the optimisations in conventional JVMs would also apply in such cases. For now, this is just an assumption. An investigation of this aspect would be beyond our scope here and has enough potential to merit its own blog post.
Let’s now turn to the questions from the introduction. In principle, “conventional” Java has not been slow for a very long time; in fact, it is extremely fast. Its use in the (computationally intensive) big data environment is enough of an indication that this is the case. The main prerequisite for a high level of performance is a certain warm-up time. The reverse conclusion is that starting a conventional Java program leaves a lot to be desired, and this is exactly where native image technology comes in. On the other hand, this technology comes with a number of drawbacks, particularly for large, technology-heavy applications.
In summary, GraalVM has the potential to establish itself in various fields of application. Applications in multiple languages could make use of the polyglot properties, and the use of the native image technology that we covered is definitely a viable option for cloud services in particular. However, the use of GraalVM is probably not worthwhile for applications that are computationally intensive (usually those that are longer-running) and non-trivial.
Finally, we should mention the fact that the compiler and optimiser are implemented in Java as a benefit of GraalVM. Although this is initially not better or worse than the previous implementations, it increases the chances of making better use of the potential of the Java community in the medium and long term.
Overall, it remains exciting. At the moment, it is not possible to foresee OpenJDK being replaced entirely. And we should remember that developments in that area are also continuing apace. Nevertheless, GraalVM certainly has the potential to establish itself (at least initially) in specific fields of application.