Modern Java in the Cloud, Part 3: Long Live the Quark(u)s!

True, but remember what we mentioned last time: startup time is crucial. So let’s optimize our runtime for speed. There are, as always, several possible solutions, but one of them is Quarkus.[1]

Quarkus is a runtime tailored for running in a containerized environment. Eh, wait, what?

Java in general is intended to run on a variety of platforms and to be portable, so that you can write your code on a Mac, compile it on Windows, and run it on Linux; the Java Virtual Machine takes care of the rest.

Enterprise Java application servers and runtimes are traditionally dynamic, so that you can drop new enterprise beans or even JAR files into them as the application is running. This is an architecture suitable for on-prem solutions, where you typically want to run several apps on one machine. This means that the runtime scans for beans and changes during runtime, which of course requires resources and time.

However, as we mentioned in the previous blog post in this series, nowadays your application typically runs in a container, where your application code is bundled along with your dependencies and the runtime into one image, which is completely separated from your other apps.

This means that you have all the code and all the information you need when you build your image, and in turn that you don’t need to scan for anything new when you run your code. No new code or changes will ever drop in at runtime. When you change something, you rebuild the entire image and replace the container completely.
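To make this concrete: the container image bundles the runtime and the pre-built application, and nothing else ever lands in it. As a rough sketch, the Dockerfile.jvm that Quarkus generates for JVM mode looks something like this (the exact base image, paths, and user vary between Quarkus versions):

```dockerfile
# Sketch of a Quarkus JVM-mode Dockerfile — details vary by version
FROM registry.access.redhat.com/ubi8/openjdk-17-runtime

# Copy the pre-built "fast-jar" layout produced by `./mvnw package`
COPY target/quarkus-app/lib/     /deployments/lib/
COPY target/quarkus-app/*.jar    /deployments/
COPY target/quarkus-app/app/     /deployments/app/
COPY target/quarkus-app/quarkus/ /deployments/quarkus/

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/deployments/quarkus-run.jar"]
```

Everything the application needs is baked in at build time; changing anything means building a new image and rolling the container.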

Quarkus uses this as a premise, which is a central reason why it starts unusually fast. This is to a huge degree made possible by the dependency injection implementation used. Most runtimes use Weld for this, a full implementation of the CDI standard, whereas Quarkus deliberately does not implement the dynamic parts of the specification. Instead, Quarkus uses ArC for CDI. This is an implementation of the new CDI Lite, which became part of the brand new Jakarta Core Profile in Jakarta EE 10, and is also an essential building block of MicroProfile.
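From the application’s point of view, nothing special is required: you write ordinary CDI beans, and ArC resolves the wiring during the build instead of scanning the classpath at startup. A minimal sketch, with hypothetical class names:

```java
// GreetingService.java — a plain CDI bean
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

```java
// GreetingResource.java — a JAX-RS endpoint using the bean
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/hello")
public class GreetingResource {

    // This injection point is resolved at build time by ArC,
    // so no bean scanning is needed when the container starts.
    @Inject
    GreetingService service;

    @GET
    public String hello() {
        return service.greet("Quarkus");
    }
}
```

The annotations are standard Jakarta CDI and JAX-RS; what differs is when the injection graph is computed, not how you write it.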

Another important characteristic of Quarkus is its excellent developer experience: it reloads code changes automatically, so during development you hardly ever need to restart your application. You just change your code, invoke the endpoint again, and the changes are applied. This works out of the box for both Java and Kotlin. Conveniently enough, several runtimes seem to be heading in this direction, as the current version of Open Liberty also does hot reloading. After using Quarkus for a while, I notice this is one of the features I miss the most when switching to another application server.
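Starting this live-reload mode is a single command from the project root:

```shell
# Start Quarkus dev mode — code changes are recompiled and
# reloaded on the next request, with no restart needed
./mvnw quarkus:dev
```

(Or `./gradlew quarkusDev` if you build with Gradle.) Leave it running, edit your code, and hit the endpoint again.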

Let’s take the idea of everything being known at build time a step further. Conceptually, this also enables us to build an executable containing everything we need when we build the code, so that we won’t even need the JVM. While JVM mode is the default for Quarkus, it also gives us the possibility to run in native mode: we compile our code and the entire image into an executable we can run.

Quarkus Native

With Quarkus native, you compile your application and your image into a binary executable file. This enables startup times hardly ever seen in enterprise Java: a fraction of the startup time you get from Quarkus in JVM mode, which is itself considered fast!

Quarkus native relies heavily on GraalVM under the hood, much like Spring Native does. This also means that you’ll get all the performance improvements as GraalVM development moves forward.
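In practice, a native build is a flag on the regular package step. A sketch of the two common variants (the flags below are the standard Quarkus ones, but check your version’s docs):

```shell
# Build a native executable — requires GraalVM (or Mandrel) locally
./mvnw package -Dnative

# Or run the native compilation inside a container,
# so you don't need GraalVM installed on the build machine
./mvnw package -Dnative -Dquarkus.native.container-build=true
```

The second variant is handy on CI builders, at the cost of pulling a sizeable builder image.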

This is of course a trade-off, and what you are trading is build time for runtime. Building with GraalVM requires a lot of resources in terms of memory, CPU, and time, so you’ll need to revisit your build process to ensure that the builders have sufficient computing power. You’ll also lose the benefits of running inside the JVM, such as JIT optimizations that improve peak throughput over time, so think through whether the benefits are worth it. On the flip side, you’ll end up with an image of only a few tens of megabytes, and a tiny footprint.

If you’re running serverless, especially if scaling down to zero instances, startup times are crucial, and Quarkus native would be my go-to solution if you prefer the Java or Kotlin sphere. A case where I’ve had great success with Quarkus native is as a backend-for-frontend for a web page with highly variable traffic. There my backend has been able to scale down to zero and start again whenever a user tries to log in. The waiting time for the container to be up and running is short enough that users hardly get impatient.
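As one illustration of such a setup, a Knative service can be allowed to scale to zero with a single annotation. A hedged sketch, with a hypothetical service name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-bff            # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Allow scaling all the way down to zero instances;
        # a fast-starting native image makes the cold start tolerable
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: registry.example.com/my-bff:latest  # hypothetical image
```

With a JVM-based image, a cold start from zero can take seconds; with a native image, it is typically well under a second.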

Do you have to change your code to compile it to native, though? Probably not, although it of course depends on what your code looks like and what you’re creating. By now, most popular libraries can be compiled with GraalVM without further ado, but you might be unlucky on this front. If so, I’d recommend looking into the Quarkus extension ecosystem, which provides Quarkus versions of a plethora of libraries.

[1]: Another solution might be CRaC, which is runtime neutral, but somewhat cumbersome to configure.

Modern Java in the Cloud, Part 3: Long Live the Quark(u)s! was originally published in Compendium on Medium.