
Apache Sling JVM Performance Comparison


With the recent proliferation of Java Virtual Machine (JVM) implementations, it’s difficult to know which one is best for your use case. While proprietary vendors generally prefer Oracle Java, there are several open source options with different approaches and capabilities.

Because the implementations differ in some underlying technical specifics, the “correct” JVM implementation will vary based on the use case. Apache Sling, in particular, has some specific needs given the complexity of the OSGi / JCR backend and the Sling Resource Resolution framework.

Test Strategy

To help get some real data on which JVM implementation works best for running Apache Sling, I created a project to:

  1. Install a number of JVM implementations and monitoring tools
  2. For each JVM:
    1. Set up an instance of Apache Sling CMS, using no additional parameters
    2. Install a content package to the Sling CMS instance
    3. Run a performance test using siege
  3. Consolidate the data into a CSV

If you are curious, you can check out the Sling JVM Comparison project on GitHub.
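Conceptually, the harness loops over the JVMs, measures each step, and writes everything to a single CSV. The following is only a minimal Python sketch of that loop, not the project's actual code; the JVM paths, jar name, port, and output file are placeholder assumptions:

```python
import csv
import subprocess
import time
import urllib.request

# Hypothetical sketch of the comparison loop; the JVM paths, port, jar name,
# and output file are illustrative assumptions, not the project's actual values.
JVMS = {
    "openjdk-hotspot": "/usr/lib/jvm/java-11-openjdk-amd64/bin/java",
    "amazon-corretto": "/usr/lib/jvm/java-11-amazon-corretto/bin/java",
    "eclipse-openj9": "/usr/lib/jvm/java-11-openj9/bin/java",
}

def wait_for_sling(url="http://localhost:8080/", timeout=600):
    """Poll the instance until it responds successfully to a request."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return
        except OSError:
            pass
        time.sleep(1)
    raise TimeoutError("Sling did not start within the timeout")

rows = []
for name, java in JVMS.items():
    # 1. Start Apache Sling CMS with no additional JVM parameters and time
    #    how long it takes before the instance answers a request.
    started = time.time()
    sling = subprocess.Popen([java, "-jar", "sling-cms.jar"])  # placeholder jar name
    wait_for_sling()
    startup_ms = round((time.time() - started) * 1000)

    # 2. Install the content package and 3. run siege -- both sketched in
    #    the sections below, omitted here for brevity.
    rows.append({"jvm": name, "startup_ms": startup_ms})

    sling.terminate()
    sling.wait()

# 4. Consolidate the per-JVM measurements into a CSV.
with open("results.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["jvm", "startup_ms"])
    writer.writeheader()
    writer.writerows(rows)
```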

The project installs and compares the following JVM implementations on version 11:

  * OpenJDK Hotspot
  * Amazon Corretto
  * Oracle JDK
  * GraalVM
  * Azul Zulu
  * Eclipse OpenJ9

To create a meaningful comparison, I set up and ran the test on an Amazon EC2 m5.large instance running Ubuntu 18.04 LTS “Bionic Beaver” and captured the results.

Startup / Install Performance

An important performance comparison is the amount of time it takes to get an instance running. To measure this, I captured the time in milliseconds to start the Apache Sling CMS instance and the time required to upload and install the same content package. Note that the startup measurement has some inherent variance, since the test process determines startup time by polling the Sling instance until it responds successfully to a request.
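Timing the content package install can be done the same way: upload the package over HTTP and measure the elapsed time. This is only a hypothetical sketch using the third-party requests library; the endpoint path, credentials, and form field are placeholder assumptions and depend on the package manager bundled with the instance:

```python
import time
import requests  # third-party: pip install requests

# Placeholder values; the actual endpoint, credentials, and form field
# depend on the package manager bundled with the Sling CMS instance.
PACKAGE_UPLOAD_URL = "http://localhost:8080/bin/packages/upload"  # hypothetical path
AUTH = ("admin", "admin")

def time_package_install(package_path):
    """Upload a content package and return the elapsed install time in ms."""
    started = time.time()
    with open(package_path, "rb") as pkg:
        resp = requests.post(PACKAGE_UPLOAD_URL, auth=AUTH, files={"package": pkg})
    resp.raise_for_status()
    return round((time.time() - started) * 1000)

print(time_package_install("content-package.zip"))
```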

OpenJDK Hotspot and Amazon Corretto are essentially tied as the leaders of the pack, with Oracle JDK and GraalVM following shortly behind. Azul Zulu and Eclipse OpenJ9 take 78% and 87% longer to start than OpenJDK Hotspot, respectively. Interestingly, most of the JVM implementations take approximately the same time to install the content package; however, Eclipse OpenJ9 takes 35% longer to install it.

Performance Under Load

To check performance under load, I tested the instances with siege, hitting a list of URLs over the course of an hour in alternating blocks of 15 minutes on and 15 minutes off.
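The on/off cadence might be driven by something like the following sketch. The siege flags used here (-f, -c, -t, --log) are standard siege options, but the URL file, concurrency level, and log path are assumptions rather than the project's actual settings:

```python
import subprocess
import time

# Hypothetical load-test cadence: two 15-minute siege runs separated by
# 15-minute pauses, roughly an hour in total.
for block in range(2):
    subprocess.run([
        "siege",
        "-f", "urls.txt",    # file containing the list of URLs to hit
        "-c", "25",          # concurrent simulated users
        "-t", "15M",         # run for 15 minutes
        "--log=siege.log",   # append transactions / throughput to a log
    ], check=True)
    if block < 1:
        time.sleep(15 * 60)  # 15 minutes off before the next block
```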

First, we can take a look at the throughput per second:

And next, we can look at the raw transaction count:

Both tell the same story: OpenJDK Hotspot, Amazon Corretto, and Oracle JDK take the top spots for performance, with GraalVM, Azul Zulu, and Eclipse OpenJ9 trailing behind.

Memory Usage

Finally, given how memory-intensive Java applications can be, it’s important to consider memory usage, and here the differences are quite stark:

Eclipse OpenJ9 is significantly less memory-intensive, using only 55% of the average memory of the four middle-tier JVM implementations. GraalVM also sits outside the average, using 15% more memory than the same middle-tier implementations.
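How memory usage is sampled affects these numbers. As a rough illustration of one way to capture it (an assumption, not necessarily the project's actual approach), the resident set size of the Java process can be sampled periodically with ps and averaged:

```python
import subprocess
import time

def sample_rss_mb(pid, samples=10, interval=30):
    """Sample a process's resident set size and return the average in MB.
    A rough illustration only; the actual project may measure memory differently."""
    readings = []
    for _ in range(samples):
        out = subprocess.run(["ps", "-o", "rss=", "-p", str(pid)],
                             capture_output=True, text=True, check=True)
        readings.append(int(out.stdout.strip()) / 1024)  # ps reports KiB
        time.sleep(interval)
    return sum(readings) / len(readings)
```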

Summary and Findings

From a raw performance perspective, OpenJDK Hotspot is the clear winner, with Amazon Corretto close behind. If you are all-in on Amazon or want a long-term-supported JVM option, Amazon Corretto is worth considering.

For those running Apache Sling on memory-limited hosting options, Eclipse OpenJ9 is the best option. While there is a performance tradeoff, when you only have a gigabyte or two of memory, reducing the memory load by 45% makes a tremendous difference.

Credit

Thanks to Paul Bjorkstrand for coming up with the idea for this post.

Thoughts on “Apache Sling JVM Performance Comparison”

  1. You might want to redo the test a few times, with some shuffles in the order you run the different VMs in. Given that your “OpenJDK HotSpot”, Amazon Corretto, and Azul Zulu are all the same version (11.0.8) and based on the same OpenJDK code, actual differences in startup times between those three would be highly surprising. Differences are likely a result of test-to-test or run-to-run environment noise, and not a reflection of actual speed. If you see these results consistently across e.g. 10 runs of each, done without some specific pattern (e.g. not the same A,B,C combo each time), it would be very interesting and worth finding out why. If such 10 runs of each reveal a similar startup time range, geomean, etc., it would be good to update the results and the conclusions to reflect that.

  2. Good point @Gil. I did add a fairly significant pause between each run, but since the EC2 instances are running in the cloud, as you said, it could be noise surrounding the actual run. I would note that the results are not presented in the same order they were executed, but running several times and randomizing the order each time should eliminate background noise or inter-run contamination.

    I’ll definitely look into your suggestion though, thanks for commenting!


Dan Klco, Adobe Digital Marketing Technical Director

Dan is a certified Adobe Digital Marketing Technologist, Architect, and Advisor, having led multiple successful digital marketing programs on the Adobe Experience Cloud. He's passionate about solving complex problems and building innovative digital marketing solutions. Dan is a PMC Member of the Apache Sling project, frequent Adobe Beta participant and committer to ACS AEM Commons, allowing a unique insight into the cutting edge of the Adobe Experience Cloud platform.
