

HCL Commerce Redis Solution Creates a Game Changing Experience


HCL Commerce Version 9 provides many great benefits for eCommerce businesses. One of those benefits is its cloud-native architecture, which allows dynamic scalability and improves the speed and agility of deployments. It’s no surprise that customers who are still on older versions of the product are rushing to upgrade. This switch to the newest version doesn’t come without challenges, however. Due to the containerization and stateless model of the Commerce servers, some customers have had to refactor their code away from using HTTP Session objects. To support this change, HCL introduced Redis, an open-source, in-memory data store that allows the containers to share cached data and other session objects.

HTTP Session Issue in V9

In HCL Commerce V7 and V8, applications had the ability to save user session-related data in the HTTP session object, by utilizing session affinity, or by enabling session replication across the nodes in the cluster.


Due to the stateless containers that are used in HCL Commerce V9, the application no longer supports session affinity or replication. For example, if you stored an object in the HTTP Session on container1 and your next request hits container2, that session information would not be available to container2 in Version 9.

The Redis solution

HCL recognized this challenge and introduced Redis, a centralized, in-memory database, in HCL Commerce V9. By using Redis in combination with HCL Cache, Perficient was able to refactor existing solutions to utilize the data cache, which could then be stored remotely in Redis and shared across all the pods in the Kubernetes cluster. HCL Cache handles both the caching and the invalidations, which allows the application’s business logic to remain the same; only the mechanism for storing the session information changes.

High-level steps to solve the HTTP Session issue in V9.1

  1. Create the custom object cache: A Run Engine command can be used to create a custom cache.
  2. Redis: Configure the object cache as remote, since all the pods need to access the same object cache. This ensures that a change made on one pod is immediately visible to requests served by any other pod.
  3. Serialization: Since the objects are stored outside the JVM, the objects need to be serializable.
  4. Replace the HTTP Session code: look up the object cache by its JNDI name using the InitialContext and use the Redis-backed cache for storage.
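Step 3 above is worth a concrete illustration. Because entries live outside the JVM, every object placed in the cache must survive a round trip through bytes. The sketch below is a hypothetical session attribute (the class, field, and helper names are illustrative, not from HCL Commerce) showing the serialize/deserialize cycle a remote cache performs; in the real application the cache map itself would be obtained via the InitialContext JNDI lookup mentioned in step 4.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical session attribute: must implement Serializable because
// HCL Cache stores entries outside the JVM (in Redis).
public class SessionAttribute implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userId;
    private final String lastViewedSku;

    public SessionAttribute(String userId, String lastViewedSku) {
        this.userId = userId;
        this.lastViewedSku = lastViewedSku;
    }

    // Serialize the object to bytes, as a remote cache would before storing it.
    static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Deserialize the bytes back into an object on a (possibly different) pod.
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SessionAttribute original = new SessionAttribute("user42", "SKU-1001");
        SessionAttribute copy = (SessionAttribute) fromBytes(toBytes(original));
        System.out.println(copy.userId + " " + copy.lastViewedSku);
    }
}
```

Any attribute that fails this round trip (for example, one holding a non-serializable field) would need to be refactored before it can move out of the HTTP Session.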

Depending on the business requirements, you can also configure the data cache as local, remote, or both. In the “both” configuration, when an entry needs to be read, HCL Cache first checks whether it has already been cached in the current node’s local JVM. If it finds the entry, it returns the cached entry to the request; otherwise, it checks the remote cache in Redis for the entry. When a cache entry needs to be invalidated, HCL Commerce sends an invalidation message to Redis, which handles the message. The entry is cleared from the local cache first and then from the remote cache.
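The “both” flow described above can be modeled in a few lines. This is a toy sketch, not HCL Cache’s API: plain HashMaps stand in for the JVM-local cache and for Redis, and all class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the local-then-remote ("both") lookup described above.
public class TwoTierCache {
    private final Map<String, String> local = new HashMap<>();   // per-node JVM cache
    private final Map<String, String> remote = new HashMap<>();  // stands in for Redis

    public String get(String key) {
        String value = local.get(key);            // 1. check the current node's JVM first
        if (value != null) {
            return value;
        }
        value = remote.get(key);                  // 2. fall back to the remote cache
        if (value != null) {
            local.put(key, value);                // warm the local copy for the next hit
        }
        return value;
    }

    public void put(String key, String value) {
        local.put(key, value);
        remote.put(key, value);
    }

    public void invalidate(String key) {
        local.remove(key);   // cleared from the local cache first...
        remote.remove(key);  // ...and then from the remote cache
    }

    public static void main(String[] args) {
        TwoTierCache cache = new TwoTierCache();
        cache.put("cart:42", "3 items");
        System.out.println(cache.get("cart:42"));
        cache.invalidate("cart:42");
        System.out.println(cache.get("cart:42"));
    }
}
```

The real implementation also has to propagate invalidations to every node’s local cache, which is exactly the role of the invalidation messages that Redis relays.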

It’s important to consider the business factors when deciding how to configure the cache. You want to know how often the data will change and whether it needs to be shared across the application nodes. You don’t want to hit the remote cache often due to latency, but there’s also overhead for setting and invalidating the cache, so understanding the business usage is critical. Local caching is the fastest because it resides in the JVM, but it is restricted to one node. Remote caching is slower, but it is available to all nodes. In general, if the data changes often, it should be loaded in the remote cache for shared usage. If it is fairly static data, then local may make more sense and the cache can be treated like a registry. The “both” setting provides the best of both worlds but carries extra overhead when invalidating cache information. The good news is that the configuration is fairly easy to change in the event that the business circumstances change.

Redis Limitations

One of the limitations of Redis is that it doesn’t support an inactivity feature, i.e., keeping a cache entry alive for a set amount of time after the last cache hit. One workaround for this issue is to set a static timeout period for the cache, such that after the timeout the cache entry is automatically removed. The challenge is to choose a reasonable timeout value that is long enough for users on the site to finish their activity.
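The difference matters: an inactivity timeout resets on every read, while the static-timeout workaround expires an entry a fixed interval after it was written, regardless of hits. A minimal sketch of the workaround, with illustrative names and an illustrative TTL (time is passed in explicitly so the behavior is deterministic):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the static-timeout workaround: each entry records when it was
// written, and reads treat entries older than the fixed TTL as absent.
public class TimeoutCache {
    private static final class Entry {
        final String value;
        final long writtenAt;
        Entry(String value, long writtenAt) {
            this.value = value;
            this.writtenAt = writtenAt;
        }
    }

    private final Map<String, Entry> entries = new HashMap<>();
    private final long ttlMillis;

    public TimeoutCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(String key, String value, long nowMillis) {
        entries.put(key, new Entry(value, nowMillis));
    }

    public String get(String key, long nowMillis) {
        Entry e = entries.get(key);
        if (e == null) {
            return null;
        }
        if (nowMillis - e.writtenAt >= ttlMillis) {  // static timeout elapsed
            entries.remove(key);                     // entry is gone even if recently read
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        TimeoutCache cache = new TimeoutCache(30 * 60 * 1000L); // 30-minute TTL (illustrative)
        long t0 = 0L;
        cache.put("session:7", "active", t0);
        System.out.println(cache.get("session:7", t0 + 10 * 60 * 1000L)); // within TTL
        System.out.println(cache.get("session:7", t0 + 31 * 60 * 1000L)); // after TTL
    }
}
```

Note that the read at the 10-minute mark does not extend the entry’s life, which is exactly why the TTL must be sized to cover a whole user visit.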

Redis also doesn’t apply a Least Recently Used (LRU) eviction algorithm to the cache to decide which entries to remove if the cache runs out of storage space. One workaround for this is to size the memory and tune the timeout configuration so that you are unlikely to run out of memory. When a cache entry expires, Redis cleans up the actual entry from memory, and HCL Cache then runs a background job that cleans up all the dependencies that point to expired entries.
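For context, the LRU policy discussed above can be sketched in a few lines using the JDK’s LinkedHashMap in access order; the capacity and names are illustrative, and this is the behavior the workaround has to compensate for, not something HCL Cache provides here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: when capacity is exceeded, the least recently
// *accessed* entry is evicted.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: reads move entries to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // "a" is now the most recently used entry
        cache.put("c", "3"); // over capacity: evicts "b", the least recently used
        System.out.println(cache.keySet());
    }
}
```

Without such eviction, a full cache simply rejects or loses data, which is why generous sizing plus timeouts is the recommended mitigation above.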



Vijay Ciliveru

Vijay Ciliveru is a Technical Architect at Perficient Inc. He has over 15 years of experience, with proven expertise in the architecture design, development, and migration of HCL Commerce-based applications from older platforms into the containerized cloud-native architecture of Version 9.1.
