

Should I Run Sitecore on Kubernetes or App Services?

For a while it looked like Kubernetes and Containers were the future of Sitecore Infrastructure. At one point it was widely believed that Sitecore would deprecate support for App Services. Current thinking is that App Services will be supported by Sitecore well into the future, so a common question when building or upgrading a Sitecore environment is whether to choose Kubernetes or App Services. I think the answer to the question largely depends on you. Here are some questions you should consider when evaluating your future infrastructure architecture.

Are you planning for XP or XM?

Infrastructure requirements for Sitecore XP and xDB are much more complex than the requirements for XM. XP requires roles for Content Delivery, Content Management, Identity, Processing, Reporting, Search, Collection, Reference Data, Marketing Automation and Marketing Automation Reporting. Sitecore XM only requires the Content Delivery, Content Management and Identity roles. And if you're looking toward more headless architectures that offload content delivery to CDNs (like Experience Edge), you may not even need the Content Delivery role.

Where Kubernetes really shines is in managing the deployment and dependencies across roles. With Sitecore XP, the value proposition is very high. There are a ton of services, and managing how they are deployed together provides tangible benefits. With Sitecore XM, there just isn’t as much complexity in managing the environment configuration. There’s still a benefit, but it’s less compelling.
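To make that concrete, here's a minimal sketch of what "roles as declarative workloads" looks like. It uses Pulumi's TypeScript SDK purely for illustration (Sitecore's official container deployment packages ship plain Kubernetes YAML applied with kubectl), and the registry, image names and replica counts are hypothetical.

```typescript
// Illustrative sketch only: Sitecore ships plain Kubernetes YAML for this;
// Pulumi's TypeScript SDK is used here just to show the roles as declarative,
// version-controlled workloads. Image names and replica counts are hypothetical.
import * as k8s from "@pulumi/kubernetes";

const roles = ["cm", "cd", "id"]; // Content Management, Content Delivery, Identity

for (const role of roles) {
    const labels = { app: `sitecore-${role}` };

    // One Deployment per role, so each can be versioned and scaled independently.
    new k8s.apps.v1.Deployment(`sitecore-${role}`, {
        spec: {
            replicas: role === "cd" ? 2 : 1, // scale delivery separately from management
            selector: { matchLabels: labels },
            template: {
                metadata: { labels },
                spec: {
                    containers: [{
                        name: role,
                        image: `myregistry.azurecr.io/sitecore-xm1-${role}:latest`, // hypothetical image
                        ports: [{ containerPort: 80 }],
                    }],
                },
            },
        },
    });

    // A Service per role gives the other roles a stable in-cluster address (e.g. http://cm).
    new k8s.core.v1.Service(`sitecore-${role}`, {
        metadata: { name: role },
        spec: { selector: labels, ports: [{ port: 80, targetPort: 80 }] },
    });
}
```

The point isn't the tool choice; it's that the wiring between roles lives in configuration you can review and version, rather than in portal clicks or deployment runbooks.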

Are you committed to Azure?

As you would expect, Azure App Services are only available if you are hosting on Azure. Kubernetes provides more cross-platform support. Azure has its own managed Kubernetes offering, but you can run Sitecore containers on any cloud that supports Windows containers. This currently includes both AWS (EKS) and Google Cloud (GKE).

That being said, Sitecore still has a very close relationship with Microsoft and Azure, and most customers will target Azure Kubernetes Service, so much of the documentation and support is tailored to Azure. Despite that, adopting Kubernetes will give you some confidence that you could move to another provider if needed. Adopting the ecosystem around Kubernetes, with cloud-agnostic tools for CI/CD (Helm, KUDO) and for logging and monitoring (Prometheus), limits direct dependencies on Azure services, although you can usually configure them to leverage Azure tools like App Insights for additional benefits.
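As a small example of what that cloud-agnostic tooling looks like in practice, here's a hedged sketch that installs the public kube-prometheus-stack Helm chart. It uses Pulumi's TypeScript SDK purely for illustration (a plain `helm install` works just as well); the namespace and values shown are assumptions, not a recommended configuration.

```typescript
// Illustrative sketch: installing the community kube-prometheus-stack chart through
// Pulumi's Helm support so metrics and dashboards stay cloud-agnostic. The chart name
// and repo are the public prometheus-community ones; namespace and values are assumptions.
import * as k8s from "@pulumi/kubernetes";

const monitoring = new k8s.core.v1.Namespace("monitoring", {
    metadata: { name: "monitoring" },
});

new k8s.helm.v3.Chart("kube-prometheus-stack", {
    chart: "kube-prometheus-stack",
    namespace: monitoring.metadata.name,
    fetchOpts: { repo: "https://prometheus-community.github.io/helm-charts" },
    values: {
        grafana: { enabled: true }, // Grafana dashboards alongside Prometheus
    },
});
```

Because nothing in that definition is Azure-specific, the same setup applies whether the cluster is AKS, EKS or GKE.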

But if you’re not concerned with needing to move to a different cloud provider, leveraging Azure app services may meet your needs just fine.

Do you have other applications that can or will run in Containers?

Running SOLR on Kubernetes can be a gateway to Kubernetes adoption. SOLR in general is difficult to manage, and running a SOLR cluster with ZooKeeper on VMs isn't fun if you don't know what you're doing. Kubernetes makes things a little easier with the SOLR operator and Helm charts, which make it a more configuration-driven affair. I'd still recommend solutions like SearchStax if you don't have the in-house expertise to manage a SOLR environment, let alone a Kubernetes cluster.
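Here's a rough sense of how configuration-driven that becomes. This is a sketch only, assuming the Apache Solr operator (and its CRDs) is already installed in the cluster, typically via the operator's Helm chart; it's expressed with Pulumi's TypeScript SDK for illustration, the replica counts and image tag are assumptions, and the field names should be verified against the operator version you actually deploy.

```typescript
// Assumes the Apache Solr operator and its CRDs are already installed (e.g. via the
// operator's Helm chart). Field names follow the operator's SolrCloud CRD but should
// be checked against your operator version; sizes and the image tag are assumptions.
import * as k8s from "@pulumi/kubernetes";

new k8s.apiextensions.CustomResource("sitecore-solr", {
    apiVersion: "solr.apache.org/v1beta1",
    kind: "SolrCloud",
    spec: {
        replicas: 3,                                 // three Solr nodes
        solrImage: { tag: "8.11.2" },                // pick a Sitecore-supported version
        zookeeperRef: { provided: { replicas: 3 } }, // operator-managed ZooKeeper ensemble
    },
});
```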

But chances are SOLR isn't the only dependency your Sitecore solution has. Microservice architectures have gained popularity in recent years, and Kubernetes has become the preferred way of deploying and managing them. If your solution is highly dependent on services, then having a consistent deployment and configuration approach for your solution and its services brings many advantages.

Do you have expertise in Managing Kubernetes Environments?

If you aren't running Kubernetes in production, you need to be prepared to get your engineering team up to speed on what it takes to support a Kubernetes implementation. Even on Azure, where AKS provides Kubernetes-specific integrations for things like Application Insights and Web Application Firewall, those integrations tend to operate within a Kubernetes context, so you need to really understand how they fit into your stack.

To be ready to manage a Kubernetes environment in production, make sure your team fully understands:

  • Deployments – While App Services supports deployment slots to achieve blue-green deployments, Kubernetes is much more robust and supports blue-green, canary, ramped and even A/B testing strategies (see the canary sketch after this list).
  • Networking, Security & Ingress Options – Kubernetes makes it easy to hide your application from the rest of the environment. Exposing the right parts and working with ingress controllers, Key Vault, Application Gateway and Front Door gives you tons of flexibility, but comes with a decent learning curve.
  • Logging & Monitoring – Kubernetes-native tools like Prometheus and Grafana provide centralized metrics, visualization and monitoring, and Azure provides hooks to connect them to Log Analytics and Application Insights. Understanding your options and configuring things correctly is key to being able to manage the environment.
  • Disaster Recovery – Make sure you understand how a cutover would work in the event of a disaster. Hot-hot, hot-warm and hot-cold DR architectures each have their own nuances (and cost implications) on Kubernetes vs. App Services.
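
To make the first bullet concrete, here's a minimal canary sketch, again using Pulumi's TypeScript SDK purely for illustration: a stable and a canary Deployment share the label that the Service selects on, so traffic splits roughly in proportion to replica counts. The image tags and counts are assumptions, and real setups often use a service mesh or ingress-level weighting for finer control.

```typescript
// Canary by replica ratio: both Deployments carry the shared "app" label that the
// Service selects on, so roughly 1 in 10 requests hits the canary. Image tags,
// registry and replica counts are hypothetical.
import * as k8s from "@pulumi/kubernetes";

function cdDeployment(track: string, image: string, replicas: number) {
    const labels = { app: "sitecore-cd", track };
    return new k8s.apps.v1.Deployment(`cd-${track}`, {
        spec: {
            replicas,
            selector: { matchLabels: labels },
            template: {
                metadata: { labels },
                spec: { containers: [{ name: "cd", image, ports: [{ containerPort: 80 }] }] },
            },
        },
    });
}

cdDeployment("stable", "myregistry.azurecr.io/sitecore-xm1-cd:10.3.0", 9);
cdDeployment("canary", "myregistry.azurecr.io/sitecore-xm1-cd:10.3.1", 1); // ~10% of traffic

// The Service ignores the "track" label, so it load balances across both versions.
new k8s.core.v1.Service("sitecore-cd", {
    spec: { selector: { app: "sitecore-cd" }, ports: [{ port: 80, targetPort: 80 }] },
});
```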

Are your developers ready to adopt Containers?

The learning curve for Kubernetes is probably the biggest barrier to adoption. It's like learning a completely new language: Docker, Compose, kubectl, KUDO, Prometheus, Grafana, operators, Helm and charts are just some of the terms you'll need to become fluent in.

There are many advantages to getting past that learning curve, including:

  • Faster Local Environment Setup – with containers, getting your solution and environment up and running is usually just a matter of running a few scripts.
  • Run Multiple Projects More Easily – if you need to support multiple Sitecore instances and solutions, you can run one solution, shut it down and spin up another with minimal friction.
  • Replicate Upper Environments – if your upper environments use containers, you can run those same images locally to debug and troubleshoot faster.
  • Ability to Compose Additional Features – instead of reverse engineering Sitecore packages and figuring out how to deploy them properly, you can use multi-stage builds to pull the required artifacts into your image. Many open-source modules provide this option, and you can use the same strategy to improve reusability.

For these reasons alone, you may want to consider a container-based local environment setup, and even use Kubernetes to run containers in Dev or QA. This could give you a way to get up to speed with containers and Kubernetes before going off the deep end and trying to run them in production.

Need Help Deciding?

At Perficient, we've been honing our approach to helping our clients make this decision. With discovery workshops to understand your requirements and architecture workshops to home in on the target architecture, we can make sure you're making the right decision and understand the downstream implications, including cost differentiators for both hosting and maintenance.

If this is a path you're considering going down, we'd love to help. Reach out to me on LinkedIn or Twitter, or fill out our contact form.


David San Filippo, Principal

David is the Principal of the Sitecore and Optimizely practice at Perficient, where he estimates, architects and delivers digital marketing solutions at scale on the Sitecore platform. A 4X Sitecore Technology MVP, David has written articles for MSDN Magazine and the Microsoft Architecture Journal. He has spoken at Sitecore Symposium, Sitecore Virtual Developer Day, user group meetings and code camps.
