Ritesh Bhange, Author at Perficient Blogs
https://blogs.perficient.com/author/rbhange/

Kubernetes Solution & Strategy for Enterprise Magento 2 Application
https://blogs.perficient.com/2019/11/21/kubernetes-solution-strategy-for-enterprise-magento-2-application/
Thu, 21 Nov 2019

Hosting an enterprise application is always challenging. It needs many complex components to work correctly and achieve agility, reliability, and security. In the past couple of years, container technologies have evolved rapidly to do the heavy lifting for enterprise applications in the cloud. Containers have played a significant role in delivering robust, distributed microservice architectures and continue to integrate into the DevOps space. But at large scale, container management takes a lot of effort.
Kubernetes is built to manage that complexity for you. Cloud providers have also come up with managed Kubernetes services, which remove maintenance tasks and other operational processes from the Kubernetes workload. The combination of a cloud platform and Kubernetes lets you focus on strategy, deployment, and integration for enterprise applications.
Running an on-premises Magento 2 Enterprise application on Kubernetes is still challenging because of its long downtime during deployment and its IO-intensive tasks. This guide points out some of the key elements of infrastructure and deployment that will help you deploy Magento 2 faster. Other enterprise applications can also benefit from this guide when building their infrastructure.

Strategy

Strategy is a crucial part of creating an architecture and building a solution around an enterprise application. The top considerations are the requirements, the selection of toolsets, and flexible integration capability. The solution you are building should be flexible enough to integrate easily with future releases. A dynamic approach will help provide a scalable infrastructure that can scale up or down on demand. At the same time, the solution must not impact your application, deployment, or security. Kubernetes is a great fit to manage such intricacy.
A strategic approach and a strong continuous integration/continuous delivery (CI/CD) pipeline can help reduce Magento 2 deployment time. In CI/CD, a strong deployment pipeline can create a Magento 2 Docker image, install the Nginx and PHP packages, and set configurations using a Dockerfile. You can also bake in composer install, static content deploy, patching, and module installation at build time. Try to keep the Docker image size as small as possible by compressing the file system before building the image. In the deploy stage, perform the Kubernetes deployment using Helm or kubectl and set the Magento media folder as a mount point backed by Elastic File System (EFS). A CDN or Varnish can serve static content faster.
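The build/deploy split described above can be sketched as a CI/CD pipeline. This is a hypothetical example in GitLab CI syntax; the registry URL, image name, and Helm chart path are placeholders, not values from the actual project:

```yaml
# Hypothetical pipeline: build the Magento 2 image once, then deploy it with Helm.
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # composer install, static content deploy, and patches are baked in by the Dockerfile
    - docker build -t registry.example.com/magento2:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/magento2:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    # Roll out to the cluster; the chart mounts the Magento media folder from EFS
    - helm upgrade --install magento2 ./chart --set image.tag=$CI_COMMIT_SHORT_SHA
```

Because the expensive steps happen at image-build time, the deploy stage reduces to a fast Helm rollout.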

Platform Decision

Initially, I had a basic question: What platform should I select? There are quite a few good options available, such as dedicated and cloud server hosting solutions (e.g., Amazon Web Services, Google Cloud Platform, and Rackspace), Magento Cloud hosting, self-hosted Docker solutions, and Kubernetes on-prem. I decided to go with a cloud-managed Kubernetes service to offload the Kubernetes management workload. I wanted a scripted, automation-friendly platform that would integrate smoothly with the rest of the toolset. The top cloud Kubernetes services are Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE), Azure with Azure Kubernetes Service (AKS), and Amazon Web Services with Amazon Elastic Kubernetes Service (EKS). These options are all competitive and could be a good choice for your solution. I personally like GKE for its out-of-the-box features, but I ended up choosing AWS EKS due to other requirements (which I will cover later).

Architecture and Implementation

With the platform chosen, it’s now time to focus on architecture. I wanted to keep the solution as private as possible for security reasons, but at the same time create a fully scalable infrastructure that can pump in resources on demand. These are the few simple but effective tools I ended up choosing.
Tools used to build Magento EE Kubernetes solution:

  • AWS – Cloud Infrastructure
  • AWS CloudFormation
  • EKS – Kubernetes
  • Nodes Autoscaling
  • Cluster Autoscaler
  • Elastic File System (EFS)
  • Istio

The infrastructure was prepared to host more than 100 different applications and 2,600 Docker containers. The applications should be distributed and isolated. Here is the infrastructure architecture of the Kubernetes solution:


Fig. Infrastructure Architecture for Kubernetes Solution

GKE vs. EKS

In the above architecture map, the EKS cluster is placed in a private segment of an Amazon Virtual Private Cloud (VPC), where the nodes reside. To achieve high availability and durability, the nodes are provisioned across multiple availability zones. Node scalability is managed by AWS Auto Scaling in conjunction with the Cluster Autoscaler, which keeps track of healthy nodes and probes for resource requirements. It also honors the readiness and liveness probes configured for pod deployments in the cluster.
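For instance, the probes for a Magento pod might be declared like this. This is a minimal sketch; the endpoint path, port, and timings are assumptions, not values taken from this deployment:

```yaml
# Sketch: readiness/liveness probes on a Magento web container
containers:
  - name: magento-web
    image: registry.example.com/magento2:latest   # placeholder image
    readinessProbe:
      httpGet:
        path: /health_check.php   # Magento 2 ships a health-check endpoint
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 60
      periodSeconds: 20
```

Kubernetes withholds traffic from a pod until its readiness probe passes and restarts it when the liveness probe fails, which is what makes rolling deployments safe.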
One of the most significant reasons to go with EKS is the ability to mount a volume on multiple nodes, offered by the EFS service. GCP Cloud Filestore (beta) offers the same feature but requires a 1TB minimum fileshare size, whereas EFS is priced per GB/month. Yes, there is a latency penalty compared to EBS volumes, but that is something you can manage with your application design and carefully chosen mount points. As an alternative, you can configure a network file system (NFS) in GCP or AWS, but extra effort is required to manage the NFS service. In this architecture, EFS is provisioned in the private segment as intended. The public segment of the VPC contains the load balancer, which receives external traffic on ports 80/443. Istio then manages the rest of the routing and communication between the EKS cluster pods.
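A minimal sketch of how the EFS mount can be exposed to pods is shown below. The file system ID and names are placeholders, and this uses the EFS CSI driver, which is what a current EKS cluster would typically use (the original setup may have used an older EFS provisioner):

```yaml
# Sketch: static PersistentVolume backed by EFS, plus a claim for the Magento media folder
apiVersion: v1
kind: PersistentVolume
metadata:
  name: magento-media-pv
spec:
  capacity:
    storage: 100Gi              # EFS is elastic; this value is nominal
  accessModes:
    - ReadWriteMany             # the key EFS benefit: many nodes share one volume
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678   # placeholder EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: magento-media-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: magento-media-pv
  resources:
    requests:
      storage: 100Gi
```

The `ReadWriteMany` access mode is what EBS-backed volumes cannot provide, and it is why EFS was chosen for the shared media folder here.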
Scaling and auto-healing are handled by the Cluster Autoscaler. The EKS logs are uploaded to CloudWatch Logs, and the nodes/pods are monitored with CloudWatch metrics and Kubernetes monitoring tools.

Kubernetes Node Structure

The following map explains more about internal pod isolation and the flow of requests handled by microservices. The infrastructure is very flexible and closely integrated with several components to provide security.


Fig. Insight of K8s Nodes

The containerized projects are easily deployable using CI/CD tools without affecting other environments. The CI/CD build phase packages the Docker container image and ships it as part of the deployment release. For the Magento 2 application, 95% of the deployment workload is handled by the CI/CD tool and finishes within a couple of minutes. The dev team is provided with the necessary access to the application through a jump box server and can access the containers. Access rights are securely managed and restricted to each team’s specific projects.
This solution can work with any enterprise-level application and deploy at any scale in a staging, UAT, or production environment using CloudFormation or Terraform templates.

Automation with AWS for WordPress, Drupal & Magento Applications
https://blogs.perficient.com/2019/10/02/automation-aws-wordpress-drupal-magento-applications/
Wed, 02 Oct 2019

Objective

This is an automatic installation and configuration solution for WordPress, Drupal, and Magento applications using AWS CloudFormation for infrastructure orchestration and configuration management tools like Puppet, Ansible, or Chef. The infrastructure as code (IaC) solution should follow architecture best practices, such as setting up the database in a private segment, secure authentication, and provisioning. It should also include system configuration optimization for the web and database tiers, delivering a ready-to-go solution.

A CloudFormation template accepts the user inputs as parameters where needed – for example admin credentials for WordPress, and URL and admin credentials for Magento. The template will also set up Amazon Virtual Private Cloud (VPC) in AWS and create the infrastructure as per best practices. It should also create subnets and launch instances. It will pull required code from the code repository to set up the application or use Amazon S3 for this purpose. Lastly, it will perform optimization using Puppet and set up the base WordPress setup.
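The parameter-driven approach can be sketched as a CloudFormation fragment like the following. The parameter names are illustrative, not taken from the actual template:

```yaml
# Sketch: CloudFormation parameters collecting the user inputs for the WordPress setup
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Existing SSH key pair for instance access
  WPAdminUser:
    Type: String
    Description: WordPress admin username
  WPAdminPassword:
    Type: String
    NoEcho: true                # keep credentials out of console and API output
  InstanceType:
    Type: String
    Default: t2.small
    AllowedValues: [t2.micro, t2.small]
```

`NoEcho: true` is the standard CloudFormation mechanism for keeping secrets like admin credentials from being echoed back in stack describe calls.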

Overview

Here we use WordPress as the base setup, but this is not limited to WordPress – you can go with Magento or Drupal with small changes. The solution is prepared to accomplish an automatic setup, and I tried to make it as simple as possible. I also wanted to take the opportunity to use different techniques that show the various possible ways to integrate the different components required for WordPress application setup and deployment.

Components Involved

  • Amazon S3 (code storage)
  • AWS CloudFormation Template (YAML)
  • Puppet 5.0 (Masterless Setup)
  • Nginx
  • PHP-FPM
  • MySQL
  • WP-CLI
  • WordPress

Prerequisites

The solution is well tested in the US West region (Oregon, [us-west-2]) with Amazon Linux 1 AMIs and prepared to work seamlessly with AWS US East (N. Virginia, [us-east-1]). It can also work in US West (N. California, [us-west-1]) with a small change. The solution is fully customizable as needed.

Before you start, there are a couple of prerequisites.

  • KeyName:
    • Please set up an SSH key pair in your AWS account (CF input).
  • S3 Bucket:
    • Please create a “code pull” S3 bucket (the bucket name is hardcoded if you want to use S3).
    • Suggestion: You can use Git to pull code with a small change in the “userdata” of the instance in the CF template.
  • AWS Credentials (optional):
    • I hardcoded the provided AWS credentials to configure the AWS CLI.
    • Note: With a small CF parameter change, you can input the AWS credentials.
    • Suggestion: You can add an IAM role to the instance and provide it as input with CF.

Note: Make sure AWS Credentials have access to the S3 bucket.
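The IAM-role suggestion above avoids hardcoding credentials entirely. A sketch of the relevant CloudFormation resources, with an illustrative bucket name:

```yaml
# Sketch: instance role granting read access to the code-pull bucket
CodePullRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal: { Service: ec2.amazonaws.com }
          Action: sts:AssumeRole
    Policies:
      - PolicyName: s3-code-pull
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: [s3:GetObject, s3:ListBucket]
              Resource:
                - arn:aws:s3:::my-code-pull-bucket     # placeholder bucket name
                - arn:aws:s3:::my-code-pull-bucket/*

CodePullInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles: [!Ref CodePullRole]
```

Attaching `CodePullInstanceProfile` to the web instance lets the AWS CLI on the box fetch the code bundle without any stored keys.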

Solutions

Once the prerequisites have been set, the AWS CloudFormation template accepts inputs to set up WordPress and provides a CF output with the accessible domain name URL of the WordPress application. The CloudFormation template performs the following activities:

CloudFormation Stack Flow

  • Collect inputs.
  • Provision the VPC with public and private subnets.
  • Provision web and DB instances (t2.small/t2.micro, t2.small preferred) as per input and deploy them into the public and private subnets respectively.
  • Provision EIPs for the NAT gateway and the CM/web instance.
  • Set up security groups to allow the instances to communicate.
  • Install the Puppet agent for the Puppet masterless setup and pull the required custom Puppet module from the S3 bucket (code pull).
    • I used ready-to-go Puppet modules, created custom module configurations, and placed them, named devopshv1_AWSLinux1.tar.gz, into the code repository.
  • Run the Puppet module installation to set up and configure Nginx and PHP-FPM (with PHP extensions).
    • You can modify the Nginx and PHP-FPM configuration based on system RAM as part of the optimization in the Puppet module.
  • Use the cfn-init script in the CF stack to set up the WordPress configuration.
  • Use WP-CLI, configured via the instance “userdata”, to install the WP database and finalize the WP setup.
  • The stack output provides the WP application access point URL.
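The WP-CLI finalization step can be sketched in the instance UserData like this. This is illustrative: the parameter references (`WPDomain`, `WPAdminUser`, `WPAdminPassword`) and the site title/email are placeholders, and the actual template’s commands may differ:

```yaml
# Sketch: UserData running cfn-init and then WP-CLI to finish the WordPress install
WebInstance:
  Type: AWS::EC2::Instance
  Properties:
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # Apply the cfn-init metadata (WordPress configuration files)
        /opt/aws/bin/cfn-init -s ${AWS::StackName} -r WebInstance --region ${AWS::Region}
        # Install the database and finalize the WP setup via WP-CLI
        wp core install --path=/var/www/html \
          --url="http://${WPDomain}" \
          --title="My Site" \
          --admin_user="${WPAdminUser}" \
          --admin_password="${WPAdminPassword}" \
          --admin_email="admin@example.com" \
          --allow-root
```

`wp core install` is what turns a configured but empty WordPress tree into a working site, which is why the stack output URL is usable as soon as the stack completes.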

Architecture Map

Steps to Create a Setup

  1. Set up the prerequisites: the key pair (KeyName) and the S3 bucket (code pull).
  2. Log in to the AWS account and create the CloudFormation stack using the provided template.
  3. Upload the code to the S3 bucket, including the devopshv1_AWSLinux1.tar.gz file (the solution extracts it).
  4. Create the CloudFormation stack in the Oregon region.
  5. Supply the inputs to the CF template.
  6. The stack template will take a few minutes to provision the AWS resources and WP configurations.
  7. Check the stack output to get the access domain URL or public IP.

Final WP Application Output

How Docker Containerization Reduces CI/CD Deployment Costs
https://blogs.perficient.com/2018/07/06/how-docker-containerization-reduces-ci-cd-deployment-costs/
Fri, 06 Jul 2018

Whenever I hear the word “container,” the first thing that comes up in my mind is the containers on large cargo ships. The same context applies to the containerized application in IT as well. The containers on the cargo ships are packed with goods to ship to their destinations. Similarly, in IT, we can use containers to deliver and manage our content much more efficiently.
Container technology has existed for a long time, but it used to be hard to use. This has changed in the last couple of years with the help of Docker. Docker gives a simple and straightforward way to interact with Linux kernel container technology and has changed IT operations and application deployment. With Docker, things are changing rapidly and getting better and better in the areas of application development, testing, deployment, management, performance, security, and scaling. You can utilize and integrate it at any level you need.
Well, “container” sounds cool and powerful, but who can use it? Will it work for a small client with just a single server, mid-level clients with several servers, or an enterprise client with 500+ servers? We will explore this further, but first we need to understand what a container is in technical terms and how it is unique compared to virtualization.

What is a Docker container?

Lots of people compare containers with virtualization, which is fine for understanding the structure of containers. But does it provide a complete understanding of how containers fit into our existing setup, deployment, and other processes? Let’s have a look.
Containerization vs. virtualization; Docker.com
As the figures show, virtualization requires a hypervisor to run multiple OS images and their applications combined, whereas a container requires only the Docker Engine layer to interact with the base host OS infrastructure. Containerization ultimately saves system resources and gives you a way to isolate applications along with their different versions and dependencies.
As an example of the traditional approach, let’s say you want to host multiple applications (Magento) with different PHP versions, web servers (Nginx, Apache), and different caching components. With the help of virtualization this is possible, but you will need different OS virtual machines and will need to install the software on each. If you need five different PHP versions, then you would need to set up five different virtual machines, each with its own OS and components.
Now, with the help of Docker, you only need the base host machine with a single OS and the Docker Engine software. That’s it. Then you only have to use Docker images with different PHP versions and build your application containers. The best part is that you can move, migrate, or deploy these container images seamlessly to any environment like dev, QA, UAT, staging, or production. It will give you the same result, regardless of your base OS.
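As an example, two applications pinned to different PHP versions can run side by side on one host. This is a minimal docker-compose sketch; the service names, directory layout, and Nginx wiring are illustrative:

```yaml
# Sketch: two apps with different PHP versions sharing one host OS
services:
  app-php71:
    image: php:7.1-fpm          # first app stays on PHP 7.1
    volumes:
      - ./app1:/var/www/html
  app-php72:
    image: php:7.2-fpm          # second app uses PHP 7.2
    volumes:
      - ./app2:/var/www/html
  web:
    image: nginx:stable         # one web tier routing to both PHP-FPM backends
    ports:
      - "80:80"
    depends_on:
      - app-php71
      - app-php72
```

With virtualization this would have meant two full guest OS images; here the only per-app cost is the container itself.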

How will this fit into your existing deployment process?

In the traditional deployment process, the developer stores their code in a code repository, like Git, and sets up a local development environment – small virtual machines, Vagrant-like virtualization tools, etc. The developer then builds the code, performs their testing, and pushes the code into Git for QA. The QA team has their own QA setup, either on their working machines or hosted in a centralized location. The QA testing might fail because the developer performed their testing on a local machine and the underlying packages like the OS, PHP, and web server on the dev environment don’t match the QA instance. Then what? The developer has to rework their build and cannot focus on anything else until the build moves successfully to production.
Docker containerization solves this problem. You can create Docker container images for application and dependencies like PHP, the web server, and caching. Then you upload the container images into the Docker registry (you can use Docker Hub or create your own Docker registry). The developer has to pull the same version of the container images, which is set according to production setup, and start developing an application on their local machines or in the dev environment.
Once the code is ready, the development team can perform their application functionality testing on their local systems. They will be able to see the same results QA is going to see, as QA will also pull the same version of the application component images for testing. This saves lots of developer time and allows them to focus on the next task. By using modern continuous integration/continuous delivery (CI/CD) deployment tools, you can automate this process and let the build move along the defined pipelines. There is no need to worry about unexpected issues that may occur during the QA, UAT, or staging review. This automation and Docker containerization will save development hours, help you meet your go-live timeline easily, and ultimately save money for the customer.
Similarly, containerization helps system operations teams keep container images up to date and perform patching and upgrades more efficiently. You can build the containerized application as a package and release it into your environments. These combinations reduce production uncertainty, application failures, downtime, and errors, and give a better customer experience.
Is it that simple? Yes. If you build your containerized environment strategically, there will be fewer challenges.

What to consider while creating a containerized environment and build process?

Architectural design

As this is going to be at the heart of your deployment release cycle, you need to choose high-availability (HA) robust CI/CD automation that provides everyone with visibility, responsibility, and security.

Select the right tool

There are many tools available to perform similar tasks. Review your requirements, the associated costs, and the underlying hosting platform.

Select secure Docker images

Try to choose official Docker container images or build your own images as per your requirements. There are great tips available to follow – for example, around volume and port configuration, Docker image size, resource utilization, and persistent data storage. Also, store container images in your own Docker registry, or somewhere you can control your image registry securely.

Select the right hosting platform

Always try to choose a platform where you have full control, whether it’s on-premises or with an IaaS solutions provider. Make sure you have high availability, on-demand scalability, and data security.
As with a lot of software, containerized applications also require some kind of management, especially for big enterprise customers where an application runs on hundreds or thousands of application containers. It’s not possible to manage and monitor these manually, but there are very good tools available to solve this problem. One of them is Kubernetes, which handles monitoring, container cluster management, scaling, and rollouts.
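As a small illustration of the scaling management Kubernetes offers, an autoscaler can be declared in a few lines. The deployment name and thresholds below are placeholders:

```yaml
# Sketch: HorizontalPodAutoscaler scaling a deployment on CPU usage
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                         # placeholder deployment name
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70  # add pods when average CPU exceeds 70%
```

This is the kind of declarative policy that replaces manual monitoring once the container count grows past what a team can watch by hand.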
I hope this will help you to make a more robust, controlled, and secure CI/CD deployment process.

Additional Resources:

https://www.docker.com/what-container
