AWS Systems Manager (SSM)
AWS Systems Manager provides a unified interface for viewing and controlling your infrastructure, automating tasks, and managing configurations. Here are some key features and components of AWS Systems Manager:
  • AWS Systems Manager Core Components:
    • Run Command: Executes commands on instances.
    • State Manager: Maintains desired state configurations.
    • Parameter Store: Stores configuration data and secrets.
    • Automation: Automates workflows and tasks.
    • Inventory: Collects and stores instance metadata.
    • Patch Manager: Automates patching for instances.
    • Session Manager: Provides secure access to instances.
    • OpsCenter: Manages and resolves operational issues.
    • Fleet Manager: Manages server fleets.
    • Maintenance Windows: Schedules maintenance tasks.
  • Integration with Other AWS Services:
    • S3: Often used for storing logs, inventory data, and automation documents.
  • Resource Targets:
    • EC2 Instances: Managed instances running on AWS.
    • RDS Instances: Managed database instances.
    • On-Premises Instances: Servers running in your data centers or other clouds, managed through Systems Manager hybrid capabilities.
    • S3 Buckets: Storage for various outputs and configurations.
  • Network Architecture:
    • VPC / On-Premises: Environments where your instances and resources reside. Systems Manager can manage instances within a VPC, across multiple regions, and on-premises systems.

Use Cases:

  • Automation of Routine Tasks: Use Automation to create standardized, repeatable workflows.
  • Secure Remote Management: Use Session Manager for secure, audited access without SSH or RDP.
  • Configuration Management: Use State Manager and Parameter Store for maintaining consistent configurations and managing secrets.
  • Compliance and Inventory Tracking: Use Inventory to collect and query instance metadata for compliance and audit purposes.
  • Patch Management: Use Patch Manager to automate the application of patches across your fleet.
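As a rough illustration of how these components are driven programmatically, here is a minimal boto3 sketch that reads a value from Parameter Store and runs a shell command on a managed instance through Run Command. The parameter name, instance ID, and region are hypothetical placeholders.

import time
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is an example

# Read a configuration value from Parameter Store (decrypts SecureString values)
param = ssm.get_parameter(Name="/myapp/db/endpoint", WithDecryption=True)
print("Parameter value:", param["Parameter"]["Value"])

# Run a shell command on a managed instance via Run Command
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],           # hypothetical instance ID
    DocumentName="AWS-RunShellScript",             # built-in SSM document
    Parameters={"commands": ["uptime", "df -h"]},
)
command_id = response["Command"]["CommandId"]

# Give the agent a moment, then fetch the result for that instance
time.sleep(5)
invocation = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(invocation["Status"], invocation.get("StandardOutputContent", ""))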
Types of Databases and Use Cases

SQL Databases (MySQL, PostgreSQL, Oracle):

  • Use Case: Perfect for storing and managing structured data with well-defined relationships, like:
  • E-commerce applications: Customers, orders, products, and their attributes (names, prices, descriptions) with clear connections.
  • Enterprise resource planning (ERP): Inventory, employees, departments, and their interactions.

Document Databases (MongoDB, Couchbase):

  • Use Case: Ideal for storing flexible, ever-changing data that doesn’t fit neatly into rows and columns, like:
  • Content management systems (CMS): Articles, blog posts, user profiles with various data types (text, images, comments).
  • E-commerce product catalogs: Products with rich descriptions, reviews, and dynamic attributes (e.g., color variations, sizes).

Columnar Databases (Apache Cassandra, HBase):

  • Use Case: Built for large-scale data analytics where you primarily query specific columns, like:
  • Telecommunications companies: Call detail records (CDRs) with timestamps, locations, call durations.
  • Financial institutions: Stock market data with historical prices, volumes, and market trends.

Key-Value Stores (Redis, Amazon DynamoDB):

  • Use Case: Excel at super-fast data retrieval, for caching frequently accessed data or storing temporary session information, like:
  • Shopping cart applications: Storing shopping cart items and quantities for quick retrieval by users.
  • Social media platforms: Caching user profiles and preferences for faster loading times.
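As a hedged sketch of the caching pattern described above, here is a minimal example using the redis-py client; the host, key names, and the pretend database lookup are placeholders.

import json
import redis

# Connect to a Redis instance (host/port are placeholders)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_user_profile(user_id):
    key = f"user:{user_id}:profile"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)  # cache hit: skip the slower backing store
    # Pretend this came from the primary database
    profile = {"id": user_id, "name": "Alice", "theme": "dark"}
    cache.setex(key, 300, json.dumps(profile))  # keep it for 5 minutes
    return profile

print(get_user_profile(42))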

Vector Databases (Faiss, Milvus):

  • Use Case: Designed for machine learning applications that work with vector data (multidimensional data points), like:
  • Recommendation systems: User preferences and product features represented as vectors for personalized recommendations.
  • Image recognition systems: Storing and searching image features for efficient image retrieval.

Object Databases (db4o, ObjectDB):

  • Use Case: Well-suited for applications built with object-oriented programming (OOP) where data naturally aligns with real-world objects, like:
  • CAD/CAM software: Storing and manipulating 3D object models with properties and behaviors.
  • Simulation software: Representing complex systems with interacting objects and their attributes.

Graph Databases (Neo4j, Amazon Neptune):

  • Use Case: Ideal for modeling and querying relationships between data entities, like:
  • Social network analysis: Users, their connections, and interactions for understanding social dynamics.
  • Fraud detection: Identifying connections between suspicious transactions and entities.

In-Memory Databases (Redis, Memcached):

  • Use Case: Perfect for applications requiring lightning-fast read/write speeds, often used for caching or leaderboards, like:
  • Real-time gaming: Storing player locations, health points, and in-game data for smooth gameplay.
  • Online auctions: Keeping track of current bids and displaying them instantly to users.

Time Series Databases (InfluxDB, Prometheus):

  • Use Case: Optimized for storing and analyzing data with timestamps, making them ideal for applications like:
  • Internet of Things (IoT): Sensor data from connected devices with timestamps for further analysis.
  • Monitoring systems: Server performance metrics with timestamps to identify trends and troubleshoot issues.
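As a hedged sketch of the write-and-query path for such workloads, here is a minimal example using the influxdb-client library for InfluxDB 2.x; the URL, token, org, bucket, and measurement names are placeholders.

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Connection details are placeholders for an InfluxDB 2.x instance
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One timestamped reading from a hypothetical IoT sensor
point = Point("temperature").tag("device_id", "sensor-42").field("celsius", 21.7)
write_api.write(bucket="iot-metrics", record=point)

# Query the last hour of readings with Flux
flux = (
    'from(bucket: "iot-metrics") '
    '|> range(start: -1h) '
    '|> filter(fn: (r) => r._measurement == "temperature")'
)
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())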

AWS Glue Complete View

AWS Glue is a serverless data integration service that simplifies the discovery, preparation, and movement of data for analytics, machine learning (ML), and application development. With Glue, you can:

  • Centralize data discovery and metadata management: Create a unified Data Catalog to identify and understand your data across diverse sources.
  • Build scalable ETL pipelines: Visually develop and schedule data extraction, transformation, and loading (ETL) processes using Spark or Python without managing infrastructure.
  • Run efficient Spark jobs: Leverage serverless Spark environments for data processing, eliminating the need to provision and manage clusters.
  • Integrate with various data stores: Access and process data from a wide range of on-premises, cloud, and streaming sources.
  • Automate data quality checks: Define and enforce data quality rules to ensure data integrity and reliability.
  • Monitor and manage data jobs: Track pipeline execution, performance, and cost through the intuitive Glue console.

Key Features and Architecture

  • Data Catalog: Stores metadata about your data assets, including location, schema, and lineage.
  • ETL Jobs: Visually create and run data processing workflows using Glue Studio or code-based methods.
  • Spark Environments: Serverless execution environments for running Apache Spark jobs.
  • Crawlers: Automatically discover and register data in the Data Catalog.
  • Job Scheduler: Schedule regular executions of ETL jobs and workflows.
  • Connectors: Integrates with a variety of data sources and destinations.
  • Glue Data Quality: Define and enforce data quality rules and monitor data health.
  • AWS Glue Data Lake for Windows: Enables seamless Glue integration with data sources and operations on Windows machines.

Real-Time Use Cases

  • Sensor Data Processing: Continuously ingest and analyze sensor data for real-time monitoring and insights.
  • Log Stream Analytics: Process and analyze log streams in near real-time for operational monitoring, security, and troubleshooting.
  • Fraud Detection: Analyze transactions in real-time to identify and prevent fraudulent activity.
  • Recommendation Engines: Collect and process user behavior data to generate personalized recommendations in real-time.
  • IoT Analytics: Ingest and analyze sensor data from IoT devices to enable real-time insights and actions.

Benefits

  • Simplified data integration: Streamline data movement and transformations without managing infrastructure.
  • Reduced costs: Pay only for the resources you use with serverless Spark environments.
  • Improved data quality: Define and enforce data quality rules to ensure reliable data.
  • Enhanced data governance: Gain visibility and control over your data assets.
  • Faster time to insights: Accelerate data-driven decision making with efficient data processing.

Getting Started

  1. Set up your AWS account: If you don’t have one, create a free tier account at https://aws.amazon.com/.
  2. Launch the AWS Glue console: Navigate to the Glue service in the AWS Management Console.
  3. Create a Data Catalog: Establish a central repository for your data asset metadata.
  4. Build your first ETL job: Use Glue Studio or code to create a data processing workflow.
  5. Connect to data sources: Choose from a variety of pre-built connectors or create custom connectors.
  6. Run and monitor your jobs: Schedule and execute your ETL jobs and track their progress and performance.
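To round out the getting-started flow, here is a minimal boto3 sketch that triggers an existing Glue ETL job and polls its state; the job name and region are hypothetical placeholders.

import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # region is an example

# Start a run of a Glue job that was already created in the console or via code
run = glue.start_job_run(JobName="my-etl-job")  # hypothetical job name
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state
while True:
    state = glue.get_job_run(JobName="my-etl-job", RunId=run_id)["JobRun"]["JobRunState"]
    print("Job state:", state)
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
        break
    time.sleep(30)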
Hosting a Static Website using AWS

Services used –

  1. Amazon S3
  2. AWS Cloud Front
  3. Route 53

Creating the bucket –

  • Log in to your AWS account in the AWS Management Console – https://aws.amazon.com/console/
  • Sign in with your credentials if you already have an account; otherwise, create a free tier account by signing up.
  • Once logged in, locate Amazon S3 in the list of services, or search for Amazon S3 using the search option at the top-left corner of the AWS Management Console.
  • Once Amazon S3 opens, click “Create bucket” at the top right of the page.
  • Enter the “Bucket Name”; bucket names must be globally unique. I have given the name “cloudopsbootcamp”.
  • Select the closest AWS Region under “AWS Region”. I have selected “Asia Pacific (Mumbai) ap-south-1”, which is the closest region to me since I am hosting the website from India.
  • Keep the default settings and create the bucket by clicking “Create bucket”.

Enabling the Static Website Hosting –

  • After creating the bucket, select the bucket name which was created and open it.
  • Select “Properties” tab.
  • Scroll down to the bottom of the page, and select “Static Website Hosting”, and click “edit”.
  • “Enable” the “static website hosting”.
  • In the “Index document” field, enter the name of the HTML file that serves as the default or home page of the website.
  • Scroll down and click “Save changes”.

 

Remove the block public access setting –

  • Select the bucket, and click “Permissions” tab.
  • Under “Block Public Access” setting, click edit.
  • Uncheck the “Block all public access” and “save changes”.

 

Adding the bucket policy which makes our bucket content publicly available –

  • Choose the bucket already created,
  • Select permissions, under permissions, select the “Bucket Policy”, choose edit.
  • To grant the public read access to our website, copy and paste the below bucket policy and paste it in the “Bucket Policy Editor”.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::cloudopsbootcamp/*"
      ]
    }
  ]
}

  • “cloudopsbootcamp” is the bucket name added in the above code.
  • Click “Save Changes”.
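The same console steps can also be scripted. Here is a hedged boto3 sketch, assuming the bucket name used in this post, that relaxes the public access block, enables static website hosting, and attaches the read-only policy shown above.

import json
import boto3

s3 = boto3.client("s3")
bucket = "cloudopsbootcamp"  # the bucket created earlier in this post

# Allow public bucket policies (mirrors unchecking "Block all public access")
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Enable static website hosting with index.html as the default document
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# Attach the public-read bucket policy
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": [f"arn:aws:s3:::{bucket}/*"],
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))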

 

Configure the Index Document –

Creating the index file.

  • Open a notepad and create a .html file. I have used the below script.

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>My Website Home Page</title>
</head>
<body>
<h1>Welcome to my website</h1>
<p>Hi, this is Vishak M U Srivatsa creating the page for Boot Camp</p>
<p>Now hosted on Amazon S3!</p>
</body>
</html>

  • Once the html file is created, save the file locally and upload it to the S3 bucket.
  • Click, “Upload”, select the file.
  • Once the file is added, scroll down and click “Upload” again so that file gets uploaded to the bucket.
  • After uploading the .html document, open the bucket, go to “Properties”, scroll down to “Static website hosting”, and copy the link under “Bucket website endpoint”.

 

And paste it in the address bar of a browser to check if the website works. You should get the desired result.

 

 

Speeding up the website with Amazon CloudFront

Creating the CloudFront Distribution –

  • Type “CloudFront” in the AWS Web Console, open the CloudFront service.
  • Click on the “Create a CloudFront distribution”.
  • Copy the Website Endpoint to “Origin Domain”.
  • Origin ID (name) is auto filled.
  • For the “default cache behavior” settings, keep the default settings.
  • For the distribution settings, select “Use all edge locations”.
  • For the Alternate domain name use the website name, www.cloudopsbootcamp.com.
  • Set the “Standard Logging” on.
  • For S3 bucket – Choose the bucket which has been created.
  • Click on the “Create Distribution” to create the distribution.
  • Record the value of Distribution Domain Name shown in the CloudFront console, for example, http://d22220ccjzmz5.cloudfront.net/.

 

And paste it in the address bar of a browser to check if the website works. You should get the desired result.

 

Update the record sets for the domain and subdomain

  • Open the Route 53 from the search option in AWS Management Console.
  • In the left navigation, choose “Hosted zones”.
  • On the “Hosted Zones” page, choose the hosted zone that you created for your subdomain, for example, www.cloudopsbootcamp.com
  • Under “Records”, select the A record that you created for your subdomain.
  • Under “Record details”, choose Edit record.
  • Under “Route traffic to”, choose Alias to CloudFront distribution.
  • Under “Choose distribution”, choose the CloudFront distribution.
  • Choose Save.

Note – To add DNS records and create domains and subdomains, you need to purchase a domain.

Once the task is completed, make sure to disable and delete all the resources.

AWS Managed Microsoft AD

AWS Managed Microsoft AD:

 

 

AWS Managed Microsoft AD is an Amazon Web Services (AWS) service that provides a managed version of Microsoft Active Directory in the cloud. It offers the familiar features and capabilities of Microsoft AD without the need for infrastructure deployment, management, and maintenance.

Here are Some Key Features of AWS Managed Microsoft AD:

  1. Managed Service: AWS handles the underlying infrastructure, including patching, backups, and ensuring high availability, allowing you to focus on managing your directory and user accounts.
  2. Compatibility: AWS Managed Microsoft AD is fully compatible with Microsoft Active Directory, enabling standalone directory usage or integration with existing on-premises AD environments.
  3. Multi-Availability Zone Deployment: It is deployed across multiple availability zones within a region, ensuring fault tolerance and high availability for directory services.
  4. User and Group Management: You can create, manage, and organize user accounts and groups using the AWS Management Console or APIs, providing control over access to AWS resources and applications.
  5. Domain Trusts: AWS Managed Microsoft AD supports establishing trust relationships with on-premises Active Directory domains, allowing extension of existing AD infrastructure to AWS.
  6. Group Policies: You can define and enforce group policies across the AWS Managed Microsoft AD directory, ensuring consistent configurations and security settings for users and resources.
  7. Integration with AWS Services: It seamlessly integrates with various AWS services, including Amazon EC2, Amazon RDS, and AWS Single Sign-On, enabling authentication and authorization using AD credentials.
  8. Security and Compliance: AWS Managed Microsoft AD includes built-in security features like encryption, secure remote access and support for multi-factor authentication (MFA). It also helps meet compliance requirements such as HIPAA and PCI DSS.

The image attached above shows the detailed architecture of how AWS Managed Microsoft AD links to other services.

It’s important to note that AWS Managed Microsoft AD is the same offering as AWS Directory Service for Microsoft Active Directory; it should not be confused with the other AWS Directory Service options, such as Simple AD and AD Connector, which are lighter-weight directory choices.
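As a hedged illustration of provisioning, the directory itself can be created through the AWS Directory Service API. The boto3 sketch below uses placeholder domain, password, VPC, and subnet values.

import boto3

ds = boto3.client("ds", region_name="us-east-1")  # region is an example

# Create an AWS Managed Microsoft AD directory (all values are placeholders)
response = ds.create_microsoft_ad(
    Name="corp.example.com",            # fully qualified domain name
    ShortName="CORP",                   # NetBIOS name
    Password="ChangeMeStrongPw1!",      # password for the default Admin account
    Description="Example managed directory",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],  # subnets in two AZs
    },
    Edition="Standard",                 # or "Enterprise"
)
print("Directory ID:", response["DirectoryId"])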

 

Enable/Disable Termination Protection in EC2 for 100 Instances

Manually enabling or disabling termination protection for 100 instances is a tedious task. To overcome this, we can automate the process.

It’s possible to automate the process in one shot using the Python SDK (boto3).

List of EC2 instances:

Python Code:

import boto3

# One shared EC2 client for all API calls
ec2 = boto3.client('ec2')


def disable_termination_protection(instance_id):
    """Turn termination protection OFF for a single instance."""
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={
            'Value': False
        }
    )
    print(f"Termination protection disabled for instance {instance_id}")


# Get all running instances
response = ec2.describe_instances(
    Filters=[
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
)

# Disable termination protection for each running instance
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        instance_id = instance['InstanceId']
        disable_termination_protection(instance_id)

The above code finds all running instances and disables termination protection on each of them.

After executing it, we can spot-check the result manually on any five random instances.
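If you instead want to enable termination protection across the same fleet, the same API call works with the flag flipped; here is a minimal variant of the function above.

def enable_termination_protection(instance_id):
    ec2 = boto3.client('ec2')
    # Setting DisableApiTermination to True turns termination protection ON
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={'Value': True}
    )
    print(f"Termination protection enabled for instance {instance_id}")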

Happy learning!!

Install Docker on an Amazon EC2 Instance Using the Yum Package Manager

To install Docker on an Amazon EC2 instance using the yum package manager, you can follow the steps below:

 

  1. Connect to your EC2 instance using SSH.
  2. Update the package index and upgrade installed packages by running the following command:
    sudo yum update -y
  3. Install Docker’s dependencies by executing the following command:
    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
  4. Configure the Docker repository by running the command:
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  5. Install Docker by executing the following command:
    sudo yum install -y docker-ce docker-ce-cli containerd.io
  6. Start the Docker service using the command:
    sudo systemctl start docker
  7. Enable Docker to start on system boot:
    sudo systemctl enable docker
  8. Verify that Docker is installed correctly by running the following command:
    docker --version

    If Docker is installed properly, you should see the version information.

If you face any issues while installing Docker, respond to this blog and I will try to help!

Thank you

Introduction to Terraform (EC2 Creation using Terraform)

TABLE OF CONTENTS

  • Introduction
  • Infrastructure as Code (IaC)
  • Benefits of IaC include:
  • Why Terraform?
  • Key Concepts and Components of Terraform
  •  Setting Up Terraform and Creating Your First Configuration
  • Conclusion

Key Concepts and Components of Terraform

Terraform has several key concepts and components that you should be familiar with:

  • Providers: Providers are responsible for managing the lifecycle of resources within a specific cloud platform or service. Terraform has a wide range of built-in providers, and you can also create your own custom providers.
  • Resources: Resources are the individual components of your infrastructure, such as virtual machines, storage accounts, or network interfaces. In Terraform, you define your resources using the HCL code.
  • State: Terraform maintains a state file that tracks the current state of your infrastructure. This allows Terraform to determine what changes need to be made to bring your infrastructure in line with your desired state.
  • Modules: Modules are reusable, self-contained units of Terraform code that can be shared and reused across different projects. Modules help you to create modular, maintainable infrastructure code.

🔹 Setting Up Terraform and Creating Your First Configuration

To get started with Terraform, follow these steps:

  1. Download and install Terraform: Visit the Terraform Download Link and download the appropriate binary for your operating system. Extract the binary to a directory in your system’s PATH.
  2. Verify the installation: Open a terminal or command prompt and run terraform -v to verify that Terraform is installed correctly. You should see the version number displayed.
  3. Create a new directory for your Terraform project: Create a new directory to store your Terraform configuration files. Navigate to this directory in your terminal or command prompt.
  4. Create a Terraform configuration file: In your project directory, create a new file called main.tf. This file will contain your Terraform configuration.
  5. Define a provider and resource: In main.tf, define a provider and a resource using HCL.
  6. For example, to create a simple AWS EC2 instance, you might use the following HCL code.

provider "aws" {
  region     = "eu-north-1"
  access_key = "*************************"
  secret_key = "***********************************************"
}

resource "aws_instance" "my-first-ec2-instance" {
  ami           = "ami-04980462b81b515f6"
  instance_type = "t3.micro"
  tags = {
    Name = "my-first-ec2-instance"
  }
}
I have chosen the eu-north-1 region for convenience and used an AMI from that region, because EC2 instances and AMIs are region-specific.
  7. Initialize Terraform: In your terminal or command prompt, run terraform init to initialize your Terraform project. This will download the necessary provider plugins and set up the backend for storing your state file.

  8. Plan your configuration: Run terraform plan. The plan shows you a detailed summary of the actions Terraform will take, such as creating, modifying, or destroying resources.

  9. Apply your configuration: Run terraform apply to create your infrastructure. Terraform will prompt you to confirm that you want to proceed. Type yes and press Enter to continue.

 

 

 

 

  10. Verify your infrastructure: Once Terraform has finished applying your configuration, you should see your new infrastructure in your cloud provider’s console.

  11. Destroy your infrastructure: Run terraform destroy to tear down the resources created by your Terraform configuration. Use it when you want to remove infrastructure that was previously created.

Thank you!! Happy learning

Deployment of Spring Boot App on Amazon EKS Using GitHub, Jenkins, Maven, Docker, and Ansible

Deployment of Spring Boot App

This is a common use case in several organizations. I hope this detailed blog is helpful for understanding the CI/CD process.

Let’s get started and dig deeper into each of these steps.

Step 1 — Create an Ubuntu T3 Large Instance


Select an existing key pair or make a new one. Enable HTTP and HTTPS Traffic. Once the instance is in a running state, you can connect via SSH Client.

Step 2 — Install JDK

sudo su
sudo apt-get update
sudo apt install openjdk-11-jdk -y
java --version


Step 3 — Install and Setup Jenkins

curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null

echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt update
sudo apt install jenkins
sudo systemctl status jenkins
The image attached below shows the normal procedure to install Jenkins, but I faced some issues.

While installing Jenkins, I faced some issues; after some debugging, I found the proper link and installation steps:

Use the Debian Jenkins package repository (https://pkg.jenkins.io/debian-stable) to install Jenkins on the Ubuntu server.

Steps to follow:

This is the Debian package repository of Jenkins to automate installation and upgrade. To use this repository, first add the key to your system:

    
  curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
    /usr/share/keyrings/jenkins-keyring.asc > /dev/null
  
Then add a Jenkins apt repository entry:
    
  echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
    https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
    /etc/apt/sources.list.d/jenkins.list > /dev/null
  
Update your local package index, then finally install Jenkins:

   
  sudo apt-get update
  sudo apt-get install fontconfig openjdk-11-jre
  sudo apt-get install jenkins

 

Since Jenkins works on Port 8080, we will need to go to our EC2 Security Group and add Port 8080 in our Inbound Security Rules.

Next, go to your <Public IP Address:8080> to access Jenkins

To get the initial admin password on Jenkins

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Jenkins is set up successfully now.

Step 4 — Update Visudo and Assign Administrative Privileges to Jenkins User

Let’s add the jenkins user as an administrator and also add NOPASSWD so that the pipeline run will not ask for the root password.

So, basically, the sudoers file is what makes any user an admin.

Open the file /etc/sudoers in vi mode

sudo vi /etc/sudoers

Add the following line at the end of the file:

jenkins ALL=(ALL) NOPASSWD: ALL

After adding the line, save and quit the file with :wq!

Now we can switch to the jenkins user (which now has passwordless sudo); to do that, run the following command:

$ sudo su - jenkins



Step 5 — Install Docker with user Jenkins

Remember, all these commands are run as the jenkins user, not the ubuntu user.

sudo apt install docker.io
docker --version


docker ps


sudo usermod -aG docker jenkins
sudo docker ps
sudo reboot

Step 6 — Install and Setup AWS and EKS CLI

sudo apt install awscli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install --update
aws --version

Now after installing AWS CLI, let’s configure the AWS CLI so that it can authenticate and communicate with the AWS environment.

aws configure

Log in to your AWS Console, go to Security Credentials, and create a new secret access key. Remember to download this key, as you will not be able to access it later. Enter all these details:

Step 7 — Install and Setup kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version


Step 8 — Creating an Amazon EKS Cluster using eksctl

First, we have to install and set up eksctl using these commands.

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

You can refer to this URL to create an Amazon EKS Cluster. https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html .

Here I have created a cluster using eksctl.

You need the following in order to run the eksctl command:

  1. Name of the cluster: --name first-eks-cluster1
  2. Version of Kubernetes: --version 1.24
  3. Region: --region us-east-2
  4. Nodegroup name/worker nodes: --nodegroup-name worker-nodes
  5. Node type: --node-type t2.micro
  6. Number of nodes: --nodes 2

This command will set up the EKS Cluster in our EC2 Instance. The command for the same is as below,

eksctl create cluster --name new-eks-cluster --version 1.24 --region us-east-2 --nodegroup-name worker-nodes --node-type t2.micro --nodes 2

Kindly note that it took me almost 20–25 minutes to get this installed and running.

Step 9 — Add the EKS IAM Role to EC2

Here, first, we will need to create an IAM Role with administrator access.

You need to create an IAM role with the AdministratorAccess policy.
Go to the AWS console, open IAM, and click on Roles. Create a role.

Select AWS service, click EC2, and click Next: Permissions.

Now search for the AdministratorAccess policy and select it.

Skip the Create tags step.
Now give the role a name and create it.

Assign the role to the EC2 instance
Go to the AWS console, click on EC2, select EC2 instance, and Choose Security.
Click on Modify IAM Role

Choose the role you have created from the dropdown.
Select the role and click on Apply.

Step 10 — Add Docker Credentials on Jenkins

Go to Jenkins -> Manage Jenkins -> Manage Credentials -> Stores scoped to Jenkins -> global -> Add Credentials [enter your Docker Hub credentials]

Next, we will create GitHub credentials:

The Global Credentials Dashboard should look like this:

Step 11 — Add Maven in Global Tool Configuration

Step 12 — Add Jenkins Shared Library

Go to Manage Jenkins → Configure System → Global Pipeline Libraries →

Library name — jenkins-shared-library

Default Version — main

Project Repository — https://github.com/jeeva0406/jenkins-shared-library1.git

Click on Apply and Save

Step 13 — Build, Deploy, and Test Jenkins Pipeline

Create a new pipeline: Go to the Jenkins dashboard and click on New Item. Give the pipeline the name EKS-Demo-Pipeline.

Select Pipeline as the item type and add the pipeline script below.

@Library('jenkins-shared-library@main') _
pipeline {

    agent any

    parameters {
        choice(name: 'action', choices: 'create\nrollback', description: 'Create/rollback of the deployment')
        string(name: 'ImageName', description: "Name of the docker build", defaultValue: "kubernetes-configmap-reload")
        string(name: 'ImageTag', description: "Tag of the docker build", defaultValue: "v1")
        string(name: 'AppName', description: "Name of the Application", defaultValue: "kubernetes-configmap-reload")
        string(name: 'docker_repo', description: "Name of docker repository", defaultValue: "writetoritika")
    }

    tools {
        maven 'maven3'
    }

    stages {
        stage('Git-Checkout') {
            when {
                expression { params.action == 'create' }
            }
            steps {
                gitCheckout(
                    branch: "main",
                    url: "https://github.com/writetoritika/spring-cloud-kubernetes.git"
                )
            }
        }
        stage('Build-Maven') {
            when {
                expression { params.action == 'create' }
            }
            steps {
                dir("${params.AppName}") {
                    sh 'mvn clean package'
                }
            }
        }
        stage("DockerBuild and Push") {
            when {
                expression { params.action == 'create' }
            }
            steps {
                dir("${params.AppName}") {
                    dockerBuild("${params.ImageName}", "${params.docker_repo}")
                }
            }
        }
        stage("Docker-CleanUP") {
            when {
                expression { params.action == 'create' }
            }
            steps {
                dockerCleanup("${params.ImageName}", "${params.docker_repo}")
            }
        }
        stage("Ansible Setup") {
            when {
                expression { params.action == 'create' }
            }
            steps {
                sh 'ansible-playbook ${WORKSPACE}/kubernetes-configmap-reload/server_setup.yml'
            }
        }
        stage("Create deployment") {
            when {
                expression { params.action == 'create' }
            }
            steps {
                sh 'echo ${WORKSPACE}'
                sh 'kubectl create -f ${WORKSPACE}/kubernetes-configmap-reload/kubernetes-configmap.yml'
            }
        }
        stage("wait_for_pods") {
            steps {
                sh 'sleep 300'
            }
        }
        stage("rollback deployment") {
            steps {
                sh """
                    kubectl delete deploy ${params.AppName}
                    kubectl delete svc ${params.AppName}
                """
            }
        }
    }
}

Step 14 — Ansible Python Setup

Let’s go to our EC2 instance and enter these commands:

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
sudo apt install python3
sudo apt install python3-pip
pip3 install Kubernetes

Finally, after many failed attempts our pipeline is running:

Step 15 — Add Webhook for Continuous Delivery

Go to your GitHub project: Settings → Webhooks → add <EC2 IP Address:8080>/github-webhook/ and set the content type to application/json.

Lastly, we need to trigger the pipeline with the webhook.

In Jenkins -> go to the General tab -> Build Triggers -> add the GitHub webhook trigger.

Now, any push to the GitHub repository will trigger the pipeline automatically.

Step 16 — Delete Cluster

Lastly, use this command:

eksctl delete cluster --name new-eks-cluster

Keep learning!

Simplifying Data Management with Amazon S3 Lifecycle Configuration

Introduction:

In the world of cloud storage, effective data management is crucial to optimize costs and ensure efficient storage utilization. Amazon S3, a popular and highly scalable object storage service provided by Amazon Web Services (AWS), offers a powerful feature called Lifecycle Configuration.

With S3 Lifecycle Configuration, you can automate the process of moving objects between different storage classes or even deleting them based on predefined rules. In this blog post, we will explore the steps involved in setting up S3 Lifecycle Configuration, enabling you to streamline your data management workflow and save costs in the long run.

Step 1:

Access the Amazon S3 Management Console: To begin, log in to your AWS account and access the Amazon S3 Management Console. This web-based interface provides an intuitive way to manage your S3 buckets and objects.

Step 2:

Choose the Desired Bucket: Select the bucket for which you want to configure the lifecycle rules. If you don’t have a bucket yet, create one by following the on-screen instructions.

I have chosen this bucket.

Step 3:

Navigate to the Lifecycle Configuration Settings: Within the selected bucket, locate the “Management” tab and click on “Lifecycle.” This section allows you to define and manage lifecycle rules for the objects in your bucket.

Step 4:

Create a New Lifecycle Rule: Click on the “Add lifecycle rule” button to create a new rule. Give your rule a descriptive name to help you identify its purpose later.

Step 5:

Define the Rule Scope: Specify the objects to which the rule applies. You can choose to apply the rule to all objects in the bucket or define specific prefixes, tags, or object tags to narrow down the scope.

Step 6:

Set the Transition Actions: Define the actions that should occur during the lifecycle of the objects. Amazon S3 offers three primary transition actions:

a. Transition to Another Storage Class: Choose when objects should be transitioned to a different storage class, such as moving from the Standard storage class to the Infrequent Access (IA) or Glacier classes.

b. Define Expiration: Specify when objects should expire and be deleted automatically. This feature is particularly useful for managing temporary files or compliance-related data retention policies.

c. Noncurrent Version Expiration: If versioning is enabled for your bucket, you can configure rules to expire noncurrent object versions after a specific period.

In my scenario, log files are stored in the bucket, and I want to clean them up automatically:

Part 1: After the first 30 days, the files are moved to Glacier.

Part 2: 7 days after that, the files are deleted (expired).

Step 7:

Set the Transition Conditions: To fine-tune your rule, you can define transition conditions. For example, you might want to transition objects to a different storage class only if they have been untouched for a specific number of days or meet certain criteria based on object tags.

Step 8:

Review and Save the Lifecycle Rule: Carefully review the settings of your lifecycle rule to ensure they align with your data management requirements. Once you are satisfied, save the rule to activate it.

Step 9:

Monitor and Modify Lifecycle Rules: After saving the lifecycle rule, you can monitor its performance and make modifications as needed. The Amazon S3 Management Console provides various metrics and logs to track the rule’s execution and evaluate its effectiveness.
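The same rule can also be applied programmatically. The boto3 sketch below is an illustrative example only: the bucket name and prefix are placeholders, and the day counts (transition at 30 days, expire at 90) are chosen so that expiration happens after the transition.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "logs/"},  # only objects under logs/
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"}  # archive after 30 days
                ],
                "Expiration": {"Days": 90},  # delete 90 days after creation
            }
        ]
    },
)

# Confirm the rule was stored
print(s3.get_bucket_lifecycle_configuration(Bucket="my-log-bucket")["Rules"])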

 

 

Conclusion:

Amazon S3 Lifecycle Configuration empowers you to automate data management tasks and optimize storage costs effortlessly. By following the steps outlined in this blog post, you can easily set up lifecycle rules to transition objects between storage classes or define expiration policies. Embracing the power of S3 Lifecycle Configuration allows you to achieve better data organization and improved performance.

Install Docker on a Linux system using the yum package manager

To install Docker on a Linux system using the yum package manager:

In Linux we have various package managers; for our use case, we use the yum or dnf package manager (Amazon Linux on EC2 recommends the yum package manager for easy installation).

We can follow these steps to install Docker on Linux.

Update the package manager by running the following command:

step 1: sudo yum update

step 2: sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

After running the above command, we will get the below Error:


[root@ip-172-31-45-*** home]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

sudo: yum-config-manager: command not found.

To resolve this issue, we have to use this method:

1. sudo yum install yum-utils

Again, try to run the same command:

2. sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

step 3: sudo yum install docker-ce

While installing docker-ce, we will get the below error:


[root@ip-172-31-45-*** home]# sudo yum install docker-ce

Docker CE Stable – x86_64                                                                                                         575  B/s | 397  B     00:00

Errors during downloading metadata for repository ‘docker-ce-stable’:

  • Status code: 404 for https://download.docker.com/linux/centos/2023.0.20230503/x86_64/stable/repodata/repomd.xml (IP: 65.9.55.48)

Error: Failed to download metadata for repo ‘docker-ce-stable’: Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

Ignoring repositories: docker-ce-stable

Last metadata expiration check: 1 day, 20:04:19 ago on Mon May 29 17:08:22 2023.

No match for argument: docker-ce

Error: Unable to find a match: docker-ce


To resolve this issue, we have to follow this approach:

step 1: sudo rm /etc/yum.repos.d/docker-ce.repo

step 2: sudo vi /etc/yum.repos.d/docker-ce.repo

Paste the below content into the file:

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg


 

step 1: sudo yum update

step 2: sudo yum install docker-ce

step 3: sudo systemctl start docker

step 4: sudo systemctl enable docker

step 5: sudo systemctl status docker


 

Finally, we can see that Docker was installed successfully!! Happy learning!!

List of Standard Ports used for AWS and RDS

Here’s a list of standard ports you should see at least once. You don’t need to memorize them, but you should be able to differentiate between an important application port (HTTPS – port 443) and a database port (PostgreSQL – port 5432).

Important ports:

FTP: 21

SSH: 22

SFTP: 22 (same as SSH)

HTTP: 80

HTTPS: 443

RDS database ports:

PostgreSQL: 5432

MySQL: 3306

Oracle RDS: 1521

MSSQL Server: 1433

MariaDB: 3306 (same as MySQL)

Aurora: 5432 (if PostgreSQL compatible) or 3306 (if MySQL compatible)
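As a quick sanity check when a security group or firewall rule is in question, a small Python snippet can test whether one of these ports is reachable; the hostnames below are placeholders.

import socket

def port_is_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Examples: an RDS PostgreSQL endpoint (placeholder hostname) and a public HTTPS site
print(port_is_open("mydb.abc123.us-east-1.rds.amazonaws.com", 5432))
print(port_is_open("example.com", 443))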

 
