Sanghapal Gadpayale, Author at Perficient Blogs
https://blogs.perficient.com/author/sgadpayale/

RDS Migration: AWS-Managed to CMK Encryption
https://blogs.perficient.com/2025/03/04/rds-migration-aws-managed-to-cmk-encryption/
Tue, 04 Mar 2025

As part of security and compliance best practices, it is essential to enhance data protection by transitioning from AWS-managed encryption keys to Customer Managed Keys (CMK).

Business Requirement

An RDS instance's encryption cannot be changed directly from an AWS-managed key to a Customer Managed Key (CMK) during migration or restoration.

Instead, a snapshot of the database must be created, copied with CMK encryption, and restored, ensuring a secure and efficient transition while minimizing downtime. This document provides a streamlined approach that saves time and keeps the process aligned with best practices.

Fig: RDS Snapshot Encrypted with AWS-Managed KMS Key

 

Objective

This document aims to provide a structured process for creating a database snapshot, encrypting it with a new CMK, and restoring it while maintaining the original database configurations. This ensures minimal disruption to operations while strengthening data security.

  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

 

Prerequisites

Before proceeding with the snapshot and restoration process, ensure the following prerequisites are met:

  1. AWS Access: You must have the IAM permissions to create, copy, and restore RDS snapshots.
  2. AWS KMS Key: Ensure you have a Customer-Managed Key (CMK) available in the AWS Key Management Service (KMS) for encryption.
  3. Database Availability: Verify that the existing database is healthy enough to take an accurate snapshot.
  4. Storage Considerations: Ensure sufficient storage is available to accommodate the snapshot and the restored instance.
  5. Networking Configurations: Ensure appropriate security groups, subnet groups, and VPC settings are in place.
  6. Backup Strategy: Have a backup plan in case of any failure during the process.

Configuration Overview

Step 1: Take a Snapshot of the Existing Database

  1. Log in to the AWS console with your credentials.
  2. Navigate to the RDS section where you manage database instances.
  3. Select the existing database for which you want to create the snapshot.
  4. Click on the Create Snapshot button.
  5. Provide a name and description for the snapshot, if necessary.
  6. Click Create Snapshot to initiate the snapshot creation process.
  7. Wait for the snapshot creation to complete before proceeding to the next step.
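If you prefer to script this step, the snapshot can also be created with the AWS CLI. A minimal sketch, where the instance and snapshot identifiers are illustrative:

# Create a manual snapshot of the source RDS instance (identifiers are illustrative)
aws rds create-db-snapshot \
  --db-instance-identifier my-db-instance \
  --db-snapshot-identifier my-db-snapshot

# Block until the snapshot is available before copying it
aws rds wait db-snapshot-available \
  --db-snapshot-identifier my-db-snapshot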


Step 2: Copy Snapshot with New Encryption Keys

  1. Navigate to the section where your snapshots are stored.
  2. Locate the newly created snapshot in the list of available snapshots.
  3. Select the snapshot and click the Copy Snapshot option.
  4. In the encryption settings, choose New Encryption Key (this will require selecting a new Customer Managed Key (CMK)).
  5. Follow the prompts to copy the snapshot with the new encryption key. Click Next to continue.
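The same copy operation can be driven from the AWS CLI. A minimal sketch, where the snapshot names and the KMS key ARN are illustrative:

# Copy the snapshot and re-encrypt it with the Customer Managed Key (CMK)
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier my-db-snapshot \
  --target-db-snapshot-identifier my-db-snapshot-cmk \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab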


Step 3: Navigate to the Newly Created Snapshot and Restore It

  1. Once the new snapshot is successfully created, navigate to the list of available snapshots.
  2. Locate the newly created snapshot.
  3. Select the snapshot and choose the Restore or Action → Restore option.


 

Step 4: Fill in the Details to Match the Old Database

  1. When prompted to restore the snapshot, fill in the details using the same configuration as the old database, including instance size, database configuration, networking details, and storage options.
  2. Ensure all configurations match the old setup to maintain continuity.

Step 5: Create the Restored Database

  1. After filling in the necessary details, click Create to restore the snapshot to a new instance.
  2. Wait for the restore process to complete.
  3. Verify that the new database has been restored successfully.
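For reference, the restore can also be scripted. A minimal CLI sketch, where the identifiers, instance class, and networking values are illustrative and should mirror the old instance's configuration:

# Restore a new instance from the CMK-encrypted snapshot, reusing the old instance's settings
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-db-instance-cmk \
  --db-snapshot-identifier my-db-snapshot-cmk \
  --db-instance-class db.t3.medium \
  --db-subnet-group-name my-db-subnet-group \
  --vpc-security-group-ids sg-0123456789abcdef0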


 

Best Practices for RDS Encryption

  • Enable automated backups and validate snapshots.
  • Secure encryption keys and monitor storage costs.
  • Test restored databases before switching traffic.
  • Ensure security groups and CloudWatch monitoring are set up.

Following these practices ensures a secure and efficient RDS snapshot process.

 

Conclusion

Following these steps ensures a secure, efficient, and smooth process for taking, encrypting, and restoring RDS snapshots in AWS. Implementing best practices such as automated backups, encryption key management, and proactive monitoring can enhance data security and operational resilience. Proper planning and validation at each step will minimize risks and help maintain business continuity.

Windows Password Recovery with AWS SSM
https://blogs.perficient.com/2025/02/25/windows-password-recovery-with-aws-ssm/
Wed, 26 Feb 2025

AWS Systems Manager (SSM) streamlines the management of Windows instances in AWS. If you’ve ever forgotten the password for your Windows EC2 instance, SSM offers a secure and efficient way to reset it without additional tools or manual intervention.

Objective & Business Requirement

In a production environment, losing access to a Windows EC2 instance due to an unknown or non-working password can cause significant downtime. Instead of taking a backup, creating a new instance, and reconfiguring the environment—which is time-consuming and impacts business operations—we leverage AWS Systems Manager (SSM) to efficiently recover access without disruption.

  • Recovery Process
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

Prerequisites

Before you start, ensure the following prerequisites are met:

  1. SSM Agent Installed: The SSM agent must be installed and running on the Windows instance. AWS provides pre-configured AMIs with the agent installed.
  2. IAM Role Attached: Attach an IAM role to your instance with the necessary permissions. The policy should include:
    • AmazonSSMManagedInstanceCore
    • AmazonSSMFullAccess (or custom permissions to allow session management and run commands).
  3. Instance Managed by SSM: The instance must be registered as a managed instance in Systems Manager.
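If you work from the CLI, you can confirm the instance is managed by SSM and open a session without RDP. A minimal sketch, where the instance ID is a placeholder; starting a session from the CLI also requires the Session Manager plugin mentioned in the troubleshooting section below:

# Confirm the instance is registered as a managed instance in Systems Manager
aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=i-0123456789abcdef0"

# Start an interactive session (no SSH/RDP needed)
aws ssm start-session --target i-0123456789abcdef0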

Configuration Overview

Follow this procedure if all you need is a PowerShell prompt on the target instance.

1. Log in to the AWS Management Console

  • Navigate to the EC2 service in the AWS Management Console.
  • Open the instance in the AWS console & click Connect.


  • This opens a PowerShell session with “ssm-user”.


2. Verify the Active Users

Run Commands to Reset the Password

With the session active, follow these steps to reset the password:

  • Run the following PowerShell command to list the local users: get-localuser


  • Identify the username for which you need to reset the password.
  • Reset the password using the following command, replacing <username> with the actual username and <password> with the new password:

net user <username> <password>
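If an interactive session is not convenient, the same reset can be pushed through SSM Run Command. A minimal sketch, where the instance ID, username, and password are placeholders:

# Reset the local Windows password remotely via the AWS-RunPowerShellScript document
aws ssm send-command \
  --instance-ids "i-0123456789abcdef0" \
  --document-name "AWS-RunPowerShellScript" \
  --parameters 'commands=["net user Administrator N3wP@ssw0rd!"]'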

3. Validate the New Password

  • Use Remote Desktop Protocol (RDP) to log into the Windows instance using the updated credentials.
  • To open an RDP connection to the instance in your browser, follow this procedure.
  • Open the instance in the AWS console & click Connect:
  • Switch to the “RDP client” tab & use Fleet Manager:


  • You should now be able to access the server using the RDP client.


 

Best Practices

  1. Strong Password Policy: Ensure the new password adheres to your organization’s password policy for security.
  2. Audit Logs: Use AWS CloudTrail to monitor who initiated the SSM session and track changes made.
  3. Restrict Access: Limit who can access SSM and manage your instances by defining strict IAM policies.

Troubleshooting Tips for Password Recovery

  • SSM Agent Issues: If the instance isn’t listed in SSM, verify that the SSM agent is installed and running.
  • IAM Role Misconfigurations: Ensure the IAM role attached to the instance has the correct permissions.
  • Session Manager Setup: If using the CLI, confirm that the Session Manager plugin is installed and correctly configured on your local machine.

 

Conclusion

AWS Systems Manager is a powerful tool that simplifies Windows password recovery and enhances the overall management and security of your instances. By leveraging SSM, you can avoid downtime, maintain access to critical instances, and adhere to AWS best practices for operational efficiency.

 

Migration of DNS Hosted Zones in AWS
https://blogs.perficient.com/2024/12/31/migration-of-dns-hosted-zones-in-aws/
Tue, 31 Dec 2024

Transferring Route 53 hosted zone records between AWS accounts using the CLI involves exporting the records from one account and then importing them to another. Here’s a step-by-step guide:

Migration of DNS Hosted Zones in AWS

The primary objective of this process is to migrate Route 53 hosted zone records seamlessly between AWS accounts while ensuring minimal disruption to DNS functionality. This involves securely transferring DNS records, preserving their integrity, maintaining availability, and ensuring linked AWS resources remain accessible. Additionally, cross-account DNS access may be implemented as needed to meet business requirements.


Objectives:

  • Migration Process Overview
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

 

Prerequisites:

  • Account Permissions: Ensure you have AmazonRoute53FullAccess permissions in both source and destination accounts. For domain transfers, additional permissions (TransferDomains, DisableDomainTransferLock, etc.) are required.
  • Export Tooling: Use the AWS CLI or SDK for listing and exporting DNS records, as Route 53 does not have a built-in export feature.
  • Destination Hosted Zone: Create a hosted zone in the destination account with the same domain name as the original. Note the new hosted zone ID for use in subsequent steps.
  • AWS Resource Dependencies: Identify resources tied to DNS records (such as EC2 instances or ELBs) and ensure these are accessible or re-created in the destination account if needed.

 

Configuration Overview:

1. Create an EC2 Instance and Download cli53 Using the Commands Below:

  • cli53 will be used to list the DNS records in the source account and save them to a file. Download it first:

wget https://github.com/barnybug/cli53/releases/download/0.8.16/cli53-linux-amd64

Note: A local Linux machine can also be used, but it requires the cli53 binary and configured AWS credentials.

 

  • Move cli53 to a directory on the PATH and make it executable, as sketched below.
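A minimal sketch of those commands; the install path is illustrative:

# Rename the downloaded binary, move it onto the PATH, and make it executable
mv cli53-linux-amd64 cli53
sudo mv cli53 /usr/local/bin/
sudo chmod +x /usr/local/bin/cli53

# Confirm the installation
cli53 --version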


2. Create Hosted Zone in Destination Account:

  • In the destination account, create a new hosted zone with the same domain name using the CLI or the console:
    • Take note of the new hosted zone ID.
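A minimal CLI sketch for this step, assuming the domain is domain.com (matching the exported file name used later):

# Create the hosted zone in the destination account; the caller reference must be unique
aws route53 create-hosted-zone \
  --name domain.com \
  --caller-reference "migration-$(date +%s)"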

3. Export DNS Records from Existing Hosted Zone:

  • Export the records using cli53 on the EC2 instance with the command below, then remove the NS and SOA records from the resulting file, as the new hosted zone will generate these by default.
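A minimal sketch of the export, run with source-account credentials and assuming the zone is domain.com:

# Export the source hosted zone to a BIND-style zone file
cli53 export domain.com > domain.com.txt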


Note: microsoft.com was used here as a dummy hosted zone for demonstration.

4. Import DNS Records to Destination Hosted Zone:

  • Use the exported file to import the records into the new hosted zone; copy all of the records from the domain.com.txt file.


  • Log in to the destination AWS account, open Route 53, and import the records copied from the exported file.
  • Save the changes and verify the records.
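If you prefer to stay on the command line, cli53 can perform the import as well. A minimal sketch, run with the destination account's credentials and the same illustrative zone name:

# Import the edited zone file into the new hosted zone in the destination account
cli53 import --file domain.com.txt domain.com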


5. Test DNS Records:

  • Verify DNS record functionality by querying records in the new hosted zone and ensuring that all services resolve correctly.
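A couple of quick checks you might run; the record name and name server below are placeholders:

# Query a record directly against one of the new hosted zone's name servers
dig www.domain.com @ns-123.awsdns-45.com

# Query through your default resolver once the NS delegation has been updated
nslookup www.domain.com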

 

Best practices:

When migrating Route 53 hosted zones between AWS accounts, applying best practices helps ensure a smooth transition with minimal disruption. Here are key best practices for a successful Route 53 hosted zone migration:

1. Plan and Document the Migration Process

  • Detailed Planning: Outline each step of the migration process, including DNS record export, transfer, and import, as well as any required changes in the destination account.
  • Documentation: Document all DNS records, configurations, and dependencies before starting the migration. This helps in troubleshooting and serves as a backup.

2. Schedule Migration During Low-Traffic Periods

  • Reduce Impact: Perform the migration during off-peak hours to minimize potential disruption, especially if you need to update NS records or other critical DNS configurations.

3. Test in a Staging Environment

  • Dry Run: Before migrating a production hosted zone, perform a test migration in a staging environment. This helps identify potential issues and ensures that your migration plan is sound.
  • Verify Configurations: Ensure that the DNS records resolve correctly and that applications dependent on these records function as expected.

4. Use Route 53 Resolver for Multi-Account Setups

  • Centralized DNS Management: For environments with multiple AWS accounts, consider using Route 53 Resolver endpoints and sharing resolver rules through AWS Resource Access Manager (RAM). This enables efficient cross-account DNS resolution without duplicating hosted zones across accounts.

5. Avoid Overwriting NS and SOA Records

  • Use Default NS and SOA: Route 53 automatically creates NS and SOA records when you create a hosted zone. Retain these default records in the destination account, as they are linked to the new hosted zone’s configuration and AWS infrastructure.

6. Update Resource Permissions and Dependencies

  • Resource Links: DNS records may point to AWS resources like load balancers or S3 buckets. Ensure that these resources are accessible from the new account and adjust permissions if necessary.
  • Cross-Account Access: If resources remain in the source account, establish cross-account permissions to ensure continued access.

7. Validate DNS Records Post-Migration

  • DNS Resolution Testing: Test the new hosted zone’s DNS records using tools like dig or nslookup to confirm they are resolving correctly. Check application connectivity to confirm that all dependent services are operational.
  • TTL Considerations: Set a low TTL (Time to Live) on records before migration. This speeds up DNS propagation once the migration is complete, reducing the time it takes for changes to propagate.

8. Consider Security and Access Control

  • Secure Access: Ensure that only authorized personnel have access to modify hosted zones during the migration.

9. Establish a Rollback Plan

  • Rollback Strategy: Plan for a rollback if any issues arise. Keep the original hosted zone active until the new configuration is fully tested and validated.
  • Backup Data: Maintain a backup of all records and configurations so you can revert to the original settings if needed.

Conclusion

Migrating a Route 53 hosted zone between AWS accounts involves careful planning, especially to ensure DNS records are exported and imported correctly. After migrating, testing is crucial to confirm that DNS resolution works as expected. Cross-account setups may require additional configuration, such as Route 53 Resolver rules, to ensure seamless DNS functionality across environments.

Analysis Performance of Applications Using AWS DLT service
https://blogs.perficient.com/2024/08/08/analysis-performance-of-applications-using-aws-dlt-service/
Thu, 08 Aug 2024

What is AWS DLT?

Distributed Load Testing on AWS helps you automate the testing of your software applications at scale and at load to identify bottlenecks before you release your application. This solution creates and simulates thousands of connected users generating transactional records at a constant pace without the need to provision servers.

For more info, please refer to AWS documentation: https://docs.aws.amazon.com/solutions/latest/distributed-load-testing-on-aws/solution-overview.html

Fig: Distributed Load Testing on AWS architecture

Learn more about DLT and AWS architecture.

Objectives

  • Prerequisites
  • Configuration Overview
  • Conclusion

Prerequisites

Users need access to Amazon S3, AWS CloudFormation, Amazon Elastic Container Service (ECS), and Amazon CloudWatch.


Configuration Overview

How to Launch the Configuration Using a CloudFormation Template

AWS CloudFormation is used to automate the deployment of Distributed Load Testing on AWS. The following AWS CloudFormation template is included, which you can download before deployment. Launch the solution and its components using this template.

The default configuration includes Amazon Elastic Container Service (Amazon ECS) on AWS Fargate, Amazon Virtual Private Cloud (Amazon VPC), AWS Lambda, Amazon Simple Storage Service (Amazon S3), AWS Step Functions, Amazon DynamoDB, Amazon CloudWatch Logs, Amazon API Gateway, Amazon Cognito, AWS Identity and Access Management (IAM), and Amazon CloudFront, but it is also possible to customize the template to meet your network needs.

Reference: https://docs.aws.amazon.com/solutions/latest/distributed-load-testing-on-aws/deployment.html

Step 1: Create a CloudFormation stack for the DLT web console so users can access the dashboard and perform load tests.

To create the CloudFormation stack, refer to the reference link above; this is a one-time activity.

As soon as the CloudFormation stack is launched, all the components shown in the architecture diagram will be created.

Additionally, the DLT dashboard username and password will be provided to the user once the stack is completed.
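For teams that script their deployments, the stack can also be launched from the AWS CLI. A minimal sketch, where the stack name, template URL, and parameter names are illustrative; use the template link and parameters from the deployment guide referenced above:

# Launch the Distributed Load Testing solution stack (values are illustrative)
aws cloudformation create-stack \
  --stack-name distributed-load-testing \
  --template-url https://<solution-template-bucket>/distributed-load-testing-on-aws.template \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=AdminName,ParameterValue=admin \
               ParameterKey=AdminEmail,ParameterValue=admin@example.com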

Step 2: The AWS DLT dashboard will appear once you log in with the received credentials


 

Step 3: The dashboard has three sections: Dashboard, Create Test, and Manage.

To create a load test, click on Create Test.


 

Once you upload the JMX or zip files and click Run Now, you will see the load test details.


 

Step 4: To verify if the load test is working properly, there are options to click on the Amazon ECS console and the Amazon CloudWatch metrics dashboard.

  • Amazon ECS console: It is useful for monitoring test results and failures via containers. Please refer to the information below.

 


 

In the ECS console, the user can verify all the running test scenarios in containers.

  • Amazon CloudWatch metrics dashboard: To verify the test logs, you need to log in to Amazon CloudWatch.

 


 

Step 5: Once all the tests are successfully executed, you can see the results on the dashboard.


 

Step 6: All the results are stored in an S3 bucket after the completion or failure of tests, and you can find all the details there.
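You can also pull the stored results from the command line. A minimal sketch; the bucket name and object key are illustrative, since the actual bucket is created by the CloudFormation stack:

# List the result files produced by completed or failed tests
aws s3 ls s3://<dlt-results-bucket>/ --recursive

# Download a specific result locally for inspection (key is illustrative)
aws s3 cp s3://<dlt-results-bucket>/results/test-id/results.json .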


 

Note: You can upload only JMX files, not standalone CSV files; if your test plan depends on CSV data, bundle it with the JMX file in a zip archive. You can run only one test at a time, as the solution runs tests in a loop.

DLT on AWS is a powerful approach that combines scalability, flexibility, and cost-efficiency, making it an ideal choice for developers and businesses looking to ensure the performance and reliability of their applications under varying loads.

 

AWS Cross-Account Best Practices
https://blogs.perficient.com/2024/08/08/aws-cross-account-best-practices/
Thu, 08 Aug 2024

Implementing AWS cross-account access is crucial to managing a secure and scalable cloud environment. This setup simplifies the management process, enhances security by adhering to the principle of least privilege, streamlines operations by reducing the need to switch accounts, and facilitates compliance and auditing by centralizing access and control.

Imagine GlobalTech’s website is hosted on EC2 instances in Account A. The company’s DNS management team, responsible for updating and managing DNS records, operates within Account B using Amazon Route 53. By configuring cross-account access, the DNS team can update DNS records to reflect changes in the IP addresses of the EC2 instances or manage traffic routing without needing direct access to them. This centralization improves efficiency, enhances security, and ensures that DNS configurations are managed consistently across the company’s infrastructure.

Let’s understand this using the examples below:

Scenario: Cross-Account Access for Route 53 DNS Management

Business Context

Your company, “GlobalTech,” has a multi-account AWS environment managed through AWS Organizations. The company uses:

  • Account A: Hosting EC2 instances for various applications.
  • Account B: Managing DNS records using Amazon Route 53.

You want to configure cross-account access so that Route 53 in Account B can manage DNS records for EC2 instances running in Account A. This allows your DNS management team to handle DNS configurations centrally without requiring direct access to EC2 instances.

  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

Prerequisites

  • AWS Organizations: GlobalTech uses AWS Organizations with an organizational unit (OU) structure.
  • IAM Roles and Policies: Proper IAM roles and policies must be configured to allow cross-account access.
  • Route 53 Hosted Zone: Hosted zones are set up in Account B.

Configuration Overview

Steps to Implement the Scenario

  1. Create IAM Role in Account A (EC2 Instances Account):
    • Log in to the AWS Management Console for Account A.
    • Navigate to IAM and click on Roles, then Create role.
    • Select Trusted Entity:
      • Choose Another AWS account.
      • Enter the Account ID of Account B (Route 53 account).
  2. Add Permissions:
    • Attach the following policies to the role:
      • AmazonRoute53FullAccess: Grants full access to Route 53.
      • A custom policy for specific permissions, if needed.
  3. Role Name:
    • Name the role Route53CrossAccountRole.
  4. Create Role.

Update Trust Relationship in Account A Role

  • Navigate to the Trust relationships tab for the Route53CrossAccountRole.
  • Edit the trust policy to allow Account B to assume the role:
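The original post showed the trust policy in a screenshot; a minimal sketch of an equivalent policy and the CLI call to apply it is below, where the account ID is a placeholder:

# Allow principals in Account B (111122223333 is a placeholder) to assume the role in Account A
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Apply the trust policy to the role created in Account A
aws iam update-assume-role-policy \
  --role-name Route53CrossAccountRole \
  --policy-document file://trust-policy.json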


Create IAM Role in Account B (Route 53 Account)

  1. Log in to AWS Management Console for Account B.
  2. Navigate to IAM and click on Roles, then Create role.
  3. Select Trusted Entity:
    • Choose Another AWS account.
    • Enter the Account ID of Account A (EC2 account).
  4. Add Permissions:
    • Attach policies needed for Route 53 management (if additional permissions are required).
  5. Role Name:
    • Name the role EC2ManagementRole.
  6. Create Role.

Configure Route 53 Hosted Zone in Account B

  1. Log in to AWS Management Console for Account B.
  2. Navigate to Route 53 and create or select the existing hosted zone.
  3. Add DNS records pointing to the public IP addresses of the EC2 instances in Account A.

Assign IAM Role to EC2 Instances in Account A

  1. Log in to AWS Management Console for Account A.
  2. Navigate to EC2, and select the instances you want to associate with the role.
  3. Click on Actions > Security > Modify IAM Role.
  4. Select the Route53CrossAccountRole and save changes.

Automate DNS Updates (Optional)

  • Use AWS Lambda or a similar service to automatically update DNS records in Route 53 when EC2 instances are launched or terminated in Account A.
  • Ensure the Lambda function assumes the Route53CrossAccountRole to make necessary API calls to Route 53.

Testing and Verification

  1. Verify Role Assumption: Ensure that Account B can assume the Route53CrossAccountRole in Account A.
  2. Update DNS Records: Try updating DNS records in Route 53 for EC2 instances in Account A.
  3. Check DNS Resolution: Verify that the DNS records are correctly resolving to the EC2 instances in Account A.
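A minimal sketch of how these checks might look from the CLI; the role ARN, hosted zone ID, record name, and IP address are all placeholders:

# 1. Verify role assumption and export the temporary credentials
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::444455556666:role/Route53CrossAccountRole \
  --role-session-name route53-dns-test \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

# 2. Upsert a record pointing at an EC2 instance's public IP
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"app.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'

# 3. Check DNS resolution
dig app.example.com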

Multiple Scenarios

  • Use Case: Developers in Account A need temporary access to resources in Account B, such as an S3 bucket.
  • Use Case: The finance team in Account A needs access to billing information for multiple AWS accounts
  • Use Case: Enable network communication between VPCs in different AWS accounts

Best Practices

To ensure secure and efficient cross-account access management in AWS, follow these best practices:

1. Use AWS Organizations

  • Centralized Management: Use AWS Organizations to manage multiple accounts centrally, allowing for better control and governance.
  • Service Control Policies (SCPs): Apply SCPs to enforce permission boundaries and ensure accounts only have the necessary permissions.

2. Implement IAM Roles

  • Cross-Account Roles: Create IAM roles for cross-account access instead of using root accounts or IAM users.
  • Role Assumption: Set up trust relationships to allow users or services in one account to assume roles in another account using the sts: AssumeRole API.

3. Principle of Least Privilege

  • Minimum Permissions: Grant only the permissions necessary for users, roles, and services to perform their tasks.
  • Fine-Grained Policies: Use detailed IAM policies to control access at a granular level.

4. Enable Multi-Factor Authentication (MFA)

  • MFA for Sensitive Operations: To add an extra layer of security, require MFA for roles and users performing sensitive operations.
  • MFA Enforcement: Use IAM policies to enforce MFA for specific actions or API calls.

5. Centralize Logging and Monitoring

  • AWS CloudTrail: Enable CloudTrail in all accounts to capture and log all API calls and user activities.
  • Centralized Logging: Store CloudTrail logs in a centralized S3 bucket for easier analysis and monitoring.
  • Amazon GuardDuty: Enable GuardDuty for continuous threat detection and monitoring.

6. Establish a Secure Network Architecture

  • VPC Peering and Transit Gateway: Use VPC peering or AWS Transit Gateway to enable secure and efficient network connectivity between accounts.

7. Regular Security Audits

  • Compliance Checks: Perform regular security audits and compliance checks to ensure cross-account access configurations meet security and compliance requirements.
  • Security Best Practices: Regularly update your knowledge of AWS security features and follow AWS security best practices.

8. Utilize AWS Trusted Advisor

  • Security Checks: Use AWS Trusted Advisor to perform security checks and receive recommendations for improving your security posture.
  • Review Recommendations: Regularly review and act on the recommendations provided by AWS Trusted Advisor

To ensure secure and efficient cross-account access management in AWS, implement AWS Organizations for centralized management, use IAM roles for granting cross-account access, enforce the principle of least privilege, enable MFA for sensitive operations, centralize logging and monitoring with CloudTrail and GuardDuty, and utilize AWS Resource Access Manager (RAM) for secure resource sharing. Automate account management, establish a secure network architecture, perform regular security audits, and adopt a tagging strategy for resource organization. Following these best practices will enhance security, streamline management, and maintain compliance in your AWS environment.

Create and Build Packer Template & Images for AWS
https://blogs.perficient.com/2024/08/08/create-and-build-packer-template-images-for-aws/
Thu, 08 Aug 2024

As you may already know, there are several AMIs you can use in the Amazon EC2 environment. However, these often lack the specific software and configuration you need. If, like me, you prefer to use third-party tools like Docker or Ansible to apply configuration changes, and you want pre-built operating system templates that already include your software to keep things productive, then you’ll probably want to consider building your own AMI.

In Packer language, the configuration file used to describe what image we want to build and how is referred to as a template. A template’s format is basic JSON. In this piece, I’ll describe how to achieve that per the diagram below.

Fig: Packer architecture diagram for AWS

 

What is Packer?

Packer is HashiCorp’s open-source tool for creating machine images from source configuration. You can configure Packer images with an operating system and software for your specific use case.

Terraform configuration for a compute instance can use a Packer image to provision your instance without manual configuration.

Objectives

  • Prerequisites
  • Install Packer
  • Select the Amazon Machine Image from the AWS Console
  • Overview of Packer
  • Create a JSON file for use with Packer.
  • Run Packer to create our AMI
  • Conclusion

Prerequisites

To use Packer with Amazon, you must first obtain your AWS Access Key and AWS Secret Key. These can be created in the IAM section of your AWS account. Please refer to the link below for the required IAM permissions: https://www.packer.io/plugins/builders/amazon


Note: The IAM permissions listed at the link above are required.

Install Packer

Install Packer using the instructions below:

  1. Download Packer: https://www.packer.io/downloads.html
  2. Unzip the downloaded file and extract it to any folder.
  3. In this example, Packer is installed in C:\packer.
  4. Add this path to the $PATH variable.
  5. To validate that it is working, type: packer -version (this guide uses Packer 1.8.0).

To learn more about Packer, please refer to this document: https://learn.hashicorp.com/packer

Select the Amazon Machine Image from the AWS Console

Before creating the Packer template file, we need to decide on the source AMI. AMI IDs are region-specific: an AMI that works in the us-east-1 region may not exist in the us-west-1 region.

Make sure you find the AMI on the AWS Marketplace and click Accept the Software Terms. Otherwise, you won’t be able to use the AMI, and Packer will exit with an error. So, we need to validate the AMI before running the Packer template.

To select the base image, go to the AWS console or use the AWS CLI. Log in to the console, open EC2, and select AMIs in the left navigation.


For the purposes of this article, I have selected an Ubuntu AMI.
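If you'd rather look the AMI up from the CLI, a sketch like the following finds a recent Ubuntu image; the name filter and region are assumptions to adjust for your case (099720109477 is Canonical's owner ID):

# Find the most recent Ubuntu 22.04 AMI ID in us-east-1
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text \
  --region us-east-1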

Overview of Packer

In this section, we provide an overview of Packer: how its different sections are used and how we use them in the script.

A Packer template is divided into four sections: “builders“, “provisioners“, “post-processors” and “variables“. We will go through them step by step and run the final script at the end.

Builders – Building Your First AMI

Builders are Packer components that can generate a machine image for a single platform. Builders read certain configurations and use them to run and build machine images. A builder is called as part of the build to generate the actual resulting images. For more details on the builders, click here.
Here we specify AWS environment access credentials along with various parameters of our AWS environment, such as region, source AMI, instance type, and ssh_username.

We have saved the below file with the name packer.json (only used for the builders section)
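The file itself appeared as a screenshot in the original post; a minimal builders-only sketch is shown below, where the region and source AMI are placeholders and credentials are assumed to come from the environment or AWS configuration rather than being hard-coded:

# Write a builders-only packer.json (AMI ID and region are placeholders);
# AWS credentials are picked up from the environment or ~/.aws configuration
cat > packer.json <<'EOF'
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "my-test-ami"
    }
  ]
}
EOF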


Run the packer command – packer build packer.json


Once it’s done, you’ll find the image in the AMI section with the name my-test-ami, but it is an unmodified copy: we have simply created an AMI from the EC2 instance without installing anything on it.

Provisioners – Configuring Provisioners

Provisioners use built-in and third-party software to install and configure the machine image after booting. Provisioners prepare the system for use, so common use cases for provisioners include installing packages, patching the kernel, creating users, and downloading application code. For details, click here

Next, we need to write a simple bash script to install the appropriate software into our new Packer-generated AMI. For the purposes of this article, we will create a script that installs Nginx and reference it from the provisioners section.


 

With only the builders section, running packer build packer.json creates an unmodified AMI; once the provisioners section is added to packer.json, the build produces an AMI with the Nginx software installed.

Note: In the final step, the provisioning commands are stored in packages.sh.

Post-processors

The manifest post-processor stores the list of all artifacts that Packer produces during the build in JSON format. For more details, please click here.

Create a JSON file for use with Packer

In this section, we use three files: packer.json (builders, provisioners, and post-processors), packages.sh (the provisioning script), and manifest.json (the post-processor output).

Variables

In the variables section, we create variables such as ami_name, source_ami, ami_source_owner, ssh_username, aws_access_key, and aws_secret_key that are required to create the custom AMI, and we reference those variables in the builders section. Please refer to the details below.

“ami_name” – the name given to the custom AMI once it is created.

“ami_source_owner” – the AWS account that owns the source image (e.g., the official AMI owner).

“ssh_username” – the username defined by the AWS standard for the AMI, e.g., ec2-user for Red Hat, ubuntu for Ubuntu, etc.

“aws_access_key” and “aws_secret_key” – the credentials of the user running the build. For example, export the credentials in the terminal so they stay secure and are never hard-coded in the template.


Please refer to the Packer variables documentation for more details.


 

As explained above, you now know how Packer works with different sections like builders and provisioners. We also created variables and used them in the builders’ section. Now, we move on to the final script to create the custom AMI.

Run Packer to create our AMI – Final script packer.json

Step 1: Create a separate packages.sh file and add the lines below to install Nginx.

#!/usr/bin/env bash
# Refresh the package index and install Nginx
sudo apt update
sudo apt install nginx -y


Step 2: Create an empty manifest.json file to store the final result.

Step 3: Create the file packer.json and add the lines below to create the custom packer AMI.
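The template appeared as screenshots in the original post; the sketch below reconstructs a plausible final packer.json, assuming the source AMI, region, and variable defaults shown are placeholders and that credentials come from environment variables:

# Write the final packer.json (AMI ID, region, and defaults are illustrative)
cat > packer.json <<'EOF'
{
  "variables": {
    "ami_name": "my-custom-ami",
    "source_ami": "ami-0123456789abcdef0",
    "ssh_username": "ubuntu",
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-east-1",
      "source_ami": "{{user `source_ami`}}",
      "instance_type": "t2.micro",
      "ssh_username": "{{user `ssh_username`}}",
      "ami_name": "{{user `ami_name`}}-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "packages.sh"
    }
  ],
  "post-processors": [
    {
      "type": "manifest",
      "output": "manifest.json",
      "strip_path": true
    }
  ]
}
EOF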


 

Step 4: The final folder structure contains packer.json, packages.sh, and manifest.json.


 

Step 5: Now that our JSON configuration file and bash provisioning script are ready, we can run Packer to create our new AMI by running the command below:

packer build packer.json

When you run this, assuming your credentials are correct, Packer starts the build and streams its progress to the terminal.


Step 6: While executing step 5, log in to the AWS console and click EC2.

Next, you’ll see the commands executed inside the instance as part of the provisioning script and their output.

In the end, assuming everything goes as planned, Packer reports the ID of the newly created AMI.


As you can see in the build output, we have a new AMI! It’s that simple.


 

Step 7: Finally, the resulting AMI ID is recorded in the manifest.json file.


Conclusion:

Packer is a great tool that really simplifies image building and deployment in large and small virtualized/container environments. It can be used with many other virtualization and cloud platforms, including, but not limited to, GCP, AWS, and Azure.

 

EC2 Instance Recovery: Fixing Block Device Issues via /etc/fstab and Rescue Instance
https://blogs.perficient.com/2024/02/21/ec2-instance-recovery-fixing-block-device-issues-via-etc-fstab-and-rescue-instance/
Wed, 21 Feb 2024

In this blog post, I will share my firsthand experience tackling and resolving a critical issue with an inaccessible and failed EC2 instance. I’ll provide a detailed account of the problem, its impact, and the step-by-step approach I took to address it. Additionally, I’ll share valuable insights and lessons learned to help prevent similar issues in the future.

EC2 Instance Recovery


An EC2 instance faced Instance Status Check failures and was inaccessible through SSM because the boot process had dropped into emergency mode. After analyzing the OS boot log, the issue was traced to a mount point failure caused by a malformed or missing secondary block device. There are several steps you can take to troubleshoot and resolve the issue.

Benefits of EC2 Instance Recovery

  • Quick Diagnosis and Resolution
  • Effective Mitigation
  • Accurate Problem Localization
  • Minimal Downtime
  • Restoration of SSM (Systems Manager) Access

Here’s a general guide to help you identify and address the problem:

Step 1: Check Instance Status Checks

  • Go to the AWS Management Console.
  • Navigate to the EC2 dashboard and select “Instances.”
  • Identify the problematic instance and check the status checks.
  • There are two types: “System Status Checks” and “Instance Status Checks.”
  • Look for the specific error messages that may provide insights into the issue.


 

Step 2: Check System Logs

  • Review the system logs for the instance to gather more information on the underlying issue.
  •  Access the AWS EC2 Instance and go to “Action” –> “Monitor and Troubleshoot” to view the logs.


Step 3: Verify IAM Role Permissions

  • Ensure that the IAM role associated with the EC2 instance has the necessary permissions for SSM (System Manager).
  • The role should have the ‘AmazonSSMManagedInstanceCore’ policy attached.
  • If the mentioned policy is not attached, then you need to attach the policy.


 

If the issue is related to a malformed device name in the /etc/fstab file, you can follow the steps below to correct it:

1. Launch a Rescue Instance

  • Launch a new EC2 instance in the same region as your problematic instance. This instance will be used to mount the root volume of the problematic instance.

2. Stop the Problematic Instance

  • Stop the problematic EC2 instance to detach its root volume.

3. Detach the Root Volume from the problematic Instance

  • Go to the AWS Management Console –> Navigate to the EC2 dashboard and select “Volumes.” –> Identify the root volume attached to the problematic instance and detach it.


 

4. Attach the Root Volume to the Rescue Instance

  • Attach the root volume of the problematic instance to the rescue instance. Make a note of the device name it gets attached to (e.g., /dev/xvdf).
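For reference, the detach and attach operations from steps 3 and 4 can also be done from the CLI. A minimal sketch with placeholder volume and instance IDs:

# Detach the root volume from the stopped, problematic instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Attach it to the rescue instance as a secondary device
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0fedcba9876543210 \
  --device /dev/xvdf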


 

5. Access the Rescue Instance

  • Connect to the rescue instance using SSH or other methods.

Mount the Root Volume:

  • Create a directory to mount the root volume. For example: sudo mkdir /mnt/rescue
  • Mount the root volume to the rescue instance: sudo mount /dev/xvdf1 /mnt/rescue
  • Edit the /etc/fstab file: open it for editing with a text editor such as nano or vim: sudo nano /mnt/rescue/etc/fstab

Locate the entry that corresponds to the secondary block device and correct the device name. Ensure that the device name matches the actual device name for the attached volume.
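To find the right entry, it helps to compare what /etc/fstab expects with what is actually attached. A short sketch; the UUID, mount point, and filesystem below are examples, not values from the affected instance:

# List attached block devices and their filesystem UUIDs on the rescue instance
lsblk
sudo blkid

# Example of a corrected /etc/fstab entry using a UUID; the "nofail" option lets the
# instance finish booting even if the secondary volume is missing at boot time
# UUID=1234abcd-56ef-78ab-90cd-ef1234567890  /data  xfs  defaults,nofail  0  2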

Save and Exit:

  • Save the changes to the /etc/fstab file and exit the text editor.
  • Unmount the Root Volume: sudo umount /mnt/rescue
  • Detach the Root Volume from the Rescue Instance

6. Attach the Root Volume back to the Problematic Instance

  • Go back to the AWS Management Console.
  • Attach the root volume back to the problematic instance using the original device name.
  • Start the Problematic Instance: Start the problematic instance and monitor its status checks to ensure it comes online successfully.

This process involves correcting the /etc/fstab file on the root volume by mounting it on a rescue instance. Once corrected, you can reattach the volume to the original instance and start it to check if the issue is resolved. Always exercise caution when performing operations on production instances, and ensure that you have backups or snapshots before making changes.

Conclusion

Resolving EC2 instance status check failures involves a systematic approach to identify and address the underlying issues. Common causes include networking problems, operating system issues, insufficient resources, storage issues, and AMI or instance configuration issues.
