AWS Articles / Blogs / Perficient

A Tool For CDOs to Keep Their Cloud Secure: AWS GuardDuty Is the Saw and Perficient Is the Craftsman (Tue, 18 Nov 2025)

In the rapidly expanding realm of cloud computing, Amazon Web Services (AWS) provides the infrastructure for countless businesses to operate and innovate. As a firm's data, applications, and workloads migrate to the cloud, protecting them from both sophisticated threats and brute-force digital attacks is of paramount importance. This is where Amazon GuardDuty enters as a powerful, vigilant sentinel.

What is Amazon GuardDuty?

At its core, Amazon GuardDuty is a continuous security monitoring service designed to protect your AWS accounts and workloads. The software serves as a 24/7 security guard for your entire AWS environment, not just individual applications, and is constantly scanning for malicious activity and unauthorized behavior.

The software works by analyzing a wide variety of data sources within your firm’s AWS account—including AWS CloudTrail event logs, VPC flow logs, and DNS query logs—using machine learning, threat intelligence feeds, and anomaly detection techniques.

If an external party attempts a brute-force login, a compromised instance communicates with a known malicious IP address, or an unusual API call is made, GuardDuty is there to spot it. When a threat is found, it can be configured to trigger automated actions through services like Amazon EventBridge (formerly CloudWatch Events) and AWS Lambda, as well as alert human administrators to take action.

When a threat is detected, GuardDuty generates a finding with a severity level (high, medium, or low) and a score. The severity and score both help minimize time spent on more routine exceptions while highlighting significant events to your data security team.
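As an illustration, here is a minimal Boto3 sketch that pulls only the high-severity findings for triage. It assumes GuardDuty is already enabled in the account and region (a severity of 7 or above falls in GuardDuty's "high" band):

import boto3

guardduty = boto3.client('guardduty')

# Assumes a single detector exists in this region
detector_id = guardduty.list_detectors()['DetectorIds'][0]

# Fetch only high-severity findings (severity >= 7)
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={'Criterion': {'severity': {'Gte': 7}}},
)['FindingIds']

if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for finding in findings['Findings']:
        print(finding['Severity'], finding['Type'], finding['Title'])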

Why is GuardDuty So Important?

In today’s digital landscape, relying solely on traditional, static security measures is not sufficient. Cybercriminals are constantly evolving their tactics, which is why GuardDuty is an essential component of your AWS security strategy:

  1. Proactive, Intelligent Threat Detection

GuardDuty moves beyond simple rule-based systems. Its use of machine learning allows it to detect anomalies that human security administrators might miss, identifying zero-day threats and subtle changes in behavior that indicate a compromise. It continuously learns and adapts to new threats without requiring manual updates from human security administrators.

  2. Near Real-Time Monitoring and Alerting

Speed is critical in incident response. GuardDuty provides findings in near real-time, delivering detailed security alerts directly to the AWS Management Console, Amazon EventBridge, and AWS Security Hub. This immediate notification allows your firm's security teams to investigate and remediate potential issues quickly, minimizing potential damage, and keeps your firm's management informed.

  3. Broad Protection Across AWS Services

GuardDuty doesn't just watch over your firm's Elastic Compute Cloud ("EC2") instances; it also protects a wide array of AWS services, including:

  • Simple Storage Service (“S3”) Buckets: Detecting potential data exfiltration or policy changes that expose sensitive data.
  • EKS/Kubernetes: Monitoring for threats to your container workloads.  No more running malware or mining bitcoin in your firm’s containers.
  • Databases (Aurora; RDS – MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server; and Redshift): Identifying potential compromise or unauthorized access to data.

Conclusion:

In the cloud, security is a shared responsibility. While AWS manages the security of the cloud infrastructure itself, you are responsible for security in the cloud—protecting your data, accounts, and workloads. Amazon GuardDuty is an indispensable tool in fulfilling that responsibility. It provides an automated, intelligent, and scalable layer of defense that empowers you to stay ahead of malicious actors.

To get started with Amazon GuardDuty, consider contacting Perficient to help enable and configure the service and train your staff. Perficient is an AWS partner and has achieved Premier Tier Services Partner status, the highest tier in the Amazon Web Services (AWS) Partner Network. This elevated status reflects Perficient's expertise, long-term investment, and commitment to delivering customer solutions on AWS.

Besides the firm’s Partner Status, Perficient has demonstrated significant expertise in areas like cloud migration, modernization, and AI-driven solutions, with a large team of AWS-certified professionals.

In addition to these competencies, Perficient has earned specific service delivery designations, such as AWS Glue Service Delivery, and offers Amazon-approved software in the AWS Marketplace.

Our financial services experts continuously monitor the financial services landscape and deliver pragmatic, scalable solutions that meet the required mandate and more. Reach out to Perficient’s Director and Head of Payments Practice Amanda Estiverne-Colas to discover why Perficient has been trusted by 18 of the top 20 banks, 16 of the 20 largest wealth and asset management firms, and 25+ leading payment + card processing companies.

 

Perficient Earns AWS Premier Tier Services Partner Status and Elevates AI Innovation in the Cloud (Mon, 25 Aug 2025)

At Perficient, we don’t just embrace innovation, we engineer it. That’s why we’re proud to share that we’ve achieved Amazon Web Services (AWS) Premier Tier Services Partner status, a milestone that solidifies our position as a leader in delivering transformative AI-first solutions.

This top-tier AWS designation reflects the depth of our technical expertise, the success of our client outcomes, and our commitment to helping enterprises modernize and thrive in a digital world. But what sets us apart isn't just cloud proficiency; it's how we blend AI into every layer of digital transformation.

“We’re thrilled to join an elite group of technology innovators holding the AWS Premier Tier Services Partner status. This achievement is a testament to our strategic commitment to AWS, our partner-to-partner model, and the transformative outcomes we deliver for our clients,” said Santhosh Nair, senior vice president, Perficient. “Together with AWS, we’re building and deploying AI-first solutions at scale with speed and precision. From real-time analytics to AI-first product development, our approach empowers enterprises to innovate faster, personalize customer experiences, and unlock new business value.”

Combining the Power of AWS and AI

Whether it's through intelligent automation, predictive analytics, or generative AI, we help organizations infuse intelligence across their operations using AWS's scalable infrastructure. Our solutions are built to adapt, evolve, and deliver measurable outcomes, from streamlining clinical workflows in healthcare to enhancing customer experiences in financial services.

As an AWS Premier Tier Services Partner, we now gain even more direct access to AWS tools, early service previews, and strategic collaboration opportunities, allowing us to deliver smarter, faster, and more impactful AI-first solutions for our clients.

Unlocking What’s Next

Our talented cloud and AI teams continue to push boundaries, helping clients harness the full potential of cloud and data while solving their toughest challenges with precision and innovation.

Ready to explore what AI and cloud transformation could look like for your business? Let’s talk.

Creating Data Lakehouse using Amazon S3 and Athena (Thu, 31 Jul 2025)

As organizations accumulate massive amounts of structured and unstructured data, the need for flexible, scalable, and cost-effective data architectures becomes more important than ever. The increasing complexity of data environments, along with the demand for real-time insights and seamless integration across platforms, further underscores the need for a robust data architecture. This is where the Data Lakehouse comes into play, combining the best of data lakes and data warehouses. In this blog post, we'll walk through how to build a serverless, pay-per-query Data Lakehouse using Amazon S3 and Amazon Athena.

What Is a Data Lakehouse?

A Data Lakehouse is a modern architecture that blends the flexibility and scalability of data lakes with the structured querying capabilities and performance of data warehouses.

  • Data Lakes (e.g., Amazon S3) allow storing raw, unstructured, semi-structured, or structured data at scale.
  • Data Warehouses (e.g., Redshift, Snowflake) offer fast SQL-based analytics but can be expensive and rigid.

A lakehouse unifies both, enabling:

  • Schema enforcement and governance
  • Fast SQL querying over raw data
  • Simplified architecture and lower cost


Tools We’ll Use

  • Amazon S3: For storing structured or semi-structured data (CSV, JSON, Parquet, etc.)
  • Amazon Athena: For querying that data using standard SQL

This setup is perfect for teams that want low cost, fast setup, and minimal maintenance.

Step 1: Organize Your S3 Bucket

Structure your data in S3 in a way that supports performance:

s3://sample-lakehouse/
└── transactions/
    └── year=2024/
        └── month=04/
            └── data.parquet

Best practices:

  • Use columnar formats like Parquet or ORC
  • Partition by date or region for faster filtering
  • Compress files (e.g., with Snappy or GZIP) to reduce scan costs

Step 2: Create a Table in Athena

You can create an Athena table manually via SQL. Athena uses a built-in data catalog (the AWS Glue Data Catalog) to store table metadata:

CREATE EXTERNAL TABLE IF NOT EXISTS transactions (
    transaction_id STRING,
    customer_id STRING,
    amount DOUBLE,
    transaction_date STRING
)
PARTITIONED BY (year STRING, month STRING)
STORED AS PARQUET
LOCATION 's3://sample-lakehouse/transactions/';

Then run:

MSCK REPAIR TABLE transactions;

This tells Athena to scan the S3 directory and register your partitions.

Step 3: Query the Data

Once the table is created, querying is as simple as:

SELECT year, month, SUM(amount) AS total_sales
FROM transactions
WHERE year = '2024' AND month = '04'
GROUP BY year, month;
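The same query can also be run programmatically with Boto3, which is handy for scheduled reporting. In this minimal sketch, the database name and results location are assumptions; adjust them to your setup:

import boto3

athena = boto3.client('athena')

response = athena.start_query_execution(
    QueryString="SELECT year, month, SUM(amount) AS total_sales "
                "FROM transactions "
                "WHERE year = '2024' AND month = '04' "
                "GROUP BY year, month",
    QueryExecutionContext={'Database': 'default'},  # assumed database name
    ResultConfiguration={'OutputLocation': 's3://sample-lakehouse/athena-results/'},  # assumed results bucket
)
print('Query execution ID:', response['QueryExecutionId'])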

Benefits of This Minimal Setup

  • Serverless: No infrastructure to manage
  • Fast setup: Just create a table and query
  • Cost-effective: Pay only for storage and queries
  • Flexible: Works with various data formats
  • Scalable: Store petabytes in S3 with ease

Building a Data Lakehouse on Amazon S3 and Athena offers a modern, scalable, and cost-effective approach to data analytics. With minimal setup and no server management, you can unlock insights from your data quickly while maintaining flexibility and governance, reducing operational overhead and accelerating time-to-value. Whether you're a startup or an enterprise, this setup provides a foundation for data-driven decision-making at scale and lets teams focus more on innovation and less on infrastructure.

Developing a Serverless Blogging Platform with AWS Lambda and Python (Thu, 12 Jun 2025)

Introduction

Serverless is changing the game—no need to manage servers anymore. In this blog, we'll see how to build a serverless blogging platform using AWS Lambda and Python. It's scalable, efficient, and cost-effective—perfect for modern apps.

How It Works

 

At a high level, API Gateway exposes the REST endpoints, Lambda handles the application logic, DynamoDB stores the blog data, and S3 with CloudFront serves the static frontend.

Prerequisites

Before starting the demo, make sure you have an AWS account, basic Python knowledge, and the AWS CLI and Boto3 installed.

Demonstration: Step-by-Step Guide

Step 1: Create a Lambda Function

Open the Lambda service and click “Create function.” Choose “Author from scratch,” name it something like BlogPostHandler, select Python 3.x, and give it a role with access to DynamoDB and S3. Then write your code using Boto3 to handle CRUD operations for blog posts stored in DynamoDB.

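As a rough sketch of what such a BlogPostHandler could look like with Lambda Proxy integration (the routes and response helper are assumptions; the BlogPosts table and postId key come from Step 3 below):

import json
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('BlogPosts')

def lambda_handler(event, context):
    method = event.get('httpMethod')
    if method == 'GET':
        # List all posts (a scan is fine for a demo; prefer Query for real workloads)
        return _response(200, table.scan().get('Items', []))
    if method == 'POST':
        post = json.loads(event.get('body') or '{}')
        table.put_item(Item=post)  # expects at least a postId attribute
        return _response(201, post)
    if method == 'DELETE':
        post_id = (event.get('pathParameters') or {}).get('postId')
        table.delete_item(Key={'postId': post_id})
        return _response(204, {})
    return _response(405, {'error': 'method not allowed'})

def _response(status, body):
    # Assumes string attributes; DynamoDB numbers come back as Decimal and need conversion
    return {
        'statusCode': status,
        'headers': {'Content-Type': 'application/json',
                    'Access-Control-Allow-Origin': '*'},  # CORS for the S3 frontend
        'body': json.dumps(body),
    }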

Step 2: Set Up API Gateway

First, go to API Gateway, choose REST API, and click "Build." Choose "New API," name it something like BlogAPI, and select "Edge optimized" for global access. Then create a resource like /posts, add methods like GET or POST, and link them to your Lambda function (e.g., BlogPostHandler) using Lambda Proxy integration. After setting up all methods, deploy the API by creating a stage like prod. You'll get an Invoke URL which you can test using Postman or curl.


 

Step 3: Configure DynamoDB

Open DynamoDB and click “Create table.” Name it something like BlogPosts, set postId as the partition key. If needed, add a sort key like category for filtering. Default on-demand capacity is fine—it scales automatically. You can also add extra attributes like timestamp or tags for sorting and categorizing. Once done, hit “Create.”
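If you prefer to script this step, here is an equivalent Boto3 sketch using the on-demand capacity described above:

import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='BlogPosts',
    KeySchema=[{'AttributeName': 'postId', 'KeyType': 'HASH'}],  # partition key
    AttributeDefinitions=[{'AttributeName': 'postId', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST',  # default on-demand capacity
)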


 


Step 4: Deploy Static Content on S3

First, make your front-end files—HTML, CSS, maybe some JavaScript. Then go to AWS S3, create a new bucket with a unique name, and upload your files like index.html. This will host your static website.


After uploading, set the bucket policy to allow public read access so anyone can view your site. That’s it—your static website will now be live from S3.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}

After uploading, don’t forget to replace your-bucket-name in the bucket policy with your actual S3 bucket name. This makes sure the permissions work properly. Now your static site is live—S3 will serve your HTML, CSS, and JS smoothly and reliably.

Step 5: Distribute via CloudFront

Go to CloudFront and create a new Web distribution. Set the origin to your S3 website URL (like your-bucket-name.s3-website.region.amazonaws.com, not the ARN). For Viewer Protocol Policy, choose “Redirect HTTP to HTTPS” for secure access. Leave other settings as-is unless you want to tweak cache settings. Then click “Create Distribution”—your site will now load faster worldwide.


To let your frontend talk to the backend, you need to enable CORS in API Gateway. Just open the console, go to each method (like GET, POST, DELETE), click “Actions,” and select “Enable CORS.” That’s it—your frontend and backend can now communicate properly.


Additionally, make sure your Lambda function responses include the following CORS headers (the handler sketch in Step 1 already includes the origin header).
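A minimal sketch of the headers dictionary; in production, tighten the origin to your CloudFront domain and the methods to what your API actually exposes:

headers = {
    'Access-Control-Allow-Origin': '*',  # or your CloudFront domain
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Allow-Methods': 'GET,POST,DELETE,OPTIONS',
}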

 

Results

That’s it—your serverless blogging platform is ready! API Gateway gives you the endpoints, Lambda handles the logic, DynamoDB stores your blog data, and S3 + CloudFront serve your frontend fast and globally. Fully functional, scalable, and no server headaches!

 


Conclusion

Building a serverless blog with AWS Lambda and Python shows how powerful and flexible serverless really is. It's low-maintenance, cost-effective, and scales easily, perfect for anything from a personal blog to a full content site. A solid setup for modern web apps!

Perficient Boldly Advances Business Through Technology Partnerships and Collaborative Innovation (Tue, 10 Jun 2025)

With thousands of skilled strategists and technologists worldwide, Perficient harnesses a collaborative spirit that delivers real results through the power of technology. We combine our deep industry expertise with trusted partnerships alongside leading technology innovators to transform some of the world’s biggest brands.  

Perficient’s vision is to be the place where great minds and great companies converge to boldly advance business. We fulfill this through our mission to Shatter Boundaries, Obsess Over Outcomes, and Forge the Future. In this final blog of our series celebrating Perficient’s recognition as a 2025 USA Today Top Workplace, we spotlight how our award-winning partnerships and diverse expertise fuel innovation and client success. Watch the video below to see how Perficient reimagines and revolutionizes digital transformation through collaboration with our clients and partners.  

 

Perficient’s Award-Winning Technology Partnerships 

Our broad partnerships with industry-leading technology innovators empower us to deliver customized, scalable solutions across industries and platforms that drive long-term growth. With an industry-first approach, we keep our solutions flexible and adaptable to evolving client needs. This versatility allows us to tailor each solution with the best-fit technologies, addressing unique industry challenges with greater personalization and enhancing client outcomes. 

“I see a lot of potential in Perficient in continuing to accelerate by tapping into the deep knowledge of our talent in the company and continuing to invest in some of the technologies and partners that we’re working with today.” Gugu Mabuza, Director, App Modernization

By leveraging our world-class expertise and strategic partnerships, we design and implement award-winning solutions that transform how our clients connect with their customers and grow their business. Our partner network’s tradeshows are also great opportunities to connect, engage, and learn. Perficient colleagues regularly attend these events, showcasing their expertise and deepening their knowledge of our partners’ technology stacks. 

“When we think about being partners with big companies like Adobe and Salesforce—that have a lot of knowledge that could bring us more knowledge and clients—we need to be working closely with them on a high level. We need to take advantage of that to be working together as a team.” – Lina Jaramillo, General Manager, Colombia 

Although we maintain many strategic alliances, our largest partnerships with leading technology innovators are some of the most impactful. Let’s explore these key partnerships and how our expertise delivers exceptional value through high-impact solutions. 

Adobe 

As an Adobe Platinum Partner with more than 300 certifications, we deliver top-tier expertise that reimagines customer experiences and accelerates business velocity for the world’s biggest brands. Our end-to-end marketing solutions on Adobe Experience Cloud empower clients to build deeper customer connections through smarter engagement, enhanced personalization, and data-driven insights.  

We proudly stand as Adobe's leading digital experience partner, having successfully completed more than 800 engagements across Adobe, Marketo, and Adobe Commerce platforms. As an award-winning Adobe Specialized Partner, we hold seven Adobe specializations (Experience Manager Sites, Experience Manager Run and Operate, Analytics, Commerce, Marketo Engage, Target, and Customer Journey Analytics), demonstrating our proven track record of successful implementations.

READ MORE: How Sunbelt Rentals Used First-Party Data to Convert Millions in Revenue  

At the forefront of digital marketing innovation, our Adobe experts led the first large-scale customer deployment to Adobe Experience Manager as a Cloud Service (AEMaaCS) and became the first globally to earn the Marketo Engage specialization. Powered by GenStudio and Sensei, our Adobe Generative AI (GenAI) solutions transform marketing by accelerating content creation, streamlining omnichannel campaigns, and enabling precise customer behavior forecasting to maximize personalization and ROI.   

Amazon Web Services 

In addition to our valued Adobe partnership, we are proud to be an Amazon Web Services (AWS) Advanced Services Partner. With over 15 years of experience and more than 150 successful AWS implementations, our deep expertise in cloud and enterprise applications helps clients optimize infrastructure, streamline operations, and enhance data analytics.  

By combining AWS's cross-industry flexibility with our broad industry expertise, we develop tailored cloud transformation strategies aligned to each client's unique objectives. This adaptable approach drives innovation and collaboration across our partner ecosystem, consistently delivering high-value, measurable business outcomes.

Our AWS solutions address critical business needs such as cloud migration, cost optimization, application modernization, product development, and advanced data capabilities, empowering organizations to improve operational efficiency, scalability, security, and speed to market. We also lead in AI-driven AWS solutions, leveraging Amazon Q Developer, Amazon Q Business, AWS Transform, Bedrock, and SageMaker to build and scale GenAI applications, accelerate product development, and deliver personalized, data-rich customer experiences.

LEARN MORE: Revolutionizing Patient Journeys With Amazon Connect 

Our longstanding AWS partnership and highly specialized solutions have earned us multiple key competencies and service delivery designations.

Salesforce 

As a leading Salesforce consulting partner, we deliver intelligent, data-driven solutions that enhance efficiency, collaboration, personalization, and actionable insights. With deep expertise across Data Cloud, Agentforce, Marketing Cloud, and Experience Cloud, we have successfully completed more than 3,000 Salesforce implementations. Our end-to-end solutions span customer service, digital marketing, communities, automation, commerce, platforms, and MuleSoft—enabling seamless digital experiences and personalized customer journeys. 

Harnessing Agentforce and Data Cloud, we drive AI-powered engagement to elevate sales, customer service, and marketing performance. As a member of the Agentforce Partner Network, we lead in building and deploying third-party AI agents that boost customer engagement and operational efficiency.

By uniting our Salesforce expertise with cross-industry knowledge, we accelerate transformation across the manufacturing, healthcare, financial services, and insurance sectors:

  • Manufacturing: Automate marketing, optimize partner management, and drive connected customer experiences with data insights.  
  • Healthcare: Use Salesforce Health Cloud to unify care ecosystems, streamline operations, and improve patient experiences. 
  • Financial Services: Trusted by more than 50 leading public financial services companies, we enable cross-channel personalization, real-time engagement, and smarter business intelligence with Salesforce Financial Services Cloud.  
  • Insurance: Our Salesforce Digital Direct solution scales operating models, accelerates time to market, centralizes customer data, and delivers personalized experiences. 

READ MORE: Strengthening Provider Relationship Management with Salesforce CRM 

Microsoft 

Recognized as a Microsoft Solutions Partner with over 20 years of experience, we accelerate business growth by enhancing internal productivity and collaboration, while elevating customer service, sales, and marketing through personalized, connected experiences.  

Our transformative Microsoft Cloud solutions empower clients to build modern workplaces by leveraging our award-winning expertise in app modernization, cloud-native development, intelligent business applications, and employee experience platforms. With extensive capabilities across Microsoft Azure, Dynamics, Modern Work, and Power Platform, we are proud to be one of only 40 Microsoft ESI (Enterprise & Strategic Industry) Managed Partners globally, a designation reserved for elite partners trusted to deliver large-scale, industry-focused digital transformation. This exclusive status gives Perficient and our clients priority access to Microsoft innovations, enhanced support, and joint go-to-market opportunities.

Perficient holds five of the six Microsoft Solutions Partner designations in Data and AI, Digital and App Innovation, Business Applications, Security, and Modern Work. As a result of our technical prowess and outstanding performance, we have also attained Microsoft Specializations in the following areas: 

  • Adoption and Change Management 
  • Advanced Custom Teams Solutions  
  • Low-Code Application Development 
  • Migrate Enterprise Apps to Azure 
  • AI Platform on Azure 
  • Intelligent Automation 

Our technical experts deliver integrated Microsoft AI solutions across industries and job functions, leveraging technologies such as Copilot and Azure to drive productivity, intelligent task automation, and actionable data insights for our clients. We also transform the healthcare and life sciences and manufacturing industries using Microsoft AI technologies to improve patient care, centralize data, optimize operations, and enhance supply chain efficiency. 

LEARN MORE: Architecting a Blueprint for Application Innovation at Builders FirstSource  

Our unwavering dedication to collaboration and innovation drives award-winning solutions that are reshaping the digital consulting landscape. By forging long-term partnerships with leading technology innovators and harnessing cutting-edge technologies, we boldly advance business alongside our clients. This final blog in our series not only celebrates Perficient’s recognition as a top workplace but also reaffirms our vision to empower clients and accelerate business growth. If you’re new to the series, we invite you to explore our earlier blogs to discover how our mission brings this vision to life. 

READY TO GROW YOUR CAREER? 

It’s no secret our success is because of our people. No matter the technology or time zone, our colleagues are committed to delivering innovative, end-to-end digital solutions for the world’s biggest brands, and we bring a collaborative spirit to every interaction. We’re always seeking the best and brightest to work with us. Join our team and experience a culture that challenges, champions, and celebrates our people.  

Visit our Careers page  to see career opportunities and more!  

Go inside Life at Perficient  and connect with us on LinkedIn, YouTube, X, Facebook, TikTok, and Instagram.  

Boost Cloud Efficiency: AWS Well-Architected Cost Tips (Mon, 09 Jun 2025)

In today's cloud-first world, building a secure, high-performing, resilient, and efficient infrastructure is more critical than ever. That's where the AWS Well-Architected Framework comes in: a powerful guide designed to help architects and developers make informed decisions and build better cloud-native solutions.

What is the AWS Well-Architected Framework?

The AWS Well-Architected Framework provides a consistent approach for evaluating and improving your cloud architecture. It’s built around six core pillars that represent key areas of focus for building robust and scalable systems:

  • Operational Excellence – Continuously monitor and improve systems and processes.
  • Security – Protect data, systems, and assets through risk assessments and mitigation strategies.
  • Reliability – Ensure workloads perform as intended and recover quickly from failures.
  • Performance Efficiency – Use resources efficiently and adapt to changing requirements.
  • Cost Optimization – Avoid unnecessary costs and maximize value.
  • Sustainability – Minimize environmental impact by optimizing resource usage and energy consumption.


Explore the AWS Well-Architected Framework here: https://aws.amazon.com/architecture/well-architected

AWS Well-Architected Timeline

From time to time, AWS updates the framework and introduces new resources that we can follow to better fit our use cases and achieve better architectures.


AWS Well-Architected Tool

To help you apply these principles, AWS offers the Well-Architected Tool—a free service that guides you through evaluating your workloads against the six pillars.

How it Works:

  • Select a workload.
  • Answer a series of questions aligned with the framework.
  • Review insights and recommendations.
  • Generate reports and track improvements over time.

Try the AWS Well-Architected Tool here https://aws.amazon.com/well-architected-tool/
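The tool is also scriptable. As a minimal Boto3 sketch (assuming you have already defined at least one workload in the tool), you can list your workloads and their per-pillar risk counts:

import boto3

wa = boto3.client('wellarchitected')

for workload in wa.list_workloads()['WorkloadSummaries']:
    print(workload['WorkloadName'], workload.get('RiskCounts', {}))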

Go Deeper with Labs and Lenses

AWS also provides hands-on Well-Architected Labs and workload-specific Lenses (such as the Serverless and Machine Learning Lenses) that extend the framework's guidance to particular workload types.

Deep Dive: Cost Optimization Pillar

Cost Optimization is not just about cutting costs—it’s about maximizing value. It ensures that your cloud investments align with business goals and scale efficiently.

Why It Matters:

  • Understand your spending patterns.
  • Ensure costs support growth, not hinder it.
  • Maintain control as usage scales.

5 Best Practices for Cost Optimization

  1. Practice Cloud Financial Management
  • Build a cost optimization team.
  • Foster collaboration between finance and tech teams.
  • Use budgets and forecasts.
  • Promote cost-aware processes and culture.
  • Quantify business value through automation and lifecycle management.
  2. Expenditure and Usage Awareness
  • Implement governance policies.
  • Monitor usage and costs in real time (see the sketch after this list).
  • Decommission unused or underutilized resources.
  3. Use Cost-Effective Resources
  • Choose the right services and pricing models.
  • Match resource types and sizes to workload needs.
  • Plan for data transfer costs.
  4. Manage Demand and Supply
  • Use auto-scaling, throttling, and buffering to avoid over-provisioning.
  • Align resource supply with actual demand patterns.
  5. Optimize Over Time
  • Regularly review new AWS features and services.
  • Adopt innovations that reduce costs and improve performance.
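To put the expenditure-awareness practice into code, here is a minimal Cost Explorer sketch that breaks a month's spend down by service; it assumes Cost Explorer is enabled, and the date range is a placeholder:

import boto3

ce = boto3.client('ce')  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2025-05-01', 'End': '2025-06-01'},  # placeholder window
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}],
)

for group in response['ResultsByTime'][0]['Groups']:
    print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])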

Conclusion

The AWS Well-Architected Framework is more than a checklist—it’s a mindset. By embracing its principles, especially cost optimization, you can build cloud environments that are not only efficient and scalable but also financially sustainable.

Perficient Achieves AWS Glue Service Delivery Designation (Wed, 19 Mar 2025)

Perficient has earned the AWS Glue Service Delivery Designation, demonstrating our deep technical expertise and proven success in delivering scalable, cost-effective, and high-performance data integration, data pipeline orchestration, and data catalog solutions.

What is the AWS Service Delivery Program?

The AWS Service Delivery Program is an AWS Specialization Program designed to validate AWS Partners with deep technical knowledge, hands-on experience, and a history of success in implementing specific AWS services for customers.

By achieving the AWS Glue specialization, Perficient is now recognized as a trusted partner to help organizations unlock the full potential of their data—from discovery and transformation to governance and automation.

What This Means for Customers

With the AWS Glue Service Delivery Designation, Perficient provides customers with a faster, more reliable, and cost-effective approach to data transformation, integration, and analytics. This designation translates into tangible business outcomes:

  • Accelerated Time-to-Insight – Automate and streamline ETL processes, enabling real-time and predictive analytics that drive smarter decision-making.
  • Cost Efficiency & Scalability – Reduce operational overhead with a serverless, pay-as-you-go model, ensuring businesses only pay for what they use.
  • Enhanced Data Governance & Compliance – Leverage a centralized, searchable data catalog for better data discovery, security, and compliance across industries.
  • Seamless Data Integration – Connect structured and unstructured data from multiple sources, improving accessibility and usability across the enterprise.
  • Future-Ready Data Strategy – Enable AI/ML-powered insights by preparing data pipelines that fuel advanced analytics and innovation.

Why Perficient?

Perficient is an AWS Advanced Services Partner dedicated to helping enterprises transform and innovate through cloud-first solutions. We specialize in delivering end-to-end data and cloud strategies that drive business growth, efficiency, and resilience.

With deep expertise in AWS Glue and broader AWS analytics services, we empower organizations to modernize their data ecosystems, optimize cloud infrastructure, and harness AI-driven insights. Our industry-focused solutions enable companies to unlock new business value and gain a competitive edge.

Beyond technology, we are committed to building long-term partnerships, solving complex challenges, and making a positive impact in the communities where we operate.

Automating Backup and Restore with AWS Backup Service using Python (Wed, 05 Mar 2025)

Protecting data is vital for any organization, and AWS Backup Service offers a centralized, automated solution to back up your AWS resources. This blog will examine how to automate backup and restore operations using AWS Backup and Python, ensuring your data remains secure and recoverable.

Why We Use AWS Backup Service

Manual backup processes can be error-prone and time-consuming. AWS Backup streamlines and centralizes our backup tasks, providing consistent protection across AWS services like EC2, RDS, DynamoDB, EFS, and more. By leveraging Python to manage AWS Backup, we can achieve further automation, integrate with other systems, and customize solutions to meet our business needs.

How It Works

AWS Backup enables us to set up backup policies and schedules through backup plans. These plans determine the timing and frequency of backups and their retention duration. By utilizing Python scripts, we can create, manage, and monitor these backup operations using the AWS SDK for Python, Boto3.

Prerequisites

Before we begin, we must have:

  1. An AWS account.
  2. Basic knowledge of Python programming.
  3. AWS CLI installed and configured.
  4. Boto3 library installed in your Python environment.

Automating Backup/Restore with AWS Backup

Step 1: Set Up AWS Backup

To start, we log into the AWS Management Console and navigate to the AWS Backup service. Once there, we create a new backup vault to serve as the designated storage location for our backups. After setting up the vault, the next step is to define a backup plan. This plan should clearly specify the AWS resources we intend to back up, as well as outline the backup schedule and retention period for each backup. By following these steps, we effectively organize and automate our data protection strategy within AWS.

Step 2: Write Python Scripts for Backup Automation

To automate our EC2 instance backups using AWS Backup with Python, we begin by installing the boto3 library with pip install boto3 and configuring our AWS credentials in ~/.aws/credentials. Using boto3, we connect to the AWS Backup service and define a backup plan with our desired schedule and retention policy. We then assign the EC2 instance to this plan by specifying its ARN. Finally, we run the Python script to create the backup plan and associate the instance, efficiently automating the backup process.
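As a sketch of what the plan creation and resource assignment could look like in code (the plan name, vault name, schedule, retention, role ARN, and instance ARN below are all placeholders):

import boto3

client = boto3.client('backup', region_name='eu-west-1')

# Daily backups at 5 AM UTC, retained for 30 days
plan = client.create_backup_plan(BackupPlan={
    'BackupPlanName': 'daily-ec2-backups',
    'Rules': [{
        'RuleName': 'daily',
        'TargetBackupVaultName': 'my-backup-vault',
        'ScheduleExpression': 'cron(0 5 * * ? *)',
        'Lifecycle': {'DeleteAfterDays': 30},
    }],
})

# Assign the EC2 instance to the plan by its ARN
client.create_backup_selection(
    BackupPlanId=plan['BackupPlanId'],
    BackupSelection={
        'SelectionName': 'ec2-selection',
        'IamRoleArn': 'arn:aws:iam::111122334455:role/service-role/AWSBackupDefaultServiceRole',
        'Resources': ['arn:aws:ec2:eu-west-1:111122334455:instance/i-0123456789abcdef0'],
    },
)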

Find the complete code here. Below is a trimmed sketch of it (the vault name and IAM role ARN are placeholders for your own values):

import boto3
from botocore.exceptions import ClientError

def start_backup_job(instance_arn):
    client = boto3.client('backup', region_name='eu-west-1')  # Ensure the correct region
    try:
        response = client.start_backup_job(
            BackupVaultName='my-backup-vault',
            ResourceArn=instance_arn,
            IamRoleArn='arn:aws:iam::111122334455:role/service-role/AWSBackupDefaultServiceRole',
        )
        print('Backup job started:', response['BackupJobId'])
    except ClientError as e:
        print('Error starting backup job:', e)

After running the code, the script prints the backup job ID, and we can see the job triggered via code in the AWS Backup Jobs console.

Step 3: Automate Restore Operations:

To automate our restore operations for an EC2 instance using AWS Backup with Python, we start by using the boto3 library to connect to the AWS Backup service. Once connected, we retrieve the backup recovery points for our EC2 instance and select the appropriate recovery point based on our restore requirements. We then initiate a restore job by specifying the restored instance’s recovery point and desired target. By scripting this process, we can automatically restore EC2 instances to a previous state, streamlining our disaster recovery efforts and minimizing downtime.

Find the complete code here. Below is a trimmed sketch of it (the IAM role ARN is a placeholder, and the Metadata keys depend on the resource type being restored):

import boto3
from botocore.exceptions import ClientError

def restore_backup(recovery_point_arn):
    client = boto3.client('backup', region_name='eu-west-1')
    try:
        response = client.start_restore_job(
            RecoveryPointArn=recovery_point_arn,
            Metadata={'InstanceType': 't2.micro'},
            IamRoleArn='arn:aws:iam::111122334455:role/service-role/AWSBackupDefaultServiceRole',
            ResourceType='EC2',
        )
        print('Restore job started:', response['RestoreJobId'])
    except ClientError as e:
        print('Error starting restore job:', e)

After running the code, the script prints the restore job ID, and we can see the job in the AWS Backup restore jobs console. Once the restore job is completed, we can navigate to the EC2 console and see a new EC2 instance launched by that job.

 

Step 4: Monitor and Schedule

Additionally, we may implement Amazon CloudWatch to monitor our backup and restore operations by tracking key metrics. To automate these processes, we schedule our scripts to run automatically, using either cron jobs on our servers or AWS Lambda for serverless execution. This approach enables us to streamline and manage our backup activities efficiently.
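For the serverless option, one way to schedule the script is an EventBridge rule that invokes a Lambda function on a cron expression. In this sketch, the rule name, schedule, and function ARN are placeholders (the function also needs a resource policy allowing events.amazonaws.com to invoke it):

import boto3

events = boto3.client('events')

events.put_rule(
    Name='nightly-backup',
    ScheduleExpression='cron(0 2 * * ? *)',  # 2 AM UTC daily
)
events.put_targets(
    Rule='nightly-backup',
    Targets=[{
        'Id': 'backup-lambda',
        'Arn': 'arn:aws:lambda:eu-west-1:111122334455:function:start-backups',
    }],
)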

Conclusion

We enhance our data protection strategy by automating backup and restore operations using AWS Backup and Python. By leveraging AWS Backup’s centralized capabilities and Python’s automation power, we ensure consistent and reliable backups, allowing us to focus on more strategic initiatives. We experiment with various backup policies and extend automation to meet our organization’s unique needs.

How To Create High Availability Kubernetes Cluster on AWS using KUBEONE: Part-2 (Mon, 24 Feb 2025)

In Part 1, we learned about the importance of KubeOne. Now let's move on to the demo: this practical session focuses on creating a highly available Kubernetes cluster on AWS using KubeOne.

Setup KubeOne 

1. Downloading KubeOne

First, create an EC2 instance (any suitable instance type), then download KubeOne using the commands below:

sudo apt-get update

sudo apt-get -y install unzip

curl -sfL https://get.kubeone.io | sh

 

The above script downloads the latest version of KubeOne from GitHub and unpacks it into the /usr/local/bin directory.

2. Downloading Terraform

We will use Terraform to manage the infrastructure for the control plane, so we need to install it first. The following commands download Terraform.

Below is the official documentation link to install terraform:

https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli

 

The following steps add HashiCorp's GPG key and install HashiCorp's Debian package repository.

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

 

Now Install the HashiCorp GPG key:

wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null

 

Verify the Key’s Fingerprint:

gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint

 

Add the Official HashiCorp Repository into System:

echo “deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \

https://apt.releases.hashicorp.com $(lsb_release -cs) main” | \

sudo tee /etc/apt/sources.list.d/hashicorp.list

 

Download the package information from HashiCorp.

sudo apt update

 

Install Terraform from the new repository.

sudo apt-get install terraform -y

 

3. Configuring The Environment

Download The AWS CLI

sudo apt install unzip -y

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

unzip awscliv2.zip

sudo ./aws/install

KubeOne and Terraform need the cloud provider credentials exported as environment variables.

 

Create an IAM user

We need an IAM user with the appropriate permissions for Terraform to create the infrastructure and for the machine-controller to create worker nodes.

 


 

 

Click on ‘Users’ and ‘Create User’

 


 

Once the user is created, attach the policy.

 


 

Click to create the access key and secret key.

 


 

We will use aws configure to configure both keys:

aws configure

 

4. Creating The Infrastructure

Create a key pair on the server:

ssh-keygen -t rsa -b 4096

Now let's move to the directory with the example Terraform configs that was created while installing KubeOne:

cd kubeone_1.9.1_linux_amd64/examples/terraform/aws

Before using Terraform, we initialize the directory structure and download the required plugins with the init command below.

terraform init

 


 

Also, in the same directory, create a file named terraform.tfvars; it will contain Terraform variables that customize the infrastructure creation process.

vim terraform.tfvars

 

Now add the two variables below:

cluster_name = "kubeone-cluster"

ssh_public_key_file = "~/.ssh/id_rsa.pub"

 

The cluster_name variable is used as a prefix for cloud resources. ssh_public_key_file is the path to an SSH public key that will be deployed on the instances; KubeOne connects to instances over SSH for provisioning and configuration. If you need to generate a key, run ssh-keygen.

Now run the terraform plan command to see what changes will be made.

terraform plan

 


 

Now run the terraform apply command and enter "yes" when prompted.

terraform apply

The above command will create all the infrastructure that we need to get started.

 


 

Finally, we need to save the Terraform state in a format KubeOne can read to get info about the AWS resources. This helps with setting up the cluster and creating worker nodes later. The format is already set in a file called output.tf, so all you have to do is run the output command.

terraform output -json > tf.json

This command creates a file named tf.json with the Terraform state in JSON format, which KubeOne can read. Once that’s done, we’re ready to set up our cluster in the next step.

 

5. Provisioning The Cluster

Now that the infrastructure is ready, we can use KubeOne to set up a Kubernetes cluster.

The first step is to create a KubeOne configuration file (kubeone.yaml). This file defines details like how the cluster will be set up and which version of Kubernetes to use.

vim kubeone.yaml

 

Add the code below to the kubeone.yaml file:

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: '1.30.0'
cloudProvider:
  aws: {}
  external: true

 

Before proceeding, choose the Kubernetes version you want to use and replace any placeholder values with the actual ones.

Now set the environment variables using the export command:

 

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)

export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

 

Now run the below command again

kubeone apply -m kubeone.yaml -t tf.json

If kubeone fails with an SSH agent error, run the commands below.

 


 

Start a fresh SSH agent:

# Start SSH agent correctly
eval "$(ssh-agent)"

# Verify the environment variables
echo $SSH_AUTH_SOCK

echo $SSH_AGENT_PID

 

Add the ssh keys

# Add your private key

ssh-add ~/.ssh/id_rsa

# Verify keys are added

ssh-add -l

 

Set the Correct Permissions:

 

# Fix SSH directory permissions

chmod 700 ~/.ssh

chmod 600 ~/.ssh/id_rsa

chmod 644 ~/.ssh/id_rsa.pub

 

Run the apply command again
kubeone apply -m kubeone.yaml -t tf.json

This will create the cluster.

 

6. Install Kubectl

Let's install kubectl:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

chmod +x kubectl

mkdir -p ~/.local/bin

mv ./kubectl ~/.local/bin/kubectl

 

7. Configuring The Cluster Access 

KubeOne automatically downloads the Kubeconfig file for your cluster, named <cluster_name>-kubeconfig (where <cluster_name> is from the terraform.tfvars file). You can use this file with kubectl like this:

kubectl --kubeconfig=<cluster_name>-kubeconfig

kubectl get nodes --kubeconfig=kubeone-cluster-kubeconfig

 


 

Now copy the config to the ~/.kube folder:

cp kubeone-cluster-kubeconfig ~/.kube/config

Try now without --kubeconfig:

kubectl get nodes

 


 

Let's create an "nginx" pod using the commands below:

kubectl run nginx --image=nginx

kubectl get pods

 


 

Shutting down the cluster

The goal of unprovisioning is to delete the cluster and free up cloud resources. Use it only if you no longer need the cluster. To unprovision the cluster, run the reset command below.

kubeone reset --manifest kubeone.yaml -t tf.json

Removing Infrastructure Using Terraform

If you’re using Terraform, you can delete all resources with the destroy command. Terraform will list what will be removed and ask you to confirm by typing “yes.” If your cluster is on GCP, you need to manually remove Routes created by kube-controller-manager in the cloud console before running terraform destroy.

terraform destroy

Finally, remove all servers and the IAM user.

Conclusion:

KubeOne is a solid, reliable choice for automating Kubernetes cluster management, especially for users who need high availability and multi-cloud or hybrid support. It is particularly well-suited for organizations looking for a simple yet powerful solution for managing Kubernetes at scale without the overhead of more complex management platforms. However, it might not have as broad an ecosystem or user base as some of the more widely known alternatives.

Securely Interacting with AWS Services Using Boto3 API (Fri, 17 Jan 2025)

In today’s cloud-centric world, AWS (Amazon Web Services) stands out as a leading provider of scalable and reliable cloud services. Python’s Boto3 library is a powerful tool that allows developers to interact with AWS services programmatically. However, ensuring secure interactions is crucial to protect sensitive data and maintain the integrity of your applications.

The main objective of this blog is to explain how we can interact with different AWS services in a secure way. I'll show how to create a session object from AWS credentials (access key and secret key) fetched from OS environment variables, and then use that session object to interact with AWS services.

Setting Up Python, Boto3 API, AWS and VS Code Editor

Python

You can check whether Python is installed on your system/server by running the python --version command; this works on any operating system, whether Windows, Linux/Unix, or macOS. If Python is not installed, install it first before moving forward.

You can download and install Python from its official page: Download Python | Python.org

VS Code

I am using the VS Code editor for developing the Boto3 API code, so we also need to verify a few things in the code editor.

  1. We need to install the Python extension for Visual Studio Code, which integrates with the editor and offers support for IntelliSense (Pylance), debugging (Python Debugger), formatting, linting, code navigation, refactoring, variable explorer, test explorer, and more.


  2. We also need to ensure the Python version shows on the bottom-right bar when editing a Python file. This will appear once Python is set up properly in the system.


Boto3

Once Python and VS Code are set up, we need to install the Python boto3 package with the command pip install boto3.

  • The boto3 package will not be recognized and will raise errors at execution until we install it; VS Code marks the unresolved boto3 import with a yellow underline.


  • To run this command in the VS Code editor, open a terminal from Terminal >> New Terminal and run it there. A few other dependent packages are installed along with boto3, and pip (the Python package manager) may prompt you to upgrade it as well.


  • Now we are ready to use the Boto3 API.

AWS

Configure your AWS credentials using the AWS CLI or by setting environment variables.

Securely Managing AWS Credentials

Managing AWS credentials securely is the first step in ensuring secure interactions with AWS services. There are two ways we can supply credentials when interacting with different AWS services:

  1. Environment Variables: Store your AWS credentials in environment variables instead of hardcoding them in your scripts.
import os
import boto3

aws_access_key = os.getenv('AWS_ACCESS_KEY_ID')
aws_secret_key = os.getenv('AWS_SECRET_ACCESS_KEY')

session = boto3.Session(
    aws_access_key_id=aws_access_key,
    aws_secret_access_key=aws_secret_key
)
  2. IAM Roles: Use IAM roles for EC2 instances to avoid storing credentials on the instance.
session = boto3.Session()
s3 = session.resource('s3')

Interacting with Different AWS Services Using the Boto3 API

Let’s explore how to interact with some common AWS services securely.

Amazon S3

Amazon S3 is a widely used storage service. Here’s how to securely interact with S3 using Boto3.

  1. Uploading Files
import os
import boto3

aws_access_key = os.getenv('aws_access_key_id')
aws_secret_key = os.getenv('aws_secret_access_key')
session = boto3.Session( aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key )

s3 = session.resource('s3')
bucket_name = 'sachinsinghfirstbucket'
file_path = 'temp/first.txt'
s3.Bucket(bucket_name).upload_file(file_path, 'first.txt')


  2. Downloading Files
import os
import boto3

aws_access_key = os.getenv('aws_access_key_id')
aws_secret_key = os.getenv('aws_secret_access_key')
session = boto3.Session( aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key )

s3 = session.resource('s3')
bucket_name = 'sachinsinghfirstbucket'
file_path = 'temp/first_copy.txt'
s3.Bucket(bucket_name).download_file('first.txt', file_path)


Amazon EC2

Amazon EC2 provides scalable computing capacity. Here’s how to manage EC2 instances securely.

  1. Launching an Instance
import os
import boto3

aws_access_key = os.getenv('aws_access_key_id')
aws_secret_key = os.getenv('aws_secret_access_key')
session = boto3.Session( aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key )

ec2 = session.resource('ec2')
instance = ec2.create_instances(
    ImageId='ami-07b69f62c1d38b012',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro'
)


  2. Stopping an Instance
import os
import boto3

aws_access_key = os.getenv('aws_access_key_id')
aws_secret_key = os.getenv('aws_secret_access_key')
session = boto3.Session( aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key )

instance_id = 'i-00ab4568503979da4'
ec2 = session.resource('ec2')
ec2.Instance(instance_id).stop()


For Other Services

You can go through other services and their detailed documentation here: Boto3 1.35.91 documentation

Best Practices for Secure Boto3 Interactions

  1. Use Least Privilege: Ensure that your IAM policies grant the minimum permissions required for your tasks (see the sketch after this list).

  2. Rotate Credentials Regularly: Regularly rotate your AWS credentials to reduce the risk of compromise.

  3. Enable Logging and Monitoring: Use AWS CloudTrail and CloudWatch to monitor and log API calls for auditing and troubleshooting.
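To make least privilege concrete, here is a sketch of assuming a narrowly scoped role with STS instead of using long-lived keys (the role ARN is a placeholder):

import boto3

sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::111122334455:role/S3ReadOnlyRole',  # placeholder role
    RoleSessionName='read-only-session',
)['Credentials']

# Build a session from the short-lived credentials returned by STS
session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
for bucket in session.resource('s3').buckets.all():
    print(bucket.name)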

Interacting with AWS services using Boto3 is powerful and flexible, but security should always be a top priority. By following best practices and leveraging AWS’s security features, you can ensure that your applications remain secure and resilient.

Migration of DNS Hosted Zones in AWS (Tue, 31 Dec 2024)

Transferring Route 53 hosted zone records between AWS accounts using the CLI involves exporting the records from one account and then importing them to another. Here’s a step-by-step guide:

Migration of DNS Hosted Zones in AWS

The primary objective of this process is to migrate Route 53 hosted zone records seamlessly between AWS accounts while ensuring minimal disruption to DNS functionality. This involves securely transferring DNS records, preserving their integrity, maintaining availability, and ensuring linked AWS resources remain accessible. Additionally, cross-account DNS access may be implemented as needed to meet business requirements.


Objectives:

  • Migration Process Overview
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

 

Prerequisites:

  • Account Permissions: Ensure you have AmazonRoute53FullAccess permissions in both source and destination accounts. For domain transfers, additional permissions (TransferDomains, DisableDomainTransferLock, etc.) are required.
  • Export Tooling: Use the AWS CLI or SDK for listing and exporting DNS records, as Route 53 does not have a built-in export feature.
  • Destination Hosted Zone: Create a hosted zone in the destination account with the same domain name as the original. Note the new hosted zone ID for use in subsequent steps.
  • AWS Resource Dependencies: Identify resources tied to DNS records (such as EC2 instances or ELBs) and ensure these are accessible or re-created in the destination account if needed.

 

Configuration Overview:

1. Create an EC2 Instance and Download cli53 Using the Commands Below:

  • Use cli53 to list the DNS records in the source account and save them to a file:

wget https://github.com/barnybug/cli53/releases/download/0.8.16/cli53-linux-amd64

Note: A local Linux machine can be used instead of an EC2 instance, but it still requires the cli53 binary and configured AWS credentials.

 

  • Move the cli53 binary to a directory on your PATH (for example, /usr/local/bin) and make it executable.

Img2

2. Create Hosted Zone in Destination Account:

  • In the destination account, create a new hosted zone with the same domain name using the CLI or the AWS console:
    • Take note of the new hosted zone ID.
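
If you prefer to script this step, a minimal boto3 sketch could look like the following; the domain name and comment are placeholders, not values from this migration.

import uuid
import boto3

# Create the hosted zone in the destination account; the domain name is a placeholder
route53 = boto3.client('route53')
response = route53.create_hosted_zone(
    Name='example.com',
    CallerReference=str(uuid.uuid4()),  # must be unique per request
    HostedZoneConfig={'Comment': 'Migrated from source account'}
)

# Record the new hosted zone ID for the import step
new_zone_id = response['HostedZone']['Id']
print(new_zone_id)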

3. Export DNS Records from Existing Hosted Zone:

  • Export the records from the EC2 instance using the cli53 command below, then remove the NS and SOA records from the resulting file, as the new hosted zone generates these by default.

Img3

Note: Microsoft.com is used here only as a dummy hosted zone for the example.
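
If cli53 is not an option, the same export can be sketched with boto3; the hosted zone ID and output file name below are assumptions for illustration.

import json
import boto3

route53 = boto3.client('route53')
source_zone_id = 'Z0123456789EXAMPLE'  # placeholder for the source hosted zone ID

# Page through all record sets, skipping NS and SOA, which the new zone provides
records = []
paginator = route53.get_paginator('list_resource_record_sets')
for page in paginator.paginate(HostedZoneId=source_zone_id):
    for rrset in page['ResourceRecordSets']:
        if rrset['Type'] not in ('NS', 'SOA'):
            records.append(rrset)

# Save the records to a file for the import step
with open('exported_records.json', 'w') as f:
    json.dump(records, f, indent=2)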

4. Import DNS Records to Destination Hosted Zone:

  • Use the exported file to import the records into the new hosted zone; copy all records from the domain.com.txt file.

Img4

  • Now log in to the destination account's Route 53 console and import the records copied from the exported file, as shown in the screenshot below.
  • Save the file and verify the records.

Img5
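
If you scripted the export with boto3 as sketched above, the import side can be scripted the same way; the profile name and destination hosted zone ID below are placeholders, and credentials for the destination account are assumed.

import json
import boto3

# Use a profile (or credentials) that belongs to the destination account;
# the profile name and hosted zone ID below are placeholders
session = boto3.Session(profile_name='destination-account')
route53 = session.client('route53')
destination_zone_id = 'Z9876543210EXAMPLE'

with open('exported_records.json') as f:
    records = json.load(f)

# UPSERT each exported record set into the new hosted zone
# Note: alias records pointing to resources in the source account may need adjusting
changes = [{'Action': 'UPSERT', 'ResourceRecordSet': rrset} for rrset in records]
route53.change_resource_record_sets(
    HostedZoneId=destination_zone_id,
    ChangeBatch={'Comment': 'Migrated records', 'Changes': changes}
)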

5. Test DNS Records:

  • Verify DNS record functionality by querying records in the new hosted zone and ensuring that all services resolve correctly.
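
Beyond querying with dig or nslookup, Route 53's TestDNSAnswer API can be called through boto3 to check how the new hosted zone answers a query; the zone ID and record name below are placeholders.

import boto3

route53 = boto3.client('route53')

# Ask Route 53 how the new hosted zone would answer a query for a given record
answer = route53.test_dns_answer(
    HostedZoneId='Z9876543210EXAMPLE',  # placeholder for the new hosted zone ID
    RecordName='www.example.com',
    RecordType='A'
)
print(answer['ResponseCode'], answer['RecordData'])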

 

Best Practices:

When migrating Route 53 hosted zones between AWS accounts, applying best practices helps ensure a smooth transition with minimal disruption. Here are key best practices for a successful Route 53 hosted zone migration:

1. Plan and Document the Migration Process

  • Detailed Planning: Outline each step of the migration process, including DNS record export, transfer, and import, as well as any required changes in the destination account.
  • Documentation: Document all DNS records, configurations, and dependencies before starting the migration. This helps in troubleshooting and serves as a backup.

2. Schedule Migration During Low-Traffic Periods

  • Reduce Impact: Perform the migration during off-peak hours to minimize potential disruption, especially if you need to update NS records or other critical DNS configurations.

3. Test in a Staging Environment

  • Dry Run: Before migrating a production hosted zone, perform a test migration in a staging environment. This helps identify potential issues and ensures that your migration plan is sound.
  • Verify Configurations: Ensure that the DNS records resolve correctly and that applications dependent on these records function as expected.

4. Use Route 53 Resolver for Multi-Account Setups

  • Centralized DNS Management: For environments with multiple AWS accounts, consider using Route 53 Resolver endpoints and sharing resolver rules through AWS Resource Access Manager (RAM). This enables efficient cross-account DNS resolution without duplicating hosted zones across accounts.

5. Avoid Overwriting NS and SOA Records

  • Use Default NS and SOA: Route 53 automatically creates NS and SOA records when you create a hosted zone. Retain these default records in the destination account, as they are linked to the new hosted zone’s configuration and AWS infrastructure.

6. Update Resource Permissions and Dependencies

  • Resource Links: DNS records may point to AWS resources like load balancers or S3 buckets. Ensure that these resources are accessible from the new account and adjust permissions if necessary.
  • Cross-Account Access: If resources remain in the source account, establish cross-account permissions to ensure continued access.

7. Validate DNS Records Post-Migration

  • DNS Resolution Testing: Test the new hosted zone’s DNS records using tools like dig or nslookup to confirm they are resolving correctly. Check application connectivity to confirm that all dependent services are operational.
  • TTL Considerations: Set a low TTL (Time to Live) on records before migration so that changes propagate quickly once the migration is complete.

8. Consider Security and Access Control

  • Secure Access: Ensure that only authorized personnel have access to modify hosted zones during the migration.

9. Establish a Rollback Plan

  • Rollback Strategy: Plan for a rollback if any issues arise. Keep the original hosted zone active until the new configuration is fully tested and validated.
  • Backup Data: Maintain a backup of all records and configurations so you can revert to the original settings if needed.

Conclusion

Migrating a Route 53 hosted zone between AWS accounts involves careful planning, especially to ensure DNS records are exported and imported correctly. After migrating, testing is crucial to confirm that DNS resolution works as expected. Cross-account setups may require additional configuration, such as Route 53 Resolver rules, to ensure seamless DNS functionality across environments.

Enabling AWS IAM DB Authentication https://blogs.perficient.com/2024/12/24/enabling-aws-iam-db-authentication/ https://blogs.perficient.com/2024/12/24/enabling-aws-iam-db-authentication/#respond Tue, 24 Dec 2024 07:15:02 +0000 https://blogs.perficient.com/?p=374192

IAM Database Authentication lets you log in to your Amazon RDS database using your IAM credentials. This makes it easier to manage access, improves security, and provides more control over who can do what. Let’s look at how to set it up and use it effectively.

Objective:

IAM DB Authentication improves security, enables centralized user management, supports auditing, and ensures scalability for database access.

How it Works:

We can enable and use this feature in three simple steps:

  1. Enabling IAM DB authentication
  2. Enabling RDS access for an AWS IAM user
  3. Generating a token and connecting to the DB as the AWS IAM user

To Enable IAM DB Authentication, Follow the Steps Below:

  1. Select the RDS instance
    1
  2. Click the Modify button
    Picture2
  3. Navigate to the DB authentication section and select Password and IAM database authentication

Picture3

  1. Older RDS versions may not show this option in the console, but you can enable it using the CLI or the SDK (see the sketch below).
  2. Once selected, you will be asked to confirm the master password; then click Modify to save the changes.
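
As a rough boto3 equivalent of that CLI step, the setting can be enabled programmatically; the instance identifier below is a placeholder.

import boto3

rds = boto3.client('rds')

# Turn on IAM database authentication for an existing instance;
# the identifier below is a placeholder
rds.modify_db_instance(
    DBInstanceIdentifier='my-rds-instance',
    EnableIAMDatabaseAuthentication=True,
    ApplyImmediately=True
)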

Enable RDS Access for the AWS IAM User:

  1. Create an IAM policy

For example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:<region>:<account-id>:dbuser:<db-cluster-id>/<username>"
    }
  ]
}

  2. After creating the policy, navigate to the user you want to grant access to and attach the policy to that user (a boto3 sketch follows the screenshot below).

Picture4
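
A hedged boto3 sketch of creating and attaching such a policy follows; the policy name, ARN components, and user name are placeholders for illustration.

import json
import boto3

iam = boto3.client('iam')

# The ARN components and user name below are placeholders
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "rds-db:connect",
        "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL/db_user"
    }]
}

# Create the policy and attach it to the IAM user
policy = iam.create_policy(
    PolicyName='rds-iam-connect-example',
    PolicyDocument=json.dumps(policy_document)
)
iam.attach_user_policy(
    UserName='example-user',
    PolicyArn=policy['Policy']['Arn']
)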

 

Connecting to the DB Using the AWS IAM User:

  1. First, generate a token to connect to the RDS database; to generate a token, run the command below:

aws rds generate-db-auth-token --hostname <db_endpoint_url> --port 3306 --region <region> --username <db_username>

Make sure the AWS CLI is configured; otherwise, you will get the error shown below. Configure it with the credentials of the IAM user you want to use to connect to the database.

Picture5

  2. Then connect to MySQL by passing that token in the command below:

mysql --host=<db_endpoint_url> --port=3306 --ssl-ca=<ssl_ca_cert_if_using_ssl> --user=<db_username> --password='<generated_token_value>'

Picture7
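
The token can also be generated programmatically with boto3 and passed as the password to the mysql command above; the endpoint, region, and user name below are placeholders.

import boto3

# Generate a short-lived authentication token (valid for 15 minutes) that is
# used in place of a password; all values below are placeholders
rds = boto3.client('rds', region_name='us-east-1')
token = rds.generate_db_auth_token(
    DBHostname='mydb.abcdefghijkl.us-east-1.rds.amazonaws.com',
    Port=3306,
    DBUsername='db_user'
)
print(token)  # pass this value as --password to the mysql client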

Conclusion:

IAM DB Authentication makes it easier to manage access to your Amazon RDS databases by removing the need for hardcoded credentials. By following the above-mentioned steps, you can enable and use IAM-based authentication securely. This approach improves security, simplifies access control, and helps you stay compliant with your organization’s policies.
