Migration of DNS Hosted Zones in AWS https://blogs.perficient.com/2024/12/31/migration-of-dns-hosted-zones-in-aws/

Transferring Route 53 hosted zone records between AWS accounts using the CLI involves exporting the records from one account and then importing them to another. Here’s a step-by-step guide:

Migration of DNS Hosted Zones in AWS

The primary objective of this process is to migrate Route 53 hosted zone records seamlessly between AWS accounts while ensuring minimal disruption to DNS functionality. This involves securely transferring DNS records, preserving their integrity, maintaining availability, and ensuring linked AWS resources remain accessible. Additionally, cross-account DNS access may be implemented as needed to meet business requirements.

Img1

Objectives:

  • Migration Process Overview
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

 

Prerequisites:

  • Account Permissions: Ensure you have AmazonRoute53FullAccess permissions in both source and destination accounts. For domain transfers, additional permissions (TransferDomains, DisableDomainTransferLock, etc.) are required.
  • Export Tooling: Use the AWS CLI or SDK for listing and exporting DNS records, as Route 53 does not have a built-in export feature.
  • Destination Hosted Zone: Create a hosted zone in the destination account with the same domain name as the original. Note the new hosted zone ID for use in subsequent steps.
  • AWS Resource Dependencies: Identify resources tied to DNS records (such as EC2 instances or ELBs) and ensure these are accessible or re-created in the destination account if needed.

 

Configuration Overview:

1. Create an EC2 Instance and Download cli53 Using the Commands Below:

  • Download the cli53 utility on the EC2 instance; it will be used to list and export the DNS records from the source account:

wget https://github.com/barnybug/cli53/releases/download/0.8.16/cli53-linux-amd64

Note: Any Linux machine can be used instead of an EC2 instance, as long as cli53 is installed and AWS credentials are configured.

 

  • Move the cli53 binary to a directory on your PATH and make it executable, as sketched after the screenshot below.

Img2
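A minimal sketch of the assumed commands (the binary name matches the 0.8.16 release downloaded above; the install path is an assumption and may differ in your environment):

# Make the downloaded binary executable and move it onto the PATH
chmod +x cli53-linux-amd64
sudo mv cli53-linux-amd64 /usr/local/bin/cli53

# Optional sanity check: list the hosted zones visible to the configured AWS credentials
cli53 list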

2. Create Hosted Zone in Destination Account:

  • In the destination account, create a new hosted zone with the same domain name using the CLI or the console (a CLI sketch follows this list):
    • Take note of the new hosted zone ID.
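If you prefer the CLI, a hedged sketch using the AWS CLI (the domain name and caller reference are placeholders):

# Create the hosted zone in the destination account
aws route53 create-hosted-zone --name example.com --caller-reference dns-migration-001

# Retrieve the new hosted zone ID for the later steps
aws route53 list-hosted-zones-by-name --dns-name example.com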

3. Export DNS Records from Existing Hosted Zone:

  • Export the records with cli53 on the EC2 instance using the command below, then remove the NS and SOA records from the exported file, as the new hosted zone generates these by default.

Img3

Note: Microsoft.com was created as a dummy hosted zone for this demonstration.
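The command in the screenshot should look roughly like the following sketch (the zone name and output file are placeholders):

# Export the zone from the source account to a text file
cli53 export example.com > example.com.txt

# Edit example.com.txt and delete the NS and SOA records before importing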

4. Import DNS Records to Destination Hosted Zone:

  • Use the exported file to import records into the new hosted zone. To do this, copy all of the records from the domain.com.txt file.

Img4

  • Now log in to the destination account's Route 53 console and import the records copied from the exported file; refer to the screenshot below.
  • Save the changes and verify the records.

Img5
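Alternatively, the import can be run with cli53 from the EC2 instance once the destination account's credentials are configured; a hedged sketch (the flag usage, file name, and zone name are assumptions based on the cli53 documentation):

# Import the cleaned-up zone file into the destination hosted zone
cli53 import --file example.com.txt example.com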

5. Test DNS Records:

  • Verify DNS record functionality by querying records in the new hosted zone and ensuring that all services resolve correctly.
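For example, you can query the new hosted zone's name servers directly before switching delegation (the hostname and name server below are placeholders):

# Ask one of the new zone's name servers for a record
dig www.example.com @ns-123.awsdns-45.com

# Compare against what public resolvers currently return
nslookup www.example.com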

 

Best practices:

When migrating Route 53 hosted zones between AWS accounts, applying best practices helps ensure a smooth transition with minimal disruption. Here are key best practices for a successful Route 53 hosted zone migration:

1. Plan and Document the Migration Process

  • Detailed Planning: Outline each step of the migration process, including DNS record export, transfer, and import, as well as any required changes in the destination account.
  • Documentation: Document all DNS records, configurations, and dependencies before starting the migration. This helps in troubleshooting and serves as a backup.

2. Schedule Migration During Low-Traffic Periods

  • Reduce Impact: Perform the migration during off-peak hours to minimize potential disruption, especially if you need to update NS records or other critical DNS configurations.

3. Test in a Staging Environment

  • Dry Run: Before migrating a production hosted zone, perform a test migration in a staging environment. This helps identify potential issues and ensures that your migration plan is sound.
  • Verify Configurations: Ensure that the DNS records resolve correctly and that applications dependent on these records function as expected.

4. Use Route 53 Resolver for Multi-Account Setups

  • Centralized DNS Management: For environments with multiple AWS accounts, consider using Route 53 Resolver endpoints and sharing resolver rules through AWS Resource Access Manager (RAM). This enables efficient cross-account DNS resolution without duplicating hosted zones across accounts.

5. Avoid Overwriting NS and SOA Records

  • Use Default NS and SOA: Route 53 automatically creates NS and SOA records when you create a hosted zone. Retain these default records in the destination account, as they are linked to the new hosted zone’s configuration and AWS infrastructure.

6. Update Resource Permissions and Dependencies

  • Resource Links: DNS records may point to AWS resources like load balancers or S3 buckets. Ensure that these resources are accessible from the new account and adjust permissions if necessary.
  • Cross-Account Access: If resources remain in the source account, establish cross-account permissions to ensure continued access.

7. Validate DNS Records Post-Migration

  • DNS Resolution Testing: Test the new hosted zone’s DNS records using tools like dig or nslookup to confirm they are resolving correctly. Check application connectivity to confirm that all dependent services are operational.
  • TTL Considerations: Set a low TTL (Time to Live) on records before migration. This speeds up DNS propagation once the migration is complete, reducing the time it takes for changes to propagate.

8. Consider Security and Access Control

  • Secure Access: Ensure that only authorized personnel have access to modify hosted zones during the migration.

9. Establish a Rollback Plan

  • Rollback Strategy: Plan for a rollback if any issues arise. Keep the original hosted zone active until the new configuration is fully tested and validated.
  • Backup Data: Maintain a backup of all records and configurations so you can revert to the original settings if needed.

Conclusion

Migrating a Route 53 hosted zone between AWS accounts involves careful planning, especially to ensure DNS records are exported and imported correctly. After migrating, testing is crucial to confirm that DNS resolution works as expected. Cross-account setups may require additional configuration, such as Route 53 Resolver rules, to ensure seamless DNS functionality across environments.

From Code to Cloud: AWS Lambda CI/CD with GitHub Actions https://blogs.perficient.com/2024/12/30/from-code-to-cloud-aws-lambda-ci-cd-with-github-actions/

Introduction:

Integrating GitHub Actions for Continuous Integration and Continuous Deployment (CI/CD) in AWS Lambda deployments is a modern approach to automating the software development lifecycle. GitHub Actions provides a platform for automating workflows directly from your GitHub repository, making it a powerful tool for managing AWS Lambda functions.

Understanding GitHub Actions CI/CD Using Lambda

Integrating GitHub Actions for CI/CD with AWS Lambda streamlines the deployment process, enhances code quality, and reduces the time from development to production. By automating the testing and deployment of Lambda functions, teams can focus on building features and improving the application rather than managing infrastructure and deployment logistics. This integration is essential to modern DevOps practices, promoting agility and efficiency in software development.

Prerequisites:

  • GitHub Account and Repository:
  • AWS Account:
  • AWS IAM Credentials:

DEMO:

First, we will create a folder structure like below & open it in Visual Studio.

Image 1

After this, open AWS Lambda and create a function using Python with the default settings. Once created, we will see the default Python script. Ensure that the file name in AWS Lambda matches the one we created under the src folder.

Image 2

Now, we will create a GitHub repository with the same name as our folder, LearnLambdaCICD. Once created, it will prompt us to configure the repository. We will follow the steps mentioned in the GitHub Repository section to initialize and sync the repository.

Image 3

Next, create a folder named .github/workflows under the main folder. Inside the workflows folder, create a file named deploy_cicd.yaml with the following script.

Image 4

As per this YAML, we need to set up the AWS_DEFAULT_REGION according to the region we are using. In our case, we are using ap-south-1. We will also need the ARN number from the AWS Lambda page, and we will use that same value in our YAML file.

We then need to configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. To do this, navigate to AWS IAM, select the user used for deployments, and create a new access key.

Once created, we reference these keys in our YAML file as secrets. Next, we map the actual key values in our GitHub repository by navigating to Settings > Secrets and variables > Actions and configuring them.
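The workflow file itself appears only as a screenshot above, but its deployment step typically reduces to a few CLI commands run with those credentials; a minimal sketch under the assumption that the Lambda function name matches the project name (the function name, file paths, and region are placeholders):

# Package the function code
cd src
zip ../lambda_function.zip lambda_function.py
cd ..

# Push the new package to the existing Lambda function
aws lambda update-function-code --function-name LearnLambdaCICD --zip-file fileb://lambda_function.zip --region ap-south-1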

Updates:

We will update the default code in the lambda_function.py file in Visual Studio. This way, once the pipeline builds successfully, we can see the changes reflected in AWS Lambda as well. The modified file is shown below:

Image 5

Our next step will be to push the code to the Git repository using the following commands:

  • git add .
  • git commit -m "Last commit"
  • git push

Once the push is successful, navigate to GitHub Actions from your repository. You will see the pipeline deploying and eventually completing, as shown below. We can further examine the deployment process by expanding the deploy section. This will allow us to observe the steps that occurred during the deployment.

Image 6

Now, when we navigate to AWS Lambda to check the code, we can see that the changes we deployed have been applied.

Image 7

We can also see the directory changes in the left pane of AWS Lambda.

Conclusion:

As we can see, integrating GitHub Actions for CI/CD with AWS Lambda automates and streamlines the deployment process, allowing developers to focus on building features rather than managing deployments. This integration enhances efficiency and reliability, ensuring rapid and consistent updates to serverless applications. By leveraging GitHub’s powerful workflows and AWS Lambda’s scalability, teams can effectively implement modern DevOps practices, resulting in faster and more agile software delivery.

Enabling AWS IAM DB Authentication https://blogs.perficient.com/2024/12/24/enabling-aws-iam-db-authentication/

IAM Database Authentication lets you log in to your Amazon RDS database using your IAM credentials. This makes it easier to manage access, improves security, and provides more control over who can do what. Let’s look at how to set it up and use it effectively.

Objective:

IAM DB Authentication improves security, enables centralized user management, supports auditing, and ensures scalability for database access.

How it Works:

We can enable and use this feature in three simple steps:

  1. Enabling IAM DB authentication
  2. Enabling RDS access for an AWS IAM user
  3. Generating a token and connecting to the DB using the AWS IAM user

 To Enable IAM DB Authentication You Must Follow The Steps Below:

  1. Select the RDS instance
    1
  2. Click on Modify Button
    Picture2
  3. Navigate to the DB authentication section and select Password and IAM database authentication

Picture3

  1. For lower RDS engine versions, this option may not appear in the console, but you can enable it using the CLI (see the sketch after this list).
  2. Once selected, you will be asked to confirm the master password; after that, click Modify and save the changes.
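A hedged CLI sketch for enabling IAM authentication on an existing instance (the instance identifier is a placeholder):

# Turn on IAM database authentication for the instance
aws rds modify-db-instance --db-instance-identifier my-rds-instance --enable-iam-database-authentication --apply-immediately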

Enable RDS Access to AWS IAM User:

  1. Create an IAM policy

For example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:<region>:<account-id>:dbuser:<db-cluster-id>/<username>"
    }
  ]
}

  2. After creating the policy, navigate to the user you want to grant access to and attach the policy to that user.

Picture4
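The same attachment can be done from the CLI; a hedged sketch (the user name and policy ARN are placeholders):

# Attach the rds-db:connect policy to the IAM user
aws iam attach-user-policy --user-name my-iam-user --policy-arn arn:aws:iam::<account-id>:policy/rds-iam-connect-policy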

 

Connecting DB using AWS IAM User:

  1. First, generate a token to connect to the RDS database by running the command below:

aws rds generate-db-auth-token --hostname <db_endpoint_url> --port 3306 --region <region> --username <db_username>

Make sure the AWS CLI is configured; otherwise, you will get the error below. Configure it with the credentials of the IAM user you want to use to connect to the database.

Picture5

  2. Then connect to MySQL by passing that token in the command below:

mysql --host=<db_endpoint_url> --port=3306 --ssl-ca=<ssl_ca_cert_if_using_ssl> --user=<db_username> --password='generated_token_value'

Picture7
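Putting the two steps together, a minimal sketch that captures the token in a shell variable before connecting (the endpoint, region, username, and CA bundle path are placeholders):

# Generate a short-lived authentication token (tokens expire after about 15 minutes)
TOKEN=$(aws rds generate-db-auth-token --hostname mydb.abc123.ap-south-1.rds.amazonaws.com --port 3306 --region ap-south-1 --username iam_db_user)

# Connect using the token as the password
# Note: some MySQL client versions also need --enable-cleartext-plugin for IAM tokens
mysql --host=mydb.abc123.ap-south-1.rds.amazonaws.com --port=3306 --ssl-ca=global-bundle.pem --user=iam_db_user --password="$TOKEN"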

Conclusion:

IAM DB Authentication makes it easier to manage access to your Amazon RDS databases by removing the need for hardcoded credentials. By following the above-mentioned steps, you can enable and use IAM-based authentication securely. This approach improves security, simplifies access control, and helps you stay compliant with your organization’s policies.

[Webinar] Oracle Project-Driven Supply Chain at Roeslein & Associates https://blogs.perficient.com/2024/12/20/webinar-oracle-project-driven-supply-chain-at-roeslein-associates/

Roeslein & Associates, a global leader in construction and engineering, had complex business processes that could not scale to meet its needs. It wanted to set standard manufacturing processes to fulfill highly customized demand originating from its customers.

Roeslein chose Oracle Fusion Cloud SCM, which included Project-Driven Supply Chain for Inventory, Manufacturing, Order Management, Procurement, and Cost Management, and partnered with Perficient to deliver the implementation.

Join us as project manager Ben Mitchler discusses the migration to Oracle Cloud. Jeff Davis, Director of Oracle ERP at Perficient, will join Ben to share this great PDSC story.

Discussion will include:

  • Challenges with the legacy environment
  • On-premises to cloud migration approach
  • Benefits realized with the global SCM implementation

Save the date for this insightful webinar taking place January 22, 2025! Register now!

An Oracle Partner with 20+ years of experience, we are committed to partnering with our clients to tackle complex business challenges and accelerate transformative growth. We help the world’s largest enterprises and biggest brands succeed. Connect with us at the show to learn more about how we partner with our customers to forge the future.

Building GitLab CI/CD Pipelines with AWS Integration https://blogs.perficient.com/2024/12/18/building-gitlab-ci-cd-pipelines-with-aws-integration/

Building GitLab CI/CD Pipelines with AWS Integration

GitLab CI/CD (Continuous Integration/Continuous Deployment) is a powerful, integrated toolset within GitLab that automates the software development lifecycle (SDLC). It simplifies the process of building, testing, and deploying code, enabling teams to deliver high-quality software faster and more efficiently.

Understanding GitLab CI/CD

Getting started with GitLab CI/CD is simple. Start by creating a GitLab account and setting up a project for your application if you don't already have one. Then install and configure a GitLab Runner, the tool responsible for executing the tasks defined in your .gitlab-ci.yml file. The runner handles building, testing, and deploying your code, ensuring the pipeline works as intended. This setup streamlines your development process and helps automate workflows efficiently.

What is a GitLab Pipeline?

A pipeline automates the process of building, testing, and deploying applications. CI (Continuous Integration) means regularly merging code changes into a shared repository. CD (Continuous Deployment/Delivery) automates releasing the application to its target environment.

Gitlab 1

Code: In this step, you commit your updates or modifications and push your local code changes to the remote repository.

CI Pipeline: Once your code changes are committed and merged, you can run the build and test jobs defined in your pipeline. After completing these jobs, the code is ready to be deployed to staging and production environments.

Important Terms in GitLab CI/CD

1. The .gitlab-ci.yml file

A .gitlab-ci.yml file in a GitLab repository is used to define the Continuous Integration/Continuous Deployment (CI/CD) pipeline configuration. This file contains instructions on building, testing, and deploying your project.

2. Gitlab-Runner

In GitLab CI/CD, a “runner” refers to the agent that executes the jobs defined in the .gitlab-ci.yml pipeline configuration. Runners can be either shared or specific to the project.

Here’s how runners work:

  1. Shared Runners: GitLab provides shared runners available to all projects within a GitLab instance. These runners are managed by GitLab administrators and can be used by any project. Shared runners are convenient if we don’t want to set up and manage our own runners.
  2. Specific Runners: We can also set up our own runners that are dedicated to our project. These runners can be deployed on our infrastructure (e.g., on-premises servers, cloud instances) or using a variety of methods like Docker, Kubernetes, shell, or Docker Machine. Specific runners offer more control over the execution environment and can be customized to meet the specific needs of our project.

3. Pipeline:

Pipelines are made up of jobs and stages:

  • Jobs define what you want to do. For example, test code changes, or deploy to a dev environment.
  • Jobs are grouped into stages. Each stage contains at least one job. Common stages include build, test, and deploy.
  • You can run the pipeline either manually or from the pipeline schedule Job.

The first option is manual: when you commit or merge any changes into the code, the pipeline is triggered directly.

The second option uses rules; for that, you need to create a scheduled job.

 

Gitlab 2

 

 4. Schedule Job:

We use scheduled jobs to automate pipeline execution. To create a scheduled job, follow these steps:

  1. Navigate to Schedule Settings: Go to Build, select Pipeline Schedules, and click Create New Schedule.
  2. Configure Schedule Details:
    1. Description: Enter a name for the scheduled job.
    2. Cron Timezone: Set the timezone according to your requirements.
    3. Interval Pattern: Define the cron schedule to determine when the pipeline should run. If you   prefer to run it manually by clicking the play button when needed, uncheck the Activate button at the end.
    4. Target Branch: Specify the branch where the cron job will run.
  3. Add Variables: Include any variables mentioned in the rules section of your .gitlab-ci.yml file to ensure the pipeline runs correctly.
    1. Input variable key = SCHEDULE_TASK_NAME
    2. Input variable value = prft-deployment

Gitlab 3

 

Gitlab3.1

Demo

Prerequisites for GitLab CI/CD 

  • GitLab Account and Project: You need an active GitLab account and a project repository to store your source code and set up CI/CD workflows.
  • Server Environment: You should have access to a server environment, such as an AWS EC2 instance, where you install the GitLab Runner.
  • Version Control: Using a version control system like Git is essential for managing your source code effectively. With Git and a GitLab repository, you can easily track changes, collaborate with your team, and revert to previous versions whenever necessary.

Configure Gitlab-Runner

  • Launch an AWS EC2 instance with any operating system of your choice. Here, I used Ubuntu. Configure the instance with basic settings according to your requirements.
  • SSH into the EC2 instance and follow the steps below to install GitLab Runner on Ubuntu.
  1. sudo apt install -y curl
  2. curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
  3. sudo apt install gitlab-runner

After installing GitLab Runner, proceed to register it. Navigate to GitLab, go to Settings, then CI/CD, and under Runners, click on the three dots to access the registration options.

And copy-paste the below cmd:

Gitlab 4
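The copied command starts the runner registration; at its simplest it looks like the following sketch (the URL and token values come from your own project's Runners page):

# Start interactive registration; it prompts for the URL, token, description, tags, and executor covered below
sudo gitlab-runner register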

Run the following command on your EC2 instance and provide the necessary details for configuring the runner based on your requirements:

  1. URL: Press enter to keep it as the default.
  2. Token: Use the default token and press enter.
  3. Description: Add a brief description for the runner.
  4. Tags: This is critical; the tag names define your GitLab Runner and are referenced in your .gitlab-ci.yml file.
  5. Notes: Add any additional notes if required.
  6. Executor: Choose shell as the executor.

Gitlab 5

Check the GitLab Runner status and active state using the commands below:

  • gitlab-runner verify
  • gitlab-runner list

Gitlab 6

Also check that the runner shows as active in GitLab:

Navigate to GitLab, then go to Settings and select GitLab Runners.

 

Gitlab 7

Configure the .gitlab-ci.yml file

  • Stages: Stages define the sequence in which jobs are executed.
    • build
    • deploy
  • Build-job: This job is executed in the build stage, the first stage to run.
    • Stage: build
    • Script:
      • echo "Compiling the code..."
      • echo "Compile complete."
    • Rules:
      • if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
    • Tags:
      • prft-test-runner
  • Deploy-job: This job is executed in the deploy stage.
    • Stage: deploy   # It only executes when the jobs in the build stage (and test stage, if added) have completed successfully.
    • Script:
      • echo "Deploying application..."
      • echo "Application successfully deployed."
    • Rules:
      • if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
    • Tags:
      • prft-test-runner

Note: If needed, you can add a test job similar to the BUILD and DEPLOY jobs.

Run Pipeline

Since the Cron job is already configured in the schedule, simply click the Play button to automatically trigger your pipeline.

Gitlab 8

To check the pipeline status, go to Build and then Pipeline. Once the Build job completes successfully, the Deploy job starts (if you added a Test job, it runs in between).

Gitlab 9

Output

We successfully completed BUILD & DEPLOY Jobs.

Gitlab 10

Build Job

Gitlab 11

Deploy Job

Gitlab 12

Conclusion

As we can see, the BUILD & DEPLOY jobs pipeline has successfully passed.

We’ve provided a brief overview of GitLab CI/CD pipelines and a practical demonstration of how its components work together. Hopefully, everything is running smoothly on your end!

 

A New Normal: Developer Productivity with Amazon Q Developer https://blogs.perficient.com/2024/12/13/a-new-normal-developer-productivity-with-amazon-q-developer/

Amazon Q was front and center at AWS re:Invent last week.  Q Developer is emerging as required tooling for development teams focused on custom development, cloud-native services, and the wide range of legacy modernizations, stack conversions and migrations required of engineers.  Q Developer is evolving beyond “just” code generation and is timing its maturity well alongside the rise of agentic workflows with dedicated agents playing specific roles within a process… a familiar metaphor for enterprise developers.

The Promise of Productivity

Amazon Q Developer makes coders more effective by tackling repetitive and time-consuming tasks. Whether it’s writing new code, refactoring legacy systems, or updating dependencies, Q brings automation and intelligence to the daily work experience:

  • Code generation including creation of full classes based off natural language comments
  • Transformation of legacy code into other programming languages
  • AI-fueled analysis of existing codebases
  • Discovery and remediation of dependencies and outdated libraries
  • Automation of unit tests and system documentation
  • Consistency of development standards across teams

Real Impacts Ahead

As these tools quickly evolve, the way in which enterprises, product teams and their delivery partners approach development must now transform along with them.  This reminds me of a favorite analogy, focused on the invention of the spreadsheet:

The story goes that it would take weeks of manual analysis to calculate even minor changes to manufacturing formulas, and providers would compute those projections on paper, and return days or weeks later with the results.  With the rise of the spreadsheet, those calculations were completed nearly instantly – and transformed business in two interesting ways:  First, the immediate availability of new information made curiosity and innovation much more attainable.  And second, those spreadsheet-fueled service providers (and their customers) had to rethink how they were planning, estimating and delivering services considering this revolutionary technology.  (Planet Money Discussion)

This certainly rings a bell with the emergence of GenAI and agentic frameworks and their impacts on software engineering.  The days ahead will see a pivot in how deliverables are estimated, teams are formed, and the roles humans play across coding, testing, code reviews, documentation and project management.  What remains consistent will be the importance of trusted and transparent relationships and a common understanding of expectations around outcomes and value provided by investment in software development.

The Q Experience

Q Developer integrates with multiple IDEs to provide both interactive and asynchronous actions. It works with leading identity providers for authentication and provides an administrative console to manage user access and assess developer usage, productivity metrics and per-user subscription costs.

The sessions and speakers did an excellent job addressing the most common concerns: Safety, Security and Ownership.  Customer code is not used to train models on the Pro Tier, but the Free version requires opting out.  Foundation models are updated on a regular basis.  And most importantly: you own the generated code, although with that, the same level of responsibility and ownership falls to you for testing & validation – just like traditional development outputs.

The Amazon Q Dashboard provides visibility to user activity, metrics on lines of code generated, and even the percentage of Q-generated code accepted by developers, which provides administrators a clear, real-world view of ROI on these intelligent tooling investments.

Lessons Learned

Experts and early adopters at re:Invent shared invaluable lessons for making the most of Amazon Q:

  • Set guardrails and develop an acceptable use policy to clarify expectations for all team members
  • Plan a thorough developer onboarding process to maximize adoption and minimize the unnecessary costs of underutilization
  • Start small and evangelize the benefits unique to your organization
  • Expect developers to become more effective Prompt Engineers over time
  • Expect hidden productivity gains like less context-switching, code research, etc.

The Path Forward

Amazon Q is more than just another developer tool—it’s a gateway to accelerating workflows, reducing repetitive tasks, and focusing talent on higher-value work. By leveraging AI to enhance coding, automate infrastructure, and modernize apps, Q enables product teams to be faster, smarter, and more productive.

As this space continues to evolve, the opportunities to optimize development processes are real – and will have a huge impact from here on out.  The way we plan, execute and measure software engineering is about to change significantly.

All In on AI: Amazon’s High-Performance Cloud Infrastructure and Model Flexibility https://blogs.perficient.com/2024/12/10/all-in-on-ai-amazons-high-performance-cloud-infrastructure-and-model-flexibility/

At AWS re:Invent last week, Amazon made one thing clear: it’s setting the table for the future of AI. With high-performance cloud primitives and the model flexibility of Bedrock, AWS is equipping customers to build intelligent, scalable solutions with connected enterprise data. This isn’t just about technology—it’s about creating an adaptable framework for AI innovation:

Cloud Primitives: Building the Foundations for AI

Generative AI demands robust infrastructure, and Amazon is doubling down on its core infrastructure to meet the scale and complexity of these market needs across foundational components:

  1. Compute:
    • Graviton Processors: AWS-native, ARM-based processors offering high performance with lower energy consumption.
    • Advanced Compute Instances: P6 instances with NVIDIA Blackwell GPUs, delivering up to 2.5x faster GenAI compute speeds.
  2. Storage Solutions:
    • S3 Table Buckets: Optimized for Iceberg tables and Parquet files, supporting scalable and efficient data lake operations critical to intelligent solutions.
  3. Databases at Scale:
    • Amazon Aurora: Multi-region, low-latency relational databases with strong consistency to keep up with massive and complex data demands.
  4. Machine Learning Accelerators:
    • Trainium2: Specialized chip architecture ideal for training and deploying complex models with improved price performance and efficiency.
    • Trainium2 UltraServers: Connected clusters of Trn2 servers with NeuronLink interconnect for massive scale and compute power for training and inference for the world’s largest models – with continued partnership with companies like Anthropic.

 Amazon Bedrock: Flexible AI Model Access

Infrastructure provides the baseline requirements for enterprise AI, setting the table for business outcome-focused innovation.  Enter Amazon Bedrock, a platform designed to make AI accessible, flexible, and enterprise-ready. With Bedrock, organizations gain access to a diverse array of foundation models ready for custom tailoring and integration with enterprise data sources:

  • Model Diversity: Access 100+ top models through the Bedrock Marketplace, guiding model availability and awareness across business use cases.
  • Customizability: Fine-tune models using organizational data, enabling personalized AI solutions.
  • Enterprise Connectivity: Kendra GenAI Index supports ML-based intelligent search across enterprise solutions and unstructured data, with natural language queries across 40+ enterprise sources.
  • Intelligent Routing: Dynamic routing of requests to the most appropriate foundation model to optimize response quality and efficiency.
  • Nova Models: New foundation models offer industry-leading price performance (Micro, Lite, Pro & Premier) along with specialized versions for images (Canvas) and video (Reel).

 Guidance for Effective AI Adoption

As important as technology is, it’s critical to understand success with AI is much more than deploying the right model.  It’s about how your organization approaches its challenges and adapts to implement impactful solutions.  I took away a few key points from my conversations and learnings last week:

  1. Start Small, Solve Real Problems: Don’t try to solve everything at once. Focus on specific, lower risk use cases to build early momentum.
  2. Data is King: Your AI is only as smart as the data it’s fed, so “choose its diet wisely”.  Invest in data preparation, as 80% of AI effort is related to data management.
  3. Empower Experimentation: AI innovation and learning thrives when teams can experiment and iterate with decision-making autonomy while focused on business outcomes.
  4. Focus on Outcomes: Work backward from the problem you’re solving, not the specific technology you’re using.  “Fall in love with the problem, not the technology.”
  5. Measure and Adapt: Continuously monitor model accuracy, retrieval-augmented generation (RAG) precision, response times, and user feedback to fine-tune performance.
  6. Invest in People and Culture: AI adoption requires change management. Success lies in building an organizational culture that embraces new processes, tools and workflows.
  7. Build for Trust: Incorporate contextual and toxicity guardrails, monitoring, decision transparency, and governance to ensure your AI systems are ethical and reliable.

Key Takeaways and Lessons Learned

Amazon’s AI strategy reflects the broader industry shift toward flexibility, adaptability, and scale. Here are the top insights I took away from their positioning:

  • Model Flexibility is Essential: Businesses benefit most when they can choose and customize the right model for the job. Centralizing the operational framework, not one specific model, is key to long-term success.
  • AI Must Be Part of Every Solution: From customer service to app modernization to business process automation, AI will be a non-negotiable component of digital transformation.
  • Think Beyond Speed: It’s not just about deploying AI quickly—it’s about integrating it into a holistic solution that delivers real business value.
  • Start with Managed Services: For many organizations, starting with a platform like Bedrock simplifies the journey, providing the right tools and support for scalable adoption.
  • Prepare for Evolution: Most companies will start with one model but eventually move to another as their needs evolve and learning expands. Expect change – and build flexibility into your AI strategy.

The Future of AI with AWS

AWS isn’t just setting the table—it’s planning for an explosion of enterprises ready to embrace AI. By combining high-performance infrastructure, flexible model access through Bedrock, and simplified adoption experiences, Amazon is making its case as the leader in the AI revolution.

For organizations looking to integrate AI, now is the time to act. Start small, focus on real problems, and invest in the tools, people, and culture needed to scale. With cloud infrastructure and native AI platforms, the business possibilities are endless. It’s not just about AI—it’s about reimagining how your business operates in a world where intelligence is the new core of how businesses work.

CCaaS Migration Best Practices: Tips for moving your customer care platform to the cloud https://blogs.perficient.com/2024/12/06/ccaas-migration-best-practices-tips-for-moving-your-customer-care-platform-to-the-cloud/

Migrating to a cloud-delivered Contact Center as a Service (CCaaS) solution can revolutionize how your organization delivers customer service. However, this transition requires careful planning and execution to avoid disruptions. Assuming you have selected a CCaaS platform that aligns with your organizational needs, the following best practices outline key considerations for a seamless migration.

A successful migration to CCaaS not only enhances operational efficiency and scalability but also ensures a significant improvement in service delivery, directly impacting customer satisfaction and retention. Organizations should consider the risks of not embracing modern cloud-based customer care solutions, which can include diminished customer service capabilities and potential costs due to outdated or inflexible systems. Moreover, organizations that delay this shift risk falling behind competitors who can adapt more quickly to market demands and customer needs. Thus, embarking on a well-planned migration journey is imperative for companies aiming to optimize their customer care operations and secure a competitive advantage in their respective markets.

 

  1. Physical Infrastructure Migration

Understanding your current environment is critical for a successful transition. Start with a thorough site review to document the infrastructure and identify unique user requirements. Engage with call center managers, team leaders, and power users to uncover specific needs and configured features such as whisper settings, omnichannel components, call management, etc.

Factors such as bandwidth and latency are paramount for seamless operations. Evaluate your facility’s connectivity for both on-site and remote users, ensuring it aligns with the CCaaS product requirements. Fortunately, modern CCaaS solutions such as Amazon Connect, Twilio Flex and Five9 supply agent connectivity tools to verify that workers have sufficient resources to provide good customer service over various channels.

Additionally, document call treatments and station-specific configurations like call coverage paths. Legacy components requiring continued functionality should be cataloged to prepare for integration.

 

  2. Change Management Planning

Change management is essential to mitigate risks and maximize adoption. A staged cutover strategy is recommended over a single-event migration, allowing for gradual testing and adjustments.

Develop a robust testing strategy to validate the platform’s performance under real-world conditions. Complement this with an organizational enablement strategy to train users and ensure they are comfortable with the new system. Adoption by your business units and users is one of the most critical factors which will determine the success of your CCaaS migration.

 

  3. Operational Considerations

Operational continuity is vital during migration. Start by understanding the reporting requirements for business managers to ensure no loss of visibility into critical metrics. Additionally, review monitoring processes to maintain visibility into system performance post-migration.

 

  4. Integration Planning

Integrating legacy infrastructure with the new CCaaS platform can present significant challenges. Document existing components, including FXO/FXS interfaces, Workforce Management solutions, FAX systems, wallboards, and specialty dialers. Verify that integrations comply with any regulatory requirements, such as HIPAA or FINRA.

Interactive Voice Response (IVR) systems often require specific integrations with local data sources or enterprise middleware. Assess these integrations to ensure call flows function as intended. For specialized applications, verify that they meet operational needs within the new environment.

 

  5. Fault Tolerance and Disaster Recovery

Testing fault tolerance and disaster recovery capabilities are critical steps in any CCaaS migration. Develop and execute a failsafe testing plan to ensure resilience against both premise-level and carrier-level failures. It is important to align to your IT organization’s standards for recovery time objective (RTO) and business up-time expectations. Disaster recovery plans must reflect these measures and be tested to protect against potential downtime.

 

  6. Scalability and Compliance

CCaaS solutions must scale with your business. Validate scalability by conducting load tests and documenting performance metrics. Compliance is equally important—ensure your migration adheres to industry standards like HIPAA, FedRAMP, or FINRA through thorough compliance testing and documentation.

 

Conclusion

A successful CCaaS migration hinges on meticulous planning, comprehensive testing, and strong change management. By following these best practices, you can minimize risks, ensure operational continuity, and set your organization up for long-term success with its new contact center platform. The result? An enhanced customer experience and a contact center infrastructure that grows with your business.

 

 

Perficient Achieves AWS Healthcare Services Competency, Strengthening Our Commitment to Healthcare https://blogs.perficient.com/2024/11/29/perficient-achieves-aws-healthcare-services-competency-strengthening-our-commitment-to-healthcare/

At Perficient, we’re proud to announce that we have achieved the AWS Healthcare Services Competency! This recognition highlights our ability to deliver transformative cloud solutions tailored to the unique challenges and opportunities in the healthcare industry.

Healthcare organizations are under increasing pressure to innovate while maintaining compliance, ensuring security, and improving patient outcomes. Achieving the AWS Healthcare Services Competency validates our expertise in helping providers, payers, and life sciences organizations navigate these complexities and thrive in a digital-first world.

A Proven Partner in Healthcare Transformation

Our team of AWS-certified experts has extensive experience working with leading healthcare organizations to modernize systems, accelerate innovation, and deliver measurable outcomes. By aligning with AWS’s best practices and leveraging the full suite of AWS services, we’re helping our clients build a foundation for long-term success.

The Future of Healthcare Starts Here

This milestone is a reflection of our ongoing commitment to innovation and excellence. As we continue to expand our collaboration with AWS, we’re excited to partner with healthcare organizations to create solutions that enhance lives, empower providers, and redefine what’s possible.

Ready to Transform?

Learn more about how Perficient’s AWS expertise can drive your healthcare organization’s success.

Discover the Benefits of Salesforce Pay Now https://blogs.perficient.com/2024/11/22/discover-the-benefits-of-salesforce-pay-now/

Blog Objectives

  • Understand the advantages that Pay Now provides for both your business and your customers.
  • Understand how Pay Now links can help reduce overdue payments.
  • Understand how Pay Now streamlines payment processes across various channels, including Commerce, Sales, and Service.

 

Accelerate Your Payments:

Late or overdue payments can significantly affect your business operations.

Here are some striking statistics to consider:

  • Midsize companies spend an average of 14 hours weekly pursuing unpaid invoices.
  • Approximately one-third of small businesses are at risk of closing due to late payments.

These delays can hinder your ability to pay suppliers, drive revenue, and expand your business. Additionally, the time and resources spent on collecting overdue payments can be considerable.

Introducing Salesforce Pay Now:

Salesforce Pay Now is a straightforward payment solution that simplifies the collection process. By embedding payment links within your Salesforce applications, you provide customers with an easy and convenient way to complete transactions. The process is seamless: just share the link, and get paid.

Salesforce Pay Now is uniquely designed to extend payment functionalities throughout the Salesforce CRM ecosystem. With all your data integrated on one platform, you can accelerate revenue collection without complicated setups.

Prerequisites:

Before you can enable Payments manually, first enable Digital Experiences and set up a dedicated Experience Cloud site for Payments.

Prereq

Image credit: Salesforce

How Does Pay Now Work?:

With Pay Now, you can create a unique or reusable payment link to share with your customers. This link directs them to a customized, mobile-responsive webpage where they can select their preferred payment method. Pay Now even allows you to customize payment options and supports multiple currencies.

1

Image credit: Salesforce

Pay Now with Salesforce Starter Suite:

Both the Salesforce Starter Suite and Pro Suite now include Pay Now, making it an ideal solution for driving revenue growth through direct payments. When an opportunity closes, creating a Pay Now link is simple, allowing customers to make payments easily.

Enhance Field Service Operations:

Your field service technicians are not only skilled workers but also potential sales representatives. With Pay Now, they can collect payments on-site for services and products, or even upsell additional offerings.

 

For instance, a technician can quickly generate a Pay Now link and send it to a customer’s phone. The customer can pay immediately before the technician leaves, ensuring a smooth transaction and an improved service experience.

2

Image credit: Salesforce

Upselling and Cross-Selling Made Easy:

Field service agents have unique opportunities to engage with customers during service calls. For example, Jessica Tanaka, a support agent at Ursa Major Solar, utilizes Pay Now within the Salesforce Service Cloud to promote add-ons or warranties. After making a sale, she sends a payment link directly to the customer’s mobile device, allowing for instant payment and turning service calls into revenue opportunities.

 

Integrate Pay Now into Bot Interactions:

Sales bots can also leverage Pay Now to facilitate transactions. By embedding Pay Now links, these bots can assist customers with order payments and prepayments seamlessly. Furthermore, using Einstein bot recommendations, your service team can identify and act on revenue-generating opportunities based on customer interactions.

3

Image credit: Salesforce

Key Advantages of Pay Now in Starter and Pro Suites:

  1. Available out of the box with both suites.
  2. Easy guided setup for configuring Pay Now.
  3. Quickly generate and send payment links directly from opportunity records.
  4. Start collecting payments within days of setup.
  5. Enhance cash flow and reduce the risks associated with late payments.

Recent Enhancements Include:

  1. Expanded Checkout Options: Customers can view taxes and shipping on a new Pay Now Checkout page.
  2. Link Management: Configure payment links for single or multiple uses and set expiration dates.
  3. Itemized Billing: Customers can see detailed charges on the Pay Now page.
  4. QR Code Payments: Customers can make payments via a scannable QR code sent via text or email.
  5. Express Payments: Enable fast checkout options using digital wallets.

 

Pay Now Across All Salesforce Clouds:

Salesforce Pay Now is versatile and integrates seamlessly across all Salesforce clouds—be it Commerce, Sales, Service, or Marketing. With Pay Now, the possibilities are limitless. Engage customers wherever they are and streamline your payment process with just a click.

A Step-by-Step Guide to Extracting Workflow Details for PC-IDMC Migration Without a PC Database https://blogs.perficient.com/2024/11/08/a-step-by-step-guide-to-extracting-workflow-details-for-pc-idmc-migration-without-a-pc-database/

In the PC-IDMC conversion process, it can be challenging to gather detailed information about workflows. Specifically, we often need to determine:

  • The number of transformations used in each mapping.
  • The number of sessions utilized within the workflow.
  • Whether any parameters or variables are being employed in the mappings.
  • The count of reusable versus non-reusable sessions used in the workflow etc.

To obtain these details, we currently have to open each workflow individually, which is time-consuming. Alternatively, we could use complex queries to extract this information from the PowerCenter metadata in the database tables.

This section focuses on XQuery, a versatile language designed for querying and extracting information from XML files. When workflows are exported from the PowerCenter repository or Workflow Manager, the data is generated in XML format. By employing XQuery, we can effectively retrieve the specific details and data associated with the workflow from this XML file.

Step-by-Step Guide to Extracting Workflow Details Using XQuery:

For instance, if the requirement is to retrieve all reusable and non-reusable sessions for a particular workflow or a set of workflows, we can utilize XQuery to extract this data efficiently.

Step 1:
Begin by exporting the workflows from either the PowerCenter Repository Manager or the Workflow Manager. You have the option to export multiple workflows together as one XML file, or you can export a single workflow and save it as an individual XML file.

Step 1 Pc Xml Files

Step 2:
Develop the XQuery based on our specific requirements. In this case, we need to fetch all the reusable and non-reusable sessions from the workflows.

let $header := "Folder_Name,Workflow_Name,Session_Name,Mapping_Name"
let $dt := (let $data := 
    ((for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return
        for $w in $f/WORKFLOW
        let $wn:= data($w/@NAME)
        return
            for $s in $w/SESSION
            let $sn:= data($s/@NAME)
            let $mn:= data($s/@MAPPINGNAME)
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>)
    |           
    (for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return          
        for $s in $f/SESSION
        let $sn:= data($s/@NAME)
        let $mn:= data($s/@MAPPINGNAME)
        return
            for $w in $f/WORKFLOW
            let $wn:= data($w/@NAME)
            let $wtn:= data($w/TASKINSTANCE/@TASKNAME)
            where $sn = $wtn
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>))
       for $test in $data
          return
            replace($test/text()," ",""))
      return
 string-join(($header,$dt), "
")

Step 3:
Select the necessary third-party tools to execute the XQuery or opt for online tools if preferred. For example, you can use BaseX, Altova XMLSpy, and others. In this instance, we are using Basex, which is an open-source tool.

Create a database in Basex to run the XQuery.

Step 3 Create Basex Db

Step 4: Enter the created XQuery into the third-party tool or online tool to run it and retrieve the results.
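If you prefer the command line over the BaseX GUI, the same query can be run against the exported XML directly; a hedged sketch (the file names are placeholders, and the -i flag for the input document is an assumption based on the BaseX standalone CLI):

# Run the XQuery against the exported workflow XML and save the CSV output
basex -i workflows.xml sessions.xq > sessions.csv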

Step 4 Execute Xquery

Step 5:
Export the results in the necessary file extensions.

Step 5 Export The Output

Conclusion:
These simple techniques allow you to extract workflow details effectively, aiding in the planning and early detection of complex manual conversion workflows. Many queries exist to fetch different kinds of data. If you need more XQueries, just leave a comment below!

Effortless Data Updates in Salesforce: Leveraging the Update Record Function in LWC https://blogs.perficient.com/2024/11/04/leveraging-the-update-record-function-in-lwc/

The updateRecord function in Lightning Web Components (LWC) is a powerful tool for Salesforce developers, allowing for seamless data updates directly from the user interface. This feature enhances user experience by providing quick and efficient updates to Salesforce records without the need for page refreshes. In this guide, we’ll explore how the update record function works, its key benefits, and best practices for implementing it in your LWC projects.

UpdateRecord Function:

The updateRecord function in Lightning Web Components (LWC) is used to update a record in Salesforce. It is part of the lightning/uiRecordApi module and allows you to update records with minimal Apex code. The updateRecord function takes an object as input. It includes the fields to update, and optionally, client options to control the update behaviour.

import {updateRecord} from 'lightning/uiRecordApi';

updateRecord(recordInput, clientOptions)
    .then((record) => {
    //handle success
    })
    .catch((error) =>{
    // handle error
    });

Reference: https://developer.salesforce.com/docs/platform/lwc/guide/reference-update-record.html

The function directly modifies the record data in Salesforce, eliminating the need for manual API calls or complex data manipulation.

Key Features and Usage of  updateRecord:

  • Field-Level Security: Ensure that the fields you’re updating are accessible to the current user based on field-level security settings.
  • Data Validation: Perform necessary data validation before updating the record to prevent invalid data from being saved.
  • Field-Specific updates: You can target specific fields for modification, ensuring granular control over the updated data.
  • Automatic UI Refresh: After a successful update, the component’s UI is automatically refreshed to reflect the changes, providing a seamless user experience. Example:
import {LightningElement, api } from 'lwc';
import { updateRecord } from 'lightning/uiRecordApi';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';
export default class UpdateRecordExample extends LightningElement { 
    @api recordId; // Assume this is passed to the component
    handleUpdate() {
     const fields = {
       Id: this.recordId,
       Name: 'Updated Name', // Example field to update 
       Phone: '123-456-7890' // Another example field
      };

     const recordInput = { fields };
      updateRecord(recordInput)
         .then(() => {
             this.dispatchEvent(
                new ShowToastEvent({
                        title: 'Success',
                        message: 'Record updated successfully!', 
                        variant: 'success'
                        })
                      );
                  })
          .catch((error) => {
             this.dispatchEvent(
                 new ShowToastEvent({
                      title: 'Error updating record',
                      message: error.body.message,
                      variant: 'error'
                     })
                 );
             });
     }
}

Conclusion:

Incorporating the update record function in Lightning Web Components can greatly enhance both the functionality and user experience of your Salesforce applications. By simplifying the process of data manipulation on the client side, this function reduces the need for page reloads, improves performance, and allows for a more interactive and responsive interface. Mastering this feature not only streamlines development but also empowers users with a smoother, more efficient workflow. Embracing such tools keeps your Salesforce solutions agile and ready to meet evolving business needs.
