All In on AI: Amazon’s High-Performance Cloud Infrastructure and Model Flexibility
https://blogs.perficient.com/2024/12/10/all-in-on-ai-amazons-high-performance-cloud-infrastructure-and-model-flexibility/
Tue, 10 Dec 2024

At AWS re:Invent last week, Amazon made one thing clear: it’s setting the table for the future of AI. With high-performance cloud primitives and the model flexibility of Bedrock, AWS is equipping customers to build intelligent, scalable solutions with connected enterprise data. This isn’t just about technology; it’s about creating an adaptable framework for AI innovation.

Cloud Primitives: Building the Foundations for AI

Generative AI demands robust infrastructure, and Amazon is doubling down on its core primitives to meet the scale and complexity of these market needs across foundational components:

  1. Compute:
    • Graviton Processors: AWS-native, ARM-based processors offering high performance with lower energy consumption.
    • Advanced Compute Instances: P6 instances with NVIDIA Blackwell GPUs, delivering up to 2.5x faster GenAI compute speeds.
  2. Storage Solutions:
    • S3 Table Buckets: Optimized for Iceberg tables and Parquet files, supporting scalable and efficient data lake operations critical to intelligent solutions.
  3. Databases at Scale:
    • Amazon Aurora: Multi-region, low-latency relational databases with strong consistency to keep up with massive and complex data demands.
  4. Machine Learning Accelerators:
    • Trainium2: Specialized chip architecture ideal for training and deploying complex models with improved price performance and efficiency.
    • Trainium2 UltraServers: Connected clusters of Trn2 servers with NeuronLink interconnect, delivering massive scale and compute power for training and inference on the world’s largest models, in continued partnership with companies like Anthropic.

 Amazon Bedrock: Flexible AI Model Access

Infrastructure provides the baseline requirements for enterprise AI, setting the table for business outcome-focused innovation. Enter Amazon Bedrock, a platform designed to make AI accessible, flexible, and enterprise-ready. With Bedrock, organizations gain access to a diverse array of foundation models ready for custom tailoring and integration with enterprise data sources (a brief API sketch follows the list below):

  • Model Diversity: Access 100+ top models through the Bedrock Marketplace, expanding model availability and awareness across business use cases.
  • Customizability: Fine-tune models using organizational data, enabling personalized AI solutions.
  • Enterprise Connectivity: Kendra GenAI Index supports ML-based intelligent search across enterprise solutions and unstructured data, with natural language queries across 40+ enterprise sources.
  • Intelligent Routing: Dynamic routing of requests to the most appropriate foundation model to optimize response quality and efficiency.
  • Nova Models: New foundation models offer industry-leading price performance (Micro, Lite, Pro & Premier) along with specialized versions for images (Canvas) and video (Reel).
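To make the model-flexibility point concrete, here is a minimal sketch of calling a Bedrock-hosted model from Python with boto3’s Converse API. The region, model ID (Nova Lite here), and prompt are illustrative assumptions, and your account must have been granted access to the chosen model in the Bedrock console; swapping models is a one-line change, which is exactly the flexibility Bedrock is selling.

import boto3

# Assumed region and model ID; replace with any model enabled in your account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # illustrative Nova Lite model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize our Q3 support-ticket themes."}],
    }],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])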

 Guidance for Effective AI Adoption

As important as technology is, it’s critical to understand that success with AI is about much more than deploying the right model. It’s about how your organization approaches its challenges and adapts to implement impactful solutions. I took away a few key points from my conversations and learnings last week:

  1. Start Small, Solve Real Problems: Don’t try to solve everything at once. Focus on specific, lower risk use cases to build early momentum.
  2. Data is King: Your AI is only as smart as the data it’s fed, so “choose its diet wisely”.  Invest in data preparation, as 80% of AI effort is related to data management.
  3. Empower Experimentation: AI innovation and learning thrives when teams can experiment and iterate with decision-making autonomy while focused on business outcomes.
  4. Focus on Outcomes: Work backward from the problem you’re solving, not the specific technology you’re using.  “Fall in love with the problem, not the technology.”
  5. Measure and Adapt: Continuously monitor model accuracy, retrieval-augmented generation (RAG) precision, response times, and user feedback to fine-tune performance.
  6. Invest in People and Culture: AI adoption requires change management. Success lies in building an organizational culture that embraces new processes, tools and workflows.
  7. Build for Trust: Incorporate contextual and toxicity guardrails, monitoring, decision transparency, and governance to ensure your AI systems are ethical and reliable.

Key Takeaways and Lessons Learned

Amazon’s AI strategy reflects the broader industry shift toward flexibility, adaptability, and scale. Here are the top insights I took away from their positioning:

  • Model Flexibility is Essential: Businesses benefit most when they can choose and customize the right model for the job. Centralizing the operational framework, not one specific model, is key to long-term success.
  • AI Must Be Part of Every Solution: From customer service to app modernization to business process automation, AI will be a non-negotiable component of digital transformation.
  • Think Beyond Speed: It’s not just about deploying AI quickly—it’s about integrating it into a holistic solution that delivers real business value.
  • Start with Managed Services: For many organizations, starting with a platform like Bedrock simplifies the journey, providing the right tools and support for scalable adoption.
  • Prepare for Evolution: Most companies will start with one model but eventually move to another as their needs evolve and learning expands. Expect change – and build flexibility into your AI strategy.

The Future of AI with AWS

AWS isn’t just setting the table—it’s planning for an explosion of enterprises ready to embrace AI. By combining high-performance infrastructure, flexible model access through Bedrock, and simplified adoption experiences, Amazon is making its case as the leader in the AI revolution.

For organizations looking to integrate AI, now is the time to act. Start small, focus on real problems, and invest in the tools, people, and culture needed to scale. With cloud infrastructure and native AI platforms, the business possibilities are endless. It’s not just about adopting AI; it’s about reimagining how your business operates with intelligence at its core.

Perficient Achieves AWS Healthcare Services Competency, Strengthening Our Commitment to Healthcare
https://blogs.perficient.com/2024/11/29/perficient-achieves-aws-healthcare-services-competency-strengthening-our-commitment-to-healthcare/
Fri, 29 Nov 2024

At Perficient, we’re proud to announce that we have achieved the AWS Healthcare Services Competency! This recognition highlights our ability to deliver transformative cloud solutions tailored to the unique challenges and opportunities in the healthcare industry.

Healthcare organizations are under increasing pressure to innovate while maintaining compliance, ensuring security, and improving patient outcomes. Achieving the AWS Healthcare Services Competency validates our expertise in helping providers, payers, and life sciences organizations navigate these complexities and thrive in a digital-first world.

A Proven Partner in Healthcare Transformation

Our team of AWS-certified experts has extensive experience working with leading healthcare organizations to modernize systems, accelerate innovation, and deliver measurable outcomes. By aligning with AWS’s best practices and leveraging the full suite of AWS services, we’re helping our clients build a foundation for long-term success.

The Future of Healthcare Starts Here

This milestone is a reflection of our ongoing commitment to innovation and excellence. As we continue to expand our collaboration with AWS, we’re excited to partner with healthcare organizations to create solutions that enhance lives, empower providers, and redefine what’s possible.

Ready to Transform?

Learn more about how Perficient’s AWS expertise can drive your healthcare organization’s success.

Perficient Achieves AWS DevOps Competency
https://blogs.perficient.com/2024/06/04/perficient-achieves-aws-devops-competency/
Tue, 04 Jun 2024

Perficient is excited to announce our achievement of the Amazon Web Services (AWS) DevOps Competency for AWS partners. This designation recognizes Perficient as an AWS partner that has demonstrated expertise in delivering DevSecOps solutions. This competency highlights Perficient’s ability to drive innovation, meet business objectives, and get the most out of your AWS services.

What does this mean for Perficient? 

Achieving the AWS DevOps Competency status differentiates Perficient as an AWS Partner Network (APN) member that provides modern product engineering solutions designed to help enterprises adopt, develop, and deploy complex projects faster on AWS. To receive the designation, APN members must possess deep AWS expertise and deliver solutions seamlessly on AWS. 

This competency empowers our delivery teams to break down traditional silos, shorten feedback loops, and respond more effectively to changes, ultimately increasing speed to market by up to 75%.  

What does this mean for you? 

Through our partnership with AWS, we can modernize our clients’ processes to improve product quality, scalability, and performance, and significantly reduce release costs by up to 97%. This achievement ensures that our CI/CD processes and IT governance are sustainable and efficient, benefiting organizations of any size.

At Perficient, we strive to be the place where great minds and great companies converge to boldly advance business, and this achievement is a testament to that vision!  

Transform Your Business with Amazon DataZone
https://blogs.perficient.com/2023/02/13/transform-your-business-with-amazon-datazone/
Mon, 13 Feb 2023

Amazon recently released a new data tool called DataZone, which allows companies to share, search, and discover data at scale across organizational boundaries. It offers many features, including the ability to:

  • Search for published data and request access
  • Collaborate with teams through data assets
  • Manage and monitor data assets across projects
  • Access analytics with a personalized view of data assets through a web-based application or API
  • Manage and govern data access in accordance with your organization’s security regulations from a single place

DataZone may be helpful for IT leaders because it enables them to empower their business users to make data-driven decisions and easily access data both within and outside their organization. With DataZone, users can search for and access data they need quickly and easily while also ensuring the necessary governance and access control. Additionally, DataZone makes it easier to discover, prepare, transform, analyze, and visualize data with its web-based application.

Implementation of DataZone can vary depending on the organization and its existing governance policies. If your data governance is already in place, implementation of DataZone may take only a few months. However, if governance needs to be established and implemented, it will take much longer and require significant organizational changes.

While it may seem obvious, DataZone is not a magic solution to all your data problems. Simply having a tool is not enough. Deciding to move forward with any data marketplace solution requires a shared responsibility model and governance across multiple channels and teams. We’ve seen many companies fail to realize the full value of data marketplaces due to a lack of adoption by the business.

Ultimately, DataZone can be an invaluable tool for IT leaders looking to empower their business to access data quickly and easily within and outside their organization while adhering to necessary governance and access control policies. With the help of automated data harvesters, stewards, and AI, DataZone makes data not just accessible but also readily available, allowing businesses to make use of it when making decisions.

With our “VP of IT’s Guide to Transforming Your Business,” IT leaders can gain the insights they need to successfully implement the latest data-driven solutions, such as DataZone. Download it for free today to get the answers you need to unlock the full potential of your data investments and drive your business forward with data-driven decisions.

Automate Exporting CloudWatch Logs to S3
https://blogs.perficient.com/2022/09/20/automate-exporting-cloudwatch-logs-to-s3/
Tue, 20 Sep 2022

Written by Gerald Frilot. Published by Tony Harper.

 

AWS CloudWatch is a unified monitoring service for AWS services and your cloud applications. Using AWS CloudWatch, you can:

 

  • monitor your AWS account and resources
  • generate a stream of events
  • trigger alarms and actions for specific conditions
  • manually export CloudWatch log groups to an Amazon S3 bucket

 

Exporting data to an S3 bucket is an important process if your organization needs to report on CloudWatch data for a period greater than the specified retention time. After the retention time expires, log groups are permanently deleted. In this case, manual exports alleviate the risk of data loss, but one major disadvantage of manually exporting logs, as noted in the AWS docs, is that each AWS account can only support one export task at a time. This is feasible if you only have a few log groups to export, but it becomes very time consuming and error prone if you need to manually export more than 100 log groups periodically.

 

Let’s walk through a step-by-step solution that automates exporting large numbers of log groups to an S3 bucket, using a Lambda function to direct CloudWatch event-based traffic. You can use an existing S3 bucket or create a new one.

 

 

Amazon Simple Storage Service (S3)

Log into your AWS account, search for the Amazon S3 service, and follow these steps to create the bucket (a scripted alternative follows the list):

  1. Select a meaningful name
  2. Select an AWS Region
  3. Keep all defaults
    1. ACLs disabled (Recommended)
    2. Block all public access (Disabled)
    3. Bucket Versioning (Disable)
    4. Default encryption (Disable)
  4. Select Create Bucket (this creates a new S3 instance for data storage)
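If you would rather script the bucket creation than click through the console, a minimal boto3 sketch could look like the following. The bucket name and region are placeholder assumptions; bucket names must be globally unique.

import boto3

REGION = "us-east-2"               # placeholder: pick your region
BUCKET = "my-cloudwatch-exports"   # placeholder: pick a globally unique name

s3 = boto3.client("s3", region_name=REGION)

# Outside us-east-1, S3 requires an explicit location constraint;
# in us-east-1 you would omit CreateBucketConfiguration entirely.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)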

 


Once the bucket is created, you will need to navigate to the Permissions Tab:


Update the bucket policy to allow CloudWatch Logs to store objects in the S3 bucket. Use the following policy to complete this process:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.YOUR-REGION.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::BUCKET_NAME_HERE"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.YOUR-REGION.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
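The same policy can also be applied programmatically. The snippet below is a hedged sketch rather than part of the original walkthrough; REGION and BUCKET are placeholders you would replace with your own values.

import json
import boto3

REGION = "us-east-1"          # placeholder: your region
BUCKET = "BUCKET_NAME_HERE"   # placeholder: your bucket name

# The same bucket policy as above, built as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": f"logs.{REGION}.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Effect": "Allow",
            "Principal": {"Service": f"logs.{REGION}.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))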

 

AWS Lambda

The S3 bucket is now configured to allow object write-through from our CloudWatch service. Our next step is to create a Lambda instance that houses the source code for receiving CloudWatch events and storing them to our S3 instance.

 

Search for the Lambda service in your AWS account, navigate to functions, and select Create Function.

 


Follow these steps:

 

  1. Select the Author from scratch template


  2. Under Basic Information, we need to provide:
    1. Function name
    2. Runtime (Python 3.9)
    3. Instruction set Architecture (x86_64 default)


  3. Keep the defaults under the execution role and advanced settings dropdowns, and select Create Function

 


Python Script (Pseudocode)

The Python script imports the boto3 AWS SDK module for creating, configuring, and managing AWS services, along with the os and time modules. We instantiate a CloudWatch Logs client and an AWS Systems Manager Parameter Store client. Within the Lambda handler method, we initialize an empty object and two empty arrays. The empty object may be useful if we only care to target a specific log group name prefix.

 

Our first array targets all log groups, and the second array is used to determine which log groups to export. We then check whether the S3 bucket environment variable exists; if not, we return an error. Otherwise, we enter a series of loops. The first loop invokes the AWS DescribeLogGroups method and adds the results to our initial log groups array. Once all log groups are added, we begin our second loop, which searches for the ExportToS3 tag in the initial log groups array. If this tag exists, we update the second array with the log groups that need to be exported.

 

The final loop iterates over the second array and uses the log group name as a prefix for the Parameter Store search. If a match is found, we then check the time value stored and compare it to our current time. If 15 minutes have elapsed, we update the S3 bucket with our data and then update the Parameter Store value with the current time.
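Since the post describes the script only as pseudocode, here is a minimal sketch of what such a handler could look like. It is an illustration of the flow described above rather than the author’s exact code; the parameter-name prefix (log-exporter-) and the 15-minute window follow conventions mentioned elsewhere in this post, and everything else is an assumption.

import os
import time

import boto3

logs = boto3.client("logs")
ssm = boto3.client("ssm")


def lambda_handler(event, context):
    bucket = os.environ.get("S3_BUCKET")
    if not bucket:
        raise ValueError("S3_BUCKET environment variable is not set")

    # Loop 1: gather every log group in the account/region.
    all_groups = []
    for page in logs.get_paginator("describe_log_groups").paginate():
        all_groups.extend(page["logGroups"])

    # Loop 2: keep only the groups tagged ExportToS3=true.
    to_export = []
    for group in all_groups:
        tags = logs.list_tags_log_group(logGroupName=group["logGroupName"])
        if tags.get("tags", {}).get("ExportToS3", "").lower() == "true":
            to_export.append(group["logGroupName"])

    # Loop 3: export each tagged group on 15-minute boundaries, using a
    # Parameter Store value per log group to remember the last export time.
    now_ms = int(time.time() * 1000)
    for name in to_export:
        param = "log-exporter-" + name.replace("/", "-")
        try:
            last_ms = int(ssm.get_parameter(Name=param)["Parameter"]["Value"])
        except ssm.exceptions.ParameterNotFound:
            last_ms = 0  # first invocation for this log group

        if now_ms - last_ms < 15 * 60 * 1000:
            continue  # still inside the 15-minute window

        # NB: an account runs one export task at a time, so production code
        # should poll for task completion or handle LimitExceededException.
        logs.create_export_task(
            logGroupName=name,
            fromTime=last_ms,
            to=now_ms,
            destination=bucket,
            destinationPrefix=name.strip("/"),
        )
        ssm.put_parameter(Name=param, Value=str(now_ms), Type="String", Overwrite=True)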

 

 

 

 

  4. Select Deploy to save our code changes and then navigate to the Configuration tab


  5. We now need to create an environment variable that references the S3 bucket where our CloudWatch events will be stored


Note: Key needs to be set to S3_BUCKET and the value set to the name of your S3 bucket. This is referenced in the lambda code and will need to be set prior to invoking this function.

 

  6. Our next course of action is to update the Lambda’s basic execution role. This grants our Lambda permission to perform read/update operations on separate AWS services. Use the following to complete the process:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:ListTagsLogGroup",
                "logs:DescribeLogGroups",
                "logs:CreateLogGroup",
                "logs:CreateExportTask",
                "ssm:GetParameter",
                "ssm:PutParameter"
            ],
            "Resource": "arn:aws:logs:{your-region}:{your-aws-account-number}:*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:ListTagsLogGroup",
                "logs:CreateLogStream",
                "logs:DescribeLogGroups",
                "logs:PutLogEvents",
                "logs:CreateExportTask",
                "ssm:GetParameter",
                "ssm:PutParameter",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:logs:{your-region}:{your-aws-account-number}:log-group:/aws/lambda/{function-name}:*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "ssm:DescribeParameters",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter",
                "ssm:PutParameter"
            ],
            "Resource": "arn:aws:ssm:{your-region}:{your-aws-account-number}:parameter/log-exporter-*"
        },
        {
            "Sid": "VisualEditor4",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::{aws-bucket-name}",
                "arn:aws:s3:::{aws-bucket-name}/*"
            ]
        }
    ]
}

AWS Parameter Store

Now that the S3 bucket and the Lambda are completely set up, we can turn to the AWS Systems Manager Parameter Store, which provides secure, hierarchical storage for configuration data management and secrets management. This section is for reference only, as our Lambda code takes care of the initial setup and naming conventions for this service. When a CloudWatch event is triggered, our code checks Parameter Store to determine whether 15 minutes have elapsed since we last stored data in our S3 bucket. The first invocation sets the parameter value to 0 and then checks and updates that value with every recurring event on 15-minute boundaries. Data is never overwritten, and the initial setup runs without any user intervention.

 

Lambda Triggers

We are going to head back to our Lambda instance and make one final update under the Configuration > Triggers tab.


  1. Select Add trigger
  2. Fill in the following fields and then select Add
    1. CloudWatch Logs (click the caret to select the dropdown menu and select the right service)
    2. Log group
    3. Filter name


  3. Repeat steps 1 and 2 for each log group required for S3 storage (or script the process as sketched below).
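Clicking through the console for more than a handful of log groups gets tedious. Under the hood, adding a CloudWatch Logs trigger creates a subscription filter, so the loop below sketches a scripted equivalent; the function ARN, filter name, and log group list are placeholder assumptions.

import boto3

logs = boto3.client("logs")
lam = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-west-2:123456789012:function:log-exporter"  # placeholder
LOG_GROUPS = ["/api/aws/connect"]  # placeholder: groups that should trigger the function

for group in LOG_GROUPS:
    # Allow CloudWatch Logs to invoke the function for this log group.
    lam.add_permission(
        FunctionName=FUNCTION_ARN,
        StatementId="cwlogs-" + group.strip("/").replace("/", "-"),
        Action="lambda:InvokeFunction",
        Principal="logs.amazonaws.com",
    )
    # The console's "CloudWatch Logs" trigger is a subscription filter.
    logs.put_subscription_filter(
        logGroupName=group,
        filterName="export-to-s3",
        filterPattern="",  # empty pattern = every log event
        destinationArn=FUNCTION_ARN,
    )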

 

Note: The previous step and the one following are executed in this order to avoid writing data to the S3 bucket for an active environment.

 

CloudWatch Tags

Our code will only export log groups that carry a specific tag, and this operation can only be done from a terminal. Refer to the AWS CLI documentation to learn more about setting up command-line access (CLI) for your AWS environment. Once command-line access is in place, we can tag each log group that needs to be exported. Use the following command to complete this process:

 

aws --region us-west-2 logs tag-log-group --log-group-name /api/aws/connect --tags ExportToS3=true

 

We are now automatically set up to export CloudWatch log groups to our S3 bucket!

 

 

AWS Solution Delivered

We chose AWS services because of their flexibility and ability to drive results to market in a timely manner. By directing our attention to the AWS Cloud, we were able to effectively export data to an S3 bucket driven by CloudWatch events.

 

Contact Us

At Perficient, we are an APN Advanced Consulting Partner for Amazon Connect which gives us a unique set of skills to accelerate your cloud, agent, and customer experience.

 

Perficient takes pride in our personal approach to the customer journey where we help enterprise clients transform and modernize their contact center and CRM experience with platforms like Amazon Connect. For more information on how Perficient can help you get the most out of Amazon Lex, please contact us here.

Perficient Achieves AWS Migration & Modernization Competency
https://blogs.perficient.com/2022/09/15/perficient-achieves-aws-migration-modernization-competency/
Thu, 15 Sep 2022

Perficient is excited to announce our achievement in Amazon Web Services’ (AWS) Migration and Modernization Competency for AWS partners. This designation recognizes Perficient as an AWS partner that has demonstrated technical proficiency and customer success automating and accelerating customer application migration and modernization journeys.

AWS launched the Migration and Modernization Competency to allow customers to easily and confidently engage with highly specialized AWS partners that help customers modernize their applications, either before or after they move to AWS. The AWS Migration and Modernization Competency takes on the heavy lifting of identifying and validating industry leaders with proven customer success and technical proficiency in migration and application modernization tooling.

The AWS Migration and Modernization Competency is the first competency we have secured in an effort to bring our customers more value, expertise, and effective strategy. We are currently pursuing, and hope to soon secure, both the Healthcare and Life Sciences Competency and the Data and Analytics Competency through AWS. These recognitions differentiate Perficient as an AWS Partner with extensive expertise delivering solutions on the AWS platform that help our customers adopt cloud and application transformation, reduce costs, and enhance business agility, data accessibility, and security.

Perficient is a certified Amazon Web Services partner with more than 10 years of experience delivering enterprise-level applications and expertise in cloud platform solutions, contact center, application modernization, migrations, data analytics, mobile, developer and management tools, IoT, serverless, security, and more. Paired with our industry-leading strategy and team, Perficient is equipped to help enterprises tackle the toughest challenges and get the most out of their implementations and integrations.

How to Guarantee a Positive Customer Experience
https://blogs.perficient.com/2022/09/08/guarantee-a-positive-customer-experience/
Thu, 08 Sep 2022

When considering a customer’s purchase experience, how many things could possibly go wrong? In truth, there are endless possibilities. On the other hand, how many simple things can you implement to ensure a positive customer experience?

In my previous blog articles, I discussed Perficient’s “Now, New, and Next” framework. Today, I’ll share different tactics, features, services, and functionalities of the customer experience using this framework. These tactics will help increase conversion rates, reduce abandoned carts, and increase the total customer lifetime value and experience.

 

“NOW” EXPERIENCES

One capability that you must have as a successful online business is multiple payment service options. Websites like Allbirds do this extremely well with their multi-payment gateways. For example, Allbirds offers payment via credit/debit card, but they also allow payment via PayPal, Amazon Pay, and ShopPay. Many mom-and-pop businesses don’t offer any alternative to cash or credit/debit cards, which immediately sets them behind their competition.


Also, nearly all customers expect easy order tracking with timely communication that remains on brand. In fact, 88% of consumers say the ability to track shipments in real time is important. Shopify utilizes email communication that integrates order confirmation, pickup, and delivery updates. That way, you can rest assured that your customers will be satisfied throughout.

Email communication with consistent order details may seem simple. However, if a business or organization doesn’t have reliable and consistent email communication, customers will likely experience frustration. It decreases conversion rate, increases abandoned carts, and increases customer call volumes.


“NEW” EXPERIENCES

“New” experiences are always evolving and transforming into “Now” experiences as more and more companies adopt them. Currently, there are many “New” experiences you can adopt to help keep your business relevant. For example, Love Wellness specializes in personal care products. They offer experiences like subscriptions with auto-renewal and delivery of replenishable products with recurring billing and installment payments.

You can also utilize services like Affirm and Afterpay to offer a “Buy Now, Pay Later” option.


Organizations that apply these features see higher customer lifetime value, larger total orders, and higher loyalty. While some customers have not yet encountered these experiences, the novelty will eventually become ubiquitous in the market. Customers will soon expect them as table stakes. Implementing them now ensures your business stays ahead.

 

“NEXT” EXPERIENCES

Both “Now” and “New” pale in comparison to “Next” experiences. These experiences surprise and delight customers with early adoption.

One example of “Next” is Tesla, which offers a showroom to browse vehicles instead of a dealership. While in the showroom, you can select options for trim and accessories. You can also place your order right there, avoiding the hassle of waiting in a dealership. Then, your vehicle is delivered safely right to your door.


Another example of a “Next” feature is Amazon’s Key In-Garage delivery system. With this feature, you can choose “Key Delivery” at checkout, and receive the items in your garage. You’ll connect a smart garage door opener, like those made by Chamberlain, to your garage door. This allows the delivery driver to securely store your packages to avoid bad weather or even porch theft. You can even watch the deliveries with an optional camera and choose when you want to use it.


Whole Foods provides another example of “Next” features with their Pay by Palm technology. This allows customers to pay with a scan of their palm, without the need for a wallet, phone, or smartwatch. Pay by Palm allows customers to pay for items faster, resulting in a simpler shopping experience for all customers.


Key Takeaways

While these capabilities are setting brands apart from the rest, they are not without risk. These innovations and “Next” items could hit the market at the wrong time. Examples include the electric car in the ’90s, or Pay by Palm beginning its rollout during a pandemic.

Perficient is uniquely positioned to serve customers that need assistance in these areas. We have driven strong customer loyalty results with clients like Joanne Fabric and Sally Beauty. Our experts will partner with you to understand your specific business needs and goals and help you develop a roadmap to change your company’s trajectory. For more questions, contact our experts today.

With A Passion for Learning, Toni Milushev Paves His Way to Career Success
https://blogs.perficient.com/2022/08/23/with-a-passion-for-learning-toni-milushev-paves-his-way-to-career-success/
Tue, 23 Aug 2022

Meet Toni Milushev, Director of Product Engineering

Perficient is committed to Growth for Everyone, and colleague Toni Milushev is a great example of how we’re helping our people grow professionally. He takes the initiative to go above and beyond his role and is quick to lend a helping hand to assist those around him in their personal career growth. We recently had the chance to speak with Toni to learn more about how he’s grown his career at Perficient, and his overall outlook on professional development.

As a director based in Chicago, Toni focuses on customer engagement solutions for Amazon, Microsoft, and Twilio. He oversees product practice, customer product development, and product starter packs. In his five years of working at Perficient, Toni has earned three promotions.

With roughly eight years of experience in the industry, he has advanced at an incredible rate, and his achievements can be tied to his work ethic and desire to grow.

Toni’s Career Journey

Before joining Perficient, Toni worked for a consulting company that was acquired by Perficient and was promoted three times prior to the acquisition.

“I stand out because I’m not afraid of additional work or projects, regardless of expertise. If I don’t have experience with a specific platform, saying ‘yes’ to new opportunities has expanded my skills and benefited my career. It shows I can learn on the go, be thrown into the fire, and figure it out.”

In 2005, his family migrated to the United States from Bulgaria to give him better opportunities, and they are the true motivation behind his success. Toni has never taken this for granted, and now lives by the philosophy that hard work pays off.

“I aim to keep growing, set big goals, and strive to achieve them so that my parents know that their decision to migrate here has positively impacted my career.”

Toni’s Experience at Perficient

Working at a global company in the U.S. has presented Toni with plenty of international work experiences as he regularly collaborates with colleagues around the world. In 2019, he spent three weeks in India working with one of our teams on site. He’s also had additional travel opportunities to help launch international Perficient offices.

“Perficient being such a large organization opens the doors to communication between offices throughout the world, and that is very unique. Having a global team and working with different cultures to deliver expectations for our clients inspires me to give my best every day.”

With the support of his colleagues and Perficient’s growth-oriented culture, Toni has achieved many professional milestones. One of his proudest moments was launching a product in the cloud: PACE, Perficient’s Amazon Connect Experience solution. Many customers have since used this product, and Toni regularly shares his thought leadership in Perficient’s Amazon Connect space.

READ MORE: Perficient’s Amazon Connect Thought Leadership

 

Key Takeaways from Toni’s Career Growth


Toni’s admirable career journey has seen amazing growth in a short amount of time. Some key takeaways from Toni’s experience include implementing a lifelong learning approach, embracing mentors and mentees, and leading with a growth mindset. He dives deeper into each of these topics below.

Implementing Lifelong Learning

“Staying curious and wanting to learn have personal benefits and have played a significant role in my career development. It shows initiative to grow professionally and build your skillset.”

Toni continually seeks out opportunities to expand his expertise. Earning certifications has been one major way he’s shown his ambition to continue learning. His drive truly stands out. Toni took a SCRUM training course that was based out of China and provided by Perficient. This required late hours and putting in extra time and energy. He also completed numerous other certifications and training courses through Perficient Academy to help accelerate his career potential.

“It’s rewarding to take courses and earn certifications because I get recognition for them, and I’m able to market the fact that I’ve successfully passed the course. In my BU, it’s free to take certain certification exams, so it’s a way for me to build my skill set and market myself across my network. The knowledge I gain from earning certifications is beneficial because I can better communicate about numerous services.”

LEARN MORE: Perficient Academy Career Growth Tool

Embracing Mentorship

Connecting with mentors has been a huge launching point for Toni’s career growth. To find a mentor, he identifies peers and colleagues that think and solve problems differently than he does, and through discovering new perspectives, he expands his toolkit.

“From the get-go, I’ve been focused on finding mentors internally and externally to help with my career growth. I can learn something new from everyone, and I find it valuable to connect with new mentors to see what different people have to offer. For example, one of my current mentors is very analytical and thinks differently than I do, and this has helped expand how I approach certain scenarios.”

Toni is also focused on giving back and helping other people grow their careers to the next level. He talks with his mentees about different methodologies, and depending on where a person is in their career, discovering where they want to go. Identifying whether they want to specialize or generalize in certain areas and setting goals to track progress are highly valuable in fostering growth. He also recommends that his mentees participate in online trainings and certification courses, something he does to excel personally.

“Being on both sides of mentorship enables me to learn and give back to others by teaching what I’ve gained from my mentors. I can explain the learning experiences I’ve had throughout my career as a lesson for others to grow from.”

Leading with a Growth Mindset

According to Toni, it’s not just about mentors, but also, about work ethic and having the desire to grow. The right combination of skills and knowledge, paired with curiosity and a growth mindset has set Toni up for all the success he’s experienced.

“Personality and naturally having a growth mindset are both important aspects to be considered for promotions. These show initiative and that you want to make a difference, and this alone can set you apart from people.”

Toni builds a growth mindset by staying curious. He asks “why,” conducts research online, and tries to dig below the surface to avoid making assumptions, no matter the topic. His intrinsic drive and passion encourage him to continually progress and has allowed him to recognize that the sky is the limit. He is highly motivated, goal driven, and uses the resources available to him at every opportunity to succeed. His growth story shows that upward mobility is within reach for everyone at Perficient.


MORE ON GROWTH FOR EVERYONE

Perficient continually looks for ways to champion and challenge our workforce, encourage personal and professional growth, and celebrate the unique culture created by the ambitious, brilliant, people-oriented team we have cultivated. These are their stories.

Learn more about what it’s like to work at Perficient at our Careers page.

Follow our Life at Perficient blog on Twitter via @PerficientLife.

Connect with us on LinkedIn here.

5 Commonly Asked Questions About Intrinsic Bias in AI/ML Models in Healthcare
https://blogs.perficient.com/2022/07/19/5-commonly-asked-questions-about-intrinsic-bias-in-ai-ml-models-in-healthcare/
Tue, 19 Jul 2022

Healthcare organizations play a key role in offering access to care, motivating skilled workers, and acting as social safety nets in their communities. They, along with life sciences organizations, serve on the front lines of addressing health equity.

With a decade of experience in data content and knowledge, specializing in document processing, AI solutions, and natural language solutions, I strive to apply my technical and industry expertise to the top-of-mind issue of diversity, equity, and inclusion in healthcare.

Here are five questions that I hear commonly in my line of work:

1. What is the digital divide, and how does it impact healthcare consumers?

There are still too many people in this country who don’t have reliable access to computing devices and the internet in their homes. If we think back to the beginning of the pandemic, we can see this in sharp relief. The number one impediment to the shift to virtual school was that kids didn’t have devices or reliable internet at home.

We also saw quite clearly that the divide is disproportionately impacting low income people in disadvantaged neighborhoods.

The problem is both affordability and access.

The result, through a healthcare lens, is that people without reliable access to the internet have less access to information they can use to manage their health.

They are less able to find a doctor who’s a good fit for them. Their access to information about their insurance policy and what is covered is more restricted. They are less able to access telehealth services and see a provider from home.

All this compounds because we’re using digital and internet-connected tools to improve healthcare and outcomes for patients. But ultimately, the digital divide means we’re achieving marginal gains for the populations with the best outcomes already and not getting significant gains from the populations that need support the most.

2. How can organizations maintain an ethical stance while using AI/ML in healthcare?

Focus on intrinsic bias, the subconscious stereotypes that affect the way individuals make decisions. People have intrinsic biases picked up from their environment that require conscious acknowledgement and attention. Machine learning models also pick up these biases. This happens because models are trained on data about historical human decisions, so the human biases come through (and can even be amplified). It’s critical to understand where a model comes from, how it was trained, and why it was created before using it.

Ethical use of AI/ML in healthcare requires careful attention to detail and, often, human review of machine decisions in order to build trust.

3. How can HCOs manage inherent bias in data? Is it possible to eliminate it?

At this point, we’re working to manage bias, not eliminate it. This is most critical for training machine learning models and correctly interpreting the results. We generally recommend using appropriate tools to help detect bias in model predictions and to use those detections to drive retraining and repredicting.

Here are some of the simplest tools in our arsenal (a brief sketch of the first follows the list):

  • Flip the offending parameter and try again.
  • Determine if the model would have made a different prediction if the person was white and male.
  • Use that additional data point to advise a human on their decision.
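As a concrete illustration of the first technique, here is a minimal sketch of a “flip test” on a toy model. The data is synthetic and the model deliberately simple; the point is only to show the mechanics of toggling a protected attribute and measuring how often predictions change.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: column 0 is a protected attribute (0/1),
# columns 1-3 are clinical features. Labels are synthetic.
X = rng.normal(size=(500, 4))
X[:, 0] = rng.integers(0, 2, size=500)
y = (X[:, 1] + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The "flip test": toggle the protected attribute and see how often the
# prediction changes. Large shifts suggest the attribute (or a proxy for
# it) is driving outcomes.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]

changed = (model.predict(X) != model.predict(X_flipped)).mean()
print(f"Predictions that flip with the protected attribute: {changed:.1%}")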

For healthcare in particular, the human in the loop is critically important. There are some cases where membership in a protected class changes a prediction because it acts as a proxy for a key genetic factor (man or woman, white or Black). The computer can easily correct for bias when reviewing a loan application. However, when evaluating heart attack risk, there are specific health factors that can be predicted by race or gender.

4. Why is it important to educate data scientists in this area?

Data scientists need to be aware of potential issues and omit protected class information from model training sets whenever possible. This is very difficult to do in healthcare, because that information can be used to predict outcomes.

The data scientist needs to understand the likelihood that there will be a problem and be trained to recognize problematic patterns. This is also why it’s very important for data scientists to have some understanding of the medical or scientific domain about which they’re building a model.

They need to understand the context of the data they’re using and the predictions they’re making to understand if protected classes driving outcomes is expected or unexpected.

5. What tools are available to identify bias in AI/ML models, and how can an organization choose the right tool?

Tools like IBM OpenScale, Amazon Sagemaker Clarify, Google What-if and Microsoft Fairlearn are a great starting point in terms of detecting bias in models during training, and some can do so at runtime (including the ability to make corrections or identify changes in model behavior over time). These tools that enable both bias detection and model explainability and observability are critical to bringing AI/ML into live clinical and non-clinical healthcare settings.
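As one example of what these tools look like in practice, here is a minimal sketch using Microsoft Fairlearn’s MetricFrame to slice metrics by a protected group. The labels, predictions, and group assignments below are random placeholders; in real use they would come from your trained model and dataset.

import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Hypothetical labels, predictions, and a protected attribute.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.choice(["A", "B"], 1000)

# MetricFrame slices any metric by group, exposing gaps at a glance.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.by_group)        # per-group metrics
print(mf.difference())    # largest gap between groups, per metric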

EXPLORE NOW: Diversity, Equity & Inclusion (DE&I) in Healthcare

Healthcare Leaders Turn to Us

Perficient is dedicated to enabling organizations to elevate diversity, equity, and inclusion within their companies. Our healthcare practice is comprised of experts who understand the unique challenges facing the industry. The 10 largest health systems and 10 largest health insurers in the U.S. have counted on us to support their end-to-end digital success. Modern Healthcare has also recognized us as the fourth largest healthcare IT consulting firm.

We bring pragmatic, strategically-grounded know-how to our clients’ initiatives. And our work gets attention – not only by industry groups that recognize and award our work but also by top technology partners that know our teams will reliably deliver complex, game-changing implementations. Most importantly, our clients demonstrate their trust in us by partnering with us again and again. We are incredibly proud of our 90% repeat business rate because it represents the trust and collaborative culture that we work so hard to build every day within our teams and with every client.

With more than 20 years of experience in the healthcare industry, Perficient is a trusted, end-to-end, global digital consultancy. Contact us to learn how we can help you plan and implement a successful DE&I initiative for your organization.

Contact Lens for Amazon Connect: Real-Time Use Cases and Rules
https://blogs.perficient.com/2022/07/11/contact-lens-for-amazon-connect-real-time-use-cases-and-rules/
Mon, 11 Jul 2022

In today’s blog post, I will walk you through a way in which Contact Lens can enhance your contact center: Amazon Connect Rules!

What Are Amazon Connect Rules?

As described in the Amazon Connect administrator guide, Amazon Connect Rules allow you to set up actions that are triggered based on conditions. When used in combination with Contact Lens, Rules allow you to trigger these actions based on the results of the real-time or post-call analysis of the call.

You can use Rules to automatically categorize calls or alert a supervisor if certain words are uttered by the agent or the customer. For example, you may want to send a real-time alert to a supervisor if a customer calls about cancelling their service so that the supervisor can provide real-time support and coaching to the agent. Or you may want to alert a supervisor if a customer is becoming abusive and swearing so that the supervisor can intervene to deescalate the conversation and protect the agent. In addition to words and phrases, Rules also allow you to create conditions based on things like sentiment analysis, interruptions, or the amount of non-talk time on the call.

There are two parts to setting up a Rule in Amazon Connect:

  • Define the conditions to be met for there to be a match, and
  • Define the actions to be taken when the Rule is matched.

Setting Up Rules in Amazon Connect

Rules can be easily created, viewed, and updated in the Amazon Connect instance UI by clicking on the gavel icon in the left sidebar. (Of course, as with all things Amazon Connect, your Amazon Connect user must have a security profile with sufficient permissions to interact with Rules. Look in the “Metrics and Quality” section of the security profile to ensure that the right boxes are checked for Rules.)

First, you will be asked to select whether the Rule should be applied when a post-call analysis or a real-time analysis is available. Matching Rules against the post-call analysis means that the alert will not be delivered until after the call has been completed and the agent has finished after-contact work. Matching Rules against the real-time analysis means that the Rules will be evaluated in real time while the call is still ongoing.

Once you have selected the type of analysis the Rule should be applied to, you can then use the UI to build the condition or conditions for the Rule. The administrator guide has an example of setting up conditions based on words and phrases.

After setting up the conditions for the Rule, the next screen will prompt you to configure the action or actions that should be taken when the Rule is matched. There is only one required action for all Rules: specifying the contact category that should be assigned to the contact. Once a category is assigned, your supervisors will be easily able to search for contacts based on category in Amazon Connect’s Contact Search.

Amazon also provides you with the option to create two additional types of actions:

  • Creating a Task
  • Generating an EventBridge event

A Task is a type of contact that you can create in Amazon Connect similar to a voice call or a chat. Tasks are useful if you want to create a follow-up action inside Amazon Connect – for example, a Task for a supervisor to follow up with an irate customer. When a Rule triggers the creation of a Task, the Task is automatically associated with the original contact, allowing for better traceability.

Tasks can be routed and prioritized like voice calls and chats, giving you the ability to prioritize and assign the work to the right person in your contact center. As well, for real-time use cases, the Task can include a link to a real-time transcript of the voice conversation so that the receiving agent/supervisor has all the context they need to provide support.

The second option, generating an EventBridge event, is useful if you want to create a follow-up action in an application or system outside Amazon Connect. Amazon EventBridge is Amazon’s serverless event bus. By generating an EventBridge event, you can trigger activity in other applications or systems that are outside your contact center based on the event. The administrator guide provides step-by-step instructions on using Rules to generate EventBridge events.
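As a rough sketch of the consuming side, the snippet below creates an EventBridge rule that matches events from Amazon Connect and routes them to a Lambda target. The event pattern, rule name, and target ARN are illustrative assumptions; check the administrator guide for the exact detail-type that Rules-generated events carry, and note the target Lambda also needs a resource policy allowing events.amazonaws.com to invoke it.

import boto3

events = boto3.client("events")

# Hypothetical: match events emitted by Amazon Connect. The exact
# detail-type for Rules-generated events is in the administrator guide;
# this broad pattern is for illustration only.
events.put_rule(
    Name="connect-rules-matched",
    EventPattern='{"source": ["aws.connect"]}',
)

# Route matched events to a pre-existing Lambda target (placeholder ARN).
events.put_targets(
    Rule="connect-rules-matched",
    Targets=[{
        "Id": "notify-supervisor",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:notify-supervisor",
    }],
)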

Other Real-Time Features and Use Cases

Used together, Contact Lens’ real-time analysis and Rules allow you to create real-time alerting in your contact center based on transcription and sentiment analysis.

If you are thinking about enabling real-time transcription, another feature you may want to consider is Amazon Connect Wisdom. Wisdom provides your agents with the ability to search across multiple repositories from a single UI. But importantly, when real-time analysis is enabled, Wisdom can also proactively recommend content to your agents to help them better handle the call. For a real-life use case of Wisdom, have a look at my colleague Toni Milushev’s recent blog post about enabling the Wisdom integration for customers of Perficient’s Amazon Connect Experience (PACE) solution.

If you’re interested in Contact Lens for Amazon Connect and need some guidance, we can help. At Perficient, we are an APN Advanced Consulting Partner for Amazon Connect which gives us a unique set of skills to accelerate your cloud, agent, and customer experience.

Perficient takes pride in our personal approach to the customer journey where we help enterprise clients transform and modernize their contact center and CRM experience with platforms like Amazon Connect.

For more information on how Perficient can help you get the most out of Amazon Connect and Contact Lens for Amazon Connect, please contact us here.

 

Load Data From Amazon RDS to Snowflake Using Matillion ETL Tool
https://blogs.perficient.com/2022/06/22/load-data-from-amazon-rds-to-snowflake-using-matillion-etl-tool/
Wed, 22 Jun 2022

Amazon Relational Database Service (RDS), a service provided by Amazon Web Services, is a fully managed cloud database service that allows you to create and operate relational databases. With Amazon RDS, you can access your databases anywhere in a cost-effective and highly scalable way.

Snowflake is a cloud-based platform that helps data professionals get rid of separate data warehouses, data lakes, and data marts. Additionally, it allows secure data sharing across the organization. Snowflake is built on top of Amazon Web Services, Microsoft Azure, and Google Cloud infrastructure. The platform is ideal for organizations that don’t want to dedicate resources to the setup, maintenance, and support of in-house servers, as there is no hardware or software to install, configure, or manage.

Matillion ETL is a cloud-based data integration tool that helps data teams transform their business with data. Matillion moves data faster and does more with it through an easy-to-use visual approach, providing a low-code interface for data integration and transformation workflows. It is a cost-effective way for data professionals to get faster results in their cloud environment.

In this article, we are going to walk through a use case for loading data from Amazon RDS to Snowflake using the Matillion ETL Tool. Below is the prerequisite to perform these actions.

Prerequisites:

  1. A Matillion account
  2. A Snowflake account
  3. An AWS account to create Amazon RDS database

If you do not have these accounts, you can use free trials to follow along: AWS offers a free tier for 1 year, Snowflake a free trial for 1 month, and Matillion a free trial for 14 days. Below are the steps to load data from Amazon RDS to Snowflake using Matillion ETL.

Steps:

  • After creating an AWS account, go to the Amazon RDS service and create an RDS database with the PostgreSQL engine type. Once the database instance is created, you will get the endpoint and port, which are used to connect to the RDS server and create databases within it.


  • Once the PostgreSQL instance is created, we can use the RDS endpoint to connect to it using pgAdmin, a web-based GUI tool for interacting with Postgres database instances, both local and remote. Using this GUI tool, we will create our source database (a scripted connectivity check is sketched below).

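As mentioned above, a quick scripted connectivity check is an alternative to pgAdmin. This is a minimal sketch; the endpoint, credentials, and database name are placeholder assumptions taken from your own RDS instance.

import psycopg2  # pip install psycopg2-binary

# All connection details are placeholders: use your RDS endpoint, port,
# and the master credentials chosen when the instance was created.
conn = psycopg2.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="your-password",
)
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction

with conn.cursor() as cur:
    cur.execute("CREATE DATABASE source_db;")  # hypothetical database name

conn.close()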

  • The next step is to create a database in Snowflake, which is where we are going to load our source data from Amazon RDS. Once the Snowflake account is created, we can see the database icon to the right of the Snowflake logo, where we can create a database by clicking Create.


  • Next, we will create a Matillion account. Matillion instances can be created on Azure or AWS; in this use case, the Matillion instance will be created on AWS. Access the Matillion instance and create a project.


  • Create a folder (Mohini-wm) and add an orchestration job (test1). Then, from the components panel, add the ‘Create Table’ component, which creates a table in Snowflake. Specify the ‘Database’ name that was already created in Snowflake, along with the ‘Schema’, ‘New Table Name’, and the column names to be created.


  • We will be using the ‘RDS Query’ component next. This component connects to the Amazon RDS service. Specify parameters such as Database Type, RDS Endpoint, etc.


  • Right-click on the canvas and run the job. In the right-side panel we can see the status of the job, and in the left-side panel we can preview the data.


  • In Snowflake, we can see that the new table is created and the data from the Amazon RDS server has been loaded using Matillion. To preview the data, run a SELECT query (a scripted version of this check is sketched below).


At this point, you have successfully loaded data from the Amazon RDS database into Snowflake using the Matillion ETL tool.
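For the scripted check referenced above, the Snowflake Python connector can run the same SELECT outside the web UI. This is a minimal sketch; the account identifier, credentials, warehouse, database, schema, and table name are all placeholder assumptions.

import snowflake.connector  # pip install snowflake-connector-python

# Connection details are placeholders for your own Snowflake account.
conn = snowflake.connector.connect(
    account="your_account_identifier",
    user="your_user",
    password="your_password",
    warehouse="COMPUTE_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)

with conn.cursor() as cur:
    cur.execute("SELECT * FROM MY_TABLE LIMIT 10")  # hypothetical table name
    for row in cur.fetchall():
        print(row)

conn.close()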

How Can Perficient Help You?

Perficient is a certified Amazon Web Services partner with more than 10 years of experience delivering enterprise-level applications and expertise in cloud platform solutions, contact center, application modernization, migrations, data analytics, mobile, developer and management tools, IoT, serverless, security, and more. Paired with our industry-leading strategy and team, Perficient is equipped to help enterprises tackle the toughest challenges and get the most out of their implementations and integrations.

Learn more about our AWS practice and get in touch with our team here!

Integrating Terraform with Jenkins (CI/CD)
https://blogs.perficient.com/2022/06/01/integrating-terraform-with-jenkins-ci-cd/
Wed, 01 Jun 2022

Automated Infrastructure (AWS) Setup Using Terraform and Jenkins (Launch EC2 and VPC)

In this blog we will discuss how to execute the Terraform code using Jenkins and set up AWS infrastructure such as EC2 and VPC.

For those of you who are unfamiliar with Jenkins, it is an open-source continuous integration and continuous delivery automation tool that allows us to implement CI/CD workflows, called pipelines.

Getting to Know the Architecture


What is Terraform? – Terraform is an infrastructure-as-code tool delivered by HashiCorp. It is a tool for building, changing, and managing infrastructure in a safe, repeatable way.

What is Jenkins? – An open-source continuous integration/continuous delivery and deployment (CI/CD) automation tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.

What is Infrastructure as Code? – It is the process of managing infrastructure in a file, or files, rather than manually configuring resources in a user interface.

 

Advantages of Continuous Integration/Continuous Deployment –

  • Small code changes are easier and less consequential.
  • Insulating faults is easier and faster.
  • Testability is enhanced through smaller, specific changes.

 

Terraform consists of three stages of workflow:

  1. Write: You set resources, which can be split between several cloud providers and services.
  2. Plan: Terraform creates an execution plan from your existing infrastructure and configuration, describing what it will create and update.
  3. Apply: Terraform completes all operations in the correct sequence.

 

In this article, we will cover the basic functions of Terraform to create infrastructure on AWS.

 

  1. Launch One Linux Machine and Install Jenkins. 


    • The admin password is created and stored in the log file. To access the password, run the below command:
      • # cat /var/lib/jenkins/secrets/initialAdminPassword
    • Then, customize Jenkins.
    • After that, create the first admin user and click Save and Continue.


  2. Install the Terraform Plugin in Jenkins
    • In the Jenkins console, go to Manage Jenkins > Manage Plugins > Available > and search “Terraform”.


  3. Configure Terraform
    • You will need to manually set up Terraform on the same server as Jenkins using the steps below.
      • In Manage Jenkins > Global Tool Config > Terraform
      • Add Terraform.
      • Uncheck the “Install Automatically” check box.
      • Name: Terraform
      • Install Directory: /usr/local/bin/


    • After getting Terraform set up on the Jenkins server, you will need to install Git on your Jenkins VM and write your Terraform code in a .tf file.


  4. Integrate Jenkins with Terraform and Our GitHub Repository
    • We need to create a new project to run Terraform using Jenkins.
    • In Jenkins, go to New Item, enter an item name, and create a Pipeline.
    • Now, we will write the script for the GitHub and Terraform job. Here we can use the Jenkins syntax generator to write the script.


pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/main']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/suraj11198/Terraform-Blog.git']]])
            }
        }
        stage('terraform init') {
            steps {
                sh('terraform init')
            }
        }
        stage('terraform Action') {
            steps {
                echo "Terraform action is --> ${action}"
                sh('terraform ${action} --auto-approve')
            }
        }
    }
}

 

  5. Using the Previous Steps, We Should Have Successfully Built Our Job


  6. Our EC2 Instance and VPC Are Created, and the Same VPC Is Attached to Our EC2 Instance (a scripted verification follows)

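If you want to confirm the result without opening the console, a small boto3 check could look like the following. The region and the Name tag are assumptions; match them to whatever your .tf files actually apply.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# List instances and the VPC each one is attached to. Filtering by a tag
# your Terraform code applies (Name=terraform-demo here) is an assumption.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Name", "Values": ["terraform-demo"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        print(inst["InstanceId"], inst["State"]["Name"], inst.get("VpcId"))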

 

Summary:

Using Terraform driven by Jenkins, we built an EC2 instance and VPC on AWS without touching the console.

We have touched on the basics of Terraform and Jenkins. Terraform offers several capabilities for the construction, modification, and versioning of infrastructure.

 

How Can Perficient Help You?

Perficient is a certified Amazon Web Services partner with more than 10 years of experience delivering enterprise-level applications and expertise in cloud platform solutions, contact center, application modernization, migrations, data analytics, mobile, developer and management tools, IoT, serverless, security, and more. Paired with our industry-leading strategy and team, Perficient is equipped to help enterprises tackle the toughest challenges and get the most out of their implementations and integrations.

Learn more about our AWS practice and get in touch with our team here!
