Amazon Web Services Articles / Blogs / Perficient
https://blogs.perficient.com/category/partners/amazon-web-services/

Perficient Achieves AWS Healthcare Services Competency, Strengthening Our Commitment to Healthcare
https://blogs.perficient.com/2024/11/29/perficient-achieves-aws-healthcare-services-competency-strengthening-our-commitment-to-healthcare/
Fri, 29 Nov 2024

At Perficient, we’re proud to announce that we have achieved the AWS Healthcare Services Competency! This recognition highlights our ability to deliver transformative cloud solutions tailored to the unique challenges and opportunities in the healthcare industry.

Healthcare organizations are under increasing pressure to innovate while maintaining compliance, ensuring security, and improving patient outcomes. Achieving the AWS Healthcare Services Competency validates our expertise in helping providers, payers, and life sciences organizations navigate these complexities and thrive in a digital-first world.

A Proven Partner in Healthcare Transformation

Our team of AWS-certified experts has extensive experience working with leading healthcare organizations to modernize systems, accelerate innovation, and deliver measurable outcomes. By aligning with AWS’s best practices and leveraging the full suite of AWS services, we’re helping our clients build a foundation for long-term success.

The Future of Healthcare Starts Here

This milestone is a reflection of our ongoing commitment to innovation and excellence. As we continue to expand our collaboration with AWS, we’re excited to partner with healthcare organizations to create solutions that enhance lives, empower providers, and redefine what’s possible.

Ready to Transform?

Learn more about how Perficient’s AWS expertise can drive your healthcare organization’s success.

Omnichannel Analytics Simplified – Optimizely Acquires Netspring
https://blogs.perficient.com/2024/10/09/omnichannel-analytics-optimizely-netspring/
Wed, 09 Oct 2024

Recently, the news broke that Optimizely acquired Netspring, a warehouse-native analytics platform.

I’ll admit, I hadn’t heard of Netspring before, but after taking a closer look at their website and capabilities, it became clear why Optimizely made this strategic move.

Simplifying Omnichannel Analytics for Real Digital Impact

Netspring is not just another analytics platform. It is focused on making warehouse-native analytics accessible to organizations of all sizes. As businesses gather more data than ever before from multiple sources – CRM, ERP, commerce, marketing automation, offline/retail – managing and analyzing that data in a cohesive way is a major challenge. Netspring simplifies this by enabling businesses to conduct meaningful analytics directly from their data warehouse, eliminating data duplication and ensuring a single source of truth.

By bringing Netspring into the fold, Optimizely has future-proofed its ability to leverage big data for experimentation, personalization, and analytics reporting across the entire Optimizely One platform.

Why Optimizely Acquired Netspring

Netspring brings significant capabilities that make it a best-in-class tool for warehouse-native analytics.

With Netspring, businesses can:

  • Run Product Analytics: Understand how users engage with specific products.
  • Analyze Customer Journeys: Dive deep into the entire customer journey, across all touchpoints.
  • Access Business Intelligence: Easily query key business metrics without needing advanced technical expertise or risking data inconsistency.

This acquisition means that data teams can now query and analyze information directly in the data warehouse, ensuring there’s no need for data duplication or exporting data to third-party platforms. This is especially valuable for large organizations that require data consistency and accuracy.



Ready to capitalize on these new features? Contact Perficient for a complimentary assessment!


The Growing Importance of Omnichannel Analytics

It’s no secret that businesses today are moving away from single analytics platforms. Instead, they are combining data from a wide range of sources to get a holistic view of their performance. It’s not uncommon to see businesses using a combination of tools like Snowflake, Google BigQuery, Salesforce, Microsoft Dynamics, Qualtrics, Google Analytics, and Adobe Analytics.
How?

These tools allow organizations to consolidate and analyze performance metrics across their entire omnichannel ecosystem. The need to clearly measure customer journeys, marketing campaigns, and sales outcomes across both online and offline channels has never been greater. This is where warehouse-native analytics, like Netspring, come into play.

Why You Need an Omnichannel Approach to Analytics & Reporting

Today’s businesses are increasingly reliant on omnichannel analytics to drive insights. Some common tools and approaches include:

  • Customer Data Platforms (CDPs): These platforms collect and unify customer data from multiple sources, providing businesses with a comprehensive view of customer interactions across all touchpoints.
  • Marketing Analytics Tools: These tools help companies measure the effectiveness of their marketing campaigns across digital, social, and offline channels. They ensure you have a real-time view of campaign performance, enabling better decision-making.
  • ETL Tools (Extract, Transform, Load): ETL tools are critical for moving data from various systems into a data warehouse, where it can be analyzed as a single, cohesive dataset.

The combination of these tools allows businesses to pull all relevant data into a central location, giving marketing and data teams a 360-degree view of customer behavior. This not only maximizes the return on investment (ROI) of marketing efforts but also provides greater insights for decision-making.

Navigating the Challenges of Omnichannel Analytics

While access to vast amounts of data is a powerful asset, it can be overwhelming. Too much data can lead to confusion, inconsistency, and difficulties in deriving actionable insights. This is where Netspring shines – its ability to work within an organization’s existing data warehouse provides a clear, simplified way for teams to view and analyze data in one place, without needing to be data experts. By centralizing data, businesses can more easily comply with data governance policies, security standards, and privacy regulations, ensuring they meet internal and external data handling requirements.

AI’s Role in Omnichannel Analytics

Artificial intelligence (AI) plays a pivotal role in this vision. AI can help uncover trends, patterns, and customer segmentation opportunities that might otherwise go unnoticed. By understanding omnichannel analytics across websites, mobile apps, sales teams, customer service interactions, and even offline retail stores, AI offers deeper insights into customer behavior and preferences.

This level of advanced reporting enables organizations to accurately measure the impact of their marketing, sales, and product development efforts without relying on complex SQL queries or data teams. It simplifies the process, making data-driven decisions more accessible.

Additionally, we’re looking forward to learning how Optimizely plans to leverage Opal, their smart AI assistant, in conjunction with the Netspring integration. With Opal’s capabilities, there’s potential to further enhance data analysis, providing even more powerful insights across the entire Optimizely platform.

What’s Next for Netspring and Optimizely?

Right now, Netspring’s analytics and reporting capabilities are primarily available for Optimizely’s experimentation and personalization tools. However, it’s easy to envision these features expanding to include content analytics, commerce insights, and deeper customer segmentation capabilities. As these tools evolve, companies will have even more ways to leverage the power of big data.

A Very Smart Move by Optimizely

Incorporating Netspring into the Optimizely One platform is a clear signal that Optimizely is committed to building a future-proof analytics and optimization platform. With this acquisition, they are well-positioned to help companies leverage omnichannel analytics to drive business results.

At Perficient, an Optimizely Premier Platinum Partner, we’re already working with many organizations to develop these types of advanced analytics strategies. We specialize in big data analytics, data science, business intelligence, and artificial intelligence (AI), and we see firsthand the value that comprehensive data solutions provide. Netspring’s capabilities align perfectly with the needs of organizations looking to drive growth and gain deeper insights through a single source of truth.

Ready to leverage omnichannel analytics with Optimizely?

Start with a complimentary assessment to receive tailored insights from our experienced professionals.

Connect with a Perficient expert today!
Contact Us

Smart Manufacturing, QA, Big Data, and More at The International Manufacturing Technology Show
https://blogs.perficient.com/2024/09/19/smart-manufacturing-qa-big-data-and-more-at-the-international-manufacturing-technology-show/
Thu, 19 Sep 2024

For my first time attending the International Manufacturing Technology Show (IMTS), I must say it did not disappoint. This incredible event in Chicago happens every two years and is massive in size, taking up every main hall in McCormick Place. It was a combination of technology showcases, featuring everything from robotics to AI and smart manufacturing.

As a Digital Strategy Director at Perficient, I was excited to see the latest advancements on display representing many of the solutions that our company promotes and implements at the leading manufacturers around the globe. Not to mention, IMTS was the perfect opportunity to network with industry influencers as well as technology partners.

Oh, the People You Will Meet and Things You Will See at IMTS

Whenever you go to a show of this magnitude, you're bound to run into someone you know. I was fortunate to experience the show with several colleagues, and a few of us got to meet our Amazon Web Services (AWS) account leaders as well as our contacts at Google and Microsoft.

Google

The expertise of the engineers at each demonstration was truly amazing, specifically at one Robotic QA display. This robotic display was taking a series of pictures of automobile doors with the purpose of looking for defects. The data collected would go into their proprietary software for analysis and results. We found this particularly intriguing because we had been presented with similar use cases by some of our customers. We were so engrossed in talking with the engineers that our half-hour-long conversation felt like only a minute or two before we had to move on.

 

 

 

Robotic manufacturing on display

After briefly stopping to grab a pint (excuse me, picture) of the robotic bartender, we made our way to the Smart Manufacturing live presentation on the main stage. The ultra-tech companies presented explanations of how they were envisioning the future with Manufacturing 5.0 and digital twins, featuring big data as a core component. It was reassuring to hear this, considering that it's a strength of ours, reinforcing the belief that we need to continue focusing on these types of use cases. Along with big data, we should stay the course with trends shaping the industry like Smart Manufacturing, which at its root is a combination of operations management, cloud, AI, and technology.

Smart Manufacturing Presentation at IMTS

Goodbye IMTS, Hello Future Opportunities with Robotics, AI, and Smart Manufacturing

Overall, IMTS was certainly a worthwhile investment. It provided a platform to connect with potential partners, learn about industry trends, and strengthen our relationships with technology partners. As we look ahead to future events, I believe that a focused approach, leveraging our existing partnerships and adapting to the evolving needs of the manufacturing industry, will be key to maximizing our participation.

If you'd like to discuss these takeaways from IMTS Chicago 2024 at greater depth, please be sure to connect with our manufacturing experts.

Accelerate Cloud Migration with AWS OLA
https://blogs.perficient.com/2024/09/03/accelerate-cloud-migration-with-aws-ola/
Tue, 03 Sep 2024

In the wake of VMware’s recent license cost increase under Broadcom’s new pricing model, many enterprises are facing the pressing need to reevaluate their IT strategies. For those reliant on VMware’s virtualization technologies, the cost hike poses a significant challenge to maintaining budgetary control while continuing to drive digital transformation efforts.

Rather than simply absorbing these increased expenses, businesses now have a prime opportunity to explore more cost-effective and future-proof solutions. Perficient and Amazon Web Services (AWS), a robust and versatile cloud platform, can help organizations not only manage but also optimize their IT spending.

Why AWS?

AWS stands out as a premier choice for enterprises seeking to transition from traditional VMware environments to the cloud. Amazon Web Services Optimization and Licensing Assessment (AWS OLA) evaluates your third-party licensing costs to help you right-size your resources, reduce costs, and explore flexible licensing options. With its comprehensive suite of cloud-native services, AWS offers unparalleled flexibility, scalability, and cost-efficiency, enabling businesses to innovate and grow without being constrained by rising license fees.

AWS Finance Data shows that customers who used OLA benefited from a 36% reduction in total cost of ownership (TCO). Licensing is a critical yet often overlooked factor in cloud migration decisions. The cost associated with commercial licenses and the specific terms can significantly impact the total cost of ownership (TCO). A 2023 AWS study of 439 customers, encompassing over 300,000 servers, revealed that factoring in licensing considerations along with utilization optimization resulted in an average potential savings of 25.8%. This highlights the importance of a comprehensive approach to cloud migration, where licensing plays a key role in achieving substantial cost reductions.

 

How Perficient Can Help

As an AWS Advanced Tier Services Partner with the Migration Consulting Competency, Perficient is uniquely positioned to perform your AWS Optimization and Licensing Assessment (OLA). Our deep expertise and proven methodologies ensure a thorough evaluation of your current infrastructure, delivering actionable insights to optimize your AWS environment and reduce costs. With our extensive experience and strategic approach, Perficient helps you navigate the complexities of AWS licensing, ensuring your transition to the cloud is seamless and cost-effective.

 

Turning VMware Cost Hikes into Cloud-Driven Success with AWS and Perficient

Perficient is your trusted partner in this transition, offering the expertise, tools, and support needed to successfully migrate to AWS. Together, we can transform this challenge into a strategic advantage, positioning your business for long-term success in the cloud era.

For more information on how Perficient can help you transition to AWS, contact us today. Let’s embark on this journey to a more agile, efficient, and cost-effective IT future.

 

Sources:

https://aws.amazon.com/optimization-and-licensing-assessment/

https://aws.amazon.com/blogs/mt/reduce-software-licensing-costs-with-an-aws-optimization-and-licensing-assessment/

How to Navigate the VMware License Cost Increase
https://blogs.perficient.com/2024/08/13/how-to-navigate-the-vmware-license-cost-increase/
Tue, 13 Aug 2024

VMware (Broadcom) has discontinued their VMware partner resell program. This announcement forces customers to move forward with one of three options:

  1. Buy directly from VMware,
  2. Migrate workloads to another hypervisor, or
  3. Make a platform change.

For many VMware customers, the price changes were abrupt, while others have the luxury of taking a little more time to explore their options.

 

 

The Cloud Advantage

As organizations reassess their IT strategies, the shift toward cloud architectures is becoming increasingly attractive. Cloud solutions, built specifically for the cloud environment, offer unparalleled flexibility, scalability, and cost efficiency. They allow businesses to take full advantage of modern infrastructure capabilities without being locked into the escalating costs of traditional on-premises solutions.

Making the Transition

At Perficient, we understand the complexities and challenges associated with such a significant transition. Our expertise in cloud consulting and implementation positions us as the ideal partner to help you navigate this critical shift. Our consultants have developed a comprehensive and flexible plan to assist you in maximizing the efficiency of your platform change.

Comprehensive Assessment and Strategy Development

Our team begins with a thorough assessment of your current IT infrastructure, evaluating the specific impact of the VMware cost increase on your operations. We then develop a tailored strategy that aligns with your business goals, ensuring a smooth and cost-effective transition to cloud solutions.

Migration Services

Moving from a VMware-based infrastructure to a cloud environment can be complex. Our migration services ensure a seamless transition with minimal disruption to your business operations. We employ best practices and proven methodologies to migrate your workloads efficiently and securely.

Ongoing Support and Operational Efficiency

Post migration, we provide ongoing support to ensure your cloud environment operates at peak efficiency. Our team continuously monitors and optimizes your infrastructure, helping you to maximize the return on your cloud investment.

Cost Management and Optimization

One of the key advantages of cloud migration is the potential for significant cost savings and licensing cost avoidance. Our cost management services help you to leverage cloud features to reduce expenses, such as auto-scaling, serverless computing, and efficient resource allocation.

Embracing the Cloud

Perficient stands ready to guide you through this transition, providing the expertise, tools, and support necessary to successfully navigate this change. Together, we can turn this challenge into a transformative opportunity for your business.

To learn more about how these changes might impact your organization and explore our detailed strategy for a smooth transition, visit our cloud page for further insights. Our team is here to help you every step of the way.

AWS Cross-Account Best Practices
https://blogs.perficient.com/2024/08/08/aws-cross-account-best-practices/
Thu, 08 Aug 2024

Implementing AWS cross-account access is crucial to managing a secure and scalable cloud environment. This setup simplifies the management process, enhances security by adhering to the principle of least privilege, streamlines operations by reducing the need to switch accounts, and facilitates compliance and auditing by centralizing access and control.

Imagine GlobalTech’s website is hosted on EC2 instances in Account A. The company’s DNS management team, responsible for updating and managing DNS records, operates within Account B using Amazon Route 53. By configuring cross-account access, the DNS team can update DNS records to reflect changes in the IP addresses of the EC2 instances or manage traffic routing without needing direct access to them. This centralization improves efficiency, enhances security, and ensures that DNS configurations are managed consistently across the company’s infrastructure.

Let’s understand this using the examples below:

Scenario: Cross-Account Access for Route 53 DNS Management

Business Context

Your company, “GlobalTech,” has a multi-account AWS environment managed through AWS Organizations. The company uses:

  • Account A: Hosting EC2 instances for various applications.
  • Account B: Managing DNS records using Amazon Route 53.

You want to configure cross-account access so that Route 53 in Account B can manage DNS records for EC2 instances running in Account A. This allows your DNS management team to handle DNS configurations centrally without requiring direct access to EC2 instances.

  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

Prerequisites

  • AWS Organizations: GlobalTech uses AWS Organizations with an organizational unit (OU) structure.
  • IAM Roles and Policies: Proper IAM roles and policies must be configured to allow cross-account access.
  • Route 53 Hosted Zone: Hosted zones are set up in Account B.

Configuration Overview

Steps to Implement the Scenario

  1. Create IAM Role in Account A (EC2 Instances Account):
    • Log in to the AWS Management Console for Account A.
    • Navigate to IAM, click on Roles, then Create role.
    • Select Trusted Entity:
      • Choose Another AWS account.
      • Enter the Account ID of Account B (Route 53 account).
  2. Add Permissions:
    • Attach the following policies to the role:
      • AmazonRoute53FullAccess: Grants full access to Route 53.
      • Custom policy for specific permissions, if needed.
  3. Role Name:
    • Name the role Route53CrossAccountRole.
  4. Create Role.

Update Trust Relationship in Account A Role

  • Navigate to the Trust relationships tab for the Route53CrossAccountRole.
  • Edit the trust policy to allow Account B to assume the role:

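Since the original screenshot of the trust policy is not reproduced here, the JSON below is a representative trust policy for Route53CrossAccountRole; the 222222222222 account ID is a placeholder for Account B's actual account ID.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}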

Create IAM Role in Account B (Route 53 Account)

  1. Log in to AWS Management Console for Account B.
  2. Navigate to IAM and click on Roles, then Create role.
  3. Select Trusted Entity:
    • Choose Another AWS account.
    • Enter the Account ID of Account A (EC2 account).
  4. Add Permissions:
    • Attach policies needed for Route 53 management (if additional permissions are required).
  5. Role Name:
    • Name the role EC2ManagementRole.
  6. Create Role.

Configure Route 53 Hosted Zone in Account B

  1. Log in to AWS Management Console for Account B.
  2. Navigate to Route 53 and create or select the existing hosted zone.
  3. Add DNS records pointing to the public IP addresses of the EC2 instances in Account A.

Assign IAM Role to EC2 Instances in Account A

  1. Log in to AWS Management Console for Account A.
  2. Navigate to EC2, and select the instances you want to associate with the role.
  3. Click on Actions > Security > Modify IAM Role.
  4. Select the Route53CrossAccountRole and save changes.

Automate DNS Updates (Optional)

  • Use AWS Lambda or a similar service to automatically update DNS records in Route 53 when EC2 instances are launched or terminated in Account A.
  • Ensure the Lambda function assumes the Route53CrossAccountRole to make the necessary API calls to Route 53 (see the CLI sketch below).
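As a rough sketch of what such automation does under the hood, the AWS CLI sequence below assumes the cross-account role and then upserts an A record in the hosted zone. The role ARN, hosted zone ID, record name, and IP address are all placeholders.

# Assume the cross-account role (the account ID here is a placeholder)
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/Route53CrossAccountRole \
  --role-session-name dns-update

# Using the temporary credentials returned above, upsert the DNS record
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.globaltech.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'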

Testing and Verification

  1. Verify Role Assumption: Ensure that Account B can assume the Route53CrossAccountRole in Account A.
  2. Update DNS Records: Try updating DNS records in Route 53 for EC2 instances in Account A.
  3. Check DNS Resolution: Verify that the DNS records are correctly resolving to the EC2 instances in Account A.

Multiple Scenarios

  • Use Case: Developers in Account A need temporary access to resources in Account B, such as an S3 bucket.
  • Use Case: The finance team in Account A needs access to billing information for multiple AWS accounts
  • Use Case: Enable network communication between VPCs in different AWS accounts

Best Practices

To ensure secure and efficient cross-account access management in AWS, follow these best practices:

1. Use AWS Organizations

  • Centralized Management: Use AWS Organizations to manage multiple accounts centrally, allowing for better control and governance.
  • Service Control Policies (SCPs): Apply SCPs to enforce permission boundaries and ensure accounts only have the necessary permissions.

2. Implement IAM Roles

  • Cross-Account Roles: Create IAM roles for cross-account access instead of using root accounts or IAM users.
  • Role Assumption: Set up trust relationships to allow users or services in one account to assume roles in another account using the sts:AssumeRole API.

3. Principle of Least Privilege

  • Minimum Permissions: Grant only the permissions necessary for users, roles, and services to perform their tasks.
  • Fine-Grained Policies: Use detailed IAM policies to control access at a granular level.

4. Enable Multi-Factor Authentication (MFA)

  • MFA for Sensitive Operations: To add an extra layer of security, require MFA for roles and users performing sensitive operations.
  • MFA Enforcement: Use IAM policies to enforce MFA for specific actions or API calls.

5. Centralize Logging and Monitoring

  • AWS CloudTrail: Enable CloudTrail in all accounts to capture and log all API calls and user activities.
  • Centralized Logging: Store CloudTrail logs in a centralized S3 bucket for easier analysis and monitoring.
  • Amazon GuardDuty: Enable GuardDuty for continuous threat detection and monitoring.

6. Establish a Secure Network Architecture

  • VPC Peering and Transit Gateway: Use VPC peering or AWS Transit Gateway to enable secure and efficient network connectivity between accounts.

7. Regular Security Audits

  • Compliance Checks: Perform regular security audits and compliance checks to ensure cross-account access configurations meet security and compliance requirements.
  • Security Best Practices: Regularly update your knowledge of AWS security features and follow AWS security best practices.

8. Utilize AWS Trusted Advisor

  • Security Checks: Use AWS Trusted Advisor to perform security checks and receive recommendations for improving your security posture.
  • Review Recommendations: Regularly review and act on the recommendations provided by AWS Trusted Advisor

To ensure secure and efficient cross-account access management in AWS, implement AWS Organizations for centralized management, use IAM roles for granting cross-account access, enforce the principle of least privilege, enable MFA for sensitive operations, centralize logging and monitoring with CloudTrail and GuardDuty, and utilize AWS Resource Access Manager (RAM) for secure resource sharing. Automate account management, establish a secure network architecture, perform regular security audits, and adopt a tagging strategy for resource organization. Following these best practices will enhance security, streamline management, and maintain compliance in your AWS environment.

The Ultimate Guide for Cutting AWS Costs
https://blogs.perficient.com/2024/07/30/the-ultimate-guide-for-cutting-aws-costs/
Wed, 31 Jul 2024

The AWS cloud has become a core requirement of the fast-evolving infrastructure that today's IT businesses need. Clients want to move to the cloud because of its higher availability and durability, but once there, they are constantly looking for ways to make meaningful cuts to their monthly and yearly AWS bills.

In this article we will examine the AWS resources and features that, used well, can help you minimize those bills.

1) AWS Cost allocation Tags

Using AWS tags, we can track resources that relate to each other and enable detailed cost reports. Activated cost allocation tags appear as columns in the billing reports.

AWS-generated cost allocation tags:
  • Automatically applied by AWS to resources you create if you have not tagged them yourself.
  • They start with the prefix "aws:", e.g. aws:createdBy.
  • They are not applied to resources created before activation.

User-defined cost allocation tags:
  • Defined by the user.
  • They start with the prefix "user:".

These cost allocation tags only show up in the Billing and Cost Management console. Generally, it may take up to 24 hours for the tags to appear in the reports.

To Activate Cost Allocation Tags:

Go to the AWS Billing and Cost Management console.

Select “Cost Allocation Tags” under “Billing preferences.”

Activate the tags you want to use for cost allocation by checking them.

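For teams that prefer the command line, the sketch below shows the same idea: tag a resource, then activate the tag key for cost allocation through the Cost Explorer API. The instance ID and tag values are placeholders.

# Tag an EC2 instance so the tag can later be used for cost allocation
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Project,Value=analytics

# Activate the tag key as a cost allocation tag (it can take up to 24 hours to appear in reports)
aws ce update-cost-allocation-tags-status \
  --cost-allocation-tags-status TagKey=Project,Status=Active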

2) Trusted Advisor

Trusted Advisor provides a high-level assessment of your AWS environment. It evaluates your account and proposes improvements in areas such as cost optimization, resilience, reliability, performance, security, service limits, and operational excellence. It is available to all AWS customers and offers core checks and basic recommendations out of the box.

To take full advantage of the service, you need to be on a Business or Enterprise support plan. You can also set up automatic reports and alerts for specific checks to stay informed about your AWS environment's health and compliance with best practices.

In the AWS Management Console, Trusted Advisor can be found under the Support section.

Trusted Advisor's service limit checks monitor your usage against service limits and surface recommendations. To raise a limit, you can open a case manually from the AWS Support Center or use the Service Quotas service.

3) AWS Service Quotas

AWS Service Quotas, or limits, define the maximum number of resources or operations allowed within an AWS account. These quotas help ensure the stability and security of the AWS environment while providing predictable performance. AWS automatically sets these quotas, but many can be adjusted upon request.

You can set up CloudWatch to monitor usage against quotas and create alarms that alert you when you are nearing a quota limit.

Managing Service Quotas

  • AWS Management Console: Use the Service Quotas dashboard to view and manage your service quotas.


  • AWS CLI: Use commands like aws service-quotas list-service-quotas to list quotas (see the example below).
  • AWS SDKs: Use AWS SDKs to programmatically retrieve quota information.
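As a quick illustration of the CLI route, the commands below list the EC2 quotas in the current region and request an increase for one of them; the quota code shown is only an example and should be looked up with the list command first.

# List the quotas that apply to EC2 in the current region
aws service-quotas list-service-quotas --service-code ec2

# Request an increase for a specific quota (quota code and value are example placeholders)
aws service-quotas request-service-quota-increase \
  --service-code ec2 \
  --quota-code L-1216C47A \
  --desired-value 64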

 

Categories:

Account Quotas: Limits that apply to your entire AWS account.

Service-specific Quotas: Limits that apply to specific services like EC2, S3, RDS, etc.

 

Common AWS Service Quotas

EC2:
  • Running On-Demand Instances: varies by instance type; for example, 20 for general-purpose instances.
  • Spot Instances: there is a cap on how many Spot Instances you can run.
  • Elastic IP Addresses: 5 per region.

RDS:
  • DB Instances: 40 DB instances per account.
  • Storage: 100 TB of total storage across all DB instances.
  • Snapshots: 100 manual snapshots per account.

S3:
  • Buckets: 100 per account by default.
  • Object Size: 5 TB maximum per object.

4) AWS Saving Plans

Savings Plans are a flexible pricing model that offers significant savings over On-Demand pricing in exchange for committing to a consistent amount of usage (measured in $/hour) for a one- or three-year term.

Compute Savings Plans:
  • The most flexible and cost-effective option.
  • Apply to any EC2 instance regardless of region, instance family, operating system, or tenancy.
  • Can also be used with AWS Lambda and Fargate.

EC2 Instance Savings Plans:
  • Offer maximum savings of up to 72%.
  • Specific to an individual instance family in a chosen region.

Reserved Instances (RIs)

When compared to On-Demand pricing, Reserved Instances offer a substantial reduction of up to 75%. You can reserve capacity for your EC2 instances with them, but they do require a one- or three-year commitment.

Types of Reserved Instances:

Standard: Provide the largest discount; ideal for steady-state usage.
Convertible: Offer savings while allowing changes to instance types, operating systems, and tenancies.

5) S3 – Intelligent Tiering

Amazon S3 Intelligent-Tiering is designed to automatically optimize storage costs as data access patterns change. Without affecting performance or adding operational overhead, it moves objects between frequent and infrequent access tiers based on how they are accessed.

There are no retrieval fees, regardless of the access tier. Monitoring objects and moving them across tiers is covered by a small monthly monitoring and automation charge per object, and the storage class offers durability and availability comparable to the other Amazon S3 storage classes.

Enabling S3 Intelligent-Tiering


AWS Management Console: Navigate to the S3 bucket, select the objects, and choose “Change storage class” to move objects to S3 Intelligent-Tiering.

Alternatively, set up a lifecycle rule to transition objects to Intelligent-Tiering.

AWS CLI: Use the "aws s3 cp" or "aws s3 mv" commands with the Intelligent-Tiering storage class to move objects, as in the example below.
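For example, this command uploads an object straight into the Intelligent-Tiering storage class; the bucket and key names are placeholders.

# Upload (or copy) an object directly into Intelligent-Tiering
aws s3 cp ./report.csv s3://my-example-bucket/reports/report.csv \
  --storage-class INTELLIGENT_TIERING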

6) AWS Budgets

AWS Budgets is a cost management tool that lets you set custom budgets to track your AWS costs and usage. Its alerts notify you when you exceed, or are forecast to exceed, your budget limits, so you can manage your AWS spending effectively.

Custom Budgets – Create budgets based on cost, usage, Reserved Instance (RI) utilization or coverage, and Savings Plans utilization or coverage. Budgets can be set for different time frames, such as monthly, quarterly, and annual.

Alerts and Notifications – Receive warnings via email or Amazon SNS when actual or forecasted usage exceeds your budget. You can set up multiple thresholds to be alerted at different stages for the same budget.

Creating a Budget:

Open the AWS Budgets Dashboard.

Click on "Create a budget," fill in the required details, and click Create budget. A CLI alternative is sketched below.
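If you prefer to script the same thing, the sketch below uses the AWS CLI; the account ID is a placeholder, and budget.json and notifications.json are files you would author to match the AWS Budgets API schema.

# budget.json (example contents): a 500 USD monthly cost budget
# {
#   "BudgetName": "monthly-cost-budget",
#   "BudgetLimit": { "Amount": "500", "Unit": "USD" },
#   "TimeUnit": "MONTHLY",
#   "BudgetType": "COST"
# }

aws budgets create-budget \
  --account-id 111111111111 \
  --budget file://budget.json \
  --notifications-with-subscribers file://notifications.json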


7) AWS Compute Optimizer

AWS Compute Optimizer helps you optimize resources such as EC2 instances, Auto Scaling groups, EBS volumes, and Lambda functions. Based on your usage patterns, it offers recommendations to improve performance, cut costs, and increase efficiency.

EC2 Instances: recommends optimal instance types based on CPU, memory, and network utilization.

Auto Scaling Groups: suggests the ideal instance sizes and types for groups.

EBS Volumes: makes recommendations for improving the types and settings of EBS volumes.

Lambda Functions: offers suggestions for optimizing memory size and concurrency.

It also integrates with services like Amazon CloudWatch, AWS Budgets, and AWS Cost Explorer for thorough cost management and monitoring.

Enable AWS Compute Optimizer:

Go to the AWS Compute Optimizer Console.

Click “Get started” and follow the instructions to enable the service.
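The service can also be enabled and queried from the CLI; the sketch below opts the account in and then pulls EC2 rightsizing recommendations once enough metrics have accumulated.

# Opt the account in to Compute Optimizer
aws compute-optimizer update-enrollment-status --status Active

# After metrics have accumulated, retrieve EC2 rightsizing recommendations
aws compute-optimizer get-ec2-instance-recommendations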


Example Use Case

EC2 Instance Optimization – To reduce expenses, find underutilized EC2 instances and downsize them. To improve performance, find over-utilized instances and upgrade them.

Auto Scaling Group Optimization – To guarantee economical and effective scaling, optimize instance sizes and types within Auto Scaling groups.

Conclusion

We have now covered seven of the most effective ways to reduce your AWS bill. In most of these areas, CloudWatch alarms can notify you when a threshold is reached, helping you avoid unnecessary charges and make the most of your available resources.

Understanding AWS Lambda Execution Role
https://blogs.perficient.com/2024/07/23/understanding-aws-lambda-execution-role/
Wed, 24 Jul 2024

As we know, AWS Lambda is a serverless computing service that lets you run code without provisioning or managing servers. However, for a Lambda function to interact with other AWS services or resources, it needs permissions. This is where the AWS Lambda execution role comes into the picture.

An execution role is an AWS Identity and Access Management (IAM) role that Lambda assumes when it runs your function. This role grants the function the permissions it needs to access AWS services and resources securely.

Why Is a Lambda Execution Role Required?

When you create a Lambda function, it needs permissions to access other AWS resources like S3 buckets, DynamoDB tables, or CloudWatch logs. Instead of embedding credentials directly in your code (which is insecure and impractical), you assign an execution role to the Lambda function. This role defines the permissions the function has when it is invoked.

Creating an Execution Role Using AWS Management Console (GUI)

  1. Sign in to the AWS Management Console:
  2. Create a New Role:
    • In the navigation pane, choose “Roles” and then “Create role.”
    • Choose “AWS service” as the type of trusted entity.
    • Choose “Lambda” from the list of services.


    • Click “Next: Permissions.”


3. Attach Permissions Policies:

    • You can either choose existing policies or can create a custom policy.
    • Click "Next: Tags" (optional), then "Next: Review."

  4. Review and Create:

    • Enter a role name.
    • Review the role and click "Create role."
  5. Attach the Role to Your Lambda Function:
    • Open the Lambda console.
    • Select your function.
    • Under “Execution role,” choose “Use an existing role.”


    • Select the role you just created and click “Save.”

Creating an Execution Role Using AWS CLI

  1. Create the Trust Policy:

cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

2. Create the Role:

aws iam create-role --role-name LambdaExecutionRole --assume-role-policy-document file://policy.json

3. Attach Permissions Policies:

aws iam attach-role-policy --role-name LambdaExecutionRole --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

4. Attach the Role to Your Lambda Function:

aws lambda update-function-configuration --function-name YourLambdaFunctionName --role arn:aws:iam::YourAccountID:role/LambdaExecutionRole
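Instead of a broad managed policy such as AmazonS3ReadOnlyAccess, you can attach a narrower custom policy. The JSON below is a minimal sketch that only lets the function write its own CloudWatch Logs and read objects from one bucket; the bucket name is a placeholder.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}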

Using IAM Access Analyzer to Identify Required Permissions

IAM Access Analyzer helps you identify the permissions your Lambda function needs. It analyzes your function’s activities and generates a policy that grants only the required permissions.

  1. Enable Access Analyzer: In the IAM console, open Access Analyzer for your account or organization.
  2. Generate a Policy: From the Lambda function's execution role, use "Generate policy" to have Access Analyzer build a policy from the role's recent CloudTrail activity.
  3. Review and Attach: Compare the generated policy with the permissions currently attached, remove anything unused, and attach the refined policy.

Conclusion

Creating an AWS Lambda execution role is essential for granting your Lambda function the necessary permissions to interact with other AWS services securely. Whether you prefer using the AWS Management Console or the CLI, the process is straightforward. Additionally, IAM Access Analyzer will help to refine your policies to follow the principle of least privilege, enhancing the security of your applications.

By following these steps, you can ensure that your Lambda functions have the appropriate permissions while maintaining a secure and manageable environment.

Powering AI-Driven Insights and Experiences: A Wealth Management Success Story
https://blogs.perficient.com/2024/07/18/driving-the-future-of-wealth-management/
Thu, 18 Jul 2024

Wealth management is evolving rapidly, driven by generational shifts, changing advisor roles, new business models, regulatory demands, and a growing preference for low-cost passive products.

In response to these changes, our fintech client partnered with one of the world’s largest financial institutions to develop a next-generation, open-source, front-to-back wealth management platform.

This innovative platform aims to:

  • Boost efficiencies and revenue opportunities for financial advisors
  • Create richer client experiences
  • Digitize enterprise-wide operations

Unleashing the Power of AI and Real-Time Insights

We architected and platformed a highly connected, extreme-scale data solution powered by AWS that unlocks actionable, real-time insights from billions of records and integrated data sources. AI-enabled predictions equip wealth management partners to support clients more efficiently and build a competitive advantage.


AI delivers proactive insight on potential risk profiles and incentives based on internal and market dynamics to accelerate responsive client services and performance reporting.


Read the Full Story: Speeding Insights and Powering Investment Experiences

The Transformative Power of Digital Expertise

In a rapidly evolving wealth management landscape, staying ahead of the curve requires innovation, agility, and the right partnerships. Whether you’re looking to optimize your wealth management platform or embark on a comprehensive digital transformation journey, we’re here to help you succeed.

We've been trusted by 16 of the top 20 global wealth and asset management firms, and as an AWS Advanced Consulting Partner, we help firms tackle their toughest cloud challenges.

Interested in a deeper dive? Contact us today to jump-start your digital transformation journey.

Revolutionizing OpenAI Chatbot UI Deployment with DevSecOps
https://blogs.perficient.com/2024/07/05/revolutionizing-openai-chatbot-ui-deployment-with-devsecops/
Fri, 05 Jul 2024

In the contemporary era of digital platforms, capturing and maintaining user interest stands as a pivotal element determining the triumph of any software. Whether it’s websites or mobile applications, delivering engaging and tailored encounters to users holds utmost significance. In this project, we aim to implement DevSecOps for deploying an OpenAI Chatbot UI, leveraging Kubernetes (EKS) for container orchestration, Jenkins for Continuous Integration/Continuous Deployment (CI/CD), and Docker for containerization.

What is ChatBOT?

A ChatBOT is an artificial intelligence-driven conversational interface that draws from vast datasets of human conversations for training. Through sophisticated natural language processing methods, it comprehends user inquiries and furnishes responses akin to human conversation. By emulating the nuances of human language, ChatBOTs elevate user interaction, offering tailored assistance and boosting engagement levels.

What Makes ChatBOTs a Compelling Choice?

The rationale behind opting for ChatBOTs lies in their ability to revolutionize user interaction and support processes. By harnessing artificial intelligence and natural language processing, ChatBOTs offer instantaneous and personalized responses to user inquiries. This not only enhances user engagement but also streamlines customer service, reduces response times, and alleviates the burden on human operators. Moreover, ChatBOTs can operate round the clock, catering to users’ needs at any time, thus ensuring a seamless and efficient interaction experience. Overall, the adoption of ChatBOT technology represents a strategic move towards improving user satisfaction, operational efficiency, and overall business productivity.

Key Features of a ChatBOT Include:

  1. Natural Language Processing (NLP): ChatBOTs leverage NLP techniques to understand and interpret user queries expressed in natural language, enabling them to provide relevant responses.
  2. Conversational Interface: ChatBOTs utilize a conversational interface to engage with users in human-like conversations, facilitating smooth communication and interaction.
  3. Personalization: ChatBOTs can tailor responses and recommendations based on user preferences, past interactions, and contextual information, providing a personalized experience.
  4. Multi-channel Support: ChatBOTs are designed to operate across various communication channels, including websites, messaging platforms, mobile apps, and voice assistants, ensuring accessibility for users.
  5. Integration Capabilities: ChatBOTs can integrate with existing systems, databases, and third-party services, enabling them to access and retrieve relevant information to assist users effectively.
  6. Continuous Learning: ChatBOTs employ machine learning algorithms to continuously learn from user interactions and improve their understanding and performance over time, enhancing their effectiveness.
  7. Scalability: ChatBOTs are scalable and capable of handling a large volume of concurrent user interactions without compromising performance, ensuring reliability and efficiency.
  8. Analytics and Insights: ChatBOTs provide analytics and insights into user interactions, engagement metrics, frequently asked questions, and areas for improvement, enabling organizations to optimize their ChatBOT strategy.
  9. Security and Compliance: ChatBOTs prioritize security and compliance by implementing measures such as encryption, access controls, and adherence to data protection regulations to safeguard user information and ensure privacy.
  10. Customization and Extensibility: ChatBOTs offer customization options and extensibility through APIs and development frameworks, allowing organizations to adapt them to specific use cases and integrate additional functionalities as needed.

Through the adoption of DevSecOps methodologies and harnessing cutting-edge technologies such as Kubernetes, Docker, and Jenkins, we are guaranteeing the safe, scalable, and effective rollout of ChatBOT. This initiative aims to elevate user engagement and satisfaction levels significantly.

I extend our heartfelt appreciation to McKay Wrigley, the visionary behind this project. His invaluable contributions to the realm of DevSecOps have made endeavors like the ChatBOT UI project achievable.

Pipeline Workflow

(Diagram: chatbot UI pipeline workflow)

Let's start building our pipelines for deploying the OpenAI Chatbot application. I will be creating two pipelines in Jenkins:

  1. Creating an infrastructure using terraform on AWS cloud.
  2. Deploying the Chatbot application on EKS cluster node.

Prerequisite: Jenkins Server configured with Docker, Trivy, Sonarqube, Terraform, AWS CLI, Kubectl.

Once we have established and configured a Jenkins server equipped with all the necessary tools for a DevSecOps deployment pipeline (by following my previous blog), we can start building our DevSecOps pipeline for the OpenAI chatbot deployment.

The first thing we need to do is configure the Terraform remote backend.

  1. Create a S3 bucket with any name.
  2. Create a DynamoDB table with name “Lock-Files” and Partition Key as “LockID”.
  3. Update the S3 bucket name and DynamoDB table name in the backend.tf file, which is in the EKS-TF folder of the GitHub repo (a sketch of the file is shown below).
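For reference, the backend.tf in the EKS-TF folder would then look roughly like this sketch; the bucket name, key, and region are placeholders to replace with your own values.

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # S3 bucket created in step 1
    key            = "eks/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "Lock-Files"                   # DynamoDB table created in step 2
    encrypt        = true
  }
}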

Create Jenkins Pipeline

Let's log in to our Jenkins server console now that the prerequisites are complete. Click on "New Item," give it a name, select Pipeline, and click OK.

I want to create this pipeline with build parameters so that apply or destroy can be selected at build time. Add the parameter to the job as shown in the image below.

Terraform Parameter

Let's add the pipeline; the Definition will be Pipeline script.

pipeline{
    agent any
    stages {
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/sunsunny-hub/Chatbot-UIv2.git'
            }
        }
        stage('Terraform version'){
             steps{
                 sh 'terraform --version'
             }
        }
        stage('Terraform init'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform init'
                   }      
             }
        }
        stage('Terraform validate'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform validate'
                   }      
             }
        }
        stage('Terraform plan'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform plan'
                   }      
             }
        }
        stage('Terraform apply/destroy'){
             steps{
                 dir('EKS-TF') {
                      sh 'terraform ${action} --auto-approve'
                   }      
             }
        }
    }
}

Click Apply and Save, then run Build with Parameters and select the action "apply."

In the stage view, provisioning takes up to about 10 minutes.

Blue ocean output

Terraform Pipe

Check in your AWS console whether the EKS cluster was created.

Awscluster

An EC2 instance is created for the node group.

Nodeinstace

Now let's create a new pipeline for the chatbot clone. This pipeline will deploy the chatbot application in a Docker container and, after a successful deployment, deploy the same Docker image to the EKS cluster provisioned above.

Under the Pipeline section, provide the details below.

Definition: Pipeline script from SCM
SCM : Git
Repo URL : Your GitHub Repo 
Credentials: Created GitHub Credentials
Branch: Main
Path: Your Jenkinsfile path in GitHub repo.

Deploy Pipe1

Deploy Pipe2

Click Apply and Save, then click Build. Upon successful execution, all stages show as green.

Deploy Output

Sonar- Console:

Sonar Result

You can see that the report has been generated and the status shows as failed. You can ignore this for now in this POC, but in a real project all of these quality profiles/gates need to pass.

Dependency Check:

Dependency Check

Trivy File scan:

Trivyfile Scan

Trivy Image Scan:

Trivy Image Scan

Docker Hub:

Dockerhub

Now access the application on port 3000 of the Jenkins server EC2 instance's public IP.

Note: Ensure that port 3000 is permitted in the Security Group of the Jenkins server.
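If the port is not already open, a rule like the one below can be added to the security group; the group ID is a placeholder, and you would ideally use a tighter CIDR than 0.0.0.0/0.

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3000 \
  --cidr 0.0.0.0/0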

Chatbotui Docker

Click on openai.com (shown in blue).

This will redirect you to the ChatGPT login page where you can enter your email and password. In the API Keys section, click on “Create New Secret Key.”

Apikey

Give the key a name and copy it. Go back to the chatbot UI we deployed; at the bottom of the page you will see the OpenAI API key field. Paste the generated key and click the save (check mark) button.

Apikey2

The UI looks like this:

Chatbotui Docker Apikey

Now you can ask questions and test it.

Chatbotui Docker Apikey2

Deployment on EKS

Now we need to add a credential for the EKS cluster, which will be used to deploy the application to the cluster node. SSH into the Jenkins server and run this command to add the cluster context:

aws eks update-kubeconfig --name <clustername> --region <region>

It will generate a Kubernetes configuration file. Navigate to the directory where the config file is located and copy its contents.

cd .kube
cat config

Save the copied configuration in your local file explorer at your preferred location and name it as a text file.

Kubeconfig

Next, in the Jenkins Console, add this file to the Credentials section with the ID “k8s” as a secret file.

K8s Credential

Finally, incorporate this deployment stage into your Jenkins file.

stage('Deploy to kubernetes'){
            steps{
                withAWS(credentials: 'aws-key', region: 'ap-south-1'){
                script{
                    withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8s', namespace: '', restrictKubeConfigAccess: false, serverUrl: '') {
                       sh 'kubectl apply -f k8s/chatbot-ui.yaml'
                  }
                }
            }
        }
      }

Now rerun the Jenkins pipeline.

Upon Success:

Eks Deploy

On the Jenkins server, run one of these commands to find the service:

kubectl get all
kubectl get svc   # either command shows the service

This will create a Classic Load Balancer on the AWS Console.

Loadbalancer

Loadbalancer Console

Copy the DNS name and paste it into your browser to use it.

Note: Follow the same process as before to get an OpenAI API key and add it to the chatbot UI to get responses.

Chatbotui Eks

The Complete Jenkins file:

pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node19'
    }
    environment {
        SCANNER_HOME=tool 'sonar-scanner'
    }
    stages {
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/sunsunny-hub/Chatbot-UIv2.git'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage("Sonarqube Analysis "){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Chatbot \
                    -Dsonar.projectKey=Chatbot '''
                }
            }
        }
        stage("quality gate"){
           steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token' 
                }
            } 
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.json"
            }
        }
        stage("Docker Build & Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){   
                       sh "docker build -t chatbot ."
                       sh "docker tag chatbot surajsingh16/chatbot:latest "
                       sh "docker push surajsingh16/chatbot:latest "
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image surajsingh16/chatbot:latest > trivy.json" 
            }
        }
        stage ("Remove container") {
            steps{
                sh "docker stop chatbot | true"
                sh "docker rm chatbot | true"
             }
        }
        stage('Deploy to container'){
            steps{
                sh 'docker run -d --name chatbot -p 3000:3000 surajsingh16/chatbot:latest'
            }
        }
        stage('Deploy to kubernetes'){
            steps{
                withAWS(credentials: 'aws-key', region: 'ap-south-1'){
                script{
                    withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8s', namespace: '', restrictKubeConfigAccess: false, serverUrl: '') {
                       sh 'kubectl apply -f k8s/chatbot-ui.yaml'
                  }
                }
            }
        }
        }
    }
    }

I hope you have successfully deployed the OpenAI Chatbot UI Application. You can also delete the resources using the same Terraform pipeline by selecting the action as “destroy” and running the pipeline.

Perficient Achieves AWS DevOps Competency
https://blogs.perficient.com/2024/06/04/perficient-achieves-aws-devops-competency/
Tue, 04 Jun 2024

Perficient is excited to announce our achievement in Amazon Web Services (AWS) DevOps Competency for AWS partners. This designation recognizes Perficient as an AWS partner that has demonstrated expertise in delivering DevSecOps solutions. This competency highlights Perficient’s ability to drive innovation, meet business objectives, and get the most out of your AWS services. 

What does this mean for Perficient? 

Achieving the AWS DevOps Competency status differentiates Perficient as an AWS Partner Network (APN) member that provides modern product engineering solutions designed to help enterprises adopt, develop, and deploy complex projects faster on AWS. To receive the designation, APN members must possess deep AWS expertise and deliver solutions seamlessly on AWS. 

This competency empowers our delivery teams to break down traditional silos, shorten feedback loops, and respond more effectively to changes, ultimately increasing speed to market by up to 75%.  

What does this mean for you? 

With our partnership with AWS, we can modernize our clients’ processes to improve product quality, scalability, and performance, and significantly reduce release costs by up to 97%. This achievement ensures that our CI/CD processes and IT governance are sustainable and efficient, benefiting organizations of any size.  

At Perficient, we strive to be the place where great minds and great companies converge to boldly advance business, and this achievement is a testament to that vision!  

ELT IS DEAD. LONG LIVE ZERO COPY.
https://blogs.perficient.com/2024/04/29/elt-is-dead-long-live-zero-copy/
Mon, 29 Apr 2024

Imagine a world where we can skip Extract and Load and just do our data Transformations, connecting directly to sources, no matter what data platform you use.

Salesforce has taken significant steps over the last 2 years with Data Cloud to streamline how you get data in and out of their platform and we’re excited to see other vendors follow their lead. They’ve gone to the next level today by announcing their more comprehensive Zero Copy Partner Network.

By using industry standards, like Apache Iceberg, as the base layer, it means it’s easy for ALL data ecosystems to interoperate with Salesforce. We can finally make progress in achieving the dream of every master data manager, a world where the golden record can be constructed from the actual source of truth directly, without needing to rely on copies.

This is also a massive step forward for our clients as they mature into real DataOps and continue beyond to full site reliability engineering operational patterns for their data estates. Fewer copies of data mean increased pipeline reliability, data trustability, and data velocity.

This new model is especially important for our clients who choose a heterogeneous ecosystem combining tools from many partners (maybe using Adobe for DXP and marketing automation, and Salesforce for sales and service). They often struggle to build consistent predictive models that can power them all, so their customers end up getting different personalization from different channels. When we can bring all the data together in the Lakehouse faster and simpler, it makes it possible to build one model that can be consumed by all platforms. This efficiency is critical to the practicality of adopting AI at scale.

Perficient is unique in our depth and history with Data + Intelligence, and our diversity of partners. Salesforce’s “better together” approach is aligned precisely with our normal way of working. If you use Snowflake, RedShift, Synapse, Databricks, or Big Query, we have the right experience to help you make better decisions faster with Salesforce Data Cloud.
