cloud Articles / Blogs / Perficient
https://blogs.perficient.com/tag/cloud/

Use Cases on AWS AI Services
https://blogs.perficient.com/2025/11/09/amazon-web-services-ai/
Sun, 09 Nov 2025

In today's AI-driven world, there is an ample number of AI tools that organizations can use to tackle diverse business challenges. In line with this, Amazon offers its own set of Amazon Web Services for AI and ML to address real-world needs.

This blog focuses on AWS services, but it also shows more broadly how AI and ML capabilities can be used to address various business challenges. To illustrate how these services can be leveraged, I have taken a few simple, straightforward use cases and mapped AWS solutions to them.

 

AI Use Cases: Using AWS Services

1. Employee Onboarding Process

Every employee onboarding process has its own challenges. It can be improved through better information discovery, shorter onboarding timelines, more flexibility for the new hire, the option to revisit learning material multiple times, and a more secure, personalized induction experience.

Using natural language queries, the AWS AI service Amazon Kendra enables new hires to easily find HR manuals, IT instructions, leave policies, and company guidelines without needing to know exact file names or bookmark multiple URLs.

Amazon Kendra uses semantic search, which understands the user's intent and contextual meaning. Semantic search relies on vector embeddings, vector search, pattern matching, and natural language processing.

Real-time data retrieval through Retrieval-augmented Generation (RAG) in Amazon Kendra empowers employees to access up-to-date content securely and efficiently.

Here are a few example prompts a new hire can use to retrieve information (a minimal query sketch follows the list):

  • How can I access my email on my laptop and on my phone?
  • How do I contact IT support?
  • How do I apply for leave, and who do I reach out to for approvals?
  • How do I submit my timesheet?
  • Where can I find the company training portal?
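
A quick way to see this in action is a minimal boto3 sketch that sends one of these prompts to a Kendra index. It assumes an index has already been created and populated with the onboarding documents; the index ID and region below are placeholders.

```
import boto3

# Placeholder index ID - replace with your Kendra index.
KENDRA_INDEX_ID = "00000000-0000-0000-0000-000000000000"

kendra = boto3.client("kendra", region_name="us-east-1")

# Ask one of the onboarding questions in natural language.
response = kendra.query(
    IndexId=KENDRA_INDEX_ID,
    QueryText="How do I submit my timesheet?",
)

# Print the top results: document titles plus the excerpt Kendra found most relevant.
for item in response.get("ResultItems", [])[:3]:
    title = item.get("DocumentTitle", {}).get("Text", "Untitled")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    print(title)
    print(" ", excerpt)
```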

Data Security

To protect organizational data and ensure compliance with enterprise security standards, Amazon Kendra supports robust data security measures, including encryption in transit and at rest, and seamless integration with AWS Identity and Access Management (IAM).

Role-based access ensures that sensitive information is only visible to authorized personnel.

Thus, in the onboarding process, the HR team can still provide the personal touch, while the AI assistant ensures employees have easy, anytime access to the right information throughout their onboarding journey.


2. Healthcare: Unlocking Insights from Unstructured Clinical Data

Healthcare providers constantly need to extract critical patient information and support timely decision-making. To do so, they face the challenge of rapidly analyzing vast amounts of unstructured medical records, such as physician notes, discharge summaries, and clinical reports.

From a data perspective, two key capabilities are required: entity recognition and attribute detection. Medical entities include symptoms, medications, diagnoses, and treatment plans, while attribute detection identifies the dosage, frequency, and severity associated with these entities.

Amazon provides Amazon Comprehend Medical, which uses NLP and ML models to extract such information from the unstructured data available to healthcare organizations.

One crucial aspect of healthcare is handling security and compliance for patients' health data. AWS offers Amazon Macie, a security service that employs machine learning and pattern matching to discover, classify, and protect Protected Health Information (PHI) stored in Amazon S3 buckets. Such a service helps organizations maintain HIPAA compliance through automated data governance.
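
To illustrate entity recognition and attribute detection together, here is a minimal boto3 sketch that runs Amazon Comprehend Medical over a short, made-up clinical note:

```
import boto3

comprehend_medical = boto3.client("comprehendmedical", region_name="us-east-1")

# A made-up snippet of unstructured clinical text.
note = "Patient reports chest pain. Prescribed aspirin 81 mg once daily."

response = comprehend_medical.detect_entities_v2(Text=note)

# Each entity carries a category (e.g., MEDICATION) plus attributes such as dosage and frequency.
for entity in response["Entities"]:
    attributes = ", ".join(
        f"{attr['Type']}={attr['Text']}" for attr in entity.get("Attributes", [])
    )
    print(f"{entity['Category']}/{entity['Type']}: {entity['Text']} [{attributes}]")
```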

 

3. Enterprise data insights

Any large enterprise has data spread across various tools like SharePoint, Salesforce, leave management portals, or accounting applications.

From these data sets, executives can extract great insights, evaluate what-if scenarios, check on some key performance indicators, and utilize all this for decision making.

We can use the AWS AI service Amazon Q Business for this very purpose, using its plugins, database connectors, and Retrieval-Augmented Generation for up-to-date information.

The user can query the system in natural language, and Amazon Q performs semantic search to return contextually appropriate information. It also uses knowledge grounding, which helps it provide accurate answers rather than relying solely on its training data.

To ensure that AI-generated responses adhere strictly to approved enterprise protocols and provide accurate, relevant information, we can define built-in guardrails within Amazon Q, such as global controls and topic blocking.
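
Assuming an Amazon Q Business application has already been created and connected to the relevant data sources, a minimal boto3 sketch of asking it a question could look like the following (the application ID is a placeholder, and the identity setup depends on how the application was configured):

```
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

# Placeholder application ID - replace with your Amazon Q Business application.
APP_ID = "11111111-2222-3333-4444-555555555555"

response = qbusiness.chat_sync(
    applicationId=APP_ID,
    userMessage="What were our top three KPIs last quarter?",
)

print(response["systemMessage"])

# Source attributions show which connected documents grounded the answer.
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"))
```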

 

4. Retail company use cases

a) Reading receipts and invoices

The company wants to automate its financial auditing process. To achieve this, we can use Amazon Textract to read receipts and invoices; it uses machine learning to accurately identify and extract key information such as product names, prices, and totals.
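
As a minimal sketch of what the extraction step could look like with boto3, the following analyzes a single receipt stored in S3 (the bucket and key are placeholders):

```
import boto3

textract = boto3.client("textract", region_name="us-east-1")

# Placeholder S3 location of a scanned receipt or invoice.
response = textract.analyze_expense(
    Document={
        "S3Object": {"Bucket": "example-receipts-bucket", "Name": "receipts/2025/receipt-001.png"}
    }
)

# Summary fields include values such as vendor name, invoice date, and total detected on the document.
for document in response["ExpenseDocuments"]:
    for field in document["SummaryFields"]:
        label = field.get("Type", {}).get("Text", "UNKNOWN")
        value = field.get("ValueDetection", {}).get("Text", "")
        print(f"{label}: {value}")
```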

b) Analyse customer purchasing patterns

The company intends to analyse customer purchasing patterns to predict future sales trends from their large datasets of historical sales data. For these analyses the company wants to build, train, and deploy machine learning models quickly and efficiently.

Amazon SageMaker is the ideal service for such a development.
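
As a rough sketch of that build-train-deploy workflow with the SageMaker Python SDK, the snippet below trains a custom model and hosts it behind a real-time endpoint. The training image, IAM role, and S3 paths are placeholders, and the estimator you actually use depends on the chosen algorithm or framework.

```
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Placeholders - substitute your own training image, execution role, and S3 locations.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/sales-forecast:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-ml-bucket/models/",
    sagemaker_session=session,
)

# Train on historical sales data, then host the model behind a real-time endpoint.
estimator.fit({"train": "s3://example-ml-bucket/sales/train/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```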

c) Customer support Bot

The firm receives thousands of customer calls daily. To streamline the process, the firm wants to create a conversational AI bot that can handle both text inputs and voice commands.

We can use Amazon Bedrock to build a custom AI application on top of a catalog of ready-to-use foundation models. These models can process large volumes of customer data, generate personalized responses, and integrate with other AWS services such as Amazon SageMaker for additional processing and analytics.

We can use Amazon Lex to create the bot, and Amazon Polly for text-to-speech.
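
Here is a minimal boto3 sketch of the runtime side: a text utterance is sent to a Lex V2 bot and the reply is converted to speech with Polly. The bot ID and alias ID are placeholders for a bot you have already built.

```
import boto3

lex_runtime = boto3.client("lexv2-runtime", region_name="us-east-1")
polly = boto3.client("polly", region_name="us-east-1")

# Placeholder Lex V2 bot identifiers - replace with your own bot and alias.
response = lex_runtime.recognize_text(
    botId="EXAMPLEBOTID",
    botAliasId="EXAMPLEALIAS",
    localeId="en_US",
    sessionId="customer-session-123",
    text="Where is my order?",
)

# Take the bot's first reply (if any) and synthesize it to speech.
messages = response.get("messages", [])
reply = messages[0]["content"] if messages else "Sorry, I did not understand that."
speech = polly.synthesize_speech(Text=reply, OutputFormat="mp3", VoiceId="Joanna")

with open("reply.mp3", "wb") as audio_file:
    audio_file.write(speech["AudioStream"].read())
```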

d) Image analysis

The company might want to identify and categorize its products based on uploaded images. To implement this, we can use Amazon S3 and Amazon Rekognition to analyze each image as soon as a new product image is uploaded to the storage service.
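
A minimal boto3 sketch of the analysis step is shown below, run against a newly uploaded product image (the bucket and key are placeholders; in practice this call would typically be triggered by an S3 event notification, for example through a Lambda function):

```
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Placeholder S3 object representing the newly uploaded product image.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-product-images", "Name": "uploads/sneaker-123.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

# Use the detected labels (e.g., "Shoe", "Footwear") to categorize the product.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```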

 

AWS Services for Compliance & Regulations


Managing complex customer requirements and handling large volumes of sensitive data makes it essential to adhere to various regulations.

Key AWS services supporting these compliance and governance needs include the following (a short usage sketch follows the list):

  1. AWS Config
    Continuously monitors and records resource configurations to help assess compliance.
  2. AWS Artifact
    Centralized repository for on-demand access to AWS compliance reports and agreements.
  3. AWS CloudTrail
    Logs and tracks all user activity and API calls within your AWS environment for audit purposes.
  4. Amazon Inspector
    Automated security assessment service that identifies vulnerabilities and deviations from best practices.
  5. AWS Audit Manager
    Simplifies audit preparation by automating evidence collection and compliance reporting.
  6. AWS Trusted Advisor
    Provides real-time recommendations to optimize security, performance, and cost efficiency.
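
As a short usage sketch, the snippet below queries two of these services with boto3: it lists the compliance status of AWS Config rules and pulls recent console sign-in events from CloudTrail. The output depends entirely on the rules and trails configured in your account.

```
import boto3

config = boto3.client("config", region_name="us-east-1")
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Compliance status of each AWS Config rule in the account.
rules = config.describe_compliance_by_config_rule()
for rule in rules["ComplianceByConfigRules"]:
    print(rule["ConfigRuleName"], "->", rule["Compliance"]["ComplianceType"])

# Recent console sign-in events recorded by CloudTrail, for audit purposes.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```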

 

Security and Privacy risks: Vulnerabilities in LLMs


When working with LLMs, there are known ways to attack prompts, as well as various safeguards against them. With these attacks in mind, I am listing some common vulnerabilities that are useful for understanding the risks around your LLMs.

  1. Prompt injection: user input crafted to manipulate the LLM.
  2. Insecure output handling: model output passed downstream without validation.
  3. Training data poisoning: malicious data introduced into the training set.
  4. Model denial of service: disrupting availability by exploiting weaknesses in the model's architecture.
  5. Supply chain vulnerabilities: weaknesses in the software, hardware, or services used to build or deploy the model.
  6. Leakage: exposure of sensitive data.
  7. Insecure plugins: flaws in model components or extensions.
  8. Excessive autonomy: giving the model too much autonomy in decision making.
  9. Over-reliance: relying too heavily on the model's capabilities.
  10. Model theft: unauthorized copying and reuse of the model.

 

Can you correlate the above use cases with any of the challenges you have at hand? Have you been able to use any of the AWS services or other AI platforms to deal with such challenges?

References:

https://aws.amazon.com/ai/services/
https://www.udemy.com/share/10bvuD/

Perficient Honored as Organization of the Year for Cloud Computing
https://blogs.perficient.com/2025/10/28/perficient-honored-as-stratus-organization-of-the-year-for-cloud-computing/
Tue, 28 Oct 2025

Perficient has been named Cloud Computing Organization of the Year by the 2025 Stratus Awards, presented by the Business Intelligence Group. This prestigious recognition celebrates our leadership in cloud innovation and the incredible work of our entire Cloud team.

Now in its 12th year, the Stratus Awards honor the companies, products, and individuals that are reshaping the digital frontier. This year’s winners are leading the way in cloud innovation across AI, cybersecurity, sustainability, scalability, and service delivery — and we’re proud to be among them.

“Cloud computing is the foundation of today’s most disruptive technologies,” said Russ Fordyce, Chief Recognition Officer of the Business Intelligence Group. “The 2025 Stratus Award winners exemplify how cloud innovation can drive competitive advantage, customer success and global impact.”

This award is a direct reflection of the passion, expertise, and dedication of our Cloud team — a group of talented professionals who consistently deliver transformative solutions for our clients. From strategy and migration to integration and acceleration, their work is driving real business outcomes and helping organizations thrive in an AI-forward world.

We’re honored to receive this recognition and remain committed to pushing the boundaries of what’s possible in the cloud with AI.

Read more about our Cloud Practice.

Optimizely Mission Control – Part III
https://blogs.perficient.com/2025/09/13/optimizely-mission-control-part-iii/
Sat, 13 Sep 2025

In this article, we will cover all the remaining actions available in Mission Control.

Base Code Deploy

The Optimizely team continuously improves the platform by introducing new features and releasing updated versions. To take advantage of these enhancements and bug fixes, projects must be upgraded to the latest version. After upgrading the project, it needs to be deployed to the appropriate environment. This deployment is carried out using the “Base Code Deploy” option in Mission Control.

How to deploy the Base Code

  • Log in to Mission Control.

  • Navigate to the Customers tab.

  • Select the appropriate Customer.

  • Choose the Environment where you want to deploy the base code changes.

  • Click the Action dropdown in the left pane.

  • Select Base Code Deploy.

  • A pop-up will appear with a scheduler option and a dropdown showing the latest build version.

  • Click Continue to initiate the deployment process.

  • Once the process completes, the base code is successfully deployed to the selected environment.

Reference: Base Code Deploy – Optimizely Support

Extension Deployment

There are many customizations implemented according to project requirements, and these are developed within the extension project following Optimizely framework guidelines. To make these changes available in the environment, we need to deploy the extension project code. This can be done using the Extension Deployment option available in Mission Control.

Deploy Extension Code

  • Log in to Mission Control.

  • Navigate to the Customers tab.

  • Select the appropriate Customer.

  • Choose the Environment where you want to deploy the extension code.

  • Click the Action dropdown in the left pane.

  • Select Extension Deployment.

  • A pop-up will appear with an optional scheduler and a dropdown showing available extension build versions.

  • Select the desired extension version to deploy.

  • Click Continue to initiate the deployment process immediately.

  • Once the process completes, the extension code is successfully deployed to the selected environment.

Reference: Extension Deployment – Optimizely Support

Production User Files Sync

In any project, there are numerous user files—especially images—which play a crucial role in the website’s appearance and user experience. During development, it’s important to keep these files synchronized across all environments. Typically, the files in lower environments should mirror those in the production environment. Since clients often update files directly in production, the “Production User Files Sync” option in Mission Control becomes extremely useful. It allows developers to easily sync user files from production to lower environments, ensuring consistency during development and testing.

How to sync production user files

  • Log in to Mission Control.

  • Navigate to the Customers tab.

  • Select the appropriate Customer.

  • Choose the lower environment where you want to sync the user files.

  • Click the Action dropdown in the left pane.

  • Select User File Sync from the list of available options.

  • A pop-up will appear with an optional scheduler and a Source Environment dropdown containing all environments available for the selected customer.

  • Select Production as the source (or any environment as required), then click Continue to start the sync process.

  • Depending on the size of the user files and network parameters, the process might take several minutes to complete.

Reference: Production User Files Sync – Optimizely Support

Production Database Sync

This option allows you to synchronize data from the production environment to a lower instance.
Note: Data cannot be synced from a lower instance back to production.

Critical Requirements

  • Matching Website Keys
    • The website keys in both the production and target environments must match.
    • If they do not, the site may experience a startup failure and become unstable.
  • Version Compatibility

    • The target environment must be running on a version that is equal to or newer than the source (production) version.

    • Both source and target environments must be on one of the last three supported long-term versions, or their corresponding short-term support versions.

    • If version requirements are not met, the sync process will fail.

  • Data Loss Warning
    • This is a destructive operation—it will overwrite data on the target (lower) environment.

    • Ensure that no critical or important data exists in the sandbox or lower instance before initiating the sync.

The Production Sync option does not replicate all data, but it does synchronize several key components. Below is the list of data that gets synced:

Product Data

  • Product settings (e.g., ERP Managed, Track Inventory, Quote Required)

  • Attribute values

  • Category assignments

  • Product content (metadata and rich content)

  • Product specifications

  • Child variants

  • Pricing and cost

  • Product number and URL segment

  • Warehouse inventory (stock levels)

  • Shipping information

Category Data

  • Category details (name, description)

  • Category hierarchy

  • Assigned products

  • Category content (metadata and content)

  • Attribute values

CMS Content

  • CMS customizations made via out-of-the-box widgets (non-code changes)

  • Variant page customizations and display rules

Additional Data

  • Attribute types and values

  • Variant types

  • Customer records

  • Website users

Data Not Synced from Production to Sandbox

The following areas are excluded from the Production Sync process and remain unchanged in the target sandbox environment:

  • System Configuration
  • Integration Job Settings
  • Admin & User Data
    • Exceptions

      • If a production admin user has made changes to data being synced (like CustomerOrders, Content, etc.), that admin user is also synced to the sandbox.

      • Admin user roles are also synced to preserve permission context.

      • To prevent role duplication:

        • All sandbox roles are appended with -1.

        • Production roles retain their original names.

      • If a matching admin user exists in both environments:

        • The production user and roles are retained.

        • Sandbox-only users receive roles with the -1 suffix.

  • Logs and Cache

Sync production data

  • Log in to Mission Control.

  • Navigate to the Customers tab.

  • Select the appropriate Customer.

  • Choose the lower environment where you want to sync the production data.

  • Click the Action dropdown in the left pane.

  • Select Production Database Sync from the list of available options.

  • A pop-up will appear with:

      • An optional scheduler, and

      • A Source Environment dropdown (select the production environment).

  • Click Continue to initiate the sync process.

  •  This is a large-scale data transfer operation. The sync process may take several minutes to complete, depending on the volume of data.

Note: Optimizely does not provide a rollback option for this process. Once the sync is complete, any changes previously made in the target environment, such as modifications to stored procedures or database scripts, must be reapplied manually.

Reference: Production Database Sync – Optimizely Support

Part 2: Implementing Azure Virtual WAN – A Practical Walkthrough
https://blogs.perficient.com/2025/08/21/part-2-implementing-azure-virtual-wan-a-practical-walkthrough/
Thu, 21 Aug 2025

In Part 1 of this series, we discussed what Azure Virtual WAN is and why it's a powerful solution for global networking. Now, let's get hands-on and walk through the actual implementation—step by step, in a simple, conversational way.

[Architecture diagram]

1. Creating the Virtual WAN – The Network's Control Plane

The Virtual WAN is the heart of a global network, not just another resource. It replaces isolated VPN gateways per region, manual ExpressRoute configurations, and complex peering relationships.

Setting it up is easy:

  • Navigate to Azure Portal → Search “Virtual WAN”
  • Click Create and configure.
  • Name: Naming matters for enterprise environments
  • Resource Group: Create new rg-network-global (best practice for lifecycle management)
  • Type: Standard (Basic lacks critical features like ExpressRoute support)

Azure will set up the Virtual WAN in a few seconds. Now, the real fun begins.

2. Setting Up the Virtual WAN Hub – The Heart of The Network

The hub is where all connections converge. It’s like a major airport hub where traffic from different locations meets and gets efficiently routed. Without a hub, you’d need to configure individual gateways for every VPN and ExpressRoute connection, leading to higher costs and management overhead.

  • Navigate to the Virtual WAN resource → Click Hubs → New Hub.
  • Configure the Hub.
  • Region: Choose based on: Primary user locations & Azure service availability (some regions lack certain services)
  • Address Space: Assign a private IP range (e.g., 10.100.0.0/24).

Wait for deployment; this takes about 30 minutes (Azure is building VPN gateways, ExpressRoute gateways, and more behind the scenes).

Once done, the hub is ready to connect everything: offices, cloud resources, and remote users.

3. Connecting Offices via Site-to-Site VPN – Building Secure Tunnels

Branches and data centres need a reliable, encrypted connection to Azure. Site-to-Site VPN provides this over the public internet while keeping data secure. Without VPN tunnels, branch offices would rely on slower, less secure internet connections to access cloud resources, increasing latency and security risks.

  • In the Virtual WAN Hub, go to VPN (Site-to-Site) → Create VPN Site.
  • Name: branch-nyc-01
  • Private Address Space: e.g., 192.168.100.0/24 (must match on-premises network)
  • Link Speed: Set accurately for Azure’s QoS calculations
  • Download VPN Configuration: Azure provides a config file—apply it to the office’s VPN device (like a Cisco or Fortinet firewall).
  • Lastly, connect the VPN Site to the Hub.
  • Navigate to VPN connections → Create connection → Link the office to the hub.

Now, the office and Azure are securely connected.

4. Adding ExpressRoute – The Private Superhighway

For critical applications (like databases or ERP systems), VPNs might not provide enough bandwidth or stability. ExpressRoute gives us a dedicated, high-speed connection that bypasses the public internet. Without ExpressRoute, latency-sensitive applications (like VoIP or real-time analytics) could suffer from internet congestion or unpredictable performance.

  • Order an ExpressRoute Circuit: We can do this via the Azure Portal or through an ISP (like AT&T or Verizon).
  • Authorize the Circuit in Azure
  • Navigate to the Virtual WAN Hub → ExpressRoute → Authorize.
  • Linking it to Hub: Once it is authorized, connect the ExpressRoute circuit to the hub.

Now, the on-premises network has a dedicated, high-speed connection to Azure—no internet required.

5. Enabling Point-to-Site VPN for Remote Workers – The Digital Commute

Employees working from home need secure access to internal apps without exposing them to the public internet. P2S VPN lets them “dial in” securely from anywhere. Without P2S VPN, remote workers might resort to risky workarounds like exposing RDP or databases to the internet.

  • Configure P2S in The Hub
  • Navigate to VPN (Point-to-Site) → Configure.
  • Set Up Authentication: Choose certificate-based auth (secure and easy to manage) and upload the root/issuer certificates.
  • Assign an IP Pool. e.g., 192.168.100.0/24 (this is where remote users will get their IPs).
  • Download & Distribute the VPN Client

Employees install this on their laptops to connect securely. Now, the team can access Azure resources from anywhere just like they’re in the office.

6. Linking Azure Virtual Networks (VNets) – The Cloud’s Backbone

Applications in one VNet (e.g., frontend servers) often need to talk to another (e.g., databases). Rather than complex peering, the Virtual WAN handles routing automatically. Without VNet integration, you would need manual peering and route tables for every connection, creating a management nightmare at scale.

  • VNets need to be attached.
  • Navigate to The Hub → Virtual Network Connections → Add Connection.
  • Select the VNets. e.g., Connect vnet-app (for applications) and vnet-db (for databases).
  • Azure handles the Routing: Traffic flows automatically through the hub-no manual route tables needed.

Now, the cloud resources communicate seamlessly.

Monitoring & Troubleshooting

Networks aren’t “set and forget.” We need visibility to prevent outages and quickly fix issues. We can use tools like Azure Monitor, which tracks VPN/ExpressRoute health—like a dashboard showing all trains (data packets) moving smoothly. Again, Network Watcher can help to diagnose why a branch can’t connect.

Common Problems & Fixes

  • When VPN connections fail, the problem is often a mismatched shared key—simply re-enter it on both ends.
  • If ExpressRoute goes down, check with your ISP—circuit issues usually require provider intervention.
  • When VNet traffic gets blocked, verify route tables in the hub—missing routes are a common culprit.
Optimizely Mission Control – Part II
https://blogs.perficient.com/2025/08/18/optimizely-mission-control-part-ii/
Mon, 18 Aug 2025

In this part, we focus primarily on generating read-only credentials and using them to connect to the database.

Generate Database Credentials

The Mission Control tool generates read-only database credentials for a targeted instance, which remain active for 30 minutes. These credentials allow users to run select or read-only queries, making it easier to explore data on a cloud instance. This feature is especially helpful for verifying data-related issues without taking a database backup.

Steps to generate database credentials

  1. Log in to Mission Control.

  2. Navigate to the Customers tab.

  3. Select the appropriate Customer.

  4. Choose the Environment for which you need the credentials.

  5. Click the Action dropdown in the left pane.

  6. Select Generate Database Credentials.

  7. A pop-up will appear with a scheduler option.

  8. Click Continue to initiate the process.

  9. After a short time, the temporary read-only credentials will be displayed.

 

Once the temporary read-only credentials are generated, the next step is to connect to the database using those credentials.

To do this:

  1. Download and install Azure Data Studio
    Download Azure Data Studio

  2. Open Azure Data Studio after installation.

  3. Click “New Connection” or the “Connect” button.

  4. Use the temporary credentials provided by Mission Control to connect:

    • Server Name: Use the server name from the credentials.

    • Authentication Type: SQL Login

    • Username and Password: As provided in the credentials.

  5. Once connected, you can execute SELECT queries to explore or verify data on the cloud instance.

 

For more details, refer to the official Optimizely documentation on Generating Database Credentials.

For Part I, visit: Optimizely Mission Control – Part I

Optimizely Mission Control – Part I
https://blogs.perficient.com/2025/08/04/optimizely-mission-control-part-i/
Mon, 04 Aug 2025

Optimizely provides powerful tools that make it easy to build, release, and manage cloud infrastructure efficiently.

Optimizely Mission Control Access

To use this tool, an Opti ID is required. Once you have an Opti ID, request that your organization grants access to your user account. Alternatively, you can raise a ticket with the Optimizely Support team along with approval from your project organization.

Key Actions

This tool provides various essential actions that can be performed for managing your cloud environments effectively. These include:

  • Restart Site

    • Restart the application in a specific environment to apply changes or resolve issues.

  • Database Backup

    • Create a backup of the environment’s database for debug purposes.

  • Generate Database Credentials

    • Generate secure credentials to connect to the environment’s database.

  • Base Code Deploy

    • Deploy the base application code to the selected environment.

  • Extension Deployment

    • Deploy any custom extension changes.

  • Production User Files Sync

    • Synchronize user-generated files (e.g., media, documents) from the production environment to lower environments.

  • Production Database Sync

    • Sync the production database to another lower environment (such as a sandbox) to sync up data.

Let’s walk through each of these actions step by step to understand how to perform them.

Restart Site

We can restart the site using the Mission Control tool. This option is handy when a website restart is required due to configuration changes. For example, updates to the storage or search provider often require a restart. Additionally, if an integration job gets stuck for any reason, the ability to restart the site becomes very helpful in restoring normal functionality.

How to restart the website

  1. Log in to Mission Control.
  2. Navigate to the Customers tab.

  3. Select the appropriate Customer.

  4. Choose the Environment where the restart is needed.

  5. Click on the Action dropdown in the left pane.

  6. Select Restart Site from the list.

  7. A pop-up will appear where you can either schedule the restart or click Continue for an immediate restart.

 

Reference: Restart Site – Optimizely Support

Database Backup

This is another useful feature available in Mission Control.

Using this option, we can take a backup from the Sandbox or Production instance and import it into the local environment. This helps us debug issues that occur in Sandbox or Production environments.

The backup file is generated with a .bacpac extension.

Steps to take a backup

  1. Log in to Mission Control.

  2. Navigate to the Customers tab.

  3. Select Database Backup from the list.

  4. A pop-up will appear prompting for a scheduled backup time.

  5. Set Skip Log to False to minimize the backup size.

  6. Click Continue and wait for the process to complete.

  7. Once finished, click on the provided link to download the backup file.

 

Reference: Database Backup – Optimizely Support

Stay tuned for the next blog to explore the remaining actions!

Creating Data Lakehouse using Amazon S3 and Athena
https://blogs.perficient.com/2025/07/31/creating-data-lakehouse-using-amazon-s3-and-athena/
Thu, 31 Jul 2025

As organizations accumulate massive amounts of structured and unstructured data, the need for flexible, scalable, and cost-effective data architectures becomes more important than ever. With increasingly complex data environments and growing demand for real-time insights and seamless integration across platforms, organizations need solutions that can adapt and grow. This is where the Data Lakehouse — combining the best of data lakes and data warehouses — comes into play. In this blog post, we'll walk through how to build a serverless, pay-per-query Data Lakehouse using Amazon S3 and Amazon Athena.

What Is a Data Lakehouse?

A Data Lakehouse is a modern architecture that blends the flexibility and scalability of data lakes with the structured querying capabilities and performance of data warehouses.

  • Data Lakes (e.g., Amazon S3) allow storing raw, unstructured, semi-structured, or structured data at scale.
  • Data Warehouses (e.g., Redshift, Snowflake) offer fast SQL-based analytics but can be expensive and rigid.

A lakehouse unifies both, enabling:

  • Schema enforcement and governance
  • Fast SQL querying over raw data
  • Simplified architecture and lower cost


Tools We’ll Use

  • Amazon S3: For storing structured or semi-structured data (CSV, JSON, Parquet, etc.)
  • Amazon Athena: For querying that data using standard SQL

This setup is perfect for teams that want low cost, fast setup, and minimal maintenance.

Step 1: Organize Your S3 Bucket

Structure your data in S3 in a way that supports performance:

s3://sample-lakehouse/
└── transactions/
    └── year=2024/
        └── month=04/
            └── data.parquet

Best practices:

  • Use columnar formats like Parquet or ORC
  • Partition by date or region for faster filtering
  • In addition, compressing files (e.g., Snappy or GZIP) can help reduce scan costs.

Step 2: Create a Table in Athena

You can create an Athena table manually via SQL. Athena registers the table in its built-in data catalog (the AWS Glue Data Catalog):

CREATE EXTERNAL TABLE IF NOT EXISTS transactions (
  transaction_id STRING,
  customer_id STRING,
  amount DOUBLE,
  transaction_date STRING
)
PARTITIONED BY (year STRING, month STRING)
STORED AS PARQUET
LOCATION 's3://sample-lakehouse/transactions/';

Then run:

MSCK REPAIR TABLE transactions;

This tells Athena to scan the S3 directory and register your partitions.

Step 3: Query the Data

Once the table is created, querying is as simple as:

SELECT year, month, SUM(amount) AS total_sales
FROM transactions
WHERE year = '2024' AND month = '04'
GROUP BY year, month;
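
If you prefer to run the same query programmatically (for example from a scheduled job), here is a minimal boto3 sketch. The database name and results bucket are placeholders for your own setup.

```
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

QUERY = """
SELECT year, month, SUM(amount) AS total_sales
FROM transactions
WHERE year = '2024' AND month = '04'
GROUP BY year, month
"""

# Start the query; Athena writes the results to the S3 location you specify.
execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://sample-lakehouse/athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([column.get("VarCharValue") for column in row["Data"]])
```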

Benefits of This Minimal Setup

  • Serverless: no infrastructure to manage
  • Fast setup: just create a table and query
  • Cost-effective: pay only for storage and queries
  • Flexible: works with various data formats
  • Scalable: store petabytes in S3 with ease

Building a data lakehouse using Amazon S3 and Athena offers a modern, scalable, and cost-effective approach to data analytics. With minimal setup and no server management, you can unlock insights from your data quickly while maintaining flexibility and governance, reducing operational overhead and accelerating time-to-value. Whether you're a startup or an enterprise, this setup provides the foundation for data-driven decision-making at scale and lets teams focus more on innovation and less on infrastructure.

Configuring Adjustment Period in Data Exchange
https://blogs.perficient.com/2025/07/28/configuring-adjustment-period-in-data-exchange/
Mon, 28 Jul 2025

An "adjustment period" is an accounting period used to adjust balances before the year-end close. It follows the final regular period ("per12") and is consequently referred to as "per13". The dates within the adjustment period overlap with regular accounting periods.

In Data Exchange, adjustments are processed in Period Mappings where the mapping of adjustment period between source and target applications is defined. When setting up the data load rule, data can be loaded to both regular and adjustment periods or to adjustment period only depending on the Options selected for that rule.

Configure data load rule for adjustment period in the following steps:

Step1:

In Data Exchange, select Period Mapping under the Actions tab. In Global Period Mapping, insert the adjustment period using the format below:

Adj-23 as 01-12-2023 to 01-12-2023

Open Period Mapping

Global Mapping

Step 2:

In Source Mappings, select the source and target applications, then click Add. Browse to and select the source period key. When you select the Source Period Key, Data Management automatically populates the Source Period and Source Period Year fields.

Note: Ensure that the Source Period Key is the same as in the source system.

Similarly, browse to and select the target period key. When you select the Target Period Key, Data Management automatically populates the Target Period Name, Target Period Month, and Target Period Year fields.

Save.

Source Mapping

Step 3:

In the Integration tab, under Options, the user can view the ‘Period Mapping Type’ and ‘Include Adjustment Periods’ settings.

Options tab in Edit Integration

From Include Adjustment Period, select one of the following options for processing the periods:

  • No — Only regular periods will be processed. This is the default setting.
  • Yes — Both regular and adjustment periods are processed. If no adjustment period exists, only the regular period is processed.
  • Yes (Adjustment Only) — Only the adjustment period is processed. If none exists, the regular period is pulled.

Click Save.

Step 4:

Execute the data integration to retrieve data for the adjustment period.

Data load selection

After running the data integration, always validate the results to ensure data accuracy. Check the process logs and review the target application to confirm that all expected entries were loaded correctly. Investigate any discrepancies or errors shown in the log for timely resolution.

Helpful read: Multi-Year Multi-Period Data Load / Blogs / Perficient

Boost Cloud Efficiency: AWS Well-Architected Cost Tips
https://blogs.perficient.com/2025/06/09/boost-cloud-efficiency-aws-well-architected-cost-tips/
Mon, 09 Jun 2025

In today's cloud-first world, building a secure, high-performing, resilient, and efficient infrastructure is more critical than ever. That's where the AWS Well-Architected Framework comes in: a powerful guide designed to help architects and developers make informed decisions and build better cloud-native solutions.

What is the AWS Well-Architected Framework?

The AWS Well-Architected Framework provides a consistent approach for evaluating and improving your cloud architecture. It’s built around six core pillars that represent key areas of focus for building robust and scalable systems:

  • Operational Excellence – Continuously monitor and improve systems and processes.
  • Security – Protect data, systems, and assets through risk assessments and mitigation strategies.
  • Reliability – Ensure workloads perform as intended and recover quickly from failures.
  • Performance Efficiency – Use resources efficiently and adapt to changing requirements.
  • Cost Optimization – Avoid unnecessary costs and maximize value.
  • Sustainability – Minimize environmental impact by optimizing resource usage and energy consumption.


Explore the AWS Well-Architected Framework here https://aws.amazon.com/architecture/well-architected

AWS Well-Architected Timeline

From time to time, AWS updates the framework and introduces new resources that we can follow to apply it more effectively to our use cases and achieve better architectures.


AWS Well-Architected Tool

To help you apply these principles, AWS offers the Well-Architected Tool—a free service that guides you through evaluating your workloads against the six pillars.

How it Works:

  • Select a workload.
  • Answer a series of questions aligned with the framework.
  • Review insights and recommendations.
  • Generate reports and track improvements over time.

Try the AWS Well-Architected Tool here https://aws.amazon.com/well-architected-tool/

Go Deeper with Labs and Lenses

AWS also provides Well-Architected Labs (hands-on exercises) and Lenses (domain-specific guidance, such as the Serverless and SaaS lenses) for going deeper on specific workload types.

Deep Dive: Cost Optimization Pillar

Cost Optimization is not just about cutting costs—it’s about maximizing value. It ensures that your cloud investments align with business goals and scale efficiently.

Why It Matters:

  • Understand your spending patterns.
  • Ensure costs support growth, not hinder it.
  • Maintain control as usage scales.

5 Best Practices for Cost Optimization

  1. Practice Cloud Financial Management
    • Build a cost optimization team.
    • Foster collaboration between finance and tech teams.
    • Use budgets and forecasts.
    • Promote cost-aware processes and culture.
    • Quantify business value through automation and lifecycle management.
  2. Expenditure and Usage Awareness
    • Implement governance policies.
    • Monitor usage and costs in real time (see the sketch after this list).
    • Decommission unused or underutilized resources.
  3. Use Cost-Effective Resources
    • Choose the right services and pricing models.
    • Match resource types and sizes to workload needs.
    • Plan for data transfer costs.
  4. Manage Demand and Supply
    • Use auto-scaling, throttling, and buffering to avoid over-provisioning.
    • Align resource supply with actual demand patterns.
  5. Optimize Over Time
    • Regularly review new AWS features and services.
    • Adopt innovations that reduce costs and improve performance.
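
As a small, concrete example of expenditure awareness, here is a minimal boto3 Cost Explorer sketch that breaks down one month's spend by service. It assumes Cost Explorer is enabled in the account and the caller has the ce:GetCostAndUsage permission; the date range is just an example.

```
import boto3

cost_explorer = boto3.client("ce", region_name="us-east-1")

# One month of unblended cost, grouped by service (example date range).
response = cost_explorer.get_cost_and_usage(
    TimePeriod={"Start": "2025-05-01", "End": "2025-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:.2f}")
```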

Conclusion

The AWS Well-Architected Framework is more than a checklist—it’s a mindset. By embracing its principles, especially cost optimization, you can build cloud environments that are not only efficient and scalable but also financially sustainable.

Perficient Achieves Premier Partner Status with Snowflake
https://blogs.perficient.com/2025/06/05/perficient-achieves-premier-partner-status-with-snowflake/
Thu, 05 Jun 2025

We are proud to announce that Perficient has officially achieved Premier Partner status with Snowflake, a recognition that underscores our strategic commitment to delivering transformative data and AI solutions in the cloud. 

This milestone marks a significant step forward in our longstanding partnership with Snowflake. Advancing from Select to Premier Partner is more than a status update—it’s a clear reflection of our proven expertise, consistent delivery of value-driven outcomes, and dedication to helping organizations solve their most complex data challenges. 

“Reaching Premier Partner status with Snowflake is a testament to our team’s relentless focus on innovation and excellence,” said Michael Patterson, Managing Director, Data and Analytics, Perficient. “We are proud to work closely with Snowflake to help our clients accelerate their cloud data strategies, unlock AI-driven insights, and create real business impact. Our partnership is built on a shared commitment to delivering meaningful results for our customers.” 

Driving Value Through Strategic Partnership 

Organizations today are navigating an increasingly data-rich, AI-enabled landscape. They need trusted partners to help them move with speed, scale, and precision. At Perficient, our dedicated Snowflake practice is purpose-built to meet that need. 

We help enterprises modernize their data platforms, adopt real-time analytics, and implement responsible AI—delivering scalable, cloud-native architectures powered by the Snowflake Data Cloud and Snowflake Cortex AI. As a Premier Partner, we bring the right strategy, expertise, and technical depth to ensure our clients can confidently unlock the full potential of their data. 

Our elevation to Premier Partner status affirms the strength of our solutions, the trust of our customers, and the momentum behind our vision to lead in the next generation of data and AI transformation. 

Learn more about Perficient’s Snowflake expertise and how we help businesses design and implement intelligent data solutions that drive innovation and deliver measurable value. 

IOT and API Integration With MuleSoft: The Road to Seamless Connectivity
https://blogs.perficient.com/2025/05/21/iot-and-api-integration-with-mulesoft-the-road-to-seamless-connectivity/
Wed, 21 May 2025

In today's hyper-connected world, the Internet of Things (IoT) is transforming industries, from smart manufacturing to intelligent healthcare. However, the real potential of IoT lies in connecting continuously with enterprise systems, providing real-time insights and enabling automation. This is where MuleSoft's Anypoint Platform comes in, a disruptive force in integrating IoT devices and APIs to create a connected ecosystem. This blog explains how MuleSoft lays the groundwork for IoT and API integration that goes beyond standalone dashboards to offer scalability, security, and efficiency.

Objective

In this blog, I will show MuleSoft’s ability to integrate IoT devices with enterprise systems through API connectivity, focusing on real-time data processing. I will provide an example of how MuleSoft’s Anypoint Platform connects to an MQTT broker and processes IoT device sensor data. The example highlights MuleSoft’s ability to handle IoT protocols like MQTT and transform data for insights.

How Does MuleSoft Facilitate IoT Integration?

MuleSoft's Anypoint Platform combines API-led connectivity, native protocol support, and a comprehensive integration framework to handle the complexities of IoT integration. Here is how MuleSoft makes IoT integration manageable:

  1. API Connectivity for Scalable Ecosystems

MuleSoft’s API strategy categorizes integrations into System, Process, and Experience APIs, allowing modular connections between IoT devices and enterprise systems. For example, in a smart city, System APIs gather data from traffic sensors and insights into a dashboard. This scalability avoids the chaos of point-to-point integrations, a fault in most visualization-focused tools.

  2. Native IoT Protocol Support

IoT devices rely on protocols such as MQTT, AMQP, and CoAP, all of which MuleSoft supports. This enables direct communication between sensors and gateways without extra middleware. In such a scenario, MuleSoft can feed MQTT data from temperature sensors into a cloud platform such as Azure IoT Hub more easily than tools that require custom plugins.

  3. Real-Time Processing and Automation

IoT requires real-time data processing, and MuleSoft’s runtime engine processes data streams in real time while supporting automation. For example, if a factory sensor picks up a fault, MuleSoft can invoke an API to notify maintenance teams and update systems. MuleSoft integrates visualization with actionable workflows.

  4. Pre-Built Connectors for Setup

MuleSoft’s Anypoint Exchange provides connectors for IoT platforms (e.g., AWS IoT) and enterprise systems (e.g., Salesforce). In healthcare, connectors link patient wearables to EHRs, reducing development time. This plug-and-play approach beats custom integrations commonly required by other tools.

  5. Centralized Management and Security

IoT devices manage sensitive information, and MuleSoft maintains security through API encryption and OAuth. Its Management Center provides a dashboard to track device health and data flows, offering centralized control that standalone dashboard applications cannot provide without additional infrastructure.

  6. Hybrid and Scalable Deployments

MuleSoft’s hybrid model supports both on-premises and cloud environments, providing flexibility for IoT deployments. Its scalability handles growing networks, such as fleets of connected vehicles, making it a future-proof solution.

Building a Simple IoT Integration with MuleSoft

To demonstrate MuleSoft's IoT integration, I have created a simple flow in Anypoint Studio that connects to an MQTT broker, processes sensor data, and logs the result. The flow uses a public MQTT broker and the MQTT Explorer client to simulate IoT sensor data. The following are the steps for the Mule API flow:

[API flow diagram]

Step 1: Setting Up the Mule Flow

In Anypoint Studio, create a new Mule project (e.g., ‘IoT-MQTT-Demo’). Design a flow with an MQTT Connector to connect to an explorer, a Transform Message component to process data, and a Logger to output results.


Step 2: Configuring the MQTT Connector

Configure the MQTT Connector properties. In General Settings, point the connector at a public broker ("tcp://test.mosquitto.org:1883"). Add the topic filter "iot/sensor/data" and select QoS "AT_MOST_ONCE".


Step 3: Transforming the Data

Use DataWeave to parse the incoming JSON payload (e.g., {"temperature": 25.5}) and add a timestamp. The DataWeave code is:

```
%dw 2.0
output application/json
---
{
  sensor: "Temperature",
  value: read(payload, "application/json").temperature default "",
  timestamp: now()
}
```


Step 4: Connect to MQTT

Click on Connections and use the connection settings shown below to connect to the MQTT broker:


Step 5: Simulating IoT Data

Once the connector is connected, use MQTT Explorer to publish a sample message {"temperature": 28} to the topic iot/sensor/data; the message is delivered to the Mule flow as shown below.
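
If you would rather script the test message than click through MQTT Explorer, a small publisher like the one below works as well. It assumes the paho-mqtt package is installed (pip install paho-mqtt) and uses the same public broker and topic the Mule flow subscribes to.

```
import json

import paho.mqtt.publish as publish

# Publish a simulated temperature reading to the topic the Mule flow listens on.
publish.single(
    topic="iot/sensor/data",
    payload=json.dumps({"temperature": 28}),
    hostname="test.mosquitto.org",
    port=1883,
)
```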


Step 6: Logging the Output

Run the API and publish the message from MQTT Explorer; the processed data will be logged to the console. Below is an example log:


The above example highlights MuleSoft’s process for connecting IoT devices, processing data, and preparing it for visualization or automation.

Challenges in IoT Integration and MuleSoft’s Solutions

IoT integration faces challenges:

  • Device and Protocol Diversity: IoT ecosystems involve different devices, such as sensors and gateways, using protocols like MQTT or HTTP and different data formats, such as JSON, XML, or binary.
  • Data Volume and Velocity: IoT devices generate high volumes of real-time data, which requires efficient processing to avoid bottlenecks.
  • Security and Authentication: IoT devices are often insecure and require secure communication (e.g., TLS) and device authentication (e.g., OAuth).
  • Data Transformation and Processing: IoT devices often send binary payloads, which must be transformed (for example, from binary to JSON) and enriched before use.

The Future of IoT with MuleSoft

The future of IoT with MuleSoft is promising. Through the Anypoint Platform, MuleSoft addresses critical integration issues: it connects diverse IoT devices and protocols such as MQTT, keeps data flowing between ecosystems, enables real-time data processing and analytics integration, and adds security with TLS and OAuth.

Conclusion

MuleSoft's Anypoint Platform reshapes IoT and API integration by providing a scalable, secure, real-time solution for connecting devices to enterprise systems. As shown in the example, MuleSoft processes MQTT-based IoT data and transforms it into useful insights without additional external scripts or services. By addressing challenges like data volume and security, MuleSoft provides a platform for building IoT ecosystems that deliver automation and insight. As IoT keeps growing, MuleSoft's API connectivity and native protocol support position it as an enabler of innovation in smart cities, healthcare, and beyond. Explore MuleSoft's Anypoint Platform to unlock the full potential of your IoT projects and set the stage for a connected future.

Strategic Cloud Partner: Key to Business Success, Not Just Tech
https://blogs.perficient.com/2025/05/13/strategic-cloud-partner-key-to-business-success-not-just-tech/
Tue, 13 May 2025

Cloud is easy—until it isn’t.

Perficient’s Edge: A Strategic Cloud Partner Focused on Business Outcomes

Cloud adoption has skyrocketed. Multi-cloud. Hybrid cloud. AI-optimized workloads. Clients are moving fast, but many are moving blindly. The result? High costs, low returns, and strategies that stall before they scale.

That’s why this moment matters. Now, more than ever, your clients need a partner who brings more than just cloud expertise—they need business insight, strategic clarity, and real results.

In our latest We Are Perficient episode, we sat down with Kiran Dandu, Perficient’s Managing Director, to uncover exactly how we’re helping clients not just adopt cloud, but win with it.

If you’re in sales, this conversation is your cheat sheet for leading smarter cloud conversations with confidence.

Key #1: Start with Business Outcomes, Not Infrastructure

Kiran makes one thing clear from the start: “We don’t start with cloud. We start with what our clients want to achieve.”

At Perficient, cloud is a means to a business end. That’s why we begin every engagement by aligning cloud architecture with long-term business objectives—not just technical requirements.

Perficient’s Envision Framework: Aligning Cloud with Business Objectives

  • Define their ideal outcomes
  • Assess their existing workloads
  • Select the right blend of public, private, hybrid, or multi-cloud models
  • Optimize performance and cost every step of the way

This outcome-first mindset isn’t just smarter—it’s what sets Perficient apart from traditional cloud vendors.

Key #2: AI in the Cloud – Delivering Millions in Savings Today

Forget the hype—AI is already transforming how we operate in the cloud. Kiran breaks down the four key areas where Perficient is integrating AI to drive real value:

  • DevOps automation: AI accelerates code testing and deployment, reducing errors and speeding up time-to-market.
  • Performance monitoring: Intelligent tools predict and prevent downtime before it happens.
  • Cost optimization: AI identifies underused resources, helping clients cut waste and invest smarter.
  • Security and compliance: With real-time threat detection and automated incident response, clients stay protected 24/7.

The result? A cloud strategy that’s not just scalable, but self-improving.

Key #3: Beyond Cloud Migration to Continuous Innovation

Moving to the cloud isn’t the end goal—it’s just the beginning.

Kiran emphasizes how Perficient’s global delivery model and agile methodology empower clients to not only migrate, but to evolve and innovate faster. Our teams help organizations:

  • Integrate complex systems seamlessly
  • Continuously improve infrastructure as business needs change
  • Foster agility across every department—not just IT

And it’s not just theory. Our global consultants, including the growing talent across LATAM, are delivering on this promise every day.

“The success of our cloud group is really going to drive the success of the organization.”
Kiran Dandu

Global Talent, Local Impact: The Power of a Diverse Strategic Cloud Partner

While visiting our offices in Medellín, Colombia, Kiran highlighted the value of diversity in driving cloud success:

“This reminds me of India in many ways—there’s talent, warmth, and incredible potential here.”

That’s why Perficient is investing in uniting its global cloud teams. The cross-cultural collaboration between North America, LATAM, Europe, and India isn’t just a feel-good story—it’s the engine behind our delivery speed, technical excellence, and customer success.

Key Takeaways for Sales: Lead Smarter Cloud Conversations

If your client is talking about the cloud—and trust us, they are—this interview is part of your toolkit.
You’ll walk away understanding:

  • Why Perficient doesn’t just build cloud platforms—we build cloud strategies that deliver
  • How AI and automation are creating real-time ROI for our clients
  • What makes our global model the best-kept secret in cloud consulting
  • And how to speak the language of business outcomes, not just cloud buzzwords

Watch the Full Interview: Deep Dive with Kiran Dandu

Want to hear directly from the source? Don’t miss Kiran’s full interview, packed with strategic insights that will elevate your next sales conversation.

Watch now and discover how Perficient is transforming cloud into a competitive advantage.

Choose Perficient: Your Client’s Strategic Cloud Partner for a Competitive Edge

Perficient is not just another cloud partner—we’re your client’s competitive edge. Let’s start leading the cloud conversation like it.
