Platforms and Technology Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/platforms-and-technology/

Integrate Coveo Atomic CLI-Based Hosted Search Page into Adobe Experience Manager (AEM)
https://blogs.perficient.com/2025/06/18/integrate-coveo-atomic-cli-based-hosted-search-page-into-adobe-experience-manager-aem/ (Wed, 18 Jun 2025)

Getting Started with Coveo Atomic CLI

This section explains how to install, configure, and deploy a Coveo Atomic project using the Coveo CLI.

Install the CLI

To get started, install the Coveo CLI globally with npm:

npm install -g @coveo/cli

To ensure you’re always using the latest version, update it anytime with:

npm update -g @coveo/cli

Authentication

Once the CLI is installed, you will need to authenticate to your Coveo organization. Use the following command, replacing the placeholders with your specific organization details:

coveo auth:login --environment=prod --organization=<your-organization> --region=<your-region>

For example:

coveo auth:login --environment=prod --organization=blogtestorgiekhkuqk --region=us

Initialize a Coveo Atomic CLI Project

After logging in, initialize a new atomic project by running:

coveo atomic:init <project-name> --type=app

For example:

coveo atomic:init atomicInterface  --type=app

Building and Deploying the Project

Once the project is ready, build the application:

npm run build

This command compiles your code and prepares it for deployment. It creates a production-ready build inside the dist/ folder.

Then deploy your interface to Coveo using:

coveo ui:deploy

After deployment, your search interface will be hosted on Coveo’s infrastructure, ready to embed anywhere on the web, including Adobe Experience Manager (AEM).


Using and Initializing Atomic-Hosted-Page

This section guides you through using and initializing the Atomic-Hosted-Page component of your Coveo project.

Use Atomic-Hosted-Page

If you have customized your Atomic search page locally and deployed it to the Coveo infrastructure, then it will be listed in the Custom Deployment tab of the Search Pages (platform-ca | platform-eu | platform-au) page of the Administration Console. You can use the atomic-hosted-page component to consume it from anywhere on the web.

Initialize Atomic-Hosted-Page

Once you have installed the atomic-hosted-page or atomic-hosted-ui web component, you’ll need to add a script like the following to initialize the atomic-hosted-page component:

<head>
  <!-- ... -->
  <script>
    (async () => {
      await customElements.whenDefined('atomic-hosted-ui');
      const atomicHostedUIPage = document.querySelector('atomic-hosted-ui');

      await atomicHostedUIPage.initialize({
        accessToken: '<ACCESS_TOKEN>',
        organizationId: '<ORGANIZATION_ID>',
        pageId: '<PAGE_ID>'
      });
    })();
  </script>
  <!-- ... -->
</head>
<body>
  <!-- ... -->
  <atomic-hosted-ui hosted-type="code"></atomic-hosted-ui>
  <!-- ... -->
</body>

In this script, replace the placeholders with your Coveo-specific details:

<ACCESS_TOKEN> (string) is an API key or platform token that grants the View all access level on the Search Pages domain in the target Coveo organization.
<ORGANIZATION_ID> (string) is the unique identifier of your organization (for example, mycoveoorganizationa1b23c).
<PAGE_ID> (string) is the unique identifier of the hosted page, which you can copy from the Administration Console.
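If you have not installed the atomic-hosted-page component yet, it is distributed as an npm package; assuming an npm-based build, the install step would look like this:

npm install @coveo/atomic-hosted-page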

Steps to Embed in Adobe Experience Manager (AEM)

  1. Login to Adobe AEM Author Instance
    Example URL: https://author-555.adobeaemcloud.com/

  2. Navigate to the AEM Sites Console
    Go to: https://author-555.adobeaemcloud.com/sites.html/content/blog/us/en/search-results
    The Sites Console in AEM is used to manage your website’s pages and structure.

  3. Create or Select the Page

    • Create new or use an existing page, for example: search-results.

    • Select the page’s checkbox → click Edit (top toolbar).

    • You’ll be redirected to the Page Editor: https://author-555.adobeaemcloud.com/editor.html/content/blog/us/en/search-results.html.

  4. Embed the Coveo Script:
    In the Page Editor, open the Content Tree on the left, select the Layout Container, and click the Configure (wrench icon) button.

  5. Choose Embed Type
    Choose Embed → iFrame. Paste your <atomic-hosted-page> script inside the iFrame.

  6. Preview and Publish the Page

    Click the Page Information icon → Publish Page. A confirmation alert indicates that the page will be live.

  7. View the Published Page
    Example URL: http://localhost:4502/content/blog/us/en/search-results.html

That’s it—you’ve successfully embedded your Coveo Atomic CLI-based Hosted Search Page inside Adobe Experience Manager (AEM)!

References:

Use a hosted page in your infrastructure | Coveo Atomic

 

Microsoft Copilot for Power Platform
https://blogs.perficient.com/2025/06/17/microsoft-copilot-for-power-platform/ (Wed, 18 Jun 2025)

Introduction to Copilot for Power Platform

Microsoft Copilot is a revolutionary AI-powered tool for Power Platform, designed to streamline the development process and enhance the intelligence of your applications. This learning path will take you through the fundamentals of Copilot and its integration with Power Apps, Power Automate, Power Virtual Agents, and AI Builder.

Copilot in Microsoft Power Platform helps app makers quickly solve business problems. A copilot is an AI assistant that can help you perform tasks and obtain information. You interact with a copilot by using a chat experience. Microsoft has added copilots across the different Microsoft products to help users be more productive. Copilots can be generic, such as Microsoft Copilot, and not tied to a specific Microsoft product. Alternatively, a copilot can be context-aware and tailored to the Microsoft product or application that you’re using at the time.


Microsoft Power Platform Copilots & Specializations.

Microsoft Power Platform has several copilots that are available to makers and users.

Microsoft Copilot for Microsoft Power Apps

Use this copilot to help create a canvas app directly from your ideas. Give the copilot a natural language description, such as “I need an app to track my customer feedback.” Afterward, the copilot offers a data structure for you to iterate until it’s exactly what you need, and then it creates pages of a canvas app for you to work with that data. You can edit this information along the way. Additionally, this copilot helps you edit the canvas app after you create it. Power Apps also offers copilot controls for users to interact with Power Apps data, including copilots for canvas apps and model-driven apps.

Microsoft Copilot for Microsoft Power Automate

Use this copilot to create automation that communicates with connectors and improves business outcomes. This copilot can work with cloud flows and desktop flows. Copilot for Power Automate can help you build automation by explaining actions, adding actions, replacing actions, and answering questions.

Microsoft Copilot for Microsoft Power Pages

Use this copilot to describe and create an external-facing website with Microsoft Power Pages. As a result, you have theming options, standard pages to include, and AI-generated stock images and relevant text descriptions for the website that you’re building. You can edit this information as you build your Power Pages website.

How Copilots Work

You can create a copilot by using a language model, which is like a computer program that can understand and generate human-like language. A language model can perform various natural language processing tasks based on a deep-learning algorithm. The massive amounts of data that the language model processes can help the copilot recognize, translate, predict, or generate text and other types of content.

Despite being trained on a massive amount of data, the language model doesn’t contain information about your specific use case, such as the steps in a Power Automate flow that you’re editing. The copilot shares this information for the system to use when it interacts with the language model to answer your questions. This context is commonly referred to as grounding data. Grounding data is use case-specific data that helps the language model perform better for a specific topic. Additionally, grounding data ensures that your data and IP are never part of training the language model.

Accelerate Solution Building with Copilot

Consider the various copilots in Microsoft Power Platform as specialized assistants that can help you become more productive. Copilot can help you accelerate solution building in the following ways:

  • Prototyping
  • Inspiration
  • Help with completing tasks
  • Learning about something

Prototyping

Prototyping is a way of taking an idea that you discussed with others or drew on a whiteboard and building it in a way that helps someone understand the concept better. You can also use prototyping to validate that an idea is possible. For some people, having access to your app or website can help them become a supporter of your vision, even if the app or website doesn’t have all the features that they want.

Inspiration

Building on the prototyping example, you might need inspiration on how to evolve the basic prototype that you initially proposed. You can ask Copilot for inspiration on how to handle the approval of which ideas to prioritize. Therefore, you might ask Copilot, “How could we handle approval?”

Help with Completing Tasks

By using a copilot to assist in your solution building in Microsoft Power Platform, you can complete more complex tasks in less time than if you do them manually. Copilot can also help you complete small, tedious tasks, such as changing the color of all buttons in an app.

Learn about Something

While building an app, flow, or website, you can open a browser and use your favorite search engine to look up something that you’re trying to figure out. With Copilot, you can learn without leaving the designer. For example, your Power Automate flow has a step to List Rows from Dataverse, and you want to find out how to check if rows are retrieved. You could ask Copilot, “How can I check if any rows were returned from the List rows step?”

Knowing the context of your flow, Copilot would respond accordingly.

Design and Plan with Copilot

Copilot can be a powerful way to accelerate your solution-building. However, it’s the maker’s responsibility to know how to interact with it. That interaction includes writing prompts to get the desired results and evaluating the results that Copilot provides.

Consider the Design First

While asking Copilot to “Help me automate my company to run more efficiently” seems ideal, that prompt is unlikely to produce useful results from Microsoft Power Platform Copilots.

Consider the following example, where you want to automate the approval of intake requests. Without significant design thinking, you might use the following prompt with Copilot for Power Automate.

Copilot in cloud flow


“Create an approval flow for intake requests and notify the requestor of the result.”

This prompt produces the following suggested cloud flow.


While the prompt is an acceptable start, you should consider more details that can help you create a prompt that might get you closer to the desired flow.

A good way to improve your success is to spend a few minutes on a whiteboard or other visual design tool, drawing out the business process.


Include the Correct Ingredients in the Prompt

A prompt should include as much relevant information as possible. Each prompt should include your intended goal, context, source, and outcome.

When you’re starting to build something with Microsoft Power Platform copilots, the first prompt that you use sets up the initial resource. For Power Apps, this first prompt is to build a table and an app. For Power Automate, this first prompt is to set up the trigger and the initial steps. For Power Pages, this first prompt sets up the website.

Consider the previous example and the sequence of steps in the sample drawing. You might modify your initial prompt to be similar to the following example.

“When I receive a response to my Intake Request form, start and wait for a new approval. If approved, notify the requestor saying so and also notify them if the approval is denied.”

Continue the Conversation

You can iterate with your copilot. After you establish the context, Copilot remembers it.

The key to starting to build an idea with Copilot is to consider how much to include with the first prompt and how much to refine and add after you set up the resource. Knowing this key consideration is helpful because you don’t need to get a perfect first prompt, only one that builds the idea. Then, you can refine the idea interactively with Copilot.

6 Unique Copilot Features in Power Platform

  1. Natural Language Power FX Formulas in Power Apps

Copilot enables developers to write Power FX formulas using natural language. For instance, typing /subtract datepicker1 from datepicker2 in a label control prompts Copilot to generate the corresponding formula, such as DateDiff(DatePicker1.SelectedDate, DatePicker2.SelectedDate, Days). This feature simplifies formula creation, especially for those less familiar with coding.

  2. AI-Powered Document Analysis with AI Builder

By integrating Copilot with AI Builder, users can automate the extraction of data from documents, such as invoices or approval forms. For example, Copilot can extract approval justifications and auto-generate emails for swift approvals within Outlook. This process streamlines workflows and reduces manual data entry.

  3. Automated Flow Creation in Power Automate

Copilot assists users in creating automated workflows by interpreting natural language prompts. For example, a user can instruct Copilot to “Create a flow that sends an email when a new item is added to SharePoint,” and Copilot will generate the corresponding flow. This feature accelerates the automation process without requiring extensive coding knowledge.

  4. Conversational App Development in Power Apps Studio

In Power Apps Studio, Copilot allows developers to build and edit apps using natural language commands. For instance, typing “Add a button to my header” or “Change my container to align center” enables Copilot to execute these changes, simplifying the development process and making it more accessible.

  5. Generative Topic Creation in Power Virtual Agents

Copilot facilitates the creation of conversation topics in Power Virtual Agents by generating them from natural language descriptions. For example, describing a topic like “Customer Support” prompts Copilot to create a topic with relevant trigger phrases and nodes, streamlining the bot development process.

  6. AI-Driven Website Creation in Power Pages

Copilot assists in building websites by interpreting natural language descriptions. For example, stating “Create a homepage with a contact form and a product gallery” prompts Copilot to generate the corresponding layout and components, expediting the website development process.

Limitations of Copilot

  1. Limited understanding of business context
     Description: Copilot doesn’t always understand your specific business rules or logic.
     Example: You ask Copilot to "generate a travel approval form," but your org requires approval from both the team lead and HR. Copilot might only include one level of approval.

  2. Restricted to available connectors and data
     Description: Copilot can only access data sources that are already connected in your app.
     Example: You ask it to "show top 5 sales regions," but haven’t connected your Sales DB — Copilot can't help unless that connection is preconfigured.

  3. Not fully customizable output
     Description: You might not get exactly the layout, formatting, or logic you want — especially for complex logic.
     Example: Copilot generates a form with 5 input fields, but doesn't group them or align them properly; you still need to fine-tune it manually.

  4. Model hallucination (AI guessing wrong info)
     Description: Like other LLMs, Copilot may “guess” when unsure — and guess incorrectly.
     Example: You ask Copilot to create a formula for filtering “Inactive users,” and it writes a filter condition that doesn’t exist in your dataset.

  5. English-only or limited language support
     Description: Most effective prompts and results come in English; support for other languages is limited or not optimized.
     Example: You try to ask Copilot in Hindi, and it misinterprets the logic or doesn't return relevant suggestions.

  6. Requires clean, named data structures
     Description: Copilot struggles when your tables/columns aren't clearly named.
     Example: If you name a field fld001_status instead of Status, Copilot might fail to identify it correctly or generate unreadable code.

  7. Security roles not respected by Copilot
     Description: Copilot may suggest features that would break your security model if implemented directly.
     Example: You generate a data view for all users, but your app is role-based — Copilot won’t automatically apply row-level security filters.

  8. No support for complex logic or multi-step workflows
     Description: It’s good at simple flows, but not for things like advanced branching, looping, or nested conditions.
     Example: You ask Copilot to automate a 3-level approval chain with reminder logic and escalation — it gives a very basic starting point.

  9. Limited offline or disconnected use
     Description: Copilot and generated logic assume you’re online.
     Example: If your app needs to work offline (e.g., for field workers), Copilot-generated logic may not account for offline sync or local caching.

  10. Only works inside Microsoft ecosystem
      Description: Copilot doesn’t support 3rd-party AI tools natively.
      Example: If your company uses Google Cloud or OpenAI directly, Copilot won’t connect unless you build custom connectors or use HTTP calls.

Build Good Prompts

Knowing how to best interact with the copilot can help get your desired results quickly. When you’re communicating with the copilot, make sure that you’re as clear as you can be with your goals. Review the following dos and don’ts to help guide you to a more successful copilot-building experience.

Do’s of Prompt-Building

To have a more successful copilot building experience, do the following:

  • Be clear and specific.
  • Keep it conversational.
  • Give examples.
  • Check for accuracy.
  • Provide contextual details.
  • Be polite.

Don’ts of Prompt-Building

  • Be vague.
  • Give conflicting instructions.
  • Request inappropriate or unethical tasks or information.
  • Interrupt or quickly change topics.
  • Use slang or jargon.

Conclusion

Copilot in Microsoft Power Platform marks a major step forward in making low-code development truly accessible and intelligent. By enabling users to build apps, automate workflows, analyze data, and create bots using natural language, it empowers both technical and non-technical users to turn ideas into solutions faster than ever.

It transforms how people interact with technology by:

  • Accelerating solution creation
  • Lowering technical barriers
  • Enhancing productivity and innovation

With built-in security, compliance with organizational governance, and continuous improvements from Microsoft’s AI advancements, Copilot is not just a tool—it’s a catalyst for transforming how organizations solve problems and deliver value.

As AI continues to evolve, Copilot will play a central role in democratizing software development and helping organizations move faster and smarter with data-driven, automated tools.

YAML files in DBT
https://blogs.perficient.com/2025/06/12/yaml-files-in-dbt/ (Thu, 12 Jun 2025)

To streamline project development and maintenance in any programming language, we need the support of metadata, configuration, and documentation. Project configuration is done using configuration files, which are easy to use and make it friendlier for developers to interact with the project. One such type of configuration file used in DBT is the YAML file.
In this blog, we will go through the required YAML files in DBT.
Let’s first understand what DBT and YAML are.

DBT (Data Build Tool):
Data transformation is an important process in modern analytics. DBT is a tool to transform, clean, and aggregate data within the data warehouse. The power of DBT lies in its use of YAML files for both configuration and transformation.
Note:
Please go through the linked DBT documentation for more details on DBT.
What is a YAML file:
YAML originally stood for “Yet Another Markup Language” and is now a recursive acronym for “YAML Ain’t Markup Language.” It is easy to read and understand, and it is a superset of JSON.
Common uses of YAML files:
– Configuration Management:
Used to define configuration such as roles and environments.
– CI/CD Pipelines:
CI/CD tools depend on YAML files to describe their pipelines.
– Data Serialization:
YAML can represent complex data structures such as lists, mappings, and nested objects.
– APIs:
YAML can be used to define API contracts and specifications.

Sample YAML file:
YAML files are at the core of defining configuration and transformation in DBT; they use the “.yml” extension.
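For illustration, here is a small generic YAML document (not DBT-specific) showing key/value pairs, a nested mapping, and a list:

name: data_pipeline
enabled: true
database:
  host: localhost
  port: 5432
environments:
  - dev
  - qa
  - prod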

The most important YAML file is
profiles.yml:
This file lives locally on the developer’s machine (by default under ~/.dbt/). It contains sensitive credentials used to connect to the target data warehouse.
Purpose:
It holds the connection details for the target data warehouse (Snowflake, Postgres, etc.).
A profile configuration looks like the example below.
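As an illustrative sketch (the profile name, target, and connection values are placeholders, and the profile name must match the profile key in dbt_project.yml), a Snowflake profile might look like this:

dbt_demo:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: <account_identifier>
      user: <username>
      password: <password>
      role: <role>
      database: <database>
      warehouse: <warehouse>
      schema: <schema>
      threads: 4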
Note:
We should not share the profiles.yml file with anyone, because it contains the target data warehouse credentials. This file is used with DBT Core, not DBT Cloud.
YAML file classification according to DBT component:
Let us go through different components of DBT with corresponding YAML files:

1. dbt_project.yml:
This is the core configuration file of a DBT project. It tells DBT which configurations to use for the project. By default, DBT looks for dbt_project.yml in the current working directory.

For Example:

name: string

config-version: 2
version: version

profile: profilename

model-paths: [directorypath]
seed-paths: [directorypath]
test-paths: [directorypath]
analysis-paths: [directorypath]
macro-paths: [directorypath]
snapshot-paths: [directorypath]
docs-paths: [directorypath]
asset-paths: [directorypath]

packages-install-path: directorypath

clean-targets: [directorypath]

query-comment: string

require-dbt-version: version-range | [version-range]

flags:
  <global-configs>

dbt-cloud:
  project-id: project_id # Required
  defer-env-id: environment # Optional

exposures:
  +enabled: true | false

quoting:
  database: true | false
  schema: true | false
  identifier: true | false

metrics:
  <metric-configs>

models:
  <model-configs>

seeds:
  <seed-configs>

semantic-models:
  <semantic-model-configs>

saved-queries:
  <saved-queries-configs>

snapshots:
  <snapshot-configs>

sources:
  <source-configs>
  
tests:
  <test-configs>

vars:
  <variables>

on-run-start: sql-statement | [sql-statement]
on-run-end: sql-statement | [sql-statement]

dispatch:
  - macro_namespace: packagename
    search_order: [packagename]

restrict-access: true | false
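For reference, a minimal dbt_project.yml might look like the following sketch (project and profile names are illustrative):

name: my_dbt_project
version: "1.0.0"
config-version: 2
profile: dbt_demo

model-paths: ["models"]
seed-paths: ["seeds"]
test-paths: ["tests"]

models:
  my_dbt_project:
    +materialized: view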

 

Model:
Models are SQL files that define how your data is transformed. In a model configuration (properties) file, you describe your models, their columns, and how they should be built. It lives under the models directory of the DBT project, and you can give it any name you like; in this example it is named “schema.yml”.
Purpose of the model YAML file:
It configures model-level metadata such as name, description, tags, materialization, and columns, which are used when transforming the data.
It looks like this:

version: 2

models:
  - name: my_first_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null

  - name: my_second_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null


2. Seed:
Seeds are used to load CSV files into the data warehouse. This is useful for static or staging data before applying any transformation.

Purpose of the seed YAML file:
To configure how the CSV files placed under the seed directory are loaded into data warehouse tables (names, descriptions, target database/schema, and column types).

A seed configuration file looks like this:

version: 2
seeds:
  - name: <name>                # must match the CSV file name in the seed directory
    description: Raw data from a source
    config:
      database: <database name>
      schema: <database schema>
      column_types:
        id: integer
        name: varchar(100)

Testing:
Testing is a key step in any project. DBT supports tests such as unique and not-null checks on columns.

Create a dbtTest.yml file under the test folder of the DBT project.

Purpose of the test YAML file:
It helps check data integrity and quality, and it keeps those checks separate from the business logic.
It looks like this:

columns:
  - name: order_id
    tests:
      - not_null
      - unique

That covers the different YAML files in DBT and the purpose of each.

Conclusion:
DBT and its YAML files provide a human-readable way to manage data transformations. With DBT, we can easily create, transform, and test data models, making it a valuable tool for data professionals. Together, DBT and YAML empower you to work more efficiently as a data analyst, data engineer, or business analyst.

Thanks for reading.

 

 

 

Developing a Serverless Blogging Platform with AWS Lambda and Python
https://blogs.perficient.com/2025/06/11/developing-a-serverless-blogging-platform-with-aws-lambda-and-python/ (Thu, 12 Jun 2025)

Introduction

Serverless is changing the game—no need to manage servers anymore. In this blog, we’ll see how to build a serverless blogging platform using AWS Lambda and Python. It’s scalable, efficient, and saves cost—perfect for modern apps.

How It Works

 

(Architecture diagram: API Gateway routes requests to Lambda, Lambda reads and writes DynamoDB, and S3 + CloudFront serve the static front end.)

Prerequisites

Before starting the demo, make sure you have: an AWS account, basic Python knowledge, AWS CLI and Boto3 installed.

Demonstration: Step-by-Step Guide

Step 1: Create a Lambda Function

Open the Lambda service and click “Create function.” Choose “Author from scratch,” name it something like BlogPostHandler, select Python 3.x, and give it a role with access to DynamoDB and S3. Then write your code using Boto3 to handle CRUD operations for blog posts stored in DynamoDB.

Lamda_Function.txt
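The full handler is provided in the attached Lamda_Function.txt above. Purely as an illustrative sketch of what such a handler could look like (the table name, field names, and routing below are assumptions, not the exact code from the attachment):

import json
import boto3

# Assumed table name; adjust to match the DynamoDB table created in Step 3.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("BlogPosts")

# CORS headers returned with every response so the S3/CloudFront front end can call the API.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET,POST,DELETE,OPTIONS",
}

def _response(status_code, body):
    # API Gateway Lambda proxy integration expects this response shape.
    return {
        "statusCode": status_code,
        "headers": CORS_HEADERS,
        "body": json.dumps(body),
    }

def lambda_handler(event, context):
    method = event.get("httpMethod", "")

    if method == "GET":
        # Return all blog posts (string attributes only; numeric attributes
        # come back as Decimal and would need conversion before json.dumps).
        items = table.scan().get("Items", [])
        return _response(200, items)

    if method == "POST":
        # Create or update a blog post from the JSON request body.
        post = json.loads(event.get("body") or "{}")
        table.put_item(Item=post)
        return _response(201, post)

    if method == "DELETE":
        # Delete a post by its postId path parameter, e.g. /posts/{postId}.
        post_id = (event.get("pathParameters") or {}).get("postId")
        table.delete_item(Key={"postId": post_id})
        return _response(200, {"deleted": post_id})

    return _response(405, {"message": "Method not allowed"})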

Step 2: Set Up API Gateway

First, open API Gateway, choose REST API, and click “Build.” Choose “New API,” name it something like BlogAPI, and select “Edge optimized” for global access. Then create a resource like /posts, add methods like GET or POST, and link them to your Lambda function (e.g. BlogPostHandler) using Lambda Proxy integration. After setting up all methods, deploy it by creating a stage like prod. You’ll get an Invoke URL which you can test using Postman or curl.
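For example, with curl (the API ID, region, and stage in the URL are placeholders for your own Invoke URL):

curl -X POST "https://<api-id>.execute-api.<region>.amazonaws.com/prod/posts" -H "Content-Type: application/json" -d '{"postId": "1", "title": "Hello serverless"}'

curl "https://<api-id>.execute-api.<region>.amazonaws.com/prod/posts"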


 

Step 3: Configure DynamoDB

Open DynamoDB and click “Create table.” Name it something like BlogPosts, set postId as the partition key. If needed, add a sort key like category for filtering. Default on-demand capacity is fine—it scales automatically. You can also add extra attributes like timestamp or tags for sorting and categorizing. Once done, hit “Create.”
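If you prefer the command line, an equivalent table can be created with the AWS CLI (the names match the example above):

aws dynamodb create-table --table-name BlogPosts --attribute-definitions AttributeName=postId,AttributeType=S --key-schema AttributeName=postId,KeyType=HASH --billing-mode PAY_PER_REQUEST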


 


Step 4: Deploy Static Content on S3

First, make your front-end files—HTML, CSS, maybe some JavaScript. Then go to AWS S3, create a new bucket with a unique name, and upload your files like index.html. This will host your static website.

Index.html

After uploading, set the bucket policy to allow public read access so anyone can view your site. That’s it—your static website will now be live from S3.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}

After uploading, don’t forget to replace your-bucket-name in the bucket policy with your actual S3 bucket name. This makes sure the permissions work properly. Now your static site is live—S3 will serve your HTML, CSS, and JS smoothly and reliably.

Step 5: Distribute via CloudFront

Go to CloudFront and create a new Web distribution. Set the origin to your S3 website URL (like your-bucket-name.s3-website.region.amazonaws.com, not the ARN). For Viewer Protocol Policy, choose “Redirect HTTP to HTTPS” for secure access. Leave other settings as-is unless you want to tweak cache settings. Then click “Create Distribution”—your site will now load faster worldwide.


To let your frontend talk to the backend, you need to enable CORS in API Gateway. Just open the console, go to each method (like GET, POST, DELETE), click “Actions,” and select “Enable CORS.” That’s it—your frontend and backend can now communicate properly.


Additionally, make sure your Lambda function responses include the following CORS headers (we already added them in our Lambda function).
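At a minimum, these are the standard CORS response headers (adjust the allowed origin and methods to your application):

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET,POST,DELETE,OPTIONS
Access-Control-Allow-Headers: Content-Type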

 

Results

That’s it—your serverless blogging platform is ready! API Gateway gives you the endpoints, Lambda handles the logic, DynamoDB stores your blog data, and S3 + CloudFront serve your frontend fast and globally. Fully functional, scalable, and no server headaches!

 


Conclusion

Building a serverless blog with AWS Lambda and Python shows how powerful and flexible serverless really is. It’s low-maintenance, cost-effective, and scales easily, making it perfect for anything from a personal blog to a full content site. A solid setup for modern web apps!

]]>
https://blogs.perficient.com/2025/06/11/developing-a-serverless-blogging-platform-with-aws-lambda-and-python/feed/ 0 382159
Running Multiple Test Cases from a CSV File Using Playwright and TypeScript
https://blogs.perficient.com/2025/06/11/running-multiple-test-cases-from-a-csv-file-using-playwright-and-typescript/ (Wed, 11 Jun 2025)

In the world of automated testing, maintaining flexibility and scalability is crucial—especially when it comes to validating functionality across multiple data inputs. Data-driven testing enables QA professionals to decouple test scripts from the input data, allowing the same test flow to run with multiple sets of inputs.

This tutorial explains how to set up data-driven tests in Playwright using TypeScript, where external CSV files provide varying input data for each scenario.

This approach is highly effective for validating login scenarios, form submissions, and any functionality that depends on multiple sets of data.

Why Use Data-Driven Testing?

Data-driven testing provides several benefits:

  • Reduced Code Duplication: Instead of writing multiple similar tests, a single test reads various inputs from an external file.
  • Improved Maintainability: Test data can be modified independently of the test logic.
  • Scalability: Enables easier scaling of testing across a wide range of input combinations.

When working with TypeScript and Playwright, using CSV files for test input is a natural fit for structured test cases, such as form validation, login testing, and e-commerce transactions.

Setting Up the Project

To get started, make sure you have a Playwright and TypeScript project set up. If not, here’s how to initialize it:

npm init -y

npm install -D @playwright/test

npx playwright install

Enable TypeScript support:

npm install -D typescript ts-node

Create a basic tsconfig.json:

{

  "compilerOptions": {

    "target": "ES6",

    "module": "commonjs",

    "strict": true,

    "esModuleInterop": true,

    "outDir": "dist"

  },

  "include": ["*.ts"]

}

 

Now, install a CSV parsing library:

npm install csv-parse

Creating the CSV File

We’ll begin by setting up a basic loginData.csv file containing sample login credentials.

username,password

user1,password1

user2,password2

invalidUser,wrongPass

Save it in your project root directory.

Reading CSV Data in TypeScript

Create a helper function, readCSV.ts, to parse CSV files:

import fs from 'fs';

import { parse } from 'csv-parse/sync';

// Reads a CSV file and returns one object per row, keyed by the header row.
// The synchronous API is used so the data is available while Playwright is
// registering the tests (test generation happens as the spec file loads).
export function readCSV(fileLocation: string): Record<string, string>[] {
  const fileContent = fs.readFileSync(fileLocation, 'utf-8');
  return parse(fileContent, {
    columns: true,
    skip_empty_lines: true,
    trim: true,
  });
}

Writing the Data-Driven Test in Playwright

Now, let’s write a test that uses this CSV data. Create a file named login.spec.ts:

import { test, expect } from '@playwright/test';

import { readCSV } from './readCSV';

// Load the rows once, while the spec file loads, so that one test is
// generated for every record in the CSV file.
const testData = readCSV('./loginData.csv'); // rows: { username, password }

test.describe('Data-Driven Login Tests', () => {
  for (const data of testData) {
    test(`Log in attempt with ${data.username}`, async ({ page }) => {
      await page.goto('https://example.com/login');
      await page.fill('#username', data.username);
      await page.fill('#password', data.password);
      await page.click('button[type="submit"]');

      // Adjust this check based on expected outcomes
      if (data.username.startsWith('user')) {
        await expect(page).toHaveURL(/dashboard/);
      } else {
        await expect(page.locator('.error-text')).toBeVisible();
      }
    });
  }
});

The approach reads each row from the CSV and generates individual test cases dynamically, using the data from each entry as input parameters.

Best Practices

  • Separate Test Data from Logic: Always keep your data files separate from test scripts to simplify maintenance.
  • Validate Test Inputs: Ensure CSV files are clean and correctly formatted.
  • Parameterize Conditions: Adjust validation logic based on the nature of test data (e.g., valid vs. invalid credentials).

Conclusion

Using CSV-based data-driven testing with Playwright and TypeScript offers a powerful way to scale test coverage without bloating your codebase. It’s ideal for login scenarios, input validation, and other repetitive test cases where only the data varies.

By externalizing your data and looping through test scenarios programmatically, you can reduce redundancy, improve maintainability, and support continuous delivery pipelines more effectively.

As your application grows, this strategy will help ensure that your test suite remains efficient, readable, and scalable.

Revolutionizing Clinical Trial Data Management with AI-Powered Collaboration
https://blogs.perficient.com/2025/06/10/revolutionizing-clinical-trial-data-management-with-ai-powered-collaboration/ (Tue, 10 Jun 2025)

Clinical trial data management is critical to pharmaceutical research, yet it remains a significant challenge for many organizations. The industry faces several persistent hurdles:

  • Data fragmentation: Research teams often struggle with siloed information across departments, hindering collaboration and comprehensive analysis.
  • Outdated systems: Many organizations rely on legacy data management tools that fail to meet the demands of modern clinical trials.
  • Incomplete or inaccurate data: Ensuring data completeness and accuracy is an ongoing battle, potentially compromising trial integrity and patient safety.
  • Limited data accessibility: Researchers frequently lack efficient ways to access and interpret the specific data relevant to their roles.
  • Collaboration barriers: Disparate teams often struggle to share insights and work cohesively, slowing down the research process.
  • Regulatory compliance: Keeping up with evolving data management regulations adds another layer of complexity to clinical trials.

These challenges not only slow down the development of new treatments but also increase costs and potentially impact patient outcomes. As clinical trials grow more complex and data-intensive, addressing these pain points in data management becomes increasingly crucial for researchers and product teams.

A Unified Clinical Trial Data Management Platform 

Life sciences leaders are engaging our industry experts to reimagine the clinical data review process. We recently embarked on a journey with a top-five life sciences organization that shared a similar clinical collaboration vision and, together, moved from vision to global production use of this unified platform. This cloud-based, client-tailored solution leverages AI, rich integrations, and collaborative tools to streamline the clinical trial data management process. 

Key Features of Our Client-Tailored Clinical Data Review Solution: 

  1. Data Review Whiteboard: A centralized module providing access to clean, standardized data with customized dashboards for different team needs.
  2. Patient Profiles: Easily track individual trial participants across multiple data domains, ensuring comprehensive patient monitoring.
  3. EDC Integration: Seamlessly integrate Electronic Data Capture system queries, enabling interactive conversations between clinical team members.
  4. Study Setup: Centralize and manage all metadata, facilitating efficient study design and execution.
  5. AI-Powered Insights: Leverage artificial intelligence to analyze vast amounts of clinical trial data, automatically identify anomalies, and support improved decision-making.

The Impact: Enhanced Collaboration and Faster Results 

By implementing our clinical trial data management solution, organizations can: 

  • Ensure patient safety through comprehensive data visibility
  • Break down data silos, promoting collaboration across teams 
  • Accelerate the development of new treatments 
  • Improve decision-making with AI-driven insights 
  • Streamline the clinical data review process 

Breaking Down Clinical Data Siloes for Better Outcomes 

Leveraging a modern, cloud-based architecture and open-source technologies to create a unified clinical data repository, the clinical data review solution takes aim at the siloes that have historically plagued the clinical review process. By breaking down these silos, researchers can avoid duplicating efforts, share insights earlier, and ultimately accelerate the development of new treatments.

AI Drives Clinical Data Insights 

Clinical trials produce vast amounts of data—all of it useful, but potentially cumbersome to sort and examine. That’s where artificial intelligence (AI) models can step in, analyzing and extracting meaning from mountains of raw information. It can also be deployed to automatically identify anomalies, alerting researchers that further action is needed. By embedding AI directly into its main data pipelines, our tailored clinical data review solution effortlessly supports improved decision making.

Data Puts Patients First 

Patient safety must be the number one concern of any ethical trial, and clinical research data can play a key role in ensuring it. With a clinical data hub offering unparalleled vision into every piece of data generated for the trial – from lab results and anomalies to adverse reactions – teams can track the well-being of each patient in their study. Users can flag potential issues, making it easy for collaborators to review any concerns.


Success In Action

Our tailored solution for a top-five life sciences leader integrated data from 13 sources and included bi-directional EDC integration and multiple AI models. Our deep understanding of clinical trial processes, data management, and platforms proved instrumental in delivering a solution that met—and exceeded—expectations. 

Want to know more about our approach to clinical trial data collaboration? Check out our guide on the subject.

Transform Clinical Data Review With An Expert Partner

Discover why the largest life sciences organizations – including 14 of the top 20 pharma/biotech firms, 6 of the top 10 CROs, and 14 of the top 20 medical device organizations – have counted on our world-class industry capabilities and experience with leading technology innovators. Our deep expertise in life sciences and digital technologies, including artificial intelligence and machine learning, helps transform the R&D process and deliver meaningful value to patients and healthcare professionals.

Contact us to learn about our life sciences and healthcare expertise and capabilities, and how we can help you transform your business.

Empower Healthcare With AI-Driven Insights

 

Empowering the Modern Insurance Agent: Digital Strategies That Deliver Business Impact
https://blogs.perficient.com/2025/06/09/empowering-the-modern-insurance-agent-digital-strategies-that-deliver-business-impact/ (Mon, 09 Jun 2025)

Digital transformation in insurance isn’t slowing down. But here’s the good news: agents aren’t being replaced by technology. They’re being empowered by it. Agents are more essential than ever in delivering value. For insurance leaders making strategic digital investments, the opportunity lies in enabling agents to deliver personalized, efficient, and human-centered experiences at scale.

Drawing from recent industry discussions and real-world case studies, we’ve gathered insights to highlight four key themes where digital solutions are transforming agent effectiveness and unlocking measurable business value.

Personalization at Scale: Turning Data into Differentiated Experiences

Customers want to feel seen, and they expect tailored advice with seamless service. When you deliver personalized experiences, you build stronger loyalty, increase engagement, and drive better results.

Key insights:

  • Personalization sits at the intersection of human empathy and machine accuracy.
  • Leveraging operational data through platforms like Salesforce Marketing Cloud enables 1:1 personalization across millions of customers and prospects.
  • One insurer saw a 5x increase in key site action conversions and converted 1.3 million unknown users to known through integrated digital personalization.

Strategic takeaway:

Look for platforms that bring all your customer data together and enable real-time personalization. This isn’t just about marketing. It’s a growth strategy.

Success In Action: Proving Rapid Value and Creating Better Member Experiences

Intelligent Automation: Freeing Agents to Focus on What Matters Most

Agents spend too much time on repetitive, low-value tasks. Automation can streamline these processes, allowing agents to focus on complex, high-value interactions that need a human touch.

Key insights:

  • Automating beneficiary change requests reduced manual work and improved data accuracy for one major insurer.
  • Another organization automated loan processing, which reduced processing time by 92% and unlocked $2M in annual savings.

Strategic takeaway:

Start with automation in the back-office to build confidence and demonstrate ROI. Then expand to customer-facing processes to enhance speed and service without sacrificing the personal feel.

Explore More: Transform Your Business With Cutting-Edge AI and Automation Solutions

Digitization: Building the Foundation for AI and Insight-Driven Decisions

Insurance is a document-heavy industry. Unlocking the value trapped in unstructured data is critical to enabling AI and smarter decision-making.

Key insights:

  • Digitizing legacy documents using tools like Microsoft Syntex and AI Builder enabled one insurer to create a consolidated, accurate claims and policy database.
  • This foundational step is essential for applying machine learning and delivering personalized experiences at scale.

Strategic takeaway:

Prioritize digitization as a foundational investment. Without clean, accessible data, personalization and automation efforts will stall.

Related: Data-Driven Companies Move Faster and Smarter

Agentic Frameworks: Guiding Agents with Real-Time Intelligence

The future of insurance distribution lies in human-AI collaboration. Agentic frameworks empower agents with intelligent prompts, decision support, and operational insights.

Key insights:

  • AI can help guide agents through complex underwriting and risk assessment scenarios which helps improve both speed and accuracy.
  • Carriers are increasingly testing these frameworks in the back office, where the risk is lower and the savings are real.

Strategic takeaway:

Start building toward a connected digital ecosystem where AI supports—not replaces—your teams. That’s how you can deliver empathetic, efficient, and accurate service.

You May Also Enjoy: Top 5 Digital Trends for Insurance in 2025

Final Thought: Technology as an Enabler, Not a Replacement

The most successful carriers seeing the biggest wins are those that blend the precision of machines with human empathy. They’re transforming how agents engage, advise, and deliver value.

“If you don’t have data fabric, platform modernization, and process optimization, you can’t deliver personalization at scale. It’s a crawl, walk, run journey—but the results are real.”

Next Steps for Leaders:

  • Assess your data readiness. Is your data accessible, accurate, and actionable?
  • Identify automation quick wins. Where can you reduce manual effort without disrupting the customer experience?
  • Invest in personalization platforms. Are your agents equipped to deliver tailored advice at scale?
  • Explore agentic frameworks. How can AI support—not replace—your frontline teams?

Carriers and brokers count on us to help modernize, innovate, and win in an increasingly competitive marketplace. Our solutions power personalized omnichannel experiences and optimize performance across the enterprise.

  • Business Transformation: Activate strategy and innovation ​within the insurance ecosystem.​
  • Modernization: Optimize technology to boost agility and ​efficiency across the value chain.​
  • Data + Analytics: Power insights and accelerate ​underwriting and claims decision-making.​
  • Customer Experience: Ease and personalize experiences ​for policyholders and producers.​

We are trusted by leading technology partners and consistently mentioned by analysts. Discover why we have been trusted by 13 of the 20 largest P&C firms and 11 of the 20 largest annuity carriers. Explore our insurance expertise and contact us to learn more.

Beginner’s Guide to Playwright Testing in Next.js
https://blogs.perficient.com/2025/06/09/beginners-guide-to-playwright-testing-in-next-js/ (Mon, 09 Jun 2025)

Building modern web applications comes with the responsibility of ensuring they perform correctly across different devices, browsers, and user interactions. If you’re developing with Next.js, a powerful React framework, incorporating automated testing from the start can save you from bugs, regressions, and unexpected failures in production.

This guide introduces Playwright, a modern end-to-end testing framework from Microsoft, and demonstrates how to integrate it into a Next.js project. By the end, you’ll have a basic app with route navigation and Playwright tests that verify pages render and behave correctly.

Why Use Playwright with Next.js

Next.js enables fast, scalable React applications with features like server-side rendering (SSR), static site generation (SSG), dynamic routing, and API routes.

Playwright helps you simulate real user actions like clicking, navigating, and filling out forms in a browser environment. It’s:

  • Fast and reliable
  • Headless (run without UI), or headed (for debugging)
  • Multi-browser (Chromium, Firefox, WebKit)
  • Great for full end-to-end testing

Together, they create a perfect testing stack.

Prerequisites

Before we start, make sure you have the following:

  • Node.js v16 or above
  • npm or yarn
  • Basic familiarity with JavaScript, Next.js and React

Step 1: Create a New Next.js App

Let’s start with a fresh project. Open your terminal and run:

npx create-next-app@latest nextjs-playwright-demo
cd nextjs-playwright-demo

Once the setup is completed, start your development server:

npm run dev

You should see the default Next.js homepage at http://localhost:3000.

Step 2: Add Pages and Navigation

Let’s add two simple routes: Home and About

Create the About page (src/app/about/page.tsx)

// src/app/about/page.tsx
export default function About() {
    return (
        <h2>About Page</h2>
    )
}

 

Update the Home Page with a Link

Edit src/app/page.tsx:

import Link from "next/link";

export default function App() {
    return (
        <div>
            <h2>Home Page</h2>
            <Link href="/about">Go to about</Link>
        </div>
    )
}

You now have two routes ready to be tested.

Step 3: Install Playwright

Install the Playwright CLI globally:

npm install -g playwright

This makes the playwright command available; the test library, configuration, and browsers (Chromium, Firefox, WebKit) are installed in your project in the next step.

Step 4: Initialize Playwright

Run:

npm init playwright

This sets up:

  • playwright.config.ts for playwright configurations
  • tests/ directory for your test files
  • Install dev dependency in the project

Step 5: Write Playwright Tests for Your App

Create a test file: tests/routes.spec.ts

import { test, expect } from "@playwright/test";

test("Home page render correctly", async ({ page }) => {
    await page.goto("http://localhost:3000/");
    await expect(page.locator("h2")).toHaveText(/Home Page/);
});

test("About page renders correctly", async ({ page }) => {
    await page.goto("http://localhost:3000/about");
    await expect(page.locator("h2")).toHaveText(/About Page/);
});

test("User can navigate from Home to About Page", async ({ page }) => {
    await page.goto("http://localhost:3000/");
    await page.click("text=Go to About");
    await page.waitForURL("/about");
    await expect(page).toHaveURL("/about");
    await expect(page.locator("h2")).toHaveText(/About Page/);
});

What’s Happening?

  • The first test visits the home page and checks heading text
  • The second test goes directly to the About page
  • The third simulates clicking a link to navigate between routes

Step 6: Run Your Tests

To run all tests:

npx playwright test

You should see output like:

Command Line Output

Run in the headed mode (visible browser) for debugging:

npx playwright test --headed

Launch the interactive test runner:

npx playwright test --ui

Step 7: Trace and Debug Failures

Playwright provides a powerful trace viewer to debug flaky or failed tests.

Enable tracing in playwright.config.ts:

Playwright Config Js
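A typical configuration enables tracing in the use block; the 'on-first-retry' value shown here is one common choice (an illustrative sketch rather than the exact screenshot contents):

import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Record a trace the first time a failed test is retried
    trace: 'on-first-retry',
  },
});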

Then show the report with

npx playwright show-report

This opens a UI where you can replay each step of your test.

What You’ve Learned

In this tutorial, you’ve:

  • Create a basic Next.js application
  • Set up routing between pages
  • Installed and configured Playwright
  • Wrote end-to-end test to validate route rendering and navigation
  • Learned how to run, debug and show-report your tests

Next Steps

This is the just the beginning. Playwright can also test:

  • API endpoints
  • Form submissions
  • Dynamic content loading
  • Authentication flows
  • Responsive behavior

Conclusion

Combining Next.js with Playwright gives you confidence in your app’s behavior. It empowers you to automate UI testing in a way that simulates real user interactions. Even for small apps, this testing workflow can save you from major bugs and regressions.

Boost Cloud Efficiency: AWS Well-Architected Cost Tips
https://blogs.perficient.com/2025/06/09/boost-cloud-efficiency-aws-well-architected-cost-tips/ (Mon, 09 Jun 2025)

In today’s cloud-first world, building a secure, high-performing, resilient, and efficient infrastructure is more critical than ever. That’s where the AWS Well-Architected Framework comes in: a powerful guide designed to help architects and developers make informed decisions and build better cloud-native solutions.

What is the AWS Well-Architected Framework?

The AWS Well-Architected Framework provides a consistent approach for evaluating and improving your cloud architecture. It’s built around six core pillars that represent key areas of focus for building robust and scalable systems:

  • Operational Excellence – Continuously monitor and improve systems and processes.
  • Security – Protect data, systems, and assets through risk assessments and mitigation strategies.
  • Reliability – Ensure workloads perform as intended and recover quickly from failures.
  • Performance Efficiency – Use resources efficiently and adapt to changing requirements.
  • Cost Optimization – Avoid unnecessary costs and maximize value.
  • Sustainability – Minimize environmental impact by optimizing resource usage and energy consumption


Explore the AWS Well-Architected Framework here https://aws.amazon.com/architecture/well-architected

AWS Well-Architected Timeline

From time to time, AWS updates the framework and introduces new resources that we can follow to apply it better to our use cases and achieve better architectures.

AWS Well-Architected Tool

To help you apply these principles, AWS offers the Well-Architected Tool—a free service that guides you through evaluating your workloads against the six pillars.

How it Works:

  • Select a workload.
  • Answer a series of questions aligned with the framework.
  • Review insights and recommendations.
  • Generate reports and track improvements over time.

Try the AWS Well-Architected Tool here https://aws.amazon.com/well-architected-tool/

Go Deeper with Labs and Lenses

AWS also provides Well-Architected Labs (hands-on exercises for applying the framework) and Lenses that extend the guidance to specific domains and workload types (for example, the Serverless and SaaS Lenses).

Deep Dive: Cost Optimization Pillar

Cost Optimization is not just about cutting costs—it’s about maximizing value. It ensures that your cloud investments align with business goals and scale efficiently.

Why It Matters:

  • Understand your spending patterns.
  • Ensure costs support growth, not hinder it.
  • Maintain control as usage scales.

5 Best Practices for Cost Optimization

  1. Practice Cloud Financial Management
  • Build a cost optimization team.
  • Foster collaboration between finance and tech teams.
  • Use budgets and forecasts.
  • Promote cost-aware processes and culture.
  • Quantify business value through automation and lifecycle management.
  2. Expenditure and Usage Awareness
  • Implement governance policies.
  • Monitor usage and costs in real time (a small Cost Explorer sketch follows this list).
  • Decommission unused or underutilized resources.
  3. Use Cost-Effective Resources
  • Choose the right services and pricing models.
  • Match resource types and sizes to workload needs.
  • Plan for data transfer costs.
  4. Manage Demand and Supply
  • Use auto-scaling, throttling, and buffering to avoid over-provisioning.
  • Align resource supply with actual demand patterns.
  5. Optimize Over Time
  • Regularly review new AWS features and services.
  • Adopt innovations that reduce costs and improve performance.
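Several of these practices, particularly expenditure and usage awareness, start with knowing where the money actually goes. As a small, hedged illustration (assuming the Cost Explorer API is enabled for the account and boto3 is configured), the ce client can break the last 30 days of spend down by service:

import boto3
from datetime import date, timedelta

# Sketch: unblended cost per AWS service over the last 30 days.
ce = boto3.client("ce")  # Cost Explorer

end = date.today()
start = end - timedelta(days=30)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{service}: ${float(amount):.2f}")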

Conclusion

The AWS Well-Architected Framework is more than a checklist—it’s a mindset. By embracing its principles, especially cost optimization, you can build cloud environments that are not only efficient and scalable but also financially sustainable.

]]>
https://blogs.perficient.com/2025/06/09/boost-cloud-efficiency-aws-well-architected-cost-tips/feed/ 0 378814
Capturing API Requests from Postman Using JMeter https://blogs.perficient.com/2025/06/09/capturing-api-requests-from-postman-using-jmeter/ https://blogs.perficient.com/2025/06/09/capturing-api-requests-from-postman-using-jmeter/#respond Mon, 09 Jun 2025 06:21:51 +0000 https://blogs.perficient.com/?p=382378

Performance testing is a crucial phase in the API development lifecycle. If you’re using Postman for API testing and want to transition to load testing using Apache JMeter, you’ll be glad to know that JMeter can record your Postman API calls. This blog will guide you through a step-by-step process of capturing those requests seamlessly.

Why Record Postman Requests in JMeter?

Postman is excellent for manually testing individual API calls, while JMeter excels at simulating concurrent users and measuring performance. Recording your Postman traffic in JMeter lets you reuse the requests you have already built as the starting point for a load test instead of recreating them by hand.

Prerequisites:

  • Apache JMeter
  • Postman
  • JDK 8 or later
  • Internet access

Step-by-Step Guide

  1. Launch JMeter and Create a Test Plan: Open JMeter, create a Thread Group under the Test Plan, and add the HTTP(S) Test Script Recorder under Non-Test Elements.
  2. Add a Recording Controller: Inside your Thread Group, add a Recording Controller; it will collect all the requests captured during the session.
  3. Import the JMeter certificate in Postman: Go to Postman > Settings > Certificates, toggle on "CA certificates", locate ApacheJMeterTemporaryRootCA.crt, and add it.
  4. Configure the Postman proxy: In Postman, go to Settings > Proxy, set the proxy server address to 'localhost' and the port to '8888'.
  5. Start the JMeter Proxy Recorder: Set the port to 8888 in the recorder and hit Start.
  6. Execute API Requests from Postman: Send any API requests from Postman, and you'll see them appear in the Recording Controller in JMeter. If you need a free REST API to practice against, https://reqres.in/ is used here as an example. (If you prefer scripting the calls instead of using Postman, see the sketch after this list.)
  7. Stop the Recording: Click Stop in JMeter's recorder once you've captured all desired traffic.
  8. Review the Results: Add a Listener like 'View Results Tree' under your Thread Group to see the captured request and response data.
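Postman is not the only client the recorder can capture: anything routed through the JMeter proxy on port 8888 shows up in the Recording Controller. The sketch below is a hedged Python alternative that sends the same sample request through the proxy; the certificate path is assumed to point at the ApacheJMeterTemporaryRootCA.crt exported in step 3.

import requests

# Route a request through the JMeter recording proxy (localhost:8888)
# so it is captured in the Recording Controller, just like Postman traffic.
proxies = {
    "http": "http://localhost:8888",
    "https": "http://localhost:8888",
}

response = requests.get(
    "https://reqres.in/api/users?page=2",
    proxies=proxies,
    verify="ApacheJMeterTemporaryRootCA.crt",  # JMeter's CA cert from step 3
)
print(response.status_code, response.json())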

Wrapping Up

By recording Postman traffic into JMeter, you’re not only saving time but also setting up your foundation for powerful performance testing. Whether you’re preparing for stress testing or simulating concurrent user traffic, this integration is a valuable step forward.

Happy Testing!!!

]]>
https://blogs.perficient.com/2025/06/09/capturing-api-requests-from-postman-using-jmeter/feed/ 0 382378
Mastering Databricks Jobs API: Build and Orchestrate Complex Data Pipelines https://blogs.perficient.com/2025/06/06/mastering-databricks-jobs-api-build-and-orchestrate-complex-data-pipelines/ https://blogs.perficient.com/2025/06/06/mastering-databricks-jobs-api-build-and-orchestrate-complex-data-pipelines/#respond Fri, 06 Jun 2025 18:45:09 +0000 https://blogs.perficient.com/?p=382492

In this post, we’ll dive into orchestrating data pipelines with the Databricks Jobs API, empowering you to automate, monitor, and scale workflows seamlessly within the Databricks platform.

Why Orchestrate with Databricks Jobs API?

When data pipelines become complex involving multiple steps—like running notebooks, updating Delta tables, or training machine learning models—you need a reliable way to automate and manage them with ease. The Databricks Jobs API offers a flexible and efficient way to automate your jobs/workflows directly within Databricks or from external systems (for example AWS Lambda or Azure Functions) using the API endpoints.

Unlike external orchestrators such as Apache Airflow or Dagster, which require separate infrastructure and integration, the Jobs API is built natively into the Databricks platform. And the best part? It doesn’t cost anything extra. The Databricks Jobs API allows you to fully manage the lifecycle of your jobs/workflows using simple HTTP requests.

Below is the list of API endpoints for the CRUD operations on the workflows:

  • Create: Set up new jobs with defined tasks and configurations via the POST /api/2.1/jobs/create endpoint. Define single or multi-task jobs, specifying the tasks to be executed (e.g., notebooks, JARs, Python scripts), their dependencies, and the compute resources.
  • Retrieve: Access job details, check statuses, and review run logs using GET /api/2.1/jobs/get or GET /api/2.1/jobs/list.
  • Update: Change job settings such as parameters, task sequences, or cluster details through POST /api/2.1/jobs/update and /api/2.1/jobs/reset.
  • Delete: Remove jobs that are no longer required using POST /api/2.1/jobs/delete.

These full CRUD capabilities make the Jobs API a powerful tool to automate job management completely, from creation and monitoring to modification and deletion—eliminating the need for manual handling.

Key components of a Databricks Job

  • Tasks: Individual units of work within a job, such as running a notebook, JAR, Python script, or dbt task. Jobs can have multiple tasks with defined dependencies and conditional execution.
  • Dependencies: Relationships between tasks that determine the order of execution, allowing you to build complex workflows with sequential or parallel steps.
  • Clusters: The compute resources on which tasks run. These can be ephemeral job clusters created specifically for the job or existing all-purpose clusters shared across jobs.
  • Retries: Configuration to automatically retry failed tasks to improve job reliability.
  • Scheduling: Options to run jobs on cron-based schedules, triggered events, or on demand.
  • Notifications: Alerts for job start, success, or failure to keep teams informed.

Getting started with the Databricks Jobs API

Before leveraging the Databricks Jobs API for orchestration, ensure you have access to a Databricks workspace, a valid Personal Access Token (PAT), and sufficient privileges to manage compute resources and job configurations. This guide will walk through key CRUD operations and relevant Jobs API endpoints for robust workflow automation.
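The examples below use placeholders for the workspace URL and token. As a minimal setup sketch (the host and PAT values are placeholders you would substitute), a requests session with a bearer token can verify connectivity by listing existing jobs via GET /api/2.1/jobs/list:

import requests

DATABRICKS_HOST = "https://<databricks-instance>.cloud.databricks.com"  # placeholder
TOKEN = "<Your-PAT>"  # personal access token placeholder

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {TOKEN}"})

# Quick connectivity check: list existing jobs in the workspace.
resp = session.get(f"{DATABRICKS_HOST}/api/2.1/jobs/list", params={"limit": 25})
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])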

1. Creating a New Job/Workflow:

To create a job, you send a POST request to the /api/2.1/jobs/create endpoint with a JSON payload defining the job configuration.

{
  "name": "Ingest-Sales-Data",
  "tasks": [
    {
      "task_key": "Ingest-CSV-Data",
      "notebook_task": {
        "notebook_path": "/Users/name@email.com/ingest_csv_notebook",
        "source": "WORKSPACE"
      },
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    }
  ],
  "schedule": {
    "quartz_cron_expression": "0 30 9 * * ?",
    "timezone_id": "UTC",
    "pause_status": "UNPAUSED"
  },
  "email_notifications": {
    "on_failure": [
      "name@email.com"
    ]
  }
}

This JSON payload defines a Databricks job that executes a notebook-based task on a newly provisioned cluster, scheduled to run daily at 9:30 AM UTC. The components of the payload are explained below:

  • name: The name of your job.
  • tasks: An array of tasks to be executed. A job can have one or more tasks.
    • task_key: A unique identifier for the task within the job. Used for defining dependencies.
    • notebook_task: Specifies a notebook task. Other task types include spark_jar_task, spark_python_task, spark_submit_task, pipeline_task, etc.
      • notebook_path: The path to the notebook in your Databricks workspace.
      • source: The source of the notebook (e.g., WORKSPACE, GIT).
    • new_cluster: Defines the configuration for a new cluster that will be created for this job run. You can also use existing_cluster_id to use an existing all-purpose cluster (though new job clusters are recommended).
      • spark_version, node_type_id, num_workers: Standard cluster configuration options.
  • schedule: Defines the job schedule using a cron expression and timezone.
  • email_notifications: Configures email notifications for job events.

To create a Databricks workflow, the above JSON payload can be included in the body of a POST request sent to the Jobs API’s create endpoint—either using curl or programmatically via the Python requests library as shown below:

Using Curl:

curl -X POST \
  https://<databricks-instance>.cloud.databricks.com/api/2.1/jobs/create \
  -H "Authorization: Bearer <Your-PAT>" \
  -H "Content-Type: application/json" \
  -d '@workflow_config.json' #Place the above payload in workflow_config.json

Using Python requests library:

import requests
import json

# your_json_payload is the job configuration dict shown above; token is your PAT.
create_response = requests.post(
    "https://<databricks-instance>.cloud.databricks.com/api/2.1/jobs/create",
    data=json.dumps(your_json_payload),
    auth=("token", token)
)
if create_response.status_code == 200:
    job_id = json.loads(create_response.content.decode('utf-8'))["job_id"]
    print("Job created with id: {}".format(job_id))
else:
    print("Job creation failed with status code: {}".format(create_response.status_code))
    print(create_response.text)

The above example demonstrated a basic single-task workflow. However, the full potential of the Jobs API lies in orchestrating multi-task workflows with dependencies. The tasks array in the job payload allows you to configure multiple dependent tasks.
For example, the following workflow defines three tasks that execute sequentially: Ingest-CSV-Data → Transform-Sales-Data → Write-to-Delta.

{
  "name": "Ingest-Sales-Data-Pipeline",
  "tasks": [
    {
      "task_key": "Ingest-CSV-Data",
      "notebook_task": {
        "notebook_path": "/Users/name@email.com/ingest_csv_notebook",
        "source": "WORKSPACE"
      },
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    },
    {
      "task_key": "Transform-Sales-Data",
      "depends_on": [
        {
          "task_key": "Ingest-CSV-Data"
        }
      ],
      "notebook_task": {
        "notebook_path": "/Users/name@email.com/transform_sales_data",
        "source": "WORKSPACE"
      },
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    },
    {
      "task_key": "Write-to-Delta",
      "depends_on": [
        {
          "task_key": "Transform-Sales-Data"
        }
      ],
      "notebook_task": {
        "notebook_path": "/Users/name@email.com/write_to_delta_notebook",
        "source": "WORKSPACE"
      },
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    }
  ],
  "schedule": {
    "quartz_cron_expression": "0 30 9 * * ?",
    "timezone_id": "UTC",
    "pause_status": "UNPAUSED"
  },
  "email_notifications": {
    "on_failure": [
      "name@email.com"
    ]
  }
}

 



2. Updating Existing Workflows:

For modifying existing workflows, we have two endpoints: the update endpoint /api/2.1/jobs/update and the reset endpoint /api/2.1/jobs/reset. The update endpoint applies a partial update to your job, so you can tweak parts of the job, like adding a new task or changing a cluster spec, without redefining the entire workflow. The reset endpoint, in contrast, completely overwrites the job configuration. When resetting a job, you must therefore provide the entire desired configuration, including any settings you wish to keep unchanged, or they will be removed. Let us go over a few examples to understand the endpoints better.
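Both endpoints are plain POST calls that take the job_id and the settings to apply. A minimal sketch, assuming the JSON payloads shown in the examples below are loaded into Python dictionaries:

import requests

def update_job(host: str, token: str, payload: dict) -> None:
    """Partial update: only the fields present in new_settings are changed."""
    resp = requests.post(f"{host}/api/2.1/jobs/update", json=payload,
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()

def reset_job(host: str, token: str, payload: dict) -> None:
    """Full overwrite: new_settings replaces the job configuration entirely."""
    resp = requests.post(f"{host}/api/2.1/jobs/reset", json=payload,
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()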

2.1. Update Workflow Name & Add New Task:

Let us modify the above workflow by renaming it from Ingest-Sales-Data-Pipeline to Sales-Workflow-End-to-End, adding an input parameter source_location to the Ingest-CSV-Data task, and introducing a new task Write-to-Postgres, which runs after the successful completion of Transform-Sales-Data.

{
  "job_id": 947766456503851,
  "new_settings": {
    "name": "Sales-Workflow-End-to-End",
    "tasks": [
      {
        "task_key": "Ingest-CSV-Data",
        "notebook_task": {
          "notebook_path": "/Users/name@email.com/ingest_csv_notebook",
          "base_parameters": {
            "source_location": "s3://<bucket>/<key>"
          },
          "source": "WORKSPACE"
        },
        "new_cluster": {
          "spark_version": "15.4.x-scala2.12",
          "node_type_id": "i3.xlarge",
          "num_workers": 2
        }
      },
      {
        "task_key": "Transform-Sales-Data",
        "depends_on": [
          {
            "task_key": "Ingest-CSV-Data"
          }
        ],
        "notebook_task": {
          "notebook_path": "/Users/name@email.com/transform_sales_data",
          "source": "WORKSPACE"
        },
        "new_cluster": {
          "spark_version": "15.4.x-scala2.12",
          "node_type_id": "i3.xlarge",
          "num_workers": 2
        }
      },
      {
        "task_key": "Write-to-Delta",
        "depends_on": [
          {
            "task_key": "Transform-Sales-Data"
          }
        ],
        "notebook_task": {
          "notebook_path": "/Users/name@email.com/write_to_delta_notebook",
          "source": "WORKSPACE"
        },
        "new_cluster": {
          "spark_version": "15.4.x-scala2.12",
          "node_type_id": "i3.xlarge",
          "num_workers": 2
        }
      },
      {
        "task_key": "Write-to-Postgres",
        "depends_on": [
          {
            "task_key": "Transform-Sales-Data"
          }
        ],
        "notebook_task": {
          "notebook_path":"/Users/name@email.com/write_to_postgres_notebook",
          "source": "WORKSPACE"
        },
        "new_cluster": {
          "spark_version": "15.4.x-scala2.12",
          "node_type_id": "i3.xlarge",
          "num_workers": 2
        }
      }
    ],
    "schedule": {
      "quartz_cron_expression": "0 30 9 * * ?",
      "timezone_id": "UTC",
      "pause_status": "UNPAUSED"
    },
    "email_notifications": {
      "on_failure": [
        "name@email.com"
      ]
    }
  }
}


2.2. Update Cluster Configuration:

Cluster startup can take several minutes, especially for larger, more complex clusters. Sharing the same cluster allows subsequent tasks to start immediately after previous ones complete, speeding up the entire workflow. Parallel tasks can also run concurrently, making efficient use of the shared cluster resources. Let’s update the above workflow to share the same cluster across all the tasks.

{
  "job_id": 947766456503851,
  "new_settings": {
    "name": "Sales-Workflow-End-to-End",
    "job_clusters": [
      {
        "job_cluster_key": "shared-cluster",
        "new_cluster": {
          "spark_version": "15.4.x-scala2.12",
          "node_type_id": "i3.xlarge",
          "num_workers": 2
        }
      }
    ],
    "tasks": [
      {
        "task_key": "Ingest-CSV-Data",
        "notebook_task": {
          "notebook_path": "/Users/name@email.com/ingest_csv_notebook",
          "base_parameters": {
            "source_location": "s3://<bucket>/<key>"
          },
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      },
      {
        "task_key": "Transform-Sales-Data",
        "depends_on": [
          {
            "task_key": "Ingest-CSV-Data"
          }
        ],
        "notebook_task": {
          "notebook_path": "/Users/name@email.com/transform_sales_data",
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      },
      {
        "task_key": "Write-to-Delta",
        "depends_on": [
          {
            "task_key": "Transform-Sales-Data"
          }
        ],
        "notebook_task": {
          "notebook_path": "/Users/name@email.com/write_to_delta_notebook",
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      },
      {
        "task_key": "Write-to-Postgres",
        "depends_on": [
          {
            "task_key": "Transform-Sales-Data"
          }
        ],
        "notebook_task": {
          "notebook_path":"/Users/name@email.com/write_to_postgres_notebook",
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      }
    ],
    "schedule": {
      "quartz_cron_expression": "0 30 9 * * ?",
      "timezone_id": "UTC",
      "pause_status": "UNPAUSED"
    },
    "email_notifications": {
      "on_failure": [
        "name@email.com"
      ]
    }
  }
}


2.3. Update Task Dependencies:

Let’s add a new task named Enrich-Sales-Data and update the dependencies as shown below:
Ingest-CSV-Data → Enrich-Sales-Data → Transform-Sales-Data → [Write-to-Delta, Write-to-Postgres]. Since we are updating the dependencies of existing tasks, we need to use the reset endpoint /api/2.1/jobs/reset.

{
  "job_id": 947766456503851,
  "new_settings": {
    "name": "Sales-Workflow-End-to-End",
    "job_clusters": [
      {
        "job_cluster_key": "shared-cluster",
        "new_cluster": {
          "spark_version": "15.4.x-scala2.12",
          "node_type_id": "i3.xlarge",
          "num_workers": 2
        }
      }
    ],
    "tasks": [
      {
        "task_key": "Ingest-CSV-Data",
        "notebook_task": {
          "notebook_path":"/Users/name@email.com/ingest_csv_notebook",
          "base_parameters": {
            "source_location": "s3://<bucket>/<key>"
          },
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      },
      {
        "task_key": "Enrich-Sales-Data",
        "depends_on": [
          {
            "task_key": "Ingest-CSV-Data"
          }
        ],
        "notebook_task": {
          "notebook_path":"/Users/name@email.com/enrich_sales_data",
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      },
      {
        "task_key": "Transform-Sales-Data",
        "depends_on": [
          {
            "task_key": "Enrich-Sales-Data"
          }
        ],
        "notebook_task": {
          "notebook_path":"/Users/name@email.com/transform_sales_data",
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      },
      {
        "task_key": "Write-to-Delta",
        "depends_on": [
          {
            "task_key": "Transform-Sales-Data"
          }
        ],
        "notebook_task": {
          "notebook_path":"/Users/name@email.com/write_to_delta_notebook",
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      },
      {
        "task_key": "Write-to-Postgres",
        "depends_on": [
          {
            "task_key": "Transform-Sales-Data"
          }
        ],
        "notebook_task": {
          "notebook_path":"/Users/name@email.com/write_to_postgres_notebook",
          "source": "WORKSPACE"
        },
        "job_cluster_key": "shared-cluster"
      }
    ],
    "schedule": {
      "quartz_cron_expression": "0 30 9 * * ?",
      "timezone_id": "UTC",
      "pause_status": "UNPAUSED"
    },
    "email_notifications": {
      "on_failure": [
        "name@email.com"
      ]
    }
  }
}


The update endpoint is useful for minor modifications such as renaming the workflow, changing a notebook path, adjusting task input parameters, updating the job schedule, or tweaking cluster configuration (for example, node count). The reset endpoint should be used for deleting existing tasks, redefining task dependencies, renaming tasks, and similar structural changes.
The update endpoint does not delete tasks or settings you omit, i.e., tasks not mentioned in the request remain unchanged, while the reset endpoint removes any fields or tasks not included in the request.

3. Trigger an Existing Job/Workflow:

Use the /api/2.1/jobs/run-now endpoint to trigger a job run on demand. Pass input parameters to your notebook tasks using the notebook_params field.

curl -X POST https://<databricks-instance>/api/2.1/jobs/run-now \
  -H "Authorization: Bearer <DATABRICKS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": 947766456503851,
    "notebook_params": {
      "source_location": "s3://<bucket>/<key>"
    }
  }'

4. Get Job Status:

To check the status of a specific job run, use the /api/2.1/jobs/runs/get endpoint with the run_id. The response includes details about the run, including its life cycle state (e.g., PENDING, RUNNING, TERMINATED) and, once the run finishes, a result state (e.g., SUCCESS, FAILED).

curl -X GET \
  https://<databricks-instance>.cloud.databricks.com/api/2.1/jobs/runs/get?run_id=<your-run-id> \
  -H "Authorization: Bearer <Your-PAT>"

5. Delete Job:

To remove an existing Databricks workflow, simply call the POST /api/2.1/jobs/delete endpoint of the Jobs API. This allows you to programmatically clean up outdated or unnecessary jobs as part of your pipeline management strategy.

curl -X POST https://<databricks-instance>/api/2.1/jobs/delete \
  -H "Authorization: Bearer <DATABRICKS_PERSONAL_ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{ "job_id": 947766456503851 }'

 

Conclusion:

The Databricks Jobs API empowers data engineers to orchestrate complex workflows natively, without relying on external scheduling tools. Whether you’re automating notebook runs, chaining multi-step pipelines, or integrating with CI/CD systems, the API offers fine-grained control and flexibility. By mastering this API, you’re not just building workflows—you’re building scalable, production-grade data pipelines that are easier to manage, monitor, and evolve.

]]>
https://blogs.perficient.com/2025/06/06/mastering-databricks-jobs-api-build-and-orchestrate-complex-data-pipelines/feed/ 0 382492
Redefining CCaaS Solutions Success in the Digital Era https://blogs.perficient.com/2025/06/03/redefining-ccaas-success-in-the-digital-era/ https://blogs.perficient.com/2025/06/03/redefining-ccaas-success-in-the-digital-era/#comments Tue, 03 Jun 2025 20:26:24 +0000 https://blogs.perficient.com/?p=382347

With the advancement of technology, machine learning, and AI capabilities in the customer care space, customer expectations are evolving faster than ever before. Customers expect smoother, context-aware, personalized, and generally faster and more effective experiences across channels when contacting a support center.

This calls for a need to revisit and redefine the success metrics for a Contact Center as a Service (CCaaS) strategy. 

 

Let’s break this down into two categories. The first category covers key metrics that are still essential to measure, although the standards for them have been raised and the way they are measured has evolved. The second category introduces new metrics that are emerging because of advanced CCaaS capabilities in the modern contact center landscape.

  

Key Traditional Success Metrics Reimagined  

  

Customer Satisfaction (CSAT) remains a cornerstone success metric. Every improvement a customer service center looks to make, from improving operational efficiency to enhancing the agent and customer experience, will directly or indirectly impact the customer and is aimed at elevating the customer experience. With automated, personalized journeys now an important part of modern customer service, it is important to monitor real-time analytics on those journeys in addition to live agent interactions. This helps you better understand the customer experience and find opportunities to fine-tune friction points and improve satisfaction. Customer service is not only about resolving customer issues, but also about providing an effortless experience.

  

First Contact Resolution is still a key success metric in the CCaaS space, but modern tools can transform how far a customer service center can go to improve it, so the standard for this metric has been raised. Passing context effectively across channels, real-time monitoring, predictive analytics and insights, and proactive outreach can all increase the likelihood of addressing customer needs on the first contact, and sometimes without the need for a live agent interaction at all.

  

The Customer Retention Rate metric has been revamped with the advancement of technology in customer service. Advanced predictive analytics can track the customer experience throughout the journey and shed light on underlying customer behavior patterns, enabling proactive engagement strategies personalized to every customer. Real-time sentiment analysis can provide instant feedback to customer service representatives and their supervisors, giving them a chance to course-correct immediately, shift the sentiment toward a positive experience, and retain customers.

  

Emerging Success Metrics 

  

Agent Experience and Satisfaction has a direct impact on the operation of a contact center and hence on the customer experience. Traditionally, this metric was not tracked broadly as a measure of a successful contact center strategy. Today, however, we know that agent experience and satisfaction is key to transforming contact centers from cost centers into revenue-generating units. Contact centers can leverage modern tools across areas ranging from agent performance monitoring, training, and identifying knowledge gaps to automated workflows and real-time agent assistance to elevate the agent experience.

These strategies and tools help agents become more effective and productive while providing service. Satisfied agents are more motivated to help customers effectively, which can improve metrics like First Contact Resolution rate and Average Handle Time. Happy and productive agents are also more likely to engage positively with customers to discuss potential cross-sell and upsell opportunities. Moreover, agent turnover and its associated costs drop, because the business is no longer constantly onboarding and training new agents or running short-staffed.

  

Sentiment Analysis and Real-time Interaction Quality provide immediate insights to contact center representatives about the customer’s emotions, the tone of the conversation, and the effectiveness of their interactions. This helps representatives refine their interaction strategy on the spot to maintain a positive and effective engagement with the customer. It transforms contact centers into emotionally intelligent, customer-focused support centers, which makes a huge difference at a time when the quality of the experience matters as much as the outcome.

  

Predictive Analysis Accuracy represents an entirely new set of metrics for a modern contact center that leverages predictive analytics in its operation. It is crucial to measure this metric and evaluate the accuracy of forecasts against customer behavior and demand as well as agent workflow needs. Inaccurate predictions are not only ineffective but can also be harmful to contact center operations, leading to poor decision-making, confusion, and disappointing customer experiences. Accurate anticipation of customer needs enables proactive outreach, positive and effective interactions, fewer friction points, and reduced service contacts, while facilitating effective automated upsell and cross-sell initiatives.

  

Technology Utilization Rate is an important metric to track in a modern and evolving customer care solution. While the latest technological advancements enable a great deal of intelligent automation and enhancement within a CCaaS solution, a contact center strategy is needed to identify the most impactful capabilities for each customer service operation. The strategy should incorporate tracking the success of technology adoption through system usage data and adoption metrics. This ensures that technology is being leveraged effectively and is providing value to the business. Technology utilization tracking can also reveal training and adoption gaps, ensuring that modern tools are not just implemented for the sake of innovation, but are actively contributing to improved efficiency within the contact center.

  

Conclusion

The development of advanced native capabilities and the integration of modern tools within CCaaS platforms are revolutionizing the customer care industry and reshaping customer expectations. Staying ahead of this shift is crucial. While utilizing these advancements to achieve operational efficiencies, it is equally important to redefine the success metrics that provide businesses with insights and feedback on a modern CCaaS strategic roadmap. Adopting a fresh approach to capturing traditional metrics like Customer Satisfaction Scores and First Contact Resolution, combined with measuring new metrics such as Real-time Interaction Quality and Predictive Analysis Accuracy, will offer a comprehensive view of a contact center’s maturity and its progress toward a successful and effective modern CCaaS solution.

We can measure these metrics by utilizing built-in monitoring and analytical tools of modern CCaaS platforms along with AI-powered services integrations for features like Sentiment and Real-time Quality Analysis. We can gather regular feedback and data from agents and automated tracking tools to monitor system usability and efficiency. All this data can be streamed and displayed on a unified custom analytics dashboard, providing a comprehensive view of contact center performance and effectiveness. 

]]>
https://blogs.perficient.com/2025/06/03/redefining-ccaas-success-in-the-digital-era/feed/ 1 382347