Perficient Blogs – Expert Digital Insights
https://blogs.perficient.com/

Introducing Perficient’s Executive Ally Program
https://blogs.perficient.com/2025/08/18/introducing-perficients-executive-ally-program/
Mon, 18 Aug 2025 19:24:23 +0000

Building Bridges Through Executive Allyship to Support LGBTQ+ Inclusion

Perficient’s PRISM Employee Resource Group (ERG) launched an initiative to engage senior leaders across Perficient in meaningful, visible LGBTQ+ allyship. The Executive Ally Program aims to connect leaders across Perficient with the PRISM community, fostering genuine relationships and creating spaces where every colleague feels seen, heard, and valued. With more than 7,000 colleagues worldwide, this program is a strategic effort to ensure allyship starts at the highest levels of leadership and cascades throughout the company.

The “Why” Behind the Program 

Simply put, inclusive leadership drives better business outcomes. Research shows that workplaces that embrace diversity and inclusion attract broader talent pools, improve employee retention, and foster environments where everyone can thrive.  

At Perficient, we believe that leaders who actively champion inclusion create stronger, more innovative teams. Through the Executive Ally Program, Perficient colleagues are leading this effort by engaging in open, authentic conversations with their leaders, helping to educate and inspire them to become executive allies. 

How it Works 

According to research by Harvard Business Review, members of the LGBTQ+ community defined being a good ally as having three main components: being accepting, taking action, and having humility. There is a phased strategy behind the Executive Ally Program to touch on each of these areas.  

PRISM members can help identify and connect with leaders to be part of building a more inclusive and informed culture from the top down. Here is an overview of how the program works: 

  1. Grassroots Engagement: Colleagues are encouraged to have honest and productive dialogues with their business unit leaders about the importance of allyship. 
  2. Equipping Leaders: Once leaders have signed up for the program, PRISM provides resources and training, including guides and research developed by reputable organizations. There is no prior expertise required, just a willingness to learn and grow. 
  3. Pledge and Personal Commitment: Leaders who complete the curriculum will pledge to foster inclusive teams and personalize their allyship efforts. This could be as simple as adding pronouns to email signatures or displaying symbols of support. 

PRISM recognizes that allyship is a journey. This program lays a solid foundation for leaders looking to increase the depth, breadth, and visibility of their allyship efforts. The program also addresses cultural nuances, aiming to scale inclusivity efforts across Perficient globally. 

The inaugural cohort of leaders in the program is currently underway. As these leaders grow in their understanding and commitment, they will serve as inspirations for others across the organization. 

Enhancing Workplace Culture Through Allyship 

By encouraging leaders to take a formal pledge to uphold and champion inclusivity, the program reinforces Perficient’s commitment to fostering an environment where every team member can thrive personally and professionally. 

The Executive Ally Program is not a one-time event but a continuous journey toward a more inclusive and supportive workplace. It reflects Perficient’s dedication to creating a culture where everyone can bring their authentic selves to work. 


READY TO GROW YOUR CAREER? 

It’s no secret our success is because of our people. No matter the technology or time zone, our colleagues are committed to delivering innovative, end-to-end digital solutions for the world’s biggest brands, and we bring a collaborative spirit to every interaction. We’re always seeking the best and brightest to work with us. Join our team and experience a culture that challenges, champions, and celebrates our people. 

Visit our Careers page to see career opportunities and more! 

Go inside Life at Perficient and connect with us on LinkedIn, YouTube, Twitter, Facebook, TikTok, and Instagram. 

Live Agent Transfer in Copilot Studio Using D365 Omnichannel – Step-by-Step Implementation
https://blogs.perficient.com/2025/08/18/live-agent-transfer-in-copilot-studio-using-d365-omnichannel-step-by-step-implementation/
Mon, 18 Aug 2025 09:29:53 +0000

Welcome to Part 2 of this blog series! In Part 1, we discussed the high-level architecture and use case for enabling live agent transfer from a chatbot.

In this post, I’ll walk you through the actual steps to build this feature using:

  • Copilot Studio
  • D365 Omnichannel for Customer Service
  • Customer Service Workspace
  • Customer Voice

Prerequisites

  • Dynamics 365 Customer Service license + Omnichannel Add-on
  • Admin access to D365 and Power Platform Admin Center
  • Agents added to your environment with proper roles

Step-by-Step Implementation

1: Set Up Omnichannel Workstream

  • Go to Customer Service Admin Center
  • Create a Workstream for live chat
  • Link it to a queue and assign agents

[Screenshot: Customer Service Workspace]

2: Create Chat Channel

  • In the same admin center, create a Chat Channel
  • Configure greeting, authentication (optional), timeouts
  • Copy the embed code to add to your portal or test site

[Screenshot: Customer Service Chat Channel for Copilot Studio]

3: Create a Bot in Copilot Studio

  • Create a bot and add core topics
  • Create a new topic: “Escalate to Agent”
  • Add trigger phrases like:
    • “Talk to someone.”
    • "Escalate to human."
    • “Need real help”
  • Use the Transfer to Agent node
    • Select the Chat Channel
    • Add a fallback message in case agents are unavailable

[Screenshot: Copilot Studio]

4: Test the Flow

  • Open your bot via the portal or the embedded site
  • Trigger the escalation topic
  • Bot should say: “Transferring you to a live agent…”
  • An available agent receives the chat in the Customer Service Workspace
  • The agent sees the whole chat history and continues the conversation

[Screenshot: Copilot Studio & Customer Service]

5: [Optional] Post-Conversation Feedback Using Customer Voice

To collect feedback after the chat ends, enable the native post-conversation survey feature in Omnichannel.

Steps:

  1. Create a feedback survey in Microsoft Customer Voice
  2. Go to Customer Service Admin Center > Workstream > Behavior tab
  3. Enable post-conversation survey
  4. Select “Customer Voice.”

[Screenshot: Customer Voice]

That’s it – once the chat ends, users will be prompted with your feedback form automatically.

Real Scenarios Tested

  • User types “Speak to a human.”
  • Bot transfers to live agent
  • Agent sees the customer transcript and profile
  • No agent? Bot shows “All agents are currently busy.”

Final Outcome

This setup enables a production-ready escalation workflow with:

  • Low-code development
  • Reusable components
  • Smooth agent handoff
  • Agent empowerment with full context

Conclusion

This approach balances bot automation with human empathy by allowing live agent transfers when needed. Copilot Studio and D365 Omnichannel work well together for modern, scalable customer service solutions.

AI: Security Threat to Personal Data?
https://blogs.perficient.com/2025/08/18/ai-security-threat-to-personal-data/
Mon, 18 Aug 2025 07:33:26 +0000

In recent years, AI chatbots like ChatGPT have gone from fun tools for answering questions to serious helpers in workplaces, education, and even personal decision-making. With ChatGPT-5 now being the latest and most advanced version, it’s no surprise that people are asking a critical question:

“Is my personal data safe when I use ChatGPT-5?”

First, What Is ChatGPT-5?

ChatGPT-5 is an AI language model created by OpenAI. You can think of it like a super-smart digital assistant that can:

  • Answer questions across a wide range of topics
  • Draft emails, essays, and creative content
  • Write and debug code
  • Assist with research and brainstorming
  • Support productivity and learning

It learns from patterns in data, but here’s an important point – it doesn’t “remember” your conversations unless the developer has built a special memory feature and you’ve agreed to it.

How Your Data Is Used

When you chat with ChatGPT-5, your messages are processed to generate a response. Depending on the app or platform you use, your conversations may be:

  • Temporarily stored to improve the AI’s performance
  • Reviewed by humans (in rare cases) to train and fine-tune the system
  • Deleted or anonymized after a specific period, depending on the service’s privacy policy

This is why reading the privacy policy is not just boring legal stuff – it’s how you find out precisely what happens to your data.

Real Security Risks to Be Aware Of

The concerns about ChatGPT-5 (and similar AI tools) are less about it being “evil” and more about how your data could be exposed if not appropriately handled.

Here are the main risks:

1. Accidental Sharing of Sensitive Information

Many users unknowingly type personal details – such as their full name, home address, phone number, passwords, or banking information – into AI chat windows. While the chatbot itself may not misuse this data, it is still transmitted over the internet and may be temporarily stored by the platform. If the platform suffers a data breach or if the information is accessed by unauthorized personnel, your sensitive data could be exposed or exploited.

Best Practice: Treat AI chats like public forums – never share confidential or personally identifiable information.

2. Data Retention by Third-Party Platforms

AI chatbots are often integrated into third-party platforms, such as browser extensions, productivity tools, or mobile apps. These integrations may collect and store your chat data on their own servers, sometimes without clearly informing you. Unlike official platforms with strict privacy policies, third-party services may lack robust security measures or transparency.

Risk Example: A browser extension that logs your AI chats could be hacked, exposing all stored conversations.

Best Practice: Use only trusted, official apps and review their privacy policies before granting access.

3. Misuse of Login Credentials

In rare but serious cases, malicious AI integrations or compromised platforms could capture login credentials you enter during a conversation. If you share usernames, passwords, or OTPs (one-time passwords), these could be used to access your accounts and perform unauthorized actions – such as placing orders, transferring money, or changing account settings.

Real-World Consequence: You might wake up to find that someone used your credentials to order expensive items or access private services.

Best Practice: Never enter login details into any AI chat, and always use two-factor authentication (2FA) for added protection.

4. Phishing & Targeted Attacks

If chat logs containing personal information are accessed by cybercriminals, they can use that data to craft highly convincing phishing emails or social engineering attacks. For example, knowing your name, location, or recent purchases allows attackers to impersonate trusted services and trick you into clicking malicious links or revealing more sensitive data.

Best Practice: Be cautious of unsolicited messages and verify the sender before responding or clicking links.

5. Overtrusting AI Responses

AI chatbots are trained on vast datasets, but they can still generate inaccurate, outdated, or misleading information. Relying on AI responses without verifying facts can lead to poor decisions, especially in areas like health, finance, or legal advice.

Risk Example: Acting on incorrect medical advice or sharing false information publicly could have serious consequences.

Best Practice: Always cross-check AI-generated content with reputable sources before taking action or sharing it.

How to Protect Yourself

Here are simple steps you can take:

  • Never share sensitive login credentials or card details inside a chat.
  • Stick to official apps and platforms to reduce the risk of malicious AI clones.
  • Use 2-factor authentication (2FA) for all accounts, so even stolen passwords can’t be used easily.
  • Check permissions before connecting ChatGPT-5 to any service – don’t allow unnecessary access.
  • Regularly clear chat history if your platform stores conversations.

Final Thoughts

ChatGPT-5 is a tool, and like any tool, it can be used for good or misused. The AI itself isn’t plotting to steal your logins or credentials, but if you use it carelessly or through untrusted apps, your data could be at risk.

Golden rule: Enjoy the benefits of AI, but treat it like a stranger online – don’t overshare, and keep control of your personal data.

Optimizely Mission Control – Part II
https://blogs.perficient.com/2025/08/18/optimizely-mission-control-part-ii/
Mon, 18 Aug 2025 07:02:45 +0000

In this part, we focus primarily on generating read-only database credentials and using them to connect to the database.

Generate Database Credentials

The Mission Control tool generates read-only database credentials for a targeted instance, which remain active for 30 minutes. These credentials allow users to run select or read-only queries, making it easier to explore data on a cloud instance. This feature is especially helpful for verifying data-related issues without taking a database backup.

Steps to generate database credentials

  1. Log in to Mission Control.

  2. Navigate to the Customers tab.

  3. Select the appropriate Customer.

  4. Choose the Environment for which you need the credentials.

  5. Click the Action dropdown in the left pane.

  6. Select Generate Database Credentials.

  7. A pop-up will appear with a scheduler option.

  8. Click Continue to initiate the process.

  9. After a short time, the temporary read-only credentials will be displayed.

 

Once the temporary read-only credentials are generated, the next step is to connect to the database using those credentials.

To do this:

  1. Download and install Azure Data Studio.

  2. Open Azure Data Studio after installation.

  3. Click “New Connection” or the “Connect” button.

  4. Use the temporary credentials provided by Mission Control to connect:

    • Server Name: Use the server name from the credentials.

    • Authentication Type: SQL Login

    • Username and Password: As provided in the credentials.

  5. Once connected, you can execute SELECT queries to explore or verify data on the cloud instance.
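
If you prefer to script these checks rather than use Azure Data Studio, the short Python sketch below runs the same kind of read-only query. It assumes the pyodbc package and Microsoft's ODBC Driver 18 for SQL Server are installed locally; the server, database, username, and password values are placeholders for the temporary credentials generated by Mission Control.

import pyodbc

# Placeholder values: substitute the temporary credentials from Mission Control.
SERVER = "your-instance-server-name"
DATABASE = "your-database-name"
USERNAME = "temporary-readonly-user"
PASSWORD = "temporary-password"

# Standard SQL Server connection string using SQL Login authentication.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={SERVER};DATABASE={DATABASE};"
    f"UID={USERNAME};PWD={PASSWORD};"
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str, timeout=30) as conn:
    cursor = conn.cursor()
    # Read-only exploration only; remember the generated credentials expire after 30 minutes.
    cursor.execute("SELECT TOP 10 * FROM INFORMATION_SCHEMA.TABLES")
    for row in cursor.fetchall():
        print(row)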

 

For more details, refer to the official Optimizely documentation on Generating Database Credentials.

For Part I, visit: Optimizely Mission Control – Part I

AI’s Hidden Thirst: The Water Behind Tech
https://blogs.perficient.com/2025/08/16/ais-hidden-thirst-the-water-behind-tech/
Sat, 16 Aug 2025 12:21:58 +0000

Have you ever wondered what happens if you ask AI to create an image, write a poem, or draft an email?
Most of us picture “the cloud” working its magic in a distant location. The twist is that the cloud is physical, real, and thirsty. Data centers require water, sometimes millions of gallons per day, to stay cool while AI is operating.

By 2025, it is impossible to overlook AI’s growing water footprint. But don’t worry, AI isn’t to blame here. It’s about comprehending the problem, the ingenious ways technology is attempting to solve it, and what we (as humans) can do to improve the situation.

Why does AI need water?

Doesn’t your laptop heat up quickly when you run it on overdrive for hours? Now multiply that by millions of machines that are constantly in operation and stacked in enormous warehouses. That is a data center.

These facilities are cooled by air conditioning units, liquid cooling, or evaporative cooling to avoid overheating. And gallons of fresh water are lost every day due to evaporative cooling, in which water actually evaporates into the atmosphere to remove heat.

Therefore, there is an invisible cost associated with every chatbot interaction, artificial intelligence-powered search, and generated image: water.

How big is the problem in 2025?

Pretty big—and expanding. According to a 2025 industry report, data centers related to artificial intelligence may use more than 6 billion cubic meters of water a year by the end of this decade. That is roughly equivalent to the annual consumption of a mid-sized nation.


In short, AI’s water consumption is no longer a “future problem.” The effects are already being felt by the communities that surround big data centers. Concerns regarding water stress during dry months have been voiced by residents in places like Arizona and Ireland.

But wait—can AI help solve this?

Surprisingly, yes. The same intelligence that consumes all that water is also helping to save it.

Optimized cooling: Businesses are using AI to operate data centers more efficiently by anticipating precisely when and how much cooling is required, which can reduce water waste by as much as 20–30%.

Technology for liquid cooling: Some new servers are moving to liquid cooling systems, which consume a lot less water than conventional techniques.

Green data centers: Major corporations, such as Google and Microsoft, are testing facilities that use recycled water rather than fresh water for cooling and are powered by renewable energy.

Therefore, the story is not “AI is the problem.” It is closer to “AI is thirsty, but also learning how to drink smarter.”

What about us—can regular people help?

Absolutely. Our decisions have an impact even though the majority of us do not manage data centers. Here’s how:

More intelligent use of AI: We can be aware of how frequently we run complex AI tasks, just as we try to conserve energy. (Are 50 AI-generated versions of the same image really necessary?)

Encourage green tech: Selecting platforms and services that are dedicated to sustainable data practices encourages the sector to improve.

Community action: Cities can enact laws that promote the use of recycled water in data centers and openness regarding the effects of water use in their communities.

Consider it similar to electricity, whose hidden costs we initially hardly noticed. Efficiency and awareness, however, had a significant impact over time. Water and AI can have the same effect.

What’s the bigger picture?

AI is only one piece of the global water puzzle. Water stress is still primarily caused by industry, agriculture, and climate change. However, the emergence of AI makes us reevaluate how we want to engage with the planet’s most valuable resource in the digital future.

If this is done correctly, artificial intelligence (AI) has the potential to be a partner in sustainability, not only in terms of how it uses water but also in terms of how it aids in global water monitoring, forecasting, and conservation.

The Takeaway

The cloud isn’t magic. It’s water, energy, wires, and metal. And AI’s thirst increases with its growth. However, this is an opportunity for creativity rather than panic. Communities, engineers, and even artificial intelligence (AI) are already rethinking how to keep machines cool without depleting the planet.

Therefore, keep in mind that every pixel and word contains a hidden drop of water the next time you converse with AI or create an interesting image. And the more we know about that cost, the better the decisions we can make to keep the digital future sustainable.

How Global Collaboration Drives Digital Transformation at Perficient
https://blogs.perficient.com/2025/08/15/how-global-collaboration-drives-digital-transformation-at-perficient/
Fri, 15 Aug 2025 19:08:32 +0000

As Perficient continues to lead the world’s most admired brands through their unique AI-first digital transformation journeys, our focus on global collaboration remains at the heart of our success. For us, transformation isn’t just about adopting new technology. It’s about bringing together the right people with the right expertise, no matter where they are in the world, to solve real business challenges and create lasting impact. 

With locations across the U.S., Latin America, India, China, and Europe, our teams span continents but operate as one global team. By working across borders, time zones, and areas of expertise, we’re able to offer clients diverse perspectives, deeper industry knowledge, and faster paths to AI-driven innovation. This global mindset is woven into our culture and embedded in the way we approach every engagement. It’s how we ensure that our solutions are not only technically sound but also scalable and aligned with our clients’ long-term goals. 

“Global operations thrive on diversity—not just in skills but in perspectives. An inclusive, globally integrated team brings fresh ideas and insights that can be pivotal. We actively create environments where the diverse perspectives of each team member are valued, resulting in solutions that are culturally relevant and forward-thinking,” said Kevin Sheen, senior vice president, in a recent presentation to our Latin America (LatAm) team. 

READ MORE: Empowering Transformation Through Global Expertise 

At Perficient, global collaboration isn’t a strategy we aspire to—it’s how we work every day. Fueled by intention, transparency, and a shared commitment to excellence, our collaborative culture empowers us to drive AI innovation and boldly advance business in a constantly evolving world. 

In this first blog post of our series focused on Perficient’s collaborative culture, we are showcasing how our colleagues around the world are combining their expertise to fuel global growth and build the strong connections that drive our success. 

Our LatAm Team’s Global Impact 

One of the strongest examples of our global collaboration in action is the growth and evolution of our presence in Latin America. Perficient LatAm began as a small collection of regional offices but, over the years, has grown into a unified, strategically aligned operation that plays an integral role in our global delivery model. With offices across Colombia, Mexico, Uruguay, Argentina, and Chile, the LatAm team has built a shared foundation of delivery and recruitment processes. This strategic alignment didn’t happen overnight. It was driven by intentional leadership, cross-regional transparency, and a commitment to working as a unified team. 

In the past two years, our leaders across LatAm have served as key points of contact to more closely align their operations with our U.S. business units. “We began investing in more leadership to act as single points of contact,” said David Arango Gaviria, director, Colombia Sales. “Assigning leaders to our practices helped put us on the map. We are now more exposed to customer and sales processes through these granular interactions.” 

One example of this collaboration in action is our LatAm team’s work to support a global leader in the manufacturing industry. While the work originally focused on custom development, our LatAm teams quickly evolved to establish a dedicated commerce practice from the ground up, recruiting, training, and cross-skilling talent to meet the dynamic needs of the client. 

“The willingness and openness that everyone showed to building bootcamps, facilitating education, and cross-training was a true example of collaboration,” said David. Our LatAm team has also played a pivotal role in expanding global delivery capacity for clients in industries such as food production and healthcare, supporting complex transitions, integrating seamlessly with our U.S. and India teams, and fostering new ways of working. 

Our LatAm team’s impact goes beyond client work. They’ve been key to internal innovation, especially in building accelerators that improve how we deliver. A great example is the Quality Assurance (QA) AI Assistant, which started as a local idea and quickly became a global effort. As the use of AI in Quality Assurance (QA) became integral, our LatAm team proposed a tool to automate tasks like generating user stories and test cases. Their concept brought in collaborators from the U.S. and India, turning it into a cross-regional project. In just two months, the team launched a working, enterprise-ready solution. This is a clear example of how global collaboration speeds up delivery and creates real value for clients. 

LEARN MORE: Perficient’s Quality Assurance and Test Automation Services 

Perficient India: Building Global Connections Through Local Innovation 

Perficient India continues to play a key role in strengthening global collaboration by creating spaces for knowledge sharing, innovation, and alignment with global teams. Events across our Bangalore, Nagpur, Hyderabad, Pune, and Chennai offices are helping connect colleagues across practices and geographies. 

At Perfathon 2025 in Bangalore, six teams worked through real-world challenges in a two-day hackathon designed to encourage cross-functional thinking. “Perfathon was more than just a hackathon—it was a vibrant space for collaboration, creativity, and learning,” said Gomathy Raveena Nair, lead technical consultant, Bangalore.  

READ MORE: Perfathon 2025 – Hackathon at Perficient 

Recent visits from our U.S.-based Financial Services leaders have served as powerful practice-specific moments that continue to shape and strengthen global collaboration. Mangayarkarasi Rengasamy, senior business consultant, Chennai, shared how valuable these interactions have been for creating greater alignment: “We received encouraging feedback on our ongoing engagements, further solidifying our momentum. We engaged in thought-provoking brainstorming and exceptional teamwork. Looking ahead, we have an exciting roadmap of action items.” 

From technical meetups to leadership engagement, the India team is helping drive a more connected global culture.  

A Culture Built on Connection 

At Perficient, global collaboration isn’t just how we deliver. It’s how we grow, solve problems, and innovate together. Across every region and practice, we’re building a culture that empowers our people to actively seek out partnerships, align around shared goals, and bring their full expertise to the table.  

As John Vylasek, senior solutions strategist, Data & Analytics, said, “I’ve been leading global teams for many years, and the approach is consistent. Find the people who make the extra effort to communicate, align, and get things done. Whether they’re in Latin America, India, or anywhere else, those relationships are what make the work meaningful.” This level of connection and seamless global collaboration transforms good work into lasting impact. Global collaboration not only brings out results for our clients but truly defines the Perficient experience for our people. 

Whether through cross-regional delivery, AI-enabled innovation, or in-person engagement, our teams are united by a common mindset: we work better when we work together. Stay tuned for the next blog in this series, where we will explore how we fulfill our mission through collaboration. 

Smart Failure Handling in HCL Commerce with Circuit Breakers
https://blogs.perficient.com/2025/08/15/smart-failure-handling-in-hcl-commerce-with-circuit-breakers/
Fri, 15 Aug 2025 05:48:28 +0000

In modern enterprise systems, stability and fault tolerance are not optional; they are essential. One proven approach to ensure robustness is the Circuit Breaker pattern, widely used in API development to prevent cascading failures. HCL Commerce takes this principle further by embedding circuit breakers into its HCL Cache to effectively manage Redis failures.

What Is a Circuit Breaker?

The Circuit Breaker is a design pattern commonly used in API development to stop continuous requests to a service that is currently failing, thereby protecting the system from further issues. It helps maintain system stability by detecting failures and stopping the flow of requests until the issue is resolved.

The circuit breaker typically operates in three main (or "normal") states. These are part of the standard Circuit Breaker design pattern.

Normal States:

  1. CLOSED:
  • At the start, the circuit breaker allows all outbound requests to external services without restrictions.
  • It monitors the success and failure of these calls.
  2. OPEN:
  • The circuit breaker rejects all external calls.
  • This state is triggered when the failure threshold is reached (e.g., 50% failure rate).
  • It remains in this state for a specified duration (e.g., 60 seconds).
  3. HALF_OPEN:
  • After the wait duration in the OPEN state, the circuit breaker transitions to HALF_OPEN.
  • It allows a limited number of calls to check if the external service has recovered.
  • If these calls succeed (e.g., receive a 200 status), the circuit breaker transitions back to CLOSED.
  • If the error rate continues to be high, the circuit breaker reverts to the OPEN state.
[Figure: Circuit breaker pattern with normal states]

Special States:

  1. FORCED_OPEN:
  • The circuit breaker is manually set to reject all external calls.
  • No calls are allowed, regardless of the external service’s status.
  2. DISABLED:
  • The circuit breaker is manually set to allow all external calls.
  • It does not monitor or track the success or failure of these calls.
[Figure: Circuit breaker pattern with special states]

Circuit Breaker in HCL Cache (for Redis)

In HCL Commerce, the HCL Cache layer interacts with Redis for remote caching. But what if Redis becomes unavailable or slow? HCL Cache uses circuit breakers to detect issues and temporarily stop calls to Redis, thus protecting the rest of the system from being affected.

Behavior Overview:

  • If 20 consecutive failures occur in 10 seconds, the Redis connection is cut off.
  • The circuit remains open for 60 seconds.
  • At this stage, the circuit enters a HALF_OPEN state, where it sends limited test requests to evaluate if the external service has recovered.
  • If even 2 of these test calls fail, the circuit reopens for another 60 seconds.

Configuration Snapshot

To manage Redis outages effectively, HCL Commerce provides fine-grained configuration settings for both Redis client behavior and circuit breaker logic. These settings are defined in the Cache YAML file, allowing teams to tailor fault-handling based on their system’s performance and resilience needs.

 Redis Request Timeout Configuration

Slow Redis responses are not treated as failures unless they exceed the defined timeout threshold. The Redis client in HCL Cache supports timeout and retry configurations to control how persistent the system should be before declaring a failure:

timeout: 3000           # Max time (in ms) to wait for a Redis response
retryAttempts: 3        # Number of retry attempts on failure
retryInterval: 1500    # Specifies the delay (in milliseconds) between each retry attempt.

With the above configuration, the system will spend up to 16.5 seconds (3000 + 3 × (3000 + 1500)) trying to get a response before returning a failure. While these settings offer robustness, overly long retries can result in delayed user responses or log flooding, so tuning is essential.

Circuit Breaker Configuration

Circuit breakers are configured under the redis.circuitBreaker section of the Cache YAML file. Here’s an example configuration:

redis:
  circuitBreaker:
    scope: auto
    retryWaitTimeMs: 60000
    minimumFailureTimeMs: 10000
    minimumConsecutiveFailures: 20
    minimumConsecutiveFailuresResumeOutage: 2 
cacheConfigs:
  defaultCacheConfig:
    localCache:
      enabled: true
      maxTimeToLiveWithRemoteOutage: 300

Explanation of Key Fields:

  • scope: auto: Automatically determines whether the circuit breaker operates at the client or cache/shard level, depending on the topology.
  • retryWaitTimeMs (Default: 60000): Time to wait before attempting Redis connections after circuit breaker is triggered.
  • minimumFailureTimeMs (Default: 10000): Minimum duration during which consecutive failures must occur before opening the circuit.
  • minimumConsecutiveFailures (Default: 20): Number of continuous failures required to trigger outage mode.
  • minimumConsecutiveFailuresResumeOutage (Default: 2): Number of failures after retrying that will put the system back into outage mode.
  • maxTimeToLiveWithRemoteOutage: During Redis outages, local cache entries use this TTL value (in seconds) to serve data without invalidation messages.

Real-world Analogy

Imagine you have a web service that fetches data from an external API. Here’s how the circuit breaker would work:

  1. CLOSED: The service makes calls to the API and monitors the responses.
  2. OPEN: If the API fails too often (e.g., 50% of the time), the circuit breaker stops making calls for 60 seconds.
  3. HALF_OPEN: After 60 seconds, the circuit breaker allows a few calls to the API to see if it’s working again.
  4. CLOSED: If the API responds successfully, the circuit breaker resumes normal operation.
  5. OPEN: If the API still fails, the circuit breaker stops making calls again and waits.
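
To make these transitions concrete, here is a minimal Python sketch of the pattern. It is purely illustrative and is not HCL Cache's actual implementation; the thresholds loosely mirror the defaults described above (20 consecutive failures sustained over at least 10 seconds, a 60-second wait, and 2 failed probes to resume outage mode).

import time

class CircuitBreaker:
    """Illustrative sketch of the CLOSED / OPEN / HALF_OPEN cycle (not HCL Cache's code)."""

    def __init__(self, failure_threshold=20, min_failure_time=10.0,
                 retry_wait=60.0, resume_outage_failures=2):
        self.failure_threshold = failure_threshold        # analogue of minimumConsecutiveFailures
        self.min_failure_time = min_failure_time          # analogue of minimumFailureTimeMs (seconds here)
        self.retry_wait = retry_wait                      # analogue of retryWaitTimeMs (seconds here)
        self.resume_outage_failures = resume_outage_failures
        self.state = "CLOSED"
        self.failures = 0
        self.first_failure_at = None
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.state == "OPEN":
            if time.time() - self.opened_at >= self.retry_wait:
                self.state = "HALF_OPEN"                  # probe with limited calls
                self.failures = 0
            else:
                raise RuntimeError("Circuit is OPEN: request rejected without calling the service")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self._record_failure()
            raise
        # Any success closes the breaker and clears the failure streak.
        self.state = "CLOSED"
        self.failures = 0
        self.first_failure_at = None
        return result

    def _record_failure(self):
        now = time.time()
        if self.state == "HALF_OPEN":
            # A couple of failed probes put the breaker straight back into outage mode.
            self.failures += 1
            if self.failures >= self.resume_outage_failures:
                self._open(now)
            return
        if self.first_failure_at is None:
            self.first_failure_at = now
        self.failures += 1
        # Open only once failures are both numerous enough and sustained long enough.
        if (self.failures >= self.failure_threshold
                and now - self.first_failure_at >= self.min_failure_time):
            self._open(now)

    def _open(self, now):
        self.state = "OPEN"
        self.opened_at = now

Wrapping a call such as breaker.call(redis_client.get, key) would then reject requests outright while the breaker is OPEN, which is the same kind of protection HCL Cache applies to Redis traffic during an outage.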

Final Thought

By combining the classic circuit breaker pattern with HCL Cache’s advanced configuration, HCL Commerce ensures graceful degradation during Redis outages. It’s not just about availability—it’s about intelligent fault recovery.

For more detailed information, you can refer to the official documentation here:
🔗 HCL Commerce Circuit Breakers – Official Docs

From Self-Service to Self-Driving: How Agentic AI Will Transform Analytics in the Next 3 Years
https://blogs.perficient.com/2025/08/13/from-self-service-to-self-driving-how-agentic-ai-will-transform-analytics-in-the-next-3-years/
Wed, 13 Aug 2025 20:41:05 +0000

From Self-Service to Self-Driving: How Agentic AI Will Transform Analytics in the Next 3 Years

Imagine starting your workday with an alert not from a human analyst, but from an AI agent. While you slept, this agent sifted through last night’s sales data, spotted an emerging decline in a key region, and already generated a mini-dashboard highlighting the issue and recommending a targeted promotion. No one asked it to; it acted on its own. This scenario isn’t science fiction or some distant future; it’s the imminent reality of agentic AI in enterprise analytics. Businesses have spent years perfecting dashboards and self-service BI, empowering users to explore data on their own. However, in a world where conditions are constantly changing, even the most advanced dashboard may feel excessively slow. Enter agentic AI: the next frontier where intelligent agents don’t just inform decisions; they make and even execute decisions autonomously. Over the next 1–3 years, this shift toward AI-driven “autonomous BI” is poised to redefine how we interact with data, how analytics teams operate, and how insights are delivered across organizations.

In this post, we’ll clarify what agentic AI means in the context of enterprise analytics and explore how it differs from traditional automation or self-service BI. We’ll forecast specific changes this paradigm will bring, from business users getting proactive insights to data teams overseeing AI collaborators, and call out real examples (think AI agents auto-generating dashboards, orchestrating data pipelines, or flagging anomalies in real time). We’ll also consider the cultural and organizational implications of this evolution, such as trust and governance, and conclude with a point of view on how enterprises can prepare for the agentic AI era.

What is Agentic AI in Enterprise Analytics?

Agentic AI (often called agentic analytics in BI circles) refers to analytics systems powered by AI “agents” that can autonomously analyze data and take action without needing constant human prompts. In traditional BI, a human analyst or business user queries data, interprets results, and decides on an action. By contrast, an agentic AI system is goal-driven and proactive; it continuously monitors data, interprets changes, and initiates responses aligned with business objectives on its own. In other words, it shifts the analytics model from simply supporting human decisions to executing or recommending decisions independently.

Put simply, agentic analytics enables autonomous, goal-driven analytic agents that behave like tireless virtual analysts. They’re designed to think, plan, and act much like a human analyst would, but at machine speed and scale. Instead of waiting for someone to run a report or ask a question, these AI agents proactively scan data streams, reason over what they find, and trigger the appropriate next steps. For example, an agent might detect that a KPI is off track and automatically send an alert or even adjust a parameter in a system, closing the loop between insight and action. This stands in contrast to earlier “augmented analytics” or alerting tools that, while they could highlight patterns or outliers, were fundamentally passive; they still waited for a human to log in or respond. Agentic AI, by definition, carries the initiative: it doesn’t just explain what’s happening; it helps change what happens next.

It’s worth noting that the term “agentic” implies having agency, the capacity to act autonomously. In enterprise analytics, this means the AI isn’t just crunching numbers; it’s making choices about what analyses to perform and what operational actions to trigger based on those analyses. This could range from generating a new visualization to writing back results into a CRM to launching a workflow in response to a detected trend. Crucially, agentic AI doesn’t operate in isolation of humans’ goals. These agents are usually configured around explicit business objectives or KPIs (e.g., reduce churn, optimize inventory). They aim to carry out the intent set by business leaders, just without needing a person to micromanage each step.

Beyond Automation and Self-Service – How Agentic AI Differs from Today’s BI

It’s important to distinguish agentic AI from the traditional automation and self-service BI approaches that many enterprises have implemented over the past decade. While those were important steps in modernizing analytics, agentic AI goes a step further in several key ways:

  • Proactive vs. Reactive: Traditional BI systems (even self-service ones) are fundamentally reactive. They provide dashboards, reports, or alerts that a human must actively check or respond to. Automation in classic BI (like scheduled reports or rule-based alerts) can trigger predefined actions, but only for anticipated scenarios. Agentic AI flips this model: AI agents continuously monitor data streams and autonomously identify anomalies or opportunities in real time, acting without waiting for a human query or a pre-scheduled job. The system doesn’t sit idle until someone asks a question; it searches for questions to answer and problems to solve on its own. This drastically reduces decision latency, as actions can be taken at the moment conditions warrant, not hours or days later when a person finally notices.
  • Decision Execution vs. Decision Support: Self-service BI and automation tools have largely been about supporting human decision-making, surfacing insights faster, or auto-refreshing data, but ultimately leaving the interpretation and follow-up to people. Agentic AI shifts to decision execution. An agentic analytics platform can decide on and carry out a next step in the business process. Rather than just emailing you an alert about a sudden dip in revenue, an agent might also initiate a discounted offer to at-risk customers or reallocate ad spend, actions a human analyst might have taken, now handled by the AI. It’s a move from insight to outcome. As one industry observer put it, “agentic analytics executes and orchestrates actions… a shift from insights for humans to outcomes through machines.” Importantly, this doesn’t mean removing humans entirely; think of it as humans setting the goals and guardrails, while the AI agent carries out the routine decisions within those boundaries (often phrased as moving from human-in-the-loop to human-on-the-loop oversight).
  • Adaptive Learning vs. Static Rules: Traditional automation often runs on static, predefined rules or scripts (e.g., “if KPI X drops below Y, send alert”). Agentic AI agents are typically powered by advanced AI (including machine learning and large language models) that allow them to learn and adapt. They maintain memory of past events, learn from feedback, and improve their recommendations over time. This means the agent can handle novel situations better than a fixed rule could. For instance, if an agent took an action that didn’t have the desired outcome, it can adjust its strategy next time. This continuous learning loop is something traditional BI tools lack; they’re only as good as their initial programming, whereas an agentic system can get “smarter” and more personalized with each iteration.
  • Natural Interaction and Democratization: Self-service BI lowered the technical barrier for users to get insights (e.g., drag-and-drop dashboards, natural language query features). Agentic AI lowers it even further by allowing conversational or even hands-off interaction. Business users might simply state goals or ask questions in plain English, and the AI agent handles the heavy lifting of data analysis and presentation. For example, a user could ask, “Why did our conversion rate drop last week?” and receive an explanation with charts, without writing a single formula. More impressively, an agent might notify the user of the drop before they even ask, complete with a diagnosis of causes. In effect, everyone gets access to a “personal data analyst” that works 24/7. This continues the BI trend of democratizing data, but with agentic AI, even non-technical users can leverage advanced analytics because the AI translates raw data into succinct, contextual insights. The result is more people in the organization can harness data effortlessly, through intuitive interactions, without sacrificing trust or accuracy, although ensuring that trust is maintained brings us to important governance considerations, which we’ll discuss later.

In summary, agentic AI goes beyond what traditional automation or self-service BI can do. If a classic self-service dashboard was like a GPS map you had to read, an agentic AI is like a self-driving car; you tell it where you want to go, and it navigates there (while you watch and ensure it stays on track). This evolution is happening now because of converging advances in technology: more powerful AI models, API-accessible cloud tools, and enterprises’ appetite for real-time, automated decisions. With the groundwork laid, analytics is moving from a manual, human-driven endeavor to a collaborative human-AI partnership, and often, the AI will take the first action.

The Coming Changes: How Agentic AI Will Impact Users, Teams, and Analytics Delivery

What practical changes should we expect as agentic AI becomes part of enterprise analytics in the next 1–3 years? Let’s explore the forecast across three dimensions: how business users interact with data, how data and analytics teams work, and how analytics capabilities are delivered in organizations.

Impact on Business Users: From Asking for Insights to Acting on Conversations

For business users, the managers, analysts, and non-technical staff who consume data, agentic AI will make analytics feel more like a conversation and less like a hunt for answers. Instead of clicking through dashboards or waiting for weekly reports, users will have AI assistants that deliver insights proactively and in real-time.

  • Proactive Insights and Alerts: Users will increasingly find that key insights come to them without asking. AI agents will continuously watch metrics and immediately flag anomalies or trends in real time, for instance, spotting a sudden spike in support tickets or a dip in conversion rate, and notify the relevant users with an explanation. This might happen via the tools people already use (a Slack message, an email, a mobile notification) rather than a BI portal. Crucially, the agent doesn’t just raise a flag; it provides context (e.g., “Conversion rates dropped 5% today, mainly in the Northeast region, possibly due to a pricing change”) and might even suggest a next step. Business users move from being discoverers of insights to responders to insights surfaced autonomously.
  • Conversational Data Interaction: The mode of interacting with analytics will shift toward natural language. We’re already seeing early versions of this with chatbots in analytics tools, but agentic AI will make it far more powerful. Users will be able to ask follow-up questions in plain English and get instant answers with relevant charts or predictions, effectively having a dialog with their data. For example, a marketing VP could ask, “Agent, why is our Q3 pipeline behind plan?” and get a dynamically generated explanation that the agent figured out by correlating CRM data and marketing metrics. If the answer isn’t clear, the VP can ask, “Can you break that down by product line and suggest any fixes?”, and the agent will drill down and even propose actions (like increasing budget on a lagging campaign). This means less time training business users on BI tools and more time acting on insights, since the AI handles the mechanics of data analysis.
  • Higher Trust (with Transparency): Initially, some users may be wary of an AI making suggestions or decisions; trust is a big cultural factor. Over the next few years, expect agentic AI tools to integrate explainability features to earn user trust. For instance, an agent might not only send a recommendation but also a brief rationale: “I’m suggesting a price drop on Product X because sales are 20% below forecast and inventory is high.” This transparency, along with the option for users to provide feedback or override decisions, will be key. As users see that the agents’ tips are grounded in data and often helpful, comfort with “AI co-workers” will grow. In fact, by offloading routine analysis to AI, business users can focus more on strategic thinking, and paradoxically increase their data literacy by engaging in more high-level questioning of the data (the AI does the number crunching, but users still exercise judgment on the recommendations).
  • Example, Daily “Agent” Briefings: To illustrate, imagine a finance director gets a daily briefing generated by an AI agent each morning. It’s a short narrative: “Good morning. Today’s cash flow is on track, but I noticed an unusual expense spike in marketing, 30% above average. I’ve attached a breakdown chart and alerted the marketing lead. Also, three regional sales agents missed their targets; I’ve scheduled a meeting on their calendars to review. Let me know if you want me to take any action on budget reallocations.” This kind of hands-off insight delivery, where the agent surfaces what matters and even kicks off next steps, could become a routine part of business life. Business users essentially gain a virtual analyst that watches over their domain continuously.

Overall, for business users, the next few years with agentic AI will feel like analytics has turned from a static product (dashboards and reports you check) into an interactive service (an intelligent assistant that tells you what you need to know and helps you act on it). The organizations that embrace this will likely see faster decision cycles and a more data-informed workforce, as employees spend less time gathering insights and more time using them.
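
To make the "virtual analyst" idea a bit more tangible, here is a deliberately simplified Python sketch of a proactive monitoring loop. Everything in it is hypothetical: fetch_metric and post_to_channel stand in for whatever data source and messaging integration (Slack, Teams, email) an organization actually uses, and a real agent would add far richer reasoning and context.

import random
import statistics
import time

def fetch_metric(name: str) -> float:
    """Stand-in for a real metrics source (warehouse query, API, event stream)."""
    return random.gauss(1000, 50)  # simulated hourly conversions

def post_to_channel(message: str) -> None:
    """Stand-in for a chat or notification integration."""
    print(message)

def monitor(metric: str, z_threshold: float = 3.0, poll_seconds: int = 300) -> None:
    """Poll a metric, flag values that deviate sharply from recent history,
    and push a short contextual alert instead of waiting for someone to look."""
    history: list[float] = []
    while True:
        value = fetch_metric(metric)
        if len(history) >= 30:
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history) or 1.0
            z = (value - mean) / stdev
            if abs(z) >= z_threshold:
                direction = "above" if z > 0 else "below"
                post_to_channel(
                    f"Heads up: {metric} is {value:,.0f}, "
                    f"{abs(z):.1f} standard deviations {direction} "
                    f"its recent average of {mean:,.0f}. Worth a look today."
                )
        history.append(value)
        history = history[-200:]  # keep a rolling window of recent observations
        time.sleep(poll_seconds)

# monitor("conversions_per_hour")  # would run indefinitely as a background agent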

Impact on Data Teams: From Builders of Reports to Trainers of AI Partners

For data and analytics teams (data analysts, BI developers, data engineers, data scientists), agentic AI will bring a significant shift in roles and workflows. Rather than manually producing every insight or report, these teams will collaborate with AI agents and focus on enabling and governing these agents.

  • Shift to Higher-Value Tasks: Much of a data team’s routine workload today, writing SQL queries, building dashboards, updating reports, and troubleshooting minor data issues, can be time-consuming. As AI agents start handling tasks like generating analyses or spotting data issues automatically, human analysts will be freed up for more high-value activities. For example, if an agent can automatically produce a weekly KPI overview and pinpoint the outliers, the analyst can spend their time investigating the why behind those outliers and planning strategic responses, rather than crunching the numbers. Data scientists might similarly delegate basic model monitoring or data prep to AI routines and focus on designing better experiments or algorithms. In essence, the human experts become more like strategic supervisors and domain experts, guiding the AI on what problems to tackle and validating how the AI’s insights are used.
  • New Collaboration with AI (“Centaur” Teams): We’ll likely see the rise of “centaur” analytics teams, a term borrowed from human-computer chess teams, where human analysts and AI agents work together on analytics projects. A data analyst might ask an AI agent to fetch and preprocess certain data, test dozens of correlations, or even draft an analytic report. The analyst then reviews, corrects, and adds domain context. This iterative partnership can drastically speed up analysis cycles. Data teams will need to develop skills in prompting and guiding AI agents, much like a lead analyst guiding a junior employee. The next 1–3 years might even see specialized roles emerge, such as Analytics AI Trainers or AI Wrangler, people who specialize in configuring these agents, tuning their behavior (for example, setting the logic for when an agent should escalate an issue to a human), and feeding them the right context.
  • Focus on Data Pipeline Orchestration and Quality: Agentic AI is only as good as the data it can access. Data engineers will find their work more crucial than ever, not in manually running pipelines, but in ensuring robust, real-time data infrastructure for the agents. In fact, one of the big changes is that AI agents themselves may orchestrate data pipelines or integration tasks as needed. For instance, if an analytics agent determines it needs fresh data from a new source (say, a marketing system) to analyze a trend, it could automatically trigger an ETL job or API call to pull that data, rather than waiting on a data engineer’s backlog. We’re already seeing early architectures where an agent, empowered with the right APIs, can initiate workflows across the data stack. Data teams, therefore, will put more effort into building composable, API-driven data platforms that agents can plug into on the fly. They will also need to set up monitoring. If an agent’s automated pipeline run fails or produces weird results, it should alert the team or retry, which ties into governance (discussed below).
  • Example, AI Orchestrating a Pipeline: Consider a data engineering scenario: an AI agent in charge of analytics notices that a particular report is missing data about a new product line. Traditionally, an engineer might have to add the new data source and rebuild the pipeline. In an agentic AI setup, the agent itself might call a data integration tool via API to pull in the new product data and update the data model, then regenerate the dashboard with that data included. All of this could happen in minutes, whereas a manual process might take days. The data team’s job in this case was to make sure the integration tool and data model were accessible and that the agent had the proper permissions and guidelines. This kind of autonomous pipeline management could become more common, with humans overseeing the exceptions.
  • Guardians of Governance: Perhaps the most critical role for data teams will be governing the AI agents. They will define the guardrails, what the agents are allowed to do autonomously vs. where human sign-off is required, how to avoid the AI making erroneous or harmful decisions, and how to monitor the AI’s performance. Data governance and security professionals will work closely with analytics teams to implement policy-based controls on these agents. For example, an agent might be permitted to send an internal Slack alert or create a Jira ticket on its own, but not to send a message directly to a client without approval. Every action an agent takes will likely be logged and auditable. The next few years will see companies extending their data governance frameworks to cover AI behavior, ensuring transparency, preventing “rogue” actions, and maintaining compliance. Data teams will need to build trust dashboards of their own, showing how often agents are intervening, what outcomes resulted, and flagging any questionable AI decisions for review.

In short, data teams will transition from being the sole producers of analytics output to being the enablers and overseers of AI-driven analytics. Their success will be measured not just by the reports they build, but by how well they can leverage AI to scale insights. This means stronger emphasis on data quality, real-time data availability, and robust governance. Culturally, it may require a mindset shift: accepting that some of the work traditionally done “by hand” can be delegated to machines, and that the value of the team is in how they guide those machines and interpret the results, rather than in producing every chart themselves. Organizations that prepare their data talent for this augmented role, through training in AI tools and proactive change management, will handle the transition more smoothly.
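
As a thought experiment, those guardrails often reduce to a thin policy layer between the agent and the systems it can touch. The Python sketch below is hypothetical: the action names, allowlists, and logging are placeholders for whatever approval workflow and audit mechanism a real deployment would use.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Actions the agent may take on its own versus those that need human sign-off.
AUTONOMOUS_ACTIONS = {"post_internal_alert", "create_ticket", "refresh_dashboard"}
APPROVAL_REQUIRED = {"email_customer", "change_price", "reallocate_budget"}

def execute_agent_action(action: str, payload: dict) -> str:
    """Gate every agent-initiated action through a policy check and an audit trail."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "payload": payload,
    }
    if action in AUTONOMOUS_ACTIONS:
        record["decision"] = "executed"
        audit_log.info(json.dumps(record))
        return "executed"  # hand off to the real integration here
    if action in APPROVAL_REQUIRED:
        record["decision"] = "pending_human_approval"
        audit_log.info(json.dumps(record))
        return "pending_human_approval"
    record["decision"] = "blocked"
    audit_log.warning(json.dumps(record))
    return "blocked"

# Example: the agent may alert an internal channel on its own, but not email a customer.
execute_agent_action("post_internal_alert", {"channel": "#revenue-ops", "text": "Churn risk up 12%"})
execute_agent_action("email_customer", {"customer_id": 42, "template": "win_back"})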

Impact on Analytics Delivery: Insights When and Where They’re Needed

Agentic AI will also transform how analytics capabilities are delivered and consumed in the enterprise. Today, the typical delivery mechanism is a dashboard, report, or perhaps a scheduled email, in other words, the user has to go to a tool or receive a static packet of information. In the coming years, analytics delivery will become more embedded, continuous, and personalized, largely thanks to AI agents working behind the scenes.

  • From Dashboards to Embedded Insights: We may witness the beginning of the end of the standalone, static dashboard as the primary analytics product. Instead, insights will be delivered in the flow of work. AI agents can push insights into chat applications, business software (CRM, ERP), or even directly into operational dashboards in real-time. For example, rather than expecting a manager to log into a BI tool, an agent might integrate with Slack or Microsoft Teams to post a daily metrics summary, or inject an alert into a sales system (“this customer is at risk of churning; here’s why…” as a note on the account). This embedded approach has been called “headless BI” or “analytics anywhere,” and agentic AI accelerates it, because the agents can operate through APIs; they aren’t tied to a single UI. The result: analytics becomes more ubiquitous but less visible; users just experience their software getting smarter with data-driven guidance at every turn, courtesy of AI.
  • Autonomous Report Generation: The creation of analytic content itself will increasingly be automated. Need a new report or visualization? In many cases, you won’t file a request to IT or even drag-and-drop it yourself; an AI agent can generate it on the fly. For instance, if a department head wonders about a trend, the agent can compile a quick dashboard or narrative report addressing that query, using templates and visualization libraries. These reports might be ephemeral (created for that moment and then discarded or refreshed later). Over the next few years, as agentic AI gets better at understanding business context, we’ll see “self-serve” taken to the next level: the system serves itself on behalf of the user. One concrete example today is AI that generates Power BI or Tableau dashboards from natural language questions. Going forward, an agent might proactively create an entire dashboard for a quarterly business review meeting, unprompted, because it knows what metrics the meeting usually covers and has detected some changes worth highlighting. Indeed, some modern BI platforms are already hinting at this capability; e.g., Tableau’s upcoming “Pulse” and ThoughtSpot’s Spotter agent aim to deliver key metrics and even generate charts without manual effort.
  • Real-Time Anomaly Detection and Action: Real-time analytics isn’t new, but agentic AI will broaden its impact. Rather than just streaming charts updating in real time, an agentic approach means the moment an anomaly occurs, it’s not only detected, but something happens. This is analytics delivery as an event-driven process. If a sudden spike in website latency is detected, an AI agent might immediately create an incident ticket and ping the on-call engineer with diagnostic info attached. If sales on a new product are surging beyond forecast, an agent might auto-adjust the supply chain parameters or at least alert the inventory planner to stock up. These kinds of immediate, cross-system actions blur the line between analytics and operations. In effect, analytics outputs (insights) and business inputs (actions) merge. The next few years will likely see BI tools integrating more tightly with automation/workflow platforms so that insight-to-action loops can be closed programmatically. As one example, agents could leverage workflow tools (like Salesforce Flow or Azure Logic Apps) to trigger multi-step processes when certain data conditions are met. The vision is an “autonomous enterprise” where routine decisions and responses happen at machine speed, with humans intervening only for exceptions or strategic choices.
  • Continuous Personalization: Analytics delivery will also become more tailored to each user’s context, thanks to AI’s ability to personalize. An agent could learn what each user cares about (their role, their usual queries, and their past behavior) and customize the insights delivered. For example, a VP of Sales might get alerts about big deals slipping, while a CFO’s agent curates financial risk indicators. Both are looking at the same underlying data universe, but their AI agents filter and format insights to what’s most relevant to each. This personalization extends to timing and format; the AI might learn that a particular manager prefers a text summary vs. a chart and deliver information accordingly. In the near term, this might simply mean smarter defaults and recommendations in BI tools. Within a few years, it could mean each executive essentially has a bespoke analytics feed curated by an AI that knows their priorities.

To sum up, analytics capabilities will be delivered more fluidly and in an integrated fashion. Rather than thinking of “going to analytics,” the analytics will come to you, often initiated by an agent. Dashboards and reports will not disappear overnight (they still have their place for deep dives and record-keeping), but the center of gravity will shift toward timely insights injected into decision points. The business impact is significant: decisions can be made faster and in context, and fewer opportunities or risks will slip through unnoticed between reporting cycles. It’s a world where, ideally, nothing important waits for the next report; your AI agent has already informed the right people or taken action.
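
To make that insight-to-action loop concrete, here is a minimal sketch of an event-driven handler for the latency-spike example above. The metric name, the threshold, and the ticketing and notification functions are placeholders, not any specific monitoring or workflow product's API.

    // Minimal sketch of an event-driven insight-to-action loop.
    // The metric name, threshold, and the ticketing/notification functions are placeholders
    // for whatever monitoring and workflow tools an organization actually uses.
    interface MetricEvent {
        name: string;      // e.g. "website_latency_ms"
        value: number;
        timestamp: string;
    }

    const LATENCY_THRESHOLD_MS = 800; // assumed service-level threshold, for illustration only

    // Placeholder integrations; real implementations would call Jira, PagerDuty, Slack, etc.
    async function createIncidentTicket(ticket: { title: string; severity: string }): Promise<void> {
        console.log("Creating ticket:", ticket.title);
    }

    async function notifyOnCall(message: string): Promise<void> {
        console.log("Paging on-call engineer:", message);
    }

    // Called for every event arriving from a streaming metrics pipeline.
    async function onMetricEvent(event: MetricEvent): Promise<void> {
        if (event.name === "website_latency_ms" && event.value > LATENCY_THRESHOLD_MS) {
            // 1. Open an incident ticket with diagnostic context attached.
            await createIncidentTicket({
                title: `Latency spike: ${event.value} ms at ${event.timestamp}`,
                severity: "high",
            });
            // 2. Notify the on-call engineer immediately instead of waiting for the next report.
            await notifyOnCall(`Latency exceeded ${LATENCY_THRESHOLD_MS} ms; incident ticket created.`);
        }
    }

    void onMetricEvent({ name: "website_latency_ms", value: 1250, timestamp: new Date().toISOString() });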

Organizational Implications: Trust, Culture, and Governance in the Age of AI Agents

The technical capabilities of agentic AI are exciting, but enterprises must also grapple with cultural and organizational implications. Introducing autonomous AI into analytics workflows will affect how people feel about trust, control, and their own roles. Here are some key considerations:

  • Building Trust in AI Decisions: Trust is paramount. If business stakeholders don’t trust the AI outputs or actions, they’ll resist using them. Early in the adoption of agentic AI, organizations should invest in explainability and transparency. Ensure the AI agents can show the rationale behind their conclusions (audit trails, plain-language explanations) to demystify their “thinking.” Start with agents making low-risk decisions and proving their reliability. For instance, let an agent flag anomalies and suggest actions for a period of time, and have humans review its accuracy. As confidence grows, the agent can be allowed to take more autonomous actions. It’s also wise to maintain a human-in-the-loop for critical decisions; for example, an agent might draft an email to a client or a change to pricing, but a human approves it until the AI has earned trust. According to best practices, a well-architected agentic system will log every action and enable easy overrides or rollbacks. Demonstrating these safety nets goes a long way in getting team buy-in.
  • Governance and Ethical Use: Alongside trust is the need for robust governance. Companies will need to update their data governance policies to include AI agent behavior. This means defining what data an agent can access (to prevent privacy violations), what types of decisions it’s allowed to make, and how to handle errors or “hallucinations” (when an AI produces incorrect output). Establish clear accountability: if an AI agent makes a mistake, who checks it and corrects it? Setting up an AI governance committee or expanding the remit of existing data governance boards can help oversee these issues. They should define guidelines like: AI agents must identify themselves as such when communicating (so people know it’s an algorithm), they must adhere to company compliance rules (e.g., not sending sensitive data externally), and they should escalate to humans when a situation is ambiguous or high-stakes. Fortunately, many agentic AI platforms recognize this need and offer role-based controls and audit features. Enterprises should take advantage of those and not treat an autonomous agent as a “set and forget” technology; continuous monitoring is key. Essentially, trust but verify: let the agents run, but keep dashboards for AI performance and a way to quickly intervene if something looks off.
  • Job Roles and Skills Evolution: Understandably, some employees may fear that more AI autonomy could threaten jobs (the classic “will AI replace me?” concern). It’s critical for leadership to address this proactively as part of cultural change. The narrative should be that agentic AI is meant to augment human talent, not replace it, taking over drudgery and enabling people to focus on higher-value work. In many cases, new roles will emerge (as discussed for data teams), and existing roles will shift to incorporate AI supervision. Training and upskilling programs will be important so that staff know how to work with AI agents. For example, train business analysts to interpret AI-generated insights and ask the right questions of the system, or train data scientists on how to embed AI agents into workflows. Equally, encourage development of “soft skills” like critical thinking and data storytelling, because while the AI can crunch data, humans still need to translate insights into decisions and convince others of a course of action. Organizations that treat this as an opportunity for employees to become more strategic and tech-savvy will find the cultural transition much smoother than those that simply impose the technology. Including end-users in pilot projects (so they can give feedback on the agent’s behaviors and feel ownership) is another good practice to ease adoption.
  • Data Literacy and Decision Culture: With AI taking on more analytics tasks, one might worry that employees’ data skills will atrophy. On the contrary, if rolled out correctly, agentic AI can actually raise the baseline of data literacy in the company. When AI agents provide insights in accessible language, it can educate users on what the data means. People might start to internalize, for example, which factors typically influence sales because their AI assistant frequently points them out. However, there’s a flip side: employees must be educated not to blindly follow AI. A culture of healthy skepticism and validation should be maintained, e.g., encouraging users to double-check critical suggestions or understand the “why” behind agent actions. Essentially, “trust the AI, but verify the results” should be a mantra. Businesses should continue investing in data literacy programs, now including AI literacy: teaching staff the basics of how these analytics agents work, their limitations, and how to interpret their outputs. This will empower employees to use AI as a tool rather than see it as a mysterious black box or, worse, a threat.
  • Change Management and Communication: Rolling out agentic AI capabilities enterprise-wide is a major change that touches processes and people across departments. A strong change management plan is essential. Communicate early and often about what agentic AI is, why the company is adopting it, and how it will benefit both the organization and individual employees (e.g., “It will free you from manual spreadsheet updates so you can spend more time with clients”). Highlight success stories from pilot tests; for instance, if the sales team’s new AI agent helped them respond faster to lead changes, share that story. Address concerns in open forums. And provide channels for feedback once it’s in use: users should have a way to report if the AI agent did something weird or if they have ideas for improvements. Culturally, leadership should champion a mindset of responsible experimentation, encourage teams to try these new AI-driven workflows while also reinforcing that ethical considerations and human judgment remain paramount. Over the next few years, companies that actively shape their culture around human-AI collaboration will likely outperform those that simply deploy the tech and hope people figure it out.

Preparing for the Agentic AI Era: Recommendations for Enterprises

Agentic AI in analytics is on the horizon, and the time to prepare is now. Here’s a forward-thinking game plan for enterprises to get ready for this shift:

  • Strengthen Data Foundations: Ensure your data house is in order. Agentic AI thrives on timely, high-quality data. Invest in data readiness, integrate your data sources, clean up quality issues, and build the pipelines for real or near-real-time data access. Consider modern data architectures (like data lakes or warehouses with streaming capabilities) that an AI agent can tap into on demand. The next 1–3 years should see upgrades to data infrastructure with an eye toward supporting AI: e.g., adopting tools that allow easy API access to data, implementing robust data catalogs/semantic layers (so the AI agents understand business definitions), and generally making data more available and trustworthy. Simply put, if your data is fragmented or slow, an AI agent won’t magically fix that; lay the groundwork now.
  • Start with Pilot Projects: Rather than flipping a switch enterprise-wide, start by introducing agentic AI on a smaller scale to learn what works. Identify a use case with clear value, for example, an AI agent to monitor financial metrics for anomalies, or an agent to handle marketing campaign optimization suggestions. Pilot it in one department or process. This allows you to fine-tune the technology and the human processes around it. In the pilot, closely involve the end-users and gather feedback: Did the agent provide useful insights? Did it make any mistakes? How was the user experience? Use these lessons to refine your approach before scaling up. Early successes will also build momentum and buy-in within the organization. By experimenting in the next year, you’ll develop internal expertise and champions who can lead broader adoption in years 2 and 3.
  • Invest in Skills and Change Management: Prepare your people, not just your tech. Launch training programs and workshops to familiarize employees with the concepts of AI-driven analytics. Train your data teams on the specific AI tools or platforms you plan to use (maybe it’s a feature in your BI software, or a custom AI solution using Python frameworks). Also, upskill business users on how to interpret AI outputs, for instance, how to converse with a data chatbot effectively, or how to verify an AI-generated insight. Simultaneously, engage in change management: communicate the vision that agentic AI will augment everyone’s capabilities. Address the “what does this mean for my job” questions head-on (perhaps emphasizing that the organization will re-invest efficiency gains into growth, not just headcount cuts, to quell fears). Encourage a culture of continuous learning so employees see this as an opportunity to learn new tools and advance their roles. Essentially, prepare the human minds for the change, not just the IT systems.
  • Define Governance and Guardrails: Before unleashing AI agents, define the governance policies that will keep them in check. Assemble the relevant stakeholders (IT, data governance, legal, business leaders) to map out scenarios: What decisions can the AI make autonomously? What data is it allowed to use? How will we handle errors or unexpected outcomes? Draft guidelines such as “AI must tag any outbound communication as AI-generated” or “For decisions impacting spend over $X, require human approval”. Set up an oversight process, maybe a periodic review of AI agent logs and outcomes by a governance board. This preparation will help prevent incidents and also reassure everyone that there are safety nets. Additionally, explore your tool’s capabilities for setting roles/permissions for agents. Many modern analytics platforms embed governance features (for example, ensuring the AI only uses governed data sources or limiting integration points to approved systems). Leverage those. In short, treat your AI agent like a new team member: it needs a “job description” and supervision.
  • Reimagine Processes and Roles: Be proactive in redesigning workflows to integrate AI agents. Don’t just slap AI onto existing processes; think about where decisions or handoffs could be made more efficient. For example, if marketing currently meets weekly to adjust campaigns, could an AI agent handle adjustments daily and the meeting shift to strategy? If data engineers spend time on routine pipeline fixes, can an agent auto-detect and resolve some of those? Start mapping these possibilities and adjusting team roles accordingly. You might formally assign someone as an “AI operations” lead to monitor all agent activity. You might need to update incident response playbooks to include AI-generated alerts. Also consider KPI changes: perhaps include metrics like “number of autonomous decisions executed” or “AI agent precision (accuracy of its recommendations)” as new performance indicators for the analytics program. By envisioning these changes early, you can guide the transition rather than just reacting to it.
  • Develop a Clear Vision and Executive Support: Finally, ensure there is a clear point of view from leadership on why the organization is embracing agentic AI. Tie it to business goals (faster insights, more competitive decisions, empowered employees, etc.). When leadership articulates a positive vision, e.g., “In three years, we aim to have AI copilots assisting every team, elevating our decision-making and freeing us to focus on innovation,” it gives the effort purpose and urgency. Secure executive sponsorship to allocate budget and to champion the change across departments. Enterprises should also track the industry and learn from others: join communities or forums on AI in analytics, and perhaps partner with vendors or consultants who specialize in this area (since they can share best practices from multiple client experiences). A clear, supported strategy will help coordinate the technical and cultural preparation into a successful transformation.

Agentic AI represents a bold leap in the evolution of business intelligence, from tools that we operate to intelligent agents that work alongside us (and sometimes ahead of us). In the next 1–3 years, we can expect early forms of these AI agents to become part of everyday analytics in forward-thinking enterprises. They will likely start by tackling well-defined tasks: automatically generating reports, sending alerts for anomalies, and answering common analytical questions. Over time, as trust and sophistication grow, their autonomy will increase to more complex orchestrations and decision executions. The payoff can be substantial: faster decision cycles, decisions that are more data-driven and less prone to human overlook, and analytics capabilities that truly scale across an organization. Companies that embrace this shift early could gain a competitive edge, outpacing those stuck in manual analytics with speed, agility, and insights that are both deeper and more timely.

Yet, success with agentic AI won’t come just from buying the latest AI tool. It requires a thoughtful approach to technology, process, and people. The enterprises that thrive will be those that pair innovation with governance, enthusiasm with education, and automation with a human touch. By laying the groundwork now, improving data infrastructure, cultivating AI-friendly skills, and establishing clear rules, organizations can confidently welcome their new AI “colleagues” and harness their potential. In the near future, your most trusted analyst might not be a person at all, but an algorithmic agent that never sleeps, never gets tired, and continuously learns. The question is, will your organization be ready to partner with it and leap ahead into this new age of analytics?

Sources:

  • Ryan Aytay, Tableau, “Agentic Analytics: A New Paradigm for Business Intelligence”, Tableau Blog (April 2025)
  • Arend Verschueren, Biztory, “Agentic Analytics: The Future of Autonomous BI” (June 2025)
  • Shuchismita Sahu, Medium, “Agentic BI: Your Intelligent Data Analyst Revolution” (May 2025)
  • Will Thrash, Perficient Blogs, “Elevate Your Analytics: Overcoming the Roadblocks to AI-Driven Insights” (Jan 2025)
  • Will Thrash, Perficient Blogs, “Headless BI?” (Nov 2023)

 

AI-Powered Personalization: Integrate Adobe Commerce with Real-Time CDP https://blogs.perficient.com/2025/08/13/ai-powered-personalization-integrate-adobe-commerce-with-real-time-cdp/ https://blogs.perficient.com/2025/08/13/ai-powered-personalization-integrate-adobe-commerce-with-real-time-cdp/#respond Wed, 13 Aug 2025 14:53:53 +0000 https://blogs.perficient.com/?p=385760

In today’s hyper-personalized digital world, delivering the right message to the right customer at the right time is non-negotiable. 

Adobe Commerce is a powerful eCommerce engine, but when coupled with Adobe Real-Time CDP (Customer Data Platform), it evolves into an intelligent experience machine capable of deep AI-powered personalization, dynamic segmentation, and real-time responsiveness. 

What is Adobe Real-Time CDP? 

Adobe Real-Time CDP is a Customer Data Platform that collects and unifies data across various sources (websites, apps, CRM, etc.) into a single, comprehensive real-time customer profile. This data is then accessible to other systems for marketing, sales, and service. 

Key Capabilities of Real-time CDP

  • Real-time data ingestion and activation.  
  • Identity resolution across devices and platforms 
  • AI-driven insights and audience segmentation 
  • Data governance and privacy compliance tools 

Why Integrate Adobe Commerce with Adobe CDP? 

Adobe Commerce offers native customer segmentation, but it’s limited to session or behavior data within the commerce environment. When the customer data is vast, the native segmentation becomes very slow, impacting overall performance.  

What We Gain with Real-Time CDP

Feature         | Native Commerce      | Adobe Real-Time CDP
Segmentation    | Static, rule-based   | Real-time, AI-powered
Data Sources    | Commerce-only        | Omnichannel (web, CRM, etc.)
Personalization | Session-based        | Cross-channel, predictive
Identity Graph  | No identity graph    | Cross-device customer data
Activation      | Limited to Commerce  | Activate across systems

Use Cases

  1. Win-back Campaign: Identify dormant users in CDP and activate personalized discounts
  2. Cart Recovery: Capture cart-abandonment events and trigger personalized recovery messaging
  3. High-Intent Buyers: Target customers who browsed premium products but didn't convert

Integration of Adobe Commerce with Adobe Real-Time CDP 

Data Layer Implementation

  • Install Adobe Experience Platform Web SDK to enable real-time event tracking and identity collection.  
  • Define and deploy a custom XDM schema aligned with Commerce events. 
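
As a rough illustration, an add-to-cart event sent through the Web SDK (the global alloy command) might look like the sketch below. The field names follow the standard XDM Commerce field group, but the SKU, product details, and values are assumptions; align everything with the custom schema actually deployed to your datastream.

    // Illustrative Web SDK ("alloy") call for an add-to-cart interaction.
    // Field names assume the standard XDM Commerce field group; align them with the
    // custom XDM schema actually deployed to your datastream.
    declare function alloy(command: "sendEvent", options: { xdm: Record<string, unknown> }): Promise<unknown>;

    alloy("sendEvent", {
        xdm: {
            eventType: "commerce.productListAdds",
            commerce: {
                productListAdds: { value: 1 },
            },
            productListItems: [
                {
                    SKU: "WS12-M-Blue",   // hypothetical SKU for illustration
                    name: "Classic Tee",
                    quantity: 1,
                    priceTotal: 29.0,
                },
            ],
        },
    });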

CDP Personalization Schema

Customer Identity Mapping

  • Implement Adobe Identity Service to build unified customer profiles across anonymous and logged-in sessions. 
  • Ensure login/signup events are tracked for persistent identification. 
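
For example, a sign-in event can carry an identity map so the anonymous profile can be stitched to the known customer. The event type, the "Email" namespace, and the hashed value below are illustrative assumptions; use the identity namespaces configured in your Experience Platform instance.

    // Illustrative sign-in event carrying an identity map for profile stitching.
    // The event type, "Email" namespace, and hashed value are assumptions; use the
    // identity namespaces configured in your Experience Platform instance.
    declare function alloy(command: "sendEvent", options: { xdm: Record<string, unknown> }): Promise<unknown>;

    alloy("sendEvent", {
        xdm: {
            eventType: "userAccount.login",   // assumed event type for illustration
            identityMap: {
                Email: [
                    {
                        id: "sha256-hash-of-email", // placeholder hashed identifier
                        authenticatedState: "authenticated",
                        primary: true,
                    },
                ],
            },
        },
    });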

Data Collection Configuration

  • Tag key Commerce events (add to cart, purchase, product view) to collect data. 
  • Set up batch or streaming ingestion using the following extensions: 
    • audiences-activation 
    • experience-platform-connector
  • Admin configuration for Organization ID, Dataset ID & Data Stream ID:  
    • System -> Services -> Data Connection 
    • System -> Services -> Commerce Service Connector 

Real time CDP Personalization

Audience Segmentation & Activation

  • Create dynamic audiences using behavioral, transactional, and CRM data.  
  • Assign Audience in Adobe Commerce. 

Personalization Execution

  • Leverage Adobe Target or Adobe Experience Manager (AEM) to serve personalized content.
  • CDP can be used for decision-making, such as suppressing offers to customers who are likely to churn. 

Challenges to Consider 

  • Data Governance: Ensure GDPR/CCPA compliance with CDP’s consent management tools. 
  • Identity Resolution Complexity: Work closely with marketing teams to define identity rules. 
  • Cross-Team Collaboration: Integration touches data engineering, commerce, marketing, and legal teams.

Conclusion 

Integrating Adobe Commerce with Adobe Real-Time CDP empowers both business and technical teams to unify customer profiles and stay ahead in a dynamic marketplace by delivering timely, relevant personalization. 

Adobe Real-Time CDP is not just a marketing tool; it's an asset for creating commerce experiences that adapt to the customer in real time. 

How to Setup Nwayo Preprocessor in Magento 2 https://blogs.perficient.com/2025/08/13/how-to-setup-nwayo-preprocessor-in-magento-2/ https://blogs.perficient.com/2025/08/13/how-to-setup-nwayo-preprocessor-in-magento-2/#respond Wed, 13 Aug 2025 06:01:56 +0000 https://blogs.perficient.com/?p=385807

What is Nwayo?

Nwayo Preprocessor is an extendable front-end boilerplate designed to streamline development for multi-theme, multi-site, and multi-CMS front-end frameworks. It provides an efficient workflow for building responsive, scalable, and maintainable web themes across different platforms. 

In Magento 2, Nwayo can be particularly beneficial for front-end developers as it simplifies the theme deployment process. With just a single change in the Sass files, the framework can automatically regenerate and apply updates across the site. This approach not only accelerates the development process but also ensures consistency in the front-end experience across various themes and websites. 

Benefits of Using Nwayo Preprocessor

Time-Saving and Efficiency

  •  Nwayo automates the process of compiling and deploying front-end code, particularly Sass to CSS, with just a few commands. This allows developers to focus more on building and refining features rather than managing repetitive tasks like manual builds and deployments.                                                                                                            

Scalability Across Multi-Site and Multi-Theme Projects

  • Nwayo is designed to handle multi-site and multi-theme environments, which is common in complex platforms like Magento 2. This allows developers to easily maintain and apply changes across different sites and themes without duplicating efforts, making it ideal for large-scale projects.                                                                                   

Consistency and Maintainability

  • By centralizing code management and automating build processes, Nwayo ensures that all updates made in Sass files are applied consistently throughout the project. This helps in maintaining a uniform look and feel across different sections and themes, reducing the risk of human error and improving maintainability.                                                                                                                                                                                                                                        

Flexibility and Extensibility

Nwayo is highly extensible, allowing developers to tailor the boilerplate to their specific project needs. Whether it’s adding new workflows, integrating with different CMS platforms, or customizing the theme, Nwayo provides a flexible framework that can adapt to various front-end requirements.                                             

Version Control and Updates

With built-in commands to check versions and install updates, Nwayo makes it easy to keep the workflow up to date. This ensures compatibility with the latest development tools and standards, helping developers stay current with front-end best practices.  

Requirements to Set Up Nwayo

i) Node.js 

ii) Nwayo CLI

How to Set Up Nwayo Preprocessor in Magento 2?

Run the commands in your project root folder

Step 1

  • To set up the boilerplate for the project 
  • npx @absolunet/nwayo-grow-project

Step 2

  • Install workflow and vendor (in the Nwayo root folder)
  • npm install

Step 3

  • Install CLI (in the Nwayo root folder) 
  • npm install -g @absolunet/nwayo-cli

 Step 4

  • Install Nwayo Workflow (in the Nwayo folder) 
  • nwayo install workflow

Step 5

  • Run the project (in the Nwayo folder) 
  • (It will convert Sass to CSS) 
  • nwayo run watch

Step 6

  • Build the project (in the Nwayo folder) 
  • (It will build the Sass Files) 
  • nwayo rebuild


Magento 2 Integration

Nwayo integrates seamlessly with Magento 2, simplifying the process of managing multi-theme, multi-site environments. Automating Sass compilation and CSS generation allows developers to focus on custom features without worrying about the manual overhead of styling changes. With Nwayo, any updates to your Sass files are quickly reflected across your Magento 2 themes, saving time and reducing errors. 

Compatibility with Other Frameworks and CMS

Nwayo is a versatile tool designed to work with various front-end frameworks and CMS platforms. Its extendable architecture allows it to be used beyond Magento 2, providing a unified front-end development workflow for multiple environments. Some of the other frameworks and platforms that Nwayo supports include: 

1. WordPress

Nwayo can be easily adapted to work with WordPress themes. Since WordPress sites often rely on custom themes, Nwayo can handle Sass compilation and make theme management simpler by centralizing the CSS generation process for various stylesheets used in a WordPress project. 

2. Drupal

For Drupal projects, Nwayo can streamline theme development, allowing developers to work with Sass files while ensuring CSS is consistently generated across all Drupal themes. This is especially helpful when maintaining multi-site setups within Drupal, as it can reduce the time needed for theme updates. 

3. Laravel

When working with Laravel-based applications that require custom front-end solutions, Nwayo can automate the build process for Sass files, making it easier to manage the styles for different views and components within Laravel Blade templates. It helps keep the front-end codebase clean and optimized. 

4. Static Site Generators (Jekyll, Hugo, etc.)

Nwayo can also be used in static site generators like Jekyll or Hugo. In these setups, it handles the styling efficiently by generating optimized CSS files from Sass. This is particularly useful when you need to manage themes for static websites where speed and simplicity are key priorities. 

Framework-Agnostic Features

Nwayo’s CLI and Sass-based workflow can be customized to work in nearly any front-end project, regardless of the underlying CMS or framework. This makes it suitable for developers working on custom projects where there’s no predefined platform, allowing them to benefit from a consistent and efficient development workflow across different environments. 

Performance and Optimization

Nwayo includes several built-in features for optimizing front-end assets: 

  • Minification of CSS files: Ensures that the final CSS output is as small and efficient as possible, helping to improve page load times. 
  • Code Splitting: Allows developers to load only the required CSS for different pages or themes, reducing the size of CSS payloads and improving site performance. 
  • Automatic Prefixing: Nwayo can automatically add vendor prefixes for different browsers, ensuring cross-browser compatibility without manual adjustments.              

Custom Workflow Adaptation

Nwayo’s modular architecture allows developers to easily add or remove features from the workflow. Whether you’re working with React, Vue, or other JavaScript frameworks, Nwayo’s preprocessor can be extended to fit the unique requirements of any project. 

Example Framework Compatibility Diagram

The diagram below summarizes Nwayo's compatibility with different frameworks and CMS platforms: 

Framework Compatibility Diagram

This visual table makes it clear which frameworks Nwayo supports, giving developers an overview of its flexibility. 

10 Useful Nwayo Preprocessor Commands 

In addition to the basic commands for setting up and managing Nwayo in your project, here are other helpful commands you can use for various tasks:                                                                                                                                           

1. Check Nwayo Version


This command allows you to verify the currently installed version of Nwayo in your environment. 

2. Install Vendors 


Installs third-party dependencies required by the Nwayo workflow, making sure your project has all the necessary assets to function correctly. 

3. Remove Node Modules 


This command clears the node_modules folder, which may be helpful if you’re facing dependency issues or need to reinstall modules. 

4. Build the Project 


Runs a complete build of the project, compiling all Sass files into CSS. This is typically used when preparing a project for production.

5. Watch for File Changes 


Watches for changes in your Sass files and automatically compiles them into CSS. This is useful during development when you want real-time updates without having to manually trigger a build. 

6. Linting (Check for Code Quality) 


Checks your Sass files for code quality and best practices using predefined linting rules. This helps ensure that your codebase follows consistent styling and performance guidelines. 

7. Clean Build Artifacts 


Removes generated files (CSS, maps, etc.) to ensure that you’re working with a clean project. This can be useful when preparing for a fresh build.

8. Generate Production-Ready CSS


This command builds the project in production mode, minifying CSS files and optimizing them for faster load times.

9. List Available Commands


Displays all available commands, providing a quick reference for tasks that can be executed via the Nwayo CLI.

10. Nwayo Configurations (View or Edit) 


Allows you to view or modify the configuration settings for your Nwayo setup, such as output paths or preprocessing options.

By utilizing these commands, you can take full advantage of Nwayo’s features and streamline your front-end development workflow in Magento 2 or other compatible frameworks.

For a complete list of commands, visit the Nwayo CLI Documentation.

Reference Links

For more detailed information and official documentation on Nwayo, visit the following resources:

  1. Nwayo Official Documentation
    https://documentation.absolunet.com/nwayo/
    This is the official guide to setting up and using Nwayo. It includes installation instructions, supported commands, and best practices for integrating Nwayo with various frameworks, including Magento 2.
  2. Nwayo GitHub Repository
    https://github.com/absolunet/nwayo
    The GitHub repository provides access to the Nwayo source code, release notes, and additional resources for developers looking to contribute or understand the inner workings of the tool.
  3. Nwayo CLI Documentation
    https://npmjs.com/package/@absolunet/nwayo-cli
    This page details the Nwayo CLI, including installation instructions, supported commands, and usage examples.

Conclusion

In conclusion, Nwayo can significantly simplify the development process, allowing developers to focus on building unique features rather than spending time on repetitive tasks. By utilizing existing code templates and libraries, developers can save time and improve their productivity.

Live Agent Escalation in Copilot Studio Using D365 Omnichannel – Architecture and Use Case https://blogs.perficient.com/2025/08/13/live-agent-escalation-in-copilot-studio-using-d365-omnichannel-architecture-and-use-case/ https://blogs.perficient.com/2025/08/13/live-agent-escalation-in-copilot-studio-using-d365-omnichannel-architecture-and-use-case/#respond Wed, 13 Aug 2025 05:58:08 +0000 https://blogs.perficient.com/?p=385242

With the increasing use of AI chatbots, businesses often face one key challenge: when and how to seamlessly hand over the conversation from a bot to a human agent.

In this two-part series, I’ll walk you through how we used Microsoft Copilot Studio and Dynamics 365 Omnichannel to build a live agent escalation feature. Part 1 will focus on the why, what, and architecture, and Part 2 will deep dive into the actual implementation.

Problem Statement

Chatbots are great for handling FAQs and basic support, but they fall short when:

  • A customer is frustrated or confused

  • Complex or sensitive issues arise

  • Immediate human empathy or decision-making is needed

In such cases, a real-time live agent transfer becomes essential.

High-Level Use Case

We built a chatbot for a customer portal using Copilot Studio. While it handles common queries, we also needed to:

  • Escalate conversations to live agents if the user asks for it

  • Preserve chat context during handoff

  • Route to the correct agent or queue based on rules

  • Provide agents with complete chat history and customer info

Architecture Overview

Here’s how the components interact:

[User] → [Copilot Studio Bot] → [Transfer to Agent Node] → [Omnichannel Workstream] → [Queue with Available Agents] → [Agent in Customer Service Workspace]
Architecture Copilot Live Agent

Tools Involved

  • Copilot Studio: Low-code chatbot builder

  • D365 Omnichannel for Customer Service: Real-time chat and routing

  • Customer Service Workspace: Where agents receive and respond to chats

  • Web Page: To host the bot on a public-facing portal

Benefits of This Integration

  • Bot handles everyday tasks, reducing agent load

  • Smooth escalation without losing chat context

  • Intelligent routing via workstreams and queues

  • Agent productivity improves with transcript visibility and customer profile context

Conclusion

In this first part of our blog series, we explored the high-level architecture and components involved in enabling a seamless live agent transfer from Copilot Studio to a real support agent via D365 Omnichannel.

By combining the conversational power of Copilot Studio with the robust routing and session management capabilities of Omnichannel for Customer Service, organizations can elevate their customer support experience by offering the best of both automation and human interaction.

What’s Next in Part 2?

In Part 2, I’ll walk you through:

  • Setting up Omnichannel in D365

  • Creating the bot in Copilot Studio

  • Configuring escalation logic

  • Testing the live agent transfer end-to-end

Stay tuned!

Frontend Standards for Optimizely Configured Commerce: Clean & Scalable Web Best Practices https://blogs.perficient.com/2025/08/13/frontend-standards-for-optimizely-configured-commerce-clean-scalable-web-best-practices/ https://blogs.perficient.com/2025/08/13/frontend-standards-for-optimizely-configured-commerce-clean-scalable-web-best-practices/#respond Wed, 13 Aug 2025 05:02:02 +0000 https://blogs.perficient.com/?p=385731

Optimizely Configured Commerce (Spire) is a strong platform for creating content-rich ecommerce sites, especially suited for wholesalers and manufacturers. Alongside powerful e-commerce features like product recommendations and fast ordering, it supports extensive storefront customization with blogs, case studies, forums, and other content types.

To build clean, scalable, and maintainable websites on Optimizely Configured Commerce, it’s crucial to follow these essential frontend coding standards and best practices.

  • Maintaining clean, standardized frontend code ensures scalable, accessible, and high-performing websites. 
  • Optimizely Spire emphasizes structured, reusable components instead of raw HTML or inconsistent styling. 
  • Adopting consistent coding conventions across HTML, CSS, and component design keeps projects maintainable and easy to scale. 

The suggested coding standards for Spire’s frequently used frontend elements are listed below.

Wrapper Elements (div)

Using generic wrappers like <div> can lead to unnecessary nesting and less maintainable code. Prefer styled components to create clear, consistent, and scalable layouts. 

    • Less preferable options

      <div class="container">…</div>
      <Typography as="div">…</Typography>
      
    • Recommended options

      <StyledWrapper className="container">…</StyledWrapper>

Heading (h1-h6, p, span, strong)

Use the Typography component with the correct “as” prop for all headings and text elements to ensure semantic HTML, accessibility, and consistent styling. 

Note: If you don't specify the "as" prop, Typography defaults to rendering as a <span>.

    • Less preferable options

      <h2> Title</h2>
      <div class="heading">…</div>
    • Recommended options

      <Typography as="h2">…</Typography>

Anchor/Button

For navigation and actions, avoid mixing raw tags. Use dedicated components to keep interactions accessible, consistent, and semantically correct. 

    • Less preferable options

      <a href="/url">…</a>
      <button><a href="/url">…</a></button>
      <button class="btn btn-primary">…</button>
      
    • Recommended options

      <Link href="url">…</Link>
      <Clickable href="url">…</Clickable>
      <Button variant="primary">…</Button>
      

Image

Avoid raw <img> tags and use image components that provide better accessibility and responsive handling for consistent and maintainable layouts.

    • Less preferable options

      <img src="/images/logo.png" alt="Logo" />
    • Recommended options

      <Img src="/images/logo.png" altText={translate("Logo")} />

Table Elements (table, thead, tbody, th, td)

Avoid using raw HTML tables or div-based layouts. Instead, use table components to maintain semantic markup, improve accessibility, and ensure consistent styling.

    • Less preferable options

      <div style="display:table;width:100%">
        <div style="display:table-row;font-weight:bold">
          <div style="display:table-cell;padding:8px">…</div>
        </div>
      </div>
      
      <table>
        <tr><th>…</th></tr>
        <tr><td>…</td></tr>
      </table>
      
    • Recommended options

      <DataTable>
          <DataTableHead>
              <DataTableHeader>Date</DataTableHeader>
          </DataTableHead>
          <DataTableBody>
              <DataTableRow><DataTableCell>2025-06-28</DataTableCell></DataTableRow>
          </DataTableBody>
      </DataTable>
      

Row/Column

GridContainer and GridItem help you avoid hard-coded rows, columns, and improper nesting by preventing inline styles, ensuring clean, responsive layouts, and maintaining a consistent grid structure throughout the project.

    • Less preferable options

      <div class="row">
          <div class="column" style="width: 50%;">Left content</div>
          <div class="column" style="width: 50%;">Right content</div>
      </div>
      
      <GridContainer>
          <GridItem width={[12, 12, 12, 12, 12]}>
              <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem>
              <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem>          
          </GridItem>
      </GridContainer>
      
      <GridContainer>
          <StyledWrapper>
              <GridItem width={[12, 12, 12, 6, 6]}>…</GridItem>
          </StyledWrapper>
          <GridItem width={[12, 12, 12, 6, 6]}>…</GridItem>
      </GridContainer>
      
    • Recommended options

      <GridContainer>
          <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem>
          <GridItem width={[12, 12, 12, 6, 6]}>…</GridItem>
      </GridContainer>
      
      <GridContainer>
          <GridItem width={[12, 12, 12, 12, 12]}>
              <GridContainer>
                  <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem>
                  <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem>  
              </GridContainer>
          </GridItem>
      </GridContainer>
      
      <GridContainer>
          <GridItem width={[12, 12, 12, 12, 12]}>
              <GridContainer>
                  <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem>
                  <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem>
              </GridContainer>
              <StyledWrapper>…</StyledWrapper>
          </GridItem>
      </GridContainer>
      

Hidden/VisuallyHidden

Avoid using inline styles or custom classes to hide content. Instead, use dedicated components like Hidden or VisuallyHidden to ensure accessibility and consistent behavior.

    • Less preferable options

      <div style="display: none;">…</div>
      
      <span style="position: absolute; width: 1px; height: 1px; margin: -1px; padding: 0; border: 0; clip: rect(0, 0, 0, 0); overflow: hidden;">…</span>
      
      <span class="visually-hidden">…</span>
      
    • Recommended options

      <Hidden below="md">…</Hidden>
      <VisuallyHidden>…</VisuallyHidden>
      

Translation

Always wrap translatable text with the translate() function to ensure localization support, avoid hardcoded strings, and make the UI adaptable to different languages.

    • Less preferable options

      <Typography>Translatable Text</Typography>
    • Recommended options

      <Typography>{translate("Translatable Text")}</Typography>

GridItem and GridContainer – Spacing

Use the gap property on GridContainer to manage spacing between columns, rather than manually adding padding or margins. This ensures cleaner, more consistent, and maintainable layouts.

    • Less preferable options

      customGridContainer: { css: css` > div > div {padding: 8px;} `,}
    • Recommended options

      customGridContainer: {gap: 16}

Breakpoints

Customize your breakpoints to avoid repeating identical values. This promotes consistency, reduces redundancy, and supports a scalable layout aligned with your design system.

    • Less preferable options

      breakpoints: {
          keys: ["xs", "sm", "md", "lg", "xl"],
          values: [0, 320, 768, 1024, 1024],
          maxWidths: [390, 768, 1024, 1024, 1440],
      },
      
    • Recommended options

      breakpoints: {
          keys: ["xs", "sm", "md", "lg", "xl"],
          values: [0, 576, 768, 992, 1200],
          maxWidths: [540, 540, 720, 960, 1140]
      },
      const customTheme = {
          ...baseTheme,
          breakpoints: {
            ...baseTheme.breakpoints,
            values: [0, 800, 1200],        // Custom breakpoint widths
            maxWidths: [520, 720, 1100]    // Max container widths per range
          },
      }

Icons

Avoid hardcoding color values in SVG files. Instead, use currentColor to allow icons to inherit the surrounding text color, ensuring consistency with the theme and simplifying theming.

    • Less preferable options

      <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
        <path d="M16.59 8.59L12 13.17L7.41 8.59L6 10L12 16L18 10Z" fill="#f00"/>
      </svg>
      
    • Recommended options

      <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
        <path d="M16.59 8.59L12 13.17L7.41 8.59L6 10L12 16L18 10Z" fill="currentColor"/>
      </svg>
      

Writing CSS for Dynamically Generated Classes and Selectors

Avoid targeting hashed or auto-generated class names. Instead, use stable selectors or “data-attributes” to ensure maintainable and predictable styling, and to prevent breakage when class names change.

    • Less preferable options

      .girEbT .GridItemStyle-sc-1qhu4nt {…}
      [class*="GridItemStyle-sc-"], 
      [class*="GridItemStyle--"], 
      [class*="GridItemStyle-"] {…}
      
    • Recommended options

      [class*="GridItemStyle"] {…}
      .testClassName {…}
      [data-test-selector="testSelector"] {…}
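
Putting It Together

As a closing illustration, here is a minimal sketch of a presentational component that applies several of the conventions above in one place: Typography with an explicit "as" prop, translate() around user-facing text, and GridContainer/GridItem instead of hand-rolled rows and columns. The import paths and the gap prop are assumptions based on a typical Spire/Mobius setup and may differ in your project.

    import * as React from "react";
    // Import paths are assumptions based on a typical Spire setup; adjust them to your project.
    import Typography from "@insite/mobius/Typography";
    import Link from "@insite/mobius/Link";
    import GridContainer from "@insite/mobius/GridContainer";
    import GridItem from "@insite/mobius/GridItem";
    import translate from "@insite/client-framework/Translation";

    // A small promotional banner that follows the standards above.
    const PromoBanner: React.FC = () => (
        <GridContainer gap={16}>
            <GridItem width={[12, 12, 12, 6, 6]}>
                <Typography as="h2">{translate("Seasonal Offers")}</Typography>
                <Typography as="p">{translate("Save on selected products this month.")}</Typography>
            </GridItem>
            <GridItem width={[12, 12, 12, 6, 6]}>
                <Link href="/offers">{translate("Shop offers")}</Link>
            </GridItem>
        </GridContainer>
    );

    export default PromoBanner;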
      

Conclusion

  • Utilize built-in Optimizely Spire components like Tabs, Accordions, Checkboxes, Radios, and Form Fields to maintain clean, standardized code.
  • Following consistent frontend coding standards helps ensure projects are scalable, maintainable, and accessible.
  • Writing modular, reusable code minimizes technical debt, fosters better collaboration, and enhances overall user experience.
  • Embracing these best practices ensures your projects are future-ready and easier to maintain over time.