Perficient’s PRISM Employee Resource Group (ERG) launched an initiative to engage senior leaders across Perficient in meaningful, visible LGBTQ+ allyship. The Executive Ally Program aims to connect leaders across Perficient with the PRISM community, fostering genuine relationships and creating spaces where every colleague feels seen, heard, and valued. With more than 7,000 colleagues worldwide, this program is a strategic effort to ensure allyship starts at the highest levels of leadership and cascades throughout the company.
Simply put, inclusive leadership drives better business outcomes. Research shows that workplaces that embrace diversity and inclusion attract broader talent pools, improve employee retention, and foster environments where everyone can thrive.
At Perficient, we believe that leaders who actively champion inclusion create stronger, more innovative teams. Through the Executive Ally Program, Perficient colleagues are leading this effort by engaging in open, authentic conversations with their leaders, helping to educate and inspire them to become executive allies.
According to research by Harvard Business Review, members of the LGBTQ+ community defined being a good ally as having three main components: being accepting, taking action, and having humility. There is a phased strategy behind the Executive Ally Program to touch on each of these areas.
PRISM members can help identify and connect with leaders to be part of building a more inclusive and informed culture from the top down. Here is an overview of how the program works:
PRISM recognizes that allyship is a journey. This program lays a solid foundation for leaders looking to increase the depth, breadth, and visibility of their allyship efforts. The program also addresses cultural nuances, aiming to scale inclusivity efforts across Perficient globally.
The inaugural cohort of leaders in the program is currently underway. As these leaders grow in their understanding and commitment, they will serve as inspirations for others across the organization.
Encouraging leaders to take a formal pledge to uphold and champion inclusivity reinforces Perficient’s commitment to fostering an environment where every team member can thrive personally and professionally.
The Executive Ally Program is not a one-time event but a continuous journey toward a more inclusive and supportive workplace. It reflects Perficient’s dedication to creating a culture where everyone can bring their authentic selves to work.
It’s no secret our success is because of our people. No matter the technology or time zone, our colleagues are committed to delivering innovative, end-to-end digital solutions for the world’s biggest brands, and we bring a collaborative spirit to every interaction. We’re always seeking the best and brightest to work with us. Join our team and experience a culture that challenges, champions, and celebrates our people.
Visit our Careers page to see career opportunities and more!
Go inside Life at Perficient and connect with us on LinkedIn, YouTube, Twitter, Facebook, TikTok, and Instagram.
Welcome to Part 2 of this blog series! In Part 1, we discussed the high-level architecture and use case for enabling live agent transfer from a chatbot.
In this post, I’ll walk you through the actual steps to build this feature using:
Copilot Studio: Low-code chatbot builder
D365 Omnichannel for Customer Service: Real-time chat and routing
Customer Service Workspace: Where agents receive and respond to chats
A web page to host the bot on a public-facing portal
To collect feedback after the chat ends, enable the native post-conversation survey feature in Omnichannel.
That’s it – once the chat ends, users will be prompted with your feedback form automatically.
This setup enables a production-ready escalation workflow with:
Smooth escalation without losing chat context
Intelligent routing via workstreams and queues
Full transcript visibility and customer information for agents
This approach balances bot automation with human empathy by allowing live agent transfers when needed. Copilot Studio and D365 Omnichannel work well together for modern, scalable customer service solutions.
In recent years, AI chatbots like ChatGPT have gone from fun tools for answering questions to serious helpers in workplaces, education, and even personal decision-making. With ChatGPT-5 now being the latest and most advanced version, it’s no surprise that people are asking a critical question:
“Is my personal data safe when I use ChatGPT-5?”
ChatGPT-5 is an AI language model created by OpenAI. You can think of it like a super-smart digital assistant that can answer questions, draft and summarize text, write code, and help with everyday tasks.
It learns from patterns in data, but here’s an important point – it doesn’t “remember” your conversations unless the developer has built a special memory feature and you’ve agreed to it.
When you chat with ChatGPT-5, your messages are processed to generate a response. Depending on the app or platform you use, your conversations may be stored for a period of time, reviewed to improve the service, or used to train future models, depending on the platform’s policy and your settings.
This is why reading the privacy policy is not just boring legal stuff – it’s how you find out precisely what happens to your data.
The concerns about ChatGPT-5 (and similar AI tools) are less about it being “evil” and more about how your data could be exposed if not appropriately handled.
Here are the main risks:
Many users unknowingly type personal details – such as their full name, home address, phone number, passwords, or banking information – into AI chat windows. While the chatbot itself may not misuse this data, it is still transmitted over the internet and may be temporarily stored by the platform. If the platform suffers a data breach or if the information is accessed by unauthorized personnel, your sensitive data could be exposed or exploited.
Best Practice: Treat AI chats like public forums – never share confidential or personally identifiable information.
AI chatbots are often integrated into third-party platforms, such as browser extensions, productivity tools, or mobile apps. These integrations may collect and store your chat data on their own servers, sometimes without clearly informing you. Unlike official platforms with strict privacy policies, third-party services may lack robust security measures or transparency.
Risk Example: A browser extension that logs your AI chats could be hacked, exposing all stored conversations.
Best Practice: Use only trusted, official apps and review their privacy policies before granting access.
In rare but serious cases, malicious AI integrations or compromised platforms could capture login credentials you enter during a conversation. If you share usernames, passwords, or OTPs (one-time passwords), these could be used to access your accounts and perform unauthorized actions – such as placing orders, transferring money, or changing account settings.
Real-World Consequence: You might wake up to find that someone used your credentials to order expensive items or access private services.
Best Practice: Never enter login details into any AI chat, and always use two-factor authentication (2FA) for added protection.
If chat logs containing personal information are accessed by cybercriminals, they can use that data to craft highly convincing phishing emails or social engineering attacks. For example, knowing your name, location, or recent purchases allows attackers to impersonate trusted services and trick you into clicking malicious links or revealing more sensitive data.
Best Practice: Be cautious of unsolicited messages and verify the sender before responding or clicking links.
AI chatbots are trained on vast datasets, but they can still generate inaccurate, outdated, or misleading information. Relying on AI responses without verifying facts can lead to poor decisions, especially in areas like health, finance, or legal advice.
Risk Example: Acting on incorrect medical advice or sharing false information publicly could have serious consequences.
Best Practice: Always cross-check AI-generated content with reputable sources before taking action or sharing it.
Here are simple steps you can take:
Never share personal, financial, or login details in AI chats.
Use only official apps and trusted integrations, and review their privacy policies.
Enable two-factor authentication (2FA) on your accounts.
Cross-check AI-generated answers against reputable sources before acting on them.
ChatGPT-5 is a tool, and like any tool, it can be used for good or misused. The AI itself isn’t plotting to steal your logins or credentials, but if you use it carelessly or through untrusted apps, your data could be at risk.
Golden rule: Enjoy the benefits of AI, but treat it like a stranger online – don’t overshare, and keep control of your personal data.
In this part, we focus primarily on generating read-only credentials and using them to connect to the database.
The Mission Control tool generates read-only database credentials for a targeted instance, which remain active for 30 minutes. These credentials allow users to run SELECT and other read-only queries, making it easier to explore data on a cloud instance. This feature is especially helpful for verifying data-related issues without taking a database backup.
1. Log in to Mission Control.
2. Navigate to the Customers tab.
3. Select the appropriate Customer.
4. Choose the Environment for which you need the credentials.
5. Click the Action dropdown in the left pane.
6. Select Generate Database Credentials.
7. A pop-up will appear with a scheduler option.
8. Click Continue to initiate the process.
9. After a short time, the temporary read-only credentials will be displayed.
Once the temporary read-only credentials are generated, the next step is to connect to the database using those credentials.
To do this:
1. Download and install Azure Data Studio (see: Download Azure Data Studio).
2. Open Azure Data Studio after installation.
3. Click “New Connection” or the “Connect” button.
4. Use the temporary credentials provided by Mission Control to connect:
   Server Name: Use the server name from the credentials.
   Authentication Type: SQL Login
   Username and Password: As provided in the credentials.
Once connected, you can execute SELECT queries to explore or verify data on the cloud instance.
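If you prefer to script the check rather than use Azure Data Studio, here is a minimal sketch using Node’s mssql package. The server, database, table, and credential values are placeholders; substitute whatever Mission Control actually generated for your instance.

```typescript
import sql from "mssql";

// Temporary read-only credentials generated by Mission Control (placeholders).
const config = {
  server: "your-instance.database.windows.net", // Server Name from the credentials
  database: "your-database",
  user: "generated-username",
  password: "generated-password",
  options: { encrypt: true }, // Azure-hosted SQL requires encrypted connections
};

async function main() {
  // Connect with SQL Login auth, matching the Azure Data Studio settings above.
  const pool = await sql.connect(config);

  // Read-only credentials permit SELECT queries only.
  const result = await pool.request().query("SELECT TOP 10 * FROM SomeTable");
  console.table(result.recordset);

  await pool.close();
}

main().catch(console.error);
```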
For more details, refer to the official Optimizely documentation on Generating Database Credentials.
For Part I, visit: Optimizely Mission Control – Part I
Have you ever wondered what happens when you ask AI to create an image, write a poem, or draft an email?
Most of us picture “the cloud” working its magic in a distant location. The twist is that the cloud is physical, real, and thirsty. Data centers require water, sometimes millions of gallons per day, to stay cool while AI is operating.
By 2025, it is impossible to overlook AI’s growing water footprint. But don’t worry, AI isn’t to blame here. It’s about comprehending the problem, the ingenious ways technology is attempting to solve it, and what we (as humans) can do to improve the situation.
Doesn’t your laptop heat up quickly when you run it in overdrive for hours? Now multiply that by millions of machines, constantly in operation and stacked in enormous warehouses. That’s a data center.
These facilities are cooled by air conditioning units, liquid cooling, or evaporative cooling to avoid overheating. And gallons of fresh water are lost every day due to evaporative cooling, in which water actually evaporates into the atmosphere to remove heat.
Therefore, there is an invisible cost associated with every chatbot interaction, artificial intelligence-powered search, and generated image: water.
Pretty big, and expanding. According to a 2025 industry report, data centers related to artificial intelligence may use more than 6 billion cubic meters of water a year by the end of this decade. That is roughly equivalent to the annual consumption of a mid-sized nation.
In short, AI’s water consumption is no longer a “future problem.” The effects are already being felt by the communities that surround big data centers. Concerns regarding water stress during dry months have been voiced by residents in places like Arizona and Ireland.
Surprisingly, yes: the same intelligence that consumes all that water is also helping to save it.
Optimized cooling: Businesses are using AI to run data centers more efficiently by predicting exactly when and how much cooling is needed, which can cut water waste by as much as 20–30%.
Technology for liquid cooling: Some new servers are moving to liquid cooling systems, which consume a lot less water than conventional techniques.
Green data centers: Major corporations, such as Google and Microsoft, are testing facilities that use recycled water rather than fresh water for cooling and are powered by renewable energy.
The story, then, isn’t “AI is the problem.” It’s “AI is thirsty, but also learning how to drink smarter.”
Absolutely. Our decisions have an impact, even though most of us don’t manage data centers. Here’s how:
More intelligent use of AI: We can be mindful of how often we run complex AI tasks, just as we try to conserve energy. (Are 50 AI-generated versions of the same image really necessary?)
Encourage green tech: Selecting platforms and services that are dedicated to sustainable data practices encourages the sector to improve.
Community action: Cities can enact laws that promote the use of recycled water in data centers and openness regarding the effects of water use in their communities.
Consider it similar to electricity, whose hidden costs we initially hardly noticed. Efficiency and awareness, however, had a significant impact over time. Water and AI can have the same effect.
AI is only one piece of the global water puzzle. Water stress is still primarily caused by industry, agriculture, and climate change. However, the emergence of AI makes us reevaluate how we want to engage with the planet’s most valuable resource in the digital future.
If this is done correctly, artificial intelligence (AI) has the potential to be a partner in sustainability, not only in terms of how it uses water but also in terms of how it aids in global water monitoring, forecasting, and conservation.
The cloud isn’t magic. It’s water, energy, wires, and metal. And AI’s thirst increases with its growth. However, this is an opportunity for creativity rather than panic. Communities, engineers, and even artificial intelligence (AI) are already rethinking how to keep machines cool without depleting the planet.
So the next time you converse with AI or create an interesting image, remember that every pixel and word carries a hidden drop of water. And the more we know, the better the decisions we can make to keep that future sustainable.
As Perficient continues to lead the world’s most admired brands through their unique AI-first digital transformation journeys, our focus on global collaboration remains at the heart of our success. For us, transformation isn’t just about adopting new technology. It’s about bringing together the right people with the right expertise, no matter where they are in the world, to solve real business challenges and create lasting impact.
With locations across the U.S., Latin America, India, China, and Europe, our teams span continents but operate as one global team. By working across borders, time zones, and areas of expertise, we’re able to offer clients diverse perspectives, deeper industry knowledge, and faster paths to AI-driven innovation. This global mindset is woven into our culture and embedded in the way we approach every engagement. It’s how we ensure that our solutions are not only technically sound but also scalable and aligned with our clients’ long-term goals.
“Global operations thrive on diversity—not just in skills but in perspectives. An inclusive, globally integrated team brings fresh ideas and insights that can be pivotal. We actively create environments where the diverse perspectives of each team member are valued, resulting in solutions that are culturally relevant and forward-thinking,” said Kevin Sheen, senior vice president, in a recent presentation to our Latin America (LatAm) team.
READ MORE: Empowering Transformation Through Global Expertise
At Perficient, global collaboration isn’t a strategy we aspire to—it’s how we work every day. Fueled by intention, transparency, and a shared commitment to excellence, our collaborative culture empowers us to drive AI innovation and boldly advance business in a constantly evolving world.
In this first blog post of our series focused on Perficient’s collaborative culture, we are showcasing how our colleagues around the world are combining their expertise to fuel global growth and build the strong connections that drive our success.
Our LatAm Team’s Global Impact
One of the strongest examples of our global collaboration in action is the growth and evolution of our presence in Latin America. Perficient LatAm began as a small collection of regional offices but, over the years, has grown into a unified, strategically aligned operation that plays an integral role in our global delivery model. With offices across Colombia, Mexico, Uruguay, Argentina, and Chile, the LatAm team has built a shared foundation of delivery and recruitment processes. This strategic alignment didn’t happen overnight. It was driven by intentional leadership, cross-regional transparency, and a commitment to working as a unified team.
In the past two years, our leaders across LatAm have served as key points of contact to more closely align their operations with our U.S. business units. “We began investing in more leadership to act as single points of contact,” said David Arango Gaviria, director, Colombia Sales. “Assigning leaders to our practices helped put us on the map. We are now more exposed to customer and sales processes through these granular interactions.”
One example of this collaboration in action is our LatAm team’s work to support a global leader in the manufacturing industry. While the work originally focused on custom development, our LatAm teams quickly evolved to establish a dedicated commerce practice from the ground up, recruiting, training, and cross-skilling talent to meet the dynamic needs of the client.
“The willingness and openness that everyone showed to building bootcamps, facilitating education, and cross-training was a true example of collaboration,” said David. Our LatAm team has also played a pivotal role in expanding global delivery capacity for clients in industries such as food production and healthcare, supporting complex transitions, integrating seamlessly with our U.S. and India teams, and fostering new ways of working.
Our LatAm team’s impact goes beyond client work. They’ve been key to internal innovation, especially in building accelerators that improve how we deliver. A great example is the Quality Assurance (QA) AI Assistant, which started as a local idea and quickly became a global effort. As the use of AI in Quality Assurance (QA) became integral, our LatAm team proposed a tool to automate tasks like generating user stories and test cases. Their concept brought in collaborators from the U.S. and India, turning it into a cross-regional project. In just two months, the team launched a working, enterprise-ready solution. This is a clear example of how global collaboration speeds up delivery and creates real value for clients.
LEARN MORE: Perficient’s Quality Assurance and Test Automation Services
Perficient India: Building Global Connections Through Local Innovation
Perficient India continues to play a key role in strengthening global collaboration by creating spaces for knowledge sharing, innovation, and alignment with global teams. Events across our Bangalore, Nagpur, Hyderabad, Pune, and Chennai offices are helping connect colleagues across practices and geographies.
At Perfathon 2025 in Bangalore, six teams worked through real-world challenges in a two-day hackathon designed to encourage cross-functional thinking. “Perfathon was more than just a hackathon—it was a vibrant space for collaboration, creativity, and learning,” said Gomathy Raveena Nair, lead technical consultant, Bangalore.
READ MORE: Perfathon 2025 – Hackathon at Perficient
Recent visits from our U.S.-based Financial Services leaders have served as powerful practice-specific moments that continue to shape and strengthen global collaboration. Mangayarkarasi Rengasamy, senior business consultant, Chennai, shared how valuable these interactions have been for creating greater alignment: “We received encouraging feedback on our ongoing engagements, further solidifying our momentum. We engaged in thought-provoking brainstorming and exceptional teamwork. Looking ahead, we have an exciting roadmap of action items.”
From technical meetups to leadership engagement, the India team is helping drive a more connected global culture.
A Culture Built on Connection
At Perficient, global collaboration isn’t just how we deliver. It’s how we grow, solve problems, and innovate together. Across every region and practice, we’re building a culture that empowers our people to actively seek out partnerships, align around shared goals, and bring their full expertise to the table.
As John Vylasek, senior solutions strategist, Data & Analytics, said, “I’ve been leading global teams for many years, and the approach is consistent. Find the people who make the extra effort to communicate, align, and get things done. Whether they’re in Latin America, India, or anywhere else, those relationships are what make the work meaningful.” This level of connection and seamless global collaboration transforms good work into lasting impact. Global collaboration not only brings out results for our clients but truly defines the Perficient experience for our people.
Whether through cross-regional delivery, AI-enabled innovation, or in-person engagement, our teams are united by a common mindset: we work better when we work together. Stay tuned for the next blog in this series, where we will explore how we fulfill our mission through collaboration.
In modern enterprise systems, stability and fault tolerance are not optional; they are essential. One proven approach to ensure robustness is the Circuit Breaker pattern, widely used in API development to prevent cascading failures. HCL Commerce takes this principle further by embedding circuit breakers into its HCL Cache to effectively manage Redis failures.
What Is a Circuit Breaker?
The Circuit Breaker is a design pattern commonly used in API development to stop continuous requests to a service that is currently failing, thereby protecting the system from further issues. It helps maintain system stability by detecting failures and stopping the flow of requests until the issue is resolved.
The circuit breaker typically operates in three main (or “normal”) states. These are part of the standard global pattern of Circuit Breaker design.
Normal States:
Closed – Requests flow through normally while failures are counted; this is the healthy state.
Open – Once failures cross a threshold, requests are blocked and fail fast instead of being sent to the failing service.
Half-Open – After a wait period, a limited number of trial requests are allowed through; success closes the circuit again, while failure reopens it.
(Figure: Circuit breaker pattern with normal states)
Special States:
Disabled – The breaker is switched off entirely, and all requests pass through with no failure tracking.
Forced Open – The breaker is pinned open, and all requests are rejected regardless of the downstream service’s health.
(Figure: Circuit breaker pattern with special states)
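To make these states concrete, here is a minimal, generic sketch of the pattern in TypeScript. This illustrates the classic design, not HCL Cache’s internal implementation; the threshold and wait-time values are illustrative and simply mirror the kind of settings discussed below.

```typescript
type State = "CLOSED" | "OPEN" | "HALF_OPEN";

class CircuitBreaker {
  private state: State = "CLOSED";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 20,   // consecutive failures before opening
    private retryWaitTimeMs = 60000, // how long to stay open before a trial call
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === "OPEN") {
      // Fail fast until the wait period elapses, then allow a trial call.
      if (Date.now() - this.openedAt < this.retryWaitTimeMs) {
        throw new Error("Circuit is open; failing fast");
      }
      this.state = "HALF_OPEN";
    }
    try {
      const result = await fn();
      // Success (in HALF_OPEN or CLOSED) resets the breaker to healthy.
      this.state = "CLOSED";
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      // A failed trial call, or too many consecutive failures, opens the circuit.
      if (this.state === "HALF_OPEN" || this.failures >= this.failureThreshold) {
        this.state = "OPEN";
        this.openedAt = Date.now();
        this.failures = 0;
      }
      throw err;
    }
  }
}
```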
Circuit Breaker in HCL Cache (for Redis)
In HCL Commerce, the HCL Cache layer interacts with Redis for remote caching. But what if Redis becomes unavailable or slow? HCL Cache uses circuit breakers to detect issues and temporarily stop calls to Redis, thus protecting the rest of the system from being affected.
Behavior Overview:
Configuration Snapshot
To manage Redis outages effectively, HCL Commerce provides fine-grained configuration settings for both Redis client behavior and circuit breaker logic. These settings are defined in the Cache YAML file, allowing teams to tailor fault-handling based on their system’s performance and resilience needs.
Redis Request Timeout Configuration
Slow Redis responses are not treated as failures unless they exceed the defined timeout threshold. The Redis client in HCL Cache supports timeout and retry configurations to control how persistent the system should be before declaring a failure:
timeout: 3000       # Max time (in ms) to wait for a Redis response
retryAttempts: 3    # Number of retry attempts on failure
retryInterval: 1500 # Delay (in ms) between each retry attempt
With the above configuration, the system will spend up to 16.5 seconds (3000 + 3 × (3000 + 1500) = 16,500 ms) trying to get a response before returning a failure. While these settings offer robustness, overly long retries can result in delayed user responses or log flooding, so tuning is essential.
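As a quick sanity check, the worst-case wait follows directly from those three settings: the initial attempt’s timeout, plus each retry’s interval wait and its own timeout.

```typescript
// Worst-case time before a failure is reported, derived from the YAML above.
function worstCaseWaitMs(timeout: number, retryAttempts: number, retryInterval: number): number {
  return timeout + retryAttempts * (timeout + retryInterval);
}

console.log(worstCaseWaitMs(3000, 3, 1500)); // 16500 ms = 16.5 seconds
```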
Circuit Breaker Configuration
Circuit breakers are configured under the redis.circuitBreaker section of the Cache YAML file. Here’s an example configuration:
redis:
  circuitBreaker:
    scope: auto
    retryWaitTimeMs: 60000
    minimumFailureTimeMs: 10000
    minimumConsecutiveFailures: 20
    minimumConsecutiveFailuresResumeOutage: 2
cacheConfigs:
  defaultCacheConfig:
    localCache:
      enabled: true
      maxTimeToLiveWithRemoteOutage: 300
Explanation of Key Fields:
scope – Whether failures are tracked per Redis client or per shard; auto selects the scope based on the Redis topology.
retryWaitTimeMs – How long the circuit stays open before Redis connectivity is retried (60 seconds here).
minimumFailureTimeMs – Failures must persist for at least this long (10 seconds) before the circuit opens.
minimumConsecutiveFailures – The number of consecutive failures (20) required before the circuit opens.
minimumConsecutiveFailuresResumeOutage – After a retry, the number of consecutive failures (2) that sends the circuit straight back into outage mode.
maxTimeToLiveWithRemoteOutage – While Redis is down, local cache entries are served with this time-to-live (300 seconds), since remote invalidation is unavailable.
Real-world Analogy
Imagine you have a web service that fetches data from an external API. Here’s how the circuit breaker would work:
Normally (closed), every request goes straight to the API.
If the API starts failing repeatedly, the breaker opens and your service immediately returns a fallback instead of waiting on a dead connection.
After a cool-down period (half-open), a trial request is let through: if it succeeds, normal traffic resumes; if it fails, the breaker opens again.
Final Thought
By combining the classic circuit breaker pattern with HCL Cache’s advanced configuration, HCL Commerce ensures graceful degradation during Redis outages. It’s not just about availability—it’s about intelligent fault recovery.
For more detailed information, you can refer to the official documentation here:
HCL Commerce Circuit Breakers – Official Docs
Imagine starting your workday with an alert not from a human analyst, but from an AI agent. While you slept, this agent sifted through last night’s sales data, spotted an emerging decline in a key region, and already generated a mini-dashboard highlighting the issue and recommending a targeted promotion. No one asked it to; it acted on its own.

This scenario isn’t science fiction or some distant future; it’s the imminent reality of agentic AI in enterprise analytics. Businesses have spent years perfecting dashboards and self-service BI, empowering users to explore data on their own. However, in a world where conditions are constantly changing, even the most advanced dashboard may feel excessively slow.

Enter agentic AI: the next frontier where intelligent agents don’t just inform decisions; they make and even execute decisions autonomously. Over the next 1–3 years, this shift toward AI-driven “autonomous BI” is poised to redefine how we interact with data, how analytics teams operate, and how insights are delivered across organizations.
In this post, we’ll clarify what agentic AI means in the context of enterprise analytics and explore how it differs from traditional automation or self-service BI. We’ll forecast specific changes this paradigm will bring, from business users getting proactive insights to data teams overseeing AI collaborators, and call out real examples (think AI agents auto-generating dashboards, orchestrating data pipelines, or flagging anomalies in real time). We’ll also consider the cultural and organizational implications of this evolution, such as trust and governance, and conclude with a point of view on how enterprises can prepare for the agentic AI era.
Agentic AI (often called agentic analytics in BI circles) refers to analytics systems powered by AI “agents” that can autonomously analyze data and take action without needing constant human prompts. In traditional BI, a human analyst or business user queries data, interprets results, and decides on an action. By contrast, an agentic AI system is goal-driven and proactive; it continuously monitors data, interprets changes, and initiates responses aligned with business objectives on its own. In other words, it shifts the analytics model from simply supporting human decisions to executing or recommending decisions independently.
Put simply, agentic analytics enables autonomous, goal-driven analytic agents that behave like tireless virtual analysts. They’re designed to think, plan, and act much like a human analyst would, but at machine speed and scale. Instead of waiting for someone to run a report or ask a question, these AI agents proactively scan data streams, reason over what they find, and trigger the appropriate next steps. For example, an agent might detect that a KPI is off track and automatically send an alert or even adjust a parameter in a system, closing the loop between insight and action. This stands in contrast to earlier “augmented analytics” or alerting tools that, while they could highlight patterns or outliers, were fundamentally passive; they still waited for a human to log in or respond. Agentic AI, by definition, carries the initiative: it doesn’t just explain what’s happening; it helps change what happens next.
It’s worth noting that the term “agentic” implies having agency, the capacity to act autonomously. In enterprise analytics, this means the AI isn’t just crunching numbers; it’s making choices about what analyses to perform and what operational actions to trigger based on those analyses. This could range from generating a new visualization to writing back results into a CRM to launching a workflow in response to a detected trend. Crucially, agentic AI doesn’t operate in isolation from human goals. These agents are usually configured around explicit business objectives or KPIs (e.g., reduce churn, optimize inventory). They aim to carry out the intent set by business leaders, just without needing a person to micromanage each step.
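As a thought experiment, the loop below sketches what such an agent might look like in code. Everything here is illustrative: the KPI name, threshold, schedule, and integrations are invented stand-ins, but the sketch captures the monitor-reason-act cycle described above.

```typescript
interface KpiReading { name: string; value: number; target: number; }

// Stand-ins for real integrations (warehouse query, alerting, workflow tools).
async function fetchKpi(name: string): Promise<KpiReading> {
  return { name, value: 82_000, target: 100_000 }; // pretend warehouse result
}
async function sendAlert(message: string): Promise<void> {
  console.log("ALERT:", message); // stand-in for email/Slack/Teams
}
async function triggerWorkflow(id: string, payload: object): Promise<void> {
  console.log(`workflow ${id} started`, payload); // stand-in for an action system
}

async function agentTick(): Promise<void> {
  const kpi = await fetchKpi("regional_sales");

  // Reason over what it finds: is the business objective off track?
  const shortfall = (kpi.target - kpi.value) / kpi.target;
  if (shortfall > 0.1) {
    // Act without waiting for a human prompt: alert, then close the loop.
    await sendAlert(`${kpi.name} is ${(shortfall * 100).toFixed(1)}% below target`);
    await triggerWorkflow("targeted-promotion", { region: "key-region", kpi });
  }
}

// Runs continuously, rather than waiting for a user to open a dashboard.
setInterval(() => agentTick().catch(console.error), 15 * 60 * 1000);
```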
It’s important to distinguish agentic AI from the traditional automation and self-service BI approaches that many enterprises have implemented over the past decade. While those were important steps in modernizing analytics, agentic AI goes a step further in several key ways:
Traditional automation follows fixed, pre-programmed rules and schedules; agentic agents plan their own next steps in pursuit of a goal.
Self-service BI still waits for a human to ask the question; agentic AI surfaces both the question and the answer proactively.
Classic alerting flags a pattern and stops; agentic AI carries the initiative through to action, closing the loop between insight and response.
In summary, agentic AI goes beyond what traditional automation or self-service BI can do. If a classic self-service dashboard was like a GPS map you had to read, an agentic AI is like a self-driving car; you tell it where you want to go, and it navigates there (while you watch and ensure it stays on track). This evolution is happening now because of converging advances in technology: more powerful AI models, API-accessible cloud tools, and enterprises’ appetite for real-time, automated decisions. With the groundwork laid, analytics is moving from a manual, human-driven endeavor to a collaborative human-AI partnership, and often, the AI will take the first action.
What practical changes should we expect as agentic AI becomes part of enterprise analytics in the next 1–3 years? Let’s explore the forecast across three dimensions: how business users interact with data, how data and analytics teams work, and how analytics capabilities are delivered in organizations.
Impact on Business Users: From Asking for Insights to Acting on Conversations
For business users (the managers, analysts, and non-technical staff who consume data), agentic AI will make analytics feel more like a conversation and less like a hunt for answers. Instead of clicking through dashboards or waiting for weekly reports, users will have AI assistants that deliver insights proactively and in real time.
Overall, for business users, the next few years with agentic AI will feel like analytics has turned from a static product (dashboards and reports you check) into an interactive service (an intelligent assistant that tells you what you need to know and helps you act on it). The organizations that embrace this will likely see faster decision cycles and a more data-informed workforce, as employees spend less time gathering insights and more time using them.
For data and analytics teams (data analysts, BI developers, data engineers, data scientists), agentic AI will bring a significant shift in roles and workflows. Rather than manually producing every insight or report, these teams will collaborate with AI agents and focus on enabling and governing these agents.
In short, data teams will transition from being the sole producers of analytics output to being the enablers and overseers of AI-driven analytics. Their success will be measured not just by the reports they build, but by how well they can leverage AI to scale insights. This means stronger emphasis on data quality, real-time data availability, and robust governance. Culturally, it may require a mindset shift: accepting that some of the work traditionally done “by hand” can be delegated to machines, and that the value of the team is in how they guide those machines and interpret the results, rather than in producing every chart themselves. Organizations that prepare their data talent for this augmented role, through training in AI tools and proactive change management, will handle the transition more smoothly.
Agentic AI will also transform how analytics capabilities are delivered and consumed in the enterprise. Today, the typical delivery mechanism is a dashboard, report, or perhaps a scheduled email, in other words, the user has to go to a tool or receive a static packet of information. In the coming years, analytics delivery will become more embedded, continuous, and personalized, largely thanks to AI agents working behind the scenes.
To sum up, analytics capabilities will be delivered more fluidly and in an integrated fashion. Rather than thinking of “going to analytics,” the analytics will come to you, often initiated by an agent. Dashboards and reports will not disappear overnight (they still have their place for deep dives and record-keeping), but the center of gravity will shift toward timely insights injected into decision points. The business impact is significant: decisions can be made faster and in context, and fewer opportunities or risks will slip through unnoticed between reporting cycles. It’s a world where, ideally, nothing important waits for the next report; your AI agent has already informed the right people or taken action.
The technical capabilities of agentic AI are exciting, but enterprises must also grapple with cultural and organizational implications. Introducing autonomous AI into analytics workflows will affect how people feel about trust, control, and their own roles. Here are some key considerations:
Trust: people need transparency into why an agent acted before they will rely on its outputs.
Control: organizations must draw clear boundaries between what agents may do autonomously and what requires human sign-off.
Roles: analysts shift from producing every answer themselves to supervising, tuning, and auditing the agents that produce them.
Agentic AI in analytics is on the horizon, and the time to prepare is now. Here’s a forward-thinking game plan for enterprises to get ready for this shift:
Strengthen data infrastructure so agents have timely, high-quality data to act on.
Cultivate AI-friendly skills across business and data teams through training and proactive change management.
Establish clear governance: define what agents may do, log their actions, and audit the outcomes.
Start small, piloting well-defined tasks such as anomaly alerts and report generation before expanding autonomy.
Agentic AI represents a bold leap in the evolution of business intelligence, from tools that we operate to intelligent agents that work alongside us (and sometimes ahead of us). In the next 1–3 years, we can expect early forms of these AI agents to become part of everyday analytics in forward-thinking enterprises. They will likely start by tackling well-defined tasks: automatically generating reports, sending alerts for anomalies, and answering common analytical questions. Over time, as trust and sophistication grow, their autonomy will increase to more complex orchestrations and decision executions. The payoff can be substantial: faster decision cycles, decisions that are more data-driven and less prone to human overlook, and analytics capabilities that truly scale across an organization. Companies that embrace this shift early could gain a competitive edge, outpacing those stuck in manual analytics with speed, agility, and insights that are both deeper and more timely.
Yet, success with agentic AI won’t come just from buying the latest AI tool. It requires a thoughtful approach to technology, process, and people. The enterprises that thrive will be those that pair innovation with governance, enthusiasm with education, and automation with a human touch. By laying the groundwork now, improving data infrastructure, cultivating AI-friendly skills, and establishing clear rules, organizations can confidently welcome their new AI “colleagues” and harness their potential. In the near future, your most trusted analyst might not be a person at all, but an algorithmic agent that never sleeps, never gets tired, and continuously learns. The question is, will your organization be ready to partner with it and leap ahead into this new age of analytics?
In today’s hyper-personalized digital world, delivering the right message to the right customer at the right time is non-negotiable.
Adobe Commerce is a powerful eCommerce engine, but when coupled with Adobe Real-Time CDP (Customer Data Platform), it evolves into an intelligent experience machine capable of deep AI-powered personalization, dynamic segmentation, and real-time responsiveness.
Adobe Real-Time CDP is a Customer Data Platform that collects and unifies data across various sources (websites, apps, CRM, etc.) into a single, comprehensive real-time customer profile. This data is then accessible to other systems for marketing, sales, and service.
Adobe Commerce offers native customer segmentation, but it’s limited to session or behavior data within the commerce environment. When the customer data is vast, the native segmentation becomes very slow, impacting overall performance.
| Feature | Native Commerce | Adobe Real-Time CDP |
|---|---|---|
| Segmentation | Static, rule-based | Real-time, AI-powered |
| Data Sources | Commerce-only | Omnichannel (web, CRM, etc.) |
| Personalization | Session-based | Cross-channel, predictive |
| Identity Graph | No identity graph | Cross-device customer data |
| Activation | Limited to Commerce | Activate across systems |
Integrating Adobe Commerce with CDP empowers both business and technical teams to unify profiles and deliver the personalization needed to stay ahead in a dynamic marketplace.
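As one simplified illustration, a storefront instrumented with the Adobe Experience Platform Web SDK can stream commerce events into Real-Time CDP. The sketch below assumes the Web SDK (alloy.js) is already loaded and configured against a datastream; the SKU and product values are placeholders.

```typescript
// The Web SDK exposes a global command function, conventionally named `alloy`.
declare function alloy(command: string, options?: object): Promise<unknown>;

// Send a product-view event from the Commerce storefront into Experience
// Platform, where it lands on the visitor's Real-Time Customer Profile.
async function trackProductView(sku: string, name: string): Promise<void> {
  await alloy("sendEvent", {
    xdm: {
      eventType: "commerce.productViews",
      commerce: { productViews: { value: 1 } },
      productListItems: [{ SKU: sku, name, quantity: 1 }],
    },
  });
}

trackProductView("24-MB01", "Joust Duffle Bag").catch(console.error);
```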
Adobe Real-Time CDP is not just a marketing tool; it’s an asset for creating commerce experiences that adapt to the customer in real time.
Nwayo Preprocessor is an extendable front-end boilerplate designed to streamline development for multi-theme, multi-site, and multi-CMS front-end frameworks. It provides an efficient workflow for building responsive, scalable, and maintainable web themes across different platforms.
In Magento 2, Nwayo can be particularly beneficial for front-end developers as it simplifies the theme deployment process. With just a single change in the Sass files, the framework can automatically regenerate and apply updates across the site. This approach not only accelerates the development process but also ensures consistency in the front-end experience across various themes and websites.
Nwayo is highly extensible, allowing developers to tailor the boilerplate to their specific project needs. Whether it’s adding new workflows, integrating with different CMS platforms, or customizing the theme, Nwayo provides a flexible framework that can adapt to various front-end requirements.
With built-in commands to check versions and install updates, Nwayo makes it easy to keep the workflow up to date. This ensures compatibility with the latest development tools and standards, helping developers stay current with front-end best practices.
i) Node.js
ii) Nwayo CLI
Run the commands in your project root folder
Nwayo integrates seamlessly with Magento 2, simplifying the process of managing multi-theme, multi-site environments. Automating Sass compilation and CSS generation allows developers to focus on custom features without worrying about the manual overhead of styling changes. With Nwayo, any updates to your Sass files are quickly reflected across your Magento 2 themes, saving time and reducing errors.
Nwayo is a versatile tool designed to work with various front-end frameworks and CMS platforms. Its extendable architecture allows it to be used beyond Magento 2, providing a unified front-end development workflow for multiple environments. Some of the other frameworks and platforms that Nwayo supports include:
Nwayo can be easily adapted to work with WordPress themes. Since WordPress sites often rely on custom themes, Nwayo can handle Sass compilation and make theme management simpler by centralizing the CSS generation process for various stylesheets used in a WordPress project.
For Drupal projects, Nwayo can streamline theme development, allowing developers to work with Sass files while ensuring CSS is consistently generated across all Drupal themes. This is especially helpful when maintaining multi-site setups within Drupal, as it can reduce the time needed for theme updates.
When working with Laravel-based applications that require custom front-end solutions, Nwayo can automate the build process for Sass files, making it easier to manage the styles for different views and components within Laravel Blade templates. It helps keep the front-end codebase clean and optimized.
Nwayo can also be used in static site generators like Jekyll or Hugo. In these setups, it handles the styling efficiently by generating optimized CSS files from Sass. This is particularly useful when you need to manage themes for static websites where speed and simplicity are key priorities.
Nwayo’s CLI and Sass-based workflow can be customized to work in nearly any front-end project, regardless of the underlying CMS or framework. This makes it suitable for developers working on custom projects where there’s no predefined platform, allowing them to benefit from a consistent and efficient development workflow across different environments.
Nwayo includes several built-in features for optimizing front-end assets:
Nwayo’s modular architecture allows developers to easily add or remove features from the workflow. Whether you’re working with React, Vue, or other JavaScript frameworks, Nwayo’s preprocessor can be extended to fit the unique requirements of any project.
The table below summarizes Nwayo’s compatibility with the frameworks and CMS platforms discussed above, giving developers an overview of its flexibility:

| Platform | How Nwayo Helps |
|---|---|
| Magento 2 | Multi-theme, multi-site Sass compilation and theme deployment |
| WordPress | Centralized Sass compilation for custom themes |
| Drupal | Consistent CSS generation across multi-site themes |
| Laravel | Automated Sass builds for Blade views and components |
| Jekyll / Hugo | Optimized CSS for static site themes |
| Custom projects | CLI and Sass workflow adaptable to any front end |
In addition to the basic commands for setting up and managing Nwayo in your project, here are other helpful commands you can use for various tasks:
This command allows you to verify the currently installed version of Nwayo in your environment.
Installs third-party dependencies required by the Nwayo workflow, making sure your project has all the necessary assets to function correctly.
This command clears the node_modules folder, which may be helpful if you’re facing dependency issues or need to reinstall modules.
Runs a complete build of the project, compiling all Sass files into CSS. This is typically used when preparing a project for production.
Watches for changes in your Sass files and automatically compiles them into CSS. This is useful during development when you want real-time updates without having to manually trigger a build.
Checks your Sass files for code quality and best practices using predefined linting rules. This helps ensure that your codebase follows consistent styling and performance guidelines.
Removes generated files (CSS, maps, etc.) to ensure that you’re working with a clean project. This can be useful when preparing for a fresh build.
This command builds the project in production mode, minifying CSS files and optimizing them for faster load times.
Displays all available commands, providing a quick reference for tasks that can be executed via the Nwayo CLI.
Allows you to view or modify the configuration settings for your Nwayo setup, such as output paths or preprocessing options.
By utilizing these commands, you can take full advantage of Nwayo’s features and streamline your front-end development workflow in Magento 2 or other compatible frameworks.
For a complete list of commands, visit the Nwayo CLI Documentation.
For more detailed information and official documentation on Nwayo, visit the following resources:
In conclusion, using Nwayo code can significantly simplify the development process, allowing developers to focus on building unique features rather than spending time on repetitive tasks. By utilizing existing code templates and libraries, developers can save time and improve their productivity.
With the increasing use of AI chatbots, businesses often face one key challenge: when and how to seamlessly hand over the conversation from a bot to a human agent.
In this two-part series, I’ll walk you through how we used Microsoft Copilot Studio and Dynamics 365 Omnichannel to build a live agent escalation feature. Part 1 will focus on the why, what, and architecture, and Part 2 will take a deep dive into the actual implementation.
Chatbots are great for handling FAQs and basic support, but they fall short when:
A customer is frustrated or confused
Complex or sensitive issues arise
Immediate human empathy or decision-making is needed
In such cases, a real-time live agent transfer becomes essential.
We built a chatbot for a customer portal using Copilot Studio. While it handles common queries, we also needed to:
Escalate conversations to live agents if the user asks for it
Preserve chat context during handoff
Route to the correct agent or queue based on rules
Provide agents with complete chat history and customer info
Here’s how the components interact:
Copilot Studio: Low-code chatbot builder
D365 Omnichannel for Customer Service: Real-time chat and routing
Customer Service Workspace: Where agents receive and respond to chats
Web Page: To host the bot on a public-facing portal
Bot handles everyday tasks, reducing agent load
Smooth escalation without losing chat context
Intelligent routing via workstreams and queues
Agent productivity improves with transcript visibility and customer profile context.
In this first part of our blog series, we explored the high-level architecture and components involved in enabling a seamless live agent transfer from Copilot Studio to a real support agent via D365 Omnichannel.
By combining the conversational power of Copilot Studio with the robust routing and session management capabilities of Omnichannel for Customer Service, organizations can elevate their customer support experience by offering the best of both automation and human interaction.
In Part 2, I’ll walk you through:
Setting up Omnichannel in D365
Creating the bot in Copilot Studio
Configuring escalation logic
Testing the live agent transfer end-to-end
Stay tuned!
Optimizely Configured Commerce (Spire) is a strong platform for creating content-rich ecommerce sites, especially suited for wholesalers and manufacturers. Alongside powerful e-commerce features like product recommendations and fast ordering, it supports extensive storefront customization with blogs, case studies, forums, and other content types.
To build clean, scalable, and maintainable websites on Optimizely Configured Commerce, it’s crucial to follow these essential frontend coding standards and best practices.
The suggested coding standards for Spire’s frequently used frontend elements are listed below.
Using generic wrappers like <div> can lead to unnecessary nesting and less maintainable code. Prefer styled components to create clear, consistent, and scalable layouts.
<div class="container">…</div> <Typography as="div">…</ Typography >
<StyledWrapper className="container">…</StyledWrapper>
Use the Typography component with the correct “as” prop for all headings and text elements to ensure semantic HTML, accessibility, and consistent styling.
Note: If you don’t specify the “as” prop, Typography defaults to rendering as a <span>
Avoid:

<h2>Title</h2>
<div class="heading">…</div>

Prefer:

<Typography as="h2">…</Typography>
For navigation and actions, avoid mixing raw tags. Use dedicated components to keep interactions accessible, consistent, and semantically correct.
<a href="/url">…</a> <button><a href="/url">…</a></button> <button class="btn btn-primary">…</button>
<Link href="url">…</Link> <Clickable href="url">…</Clickable> <Button variant="primary">…</Button>
Avoid raw <img> tags and use image components that provide better accessibility and responsive handling for consistent and maintainable layouts.
<img src="/images/logo.png" alt="Logo" />
<Img src="/images/logo.png" altText={translate(“Logo")} />
Avoid using raw HTML tables or div-based layouts. Instead, use table components to maintain semantic markup, improve accessibility, and ensure consistent styling.
<div style="display:table;width:100%"> <div style="display:table-row;font-weight:bold"> <div style="display:table-cell;padding:8px">…</div> </div> </div> <table> <tr><th>…</th></tr> <tr><td>…</td></tr> </table>
<DataTable> <DataTableHead> <DataTableHeader>Date</DataTableHeader> </DataTableHead> <DataTableBody> <DataTableRow><DataTableCell>2025-06-28</DataTableCell></DataTableRow> </DataTableBody> </DataTable>
GridContainer and GridItem prevent inline styles, keep layouts clean and responsive, and maintain a consistent grid structure throughout the project; use them instead of hard-coded rows, columns, or improper nesting.
<div class="row"> <div class="column" style="width: 50%;">Left content</div> <div class="column" style="width: 50%;">Right content</div> </div> <GridContainer> <GridItem width={[12, 12, 12, 12, 12]}> <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem> <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem> </GridItem> </GridContainer> <GridContainer> <StyledWrapper> <GridItem width={[12, 12, 12, 6, 6]}>…</GridItem> </StyledWrapper> <GridItem width={[12, 12, 12, 6, 6]}>…</GridItem> </GridContainer>
<GridContainer> <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem> <GridItem width={[12, 12, 12, 6, 6]}>…</GridItem> </GridContainer> <GridContainer> <GridItem width={[12, 12, 12, 12, 12]}> <GridContainer> <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem> <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem> </GridContainer> </GridItem> </GridContainer> <GridContainer> <GridItem width={[12, 12, 12, 12, 12]}> <GridContainer> <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem> <GridItem width={[12, 12, 12, 6, 6]}>… </GridItem> </GridContainer> <StyledWrapper>…</StyledWrapper> </GridItem> </GridContainer>
Avoid using inline styles or custom classes to hide content. Instead, use dedicated components like Hidden or VisuallyHidden to ensure accessibility and consistent behavior.
<div style="display: none;">…</div> <span style="position: absolute; width: 1px; height: 1px; margin: -1px; padding: 0; border: 0; clip: rect(0, 0, 0, 0); overflow: hidden;">…</span> <span class="visually-hidden">…</span>
<Hidden below="md">…</Hidden> <VisuallyHidden>…</VisuallyHidden>
Always wrap translatable text with the translate() function to ensure localization support, avoid hardcoded strings, and make the UI adaptable to different languages.
Avoid:

<Typography>Translatable Text</Typography>

Prefer:

<Typography>{translate("Translatable Text")}</Typography>
Use the gap property on GridContainer to manage spacing between columns, rather than manually adding padding or margins. This ensures cleaner, more consistent, and maintainable layouts.
Avoid:

customGridContainer: {
  css: css`
    > div > div { padding: 8px; }
  `,
}

Prefer:

customGridContainer: { gap: 16 }
Customize your breakpoints to avoid repeating identical values. This promotes consistency, reduces redundancy, and supports a scalable layout aligned with your design system.
Avoid:

breakpoints: {
  keys: ["xs", "sm", "md", "lg", "xl"],
  values: [0, 320, 768, 1024, 1024],
  maxWidths: [390, 768, 1024, 1024, 1440],
},

Prefer:

breakpoints: {
  keys: ["xs", "sm", "md", "lg", "xl"],
  values: [0, 576, 768, 992, 1200],
  maxWidths: [540, 540, 720, 960, 1140],
},

const customTheme = {
  ...baseTheme,
  breakpoints: {
    ...baseTheme.breakpoints,
    values: [0, 800, 1200],      // Custom breakpoint widths
    maxWidths: [520, 720, 1100], // Max container widths per range
  },
};
Avoid hardcoding color values in SVG files. Instead, use currentColor to allow icons to inherit the surrounding text color, ensuring consistency with the theme and simplifying theming.
Avoid:

<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
  <path d="M16.59 8.59L12 13.17L7.41 8.59L6 10L12 16L18 10Z" fill="#f00"/>
</svg>

Prefer:

<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
  <path d="M16.59 8.59L12 13.17L7.41 8.59L6 10L12 16L18 10Z" fill="currentColor"/>
</svg>
Avoid targeting hashed or auto-generated class names. Instead, use stable selectors or “data-attributes” to ensure maintainable and predictable styling, and to prevent breakage when class names change.
Avoid:

.girEbT .GridItemStyle-sc-1qhu4nt {…}
[class*="GridItemStyle-sc-"],
[class*="GridItemStyle--"],
[class*="GridItemStyle-"] {…}

Prefer:

[class*="GridItemStyle"] {…}
.testClassName {…}
[data-test-selector="testSelector"] {…}