With the advancement of machine learning and AI capabilities in the customer care space, customer expectations are evolving faster than ever. Customers expect smoother, context-aware, personalized, and generally faster and more effective experiences across channels when contacting a support center.
This calls for a need to revisit and redefine the success metrics for a Contact Center as a Service (CCaaS) strategy.
Let’s break this down into two categories. The first category includes key metrics that are still essential to measure, though the standards for these metrics have been raised and the way they are measured has evolved. The second category introduces new metrics that are emerging because of advanced CCaaS capabilities in the modern contact center landscape.
Customer Satisfaction (CSAT) remains a cornerstone success metric. Every improvement a customer service center is looking to make, from improving operational efficiency to enhancing agent and customer experience, will directly or indirectly impact the customer and is aimed at elevating the customer experience. With automated personalized journeys being an important part of modern customer service, it is important to monitor real-time analytics on automated journeys in addition to live agent interactions. This helps better understand the customer experience and find opportunities to fine-tune friction points and improve customer satisfaction. Customer service is not only about resolving customer issues, but also about providing an effortless experience.
First Contact Resolution is still a key success metric in the CCaaS space, but modern tools can revolutionize how far a customer service center can go to improve it, so the standards for this metric have risen. Passing context effectively across channels, real-time monitoring, predictive analytics and insights, and proactive outreach can increase the likelihood of addressing customer needs on the first contact, or sometimes even without a live agent interaction.
The Customer Retention Rate metric has been revamped with the advancement of technology in customer service. Advanced predictive analytics can help track the customer experience throughout the journey and shed light on underlying customer behavior patterns. This enables proactive engagement strategies personalized to every customer. Real-time sentiment analysis can provide instant feedback to customer service representatives and their supervisors, giving them a chance to course-correct immediately, shift the sentiment toward a positive experience, and retain customers.
Agent Experience and Satisfaction has a direct impact on the operation of a contact center and hence on the customer experience. Traditionally, this was not broadly tracked as an important measure of a successful contact center strategy. However, we know today that agent experience and satisfaction is a key metric for transforming contact centers from cost centers into revenue-generating units. Contact centers can leverage modern tools in areas ranging from agent performance monitoring, training, and knowledge-gap identification to automated workflows and real-time agent assistance, all to elevate the agent experience.
These strategies and tools help agents become more effective and productive while providing service. Satisfied agents are more motivated to help customers effectively, which can improve metrics like First Contact Resolution rate and Average Handle Time. Happy and productive agents are also more likely to engage positively with customers to discuss potential cross-sell and upsell opportunities. Moreover, agent turnover and its associated costs will drop, reducing the burden of constantly onboarding and training new agents while being short-staffed.
Sentiment Analysis and Real-time Interaction Quality provide immediate insights to contact center representatives about the customer’s emotions, the conversation tone, and the effectiveness of their interactions. This helps representatives refine their interaction strategy on the spot to maintain a positive and effective engagement with the customer. It transforms contact centers into emotionally intelligent, customer-focused support centers, which makes a huge difference at a time when the quality of the experience matters as much as the outcome.
Predictive Analysis Accuracy represents an entirely new set of metrics for a modern contact center that leverages predictive analytics in its operation. It is crucial to measure this metric and evaluate the accuracy of forecasts against customer behavior and demand as well as agent workflow needs. Inaccurate predictions are not only ineffective but can also be harmful to contact center operations; they can lead to poor decision-making, confusion, and disappointing customer experiences. Accurate anticipation of customer needs enables proactive outreach, positive and effective interactions, fewer friction points, and reduced service contacts, while facilitating effective automated upsell and cross-sell initiatives.
Technology Utilization Rate is an important metric to track in a modern, evolving customer care solution. While the latest technological advancements enable a great deal of intelligent automation and enhancement within a CCaaS solution, a contact center strategy is required to identify the most impactful capabilities for each customer service operation. The strategy needs to incorporate tracking the success of technology adoption through system usage data and adoption metrics. This ensures that technology is being leveraged effectively and is providing value to the business. Technology utilization tracking can also reveal training and adoption gaps, ensuring that modern tools are not just implemented for the sake of innovation, but are actively contributing to improved efficiency within the contact center.
The development of advanced native capabilities and integration of modern tools within CCaaS platforms are revolutionizing the customer care industry and reshaping customer expectations. Staying ahead of this shift is crucial. While utilizing these advancements to achieve operational efficiencies, it is equally important to redefine the success metrics that provide businesses with insights and feedback on a modern CCaaS strategic roadmap. Adopting a fresh approach to capturing traditional metrics like Customer Satisfaction Scores and First Contact Resolution, combined with measuring new metrics such as Real-time Interaction Quality and Predictive Analysis Accuracy, will offer a comprehensive view of a contact center’s maturity and its progress toward a successful and effective modern CCaaS solution.
We can measure these metrics by utilizing built-in monitoring and analytical tools of modern CCaaS platforms along with AI-powered services integrations for features like Sentiment and Real-time Quality Analysis. We can gather regular feedback and data from agents and automated tracking tools to monitor system usability and efficiency. All this data can be streamed and displayed on a unified custom analytics dashboard, providing a comprehensive view of contact center performance and effectiveness.
Azure Firewall, a managed, cloud-based network security service, is an essential component of Azure’s security offerings. It comes in three different versions – Basic, Standard, and Premium – each designed to cater to a wide range of customer use cases and preferences. This blog post will provide a comprehensive comparison of these versions, discuss best practices for their use, and delve into their application in hub-spoke and Azure Virtual WAN with Secure Hub architectures.
Azure Firewall is a cloud-native, intelligent network firewall security service designed to protect your Azure cloud workloads. It offers top-tier threat protection and is fully stateful, meaning it can track the state of network connections and make decisions based on the context of the traffic.
In today’s digital landscape, cyber threats are becoming increasingly sophisticated. Organizations need robust security measures to protect their data and applications. Azure Firewall provides enhanced security by inspecting both inbound and outbound traffic, using advanced threat intelligence to block malicious IP addresses and domains. This ensures that your network is protected against a wide range of threats, including malware, phishing, and other cyberattacks.
Managing network security across multiple subscriptions and virtual networks can be a complex and time-consuming process. Azure Firewall simplifies this process by allowing you to centrally create, enforce, and log application and network connectivity policies. This centralized management ensures consistent security policies across your organization, making it easier to maintain and monitor your network security.
Businesses often experience fluctuating traffic volumes, which can strain network resources. Azure Firewall offers unlimited cloud scalability, meaning it can handle varying workloads without compromising performance. This scalability is crucial for businesses that need to accommodate peak traffic periods and ensure continuous protection.
Downtime can be costly for businesses, both in terms of lost revenue and damage to reputation. Azure Firewall’s built-in high availability ensures that your firewall is always operational, minimizing downtime and maintaining continuous protection.
Many industries have strict data protection regulations that organizations must comply with. Azure Firewall helps organizations meet these regulatory and compliance requirements by providing detailed logging and monitoring capabilities. This is particularly vital for industries such as finance, healthcare, and government, where data security is of paramount importance.
Deploying multiple firewalls across different networks can be expensive. By deploying Azure Firewall in a central virtual network, organizations can achieve cost savings. This centralized approach reduces the need for multiple firewalls, lowering overall costs while maintaining robust security.
Azure Firewall Basic is recommended for small to medium-sized business (SMB) customers with throughput needs of up to 250 Mbps. It’s a cost-effective solution for businesses that require fundamental network protection.
Azure Firewall Standard is recommended for customers looking for a Layer 3–Layer 7 firewall and need autoscaling to handle peak traffic periods of up to 30 Gbps. It supports enterprise features like threat intelligence, DNS proxy, custom DNS, and web categories.
Azure Firewall Premium is recommended for securing highly sensitive applications, such as those involved in payment processing. It supports advanced threat protection capabilities like malware and TLS inspection. Azure Firewall Premium utilizes advanced hardware and features a higher-performing underlying engine, making it ideal for handling heavier workloads and higher traffic volumes.
Here’s a comparison of the features available in each version of Azure Firewall:
| Feature | Basic | Standard | Premium |
|---|---|---|---|
| Stateful firewall (Layer 3/Layer 4) | Yes | Yes | Yes |
| Application FQDN filtering | Yes | Yes | Yes |
| Network traffic filtering rules | Yes | Yes | Yes |
| Outbound SNAT support | Yes | Yes | Yes |
| Threat intelligence-based filtering | No | Yes | Yes |
| Web categories | No | Yes | Yes |
| Intrusion Detection and Prevention System (IDPS) | No | No | Yes |
| TLS Inspection | No | No | Yes |
| URL Filtering | No | No | Yes |
Azure Firewall plays a crucial role in the hub-spoke network architecture pattern in Azure. The hub is a virtual network (VNet) in Azure that acts as a central point of connectivity to your on-premises network. The spokes are VNets that peer with the hub and can be used to isolate workloads. Azure Firewall not only secures and inspects network traffic, but also routes traffic between VNets.
A secured hub is an Azure Virtual WAN Hub with associated security and routing policies configured by Azure Firewall Manager. Use secured virtual hubs to easily create hub-and-spoke and transitive architectures with native security services for traffic governance and protection.
Azure Firewall operates by using rules and rule collections to manage and filter network traffic. Rules come in three types (DNAT, network, and application) and are grouped into rule collections that are processed in priority order.
Azure Firewall integrates with Azure Monitor for viewing and analyzing logs. Logs can be sent to Log Analytics, Azure Storage, or Event Hubs and analyzed using tools like Log Analytics, Excel, or Power BI.
Deploying a basic Azure Firewall involves a few core steps (a CLI sketch follows below):

1. Create a resource group: sign in to the Azure portal and create a resource group to hold the firewall and related resources.
2. Create the firewall: deploy Azure Firewall into a virtual network that contains an AzureFirewallSubnet.
3. Create application rules: define the rules that control which traffic the firewall allows or denies.
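The same steps can also be scripted. Below is a minimal Azure CLI sketch; the resource names, location, and address ranges are placeholders, and it assumes a virtual network with an AzureFirewallSubnet already exists:

```bash
# Azure Firewall commands live in the azure-firewall CLI extension.
az extension add --name azure-firewall

# 1. Create a resource group.
az group create --name rg-fw-demo --location eastus

# 2. Create the firewall resource.
# (An IP configuration tying it to the VNet's AzureFirewallSubnet and a
# public IP is also required; omitted here for brevity.)
az network firewall create \
  --name fw-demo \
  --resource-group rg-fw-demo \
  --location eastus

# 3. Create an application rule allowing outbound HTTPS to selected FQDNs.
az network firewall application-rule create \
  --firewall-name fw-demo \
  --resource-group rg-fw-demo \
  --collection-name app-rules \
  --name allow-msft \
  --priority 100 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses 10.0.0.0/24 \
  --target-fqdns "*.microsoft.com"
```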
To maximize the performance of your Azure Firewall, it’s important to follow best practices: choose the SKU that matches your throughput needs, keep rule collections organized and prioritized, and monitor logs and metrics through Azure Monitor to spot bottlenecks.
I have been writing about the Redwood Experience with Supply Chain Management, especially Inventory Management. Oracle has gone all-in on the Redwood Experience in Inventory Management in 25B.
The 25B Inventory Management readiness documentation lists all new features and how to use them, so I will not repeat this well-written document: https://docs.oracle.com/en/cloud/saas/readiness/scm/25b/inv25b/index.html
For the previous features in Redwood, please consider visiting the Readiness documentation: https://docs.oracle.com/en/cloud/saas/readiness/scm-all.html
This page is my personal favorite, since it makes features easy to find along with their documentation.
1. Why?
You may be asking the question: why is Redwood so hot, and why do I have to transform?
If you are an Oracle customer or have been in the Oracle space for a while (I have been in it for almost three decades), you know that once Oracle sets a vision and starts delivering new technology, it becomes the future. We witnessed this when Oracle moved the business applications from 10.7 Character mode to 10SC (Smart Client) and then 10NCA (Network Computing Architecture). We went from character mode to GUI. It wasn’t easy or quick, but it happened. Then we moved through major releases of EBS and got used to the Self Service architecture.
Oracle delivered Fusion Applications a long time ago, and we have witnessed each quarterly release add more functionality. Since 2024, Oracle has been improving the user interface and adding mobility to the Inventory Management pages, but the most radical improvements have come in 25A and 25B. Now almost 100% of Inventory Management is in Redwood, and it’s the next generation of Cloud applications.
Redwood brings the better usability and improved user interface that I explained in my past blog https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/, but it also opens the door to Artificial Intelligence (AI).
Oracle is expected to release major AI improvements in 25C, which I plan to cover in a future blog. The Redwood Experience is a prerequisite for all of this cool AI technology to work. Agentic AI features, or AI Agents, will be part of the Fusion Applications, which is a topic for another blog.
So, while the majority of the screens are optional, why not get ahead of the game and start adopting?
2. How?
You may be asking the question: what actions do I need to take to use Redwood?
Read the documentation. In Customer Connect, we are seeing many questions from the Oracle Community about Redwood pages not populating items or screens coming up blank. Please see this documentation for the important considerations:
https://docs.oracle.com/en/cloud/saas/readiness/scm/25a/inv25a/25A-inventory-wn-t65792.htm
By the way, if you have not registered for Oracle Customer Connect, I highly recommend it, so you can get in contact with the rest of your peer Oracle Community members and Oracle ACEs like myself who can possibly respond to your questions: https://community.oracle.com/customerconnect/
Then review the profile options for the new features. You will have to flip the profile options at the site level from No to Yes so that the features are enabled.
The documents I previously mentioned list the profile option names; to navigate, use the Tasks panel in the Functional Setup Manager and search for Manage Administrative Profile Values.
3. What?
You may be asking the question: which Redwood pages should I use first?
Adoption is critical when changing the user experience, and change management becomes essential when migrating from the traditional cloud pages to the newly designed Redwood pages. What I recommend is to first enable the configuration pages, so that the internal Oracle team and business analysts can get a feel for the Redwood Experience.
Then there are a few pages that can benefit users, which I mentioned in my prior blog: https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/
One bold move is to flip all features to Redwood and start testing internally in a lower pod first. Oracle has designed this so that companies have time to take on as much as they can; as of today, Oracle has not announced when the Redwood Experience will become mandatory. Most pages can be switched back and forth, but please read each feature’s release note to check whether it explicitly says that once the feature is turned on, there is no path back.
In conclusion, the future of Oracle Fusion Applications is the Redwood Experience and built-in AI, so I recommend that you adopt it and put it to use.
Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.
Oracle has delivered many features in Redwood Experience as of 25A. The purpose of this blog is to give a taste of a few Redwood pages and provide a recommendation to migrate to Redwood Experience on select pages to ease the transition and adoption. The pages I’ll cover in this blog are:
*names in parentheses are the traditional cloud menu entries
Oracle has documented the important actions to be taken before Redwood enablement in the 25A readiness notes. We recommend that you read and follow this documentation for a successful rollout: https://docs.oracle.com/en/cloud/saas/readiness/scm/25a/inv25a/25A-inventory-wn-t65792.htm This document also has a list of newly designed Redwood pages that you can pick and choose from for your company. I wanted to cover a few that seem to be a good start for Redwood enablement.
“Manage Item Quantities” is one of the most visited pages in Oracle Fusion Inventory Management. It’s a very functional page where one can see on-hand quantities as well as incoming stock and stock in receiving. The page also allows various actions and provides additional information. I’ll share a few screenshots of the user interface to explain the look and feel of the Redwood pages. The traditional page offered great functionality but lacked the ability to export results. Also, the nodes on the page forced users to drill down multiple levels, for example from item to subinventory and locator, then to lot number and project and task. This multi-node architecture was good for seeing the information but prevented users from seeing all data in one place and exporting it. The Redwood Experience “Item Quantities” page takes the “Manage Item Quantities” page to the next level. Immediately, the user will enjoy a more modern, responsive experience, with the data displayed in a tabular format.
The Redwood Experience gives the user a flexible layout. Easily add or remove filters, follow the deep link to the lot or serial number, and see on-hand quantities as well as other measures such as available to transact, available to reserve, and inbound/in-receiving quantities, all in one line. This is a huge improvement compared to the traditional Manage Item Quantities page. The Redwood Experience pages are user friendly and support downloading the on-hand quantities without needing additional reporting. In this case, I’m downloading all items in my inventory organization by clicking the export button. Clicking the Inbound deep link shows the user inbound details. Clicking the Lot under Item Control takes the user to the Lot information page; note that the user is now at the lot details page. After testing the Redwood Item Quantities page, I was very pleased with the improvements.
One caveat I observed on this page is that the item description cannot be added at this time in the Redwood Experience, though it is available if the user clicks on the item number. The user also needs to scroll down to the last record so that the export includes all results. After carefully reviewing the new Redwood Experience and testing the page thoroughly, I believe this is one of the Redwood pages that can be enabled to transition from the traditional user interface to the Redwood Experience.
This page with Redwood Experience replaces the Completed Transactions page. Once again, it’s another Redwood page that is easy to adopt and use. It has all the bells and whistles of the traditional cloud page and more. What I like about the Redwood Experience is that it is so easy to add and remove columns. One can quickly scroll through the available columns, check or uncheck multiple columns at a time, and use the browser’s Control+F (Find) feature. Another improvement is that the user can share saved searches with others using the filter save tool. This one has been on the wish list of many users.
The newly designed Lot and Serial Numbers Redwood page is also a great way to start the Redwood Experience, with one caveat: if you frequently jump to lot/serial transactions or on-hand quantities from the manage lot/serial pages, that capability is not yet available in the Redwood Experience as of 25A.
Another easy-to-transform Redwood page is the Item Supply and Demand page. All functionality is the same, but the Redwood Experience has additional columns, such as supply and demand quantities in separate columns, Party, Work Order Description, Shipping Priority, and Created By, to give the user better information. User acceptance and adoption come with time, so the sooner the transition begins, the more successful the implementation will be. Perficient can help you with your transition from traditional Fusion or legacy on-prem applications to the SCM Redwood experience. When you are ready to take the first step and you’re looking for advice, contact us. Our strategy is to craft a path for our clients that makes the transition as seamless as possible for the user community and their support staff.
Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.
Imagine you are retrieving an API key or bearer token from one external system in order to make a later callout to another external system. There is one catch: you cannot determine when that new callout will take place, so you need to store the token value somewhere in your org for later use. You might think of storing it in a custom object or custom metadata, but creating a separate object or metadata type for such a case may not be efficient. This is where Platform Cache helps: it provides temporary storage for data in such cases. You can set up your platform cache by following the steps outlined in this trailhead. Here, I will explain how you can use the platform cache for this use case. I assume that you have already configured your platform cache in your dev org. Now suppose you receive a bearer token from one API to pass into another callout some time later; you can store that access token in Platform Cache and avoid the hassle of creating a separate custom object, custom metadata type, or custom setting (custom metadata in particular cannot be updated with simple Apex DML).
The Trailhead referenced in this article is crucial for understanding and setting up the platform cache. If you have not gone through the trailhead, then stop here and complete the trailhead first.
How to fetch a bearer token and save it into the Platform cache using Apex:
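Below is a minimal Apex sketch, modeled on the reqres.in sample API this post references. The request body and the `local.TokenCache` partition name are assumptions; replace them with your own configuration, and remember the endpoint needs a Remote Site Setting or Named Credential:

```apex
// Sketch: fetch a bearer token and stash it in the default Org cache partition.
// Assumes a Platform Cache partition named "TokenCache" exists.
public with sharing class TokenCacheService {
    public static void fetchAndCacheToken() {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://reqres.in/api/login'); // sample endpoint
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody('{"email": "eve.holt@reqres.in", "password": "cityslicka"}');

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() == 200) {
            Map<String, Object> body =
                (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
            String token = (String) body.get('token');
            // Cache for up to 1 hour (3600 seconds); eviction may happen sooner.
            Cache.Org.put('local.TokenCache.bearerToken', token, 3600);
        }
    }
}
```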
Now, you can fetch the bearer token from the Platform cache and use it in your next callout with the following code:
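Here is a sketch of that follow-up callout, continuing the same hypothetical `TokenCacheService` class; the target endpoint is again a placeholder, and a production version should handle the cache-miss case more robustly:

```apex
// Sketch: read the cached token and use it for the next callout.
public static HttpResponse callWithCachedToken() {
    String token = (String) Cache.Org.get('local.TokenCache.bearerToken');
    if (token == null) {
        // Cache miss (expired or evicted): re-fetch before proceeding.
        TokenCacheService.fetchAndCacheToken();
        token = (String) Cache.Org.get('local.TokenCache.bearerToken');
    }
    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://reqres.in/api/users/2'); // placeholder endpoint
    req.setMethod('GET');
    req.setHeader('Authorization', 'Bearer ' + token);
    return new Http().send(req);
}
```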
Remember, Platform Cache provides temporary storage, and data can be evicted from it under memory pressure. Platform Cache does not guarantee a fixed lifetime for stored values; therefore, it is better not to store values that must remain valid for an extended period, or any sensitive information.
Although Platform Cache is used to improve performance by avoiding unnecessary repeated API calls, there are a few concerns to keep in mind, chief among them the eviction and lifetime limitations described above and the unsuitability of the cache for sensitive or long-lived data.
External System used: https://www.reqres.in
Technology used: Salesforce
In today’s world, the Internet of Things (IoT) is revolutionizing industries across the globe by connecting devices, systems, and people in ways that were once unimaginable. From smart homes to advanced manufacturing, IoT is creating new opportunities for innovation, efficiency, and data-driven decision-making. At the forefront of this transformation is Microsoft Azure, a cloud computing platform that has become a powerhouse for IoT operations.
In this blog, we’ll dive into the essentials of Azure IoT operations, how it simplifies and optimizes IoT workflows, and the key features that make it the go-to solution for businesses looking to scale their IoT systems.
Azure IoT is a set of services, solutions, and tools from Microsoft that allow businesses to securely connect, monitor, and control IoT devices across various environments. Azure IoT offers comprehensive capabilities for deploying IoT solutions, managing the entire device lifecycle, and providing data-driven insights that enhance decision-making processes.
At the core of Azure IoT is the ability to integrate IoT devices with cloud-based analytics, machine learning, and data processing tools, enabling businesses to leverage real-time data for more informed decisions.
Microsoft Azure is one of the leading cloud platforms that offer robust, scalable, and secure solutions for managing IoT operations. It is built on a foundation of powerful infrastructure that allows organizations to quickly deploy IoT systems and scale them based on evolving business needs. Here’s why businesses choose Azure for their IoT operations:
Azure IoT can support millions of devices, allowing businesses to scale their IoT systems from a handful of devices to a vast ecosystem of connected devices across different locations. This scalability ensures that companies can grow their IoT initiatives without being limited by infrastructure constraints.
Security is a major concern when managing IoT devices, as these devices are often vulnerable to cyberattacks. Azure IoT integrates comprehensive security measures, including device identity management, encryption, secure data transmission, and compliance with industry standards. This ensures that IoT operations are protected from potential threats.
Azure provides a suite of analytics tools, including Azure Machine Learning, Power BI, and Azure Stream Analytics, that enable businesses to analyze data generated by their IoT devices in real-time. This enables predictive maintenance, real-time monitoring, and data-driven decision-making, ultimately improving operational efficiency.
Azure IoT seamlessly integrates with other Microsoft tools and services, such as Office 365, Microsoft Teams, and Azure Active Directory. This enables businesses to seamlessly integrate IoT data into their existing workflows and processes, fostering collaboration and facilitating more informed business operations.
Azure IoT supports a wide variety of devices, operating systems, and protocols, making it a versatile solution that can integrate with existing IoT deployments, regardless of the technology stack. This interoperability allows businesses to get the most out of their IoT investments.
Azure IoT comprises various tools and services that address different aspects of IoT operations, ranging from device connectivity to data processing and analytics. Let’s explore some of the key components:
At the heart of Azure IoT operations is Azure IoT Hub, a fully managed service that enables secure and reliable communication between IoT devices and the cloud. It allows businesses to connect, monitor, and control millions of IoT devices from a single platform.
The IoT Hub provides two-way communication, allowing devices to send data to the cloud while also receiving commands. This bi-directional communication is crucial for remote monitoring, updating device configurations, and managing device health in real-time.
Azure Digital Twins is an advanced service that enables businesses to create digital models of physical environments. These models are used to visualize and analyze IoT data, providing a more comprehensive understanding of how devices and systems interact in the real world.
By utilizing Azure Digital Twins, organizations can optimize their operations, enhance asset management, and simulate scenarios for predictive maintenance and improved energy efficiency.
Azure IoT Edge enables businesses to run Azure services, including machine learning, analytics, and AI, directly on IoT devices at the edge of the network. This reduces latency and enables faster decision-making by processing data locally, rather than relying solely on cloud-based processing.
IoT Edge is ideal for scenarios where real-time data processing is critical, such as autonomous vehicles, industrial automation, or monitoring of remote assets.
Azure IoT Central is a fully managed IoT SaaS (Software as a Service) solution that simplifies the deployment, management, and monitoring of IoT applications. With IoT Central, businesses can quickly deploy IoT solutions without requiring deep technical expertise in cloud infrastructure.
It offers an intuitive interface for managing devices, setting up dashboards, and creating alerts. IoT Central significantly reduces the complexity and time required to deploy IoT systems.
Azure Time Series Insights is a fully managed analytics and storage service for time-series data. It is specifically designed for handling large volumes of data generated by IoT devices, such as sensor data, telemetry data, and event logs.
Time Series Insights offers powerful visualization and querying capabilities, enabling businesses to uncover trends and patterns in their IoT data. This is especially useful for monitoring long-term performance, detecting anomalies, and optimizing processes.
Azure IoT operations can be further optimized by integrating advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML). These technologies enable businesses to go beyond collecting and storing IoT data and derive actionable insights from it.
By analyzing IoT data, Azure IoT can predict equipment failures before they occur. Using machine learning algorithms, businesses can identify patterns that indicate potential breakdowns and perform maintenance only when necessary, reducing downtime and maintenance costs.
Azure IoT enables businesses to automate processes based on real-time data. For example, in smart factories, devices can automatically adjust production lines based on environmental conditions, inventory levels, or supply chain disruptions, increasing efficiency and reducing human error.
Azure IoT can help businesses optimize energy usage by continuously monitoring energy consumption and adjusting operations accordingly. Smart building solutions, for example, can automatically control lighting, heating, and cooling systems to reduce energy waste and lower costs.
When configuring Azure IoT operations, you’re setting up a system where devices can securely connect to the cloud, send telemetry data, and receive commands. Let’s walk through a practical configuration example using Azure IoT Hub, a key service in Azure IoT operations, by following the tutorial linked below.
Tutorial: Send telemetry from an IoT Plug and Play device to Azure IoT Hub
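For a flavor of what the tutorial walks through, here is a minimal device-side sketch using the Node.js SDK (azure-iot-device and azure-iot-device-mqtt); the connection string comes from your own IoT Hub device registration:

```typescript
// Sketch: send simulated temperature telemetry to Azure IoT Hub over MQTT.
import { Client, Message } from 'azure-iot-device';
import { Mqtt } from 'azure-iot-device-mqtt';

const connectionString = process.env.IOTHUB_DEVICE_CONNECTION_STRING ?? '';
const client = Client.fromConnectionString(connectionString, Mqtt);

// Publish a reading every 5 seconds.
setInterval(() => {
  const telemetry = new Message(
    JSON.stringify({ temperature: 20 + Math.random() * 10 })
  );
  client.sendEvent(telemetry, (err) => {
    if (err) console.error('Send failed:', err.message);
    else console.log('Telemetry sent');
  });
}, 5000);
```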
Azure IoT operations are transforming how businesses leverage the Internet of Things to improve efficiency, enhance customer experiences, and unlock new revenue streams. With its powerful cloud infrastructure, end-to-end solutions, and integration with Microsoft’s suite of tools, Azure is a leading choice for businesses looking to capitalize on the potential of IoT.
By deploying Azure IoT, companies can connect their devices, analyze real-time data, optimize operations, and make data-driven decisions that enhance their bottom line. Whether you’re starting small with a few connected devices or deploying large-scale, enterprise-wide IoT solutions, Azure provides the tools, security, and scalability needed to succeed in the world of connected technology.
As IoT continues to evolve, Azure will undoubtedly remain at the forefront of this exciting and transformative field, helping businesses drive innovation and stay competitive in an increasingly connected world.
Maybe you’ve already made the switch to XM Cloud, or maybe you’re still evaluating it as the answer to all your digital delivery challenges. Spoiler alert: it won’t magically solve everything — but with the right setup and smart optimizations, it can absolutely deliver fast, scalable, and maintainable experiences.
If you’re using Sitecore Headless with Next.js, you’re already building on a modern and flexible foundation. Add in a deployment platform like Vercel, and you’ve got serious power at your fingertips. But unlocking that potential requires knowing where to fine-tune — both at the application and platform level.
The Sitecore Layout Service is versatile but can return bulky JSON payloads if left unchecked. Clean up your responses by:
- Removing unused placeholders and renderings
- Filtering out internal tracking or analytics fields unless explicitly needed
- Configuring the Layout Service to tailor the response to your frontend needs
If you’re using Sitecore Search or XM Cloud with GraphQL, concise queries will help keep your pages fast and predictable:

- Request only the fields you need
- Use `first:` or `limit:` to control result size
- Organize queries into reusable fragments for maintainability and performance
Smaller payloads result in faster hydration, quicker time-to-interactive, and lower bandwidth usage — all especially valuable for mobile-heavy audiences.
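As an illustrative sketch (argument and field names vary with your Sitecore GraphQL schema version, so treat these as assumptions), a query that caps results and reuses a fragment might look like:

```graphql
fragment Teaser on Item {
  name
  url {
    path
  }
}

query RecentArticles {
  search(
    where: { name: "_path", value: "/sitecore/content/articles", operator: CONTAINS }
    first: 10
  ) {
    results {
      ...Teaser
    }
  }
}
```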
Don’t rely on manual rebuilds or blanket cache clears. XM Cloud supports webhooks on publish, which opens the door to smarter automation:
- Trigger on-demand ISR revalidation for updated pages
- Push new content to Edge Config, CDNs, or search indexes
- Notify external systems (e.g., analytics, commerce, personalization) immediately
It’s the best way to keep content fresh without sacrificing performance or rebuilding the entire site.
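For example, a small API route can receive the publish webhook and revalidate just the affected paths. This is a sketch for the Next.js pages router; the shared secret and the webhook payload shape (an `updatedPaths` array) are assumptions to adapt to your setup:

```typescript
// pages/api/revalidate.ts
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Reject callers that don't present the shared secret.
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }

  // Assumed payload: { "updatedPaths": ["/products", "/about"] }
  const paths: string[] = req.body?.updatedPaths ?? [];
  try {
    await Promise.all(paths.map((path) => res.revalidate(path)));
    return res.json({ revalidated: true, paths });
  } catch {
    return res.status(500).json({ message: 'Error revalidating' });
  }
}
```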
Not every page needs to be dynamic, and not every page should be static. Picking the right rendering strategy is critical — especially in a Sitecore headless app where you’re mixing marketing content with personalization and real-time updates.
Here’s how to decide:
Use SSR (Server-Side Rendering) when:

- The page depends on the user session or request (e.g., personalization, authenticated pages)
- You’re rendering in preview mode for content authors

Use SSG (Static Site Generation) when:

- The content rarely changes (e.g., static landing pages or campaigns)
- You want instant load times and no server cost

Use ISR (Incremental Static Regeneration) when:

- Content changes periodically, but not per-request
- You want to combine the speed of static with the freshness of dynamic
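As a minimal illustration of the ISR option in the pages router (the `fetchPageData` helper is hypothetical; in a Sitecore JSS app this would be the layout-data fetching your starter kit already provides):

```tsx
import type { GetStaticProps } from 'next';

// Regenerate this page in the background at most once every 60 seconds.
export const getStaticProps: GetStaticProps = async ({ params }) => {
  const props = await fetchPageData(params); // hypothetical data helper
  return {
    props,
    revalidate: 60, // seconds between background regenerations
  };
};
```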
If you’re still using regular `<a>` tags or not thinking about navigation performance, this one’s for you. The `next/link` component enables fast, client-side routing and automatic prefetching of pages in the background.
Example:
import Link from 'next/link'; <Link href="/products" prefetch={true}>Products</Link>
- Use it for all internal links
- Set `prefetch={true}` on high-priority routes
- Check behavior in your browser’s network tab — look for `.json` page data being fetched in advance
This alone can make your site feel instantly faster to users.
Sitecore headless apps don’t include `next/font` by default, but it’s worth integrating. It allows you to self-host fonts in a performance-optimized way and avoid layout shifts.
Example:
import { Inter } from 'next/font/google'; const inter = Inter({ subsets: ['latin'] });
Apply fonts globally or per-page to improve loading consistency and avoid FOUT (Flash of Unstyled Text). Better fonts = better user experience.
Performance isn’t just about server-side logic — it’s also about keeping your codebase lean and clean.
What to review:
- Old personalization plugins that are no longer used
- Middleware that’s too permissive or generic in its matching
- Outdated multisite logic if you’ve already split into multiple Vercel projects
- Unused components or fetch logic in shared utilities
Use Vercel performance insights to track slow routes and spot cold starts.
Fluid Compute lets Vercel reuse idle time across your serverless functions. That means better performance and lower costs — without any code changes.
To enable it:
1. Go to your Vercel project settings
2. Navigate to Functions
3. Toggle Fluid Compute on
You can monitor the impact under Observability → Logs in your dashboard. It’s a low-effort win. Read more details about Fluid Compute in my previous blog!
Next.js middleware is powerful but potentially expensive in performance terms. Use it wisely:
- Limit middleware to only essential routes
- Avoid using `fetch()` inside middleware — use Edge Config instead
- Replace multisite plugins with separate Vercel projects
- Audit unused or legacy logic, especially leftover personalization
Track middleware behavior through the Middleware tab in Vercel Logs.
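A scoped matcher is the simplest guardrail. The pattern below is a sketch; adjust it to your route structure:

```typescript
// middleware.ts: run middleware only on page routes, skipping API routes,
// Next.js internals, and static assets.
export const config = {
  matcher: ['/((?!api|_next/static|_next/image|favicon.ico).*)'],
};
```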
For the fastest possible redirects, manage them directly in Vercel using Edge Config. This keeps Sitecore out of the request path and ensures instant resolution at the edge.
If you’re managing a large volume of redirects, consider using a bloom filter to optimize memory usage. Just note that bloom filters introduce a small delay due to redirect verification.
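A minimal sketch of that lookup follows; the `redirects` key and its `{ source: destination }` shape are assumptions about how you store the map, and the project needs its `EDGE_CONFIG` connection string configured:

```typescript
// middleware.ts: resolve redirects from Vercel Edge Config at the edge.
import { NextRequest, NextResponse } from 'next/server';
import { get } from '@vercel/edge-config';

export async function middleware(req: NextRequest) {
  const redirects = (await get('redirects')) as Record<string, string> | undefined;
  const destination = redirects?.[req.nextUrl.pathname];
  if (destination) {
    return NextResponse.redirect(new URL(destination, req.url), 308);
  }
  return NextResponse.next();
}

export const config = {
  matcher: ['/((?!api|_next).*)'], // keep the lookup off internal routes
};
```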
Optimizing a Sitecore Headless application, especially one deployed on Vercel, is about making dozens of small, smart decisions that add up to big wins in performance, scalability, and developer happiness. Whether it’s pruning your Layout Service output or toggling a setting in your Vercel dashboard, each move brings you closer to a faster, more responsive site.
XM Cloud doesn’t come pre-optimized — but that’s actually a good thing. It gives you the power and flexibility to build the way you want. Just make sure you’re building it right.
Sitecore & XM Cloud

- Prune Layout Service JSON (remove unused placeholders and fields)
- Use GraphQL efficiently (limit queries, use fragments)
- Set up publish webhooks for on-demand rendering or cache purging

Rendering Strategy

- Use SSR for personalized/authenticated content
- Use SSG for static pages
- Use ISR for hybrid performance/freshness

Next.js

- Replace `<a>` with `next/link` and enable prefetching
- Add `next/font` for consistent and fast font rendering

Vercel

- Enable Fluid Compute for better serverless efficiency
- Use middleware only where necessary and avoid `fetch()` inside
- Use Edge Config for fast redirect handling
- Monitor logs and performance insights for slow routes and cold starts
In today’s hyper-connected world, the Internet of Things (IoT) is transforming industries, from smart manufacturing to intelligent healthcare. However, the real potential of IoT lies in connecting continuously with enterprise systems, providing real-time insights and automation. This is where MuleSoft’s Anypoint Platform comes in: a game-changer for integrating IoT devices and APIs into a connected ecosystem. This blog explains how MuleSoft sets the stage for connectivity and introduces a strong foundation for IoT and API integration that goes beyond a standalone dashboard to offer scalability, security, and efficiency.
In this blog, I will show MuleSoft’s ability to integrate IoT devices with enterprise systems through API connectivity, focusing on real-time data processing. I will provide an example of how MuleSoft’s Anypoint Platform connects to an MQTT broker and processes IoT device sensor data. The example highlights MuleSoft’s ability to handle IoT protocols like MQTT and transform data for insights.
MuleSoft’s Anypoint Platform brings API-led connectivity, native protocol support, and a comprehensive integration framework to handle the complexities of IoT integration. Here is how MuleSoft makes IoT integration comfortable:
MuleSoft’s API strategy categorizes integrations into System, Process, and Experience APIs, allowing modular connections between IoT devices and enterprise systems. For example, in a smart city, System APIs gather data from traffic sensors while Process and Experience APIs turn it into insights on a dashboard. This scalability avoids the chaos of point-to-point integrations, a shortcoming of most visualization-focused tools.
IoT devices rely on protocols such as MQTT, AMQP, and CoAP, all of which MuleSoft supports natively. This enables direct communication with sensors and gateways without custom middleware. For example, MuleSoft can connect MQTT data from temperature sensors to a cloud platform such as Azure IoT Hub more readily than tools that require custom plugins.
IoT requires real-time data processing, and MuleSoft’s runtime engine processes data streams in real time while supporting automation. For example, if a factory sensor picks up a fault, MuleSoft can invoke an API to notify maintenance teams and update systems. MuleSoft integrates visualization with actionable workflows.
MuleSoft’s Anypoint Exchange provides connectors for IoT platforms (e.g., AWS IoT) and enterprise systems (e.g., Salesforce). In healthcare, connectors link patient wearables to EHRs, reducing development time. This plug-and-play approach beats custom integrations commonly required by other tools.
IoT devices manage sensitive information, and MuleSoft maintains security through API encryption and OAuth. Its Management Center provides a dashboard to track device health and data flows, offering centralized control that standalone dashboard applications cannot provide without additional infrastructure.
MuleSoft’s hybrid model supports both on-premises and cloud environments, providing flexibility for IoT deployments. Its scalability handles growing networks, such as fleets of connected vehicles, making it a future-proof solution.
To demonstrate MuleSoft’s IoT integration, I created a simple flow in Anypoint Studio that connects to an MQTT broker, processes sensor data, and logs it for dashboard integration. The flow uses the public Mosquitto test broker, with MQTT Explorer simulating the IoT sensor data. The following are the steps for the Mule API flow:
In Anypoint Studio, create a new Mule project (e.g., ‘IoT-MQTT-Demo’). Design a flow with an MQTT Connector to connect to an explorer, a Transform Message component to process data, and a Logger to output results.
Configure the MQTT Connector properties. In General Settings, point the connector at a public broker (tcp://test.mosquitto.org:1883). Add the topic filter iot/sensor/data and select QoS AT_MOST_ONCE.
Use DataWeave to parse the incoming JSON payload (e.g., {"temperature": 25.5}) and add a timestamp. The DataWeave code is:

```dataweave
%dw 2.0
output application/json
---
{
  sensor: "Temperature",
  value: read(payload, "application/json").temperature default "",
  timestamp: now()
}
```
Click on Connections and use the credentials shown below to connect from MQTT Explorer:
Once MQTT Explorer is connected, publish a sample message {"temperature": 28} to the topic iot/sensor/data; it will be delivered to the Mule flow as shown below.
Run the API and publish the message from MQTT Explorer; the processed data will be logged to the console. An example log is shown below:
The above example highlights MuleSoft’s process for connecting IoT devices, processing data, and preparing it for visualization or automation.
IoT integration faces several challenges, including the diversity of devices and protocols, high data volumes, and security.
The future of IoT with MuleSoft is promising. MuleSoft uses the Anypoint Platform to solve critical integration issues. It integrates different IoT devices and protocols, such as MQTT, to provide data flow between ecosystems. It provides real-time data processing and analytics integration. Security is added with TLS and OAuth.
MuleSoft’s Anypoint Platform reimagines IoT and API integration by providing a scalable, secure, real-time solution for connecting devices to enterprise systems. As the example showed, MuleSoft processes MQTT-based IoT data and transforms it into useful insights without external scripts. By addressing challenges like data volume and security, MuleSoft provides a platform for building IoT ecosystems that deliver automation and insight. As IoT keeps growing, MuleSoft’s API connectivity and native protocol support establish it as an innovator, enabling new connectivity for smart cities, healthcare, and more. Discover MuleSoft’s Anypoint Platform to unlock the full potential of your IoT projects and set the stage for a connected future.
Considering migrating your contact center operations to the cloud? Transitioning from a legacy on-premise solution to a Cloud Contact Center as a Service (CCaaS) platform offers significant advantages, including greater flexibility, scalability, improved customer experience, and potential cost savings. However, the success of this transition depends heavily on selecting the right vendor and ensuring alignment with your unique business requirements.
Here are five essential questions to ask any CCaaS vendor as you plan your migration:
Integration capabilities are key and may impact the effectiveness of your new cloud solution. Ensure that the proposed CCaaS platform easily integrates with or provides viable alternatives to your current CRM, workforce management solutions, business intelligence/reporting tools, and legacy applications. Smooth integrations are vital for maintaining operational efficiency and enhancing the customer and employee experience.
Every contact center has its own agent processes and customer interaction workflows. Verify that your CCaaS vendor allows customization of critical features like interactive voice response (IVR), agent dashboards, and reporting tools (to name just a few). Flexibility in customization ensures that the platform supports your business goals and enhances operational efficiency without disrupting established workflows. Also assess included AI-enabled features such as IVAs, real-time agent coaching, and customer sentiment analysis.
Data security and compliance with regulations like HIPAA, GDPR, or PCI are likely critical requirements for your organization. This can be especially true in industries that deal with sensitive customer or patient information. Confirm the vendor’s commitment to comprehensive security protocols, including the ability to redact or mask Personally Identifiable Information (PII). Ask your vendor for clearly defined compliance certifications and if they conduct regular security audits.
Uninterrupted service is critical for contact centers, and it’s essential to understand how the CCaaS vendor handles service disruptions, outages, and disaster scenarios. Ask about their redundancy measures, geographic data center distribution, automatic failover procedures, and guarantees outlined in their Service Level Agreements (SLAs).
It is impossible to overstate the importance of good change management and enablement. Transitioning to a cloud environment involves adapting to new technologies and processes. Determine the availability of the vendor’s training programs, materials, and support channels.
By proactively addressing these five key areas, your organization can significantly streamline your migration process and ensure long-term success in your new cloud-based environment. Selecting the right vendor based on these criteria will facilitate a smooth transition and empower your team to deliver exceptional customer experiences efficiently and reliably.
Privileged Identity Management (PIM) is a service in Microsoft Entra ID that enables you to manage, control, and monitor access to important resources in your organization. These resources include those in Microsoft Entra ID, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune. This blog has been written to help those who want to set up just-in-time access for Azure resources and provide access at the subscription level only.
PIM ensures that only the right people can access essential systems when needed and only for a short time. This reduces the chances of misuse by someone with powerful access.
PIM ensures that people only have the access they need to do their jobs. This means they can’t access anything unnecessary, keeping things secure.
With PIM, users can get special access for a set period. Once the time is up, the access is automatically removed, preventing anyone from holding on to unnecessary permissions.
PIM gives Just-in-Time (JIT) Access, meaning users can only request higher-level access when needed, and it is automatically taken away after a set time. This reduces the chances of having access for too long.
PIM lets you set up a process where access needs to be approved by someone (like a manager or security) before it’s given. This adds another layer of control.
PIM keeps detailed records of who asked for and received special access, when they accessed something, and what they did. This makes it easier to catch any suspicious activities.
Instead of giving someone admin access all the time, PIM allows it to be granted for specific tasks. Admins only get special access when needed, and for as long as necessary, so there is less risk.
Some industries require companies to follow strict rules (like protecting personal information). PIM helps meet these rules by controlling who has access and keeping track of it for audits.
2. Select Your Assignment
Azure PIM helps make your system safer by ensuring that only the right people can access essential resources for a short time. It lets you give access when needed (just-in-time), require approval for special access, automatically manage who can access what, and keep track of everything. PIM is essential for organizations that want to limit who can access sensitive information, ensure only the necessary people have the correct permissions at the right time, and prevent unauthorized access.
Cloud is easy—until it isn’t.
Cloud adoption has skyrocketed. Multi-cloud. Hybrid cloud. AI-optimized workloads. Clients are moving fast, but many are moving blindly. The result? High costs, low returns, and strategies that stall before they scale.
That’s why this moment matters. Now, more than ever, your clients need a partner who brings more than just cloud expertise—they need business insight, strategic clarity, and real results.
In our latest We Are Perficient episode, we sat down with Kiran Dandu, Perficient’s Managing Director, to uncover exactly how we’re helping clients not just adopt cloud, but win with it.
If you’re in sales, this conversation is your cheat sheet for leading smarter cloud conversations with confidence.
Kiran makes one thing clear from the start: “We don’t start with cloud. We start with what our clients want to achieve.”
At Perficient, cloud is a means to a business end. That’s why we begin every engagement by aligning cloud architecture with long-term business objectives—not just technical requirements.
This outcome-first mindset isn’t just smarter—it’s what sets Perficient apart from traditional cloud vendors.
Forget the hype—AI is already transforming how we operate in the cloud. Kiran breaks down four key areas where Perficient is integrating AI to drive real value.
The result? A cloud strategy that’s not just scalable, but self-improving.
Moving to the cloud isn’t the end goal—it’s just the beginning.
Kiran emphasizes how Perficient’s global delivery model and agile methodology empower clients to not only migrate, but to evolve and innovate faster. Our teams help organizations do exactly that.
And it’s not just theory. Our global consultants, including the growing talent across LATAM, are delivering on this promise every day.
“The success of our cloud group is really going to drive the success of the organization.”
— Kiran Dandu
While visiting our offices in Medellín, Colombia, Kiran highlighted the value of diversity in driving cloud success:
“This reminds me of India in many ways—there’s talent, warmth, and incredible potential here.”
That’s why Perficient is investing in uniting its global cloud teams. The cross-cultural collaboration between North America, LATAM, Europe, and India isn’t just a feel-good story—it’s the engine behind our delivery speed, technical excellence, and customer success.
If your client is talking about the cloud—and trust us, they are—this interview is part of your toolkit.
You’ll walk away understanding how Perficient helps clients move beyond cloud adoption to real business outcomes, and how to lead smarter cloud conversations with confidence.
Want to hear directly from the source? Don’t miss Kiran’s full interview, packed with strategic insights that will elevate your next sales conversation.
Watch now and discover how Perficient is transforming cloud into a competitive advantage.
Perficient is not just another cloud partner—we’re your client’s competitive edge. Let’s start leading the cloud conversation like it.
When it comes to manufacturing, companies (OEMs) require services from their manufacturing partners to help with the production of finished and semi-finished products. There are two well-known solutions in the Oracle Fusion SCM suite: Outside Processing (OSP) and Contract Manufacturing. Both solutions involve a third-party vendor and a service component to either help complete a work order, fulfill a sales order, or fulfill subassembly demand. Both serve a purpose and are quite powerful. Before I jump into the comparison, here’s a textbook definition of each solution:
OSP:
OSP is the process of outsourcing a portion of the work order that is being done in house. For example, a steel shop that can cut and weld steel to manufacture frames may send the frames to a paint shop (vendor) to get painted. The steel shop then receives the painted frames in house and perhaps performs a few more value-added steps to complete the work order. The paint portion of this work order is considered an outside job. Companies may prefer outside processing for various reasons. The steel manufacturer may not be interested in installing a paint booth and employing painters, or the company may have a paint booth but it’s backlogged or is down. Specialization may be required. In all these scenarios, a vendor is needed to help.
Contract Manufacturing:
Contract Manufacturing means (optionally) providing materials to a vendor and expecting the vendor to produce assemblies and send them to external or internal customers. Typically, the company ships raw materials and/or subassemblies to the vendor and manages its stock in the vendor’s warehouse. With contract manufacturing, the vendor is in complete control of the manufacturing process and is expected to update and complete work orders or communicate back the progress. The OEM usually owns stock at the vendor location and tracks it in its books.
In most cases the “textbook” response can be straightforward. In some cases, companies that are using production steps from a vendor as one of the operations in their in-house work orders use the OSP solution. It’s straightforward and quite easy to set up. Read this blog for OSP treatment advice in Cost Management.
Contract Manufacturing may be used directly to fulfill back-to-back sales orders, or to fulfill Supply Planning-generated subassembly transfer orders and work orders. It has two drawbacks (as of Release 25A): the second scenario can only be accomplished through Supply Planning, and although Contract Manufacturing is a robust solution, it requires vendors to actively participate by providing feedback on inventory levels or updating the production progress in Oracle Fusion.
There are use cases where companies may use a vendor to produce subassemblies, but would like to create manual work orders, manage the inventory, and want a less complicated solution. In this case, the OSP solution can work beautifully simulating a Contract Manufacturing solution.
| | OSP | Contract Manufacturing |
|---|---|---|
| New Inventory Organization | Optional | Required |
| Supply Planning | Optional | May be required |
| Service Items | Required | Required |
| Blanket Purchase Agreements | Optional | Optional |
| Ease of Implementation | Easy | More complicated |
Based on the business requirements, the OEM may choose to go with OSP. Let’s assume that the inventory sent to the vendor for Contract Manufacturing is only sent when new assemblies are required, that there isn’t an opportunity to integrate electronically through web services, and that communication happens over email or other correspondence.
For this OEM, it is feasible to create a Work Center in Oracle Fusion Manufacturing for its vendor, automatically create service purchase orders, and dedicate supply and completion subinventories. Dedicating raw material to contract-manufacturer use only could make inventory management a little challenging, but not unmanageable. In this simple scenario, the OEM doesn’t have to go through the complex setup of Contract Manufacturing and can go with the OSP solution.
There are various use cases and potential solutions that combine Oracle Supply Planning, Inventory Management and min-max planning, Oracle Manufacturing, Procurement, and the Supplier Portal to fulfill various manufacturing scenarios.
Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.