Redefining CCaaS Solutions Success in the Digital Era

With the advancement of technology, machine learning, and AI capabilities in the customer care space, customer expectations are evolving faster than ever. Customers expect smoother, context-aware, personalized, and generally faster and more effective experiences across channels when contacting a support center.

This calls for a need to revisit and redefine the success metrics for a Contact Center as a Service (CCaaS) strategy. 

 

Let’s break this down into two categories. The first includes key metrics that are still essential to measure, though the standards for them have been raised and the way they are measured has evolved. The second introduces new metrics that are emerging because of advanced CCaaS capabilities in the modern contact center landscape.

  

Key Traditional Success Metrics Reimagined  

  

Customer Satisfaction (CSAT) remains a cornerstone success metric. Every improvement a customer service center makes, from improving operational efficiency to enhancing agent and customer experience, directly or indirectly impacts the customer and is aimed at elevating the customer experience. With automated personalized journeys an important part of modern customer service, it is important to monitor real-time analytics on automated journeys in addition to live agent interactions. This helps better understand the customer experience and find opportunities to fine-tune friction points to improve customer satisfaction. Customer service is not only about resolving customer issues, but also about providing an effortless experience.

  

First Contact Resolution is still a key success metric in the CCaaS space, but modern tools can revolutionize how far a customer service center can go to improve it, so the standards for this metric have risen. Passing context effectively across channels, real-time monitoring, predictive analytics and insights, and proactive outreach can increase the likelihood of addressing customer needs on the first contact, sometimes without the need for a live agent interaction.

  

The Customer Retention Rate metric has been revamped with the advancement of technology in customer service. Advanced predictive analytics can help track the customer experience throughout the journey and shed light on underlying customer behavior patterns. This enables proactive engagement strategies personalized to every customer. Real-time sentiment analysis can provide instant feedback to customer service representatives and their supervisors, giving them a chance to course correct immediately, shift the sentiment to a positive experience, and retain customers.

  

Emerging Success Metrics 

  

Agent Experience and Satisfaction has a direct impact on the operation of a contact center and hence on the customer experience. Traditionally, this metric was not broadly tracked as a measure of a successful contact center strategy. However, we know today that agent experience and satisfaction is a key metric for transforming contact centers from cost centers into revenue-generating units. Contact centers can leverage modern tools in different areas, from agent performance monitoring, training, and identifying knowledge gaps to providing automated workflows and real-time agent assistance, to elevate the agent experience.

These strategies and tools help agents become more effective and productive while providing service. Satisfied agents are more motivated to help customers effectively, which can improve metrics like First Contact Resolution rate and Average Handle Time. Happy and productive agents are also more likely to engage positively with customers to discuss potential cross-sell and upsell opportunities. Moreover, agent turnover and the cost associated with it will be lower, thanks to the reduced burden of regularly onboarding and training new agents and constantly being short-staffed.

  

Sentiment Analysis and Real-time Interaction Quality provides immediate insights to contact center representatives about the customer’s emotions, the conversation tone, and the effectiveness of their interactions. This helps representatives refine their interaction strategy on the spot to maintain a positive and effective engagement with the customer. It transforms contact centers into emotionally intelligent, customer-focused support centers, which makes a huge difference at a time when the quality of the experience matters as much as the outcome.

  

Predictive Analysis Accuracy represents an entirely new set of metrics for a modern contact center that leverages predictive analytics in its operation. It is crucial to measure this metric and evaluate the accuracy of forecasts against actual customer behavior and demand, as well as agent workflow needs. Inaccurate predictions are not only ineffective but can also be harmful to contact center operations: they can lead to poor decision-making, confusion, and disappointing customer experiences. Accurate anticipation of customer needs enables proactive outreach, positive and effective interactions, fewer friction points, and reduced service contacts, while facilitating effective automated upsell and cross-sell initiatives. A simple way to quantify forecast accuracy is sketched below.
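The sketch below (in Python, with illustrative numbers only, and mean absolute percentage error as one reasonable choice of scoring metric) shows how a contact-volume forecast could be scored:

```
# Minimal sketch - scores a contact-volume forecast with mean absolute
# percentage error (MAPE). The numbers are illustrative only.
def mape(actual: list[float], forecast: list[float]) -> float:
    errors = [abs(a - f) / a for a, f in zip(actual, forecast) if a != 0]
    return 100 * sum(errors) / len(errors)

actual_contacts = [1200, 950, 1100, 1300]
forecast_contacts = [1150, 1000, 1080, 1400]

print(f"Forecast MAPE: {mape(actual_contacts, forecast_contacts):.1f}%")
```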

  

Technology Utilization Rate is an important metric to track in a modern and evolving customer care solution. While the latest technological advancements enable a lot of intelligent automation and enhancement within a CCaaS solution, a contact center strategy is required to identify the most impactful modern capabilities for each customer service operation. The strategy needs to incorporate tracking the success of technology adoption through system usage data and adoption metrics. This ensures that technology is being leveraged effectively and is providing value to the business. Technology utilization tracking can also reveal training and adoption gaps, ensuring that modern tools are not just implemented for the sake of innovation, but are actively contributing to improved efficiency within the contact center.

  

Conclusion

The development of advanced native capabilities and integration of modern tools within CCaaS platforms are revolutionizing the customer care industry and reshaping customer expectations. Staying ahead of this shift is crucial. While utilizing these advancements to achieve operational efficiencies, it is equally important to redefine the success metrics that provide businesses with insights and feedback on a modern CCaaS strategic roadmap. Adopting a fresh approach to capturing traditional metrics like Customer Satisfaction Scores and First Contact Resolution, combined with measuring new metrics such as Real-time Interaction Quality and Predictive Analysis Accuracy will offer a comprehensive view of a contact center’s maturity and its progress towards a successful and effective modern CCaaS solution. 

We can measure these metrics by utilizing built-in monitoring and analytical tools of modern CCaaS platforms along with AI-powered services integrations for features like Sentiment and Real-time Quality Analysis. We can gather regular feedback and data from agents and automated tracking tools to monitor system usability and efficiency. All this data can be streamed and displayed on a unified custom analytics dashboard, providing a comprehensive view of contact center performance and effectiveness. 

A Comprehensive Guide to Azure Firewall

Azure Firewall, a managed, cloud-based network security service, is an essential component of Azure’s security offerings. It comes in three different versions – Basic, Standard, and Premium – each designed to cater to a wide range of customer use cases and preferences. This blog post will provide a comprehensive comparison of these versions, discuss best practices for their use, and delve into their application in hub-spoke and Azure Virtual WAN with Secure Hub architectures.

What is Azure Firewall?

Azure Firewall is a cloud-native, intelligent network firewall security service designed to protect your Azure cloud workloads. It offers top-tier threat protection and is fully stateful, meaning it can track the state of network connections and make decisions based on the context of the traffic.

Key Features of Azure Firewall

  • High Availability: Built-in high availability ensures that your firewall remains operational at all times.
  • Scalability: Unlimited cloud scalability to handle varying workloads.
  • Traffic Inspection: Inspects both east-west (within the same network) and north-south (between different networks) traffic.
  • Threat Intelligence: Uses advanced threat intelligence to block malicious IP addresses and domains.
  • Centralized Management: Allows you to centrally create, enforce, and log application and network connectivity policies across multiple subscriptions and virtual networks.
  • Compliance: Helps organizations meet regulatory and compliance requirements by providing detailed logging and monitoring capabilities.
  • Cost Efficiency: By deploying Azure Firewall in a central virtual network, you can achieve cost savings by avoiding the need to deploy multiple firewalls across different networks.

[Image: Azure Firewall architecture]

Why Azure Firewall is Essential

Enhanced Security

In today’s digital landscape, cyber threats are becoming increasingly sophisticated. Organizations need robust security measures to protect their data and applications. Azure Firewall provides enhanced security by inspecting both inbound and outbound traffic, using advanced threat intelligence to block malicious IP addresses and domains. This ensures that your network is protected against a wide range of threats, including malware, phishing, and other cyberattacks.

Centralized Management

Managing network security across multiple subscriptions and virtual networks can be a complex and time-consuming process. Azure Firewall simplifies this process by allowing you to centrally create, enforce, and log application and network connectivity policies. This centralized management ensures consistent security policies across your organization, making it easier to maintain and monitor your network security.

Scalability

Businesses often experience fluctuating traffic volumes, which can strain network resources. Azure Firewall offers unlimited cloud scalability, meaning it can handle varying workloads without compromising performance. This scalability is crucial for businesses that need to accommodate peak traffic periods and ensure continuous protection.

High Availability

Downtime can be costly for businesses, both in terms of lost revenue and damage to reputation. Azure Firewall’s built-in high availability ensures that your firewall is always operational, minimizing downtime and maintaining continuous protection.

Compliance

Many industries have strict data protection regulations that organizations must comply with. Azure Firewall helps organizations meet these regulatory and compliance requirements by providing detailed logging and monitoring capabilities. This is particularly vital for industries such as finance, healthcare, and government, where data security is of paramount importance.

Cost Efficiency

Deploying multiple firewalls across different networks can be expensive. By deploying Azure Firewall in a central virtual network, organizations can achieve cost savings. This centralized approach reduces the need for multiple firewalls, lowering overall costs while maintaining robust security.

Azure Firewall Versions: Basic, Standard, and Premium

Azure Firewall Basic

Azure Firewall Basic is recommended for small to medium-sized business (SMB) customers with throughput needs of up to 250 Mbps. It’s a cost-effective solution for businesses that require fundamental network protection.

Azure Firewall Standard

Azure Firewall Standard is recommended for customers looking for a Layer 3–Layer 7 firewall and need autoscaling to handle peak traffic periods of up to 30 Gbps. It supports enterprise features like threat intelligence, DNS proxy, custom DNS, and web categories.

Azure Firewall Premium

Azure Firewall Premium is recommended for securing highly sensitive applications, such as those involved in payment processing. It supports advanced threat protection capabilities like malware and TLS inspection. Azure Firewall Premium utilizes advanced hardware and features a higher-performing underlying engine, making it ideal for handling heavier workloads and higher traffic volumes.

Azure Firewall Features Comparison

Here’s a comparison of the features available in each version of Azure Firewall:

Feature                                            Basic   Standard   Premium
Stateful firewall (Layer 3/Layer 4)                Yes     Yes        Yes
Application FQDN filtering                         Yes     Yes        Yes
Network traffic filtering rules                    Yes     Yes        Yes
Outbound SNAT support                              Yes     Yes        Yes
Threat intelligence-based filtering                No      Yes        Yes
Web categories                                     No      Yes        Yes
Intrusion Detection and Prevention System (IDPS)   No      No         Yes
TLS Inspection                                     No      No         Yes
URL Filtering                                      No      No         Yes

Azure Firewall Architecture

Azure Firewall plays a crucial role in the hub-spoke network architecture pattern in Azure. The hub is a virtual network (VNet) in Azure that acts as a central point of connectivity to your on-premises network. The spokes are VNets that peer with the hub and can be used to isolate workloads. Azure Firewall secures and inspects network traffic, but it also routes traffic between VNets.

A secured hub is an Azure Virtual WAN Hub with associated security and routing policies configured by Azure Firewall Manager. Use secured virtual hubs to easily create hub-and-spoke and transitive architectures with native security services for traffic governance and protection.

[Image: Spoke-to-spoke routing]

How Azure Firewall Works

Azure Firewall operates by using rules and rule collections to manage and filter network traffic. Here are some key concepts:

  • Rule Collections: A set of rules with the same order and priority. Rule collections are executed in priority order.
  • Application Rules: Configure fully qualified domain names (FQDNs) that can be accessed from a virtual network.
  • Network Rules: Configure rules with source addresses, protocols, destination ports, and destination addresses.
  • NAT Rules: Configure DNAT rules to allow incoming Internet or intranet connections.

Azure Firewall integrates with Azure Monitor for viewing and analyzing logs. Logs can be sent to Log Analytics, Azure Storage, or Event Hubs and analyzed using tools like Log Analytics, Excel, or Power BI.
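For example, once diagnostics flow to Log Analytics, a query along the lines of the sketch below surfaces recent application-rule log entries. This is a minimal sketch assuming the classic AzureDiagnostics table; resource-specific tables would use different names:

```
AzureDiagnostics
| where Category == "AzureFirewallApplicationRule"
| where TimeGenerated > ago(1h)
| project TimeGenerated, msg_s
| order by TimeGenerated desc
```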

Steps to Deploy and Configure Azure Firewall

Step 1: Set Up the Network

Create a Resource Group
Sign in to the Azure portal:

  • Navigate to Azure Portal.
    • Use your credentials to sign in.
  • Create a Resource Group:
    • On the Azure portal menu, select Resource groups or search for and select Resource groups from any page.
    • Click Create.
    • Enter the following values:
      • Subscription: Select your Azure subscription.
      • Resource group: Enter Test-FW-RG.
      • Region: Select a region (ensure all resources you create are in the same region).
    • Click Review + create and then Create.
  • Create a Virtual Network (VNet)
    • On the Azure portal menu or from the Home page, select Create a resource.
    • Select Networking and search for Virtual network, then click Create.
    • Enter the following values:
      • Subscription: Select your Azure subscription.
      • Resource group: Select Test-FW-RG.
      • Name: Enter Test-FW-VN.
      • Region: Select the same region as the resource group.
  • Click Next: IP Addresses.
    • Configure IP Addresses:
    • Set the Address space to 10.0.0.0/16.
      • Create two subnets:
      • AzureFirewallSubnet: Enter 10.0.1.0/26.
      • Workload-SN: Enter 10.0.2.0/24.
  • Click Next: Security.
    • Configure Security Settings:
    • Leave the default settings for Security.
  • Click Next: Tags.
    • Add Tags (Optional):
    • Tags are useful for organizing resources. Add any tags if needed.
  • Click Next: Review + create.
    • Review and Create:
    • Review the settings and click Create.

 

[Image: Virtual network configuration]

Step 2: Deploy the Firewall

Create the Firewall:

  • On the Azure portal menu, select Create a resource.
    • Search for Firewall and select Create.
    • Enter the following values:
      • Subscription: Select your Azure subscription.
      • Resource group: Select Test-FW-RG.
      • Name: Enter Test-FW.
      • Region: Select the same region as the resource group.
      • Virtual network: Select Test-FW-VN.
      • Subnet: Select AzureFirewallSubnet.
  • Click Next: IP Addresses.
    • Configure IP Addresses:
    • Assign a Public IP Address:
      • Click Add new.
      • Enter a name for the public IP address, e.g., Test-FW-PIP.
      • Click OK.
  • Click Next: Tags.
    • Add Tags (Optional):
    • Add any tags if needed.
  • Click Next: Review + create.
    • Review and Create:
    • Review the settings and click Create.

[Image: Firewall deployment]

Step 3: Configure Firewall Rules

Create Application Rules

  • Navigate to the Firewall:
    • Go to the Resource groups and select Test-FW-RG.
    • Click on Test-FW.
  • Configure Application Rules:
    • Select Rules from the left-hand menu.
    • Click Add application rule collection.
      • Enter the following values:
      • Name: Enter AppRuleCollection.
      • Priority: Enter 100.
      • Action: Select Allow.
      • Rules: Click Add rule.
      • Name: Enter AllowGoogle.
      • Source IP addresses: Enter *.
      • Protocol: Select http, https.
      • Target FQDNs: Enter www.google.com.
    • Click Add.
  • Create Network Rules
  • Configure Network Rules:
    • Select Rules from the left-hand menu.
    • Click Add network rule collection.
    • Enter the following values:
      • Name: Enter NetRuleCollection.
      • Priority: Enter 200.
      • Action: Select Allow.
      • Rules: Click Add rule.
      • Name: Enter AllowDNS.
      • Source IP addresses: Enter *.
      • Protocol: Select UDP.
      • Destination IP addresses: Enter 8.8.8.8, 8.8.4.4.
      • Destination ports: Enter 53.
    • Click Add.
  • Create NAT Rules
    • Configure NAT Rules:
      • Select Rules from the left-hand menu.
      • Click Add NAT rule collection.
      • Enter the following values:
        • Name: Enter NATRuleCollection.
        • Priority: Enter 300.
        • Action: Select DNAT.
        • Rules: Click Add rule.
        • Name: Enter AllowRDP.
        • Source IP addresses: Enter *.
        • Protocol: Select TCP.
        • Destination IP addresses: Enter the public IP address of the firewall.
        • Destination ports: Enter 3389.
        • Translated address: Enter the private IP address of the workload server.
        • Translated port: Enter 3389.
      • Click Add.

[Image: NAT rule collection for RDP]

Step 4: Test the Firewall

  • Deploy a Test VM:
    • Create a virtual machine in the Workload-SN subnet.
    • Ensure it has a private IP address within the 10.0.2.0/24 range.
  • Test Connectivity:
    • Attempt to access www.google.com from the test VM to verify the application rule.
    • Attempt to resolve DNS queries to 8.8.8.8 and 8.8.4.4 to verify the network rule.
    • Attempt to connect via RDP to the test VM using the public IP address of the firewall to verify the NAT rule.
  • Monitoring and Managing Azure Firewall
    • Integrate with Azure Monitor:
      • Navigate to the firewall resource.
        • Select Logs from the left-hand menu.
        • Configure diagnostic settings to send logs to Azure Monitor, Log Analytics, or Event Hubs.
  • Analyze Logs:
    • Use Azure Monitor to view and analyze firewall logs.
    • Create alerts and dashboards to monitor firewall activity and performance.

[Image: Firewall test]

Best Practices for Azure Firewall

To maximize the performance of your Azure Firewall, it’s important to follow best practices. Here are some recommendations:

  • Optimize Rule Configuration and Processing: Organize rules using firewall policy into Rule Collection Groups and Rule Collections, prioritizing them based on their frequency of use.
  • Use or Migrate to Azure Firewall Premium: Azure Firewall Premium offers a higher-performing underlying engine and includes built-in accelerated networking software.
  • Add Multiple Public IP Addresses to the Firewall: Consider adding multiple public IP addresses (PIPs) to your firewall to prevent SNAT port exhaustion.
Inventory Management 25B – Path to Redwood Experience 1.2.3.

I have been writing about the Redwood Experience with Supply Chain Management, especially Inventory Management. Oracle has gone all-in with the Redwood Experience in Inventory Management in 25B.

The 25B Inventory Management readiness documentation lists all the new features and how to use them, so I will not repeat this well-written document: https://docs.oracle.com/en/cloud/saas/readiness/scm/25b/inv25b/index.html

For the previous features in Redwood, please consider visiting the Readiness documentation: https://docs.oracle.com/en/cloud/saas/readiness/scm-all.html

This page is my personal favorite since it makes features easy to find, along with their documentation.

1. Why?

You may be asking the question: why is the Redwood so hot and why do I have to transform?

If you are an Oracle customer or you have been in the Oracle space for a while (I have been in it for almost three decades), you know that once Oracle sets a vision and starts delivering new technology, it becomes the future. We witnessed this when Oracle moved the business applications from 10.7 character mode to 10SC (Smart Client) and 10NCA (Network Computing Architecture). We went from character mode to GUI. It wasn’t easy or quick, but it happened. Then we moved through major releases of EBS and got used to the Self Service architecture.

Oracle delivered Fusion Applications a long time ago, and we have witnessed each quarterly release add more functionality. Since 2024, Oracle has been improving the user interface and adding mobility to the Inventory Management pages, but the most radical improvements have happened in 25A and 25B. Now almost 100% of Inventory Management is in Redwood, and it’s the next generation of Cloud applications.

Redwood brings better usability and a better user interface, as I explained in my past blog https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/, but it also opens the door to Artificial Intelligence (AI).

Oracle is expected to release major AI improvements in 25C, which I plan to cover in a future blog. The Redwood Experience is a prerequisite for all the cool AI technology to work. Agentic AI features, or AI Agents, will be part of the Fusion Applications, which is a topic for another blog.

So, while the majority of the screens are optional, why not get ahead of the game and start adopting?

2. How?

You may be asking the question: what actions do I need to take to use Redwood?

Read the documentation. In Customer Connect, we are seeing many questions from the Oracle Community about Redwood pages not populating items or screens coming up blank. Please see this documentation for the important considerations:

https://docs.oracle.com/en/cloud/saas/readiness/scm/25a/inv25a/25A-inventory-wn-t65792.htm

By the way, if you have not registered for Oracle Customer Connect, I highly recommend it, so you can get in contact with your peer Oracle Community members and Oracle ACEs like myself who can respond to your questions: https://community.oracle.com/customerconnect/

Then please review the profile options for the new features. You will have to flip the profile options at the site level from No to Yes so that the features are enabled.

The documents I previously mentioned list the profile option names; to navigate, use the task bar in the Functional Setup Manager and search for Manage Administrative Profile Values.

3. What?

You may be asking the question: which Redwood pages should I use first?

Adoption is very critical when changing the user experience, and change management becomes critical when migrating from traditional cloud pages to the newly designed Redwood pages. What I would recommend is to first enable the configuration pages, so that the internal Oracle team and business analysts get a feel for the Redwood Experience.

Then there are a few pages that can benefit users, which I mentioned in my prior blog: https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/

One bold move is to flip all features to Redwood and start testing internally in a lower pod first. Oracle has designed this so companies have time to take on as much as they can over an unspecified period; as of today, Oracle has not announced when the Redwood Experience will be mandatory. Most pages can be switched back and forth, but please read each feature’s release note to see if there is a note explicitly saying that once it’s turned on, there is no path back.

In conclusion, the future of Oracle Fusion Applications is in the Redwood Experience and built-in AI, so I recommend that you start to adopt and use it.

Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.

Starting Redwood Experience with 25A Inventory Management

Oracle has delivered many features in Redwood Experience as of 25A. The purpose of this blog is to give a taste of a few Redwood pages and provide a recommendation to migrate to Redwood Experience on select pages to ease the transition and adoption. The pages I’ll cover in this blog are:

  • Item Quantities (Manage Item Quantities)
  • Inventory Transactions (Review Completed Transactions)
  • Lot and Serial Numbers (Manage Lots and Manage Serials)
  • Review Item Supply and Demand (Review Item Supply and Demand)

*names in parentheses are the traditional cloud menu entries

Important Step before you start

Oracle has documented the important actions to be taken before Redwood enablement in the 25A readiness notes. We recommend that you read and follow this documentation for a successful rollout: https://docs.oracle.com/en/cloud/saas/readiness/scm/25a/inv25a/25A-inventory-wn-t65792.htm This document also has a list of newly designed Redwood pages that you can pick and choose from for your company. I wanted to cover a few that seem to be a good starting point for Redwood enablement.

Item Quantities

“Manage Item Quantities” is one of the most visited pages in Oracle Fusion Inventory Management. It’s a very functional page where one can see on-hand quantities as well as incoming stock and stock in receiving. The page also allows various actions and provides additional information. I’ll share a few observations to explain the look and feel of the Redwood pages.

The traditional page offered great functionality but lacked the ability to export the results. Also, the nodes on the page forced users to drill down multiple levels, for example from item to subinventory and locator, then to lot number and project and task. This multi-node architecture was good for viewing information but prevented users from seeing all data in one place and exporting it.

The Redwood Experience “Item Quantities” page takes the “Manage Item Quantities” page to the next level. Immediately, the user will enjoy a more modern, responsive experience, with the data displayed in a tabular format.

The Redwood Experience gives the user a flexible layout: easily add and remove filters, follow the deep link to the lot or serial number, and see on-hand quantities alongside other measures such as available to transact, available to reserve, and inbound/in-receiving quantities, all in one line. This is a huge improvement compared to the traditional Manage Item Quantities page. The Redwood pages are user friendly and support downloading the on-hand quantities without additional reporting; in this case, I can download all items in my inventory organization by clicking the export button. Clicking the Inbound deep link shows the user inbound details, and clicking the Lot under Item Control takes the user to the lot information page. After testing the Redwood Item Quantities page, I was very pleased with the improvements:

  1. Seamless navigation to various pages within the same user interface
  2. Ability to export the tabular data (functionality highly sought after by users)
  3. All information in one place, no nodes, no drilling
  4. Additional functionality of directly creating subinventory transfers, transfer orders, miscellaneous transactions and more

One caveat I observed on this page is that the item description cannot currently be added to the Redwood Experience, though it is available if the user clicks on the item number. There is also a need to scroll down to the last record so that the export includes all results. After carefully reviewing the new Redwood Experience and testing the page thoroughly, I believe this is one of the Redwood pages that can be enabled first to transition from the traditional user interface to the Redwood Experience.

Inventory Transactions

This Redwood Experience page replaces the Completed Transactions page. Once again, it’s another Redwood page that is easy to adopt and use. It has all the bells and whistles of the traditional cloud page and more. What I like about the Redwood Experience is that it is so easy to add and remove columns: one can quickly scroll through the available columns, check or uncheck multiple columns at a time, and use the Control+F (Find) feature of the browser. Another improvement is that the user can share saved searches with others leveraging the filter save tool. This one has been on the wish list of many users.

Lot and Serial Numbers

The newly designed Lot and Serial Numbers Redwood page is also a great way to start the Redwood Experience, with one caveat: if you frequently jump to lot/serial transactions or on-hand quantities from the manage lot/serial pages, this capability is not yet available in the Redwood Experience as of 25A.

Review Item Supply and Demand

Another easy-to-adopt Redwood page is the Item Supply and Demand page. All functionality is the same, but the Redwood Experience adds columns such as supply/demand quantity in separate columns, Party, Work Order Description, Shipping Priority, and Created By to give the user better information.

User acceptance and adoption come with time, so the sooner the transition begins, the more successful the implementation will be. Perficient can help you with your transition from traditional Fusion or legacy on-prem applications to the SCM Redwood experience. When you are ready to take the first step and you’re looking for some advice, contact us. Our strategy is to craft a path for our clients that makes the transition as seamless as possible for the user community and their support staff.

 

Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.

Securely Store API, Bearer and Auth Tokens with Platform Cache

Imagine you are retrieving an API key or bearer token from an external system to make a new callout to another external system. But there is one issue: you cannot determine when the new callout will take place, so you need to store the token value somewhere in your org for later use. You may be thinking of storing it in a custom object or custom metadata, but creating a separate object/metadata for such a case may not be efficient. Then what? This is where Platform Cache helps you.

Platform Cache provides temporary storage for data in such cases. You can set up your platform cache by following the steps outlined in this trailhead. Here, I will explain how you can use the platform cache for this use case. I expect that you have already configured your platform cache in your dev org. Now suppose you receive a bearer token from one API to pass into another callout after some time; you can store that access token in the Platform Cache and avoid the hassle of creating a separate custom object/metadata or custom setting (although you cannot update custom metadata/settings using Apex code).

The Trailhead referenced in this article is crucial for understanding and setting up the platform cache. If you have not gone through it, stop here and complete it first.

How to Use Platform Cache to Fetch a Bearer Token

How to fetch a bearer token and save it into the Platform cache using Apex:

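The original post showed this as a screenshot; below is a minimal sketch, assuming an org cache partition named TokenPartition and a hypothetical token endpoint. Adjust both to your org.

```
// Minimal sketch, assuming an org cache partition named "TokenPartition"
// and a hypothetical token endpoint. Adjust both to your org.
public with sharing class TokenCacheService {

    public static void fetchAndCacheToken() {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/oauth/token'); // hypothetical endpoint
        req.setMethod('POST');

        HttpResponse res = new Http().send(req);
        Map<String, Object> body =
            (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
        String token = (String) body.get('access_token');

        // Key format is local.<PartitionName>.<key>; TTL is in seconds
        Cache.Org.put('local.TokenPartition.bearerToken', token, 3600);
    }
}
```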

Now, you can fetch the bearer token from the Platform cache and use it in your next callout with the following code:

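Again, a minimal sketch under the same assumptions (the same class and partition as above, plus a hypothetical target endpoint). Note the null check: cached entries can be evicted at any time.

```
// Minimal sketch (same assumed class/partition as above). Note the null
// check: cached entries can be evicted under memory pressure at any time.
public static HttpResponse callWithCachedToken() {
    String token = (String) Cache.Org.get('local.TokenPartition.bearerToken');
    if (token == null) {
        TokenCacheService.fetchAndCacheToken(); // re-fetch if evicted
        token = (String) Cache.Org.get('local.TokenPartition.bearerToken');
    }

    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://example.com/api/resource'); // hypothetical endpoint
    req.setMethod('GET');
    req.setHeader('Authorization', 'Bearer ' + token);
    return new Http().send(req);
}
```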

 

Remember, Platform Cache provides temporary storage, and data can be evicted from it under memory pressure. Platform Cache does not guarantee the lifetime of a stored value; therefore, it is better not to store values that must remain valid for an extended period, or sensitive information.

Although the Platform Cache is used to improve performance by avoiding unnecessary repeated API calls, there are a few cases where you should not use it.

When Not to Use Platform Cache

  • If the value contains sensitive information
  • If the value has to be stored for the long term
  • If you need environment-specific secret management

References

External System used: https://www.reqres.in
Technology used: Salesforce

Azure IoT Operations: Empowering the Future of Connectivity and Automation

In today’s world, the Internet of Things (IoT) is revolutionizing industries across the globe by connecting devices, systems, and people in ways that were once unimaginable. From smart homes to advanced manufacturing, IoT is creating new opportunities for innovation, efficiency, and data-driven decision-making. At the forefront of this transformation is Microsoft Azure, a cloud computing platform that has become a powerhouse for IoT operations.

In this blog, we’ll dive into the essentials of Azure IoT operations, how it simplifies and optimizes IoT workflows, and the key features that make it the go-to solution for businesses looking to scale their IoT systems.

What is Azure IoT?

Azure IoT is a set of services, solutions, and tools from Microsoft that allow businesses to securely connect, monitor, and control IoT devices across various environments. Azure IoT offers comprehensive capabilities for deploying IoT solutions, managing the entire device lifecycle, and providing data-driven insights that enhance decision-making processes.

At the core of Azure IoT is the ability to integrate IoT devices with cloud-based analytics, machine learning, and data processing tools, enabling businesses to leverage real-time data for more informed decisions.

Why Azure for IoT Operations?

Microsoft Azure is one of the leading cloud platforms that offer robust, scalable, and secure solutions for managing IoT operations. It is built on a foundation of powerful infrastructure that allows organizations to quickly deploy IoT systems and scale them based on evolving business needs. Here’s why businesses choose Azure for their IoT operations:

Scalability

Azure IoT can support millions of devices, allowing businesses to scale their IoT systems from a handful of devices to a vast ecosystem of connected devices across different locations. This scalability ensures that companies can grow their IoT initiatives without being limited by infrastructure constraints.

Security

Security is a major concern when managing IoT devices, as these devices are often vulnerable to cyberattacks. Azure IoT integrates comprehensive security measures, including device identity management, encryption, secure data transmission, and compliance with industry standards. This ensures that IoT operations are protected from potential threats.

Data Analytics and Insights

Azure provides a suite of analytics tools, including Azure Machine Learning, Power BI, and Azure Stream Analytics, that enable businesses to analyze data generated by their IoT devices in real-time. This enables predictive maintenance, real-time monitoring, and data-driven decision-making, ultimately improving operational efficiency.

Integration with Other Microsoft Services

Azure IoT seamlessly integrates with other Microsoft tools and services, such as Office 365, Microsoft Teams, and Azure Active Directory. This enables businesses to seamlessly integrate IoT data into their existing workflows and processes, fostering collaboration and facilitating more informed business operations.

Cross-Platform Compatibility

Azure IoT supports a wide variety of devices, operating systems, and protocols, making it a versatile solution that can integrate with existing IoT deployments, regardless of the technology stack. This interoperability allows businesses to get the most out of their IoT investments.

 


Key Components of Azure IoT Operations

Azure IoT comprises various tools and services that address different aspects of IoT operations, ranging from device connectivity to data processing and analytics. Let’s explore some of the key components:

Azure IoT Hub

At the heart of Azure IoT operations is Azure IoT Hub, a fully managed service that enables secure and reliable communication between IoT devices and the cloud. It allows businesses to connect, monitor, and control millions of IoT devices from a single platform.

The IoT Hub provides two-way communication, allowing devices to send data to the cloud while also receiving commands. This bi-directional communication is crucial for remote monitoring, updating device configurations, and managing device health in real-time.

Azure Digital Twins

Azure Digital Twins is an advanced service that enables businesses to create digital models of physical environments. These models are used to visualize and analyze IoT data, providing a more comprehensive understanding of how devices and systems interact in the real world.

By utilizing Azure Digital Twins, organizations can optimize their operations, enhance asset management, and simulate scenarios for predictive maintenance and improved energy efficiency.

Azure IoT Edge

Azure IoT Edge enables businesses to run Azure services, including machine learning, analytics, and AI, directly on IoT devices at the edge of the network. This reduces latency and enables faster decision-making by processing data locally, rather than relying solely on cloud-based processing.

IoT Edge is ideal for scenarios where real-time data processing is critical, such as autonomous vehicles, industrial automation, or monitoring of remote assets.

Azure IoT Central

Azure IoT Central is a fully managed IoT SaaS (Software as a Service) solution that simplifies the deployment, management, and monitoring of IoT applications. With IoT Central, businesses can quickly deploy IoT solutions without requiring deep technical expertise in cloud infrastructure.

It offers an intuitive interface for managing devices, setting up dashboards, and creating alerts. IoT Central significantly reduces the complexity and time required to deploy IoT systems.

Azure Time Series Insights

Azure Time Series Insights is a fully managed analytics and storage service for time-series data. It is specifically designed for handling large volumes of data generated by IoT devices, such as sensor data, telemetry data, and event logs.

Time Series Insights offers powerful visualization and querying capabilities, enabling businesses to uncover trends and patterns in their IoT data. This is especially useful for monitoring long-term performance, detecting anomalies, and optimizing processes.

Optimizing IoT Operations with Azure

Azure IoT operations can be further optimized by integrating advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML). These technologies enable businesses to collect and store IoT data, as well as derive actionable insights from it.

Predictive Maintenance

By analyzing IoT data, Azure IoT can predict equipment failures before they occur. Using machine learning algorithms, businesses can identify patterns that indicate potential breakdowns and perform maintenance only when necessary, reducing downtime and maintenance costs.

Smart Automation

Azure IoT enables businesses to automate processes based on real-time data. For example, in smart factories, devices can automatically adjust production lines based on environmental conditions, inventory levels, or supply chain disruptions, increasing efficiency and reducing human error.

Energy Management

Azure IoT can help businesses optimize energy usage by continuously monitoring energy consumption and adjusting operations accordingly. Smart building solutions, for example, can automatically control lighting, heating, and cooling systems to reduce energy waste and lower costs.


Use Case

Azure IoT Operations Configuration Example: A Step-by-Step Guide

When configuring Azure IoT operations, you’re setting up a system where devices can securely connect to the cloud, send telemetry data, and receive commands. Let’s walk through a practical configuration example using Azure IoT Hub, a key service in Azure IoT operations, by following the tutorial linked below.

Tutorial: Send telemetry from an IoT Plug and Play device to Azure IoT Hub
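To give a flavor of the device side of that tutorial, here is a minimal sketch using the Python azure-iot-device SDK; the connection string is a placeholder you would copy from your own IoT hub.

```
# Minimal sketch - assumes the azure-iot-device Python package and a device
# connection string copied from your own IoT hub (placeholder below).
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# Send a few telemetry readings, mimicking a temperature sensor
for reading in range(3):
    payload = json.dumps({"temperature": 20 + reading})
    client.send_message(Message(payload))
    time.sleep(1)

client.shutdown()
```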

Conclusion

Azure IoT operations are transforming how businesses leverage the Internet of Things to improve efficiency, enhance customer experiences, and unlock new revenue streams. With its powerful cloud infrastructure, end-to-end solutions, and integration with Microsoft’s suite of tools, Azure is a leading choice for businesses looking to capitalize on the potential of IoT.

By deploying Azure IoT, companies can connect their devices, analyze real-time data, optimize operations, and make data-driven decisions that enhance their bottom line. Whether you’re starting small with a few connected devices or deploying large-scale, enterprise-wide IoT solutions, Azure provides the tools, security, and scalability needed to succeed in the world of connected technology.

As IoT continues to evolve, Azure will undoubtedly remain at the forefront of this exciting and transformative field, helping businesses drive innovation and stay competitive in an increasingly connected world.

How to Optimize Sitecore Headless and Next.js on Vercel

Maybe you’ve already made the switch to XM Cloud, or maybe you’re still evaluating it as the answer to all your digital delivery challenges. Spoiler alert: it won’t magically solve everything — but with the right setup and smart optimizations, it can absolutely deliver fast, scalable, and maintainable experiences.

If you’re using Sitecore Headless with Next.js, you’re already building on a modern and flexible foundation. Add in a deployment platform like Vercel, and you’ve got serious power at your fingertips. But unlocking that potential requires knowing where to fine-tune — both at the application and platform level.

Streamline Your Layout and API Payloads

The Sitecore Layout Service is versatile but can return bulky JSON payloads if left unchecked. Clean up your responses by:

  • Removing unused placeholders and renderings

  • Filtering out internal tracking or analytics fields unless explicitly needed

  • Configuring the Layout Service to tailor the response to your frontend needs

If you’re using Sitecore Search or XM Cloud with GraphQL, concise queries will help keep your pages fast and predictable:

  • Request only the fields you need

  • Use first: or limit: to control result size
  • Organize queries into reusable fragments for maintainability and performance

Smaller payloads result in faster hydration, quicker time-to-interactive, and lower bandwidth usage — all especially valuable for mobile-heavy audiences.
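As an illustration, a trimmed query might look like the sketch below; the content path, field names, and search arguments are assumptions standing in for your actual schema, not exact XM Cloud field names.

```
# Minimal sketch - requests only what the component renders and caps results.
# Paths and field names are illustrative, not your exact schema.
query ProductTeasers {
  search(
    where: { name: "_path", value: "/sitecore/content/products", operator: CONTAINS }
    first: 10
  ) {
    results {
      name
      url { path }
    }
  }
}
```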

Use Webhooks for Smarter Publishing (On-demand Revalidation or ODR)

Don’t rely on manual rebuilds or blanket cache clears. XM Cloud supports webhooks on publish, which opens the door to smarter automation:

  • Trigger on-demand ISR revalidation for updated pages

  • Push new content to Edge Config, CDNs, or search indexes

  • Notify external systems (e.g., analytics, commerce, personalization) immediately

It’s the best way to keep content fresh without sacrificing performance or rebuilding the entire site.
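For instance, a publish webhook can call a small API route that triggers on-demand revalidation. Here is a minimal sketch (pages router); the secret and path query parameters are assumptions you would align with your own webhook payload.

```
// pages/api/revalidate.ts - minimal on-demand ISR sketch.
// Secret and path parameters are assumptions; match them to your webhook.
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    // Re-generate the static page for the published item
    await res.revalidate(String(req.query.path ?? '/'));
    return res.json({ revalidated: true });
  } catch {
    return res.status(500).json({ message: 'Error revalidating' });
  }
}
```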

Choose the Right Rendering Method: SSR, SSG, or ISR?

Not every page needs to be dynamic, and not every page should be static. Picking the right rendering strategy is critical — especially in a Sitecore headless app where you’re mixing marketing content with personalization and real-time updates.

Here’s how to decide:

Use SSR (Server-Side Rendering) when:

  • The page depends on the user session or request (e.g., personalization, authenticated pages)

  • You’re rendering in preview mode for content authors

Use SSG (Static Site Generation) when:

  • The content rarely changes (e.g., static landing pages or campaigns)

  • You want instant load times and no server cost

Use ISR (Incremental Static Regeneration) when:

  • Content changes periodically, but not per-request

  • You want to combine the speed of static with the freshness of dynamic
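To make ISR concrete, here is a minimal page-level sketch (pages router); fetchPageContent is a hypothetical stand-in for your Layout Service or GraphQL call.

```
// pages/landing.tsx - minimal ISR sketch.
// fetchPageContent is a hypothetical stand-in for a Layout Service call.
import type { GetStaticProps } from 'next';

type PageContent = { title: string };

async function fetchPageContent(path: string): Promise<PageContent> {
  return { title: `Content for ${path}` }; // placeholder fetch
}

export const getStaticProps: GetStaticProps = async () => {
  const content = await fetchPageContent('/landing');
  return {
    props: { content },
    revalidate: 60, // re-generate in the background at most once per minute
  };
};

export default function Landing({ content }: { content: PageContent }) {
  return <h1>{content.title}</h1>;
}
```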

Use next/link with Prefetching

If you’re still using regular <a> tags or not thinking about navigation performance, this one’s for you. The next/link component enables fast, client-side routing and automatic prefetching of pages in the background.

Example:

import Link from 'next/link';

<Link href="/products" prefetch={true}>Products</Link>
  • Use it for all internal links

  • Set prefetch={true} on high-priority routes

  • Check behavior in your browser’s network tab — look for .json page data being fetched in advance

This alone can make your site feel instantly faster to users.

Optimize Fonts with next/font

Sitecore headless apps don’t include next/font by default, but it’s worth integrating. It allows you to self-host fonts in a performance-optimized way and avoid layout shifts.

Example:

import { Inter } from 'next/font/google';

const inter = Inter({ subsets: ['latin'] });

Apply fonts globally or per-page to improve loading consistency and avoid FOUT (Flash of Unstyled Text). Better fonts = better user experience.

Clean Up Your Codebase

Performance isn’t just about server-side logic — it’s also about keeping your codebase lean and clean.

What to review:

  • Old personalization plugins that are no longer used

  • Middleware that’s too permissive or generic in its matching

  • Outdated multisite logic if you’ve already split into multiple Vercel projects

  • Unused components or fetch logic in shared utilities

Use Vercel performance insights to track slow routes and spot cold starts.

Enable Fluid Compute

Fluid Compute lets Vercel reuse idle time across your serverless functions. That means better performance and lower costs — without any code changes.

To enable it:

  • Go to your Vercel project settings

  • Navigate to Functions

  • Toggle Fluid Compute on

You can monitor the impact under Observability → Logs in your dashboard. It’s a low-effort win. Read more details about Fluid Compute in my previous blog!

Be Selective with Middleware

Next.js middleware is powerful but potentially expensive in performance terms. Use it wisely:

  • Limit middleware to only essential routes

  • Avoid using fetch() inside middleware — use Edge Config instead

  • Replace multisite plugins with separate Vercel projects

  • Audit unused or legacy logic, especially leftover personalization

Track middleware behavior through the Middleware tab in Vercel Logs.

Manage Redirects with Edge Config

For the fastest possible redirects, manage them directly in Vercel using Edge Config. This keeps Sitecore out of the request path and ensures instant resolution at the edge.

  • Store all redirect data in Edge Config
  • Deploy updates as part of your app or via external config tools
  • Avoid real-time fetches from Sitecore for redirect logic

If you’re managing a large volume of redirects, consider using a bloom filter to optimize memory usage. Just note that bloom filters introduce a small delay due to redirect verification.
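A minimal sketch of what that can look like in middleware, assuming the @vercel/edge-config package and an Edge Config item named redirects that maps old paths to new ones:

```
// middleware.ts - Edge Config-backed redirects, resolved at the edge.
// Assumes an Edge Config item "redirects": { "/old-path": "/new-path", ... }
import { NextRequest, NextResponse } from 'next/server';
import { get } from '@vercel/edge-config';

export const config = { matcher: ['/((?!_next|api).*)'] };

export async function middleware(req: NextRequest) {
  const redirects = await get<Record<string, string>>('redirects');
  const destination = redirects?.[req.nextUrl.pathname];
  if (destination) {
    return NextResponse.redirect(new URL(destination, req.url), 308);
  }
  return NextResponse.next();
}
```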

Conclusion

Optimizing a Sitecore Headless application, especially one deployed on Vercel, is about making dozens of small, smart decisions that add up to big wins in performance, scalability, and developer happiness. Whether it’s pruning your Layout Service output or toggling a setting in your Vercel dashboard, each move brings you closer to a faster, more responsive site.

XM Cloud doesn’t come pre-optimized — but that’s actually a good thing. It gives you the power and flexibility to build the way you want. Just make sure you’re building it right.

Optimization Checklist

Sitecore & XM Cloud

  • Prune Layout Service JSON (remove unused placeholders and fields)

  • Use GraphQL efficiently (limit queries, use fragments)

  • Set up publish webhooks for on-demand rendering or cache purging

Rendering Strategy

  • Use SSR for personalized/authenticated content

  • Use SSG for static pages

  • Use ISR for hybrid performance/freshness

Next.js

  • Replace <a> with next/link and enable prefetching

  • Add next/font for consistent and fast font rendering

Vercel

  • Enable Fluid Compute for better serverless efficiency

  • Use middleware only where necessary and avoid fetch inside

  • Use Edge Config for fast redirect handling

  • Monitor logs and performance insights for slow routes and cold starts

IOT and API Integration With MuleSoft: The Road to Seamless Connectivity

In today’s hyper-connected world, the Internet of Things (IoT) is revolutionizing industries across the globe by connecting devices, systems, and people in ways that were once unimaginable. From smart homes to advanced manufacturing, IoT is creating new opportunities for innovation, efficiency, and data-driven decision-making. However, the real potential of IoT lies in connecting continuously with enterprise systems, providing real-time insights and automation. This is where MuleSoft’s Anypoint Platform comes in, a platform that integrates IoT devices and APIs to create a connected ecosystem. This blog explains how MuleSoft sets the stage for seamless connectivity and provides a strong foundation for IoT and API integration, offering scalability, security, and efficiency beyond a single dashboard.

Objective

In this blog, I will show MuleSoft’s ability to integrate IoT devices with enterprise systems through API connectivity, focusing on real-time data processing. I will provide an example of how MuleSoft’s Anypoint Platform connects to an MQTT broker and processes IoT device sensor data. The example highlights MuleSoft’s ability to handle IoT protocols like MQTT and transform data for insights.

How Does MuleSoft Facilitate IoT Integration?

MuleSoft’s Anypoint Platform combines API-led connectivity, native protocol support, and a comprehensive integration framework to handle the complexities of IoT integration. Here is how MuleSoft makes IoT integration manageable:

  1. API Connectivity for Scalable Ecosystems

MuleSoft’s API strategy categorizes integrations into System, Process, and Experience APIs, allowing modular connections between IoT devices and enterprise systems. For example, in a smart city, System APIs gather data from traffic sensors while Experience APIs surface insights in a dashboard. This scalability avoids the chaos of point-to-point integrations, a fault in most visualization-focused tools.

  2. Native IoT Protocol Support

IoT devices rely on protocols such as MQTT, AMQP, and CoAP, all of which MuleSoft supports. This enables direct communication between sensors and gateways without extra middleware. In such a scenario, MuleSoft is better able to connect MQTT data from temperature sensors to a cloud platform such as Azure IoT Hub than other tools that require custom plugins.

  3. Real-Time Processing and Automation

IoT requires real-time data processing, and MuleSoft’s runtime engine processes data streams in real time while supporting automation. For example, if a factory sensor picks up a fault, MuleSoft can invoke an API to notify maintenance teams and update systems. MuleSoft integrates visualization with actionable workflows.

  4. Pre-Built Connectors for Setup

MuleSoft’s Anypoint Exchange provides connectors for IoT platforms (e.g., AWS IoT) and enterprise systems (e.g., Salesforce). In healthcare, connectors link patient wearables to EHRs, reducing development time. This plug-and-play approach beats custom integrations commonly required by other tools.

  5. Centralized Management and Security

IoT devices manage sensitive information, and MuleSoft maintains security through API encryption and OAuth. Its Management Center provides a dashboard to track device health and data flows, offering centralized control that standalone dashboard applications cannot provide without additional infrastructure.

  6. Hybrid and Scalable Deployments

MuleSoft’s hybrid model supports both on-premises and cloud environments, providing flexibility for IoT deployments. Its scalability handles growing networks, such as fleets of connected vehicles, making it a future-proof solution.

Building a Simple IoT Integration with MuleSoft

To demonstrate MuleSoft’s IoT integration, I created a simple flow in Anypoint Studio that connects to an MQTT broker, processes sensor data, and logs the result to the console. The flow uses a public MQTT broker, with the MQTT Explorer client simulating IoT sensor data. The following are the steps for the Mule API flow:

Api Flowchart

Step 1: Setting Up the Mule Flow

In Anypoint Studio, create a new Mule project (e.g., ‘IoT-MQTT-Demo’). Design a flow with an MQTT Connector to connect to the broker, a Transform Message component to process the data, and a Logger to output the results.

Step1

Step 2: Configuring the MQTT Connector

Configure the MQTT Connector properties. In General Settings, point the connection at a public broker (“tcp://test.mosquitto.org:1883”). Add the topic filter “iot/sensor/data” and select QoS “AT_MOST_ONCE”. (A scripted check of the same settings follows the screenshot below.)

Step2
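For readers who want to sanity-check these broker settings outside Anypoint Studio, here is a minimal Python sketch that subscribes to the same topic with the same QoS. It assumes the paho-mqtt 1.x client library and is only an illustration, not part of the Mule flow:

```
# Minimal subscriber to verify the broker settings used by the Mule flow.
# Assumes the paho-mqtt 1.x client library (pip install "paho-mqtt<2").
import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    # Print each payload received on the sensor topic.
    print(message.topic, message.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("test.mosquitto.org", 1883)   # same public broker as the Mule config
client.subscribe("iot/sensor/data", qos=0)   # QoS 0 corresponds to AT_MOST_ONCE
client.loop_forever()
```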

Step 3: Transforming the Data

Use DataWeave to parse the incoming JSON payload (e.g., ‘{“temperature”: 25.5 }’) and add a timestamp. The DataWeave code is:

```
%dw 2.0
output application/json
---
{
  sensor: "Temperature",
  value: read(payload, "application/json").temperature default "",
  timestamp: now()
}
```

Step3
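For readers less familiar with DataWeave, the same transformation can be expressed in plain Python. This is an illustrative sketch only, not part of the Mule flow, and the function name transform is hypothetical:

```
import json
from datetime import datetime, timezone

def transform(payload: bytes) -> str:
    # Parse the incoming JSON payload, e.g. b'{"temperature": 25.5}'.
    data = json.loads(payload)
    # Mirror the DataWeave output: sensor label, value with a default, timestamp.
    return json.dumps({
        "sensor": "Temperature",
        "value": data.get("temperature", ""),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

print(transform(b'{"temperature": 25.5}'))
```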

Step 4: Connect to MQTT

Click on Connections and use the credentials shown below to connect to the MQTT broker:

Step4

 Step 5: Simulating IoT Data

Once the connection is established, use MQTT Explorer to publish a sample message ‘{“temperature”: 28 }’ to the topic ‘iot/sensor/data’; the message is delivered to the Mule flow as shown below. (A scripted alternative follows the screenshot.)

Step5
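If you would rather script the test message than publish it manually from MQTT Explorer, a short Python sketch does the same thing. It assumes the paho-mqtt library, whose publish.single helper handles connect, send, and disconnect in one call:

```
# Publish one simulated sensor reading to the topic the Mule flow listens on.
# Assumes the paho-mqtt library (pip install paho-mqtt).
import json
from paho.mqtt import publish

publish.single(
    "iot/sensor/data",                       # topic the Mule flow subscribes to
    payload=json.dumps({"temperature": 28}),
    qos=0,                                   # matches AT_MOST_ONCE in the connector
    hostname="test.mosquitto.org",
    port=1883,
)
```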

Step 6: Logging the Output

Run the API and publish the message from MQTT Explorer; the processed data will be logged to the console. An example log is shown below:

Step6

The above example highlights MuleSoft’s process for connecting IoT devices, processing data, and preparing it for visualization or automation.

Challenges in IoT Integration and MuleSoft’s Solutions

IoT integration comes with several challenges that MuleSoft is designed to address:

  • Device and Protocol Diversity: IoT ecosystems involve many kinds of devices, such as sensors and gateways, using protocols like MQTT or HTTP and different data formats, such as JSON, XML, or binary.
  • Data Volume and Velocity: IoT devices generate high volumes of real-time data, which requires efficient processing to avoid bottlenecks.
  • Security and Authentication: IoT devices are often weakly secured and require protections like TLS for secure communication and OAuth for device authentication.
  • Data Transformation and Processing: IoT devices often emit compact binary payloads, which must be transformed into formats like JSON and enriched before use.

The Future of IoT with MuleSoft

The future of IoT with MuleSoft is promising. The Anypoint Platform addresses the critical integration issues: it connects diverse IoT devices and protocols such as MQTT, keeps data flowing across the ecosystem, supports real-time data processing and analytics integration, and secures it all with TLS and OAuth.

Conclusion

MuleSoft’s Anypoint Platform redefines IoT and API integration by providing a scalable, secure, real-time solution for connecting devices to enterprise systems. As the example shows, MuleSoft ingests MQTT-based IoT data and transforms it into useful insights without external scripts or custom middleware. By addressing challenges like data volume and security, MuleSoft provides a platform for building IoT ecosystems that deliver automation and insight. As IoT keeps growing, MuleSoft’s API connectivity and native protocol support position it as an enabler of smart cities, connected healthcare, and more. Explore MuleSoft’s Anypoint Platform to unlock the full potential of your IoT projects and set the stage for a connected future.

5 Questions to ask CCaaS Vendors as you Plan Your Cloud Migration https://blogs.perficient.com/2025/05/14/five-questions-to-ask-ccaas-vendors-as-you-plan-to-migrate-to-the-cloud/ https://blogs.perficient.com/2025/05/14/five-questions-to-ask-ccaas-vendors-as-you-plan-to-migrate-to-the-cloud/#comments Wed, 14 May 2025 14:46:58 +0000 https://blogs.perficient.com/?p=381363

Considering migrating your contact center operations to the cloud? Transitioning from a legacy on-premise solution to a Cloud Contact Center as a Service (CCaaS) platform offers significant advantages, including greater flexibility, scalability, improved customer experience, and potential cost savings. However, the success of this transition depends heavily on selecting the right vendor and ensuring alignment with your unique business requirements.  

Here are five essential questions to ask any CCaaS vendor as you plan your migration: 

1. How will your solution integrate with our existing systems?

Integration capabilities are key and may impact the effectiveness of your new cloud solution. Ensure that the proposed CCaaS platform easily integrates with or provides viable alternatives to your current CRM, workforce management solutions, business intelligence/reporting tools, and legacy applications. Smooth integrations are vital for maintaining operational efficiency and enhancing the customer and employee experience. 

2. What degree of customization and flexibility do you offer?

Every contact center has its own agent processes and customer interaction workflows. Verify that your CCaaS vendor allows customization of critical features like interactive voice response (IVR), agent dashboards, and reporting tools (to name just a few). Flexibility in customization ensures that the platform supports your business goals and enhances operational efficiency without disrupting established workflows. Also assess included AI-enabled features such as IVAs, real-time agent coaching, and customer sentiment analysis.

 3. Can you demonstrate robust security measures and regulatory compliance?

Data security and compliance with regulations like HIPAA, GDPR, or PCI are likely critical requirements for your organization. This can be especially true in industries that deal with sensitive customer or patient information. Confirm the vendor’s commitment to comprehensive security protocols, including the ability to redact or mask Personally Identifiable Information (PII). Ask your vendor for clearly defined compliance certifications and whether they conduct regular security audits.

 4. What are your strategies for business continuity and disaster recovery?

Uninterrupted service is critical for contact centers, and it’s essential to understand how the CCaaS vendor handles service disruptions, outages, and disaster scenarios. Ask about their redundancy measures, geographic data center distribution, automatic failover procedures, and guarantees outlined in their Service Level Agreements (SLAs).

 5. What level of training and support do you provide during and after implementation?

It is impossible to overstate the importance of good change management and enablement. Transitioning to a cloud environment involves adapting to new technologies and processes. Determine the availability of the vendor’s training programs, materials, and support channels.  

 By proactively addressing these five key areas, your organization can significantly streamline your migration process and ensure long-term success in your new cloud-based environment. Selecting the right vendor based on these criteria will facilitate a smooth transition and empower your team to deliver exceptional customer experiences efficiently and reliably. 

PIM for Azure Resources https://blogs.perficient.com/2025/05/14/pim-for-azure-resources/ https://blogs.perficient.com/2025/05/14/pim-for-azure-resources/#comments Wed, 14 May 2025 10:06:18 +0000 https://blogs.perficient.com/?p=381068

Privileged Identity Management

Privileged Identity Management (PIM) is a service in Microsoft Entra ID that enables you to manage, control, and monitor access to important resources in your organization. These resources include those in Microsoft Entra ID, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune. This blog is written to help those who want to set up just-in-time access for Azure resources, scoped to the subscription level only.

Why do we need PIM for Azure Resources?

Better Security for Important Access

PIM ensures that only the right people can access essential systems when needed and only for a short time. This reduces the chances of misuse by someone with powerful access.

Giving Only the Minimum Access

PIM ensures that people only have the access they need to do their jobs. This means they can’t access anything unnecessary, keeping things secure.

Time-Limited Access

With PIM, users can get special access for a set period. Once the time is up, the access is automatically removed, preventing anyone from holding on to unnecessary permissions.

Access When Needed

PIM gives Just-in-Time (JIT) Access, meaning users can only request higher-level access when needed, and it is automatically taken away after a set time. This reduces the chances of having access for too long.

Approval Process for Access

PIM lets you set up a process where access needs to be approved by someone (like a manager or security) before it’s given. This adds another layer of control.

Tracking and Monitoring

PIM keeps detailed records of who asked for and received special access, when they accessed something, and what they did. This makes it easier to catch any suspicious activities.

Temporary Admin Access

Instead of giving someone admin access all the time, PIM allows it to be granted for specific tasks. Admins only get special access when needed, and for as long as necessary, so there is less risk.

Meeting Legal and Security Standards

Some industries require companies to follow strict rules (like protecting personal information). PIM helps meet these rules by controlling who has access and keeping track of it for audits.

How to set up PIM in Azure

Create Security Group & Map to Subscriptions

  • Step 1: Create security groups for each Azure subscription to manage access control.
    • Security groups are created in Microsoft Entra ID. As illustrated in the snapshot below, use the global search box in the Azure portal to find the appropriate service.

Pim 1

 

  • Step 2: Select the service you need, then click New Group to create a new security group. Fill in all necessary details, including group name, description, and any other required attributes.

Pim 2

 

    • Create a separate group for each subscription.
    • If your account includes two subscriptions, such as Prod and Non-Prod, create distinct security groups for each subscription. This allows users to request access to a specific subscription.
    • Make the user a member of both groups, enabling them to choose which subscription resources they wish to activate.
    • The screenshot below shows the Demo-Group security group being created and assigned to its corresponding subscription. (A scripted alternative using Microsoft Graph follows the screenshot.)

Pim 3
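For teams that prefer scripting over the portal, the same security group can be created through the Microsoft Graph API. This is a hedged sketch, assuming the azure-identity and requests Python packages and an identity with the Group.ReadWrite.All permission; the mailNickname and description values are hypothetical:

```
# Sketch: create the Demo-Group security group via Microsoft Graph.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "displayName": "Demo-Group",
        "description": "PIM access group for its corresponding subscription",
        "mailEnabled": False,        # security group, not a mail-enabled group
        "mailNickname": "demo-group",
        "securityEnabled": True,
    },
)
resp.raise_for_status()
print("Created group with object ID:", resp.json()["id"])
```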

 

Navigate to PIM (Privileged Identity Management)

  • Step 3: In the Azure portal, navigate to Identity Governance and select Privileged Identity Management (PIM) to manage privileged access.

Pim 4

 

Enable PIM for Azure Resources

  • Step 4: Select the specific area within PIM that you wish to enable. For this setup, we are focusing on enabling PIM for subscription-level access to control who can activate privileged access for Azure subscriptions.
  • Step 5: Choose Azure Resources from the list of available options in PIM, as shown in the screenshot below.

Pim 5

 

    • An assignment needs to be created for the groups we created so that members of those groups will see an option to activate access for their respective subscriptions.
  • Step 6: As per the screenshots below, once you select Azure resources, select the subscription and group for which you want to create assignments.

Pim 6

 

Pim 7

 

    • As per the image below, under the Resource section, the subscription we want to grant permission on is selected; under Resource Type, subscription is selected. Choose the role you want to grant, and select the Demo-Group security group. (A scripted equivalent using the ARM REST API follows the screenshot.)

Pim 8
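The same eligible assignment can be created programmatically with the ARM REST API for PIM (Microsoft.Authorization/roleEligibilityScheduleRequests). A minimal Python sketch, assuming azure-identity and requests; the subscription ID, role definition GUID, and group object ID below are placeholders you would replace with real values:

```
# Sketch: create an eligible (PIM) role assignment at subscription scope via ARM REST.
import uuid
from datetime import datetime, timezone

import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"           # placeholder
ROLE_DEFINITION_ID = "<role-definition-guid>"   # placeholder, e.g. the Contributor role GUID
GROUP_OBJECT_ID = "<demo-group-object-id>"      # placeholder: Demo-Group's object ID

scope = f"/subscriptions/{SUBSCRIPTION_ID}"
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
    f"roleEligibilityScheduleRequests/{uuid.uuid4()}?api-version=2020-10-01"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {
        "principalId": GROUP_OBJECT_ID,
        "roleDefinitionId": f"{scope}/providers/Microsoft.Authorization/roleDefinitions/{ROLE_DEFINITION_ID}",
        "requestType": "AdminAssign",  # an admin creates the eligibility
        "scheduleInfo": {
            "startDateTime": datetime.now(timezone.utc).isoformat(),
            "expiration": {"type": "AfterDuration", "duration": "P365D"},
        },
    }},
)
resp.raise_for_status()
print("Eligibility request status:", resp.json()["properties"].get("status"))
```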

 

  • Step 7: Once the assignment is complete, users who are part of a group need to log out and log back in to see the changes applied. To view and activate your assignments in PIM, follow the steps below:

1. Navigate to the Assignments Section

  • Go to PIM (Privileged Identity Management) by selecting:
  • Entra ID → Identity Governance → PIM → Azure Resources → Activate Role.

2. Select Your Assignment

  • In this section, you will see a list of the assignments for which you are eligible.

3. Activate the Role

  • To activate a role, click Activate. By default, the activation is set for 8 hours. If necessary, you may adjust the duration by providing a justification when enabling the assignment.

4. Validation and Finalization

  • The system will take some time to validate your request. Once completed, the assignment will appear under Active Assignments. (A scripted activation sketch follows the screenshot below.)

Pim 12 1
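Activation itself can also be scripted: an eligible user submits a roleAssignmentScheduleRequests request with requestType SelfActivate. Again a hedged sketch with placeholder IDs and an example justification:

```
# Sketch: self-activate an eligible PIM role for 8 hours via ARM REST.
import uuid
from datetime import datetime, timezone

import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"          # placeholder
ROLE_DEFINITION_ID = "<role-definition-guid>"  # placeholder
MY_OBJECT_ID = "<my-user-object-id>"           # placeholder: the requesting user

scope = f"/subscriptions/{SUBSCRIPTION_ID}"
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
    f"roleAssignmentScheduleRequests/{uuid.uuid4()}?api-version=2020-10-01"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {
        "principalId": MY_OBJECT_ID,
        "roleDefinitionId": f"{scope}/providers/Microsoft.Authorization/roleDefinitions/{ROLE_DEFINITION_ID}",
        "requestType": "SelfActivate",
        "justification": "Deploying release 1.2 to Prod",  # example justification
        "scheduleInfo": {
            "startDateTime": datetime.now(timezone.utc).isoformat(),
            "expiration": {"type": "AfterDuration", "duration": "PT8H"},  # default 8-hour window
        },
    }},
)
resp.raise_for_status()
print("Activation status:", resp.json()["properties"].get("status"))
```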

 

  • Step 8: As shown in the screenshot below, the activation duration can be set to 24 hours by editing the assignment settings.

Pim 10

 

    • You can modify the assignment settings and adjust the values according to your specific requirements. Please refer to the screenshot below for more details.

Pim 11

 

Conclusion

Azure PIM helps make your system safer by ensuring that only the right people can access essential resources for a short time. It lets you give access when needed (just-in-time), require approval for special access, automatically manage who can access what, and keep track of everything. PIM is essential for organizations that want to limit who can access sensitive information, ensure only the necessary people have the correct permissions at the right time, and prevent unauthorized access.

Strategic Cloud Partner: Key to Business Success, Not Just Tech https://blogs.perficient.com/2025/05/13/strategic-cloud-partner-key-to-business-success-not-just-tech/ https://blogs.perficient.com/2025/05/13/strategic-cloud-partner-key-to-business-success-not-just-tech/#comments Tue, 13 May 2025 14:20:07 +0000 https://blogs.perficient.com/?p=381334

Cloud is easy—until it isn’t.

Perficient’s Edge: A Strategic Cloud Partner Focused on Business Outcomes

Cloud adoption has skyrocketed. Multi-cloud. Hybrid cloud. AI-optimized workloads. Clients are moving fast, but many are moving blindly. The result? High costs, low returns, and strategies that stall before they scale.

That’s why this moment matters. Now, more than ever, your clients need a partner who brings more than just cloud expertise—they need business insight, strategic clarity, and real results.

In our latest We Are Perficient episode, we sat down with Kiran Dandu, Perficient’s Managing Director, to uncover exactly how we’re helping clients not just adopt cloud, but win with it.

If you’re in sales, this conversation is your cheat sheet for leading smarter cloud conversations with confidence.

Key #1: Start with Business Outcomes, Not Infrastructure

Kiran makes one thing clear from the start: “We don’t start with cloud. We start with what our clients want to achieve.”

At Perficient, cloud is a means to a business end. That’s why we begin every engagement by aligning cloud architecture with long-term business objectives—not just technical requirements.

Perficient’s Envision Framework: Aligning Cloud with Business Objectives

Using this framework, we help clients:

  • Define their ideal outcomes
  • Assess their existing workloads
  • Select the right blend of public, private, hybrid, or multi-cloud models
  • Optimize performance and cost every step of the way

This outcome-first mindset isn’t just smarter—it’s what sets Perficient apart from traditional cloud vendors.

Key #2: AI in the Cloud – Delivering Millions in Savings Today

Forget the hype—AI is already transforming how we operate in the cloud. Kiran breaks down the four key areas where Perficient is integrating AI to drive real value:

  • DevOps automation: AI accelerates code testing and deployment, reducing errors and speeding up time-to-market.
  • Performance monitoring: Intelligent tools predict and prevent downtime before it happens.
  • Cost optimization: AI identifies underused resources, helping clients cut waste and invest smarter.
  • Security and compliance: With real-time threat detection and automated incident response, clients stay protected 24/7.

The result? A cloud strategy that’s not just scalable, but self-improving.

Key #3: Beyond Cloud Migration to Continuous Innovation

Moving to the cloud isn’t the end goal—it’s just the beginning.

Kiran emphasizes how Perficient’s global delivery model and agile methodology empower clients to not only migrate, but to evolve and innovate faster. Our teams help organizations:

  • Integrate complex systems seamlessly
  • Continuously improve infrastructure as business needs change
  • Foster agility across every department—not just IT

And it’s not just theory. Our global consultants, including the growing talent across LATAM, are delivering on this promise every day.

“The success of our cloud group is really going to drive the success of the organization.”
Kiran Dandu

Global Talent, Local Impact: The Power of a Diverse Strategic Cloud Partner

While visiting our offices in Medellín, Colombia, Kiran highlighted the value of diversity in driving cloud success:

“This reminds me of India in many ways—there’s talent, warmth, and incredible potential here.”

That’s why Perficient is investing in uniting its global cloud teams. The cross-cultural collaboration between North America, LATAM, Europe, and India isn’t just a feel-good story—it’s the engine behind our delivery speed, technical excellence, and customer success.

Key Takeaways for Sales: Lead Smarter Cloud Conversations

If your client is talking about the cloud—and trust us, they are—this interview is part of your toolkit.
You’ll walk away understanding:

  • Why Perficient doesn’t just build cloud platforms—we build cloud strategies that deliver
  • How AI and automation are creating real-time ROI for our clients
  • What makes our global model the best-kept secret in cloud consulting
  • And how to speak the language of business outcomes, not just cloud buzzwords

Watch the Full Interview: Deep Dive with Kiran Dandu

Want to hear directly from the source? Don’t miss Kiran’s full interview, packed with strategic insights that will elevate your next sales conversation.

Watch now and discover how Perficient is transforming cloud into a competitive advantage.

Choose Perficient: Your Client’s Strategic Cloud Partner for a Competitive Edge

Perficient is not just another cloud partner—we’re your client’s competitive edge. Let’s start leading the cloud conversation like it.

Outside Processing vs Contract Manufacturing https://blogs.perficient.com/2025/05/07/outside-processing-vs-contract-manufacturing/ https://blogs.perficient.com/2025/05/07/outside-processing-vs-contract-manufacturing/#respond Wed, 07 May 2025 12:15:35 +0000 https://blogs.perficient.com/?p=380696

When it comes to manufacturing, original equipment manufacturers (OEMs) often require services from their manufacturing partners to help produce finished and semi-finished products. Oracle’s Fusion SCM suite offers two well-known solutions: Outside Processing (OSP) and Contract Manufacturing. Both solutions involve a third-party vendor and a service component to help complete a work order, fulfill a sales order, or fulfill subassembly demand. Both serve a purpose and are quite powerful. Before jumping into the comparison, here is a textbook definition of each solution:

OSP:

OSP is the process of outsourcing a portion of the work order that is being done in house. For example, a steel shop that can cut and weld steel to manufacture frames may send the frames to a paint shop (vendor) to get painted. The steel shop then receives the painted frames in house and perhaps performs a few more value-added steps to complete the work order.  The paint portion of this work order is considered an outside job.  Companies may prefer outside processing for various reasons. The steel manufacturer may not be interested in installing a paint booth and employing painters, or the company may have a paint booth but it’s backlogged or is down. Specialization may be required. In all these scenarios, a vendor is needed to help.

Contract Manufacturing:

Contract Manufacturing means (optionally) providing materials to a vendor and expecting the vendor to produce assemblies and ship them to external or internal customers. Typically, the company ships raw materials and/or subassemblies to the vendor and manages that stock in the vendor’s warehouse. With contract manufacturing, the vendor is in complete control of the manufacturing process and is expected to update and complete work orders or report back on production progress. The OEM usually owns stock at the vendor location and tracks it in its books.

The million-dollar question: which one to pick?

In most cases the “textbook” response can be straightforward. In some cases, companies that are using production steps from a vendor as one of the operations in their in-house work orders use the OSP solution. It’s straightforward and quite easy to set up. Read this blog for OSP treatment advice in Cost Management.

Contract Manufacturing may be used directly to fulfill back-to-back sales orders, or to fulfill the subassembly transfer orders and work orders generated by Supply Planning. One drawback to Contract Manufacturing (as of Release 25A) is that the second scenario can only be accomplished through Supply Planning. Contract Manufacturing is a robust solution, but it also requires vendors to actively participate, providing feedback on inventory levels or updating production progress in Oracle Fusion.

There are use cases where companies may use a vendor to produce subassemblies, but would like to create manual work orders, manage the inventory, and want a less complicated solution.  In this case, the OSP solution can work beautifully simulating a Contract Manufacturing solution.

                              OSP         Contract Manufacturing
New Inventory Organization    Optional    Required
Supply Planning               Optional    May be required
Service Items                 Required    Required
Blanket Purchase Agreements   Optional    Optional
Ease of Implementation        Easy        More complicated

 

Based on the business requirements, the OEM may choose to go with OSP. Let’s assume that inventory is sent to the vendor only when new assemblies are required, there is no opportunity to integrate electronically through web services, and communication happens over email or other correspondence.

For this OEM, it is feasible to create a Work Center in Oracle Fusion Manufacturing for the vendor, automatically create service purchase orders, and dedicate supply and completion subinventories. Inventory management could be a little challenging, since the raw material must be dedicated to contract-manufacturer use only, but it is not unmanageable. In this simple scenario, the OEM does not have to go through the complex Contract Manufacturing setup and can go with the OSP solution.

There are various use cases and potential solutions that combine Oracle Supply Planning, Inventory Management with min-max planning, Oracle Manufacturing, Procurement, and the Supplier Portal to fulfill a wide range of manufacturing scenarios.

Contact Mehmet Erisen at Perficient for a deeper look at this functionality, and at how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.
