HCL Commerce V9.1 – Coexistence of the Headless Next.js Ruby & Aurora Storefronts

The HCL Commerce v9.1 release brought major changes in features, functionality, and technology. This blog series will focus on each of these components separately. Examples of these changes include HCL Commerce Search powered by Elasticsearch, a modern storefront built with Next.js, containerized cloud-native architecture, modern business user tooling, and support for new integrations and companion software.

Part 2 of this blog series will focus on the coexistence of the Next.js Ruby & Aurora Storefronts.

Background

A client had multiple e-sites running on HCL Commerce v9 using the Aurora JSP-based storefront. The client wanted to migrate to the Next.js Ruby storefront and take advantage of the modern headless store, including server-side rendering (SSR) for page optimization. The client also wanted a cost-effective solution that would drive ROI through built-in SEO capabilities, improved site performance (higher Google Core Web Vitals scores), and a better end-user experience.

Migrating multiple e-sites to the Next.js Ruby storefront with HCL Commerce Search using Elasticsearch, along with the client-specific customizations, can be a large rewrite. Perficient worked with the client to find a cost-effective solution and identified the home page and the product detail page (PDP) to migrate to the Next.js Ruby storefront. This also allowed the client to evaluate the storefront and its capabilities before migrating the remaining pages.

Pros & Cons of the Hybrid Approach

The hybrid approach has several pros and cons, which can vary based on each client and their business requirements. This client used many e-marketing spots throughout the site, and it was challenging to maintain duplicate content to support both storefronts. Since the content syntax differs between storefronts, any changes to the common header and footer navigation had to be maintained for both. Another consideration is implementing third-party integrations and ensuring compatibility with both storefronts. For example, Segment was used for analytics tracking, and our team had to ensure that events triggered successfully with the correct data on both storefronts' pages.

One of the most critical components of a hybrid approach is correctly identifying and routing requests so that pages are rendered by the correct storefront, Aurora or Next.js Ruby. The client's PDP URLs followed a unique SEO pattern, allowing the Perficient team to create rules that route requests to the correct storefront container (see the sketch below). Post migration, the client immediately started seeing the advantages of the Next.js Ruby storefront's features and capabilities, with improvements in page load times and Core Web Vitals for the migrated pages.
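As a simplified illustration of this kind of routing, a Next.js rewrite configuration can keep the migrated pages on the new storefront and fall back to the legacy store for everything else. This is a minimal, hypothetical sketch: the Aurora hostname and the fallback-proxy approach are illustrative assumptions, not the client's actual rules, which lived in their routing layer.

// next.config.js - hypothetical sketch of hybrid storefront routing
module.exports = {
  async rewrites() {
    return {
      // Pages this Next.js (Ruby) storefront owns (home, PDP) match first;
      // anything Next.js cannot match falls through to the Aurora store.
      fallback: [
        {
          source: '/:path*',
          destination: 'https://aurora.example.com/:path*', // legacy JSP storefront
        },
      ],
    };
  },
};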

Conclusion

The hybrid approach allowed the client to take advantage of the newer technology and realize the ROI on the migrated pages. The site benefited from the Core Web Vitals score increase, enhanced SEO capabilities, and improved page performance. The hybrid approach allowed the technical and marketing teams to familiarize themselves with the features and capabilities of the Next.js Ruby storefront and deploy it to the most impactful areas of the site. As a next step, the client is migrating the remaining pages to the Next.js Ruby storefront to fully take advantage of HCL’s continued enhancements.

To obtain further information from our award-winning team, please visit https://www.perficient.com/who-we-are/partners/hcl.

Other Blogs in the Series

HCL Commerce V9.1 – The Power of the Next.js Ruby Storefront

5 Questions to Ask CCaaS Vendors as You Plan Your Cloud Migration

Considering migrating your contact center operations to the cloud? Transitioning from a legacy on-premises solution to a Cloud Contact Center as a Service (CCaaS) platform offers significant advantages, including greater flexibility, scalability, improved customer experience, and potential cost savings. However, the success of this transition depends heavily on selecting the right vendor and ensuring alignment with your unique business requirements.

Here are five essential questions to ask any CCaaS vendor as you plan your migration: 

1. How will your solution integrate with our existing systems?

Integration capabilities are key and directly impact the effectiveness of your new cloud solution. Ensure that the proposed CCaaS platform integrates easily with, or provides viable alternatives to, your current CRM, workforce management solutions, business intelligence/reporting tools, and legacy applications. Smooth integrations are vital for maintaining operational efficiency and enhancing the customer and employee experience.

2. What degree of customization and flexibility do you offer?

Every contact center has unique agent processes and customer interaction workflows. Verify that your CCaaS vendor allows customization of critical features like interactive voice response (IVR), agent dashboards, and reporting tools (to name just a few). Flexibility in customization ensures that the platform supports your business goals and enhances operational efficiency without disrupting established workflows. Also assess included AI-enabled features such as IVAs, real-time agent coaching, and customer sentiment analysis.

3. Can you demonstrate robust security measures and regulatory compliance?

Data security and compliance with regulations like HIPAA, GDPR, or PCI are likely critical requirements for your organization, especially in industries that deal with sensitive customer or patient information. Confirm the vendor's commitment to comprehensive security protocols, including the ability to redact or mask Personally Identifiable Information (PII). Ask the vendor for clearly defined compliance certifications and whether they conduct regular security audits.

4. What are your strategies for business continuity and disaster recovery?

Uninterrupted service is critical for contact centers, and it’s essential to understand how the CCaaS vendor handles service disruptions, outages, and disaster scenarios. Ask about their redundancy measures, geographic data center distribution, automatic failover procedures, and guarantees outlined in their Service Level Agreements (SLAs).

5. What level of training and support do you provide during and after implementation?

It is impossible to overstate the importance of good change management and enablement. Transitioning to a cloud environment involves adapting to new technologies and processes. Determine the availability of the vendor's training programs, materials, and support channels.

By proactively addressing these five key areas, your organization can significantly streamline the migration process and ensure long-term success in your new cloud-based environment. Selecting the right vendor based on these criteria will facilitate a smooth transition and empower your team to deliver exceptional customer experiences efficiently and reliably.

HCL Commerce V9.1 – The Power of the Next.js Ruby Storefront

The HCL Commerce v9.1 release saw major changes in features, functionality, and technology. This blog series will focus on each of these components separately. Examples of these changes include HCL Commerce Search powered by Elasticsearch, a modern storefront built with Next.js, containerized cloud-native architecture, modern business user tooling, and support for new integrations and companion software.

Part 1 of this blog series will focus on the HCL Commerce Next.js-based Ruby storefront.


Benefits of the Next.js Ruby Storefront

The Ruby storefront is an HCL Commerce-provided, Next.js-based B2B and B2C starter store that exploits the powerful features and capabilities of the HCL Commerce platform. It is a fully headless store that uses REST services to interact with the HCL Commerce logic framework. The store uses server-side rendering (SSR), which improves initial page load times, Google Core Web Vitals scores, and overall page performance. It also provides a generic data layer for Google Analytics (GA4) and has built-in SEO capabilities, which are crucial for digital marketing. The storefront has prebuilt components, is CDN optimized, and supports a mobile-first approach, allowing business owners a faster time to market.

Template-based Layouts in the Storefront

The storefront utilizes a template-based layout for each page, such as the home page and the product detail page (PDP). Separate layouts allow customers to render each page differently based on business requirements. These layouts support e-marketing spots and segmentation to drive a more personalized experience in targeted areas of the layout. There is also support for category- and product-specific pages, which give business users more control. Our team has taken advantage of the template-based approach to incrementally migrate existing customers and leverage the benefits of the Next.js Ruby storefront through a hybrid migration approach.


Hybrid Approach

A complete migration to the Next.js Ruby storefront can be costly and time-consuming. As a result, the Perficient team has developed a solution that allows customers to migrate to the Next.js storefront using a hybrid approach. The solution enables the legacy JavaServer Pages (JSP)-based Aurora storefront pages to run in parallel with the new, modern Next.js Ruby storefront pages. Additionally, as of HCL Commerce 9.1.15, HCL provides the ability to use either Elasticsearch or Solr as the back-end search engine, both of which function seamlessly with the Next.js Ruby storefront. This hybrid approach can be a cost-effective solution that helps drive ROI for the pages where it is most needed.

Conclusion

The HCL Commerce Next.js Ruby storefront is a feature-packed headless storefront built on one of the most popular modern web technologies. It can leverage either Elasticsearch or Solr as the back-end search engine. This serves as the foundation for efficient collaboration with our clients to migrate incrementally and cost-effectively from the legacy JSP Aurora store to the Next.js Ruby storefront.

To obtain further information from our award-winning team, please visit https://www.perficient.com/who-we-are/partners/hcl.

Other Blogs in the Series

 

HCL Commerce V9.1 – Coexistence of the Headless Next.js Ruby & Aurora Storefronts

Creating a Microsoft 365 Group or Office 365 Group

Microsoft 365 offers several types of groups, each designed for different collaboration and communication needs:

  1. Microsoft 365 Groups (M365): Used for collaboration between users both inside and outside your organization. They include a shared mailbox, calendar, SharePoint site, Microsoft Teams, and more.
  2. Distribution Groups: Used for sending email notifications to a group of people. They are ideal for broadcasting information to a set group of people.
  3. Security Groups: Used for granting access to resources such as SharePoint sites. They help manage permissions and access control.
  4. Mail-enabled Security Groups: These combine the features of security groups and distribution groups, allowing you to grant access to resources and send email notifications to the group members.
  5. Dynamic Distribution Groups: A Dynamic Distribution List (DDL) expedites the mass sending of email messages and other information to a set of people. As the name suggests, membership is not static; recipients are determined by the filters and conditions defined on the group.

Of the group types above, this post focuses on the Microsoft 365 Group, formerly known as the Office 365 Group. Let's start with the following:

How to Create a Microsoft 365 group (M365 group)?

Creating a Microsoft 365 Group can be done in several ways, depending on your role and the tools you have access to. Here are the main methods:

Using the Microsoft 365 Admin Center

  1. Go to the Microsoft 365 admin center.
  2. Expand Groups and select Active groups.
  3. Click on +Add a Microsoft 365 group.
  4. Fill in the group details, such as name, description, and privacy settings.
  5. Add owners and members to the group.
  6. Review your settings and click Create group.


Outlook on the Web

  1. Open Outlook on the web.
  2. In the left pane, go to Groups and create a new group (the option is also available from the New mail drop-down), as shown below:
  3. Enter the group name, description, and privacy settings.
  4. Add members and click Create.

[Screenshot: creating a group in Outlook on the web]

Outlook Desktop App

  1. Open Outlook.
  2. Go to the Home tab and select New Group.
  3. Enter the group name, description, and privacy settings.
  4. Add members and click Create.


Using Microsoft Teams

  1. Open Microsoft Teams.
  2. Click on Teams in the left sidebar.
  3. Select Join or create a team at the bottom of the Teams list.
  4. Choose Create a team and then from scratch.
  5. Select Private or Public and enter the team’s name and description.
  6. Add members and click Create.


Using PowerShell

For more advanced users, you can use PowerShell to create a Microsoft 365 Group:

  1. Open PowerShell and connect to your Microsoft 365 account.
  2. Use the New-UnifiedGroup cmdlet to create the group. For example:

New-UnifiedGroup -DisplayName "Group Name" -Alias "groupalias" -EmailAddresses "groupalias@yourdomain.com"

To manage an existing Microsoft 365 Group using PowerShell, see Add-UnifiedGroupLinks (ExchangePowerShell) | Microsoft Learn.
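A minimal sketch of common management tasks follows, assuming the ExchangeOnlineManagement module is installed; the admin account, group alias, and member address are hypothetical:

# Connect to Exchange Online (requires the ExchangeOnlineManagement module)
Connect-ExchangeOnline -UserPrincipalName admin@yourdomain.com

# Add a member to the group created above
Add-UnifiedGroupLinks -Identity "groupalias" -LinkType Members -Links "user1@yourdomain.com"

# Owners must already be members before they can be promoted
Add-UnifiedGroupLinks -Identity "groupalias" -LinkType Owners -Links "user1@yourdomain.com"

# Verify the membership
Get-UnifiedGroupLinks -Identity "groupalias" -LinkType Members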

Collaboration Features of M365 Group:

Microsoft 365 Groups offer a variety of collaboration features designed to enhance teamwork and productivity. Here are some of the key features:

  1. Shared Mailbox: Each group gets a shared inbox in Outlook where group conversations are stored. This makes it easy to keep track of discussions and ensures everyone stays in the loop.
  2. Shared Calendar: Groups have a shared calendar for scheduling and managing events. This helps coordinate meetings and deadlines.
  3. SharePoint Document Library: A SharePoint site is created for each group, providing a central location for storing and sharing files. This ensures that all members have access to the latest documents.
  4. Planner: Microsoft Planner is integrated with groups, allowing members to create, assign, and track tasks. This helps in managing projects and ensuring that tasks are completed on time.
  5. OneNote Notebook: Each group gets a shared OneNote notebook for taking and organizing notes. This is useful for brainstorming sessions, meeting notes, and more.
  6. Microsoft Teams Integration: Groups can be connected to Microsoft Teams, providing a hub for chat, video meetings, and collaboration. This integration enhances real-time communication and teamwork.
  7. Power BI: Groups can use Power BI to create and share dashboards and reports, making it easier to visualize and analyze data.
  8. Viva Engage (formerly Yammer): If the group was created from Viva Engage, members can engage in social networking and community discussions.
  9. Microsoft Stream: Groups can share and manage video content using Microsoft Stream, making it easy to distribute training videos, presentations, and other multimedia content.
  10. Project for the Web: If you have Project for the web, groups can use Roadmap to plan and track project progress.

These features collectively provide a comprehensive suite of tools to support collaboration, communication, and project management within your organization.

End-to-End Monitoring for EC2: Deploying Dynatrace OneAgent on Linux

Objective: Enable resource monitoring for AWS EC2 instances using the Dynatrace monitoring tool (OneAgent) to gain real-time insights into system performance, detect anomalies, and optimize resource utilization.

What is Dynatrace?

Dynatrace is a platform for observability and application performance monitoring (APM) that delivers real-time insights into application performance, infrastructure oversight, and analytics powered by AI. It assists teams in detecting, diagnosing, and resolving problems more quickly by providing comprehensive monitoring across logs, metrics, traces, and insights into user experience.

Dynatrace OneAgent

Dynatrace OneAgent is primarily a single binary file that comprises a collection of specialized services tailored to your monitoring setup. These services collect metrics related to various components of your hosts, including hardware specifications, operating systems, and application processes. The agent also has the capability to closely monitor specific technologies (such as Java, Node.js, and .NET) by embedding itself within these processes and analyzing them from the inside. This enables you to obtain code-level visibility into the services that your application depends on.

Key Features of Dynatrace OneAgent

  • Automatic Deployment – OneAgent installs automatically and starts collecting data without manual configuration.
  • Full-Stack Monitoring – It monitors everything from application code to databases, servers, containers, and networks.
  • AI-Powered Insights – Works with Dynatrace’s Davis AI engine to detect anomalies and provide root cause analysis.
  • Auto-Discovery – Automatically detects services, processes, and dependencies.
  • Low Overhead – Designed to have minimal impact on system performance.
  • Multi-Platform Support – Works with Windows, Linux, Kubernetes, AWS, Azure, GCP, and more.

Prerequisites to Implement OneAgent

  1. A Dynatrace account
  2. An AWS EC2 instance running Linux, with the SSH port (22) enabled

How to Implement Dynatrace OneAgent

Step 1: Dynatrace OneAgent configuration

Log in to the Dynatrace portal and search for Deploy OneAgent.


Select the platform on which your application is running. In our case, it is Linux.


Create a token that is required for authentication.


After generating a token, you will receive a command to download and execute the installer on the EC2 instance.


Step 2: Log in to the EC2 instance using SSH and run the command to download the installer.

After that, run the installer.
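The exact commands are generated for you in the Deploy OneAgent wizard. As a rough sketch of their usual shape (the environment ID, API token, and flag below are placeholders, not values to copy):

# Download the installer (the wizard supplies the real URL and token)
wget -O Dynatrace-OneAgent-Linux.sh \
  "https://<environment-id>.live.dynatrace.com/api/v1/deployment/installer/agent/unix/default/latest?arch=x86" \
  --header="Authorization: Api-Token <api-token>"

# Run the installer with root privileges
sudo /bin/sh Dynatrace-OneAgent-Linux.sh --set-app-log-content-access=true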



The Dynatrace OneAgent has now been installed on the EC2 instance.


Output

We can now monitor resource usage at both the application and infrastructure level on the Dynatrace dashboard.


Conclusion

Enabling resource monitoring for AWS EC2 instances using Dynatrace provides comprehensive observability, allowing teams to detect performance issues, optimize resource utilization, and ensure application reliability. By leveraging Dynatrace OneAgent, organizations can automate monitoring, gain AI-driven insights, and enhance cloud efficiency. Implementing this solution not only improves operational visibility but also facilitates proactive troubleshooting, reduces downtime, and optimizes cloud costs.

 

 

Solution Highlight – Oracle Revenue Management / SSP – Part 1

In the first blog post of this three-part Solution Highlight series featuring a proven leader in defense-grade, high-assurance cybersecurity solutions, I will cover Oracle Revenue Management. My colleague, Mehmet Erisen, will share his views on Global Supply Chain Management, including Manufacturing with OSP and intercompany order fulfillment across business units featuring Oracle Supply Chain Management. We'll round out the series with a third and final blog post focused on Salesforce to Order Cloud integration.

 

About Our Client: A trailblazer in the cybersecurity space, our client needed the ability to automate its complex and manual revenue allocation processes.

 

Challenge

  • Manual revenue recognition processes leading to errors and delays 
  • Difficulty in complying with ASC 606 / IFRS 15 standards 
  • Lack of real-time visibility into revenue reporting 

Solution

Implemented Oracle Revenue Management – Managing Bundles and Stand-alone Selling Price (SSP)  

Oracle Fusion ERP provides robust functionality for managing and automating the implementation of product bundles and determining the SSP for revenue recognition under ASC 606 and IFRS 15 standards. Key highlights include: 

  • Revenue Management: Automates revenue processing tasks, minimizing manual interventions, allowing organizations to comply efficiently and consistently with the ASC 606 and IFRS 15 core principles 
  • Bundling Capabilities: Allows seamless configuration and management of product/service bundles with clear pricing structures 
  • Automation and Scalability: Automates complex revenue allocation processes, improving efficiency and scalability 
  • Real-time Analytics: Provides insights into sales trends and SSP analysis, enabling data-driven pricing strategies 

 

Benefits

  • Reduced Manual Effort – Eliminated spreadsheet-based tracking 
  • Improved Accuracy – Minimized revenue leakage and misreporting 
  • Faster Close Cycles – Automated recognition speeds up month-end close 
  • Regulatory Compliance – Ensured adherence to ASC 606 / IFRS 15 
  • Enhanced Visibility – Real-time insights into revenue performance 

 

Oracle Revenue Management Cloud enables organizations to automate revenue recognition, reduce compliance risks, and gain real-time financial insights. This solution delivers value for companies with complex revenue streams, such as SaaS, manufacturing, and professional services. 

This solution is particularly effective for companies looking to streamline revenue recognition while maintaining compliance and operational efficiency.  

Let me know if you’d like a deeper dive into any of these features! 

Perficient Included in IDC Market Glance: Healthcare Provider Operational IT Solutions, 1Q25

As technology continues to advance, patients and care teams expect to seamlessly engage with tools that support better health and accelerate progress. These developments demand the rapid, secure, scalable, and compliant sharing of data. 

By aligning enterprise and business goals with digital technology, healthcare organizations (HCOs) can activate strategies for transformative outcomes and improve experiences and efficiencies across the health journey. 

IDC Market Glance: Healthcare Provider Operational IT Solutions, 1Q25 

Perficient is proud to be included in the IT Services and SI Services categories of the IDC Market Glance: Healthcare Provider Operational IT Solutions, 1Q25 report (doc #US52221325, March 2025). We believe our inclusion in this report’s newly introduced “Services” segmentation underscores our expertise in leveraging AI-driven automation and advanced analytics, optimizing technology investments, and navigating evolving industry challenges.

IDC states, “This expansion reflects the industry’s shift toward outsourced expertise, scalable service models, and strategic partnerships to manage complex operational IT and infrastructure efficiently.” 

IDC defines IT Services as “managed IT services, ensuring system reliability, cybersecurity, and infrastructure optimization. These solutions support healthcare provider transformation initiatives, helpdesk management, network monitoring, and compliance with healthcare IT regulations.” The SI Services category is defined by IDC as “system integration services that help deploy technologies and connect disparate systems, including EHRs, RCM platforms, ERP solutions, and third-party applications to enhance interoperability, efficiency, automation, and compliance with industry standards.”

Advanced Solutions for Data-Driven Success 

We imagine, engineer, and optimize scalable, reliable technologies and data, partnering with healthcare leaders to better understand consumer expectations and strategically align digital investments with business priorities.  

Our end-to-end professional services include: 

  • Digital transformation strategy:  The healthcare industry’s rapid evolution requires attention in several areas – adopting new care models, capitalizing on disruptive technologies, and affecting regulatory, operational, financial, and organizational change. We equip HCOs to recognize and speed past potential hurdles in order to maximize ROI by making the most of technology, operational, and financial resources. 
  • Cloud-native environments: Cloud technology is the primary enabler of business transformation and outcomes-focused value. Investing in cloud allows HCOs to overcome limitations of legacy systems, improve stability, and reduce costs. It also leads to better solution quality, faster feature delivery, and encourages a culture of innovation. Our expert consultants tailor cloud solutions to unique business needs, empowering teams and fueling growth, intelligence, and long-term profitability. 
  • Hyper-scalable data infrastructures: We equip HCOs to maximize the value of information across the care ecosystem by uncovering the most meaningful, trustworthy data and enriching it with critical context so you can use it to answer difficult questions, power meaningful experiences, and automate smart decisions. Trusting data begins with having trust in the people, processes, and systems that source, move, transform, and manage that data. We partner to build data into a powerful, differentiating asset that can accelerate clinical, marketing, and operational excellence as information is exchanged across organizations, systems, devices, and applications. 
  • AI ecosystems: HCOs face mounting competition, financial pressures, and macro uncertainties. Enhance operations with innovative and intelligent AI and automation solutions that help you overcome complex challenges, streamline processes, and unlock new levels of productivity. Holistic business transformation and advanced analytics are front and center in this industry evolution, and generative AI (GenAI) and agentic AI have fundamentally shifted how organizations approach intelligence within digital systems. According to IDC, “GenAI will continue to redefine workflows, while agentic AI shows promise to drive real-time, responsive, and interpretive orchestration across operations.” Position yourself for success now and in the future with enhanced customer interactions, reduced operational costs, and data-driven decision-making powered by our AI expertise.
  • Digital experiences: Digital-first care options are changing the face of healthcare experiences, bringing commerce-like solutions to consumers who search for and choose care that best fits their personal priorities and needs. We build high-impact experience strategies and put them in motion, so your marketing investments drive results that grow lasting relationships and support healthy communities. As the healthcare landscape continues to evolve – with organizational consolidations and new disruptors reshaping the marketplace – we help you proactively and efficiently attract and nurture prospective patients and caregivers as they make health decisions. 

We don’t just implement solutions; we create intelligent strategies that align technology with your key business priorities and organizational capabilities. Our approach goes beyond traditional data services. We create AI-ready intelligent ecosystems that breathe life into your data strategy and accelerate transformation. By combining technical excellence, global reach, and a client-centric approach, we’re able to drive business transformation, boost operational resilience, and enhance health outcomes. 

Success in Action: Illuminating a Clear Path to Care With AI-Enabled Search 

Empower Healthcare Experiences Through Innovative Technology 

Whether you want to redefine workflows, personalize care pathways, or revolutionize proactive health management, Perficient can help you boost efficiency and gain a competitive edge.

We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading health systems: 

  • Business Transformation: Transform strategy into action: improve operations, lower costs, build operational resilience, and optimize care. 
  • Modernization: Provide quality, cost-effective tools and platforms that enable exceptional care. 
  • Data Analytics: Enable trusted data access and insight to clinical, operational, and financial teams across the healthcare ecosystem. 
  • Consumer Experience: Harness data and technology to drive optimal healthcare outcomes and experiences. 

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.

Part 1 – Marketing Cloud Personalization and Mobile Apps: Functionality 101

Over the past three years working with Marketing Cloud Personalization (formerly Interaction Studio), I've always been intrigued by the Mobile icon and its capabilities. A few months ago, I decided to take a hands-on approach by developing my own application to explore this functionality firsthand, testing its implementation and understanding its real-world impact. And that is what this blog is about.

The Overall Process

The overall steps of the Marketing Cloud Personalization mobile integration go as follows:

  1. Have an Application (Understatement)
  2. Have access to the app project and code.
  3. Integrate the Evergage SDK library to the app.
  4. Create a Mobile App inside Personalization UI
  5. Create a connection between the app and the Personalization Dataset
  6. Track views and actions of the user in the app (code implementation).
  7. Publish and track campaign actions and push notifications.

That's all… easy, right? Within this blog we will review how to make the connection between MCP and the mobile app and how to create a first interaction (steps 3 through 5 and part of step 6).

For this demo, I developed an iOS application using the Swift programming language. While I’m not yet an expert, I’ve been steadily learning how to navigate Xcode and implement functionality using Swift. This project has been a great opportunity to expand my skills in iOS development and better understand the tools and frameworks available within Apple’s ecosystem.

Integrate the Evergage SDK in the App

The iOS app I created is very simple (for now): just a label, a button, and an input field. The user types something in the input field, clicks the button, and the text is shown in the label.

[Screenshot: the demo app running in the iPhone 16 simulator]

So, we need to add the Evergage SDK to the app project. Download the Evergage iOS SDK (v1.4.1), unzip it, and open the static folder. There, Evergage.xcframework is the one we are about to use. When you have the folder ready, copy it into your app. You should have something like this:

[Screenshots: the Evergage framework folder and the app's folder structure]

After adding the folder, build your app again with Command + B.

Now we need to validate that the framework is there, so go to Target -> General -> Frameworks, Libraries and Embedded Content. You should see something like this; since I'm using the static folder, the Do Not Embed setting is fine.

[Screenshot: the target's General settings in Xcode]

Validate that Framework Search Paths contains the path where the framework was copied/installed. This step may need to be done manually, since sometimes the path doesn't appear. Build the app again to confirm that no errors appear.

[Screenshot: the Framework Search Paths build setting]

To validate that this works, go to AppDelegate.swift and type import Evergage; if no errors appear, you are good to go 🙂


 

Create a Mobile App Inside Personalization

Next, we have to create the Native App inside the Personalization dataset of your choice.

Hover over Mobile and click Add Native App.


Fill in the App Name and Bundle ID. For the Bundle ID, go to Target > General > Identity.


You will end up with something like this:

[Screenshot: the new native app in the Personalization UI]

Create the Connection to the Dataset

In AppDelegate.swift, we will do the equivalent of adding the JavaScript beacon to a page.

  1. First, we need to import the Evergage class reference. This starts the Marketing Cloud Personalization iOS SDK. Our tracking interactions should now be done inside UIViewController-inherited classes.
  2. Change didFinishLaunchingWithOptions to willFinishLaunchingWithOptions.
  3. Inside the application function, we do the following:
    1. Create a singleton instance of Evergage. A singleton is a creational design pattern that ensures a class has only one instance while providing a global access point to it, which can be used to coordinate actions across our app.
    2. Set the user ID. For this, we set evergage.userId using evergage.anonymousId, but if we already have the email or an ID for the user, we should pass it right away.
    3. Start the Evergage configuration. Here we pass the Personalization account ID and dataset ID. Other values set are usePushNotifications and useDesignMode; the latter helps us connect to the Personalization web console for action mapping screens.

 

//Other imports
import Evergage

@main
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, willFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {

        // Create a singleton instance of Evergage
        let evergage = Evergage.sharedInstance()

        // Set the user ID as anonymous
        evergage.userId = evergage.anonymousId

        // Start the Evergage configuration with our dataset information
        evergage.start { (clientConfigurationBuilder) in
            clientConfigurationBuilder.account = "ACCOUNT_ID"
            clientConfigurationBuilder.dataset = "DATASET_ID"
            // If we want to use push notification campaigns
            clientConfigurationBuilder.usePushNotifications = true
            // Allow a user-initiated gesture to connect to the Personalization web console for action mapping screens
            clientConfigurationBuilder.useDesignMode = true
        }

        // Override point for customization after application launch.
        return true
    }
}

 

 

If we launch the app at this very moment, we will get the following inside Marketing Cloud Personalization:

[Screenshot: the Event Stream report showing the app's interaction]

This is very good, and with that we are certain it's working and sending the information to Marketing Cloud Personalization.

Track Actions

So, in order to track a screen, we can use the evergageScreen property, which is part of the EVGScreen and EVGContext classes for tracking and personalization. This is possible when the app uses a UIViewController for each of the screens or pages we have.

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        trackScreen()
    }

    func trackScreen() {
        evergageScreen?.trackAction("Main Screen")
    }
}

 


If we want to track the action of clicking a button, we can do something similar, for example:

@IBAction func handleClick(_ sender: UIButton) {
    labelText.text = inputField.text
    evergageScreen?.trackAction("Button clicked")
}

In this code, each time the user clicks the button, the handleClick function is triggered: inputField.text is assigned to labelText.text, and the trackAction function fires, sending the action to our dataset.

Wrapping Up Part 1: What’s next?

That wraps up the first part of this tutorial! We've covered the basics of adding the Personalization SDK to a mobile iOS application, creating a mobile app within Personalization, and doing some very basic action tracking in a view. In Part 2, we'll dive into tracking more complex actions like view item and view item detail, which are part of the catalog object actions for tracking items.

What does SFO have to do with Oracle?

Isn't SFO an airport? It is the airport one would travel through if the destination were Oracle's Redwood Shores campus. Widely known as the initialism for San Francisco International Airport, the answer would be correct if this question were posed in that context. However, in Oracle Fusion, SFO stands for Supply Chain Financial Orchestration. Based on what it does, we cannot call it an airport, but it sure is a control tower for financial transactions.

As companies are expanding their presence across countries and continents through mergers and acquisitions or natural growth, it becomes inevitable for the companies to transact across the borders and produce intercompany financial transactions.

Supply Chain Financial Orchestration (SFO) is where Oracle Fusion handles those transactions. The material may move one way, but for legal or financial reasons the financial flow could follow a different path.

A Typical Scenario

A Germany-based company sells to its EU customers from its Berlin office, but ships from its warehouses in New Delhi and Beijing.


Oracle Fusion SFO takes care of all those transactions: as transactions are processed in Cost Management, financial trade transactions are created, and corporations can see their internal margins, intercompany accounting, and intercompany invoices.

Oh wait, the financial orchestration doesn’t have to be across countries only.  What if a corporation wants to measure its manufacturing and sales operations profitability?  Supply Chain Financial Orchestration is there for you.

In short, SFO is a tool within the Supply Chain Management offering that helps create intercompany trade transactions for various business cases.

Contact Mehmet Erisen at Perficient for more insight into this functionality, and into how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.

www.oracle.com

www.perficient.com

Roeslein and Associates goes live with Oracle Project Driven Supply Chain

Roeslein & Associates 

Business Challenge + Opportunity 

We replaced disparate and outdated legacy systems with Oracle Fusion Cloud Manufacturing at a well-established manufacturing company. We implemented a scalable Fusion solution, including Project Driven Supply Chain (PDSC) and the full Financial and Supply Chain Management suites, to enable Roeslein to execute and extend their business processes globally.

The challenge in manufacturing was to set standard manufacturing processes to fulfill highly customized demand originating from their customers. In addition, Perficient designed a Supply Chain Data Architecture to support the functionality of the solution. 

Achievements

  • Created a Global Solution Template to be used worldwide
  • Redesigned the Enterprise Structure to enable Roeslein to track profits in different business units
  • Defined processes to execute standard manufacturing processes for custom and highly flexible manufacturing demand
  • Implemented Project Driven Supply Chain, including Inventory, Manufacturing, Order Management, Procurement, and Cost Management
  • Implemented solutions to support aftermarket part orders in addition to manufacturing orders
  • Designed two integrations between Fusion and UKG to support labor capture in Manufacturing and Projects
  • Built an integration between Roeslein's eCommerce platform and Fusion to support their aftermarket business

 

Contact Mehmet Erisen at Perficient for more insight into this phenomenal achievement. Congratulations to Roeslein & Associates and their entire staff!

How the Change to TLS Certificate Lifetimes Will Affect Sitecore Projects (and How to Prepare)

TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here’s the phased timeline currently in place:

  • Now through March 15, 2026: Maximum lifetime is 398 days

  • Starting March 15, 2026: Reduced to 200 days

  • Starting March 15, 2027: Further reduced to 100 days

  • Starting March 15, 2029: Reduced again to just 47 days

For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.

If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.

Why This Matters for Sitecore

Sitecore projects often involve:

  • Multiple environments (development, staging, production) with different certificates

  • Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns

  • Third-party integrations that require secure connections

  • Marketing and personalization features that rely on seamless uptime

A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.

Key Risks of Shorter TLS Lifetimes

  • Increased risk of missed renewals if teams rely on manual tracking

  • Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations

  • Delayed deployments when certificates must be re-issued last minute

  • SEO and trust damage if browsers start flagging your site as insecure

How to Prepare Your Sitecore Project Teams

To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:

1. Inventory All TLS Certificates

  • Audit all environments and domains using certificates

  • Include internal services, custom endpoints, and non-production domains

  • Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform); a quick inventory sketch follows this list
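If your certificates live in Azure Key Vault, a short PowerShell query is one way to inventory names and expiry dates. A minimal sketch, assuming the Az PowerShell module is installed and "MyKeyVault" (a hypothetical name) holds the certificates:

# List every certificate in the vault with its expiry date, soonest first
Get-AzKeyVaultCertificate -VaultName "MyKeyVault" |
    Select-Object Name, Expires |
    Sort-Object Expires |
    Format-Table -AutoSize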

2. Automate Certificate Renewals

  • Wherever possible, switch to automated certificate issuance and renewal

  • Use services like:

    • Azure App Service Managed Certificates

    • Let’s Encrypt with automation scripts

    • ACME protocol integrations for Kubernetes

  • For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations

3. Establish Certificate Ownership

  • Assign clear ownership of certificate management per environment or domain

  • Document who is responsible for renewals and updates

  • Add certificate health checks to your DevOps dashboards

4. Integrate Certificate Checks into CI/CD Pipelines

  • Validate certificate validity before deployments

  • Fail builds if certificates are nearing expiration (see the sketch after this list)

  • Include certificate management tasks as part of environment provisioning
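As one way to wire this in, a small PowerShell step can open a TLS connection to an endpoint and fail the build when the certificate is close to expiring. This is a minimal sketch; the host name and the 30-day threshold are hypothetical values to adjust per environment:

# Hypothetical pipeline gate: fail if the cert expires within $threshold days
$hostName  = "www.example.com"
$threshold = 30

$tcp = New-Object System.Net.Sockets.TcpClient($hostName, 443)
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream())
$ssl.AuthenticateAsClient($hostName)

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
$daysLeft = [int]($cert.NotAfter - (Get-Date)).TotalDays
$ssl.Dispose(); $tcp.Close()

if ($daysLeft -lt $threshold) {
    Write-Error "Certificate for $hostName expires in $daysLeft days"
    exit 1
}
Write-Host "Certificate for $hostName is valid for $daysLeft more days"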

5. Educate Your Team

  • Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers

  • Make sure everyone understands the impact of expired certificates on the Sitecore experience

6. Test Expiry Scenarios

  • Simulate certificate expiry in non-production environments

  • Monitor behavior in Sitecore XP and XM environments, including CD and CM roles

  • Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures

Final Thoughts

TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.

Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.

Action Items for This Week:

  • Identify all TLS certificates in your Sitecore environments

  • Document renewal dates and responsible owners

  • Begin automating renewals for at least one domain

  • Review Azure and Sitecore documentation for certificate integration options

Scoping, Hoisting and Temporal Dead Zone in JavaScript

Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.

What is Scope in JavaScript?

Think of scope like a boundary or container that controls where you can use a variable in your code.

In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.

This helps in two big ways:

  • Keeps your code safe – Only the right parts of the code can access the variable.
  • Avoids name clashes – You can use the same variable name in different places without them interfering with each other.

JavaScript mainly uses two types of scope:

1. Global Scope – Available everywhere in your code.

2. Local Scope – Available only inside a specific function or block.

 

Global Scope

When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.

If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.

var a = 5; // Global variable
function add() {
  return a + 10; // Using the global variable inside a function
}
console.log(window.a); // 5

In this example, a is declared outside of any function, so it’s globally available—even inside add().

A quick note:

  • If you declare a variable with var, it becomes a property of the window object in browsers.
  • But if you use let or const, the variable is still global, but not attached to window.
let name = "xyz";
function changeName() {
  name = "abc";  // Changing the value of the global variable
}
changeName();
console.log(name); // abc

In this example, we didn’t create a new variable—we just changed the value of the existing one.

👉 Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.

 

Local Scope

In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.

There are two types of local scope:

1. Functional Scope

Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.

let firstName = "Shilpa"; // Global
function changeName() {
  let lastName = "Syal"; // Local to this function
  console.log(`${firstName} ${lastName}`);
}
changeName();
console.log(lastName); // ❌ Error! Not available outside the function

You can even use the same variable name in different functions without any issue:

function mathMarks() {
  let marks = 80;
  console.log (marks);
}
function englishMarks() {
  let marks = 85;
  console.log (marks);
}

Here, both marks variables are separate because they live in different function scopes.

 

2. Block Scope

Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).

 

function getMarks() {
  let marks = 60;
  if (marks > 50) {
    const points = 10;
    console.log(marks + points); // ✅ Works here
  }
  console.log(points); // ❌ Uncaught ReferenceError: points is not defined
}

As the points variable is declared in the if block using the const keyword, it will not be accessible outside the block, as shown above. Now try the above example using the var keyword, i.e., declare the points variable with var, and spot the difference; a sketch follows below.
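For comparison, here is the same function with points declared using var. Since var is function-scoped rather than block-scoped, the last log now works:

function getMarks() {
  let marks = 60;
  if (marks > 50) {
    var points = 10;             // var is function-scoped, not block-scoped
    console.log(marks + points); // 70
  }
  console.log(points);           // 10: still visible outside the if block
}
getMarks();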

LEXICAL SCOPING & NESTED SCOPE:

When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.

function outerFunction() {
  let outerVar = "I’m outside";
  function innerFunction() {
      console.log (outerVar); // ✅ Can access outerVar
  }
  innerFunction();
}

In other terms, variables & methods defined in parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.

 

VARIABLE SCOPE OR VARIABLE SHADOWING:

You can declare variables with the same name in different scopes. If there's a variable in the global scope and you create a variable with the same name in a function, you will not get an error. In this case, local variables take priority over global variables. This is known as variable shadowing, as inner-scope variables temporarily shadow the outer-scope variable with the same name.

If the local variable and global variable have the same name then changing the value of one variable does not affect the value of another variable.

let name = "xyz";
function getName() {
  let name = "abc"; // Redeclaring the name variable
  console.log(name); // abc
}
getName();
console.log(name); // xyz
console.log (name) ;          //xyz

To access a variable, the JS engine first looks in the scope currently in execution; if it doesn't find the variable there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn't have the variable either, it throws a ReferenceError, as the variable doesn't exist anywhere in the scope chain.

let bonus = 500;
function getSalary() {
  if (true) {
    return 10000 + bonus; // Looks up and finds bonus in the outer scope
  }
}
console.log(getSalary()); // 10500

 

Key Takeaways: Scoping Made Simple

Global Scope: Variables declared outside any function are global and can be used anywhere in your code.

Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.

Global Variables Last Longer: They stay alive as long as your program is running.

Local Variables Are Temporary: They’re created when the function runs and removed once it ends.

Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.

Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.

Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.” 

Hoisting

To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.

It has two main phases:

1. Creation Phase: During this phase, the JS engine allocates memory for (i.e., hoists) variables, functions, and objects. Basically, hoisting happens here.

2. Execution Phase: During this phase, code is executed line by line.

When JS code runs, JavaScript hoists all the variables and functions, i.e., it assigns memory space for them; variables declared with var get the special value undefined.

 

Here are the key takeaways from hoisting; let's explore some examples to illustrate how it works in different scenarios:

  1. Functions – Functions are fully hoisted; they can be invoked before their declaration in code.

foo(); // Output: "Hello, world!"
function foo() {
  console.log("Hello, world!");
}
  2. var – Variables declared with var are hoisted in the global scope but initialized with undefined, so they are accessible before their declaration (with the value undefined).

console.log(x); // Output: undefined
var x = 5;

This code seems straightforward, but it’s interpreted as:

var x;
console.log(x); // Output: undefined
x = 5;

3. let, const – Variables declared with let and const are hoisted in their local (or script) scope but stay in the TDZ: they remain in the Temporal Dead Zone until their declaration is encountered. Accessing them in the TDZ results in a ReferenceError.

console.log(x); // Throws ReferenceError: Cannot access 'x' before initialization
let x = 5;


What is Temporal Dead Zone (TDZ)?

In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.

For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a ReferenceError.

This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.

// Each line below, run on its own before the declarations, illustrates one case:
console.log(x); // ReferenceError: x is not defined (x is never declared)
console.log(a); // ReferenceError: Cannot access 'a' before initialization (a is in the TDZ)
console.log(b); // undefined (var is hoisted and initialized)
let a = 10;
var b = 100;

👉 Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.

 

🧾 Conclusion

JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding! 🙌

 

 
