Technical Articles / Blogs / Perficient

From Flow to Fabric: Connecting Power Automate to Microsoft Fabric

In this blog, we’ll walk through how to use Eventstream in Microsoft Fabric to capture events triggered by Power Automate and store them in a Lakehouse table. Whether you’re building dashboards, triggering insights, or analyzing user interactions, this integration provides a powerful way to bridge business logic with analytics.

1. Create a Power Automate Flow to Post Data

Start by creating a Power Automate flow. Here’s what your flow should look like:

Flow View

Input

You can choose any input you like. In this example, we’re using “Name”.

 

2. Choose the “Send Event” Action

Add the Send event action to your flow.

Send event

Leave the flow as it is for now and move on to Microsoft Fabric.

3. Set Up Microsoft Fabric

Go to https://app.fabric.microsoft.com, create a new Workspace (name it as you wish), then:

    • Create a Lakehouse (using the + New Item button)
    • Add an Eventstream (select Get Data)

 

In the Eventstream, choose Custom Endpoints.

4. Configure the Input and Publish

  • Give your input a name of your choice.
  • Click Publish.

After publishing, your input will be updated.
Go to Details, then to SAS Key Authentication, and copy the Event Hub Name and Primary Connection String.


5. Connect Power Automate to Microsoft Fabric

Return to Power Automate and:

  • Use your Workspace Name to form a connection.
  • Paste the Primary Connection String and click Create.
  • Manually enter the Event Hub Name (it won’t appear dynamically).

Click Save to complete the connection.

Make sure to enter the data you want to pass (e.g., Name) in the Content field.
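The Content field accepts any JSON payload, so it helps to know what the event looks like on the wire. Because the Eventstream custom endpoint is Event Hub-compatible, you can also push a test event from outside Power Automate to confirm the endpoint is receiving data. Below is a minimal Python sketch using the azure-eventhub package; the connection string, event hub name, and the "Name" field are placeholders standing in for the values from this walkthrough.

# A minimal sketch for sending a test event to the Eventstream custom endpoint,
# assuming the azure-eventhub package is installed and that the two placeholder
# values below are replaced with the Event Hub Name and Primary Connection String
# copied from the Eventstream Details page.
import json
from azure.eventhub import EventData, EventHubProducerClient

CONNECTION_STR = "<primary-connection-string-from-eventstream>"
EVENTHUB_NAME = "<event-hub-name-from-eventstream>"

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)

# The body mirrors what the flow puts in the Content field.
event = EventData(json.dumps({"Name": "Test user"}))

with producer:
    batch = producer.create_batch()
    batch.add(event)
    producer.send_batch(batch)

If the event shows up in the Eventstream data preview, the endpoint configuration is correct and any remaining issues can be debugged on the Power Automate side.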

6. Set the Destination in Lakehouse

  • Click on Lakehouse and connect it to your Workspace.
  • For the table:
    • Click Create New under the Delta Table option, or
    • Create a table directly in the Lakehouse.
    • Note: If data isn’t transferring, try creating the table in the Lakehouse first, then form the connection in Eventstream.

7. Finalize the Connection

  • Form the connection.
  • Click Publish.

And that’s it! Your data will now be stored in the Lakehouse.
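As a quick check, you can query the destination table from a Fabric notebook. This is a small sketch that assumes the Lakehouse is attached to the notebook and that the Delta table created in step 6 is named powerautomate_events (a placeholder; use whatever name you chose).

# Read the Eventstream destination table from a Fabric notebook.
# The `spark` session is provided automatically by the Fabric notebook runtime;
# "powerautomate_events" is a placeholder table name.
df = spark.sql("SELECT * FROM powerautomate_events LIMIT 10")
df.show(truncate=False)

# The same table can feed Power BI reports or further transformations.
print(f"Events received so far: {spark.table('powerautomate_events').count()}")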

Conclusion

Connecting Power Automate to Microsoft Fabric using Eventstream provides a robust and efficient solution for real-time data integration. Looking ahead, this setup can be extended to include:

  • Advanced analytics with notebooks
  • Real-time Power BI dashboards
  • Integration with machine learning models

This unlocks deeper insights and intelligent automation across business processes.

How Leading Firms Are Acting on 2025 Wealth and Asset Management Trends

Wealth management firms are under pressure to deliver more—faster, smarter, and with greater precision. The conversations we’re hearing aren’t about what might happen in the future, but rather about what needs to happen now.

Earlier this year, we published 5 Leading Digital Trends Shaping Wealth Management in 2025, outlining the macro shifts redefining the industry. After an energizing few days at BNY INSITE25, those trends came into sharper focus and we saw firsthand how firms are beginning to operationalize these shifts—translating insight into action.

Following the event, we sat down with Ken Fishman and John Galifi to reflect on the conversations, insights, and key themes that emerged. The dialogue offered a clear view into where the wealth and asset management industry is headed and how firms are beginning to act on the trends shaping 2025.

Key Challenges and Opportunities in Digital Wealth Management

While the trends are clear, the path to execution is complex. At BNY INSITE25, several strategic themes emerged that reflect how firms are navigating this complexity. From democratizing access to alternatives to building AI-enabled workflows, these themes reveal where the industry is placing its bets—and how firms are aligning people, processes, and platforms to deliver on the promise of digital transformation.

The focus is shifting toward scalable strategies that drive growth, enhance personalization, and strengthen operational resilience by turning vision into measurable outcomes.

1. Democratizing Access to Alternatives

Platforms like Wove and iCapital are transforming how advisors access and deliver alternative investments. These tools are helping to enable greater portfolio personalization and open new distribution channels for asset managers. But with this innovation comes complexity, particularly around data aggregation, governance, and reporting.

As Ken and John noted, the firms that succeed will be those that can scale personalization without sacrificing operational integrity.

Explore More: Future-Proof Your Tech Investment

2. AI Adoption with Purpose

The AI conversation has matured. It’s no longer about experimentation, it’s about enablement. From automating marketing tear sheets to navigating regulatory complexity, firms are embedding AI into workflows to drive productivity while keeping humans in the loop.

Zico Kolter’s keynote speech emphasized the importance of intentionality in AI adoption. The focus now is on scaling with purpose—ensuring AI enhances the advisor experience rather than complicating it.

You May Also Enjoy: Transform Your Business With Cutting-Edge AI and Automation Solutions

3. Strategic Tech Stack Decisions

WealthTech innovation is empowering advisors to focus on what matters most: client relationships. But selecting and implementing the right tech stack is more complex than ever. Firms must align technology with business strategy and execute with precision.

At BNY INSITE25, conversations with leaders underscored the importance of this alignment. Whether it’s data lineage, self-service entitlements, or advisor enablement, the message was clear: technology must serve the business, not the other way around.

Success In Action: At the Heart of Financial Services

Looking Ahead: Advisor Onboarding is a Strategic Imperative

While the mainstage themes dominated the spotlight, one of the most pressing challenges surfaced inside conversations: advisor onboarding.

Unlike client onboarding, which is largely internal and process-driven, advisor onboarding is a multifaceted challenge involving legal, regulatory, and operational complexity. From licensing and credentialing to technology enablement and book of business transitions, the process is often a bottleneck for growth, especially for RIAs and broker-dealers looking to scale.

Firms are actively searching for solutions to this pain point. And for good reason: a poorly executed onboarding process can lead to compliance risks, client attrition, and advisor disengagement.

Success In Action: Speeding Insights and Powering Investment Experiences

Advisor Onboarding Framework: 12 Essentials for Wealth Management Firms

A modern advisor onboarding strategy is critical for scaling growth, ensuring compliance, and delivering a seamless advisor and client experience. As highlighted in our 2025 Wealth Management Trends, Client Advisor Empowerment was our #2 trend, and this framework is a direct reflection of that insight.

Empowered advisors need more than tools; they need a frictionless start.

A well-designed onboarding experience ensures they’re equipped from day one to deliver high-quality, personalized service efficiently and confidently.

Here are the 12 foundational components every firm should consider:

  1. Talent Strategy Alignment: Tailor onboarding for advisor types—established, career changers, junior analysts, or internal successors.
  2. Regulatory & Credentialing Compliance: Complete FINRA/SEC background checks, Form U4, and licensing (SIE, Series 7/66) to ensure legal readiness.
  3. Data & Documentation Collection: Gather IDs, employment history, certifications, NDAs, and client lists to support CRM setup and audit compliance.
  4. Technology & Platform Enablement: Provision CRM, trading tools, planning software, secure email, and cybersecurity protocols for day-one productivity.
  5. Book of Business Transition: Manage ACATs, repapering, custodial setup, and client introductions to preserve trust and assets.
  6. Branch Office Setup (If Applicable): Coordinate leasing, regulatory registration, IT/security installation, and local staffing for geographic expansion.
  7. Support & Admin Staff Enablement: Train support teams on CRM, compliance workflows, and service protocols to enhance advisor efficiency.
  8. Training & Cultural Integration: Introduce firm values, DEI principles, investment philosophy, and operational workflows to build long-term engagement.
  9. Compensation & Incentive Alignment: Define compensation structures, bonuses, and performance metrics to drive advisor motivation and retention.
  10. Client Communication Strategy: Send welcome kits, advisor bios, FAQs, and transition updates to build trust and reduce client attrition.
  11. Cross-Functional Coordination: Engage compliance, operations, marketing, and IT teams early; assign onboarding liaisons to streamline execution.
  12. Performance Milestones & Feedback: Set 30/60/90-day goals, gather feedback, and adjust support to ensure successful advisor integration.

Build Smarter, Scale Faster, and Elevate Your Wealth Strategy

BNY INSITE25 reinforced what we’ve long believed: the future of wealth and asset management is digital, data-driven, and deeply human. It’s about using technology to enhance the advisor-client relationship and to solve the real, often overlooked challenges that stand in the way of growth.

We empower wealth and asset managers with proactive insights, hyper-personalized experiences, and proactive risk management to drive sustainable growth.

  • Business Transformation: Develop and optimize strategies and processes for efficient wealth management operations.
  • Modernization: Upgrade technology and processes to ensure seamless integration and enhanced, streamlined client advisor experiences.
  • Data + Analytics: Harness data-driven insights for personalized investment strategies, client collaboration, and operational efficiency.
  • Risk + Compliance: Implement robust strategies to safeguard investor relationships and ensure regulatory adherence.
  • Consumer Experience: Enhance engagement and satisfaction with tailored advisory services and digital tools.

Discover why we have been trusted by 16 of the 20 largest wealth management firms. Explore our financial services expertise and contact us to learn more.

Part 2 – Marketing Cloud Personalization and Mobile Apps: Tracking Items

In the first part of this series, we covered how to connect a mobile app to Marketing Cloud Personalization using Salesforce’s Mobile SDK. In this post, we’ll explore how to send catalog items from the mobile app to your dataset.

What’s new on the DemoApp?

Since the last post, I made some changes to the app. The app now connects to the free NASA Mars Rover Photos API, which returns an array of images taken on a specific Earth date; for demo purposes, I’m only using the first record in that array. The API collects image data gathered by NASA’s Curiosity, Opportunity, and Spirit rovers on Mars and makes it easily available to developers, educators, and citizen scientists.

The app has two views: the main view and the display image view. In the main view, the user picks an Earth date, and the app sends it to the API to retrieve a picture. The second view displays the picture along with some information (see image below). The goal here is to send the item (the picture and its information) to Personalization.

Simulator Screenshot Iphone 16 Pro

 

The role of Personalization’s Event API

Marketing Cloud Personalization provides an Event API that sources use to send event data to the platform, where the event pipeline processes it. Campaign data can then be returned and served to the end user. However, developers cannot use this API to handle mobile application events.

“The Personalization Mobile SDK has separate functionality used for mobile app event processing, with built-in features that are currently unavailable through the Event API.”

So, we can rule out the Event API for this use case.

Tracking Items

Tracking items is an important part of any Personalization implementation. Configure the catalog object so that it logs an event when a user views a product. For example, imagine you have an app that sells the sweaters you knit. You want to know how many users view the “Red and Blue Sweater” product. With that information, you can promote other products those users might like, so they will be more likely to buy from you.

There are two ways to track items and actions. You can track catalog objects such as Products, Articles, Blogs, and Categories (the main catalog objects in Personalization), and you can also add Tags as related catalog objects for the ones named above.

You can also track actions like AddToCart, RemoveFromCart, and Purchase.

 

Process Catalog Objects/Item Data

In order to process the catalog and item data we are going to send from our mobile application, we need to activate the Process Item Data from Native Mobile Apps option. This makes it possible for Personalization to process the data; by default, Personalization ignores all the mobile catalog data it receives.

To activate this functionality, go to SETTINGS > GENERAL SETUP > ADVANCED OPTIONS > Activate Process Item Data from Native Mobile Apps.

Process Item Data From Native Mobile Apps configuration inside Salesforce Marketing Cloud Personalization

The SDK currently works with Products, Articles, and Blogs, which are called Items. It can track purchases, comments, or views, and Items can be related to other catalog objects such as Brand, Category, and Keyword.

Methods to process item data

The following methods are used to track the action of a user viewing an Item or the detail of an Item. The web equivalents of these methods are SalesforceInteractions.CatalogObjectInteractionName.ViewCatalogObject and SalesforceInteractions.CatalogObjectInteractionName.ViewCatalogObjectDetail.

viewItem: and viewItem:actionName:

These methods track when a user views an item. Personalization automatically tracks the time spent viewing the item while the context, app, and user are active. The item remains the one viewed until this method or viewItemDetail is called again. See the documentation here.

The second method has an actionName parameter that can be used to supply a different action name to distinguish this View Item interaction.

evergageScreen?.viewItem(_ item: EVGItem?)
evergageScreen?.viewItem(_ item: EVGItem?, actionName: String?)

View Item Interaction in the Event Stream:

Event Detail Interaction

View Item interaction using the actionName parameter

Event Detail Interaction with Action Field parameter

EVGItem is an abstract base class. An item is something in the app that users can view or otherwise engage with. Classes like EVGProduct or EVGArticle inherit from this class.

The question mark at the end of String and EVGItem means that the value is optional and can be nil. The latter can happen to the EVGItem if it has an invalid value.

viewItemDetail: and viewItemDetail:actionName:

These methods track when a user views the details of an item, such as looking at other product images or opening the specifications tab. Personalization automatically tracks the time spent viewing the item while the context, app, and user are active. The item remains the one viewed until this method or viewItem: is called again.

The second method has an actionName parameter that can be used to supply a different action name to distinguish this View Item Detail interaction.

evergageScreen?.viewItemDetail(_ item: EVGItem?)
evergageScreen?.viewItemDetail(_ item: EVGItem?, actionName: String?)

View Item Detail interaction in the Event Stream:

Event Item Detail interaction

View Item Detail interaction but using the actionName parameter:

Event Item Detail With Action interaction

Now we have to define those EVGItem objects with the actual catalog object we want to track: Blog, Category, Article, or Product.

 

The EVGProduct Class

By definition, a Product is an item that a business can sell to users.  Products can be added to EVGLineItem objects when they have been ordered by the user.

We have a group of initializers we can use to create an Evergage product and send it to Personalization. The EVGProduct class has a variety of methods; for this post, I will show the most relevant ones.

Something important to remember is that in order to use classes like EVGProduct or EVGArticle, we need to import the Evergage library.

The productWithId: method

This is the most basic of them all: we just need to pass the ID of the product, and that’s it. This can be useful if we don’t want to provide much information.

evergageScreen?.viewItem(EVGProduct.init(id: "p123"))

The productWithId:name:price:url:imageUrl:evgDescription: method

Builds an EVGProduct including many of the commonly used fields. This constructor uses the id, name, price, url, imageUrl, and description fields of the Product catalog object.

As a reminder, I’m building my Product catalog object using the images from the Mars Rover Photos API along with some other attributes from the API response.

For this constructor, the values I’m sending in the parameters are:

  • The ID of the image returned in the JSON
  • The full name of the camera that took the photo
  • A price of 10 (just because this needs a value here)
  • The image URL returned in the JSON
  • A description I made using the Earth, landing, and launch dates.

All I have to do now is pass the new item using one of the methods we use to track item data.

let item: EVGItem = EVGProduct.init(id: String(id),
                                    name: name,
                                    price: 10,
                                    url: url,
                                    imageUrl: imageUrl,
                                    evgDescription: "This is a photo taken from \(roverName). Earth Date: \(earthDate). Landing Date: \(landingDate). Launch Date: \(launchDate)")

evergageScreen?.viewItemDetail(item)

 

The item declaration is correct since EVGProduct inherits from EVGItem.

After populating the information, the catalog object will look like this inside Marketing Cloud Personalization:

Product Catalog Object Item inside SFMC Personalization

The productFromJSONDictionary: method

As the name says, it creates an EVGProduct from the provided JSON dictionary. A JSON dictionary is a collection of key-value pairs in the form [String : Any] where you add attributes from the Product catalog object.

let productDict : [String : Any] = [
           "_id": String(id),
           "url": url,
           "name": name,
           "imageUrl": imageUrl,
           "description": "This is a photo taken form \(roverName). Earth Date: \(earthDate). Landing Date: \(landingDate). Launch Date: \(launchDate)",
           "price": 10,
           "currency": "USD",
           "inventoryCount": 2
]

let itemJson: EVGItem? = EVGProduct.init(fromJSONDictionary: productDict)
evergageScreen?.viewItemDetail(itemJson, actionName: "User did specific action")

Then you can initialize the EVGProduct object with the constructor that uses the fromJSONDictionary parameter.

The last step here is to send the action with the viewItemDetail method.

This is how the record should look after it is created in the dataset.

Product created using JSON method

 

Final Class

This is how our class looks with the methods that send the item interactions.

Swift class code with the methods that send interactions to Personalization

Bonus: How to set attribute values?

Imagine you also want to set attributes to send to Personalization, like first name, last name, email address, or zip code. To do that, all you need is the setUserAttribute method, called inside the AppDelegate class or after the user logs in. We used this class earlier to pass the ID of the user and to set the dataset ID.

After the user logs in, you can pass the information you need to Personalization. The setUserAttribute:forName: method sets an attribute (a name/value pair) on the user. The next event sends the new value to the Personalization dataset.

evergage.setUserAttribute("attributeValue", forName: "attributeName")

//Following the example
evergage.userId = evergage.anonymousId
evergage.setUserAttribute("Raul", forName: "firstName")
evergage.setUserAttribute("Juliao", forName: "lastName")
evergage.setUserAttribute("raul@gmail.com", forName: "emailAddress")
evergage.setUserAttribute("123456", forName: "zipCode")

The set attributes event:

Event interaction setting user information.

The Customer’s Profile view

Customer Profile View pointing the newly set attributes

 

Conclusion: Syncing Your Mobile App’s Catalog with Personalization

To wrap things up, setting up Articles, Blogs, and Categories works pretty much the same way as setting up Products. The structure stays consistent—you just have to keep in mind that each one belongs to a different class, so you’ll need to tweak things slightly depending on what you’re working with.

That said, one big limitation to note is that you can’t send custom attributes in catalog objects, even if you try using the JSON dictionary method. I tested a few different approaches, and unfortunately, it only supports the default attributes.

Also, the documentation doesn’t really go into detail about using other types of catalog objects outside of Articles, Blogs, Products, and Categories. It’s unclear if custom catalog objects are supported at all through the mobile SDK, which makes things a bit tricky if you’re looking to do something more advanced.

In part 3, we’ll take a look at how to set up push notifications and mobile campaigns.

Elevating API Automation: Exploring Karate as an Alternative to Rest-Assured

Karate, according to Karate Labs, is the only open-source tool that unifies API test automation, mocks, performance testing, and UI automation into a single framework. Using Behavior Driven Development (BDD) syntax enables easy scenario writing, even for non-programmers. With built-in assertions, a reporting mechanism, and parallel test execution, Karate streamlines project development and maintenance by offering compile-free, readable code.

The Karate Framework was created by Peter Thomas in 2017 with the goal of making testing functionalities accessible to everyone. Although it was written in Java, the framework’s files are not restricted to Java, making it more versatile and user-friendly.

Key Features of Karate

  • Utilizes the easy-to-understand Gherkin language.
  • Requires no advanced programming knowledge like Java.
  • Offers built-in parallel testing capabilities, eliminating the need for external tools like Maven or Gradle.
  • Includes a UI for debugging tests.
  • Built on popular Cucumber standards.
  • Simple to create and set up a testing framework.
  • Allows calling one feature file from another.
  • Provides built-in support for Data-Driven Testing, eliminating the need for external frameworks.
  • Features native REST reporting, with optional integration with Cucumber for enhanced UI reports and clarity.
  • Offers in-house support for switching configurations across different testing environments (QA, Stage, Prod, Pre-Prod).
  • Seamlessly integrates with CI/CD pipelines.
  • Supports various types of HTTP calls, including:
    • WebSocket support
    • SOAP requests
    • HTTP
    • Browser cookie handling
    • HTTPS
    • HTML-form data
    • XML requests

Karate vs. Rest-Assured: A Comparison

  • Rest-Assured: A Java-based library designed for testing REST services, Rest-Assured allows you to write test scripts using Java. It excels at handling various request types, enabling the verification of different business logic combinations.
  • Karate Framework: A Cucumber/Gherkin-based tool, Karate is used for testing both SOAP and REST services. It offers an easy-to-understand syntax, making it accessible to both technical and non-technical users.
Feature | Rest-Assured | Karate
Plain-text tests | No | Yes
Parallel execution | Partial | Yes
Data-driven testing | Not built in | Built in

Compared with Cucumber:

Feature | Cucumber | Karate
Built-in step definitions | No | Yes
Parallel execution | No | Yes
Re-use of feature files | No | Yes

For a more detailed comparison, visit Karate VS RestAssured 

Why Karate?

Karate is worth adopting because it unifies API, UI, mock‑service and performance testing in a single, low‑code framework while remaining fast, readable, and easy for both testers and developers to maintain. Its domain-specific language (DSL) enables even non-Java teams to write plain-text scenarios, while still integrating smoothly with Java and CI/CD pipelines.

1. Unified Feature Set

Karate is the only open-source tool that combines API automation, UI automation (via a Selenium-free engine), service virtualization mocks, and Gatling-powered performance testing in one framework, eliminating the need for multiple tools.

  • 1.1 API + Web in the Same Script

Within a single feature file, you can switch from calling a REST endpoint to driving a browser, enabling true end‑to‑end scenarios without context‑switching or extra libraries.

  • 1.2 Re‑usable Performance Tests

Karate lets you reuse functional API tests as Gatling load tests, saving the effort of rewriting user flows in a separate performance tool.

2. Productivity & Ease of Use

  • 2.1 Low‑Code DSL

Tests are written in a Gherkin‑like syntax that hides Java boilerplate; glue code is unnecessary, lowering the barrier for non‑programmers.

  • 2.2 Less Code, Faster Feedback

Because feature files are plain text and do not need compilation, developers iterate faster than with code‑heavy libraries like Rest Assured.

  • 2.3 Built‑In Assertions & Reports

Karate ships with powerful JSON/XML matchers and generates rich HTML reports out of the box, so teams spend zero time wiring external assertion or reporting frameworks.

3. Performance & Scalability

Parallel execution is built‑in; benchmarks show Karate tests often run faster than equivalent Rest Assured suites, which matters when suites grow large.

4. Team Collaboration & Maintainability

  • No Java prerequisite: Business testers can contribute directly, improving coverage and shared understanding.

  • Single truth of test logic: API specs, functional checks, mocks, and load profiles live in one place, reducing duplication and drift.

  • CI/CD ready: Karate runs via JUnit/TestNG and generates standard reports that integrate seamlessly with Jenkins, GitHub Actions, Azure DevOps, and other platforms, eliminating the need for plugins.

5. When Karate Shines

Scenario Why Karate Helps
Green‑field API project Rapid authoring & mocks speed up backend‑frontend co‑development
Microservices with contract testing DSL assertions keep contracts readable; mocks isolate services
Teams with mixed skill levels Non‑coders write tests; engineers extend with Java only when needed
Need one tool for API + UI Avoids juggling Selenium/WebDriver + Rest Assured

6. Potential Limitations

Karate’s power comes from its opinionated DSL—teams needing highly customised Java code or advanced XML handling may prefer lower‑level libraries.

Challenges in the Karate Framework

Karate is great for quick, readable API tests, but it has limitations in IDE support, type safety, UI complexity, and community resources. For more advanced scenarios, you may need to combine it with other tools or use more code-centric frameworks.

Tools Needed for Working with the Karate Framework

Eclipse

Eclipse is an Integrated Development Environment (IDE) widely used for Java programming. It serves as a robust platform for developing and managing Karate projects.

Maven

Maven is a build automation tool primarily used for Java projects. It facilitates setting up a Karate environment and managing project dependencies. To configure Eclipse with Maven, you can follow the instructions for Maven installation here.

To use Karate with Maven, you’ll need to include the following dependencies in your pom.xml.

<dependencies>
    <dependency>
        <groupId>com.intuit.karate</groupId>
        <artifactId>karate-apache</artifactId>
        <version>0.9.6</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.intuit.karate</groupId>
        <artifactId>karate-junit4</artifactId>
        <version>0.9.6</version>
        <scope>test</scope>
    </dependency>
</dependencies>

Note: The latest versions of these dependencies may be available in the Maven repository.

If you want to enable Cucumber reporting, add the following dependency as well.

<dependency>
   <groupId>net.masterthought</groupId>
   <artifactId>cucumber-reporting</artifactId>
   <version>5.3.0</version>
</dependency>

Java Environment Setup on Your System

You’ll need to set up the JDK (Java Development Kit) and JRE (Java Runtime Environment) on your system to start working with Karate Framework scripts.

Now with this, we are all set to start with creating the Karate framework.

Conclusion

This overview highlights the advantages of the Karate Framework for API testing, offering a simpler and more accessible alternative to other tools, such as Rest-Assured, by reducing the need for advanced programming knowledge and offering powerful built-in features.

Adopting Karate can reduce your test tool stack, speed up automation, and make quality a shared responsibility across technical and non‑technical roles. By covering functional, load, and even UI tests with the same syntax, teams gain faster feedback, simpler maintenance, and a smoother path to continuous delivery.

Microsoft Copilot for Power Platform

Introduction to Copilot for Power Platform

Microsoft Copilot is a revolutionary AI-powered tool for Power Platform, designed to streamline the development process and enhance the intelligence of your applications. This learning path will take you through the fundamentals of Copilot and its integration with Power Apps, Power Automate, Power Virtual Agents, and AI Builder.

Copilot in Microsoft Power Platform helps app makers quickly solve business problems. A copilot is an AI assistant that can help you perform tasks and obtain information. You interact with a copilot by using a chat experience. Microsoft has added copilots across the different Microsoft products to help users be more productive. Copilots can be generic, such as Microsoft Copilot, and not tied to a specific Microsoft product. Alternatively, a copilot can be context-aware and tailored to the Microsoft product or application that you’re using at the time.


Microsoft Power Platform Copilots & Specializations.

Microsoft Power Platform has several copilots that are available to makers and users.

Microsoft Copilot for Microsoft Power Apps

Use this copilot to help create a canvas app directly from your ideas. Give the copilot a natural language description, such as “I need an app to track my customer feedback.” Afterward, the copilot offers a data structure for you to iterate until it’s exactly what you need, and then it creates pages of a canvas app for you to work with that data. You can edit this information along the way. Additionally, this copilot helps you edit the canvas app after you create it. Power Apps also offers copilot controls for users to interact with Power Apps data, including copilots for canvas apps and model-driven apps.

Microsoft Copilot for Microsoft Power Automate

Use this copilot to create automation that communicates with connectors and improves business outcomes. This copilot can work with cloud flows and desktop flows. Copilot for Power Automate can help you build automation by explaining actions, adding actions, replacing actions, and answering questions.

Microsoft Copilot for Microsoft Power Pages

Use this copilot to describe and create an external-facing website with Microsoft Power Pages. As a result, you have theming options, standard pages to include, and AI-generated stock images and relevant text descriptions for the website that you’re building. You can edit this information as you build your Power Pages website.

How Copilots Work

You can create a copilot by using a language model, which is like a computer program that can understand and generate human-like language. A language model can perform various natural language processing tasks based on a deep-learning algorithm. The massive amounts of data that the language model processes can help the copilot recognize, translate, predict, or generate text and other types of content.

Despite being trained on a massive amount of data, the language model doesn’t contain information about your specific use case, such as the steps in a Power Automate flow that you’re editing. The copilot shares this information for the system to use when it interacts with the language model to answer your questions. This context is commonly referred to as grounding data. Grounding data is use case-specific data that helps the language model perform better for a specific topic. Additionally, grounding data ensures that your data and IP are never part of training the language model.

Accelerate Solution Building with Copilot

Consider the various copilots in Microsoft Power Platform as specialized assistants that can help you become more productive. Copilot can help you accelerate solution building in the following ways:

  • Prototyping
  • Inspiration
  • Help with completing tasks
  • Learning about something

Prototyping

Prototyping is a way of taking an idea that you discussed with others or drew on a whiteboard and building it in a way that helps someone understand the concept better. You can also use prototyping to validate that an idea is possible. For some people, having access to your app or website can help them become a supporter of your vision, even if the app or website doesn’t have all the features that they want.

Inspiration

Building on the prototyping example, you might need inspiration on how to evolve the basic prototype that you initially proposed. You can ask Copilot for inspiration on how to handle the approval of which ideas to prioritize. Therefore, you might ask Copilot, “How could we handle approval?”

Help with Completing Tasks

By using a copilot to assist in your solution building in Microsoft Power Platform, you can complete more complex tasks in less time than if you do them manually. Copilot can also help you complete small, tedious tasks, such as changing the color of all buttons in an app.

Learn about Something

While building an app, flow, or website, you can open a browser and use your favorite search engine to look up something that you’re trying to figure out. With Copilot, you can learn without leaving the designer. For example, your Power Automate flow has a step to List Rows from Dataverse, and you want to find out how to check if rows are retrieved. You could ask Copilot, “How can I check if any rows were returned from the List rows step?”

Knowing the context of your flow, Copilot would respond accordingly.

Design and Plan with Copilot

Copilot can be a powerful way to accelerate your solution-building. However, it’s the maker’s responsibility to know how to interact with it. That interaction includes writing prompts to get the desired results and evaluating the results that Copilot provides.

Consider the Design First

While asking Copilot to “Help me automate my company to run more efficiently” seems ideal, that prompt is unlikely to produce useful results from Microsoft Power Platform Copilots.

Consider the following example, where you want to automate the approval of intake requests. Without significant design thinking, you might use the following prompt with Copilot for Power Automate.

Copilot in cloud flow


“Create an approval flow for intake requests and notify the requestor of the result.”

This prompt produces the following suggested cloud flow.

(Screenshot: the cloud flow suggested by Copilot)

While the prompt is an acceptable start, you should consider more details that can help you create a prompt that might get you closer to the desired flow.

A good way to improve your success is to spend a few minutes on a whiteboard or other visual design tool, drawing out the business process.


Include the Correct Ingredients in the Prompt

A prompt should include as much relevant information as possible. Each prompt should include your intended goal, context, source, and outcome.

When you’re starting to build something with Microsoft Power Platform copilots, the first prompt that you use sets up the initial resource. For Power Apps, this first prompt is to build a table and an app. For Power Automate, this first prompt is to set up the trigger and the initial steps. For Power Pages, this first prompt sets up the website.

Consider the previous example and the sequence of steps in the sample drawing. You might modify your initial prompt to be similar to the following example.

“When I receive a response to my Intake Request form, start and wait for a new approval. If approved, notify the requestor saying so and also notify them if the approval is denied.”

Continue the Conversation

You can iterate with your copilot. After you establish the context, Copilot remembers it.

The key to starting to build an idea with Copilot is to consider how much to include with the first prompt and how much to refine and add after you set up the resource. Knowing this key consideration is helpful because you don’t need to get a perfect first prompt, only one that builds the idea. Then, you can refine the idea interactively with Copilot.

6 Unique Copilot Features in Power Platform

  1. Natural Language Power FX Formulas in Power Apps

Copilot enables developers to write Power FX formulas using natural language. For instance, typing /subtract datepicker1 from datepicker2 in a label control prompts Copilot to generate the corresponding formula, such as DateDiff(DatePicker1.SelectedDate, DatePicker2.SelectedDate, Days). This feature simplifies formula creation, especially for those less familiar with coding.

  2. AI-Powered Document Analysis with AI Builder

By integrating Copilot with AI Builder, users can automate the extraction of data from documents, such as invoices or approval forms. For example, Copilot can extract approval justifications and auto-generate emails for swift approvals within Outlook. This process streamlines workflows and reduces manual data entry.

  3. Automated Flow Creation in Power Automate

Copilot assists users in creating automated workflows by interpreting natural language prompts. For example, a user can instruct Copilot to “Create a flow that sends an email when a new item is added to SharePoint,” and Copilot will generate the corresponding flow. This feature accelerates the automation process without requiring extensive coding knowledge.

  4. Conversational App Development in Power Apps Studio

In Power Apps Studio, Copilot allows developers to build and edit apps using natural language commands. For instance, typing “Add a button to my header” or “Change my container to align center” enables Copilot to execute these changes, simplifying the development process and making it more accessible.

  5. Generative Topic Creation in Power Virtual Agents

Copilot facilitates the creation of conversation topics in Power Virtual Agents by generating them from natural language descriptions. For example, describing a topic like “Customer Support” prompts Copilot to create a topic with relevant trigger phrases and nodes, streamlining the bot development process.

  6. AI-Driven Website Creation in Power Pages

Copilot assists in building websites by interpreting natural language descriptions. For example, stating “Create a homepage with a contact form and a product gallery” prompts Copilot to generate the corresponding layout and components, expediting the website development process.

Limitations of Copilot

  1. Limited understanding of business context
     Description: Copilot doesn’t always understand your specific business rules or logic.
     Example: You ask Copilot to "generate a travel approval form," but your org requires approval from both the team lead and HR. Copilot might only include one level of approval.
  2. Restricted to available connectors and data
     Description: Copilot can only access data sources that are already connected in your app.
     Example: You ask it to "show top 5 sales regions," but haven’t connected your Sales DB — Copilot can't help unless that connection is preconfigured.
  3. Not fully customizable output
     Description: You might not get exactly the layout, formatting, or logic you want — especially for complex logic.
     Example: Copilot generates a form with 5 input fields, but doesn't group them or align them properly; you still need to fine-tune it manually.
  4. Model hallucination (AI guessing wrong info)
     Description: Like other LLMs, Copilot may “guess” when unsure — and guess incorrectly.
     Example: You ask Copilot to create a formula for filtering “Inactive users,” and it writes a filter condition that doesn’t exist in your dataset.
  5. English-only or limited language support
     Description: Most effective prompts and results come in English; support for other languages is limited or not optimized.
     Example: You try to ask Copilot in Hindi, and it misinterprets the logic or doesn't return relevant suggestions.
  6. Requires clean, named data structures
     Description: Copilot struggles when your tables/columns aren't clearly named.
     Example: If you name a field fld001_status instead of Status, Copilot might fail to identify it correctly or generate unreadable code.
  7. Security roles not respected by Copilot
     Description: Copilot may suggest features that would break your security model if implemented directly.
     Example: You generate a data view for all users, but your app is role-based — Copilot won’t automatically apply row-level security filters.
  8. No support for complex logic or multi-step workflows
     Description: It’s good at simple flows, but not for things like advanced branching, looping, or nested conditions.
     Example: You ask Copilot to automate a 3-level approval chain with reminder logic and escalation — it gives a very basic starting point.
  9. Limited offline or disconnected use
     Description: Copilot and generated logic assume you’re online.
     Example: If your app needs to work offline (e.g., for field workers), Copilot-generated logic may not account for offline sync or local caching.
  10. Only works inside Microsoft ecosystem
     Description: Copilot doesn’t support 3rd-party AI tools natively.
     Example: If your company uses Google Cloud or OpenAI directly, Copilot won’t connect unless you build custom connectors or use HTTP calls.

Build Good Prompts

Knowing how to best interact with the copilot can help get your desired results quickly. When you’re communicating with the copilot, make sure that you’re as clear as you can be with your goals. Review the following dos and don’ts to help guide you to a more successful copilot-building experience.

Do’s of Prompt-Building

To have a more successful copilot building experience, do the following:

  • Be clear and specific.
  • Keep it conversational.
  • Give examples.
  • Check for accuracy.
  • Provide contextual details.
  • Be polite.

Don’ts of Prompt-Building

  • Be vague.
  • Give conflicting instructions.
  • Request inappropriate or unethical tasks or information.
  • Interrupt or quickly change topics.
  • Use slang or jargon.

Conclusion

Copilot in Microsoft Power Platform marks a major step forward in making low-code development truly accessible and intelligent. By enabling users to build apps, automate workflows, analyze data, and create bots using natural language, it empowers both technical and non-technical users to turn ideas into solutions faster than ever.

It transforms how people interact with technology by:

  • Accelerating solution creation
  • Lowering technical barriers
  • Enhancing productivity and innovation

With built-in security, compliance with organizational governance, and continuous improvements from Microsoft’s AI advancements, Copilot is not just a tool—it’s a catalyst for transforming how organizations solve problems and deliver value.

As AI continues to evolve, Copilot will play a central role in democratizing software development and helping organizations move faster and smarter with data-driven, automated tools.

Integrating Drupal with Salesforce SSO via SAML and Dynamic User Sync

Single Sign-On (SSO) is a crucial part of modern web applications, enabling users to authenticate once and access multiple systems securely. If your organization uses Salesforce as an Identity Provider (IdP) and Drupal as a Service Provider (SP), you can establish a secure SSO connection using the SAML protocol.

In this blog, we’ll walk through how to integrate Drupal with Salesforce for SSO using the SAML Authentication module. We’ll also explore how to dynamically sync user data—like first name, last name, company, and roles—from Salesforce into Drupal during login.

Prerequisites

Before starting, ensure you have the following:

  • A working Drupal 9 or 10 site.
  • Access to the Salesforce admin console.
  • The SAML Authentication module installed in Drupal.
  • SSL enabled on your Drupal site (SAML requires HTTPS).

Step 1: Install the SAML Authentication Module in Drupal

You can install the module via Composer:

composer require drupal/samlauth

Then enable it using Drush or through the Drupal admin interface:

drush en samlauth

Dependencies (like simplesamlphp) may need to be managed manually or via the simplesamlphp_auth module if you prefer a different approach.

Step 2: Configure Salesforce as an Identity Provider (IdP)

  • Log in to Salesforce, and go to: Setup → Apps → App Manager → New Connected App
  • Fill in the basic details, then under Web App Settings:
    • Enable SAML.
    • Entity ID: Use your Drupal site’s SP Entity ID (e.g., https://example.com/saml/metadata)
    • ACS URL: https://example.com/saml/acs
    • Subject Type: Usually Email or Username.
    • Name ID Format: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
  • Add custom attributes:
    • FirstName
    • LastName
    • Company
    • Roles
  • Download the IdP metadata or note:
    • IdP SSO URL
    • IdP Entity ID
    • X.509 certificate

Step 3: Configure the SAML Authentication Module in Drupal

Navigate to: Admin → Configuration → People → SAML Authentication Settings (/admin/config/people/saml)

Fill in the settings:

  • IdP Entity ID and SSO URL: From Salesforce.
  • X.509 Certificate: Paste the public cert here.
  • SP Entity ID: Can be your site URL or a custom value.
  • ACS URL: Must match what you provided to Salesforce.
  • NameID format: Match Salesforce (usually emailAddress).
  • User match field: Set to mail.

Step 4: Dynamic User Synchronization

By default, SAML Authentication handles user login and account creation, but we extended this with custom logic to map additional attributes from Salesforce into the Drupal user profile.

Salesforce sends additional user information in the SAML assertion, including:

  • First name
  • Last name
  • Company
  • Roles

We’ve extended the default SAML authentication behavior with a custom hook or event subscriber to:

  • Create new users in Drupal using the email as the unique identifier.
  • Populate additional profile fields like first name, last name, and company.
  • Assign user roles dynamically based on the roles attribute from Salesforce.

This ensures that user accounts are fully provisioned and kept up-to-date every time a user logs in through SSO.

Step 5: Test the SSO Flow

  • Log out of your Drupal site.
  • Navigate to /saml/login.
  • You’ll be redirected to Salesforce to authenticate.
  • After login, you’ll be redirected back to Drupal and logged in automatically with synced user details.

Check that:

  • A new Drupal user is created if it doesn’t exist.
  • First name, last name, and company fields are populated.
  • Roles are assigned correctly.

If there’s an error, enable debugging logs and inspect the SAML response and assertion for mismatches.

Conclusion

Integrating Salesforce with Drupal using the SAML Authentication module enables a seamless and secure SSO experience. This is particularly useful for organizations using Salesforce as a central identity system. With proper configuration, users can enjoy frictionless access to your Drupal site while benefiting from Salesforce’s authentication infrastructure.

Boost Cloud Efficiency: AWS Well-Architected Cost Tips

In today’s cloud-first world, building a secure, high-performing, resilient, and efficient infrastructure is more critical than ever. That’s where the AWS Well-Architected Framework comes in: a powerful guide designed to help architects and developers make informed decisions and build better cloud-native solutions.

What is the AWS Well-Architected Framework?

The AWS Well-Architected Framework provides a consistent approach for evaluating and improving your cloud architecture. It’s built around six core pillars that represent key areas of focus for building robust and scalable systems:

  • Operational Excellence – Continuously monitor and improve systems and processes.
  • Security – Protect data, systems, and assets through risk assessments and mitigation strategies.
  • Reliability – Ensure workloads perform as intended and recover quickly from failures.
  • Performance Efficiency – Use resources efficiently and adapt to changing requirements.
  • Cost Optimization – Avoid unnecessary costs and maximize value.
  • Sustainability – Minimize environmental impact by optimizing resource usage and energy consumption


Explore the AWS Well-Architected Framework here https://aws.amazon.com/architecture/well-architected

AWS Well-Architected Timeline

From time to time, AWS makes changes to the framework and introduces new resources that we can follow to use it better for our use cases and build better architectures.


AWS Well-Architected Tool

To help you apply these principles, AWS offers the Well-Architected Tool—a free service that guides you through evaluating your workloads against the six pillars.

How it Works:

  • Select a workload.
  • Answer a series of questions aligned with the framework.
  • Review insights and recommendations.
  • Generate reports and track improvements over time.

Try the AWS Well-Architected Tool here https://aws.amazon.com/well-architected-tool/
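The tool can also be driven programmatically through the AWS SDK, which is handy for pulling review status into reports. Here is a minimal Python sketch, assuming boto3 is installed, credentials and region are configured, and workloads have already been defined in the Well-Architected Tool.

# List workloads defined in the Well-Architected Tool and print their risk counts.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

wa = boto3.client("wellarchitected")

for workload in wa.list_workloads()["WorkloadSummaries"]:
    risks = workload.get("RiskCounts", {})
    print(
        f"{workload['WorkloadName']}: "
        f"high={risks.get('HIGH', 0)}, medium={risks.get('MEDIUM', 0)}, "
        f"unanswered={risks.get('UNANSWERED', 0)}"
    )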

Go Deeper with Labs and Lenses

AWS also provides Well-Architected Labs, a library of hands-on exercises, and Well-Architected Lenses, which extend the framework with domain-specific guidance (for example, serverless and SaaS workloads), so you can go deeper on specific workload types.

Deep Dive: Cost Optimization Pillar

Cost Optimization is not just about cutting costs—it’s about maximizing value. It ensures that your cloud investments align with business goals and scale efficiently.

Why It Matters:

  • Understand your spending patterns.
  • Ensure costs support growth, not hinder it.
  • Maintain control as usage scales.

5 Best Practices for Cost Optimization

  1. Practice Cloud Financial Management
    • Build a cost optimization team.
    • Foster collaboration between finance and tech teams.
    • Use budgets and forecasts.
    • Promote cost-aware processes and culture.
    • Quantify business value through automation and lifecycle management.
  2. Expenditure and Usage Awareness
    • Implement governance policies.
    • Monitor usage and costs in real time (see the Python sketch after this list).
    • Decommission unused or underutilized resources.
  3. Use Cost-Effective Resources
    • Choose the right services and pricing models.
    • Match resource types and sizes to workload needs.
    • Plan for data transfer costs.
  4. Manage Demand and Supply
    • Use auto-scaling, throttling, and buffering to avoid over-provisioning.
    • Align resource supply with actual demand patterns.
  5. Optimize Over Time
    • Regularly review new AWS features and services.
    • Adopt innovations that reduce costs and improve performance.
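To make practice 2 (expenditure and usage awareness) concrete, the sketch below pulls last month's spend grouped by service from the Cost Explorer API. It assumes boto3 is installed, credentials are configured, and Cost Explorer is enabled on the account; the same data can feed budgets, alerts, or showback reports.

# Break down last month's AWS spend by service using the Cost Explorer API.
# Assumes boto3 is installed, credentials are configured, and Cost Explorer is enabled.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")

end = date.today().replace(day=1)                 # first day of the current month
start = (end - timedelta(days=1)).replace(day=1)  # first day of the previous month

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{group['Keys'][0]}: ${amount:,.2f}")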

Conclusion

The AWS Well-Architected Framework is more than a checklist—it’s a mindset. By embracing its principles, especially cost optimization, you can build cloud environments that are not only efficient and scalable but also financially sustainable.

PWC-IDMC Migration Gaps

In an age where technological advancements happen almost every minute, upgrading is essential for a business to survive competition and offer a customer experience beyond expectations while deploying fewer resources to derive value from any process or business.

Platform upgrades, software upgrades, security upgrades, architectural enhancements, and so on are required to ensure stability, agility, and efficiency.

Customers prefer to move from legacy systems to the cloud because of what it offers. Across cost, monitoring, maintenance, operations, ease of use, and landscape, the cloud has transformed D&A businesses significantly over the last decade.

The move from Informatica PowerCenter (PWC) to IDMC is seen as the need of the hour because of the significant advantages it offers. Developers must understand both flavors to perform this code transition effectively.

This post explains the PWC vs IDMC CDI gaps from different perspectives.

  • Development
  • Data
  • Operations

Development

  • Differences in native datatypes can be observed in IDMC when importing Source, Target, or Lookup objects. Workaround:
    • If any inconsistency is observed in IDMC mappings with Native Datatype/Precision/Scale, ensure the metadata is edited to keep them in sync between the DDL and the CDI mappings.
  • In CDI, taskflow workflow parameter values experience read and consumption issues. Workaround:
    • A Dummy Mapping task has to be created in which the list of Parameters/Variables is defined for further consumption by tasks within the taskflows (e.g., Command task, Email task, etc.).
    • Make sure to limit the # of Dummy Mapping tasks during this process.
    • Best practice is to create one Dummy Mapping task per folder to capture all the Parameters/Variables required for that entire folder.
    • For Variables whose value needs to persist for the next taskflow run, make sure the Variable value is mapped to the Dummy Mapping task via an Assignment task. This Dummy Mapping task would be used at the start and end of the taskflow to ensure that the overall taskflow processing is enabled for incremental data processing.
  • All mapping tasks/sessions in IDMC are reusable and can be used in any taskflow. If some Audit sessions are expected to run concurrently within other taskflows, ensure that the property “Allow the mapping task to be executed simultaneously” is enabled.
  • Sequence generator: data overlap issues in CDI. Workaround:
    • If a sequence generator is likely to be used in multiple sessions/workflows, it’s better to make it a reusable/SHARED Sequence.
  • VSAM Sources/Normalizer are not available in CDI. Workaround:
    • Use the Sequential File connector type for mappings using Mainframe VSAM Sources/Normalizer.
  • Sessions configured with STOP ON ERRORS > 0. Workaround:
    • Ensure the LINK condition for the next task is “PreviousTask.TaskStatus – STARTS WITH ANY OF 1, 2” within CDI taskflows.
  • Partitions are not supported with Sources under Query mode. Workaround:
    • Create multiple sessions and run them in parallel.
  • Currently, parameterization of Schema/Table is not possible for Mainframe DB2. Workaround:
    • Use an ODBC-type connection to access DB2 with Schema/Table parameterization.
  • A mapping with a LOOKUP transformation used across two sessions cannot be overridden at the session or mapping task level to enable or disable caching. Workaround:
    • Use two different mappings with LOOKUP transformations if one mapping/session must have the cache enabled and the other must have it disabled.

Data

  • IDMC output data contains additional double quotes. Workaround:
    • Session level – use the property __PMOV_FFW_ESCAPE_QUOTE=No
    • Administrator settings level – use the property UseCustomSessionConfig=Yes
  • IDMC output data contains additional scale values with the Decimal datatype (e.g., 11.00). Workaround:
    • Use an IF-THEN-ELSE statement to remove the unwanted zeros (output: 11.00 -> 11); a minimal expression sketch follows this list.
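As an illustration, the expression below is a minimal sketch of such an IF-THEN-ELSE using standard Informatica expression functions (IIF, TRUNC, TO_CHAR, TO_INTEGER). IN_VALUE is a hypothetical decimal input port; the actual port names and null handling will depend on your mapping.

-- Hypothetical output-port expression to drop a trailing ".00".
-- Keeps the decimal form only when a non-zero fractional part exists.
IIF(TRUNC(IN_VALUE) = IN_VALUE,
    TO_CHAR(TO_INTEGER(IN_VALUE)),   -- 11.00 becomes "11"
    TO_CHAR(IN_VALUE))               -- 11.25 stays "11.25"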

Operations

  • CDI doesn’t store logs beyond 1000 mapping task runs in 3 days on Cloud (it does store logs in the Secure Agent). Workaround:
    • To retain Cloud job run stats, create Audit tables and use the Data Marketplace utility to load the audit info (volume processed, start/end time, etc.) into the Audit tables by scheduling this job at regular intervals (hourly or daily).
  • Generic restartability issues occur during IDMC operations. Workaround:
    • Ensure a Dummy Assignment task is introduced whenever the code contains a custom error handling flow.
  • SKIP FAILED TASK and RESUME FROM NEXT TASK operations have issues in IDMC. Workaround:
    • Ensure every LINK condition has an additional condition appended, “Mapping task. Fault.Detail.ErrorOutputDetail.TaskStatus=1”.
  • In PWC, any task can be run from anywhere within a workflow; this is not possible in IDMC. Workaround:
    • A feature request is being worked on by GCS to update the software.
  • The IDMC mapping task config level cannot suffix log file names with the concurrent run workflow instance name due to parameter concatenation issues. Workaround:
    • Use a separate parameter within the parameter file to have the mapping task log file names suffixed with the concurrent run workflow instance name.
  • IDMC doesn’t honour the “Save Session log for these runs” property set at the mapping task level when the session log file name is parameterized. Workaround:
    • Copy the mapping task log files to the Secure Agent server after the job run.
  • If the Session Log File Directory contains a / (slash) when used along with parameters (e.g., $PMSessionLogDir/ABC) under Session Log Directory Path, every run’s log is appended to the same log file. Workaround:
    • Use a separate parameter within the parameter file for $PMSessionLogDir.
  • In IDMC, the @numAffectedRows feature is not available to get the source and target success rows to load them into the audit table. Workaround:
    • Use @numAppliedRows instead of @numAffectedRows.
  • Concurrent runs cannot be performed on taskflows from the CDI Data Integration UI. Workaround:
    • Use the Paramset utility to upload concurrent paramsets and the runAJobCli utility to run taskflows with multiple concurrent run instances from the command prompt.

Conclusion

While performing PWC to IDMC conversions, the Development and Operations workarounds outlined above will help avoid rework and save effort, thereby achieving customer satisfaction in delivery.

]]>
https://blogs.perficient.com/2025/06/05/pwc-idmc-migration-gaps/feed/ 0 382445
IDMC – CDI Best Practices https://blogs.perficient.com/2025/06/05/idmc-cdi-best-practices/ https://blogs.perficient.com/2025/06/05/idmc-cdi-best-practices/#respond Thu, 05 Jun 2025 05:01:33 +0000 https://blogs.perficient.com/?p=382442

Every end product must meet and exceed customer expectations. For a successful delivery, it is not just about doing what matters, but also about how it is done by following and implementing the desired standards.

This post outlines the best practices to consider with IDMC CDI ETL during the following phases.

  • Development
  • Operations 

Development Best Practices

  • Check native datatypes between the database table DDLs and the IDMC CDI mapping Source, Target, and Lookup objects.
    • If any inconsistency is observed in IDMC mappings with Native Datatype/Precision/Scale, ensure the metadata is edited to keep them in sync between the DDL and the CDI mappings.
  • In CDI, for workflow parameter values to be consumed by taskflows, a Dummy Mapping task has to be created in which the list of Parameters/Variables is defined for further consumption by tasks within the taskflows (e.g., Command task, Email task, etc.).
    • Make sure to limit the # of Dummy Mapping tasks during this process
    • Best practice is to create 1 Dummy Mapping task for a folder to capture all the Parameters/Variables required for that entire folder.
    • For Variables whose value needs to be persistent for the next taskflow run, make sure the Variable value is mapped to the Dummy Mapping task via an Assignment task. This Dummy mapping task would be used at the start and end of the task flow to ensure that the overall task flow processing is enabled for Incremental Data processing.
  • If some Audit sessions are expected to run concurrently within other taskflows, ensure that the property “Allow the mapping task to be executed simultaneously” is enabled.
  • Avoid using the SUSPEND TASKFLOW option, as it requires manual intervention and may cause issues during job restarts.
  • Ensure correct parameter representation using Single Dollar/Double Dollar. Incorrect representation will cause the parameters not to be read by CDI during Job runs.
  • While working with Flatfiles in CDI mappings, always enable the property “Retain existing fields at runtime”.
  • If a sequence generator is likely to be used in multiple sessions/workflows, it’s better to make it a reusable/SHARED Sequence.
  • Use the Sequential File connector type for mappings using Mainframe VSAM Sources/Normalizer.
  • If a session is configured with STOP ON ERRORS > 0, ensure the LINK condition for the next task is “PreviousTask.TaskStatus – STARTS WITH ANY OF 1, 2” within CDI taskflows.
  • For mapping task failure flows, set the LINK condition for the next task to “PreviousTask.Fault.Detail.ErrorOutputDetail.TaskStatus – STARTS WITH ANY OF 1, 2” within CDI taskflows.
  • Partitions are not supported with Sources under Query mode. Ensure multiple sessions are created and run in parallel as a workaround.
  • Currently, parameterization of Schema/Table is not possible for Mainframe DB2. Use an ODBC-type connection to access DB2 with Schema/Table parameterization.

Operations Best Practices

  • Use Verbose data Session log config only if absolutely required, and then only in the lower environment.
  • Ensure the Sessions pick the parameter values properly during job execution
    • This can be verified by changing the parameter names and values to incorrect values and determining if the job fails during execution. If the job fails, it means that the parameters are READ correctly by the CDI sessions.
  • Ensure the Taskflow name and API name always match. If different, the job will face issues during execution via the runAJobCli utility from the command prompt.
  • CDI doesn’t store logs beyond 1000 mapping tasks run in 3 days on Cloud (it does store logs in Secure Agent). To retain Cloud job run stats, create Audit tables and use the Data Marketplace utility to get the Audit info (Volume processes, Start/End time, etc) loaded to the Audit tables by scheduling this job at regular intervals (Hourly or Daily).
  • In order to ensure no issues with Generic Restartability during Operations, ensure a Dummy assignment task is introduced whenever the code contains Custom error handling flow.
  • In order to facilitate SKIP FAILED TASK and RESUME FROM NEXT TASK operations, ensure every LINK condition has an additional condition appended, “Mapping task. Fault.Detail.ErrorOutputDetail.TaskStatus=1”
  • If mapping task log file names are to be suffixed with the concurrent run workflow instance name, ensure it is done within the parameter file; the IDMC mapping task config level cannot do this due to parameter concatenation issues.
  • Copy the mapping task log files to the Secure Agent server after the job run, since IDMC doesn’t honour the “Save Session log for these runs” property set at the mapping task level when the session log file name is parameterized.
  • Ensure Session Log File Directory doesn’t contain / (Slash) when used along with parameters (ex., $PMSessionLogDir/ABC) under Session Log Directory Path. When used, this would append every run log to the same log file.
  • Concurrent runs cannot be performed on taskflows from the  CDI Data Integration UI. Use the Paramset utility to upload concurrent paramsets and use the runAJobCli utility to run taskflows with multiple concurrent run instances from the command prompt.

Conclusion

In addition to coding best practices, following these Development and Operations best practices will help avoid rework and save effort, thereby achieving customer satisfaction with the delivery.

]]>
https://blogs.perficient.com/2025/06/05/idmc-cdi-best-practices/feed/ 0 382442
Redefining CCaaS Solutions Success in the Digital Era https://blogs.perficient.com/2025/06/03/redefining-ccaas-success-in-the-digital-era/ https://blogs.perficient.com/2025/06/03/redefining-ccaas-success-in-the-digital-era/#comments Tue, 03 Jun 2025 20:26:24 +0000 https://blogs.perficient.com/?p=382347

With advancements in technology, machine learning, and AI capabilities in the customer care space, customer expectations are evolving faster than ever before. Customers expect smoother, context-aware, personalized, and generally faster and more effective experiences across channels when contacting a support center.

This calls for a need to revisit and redefine the success metrics for a Contact Center as a Service (CCaaS) strategy. 

 

Let’s break this down into two categories. The first category includes key metrics that are still essential to measure; the standards for these metrics, however, have been raised, and the way they are measured has evolved. The second category introduces new metrics that are emerging because of advanced CCaaS capabilities in a modern contact center landscape.

  

Key Traditional Success Metrics Reimagined  

  

Customer Satisfaction (CSAT) remains a cornerstone success metric. Every improvement a customer service center is looking to make, from improving operational efficiencies to enhancing agent and customer experience, will directly or indirectly impact the customer and is aimed at elevating the customer experience. With automated personalized journeys being an important part of modern customer service, it is important to monitor real-time analytics on automated journeys in addition to live agent interactions. This helps better understand the customer experience and find opportunities to fine-tune the friction points to improve customer satisfaction. Customer service is not only about resolving customer issues, but also about providing an effortless experience.

  

First Contact Resolution is still a key success metric in the CCaaS space, but modern tools can revolutionize the extent to which a customer service center can go to improve this metric, so the standards for this metric have risen. Passing context effectively across channels, real-time monitoring, predictive analytics and insights, and proactive outreach can increase the likelihood of addressing customer needs on the first contact, or even sometimes without the need for a live agent interaction.

  

Customer Retention Rate metric has been revamped with the advancement of technology in customer service. Advanced predictive analytics can help track the customer experience throughout their journey and shed light on the underlying customer behavior patterns. This will enable proactive engagement strategies personalized to every customer. Real-time sentiment analysis can provide instant feedback to the customer service representatives and their supervisors to give them a chance to course correct immediately in order to shift the sentiment to a positive experience and retain customers. 

  

Emerging Success Metrics 

  

Agent Experience and Satisfaction has a direct impact on the operation of a contact center and hence the customer experience. Traditionally, this metric was not tracked broadly as an important metric to measure a successful contact center strategy. However, we know today that agent experience and satisfaction is a key metric for transforming contact centers from cost centers into revenue generating units. Contact centers can leverage modern tools in different areas from agent performance monitoring, training and identifying knowledge gaps to providing automated workflows and real-time agent assistance, to elevate the agent experience.

These strategies and tools help agents become more effective and productive while providing service. Satisfied agents are more motivated to help customers effectively. This can improve metrics like First Contact Resolution rate and Average Handle Time. Happy and productive agents are more likely to engage positively with customers to discuss potential cross-sell and upsell opportunities. Moreover, agent turnover and the cost associated with that will be lowered due to the reduced burden of onboarding and training new agents regularly and constantly being short of staff. 

  

Sentiment Analysis and Real-time Interaction Quality provide immediate insights to contact center representatives about the customer’s emotions, the conversation tone, and the effectiveness of their interactions. This helps the representatives refine their interaction strategy on the spot to maintain a positive and effective engagement with the customer. This transforms contact centers into emotionally intelligent, customer-focused support centers, which makes a huge difference at a time when the quality of the experience matters as much as the outcome.

  

Predictive Analysis Accuracy represents an entirely new set of metrics for a modern contact center that leverages predictive analytics in its operation. It is crucial to measure this metric and evaluate the accuracy of the forecasts against customer behavior and demands as well as agent workflow needs. Inaccurate predictions are not only ineffective but can also be harmful to contact center operations. They can lead to poor decision making, confusion, and disappointing customer experiences. Accuracy in anticipating customer needs can enable proactive outreach, positive and effective interactions, fewer friction points, and reduced service contacts while facilitating effective automatic upsell and cross-sell initiatives.

  

Technology Utilization Rate is an important metric to track in a modern and evolving customer care solution. While the latest technological advancements allow a lot of intelligent automation and enhancement within a CCaaS solution, a contact center strategy is required to identify the most impactful modern capabilities for every customer service operation. The strategy needs to incorporate tracking the success of the technology adoption through system usage data and adoption metrics. This ensures that technology is being leveraged effectively and is providing value to the business. Technology utilization tracking can also reveal training and adoption gaps, ensuring that modern tools are not just implemented for the sake of innovation, but are actively contributing to improved efficiency within a contact center.

  

Conclusion

The development of advanced native capabilities and integration of modern tools within CCaaS platforms are revolutionizing the customer care industry and reshaping customer expectations. Staying ahead of this shift is crucial. While utilizing these advancements to achieve operational efficiencies, it is equally important to redefine the success metrics that provide businesses with insights and feedback on a modern CCaaS strategic roadmap. Adopting a fresh approach to capturing traditional metrics like Customer Satisfaction Scores and First Contact Resolution, combined with measuring new metrics such as Real-time Interaction Quality and Predictive Analysis Accuracy will offer a comprehensive view of a contact center’s maturity and its progress towards a successful and effective modern CCaaS solution. 

We can measure these metrics by utilizing built-in monitoring and analytical tools of modern CCaaS platforms along with AI-powered services integrations for features like Sentiment and Real-time Quality Analysis. We can gather regular feedback and data from agents and automated tracking tools to monitor system usability and efficiency. All this data can be streamed and displayed on a unified custom analytics dashboard, providing a comprehensive view of contact center performance and effectiveness. 

]]>
https://blogs.perficient.com/2025/06/03/redefining-ccaas-success-in-the-digital-era/feed/ 1 382347
Over The Air Updates for React Native Apps https://blogs.perficient.com/2025/06/02/over-the-air-ota-deployment-process-for-mobile-app/ https://blogs.perficient.com/2025/06/02/over-the-air-ota-deployment-process-for-mobile-app/#respond Mon, 02 Jun 2025 14:07:24 +0000 https://blogs.perficient.com/?p=349211

Mobile app development is growing rapidly, and so are expectations of robust support. “Mobile first” is the set paradigm for many application development teams. Unlike web deployment, an app release has to go through the review process via App Store Connect and Google Play. Minor and major releases follow the same app review process, which can take 1-4 days. Hot fixes and critical security patches are also bound by the review cycle restrictions. This may lead to service disruptions and negative app and customer reviews.

Let’s say that the latest version of an app is version 1.2. However, a critical bug was identified in version 1.1. The app developers may release version 1.3, but the challenge is that it may take a while to release the new version (unless a forced update mechanism is implemented in the app). Another potential challenge is that there is no guarantee the user has auto-updates turned on.

Luckily, “Over The Air” updates come to the rescue in such situations.

The Over The Air (OTA) deployment process for mobile apps allows developers to push updates without going through the traditional review process. The OTA update process enables faster delivery of any hot fix or patch.

While this is very exciting, it does come with a few limitations:

  • This feature is not intended for major updates or large feature launches.
  • OTA primarily works with the JavaScript bundle, so native code changes cannot be deployed via an OTA deployment.

Mobile OTA Deployment

React Native consists of JavaScript and native code. When the app is compiled, it creates the JS bundles for the Android and iOS apps along with the native builds. OTA relies on these JavaScript bundles, and hence React Native apps are great candidates for taking advantage of OTA update technology.

One of our clients’ apps had an OTA deployment process implemented using App Center. However, Microsoft has decided to retire App Center as of March 31, 2025, so we started exploring alternatives. One of the alternative solutions on the table was provided by App Center, and the other was to find a similar PaaS solution from another provider. Since our back-end stack was AWS, we chose to go with EAS Update.

EAS Update

EAS Update is a hosted service that serves updates for projects using the expo-updates library. Once EAS Update is configured correctly, the app will listen for any targeted version of the app on the EAS dev cloud server. Expo provides great documentation on setup and configuration.

How Does It Work?

In a nutshell;

  1. Integrate “EAS Update” into the app project.
  2. The user has the app installed on their device.
  3. The development team makes a bug fix/patch, generates the JS bundle for the targeted app version, and uploads it to the Expo.dev cloud server.
  4. The next time the user opens the app (the check frequency is configurable; we can set it to run on app resume or start), the app checks whether any bundle is available to be installed. If an update is available, the newer version of the app is installed from Expo on the user’s device (a minimal sketch of this check is shown below).
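The check in step 4 can also be triggered from application code. The snippet below is a minimal sketch using the expo-updates API; the function name and error handling are illustrative assumptions rather than the exact implementation used in our app.

// Minimal sketch: manually check for, download, and apply an EAS update.
import * as Updates from "expo-updates";

export async function applyOtaUpdateIfAvailable() {
  try {
    const update = await Updates.checkForUpdateAsync();
    if (update.isAvailable) {
      // Download the new JS bundle from the EAS Update server.
      await Updates.fetchUpdateAsync();
      // Restart the JS runtime so the new bundle takes effect.
      await Updates.reloadAsync();
    }
  } catch (error) {
    // Update checks fail in development builds or when the network is unavailable.
    console.warn("OTA update check failed:", error);
  }
}

In our app, this kind of check is wrapped in a modal prompt so the user confirms before the reload happens.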

Over The Air update process flow (OTA deployment process)

Additional details can be found at https://docs.expo.dev/eas-update/how-it-works/.

Implementation Details:

If you are new to React Native app development, this article may help Ramp Up On React/React Native In Less Than a Month. And if you are transitioning from React to React Native, you may find this React Native – A Web Developer’s Perspective on Pivoting to Mobile useful.

I am using my existing React Native 0.73.7 app; however, you can start a fresh React Native app for this test.

Project configuration requires us to set up expo-modules. The Expo installation guide provides an installer that handles the configuration. Our project needed the SDK 50 version of the installer.

  • Using npx install-expo-modules@0.8.1, I installed Expo SDK 50, in alignment with our current React Native version 0.73.7, which added the following dependencies.
"@expo/vector-icons": "^14.0.0",
"expo-asset": "~9.0.2",
"expo-file-system": "~16.0.9",
"expo-font": "~11.10.3",
"expo-keep-awake": "~12.8.2",
"expo-modules-autolinking": "1.10.3",
"expo-modules-core": "1.11.14",
"fbemitter": "^3.0.0",
"whatwg-url-without-unicode": "8.0.0-3"
  • Installed the expo-updates v0.24.14 package, which added the following dependencies.
"@expo/code-signing-certificates": "0.0.5",
"@expo/config": "~8.5.0",
"@expo/config-plugins": "~7.9.0",
"arg": "4.1.0",
"chalk": "^4.1.2",
"expo-eas-client": "~0.11.0",
"expo-manifests": "~0.13.0",
"expo-structured-headers": "~3.7.0",
"expo-updates-interface": "~0.15.1",
"fbemitter": "^3.0.0",
"resolve-from": "^5.0.0"
  • Created an Expo account at https://expo.dev/signup
  • To set up the account, execute eas configure
  • This generated the project id and other account details.
  • Following channels were created: staging, uat, and production.
  • Added the relevant project values to app.json, added Expo.plist, and updated the same in AndroidManifest.xml (a sample app.json fragment is shown after this list).
  • Scripts block of package.json has been updated to use npx expo to launch the app.
  • AppDelegate.swift was refactored as part of the change.
  • App Center and CodePush assets and references were removed.
  • Created a custom component to display a modal prompt when a new update is found.
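For reference, the updates-related portion of app.json typically looks like the fragment below. This is a hedged sketch: the project ID, runtime version, and checkAutomatically value are placeholders and assumptions, so use the values EAS generates for your own project.

{
  "expo": {
    "runtimeVersion": "7.13",
    "updates": {
      "enabled": true,
      "url": "https://u.expo.dev/your-project-id",
      "checkAutomatically": "ON_LOAD",
      "fallbackToCacheTimeout": 0
    },
    "extra": {
      "eas": {
        "projectId": "your-project-id"
      }
    }
  }
}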

OTA Deployment:

  • Execute the command via terminal:
EAS_CHANNEL=staging RUNTIME_VERSION="7.13" eas update --message "build:[QA] - 7.13.841 - 25.5.9.4 - OTA Test2 commit"
  • Once the package is published, I can see my update available in expo.dev as shown in the image below.

EAS Update screen once the OTA deployment is successful.

Test:

  1. Unlike App Center, Expo provides the same package for both iOS and Android targets.
  2. The targeted version package is available on the Expo server.
  3. An app restart or resume will display the popup (custom implementation) informing the user that “A new update is available.”
  4. When the user hits the “OK” button in the popup, the update is installed and the content within the app restarts.
  5. If the app restarts successfully, the update has been installed successfully.

Considerations:

  • In metro.config.js, the @rnx-kit/metro-serializer had to be commented out due to a compatibility issue with the EAS Update bundle process.
  • The @expo/vector-icons package causes the Android release build to crash on app startup. This package can be removed, but if package-lock.json is removed, the package will reinstall as an Expo dependency and again cause the app to crash. The issue is described in the comments here: https://github.com/expo/expo/issues/26521. There is no solution available at the moment. The Expo vector icons package isn’t being handled correctly during the build process, and the root cause is the react-native-elements package. When react-native-elements is removed, the font files are no longer added to app.manifest, and the app builds and runs as expected.
  • Somehow the font require statements in node_modules/react-native-elements/dist/helpers/getIconType.js are being picked up during the expo-updates generation of app.manifest even though the files are not used in our app. The current solution is to go ahead and include the fonts in the package, but this is not optimal. A better solution would be to filter those fonts out of the expo-updates process.

Deployment Troubleshooting:

  • Error fetching latest Expo update: Error: “channel-name” is not allowed to be empty.

The headers “expo-runtime-version”, “expo-channel-name”, and “expo-platform” are required. They can also be set with the query parameters “runtime-version”, “channel-name”, and “platform”. Learn more: https://github.com/expo/fyi/blob/main/eas-update-missing-headers.md

The configuration values for the iOS app are maintained in Supporting/Expo.plist. The above error indicates that the EXUpdatesRequestHeaders block in the plist might be missing.
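As a hedged illustration, a minimal Expo.plist fragment with the channel header set might look like the following; the URL, runtime version, and channel values are placeholders and should come from your own EAS project configuration.

<key>EXUpdatesURL</key>
<string>https://u.expo.dev/your-project-id</string>
<key>EXUpdatesRuntimeVersion</key>
<string>7.13</string>
<key>EXUpdatesRequestHeaders</key>
<dict>
  <key>expo-channel-name</key>
  <string>staging</string>
</dict>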

OTA deployment is very useful when a large number of customers are using the app and an urgent hot fix or patch needs to be released. You can set this up for your lower environments as well as production.

In my experience, it is very reliable, and the Expo team is doing a great job of maintaining it.

So take advantage of this amazing service and Happy coding!

 

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

]]>
https://blogs.perficient.com/2025/06/02/over-the-air-ota-deployment-process-for-mobile-app/feed/ 0 349211
Inventory Management 25B – Path to Redwood Experience 1.2.3. https://blogs.perficient.com/2025/05/30/inventory-management-25b-path-to-redwood-experience-1-2-3/ https://blogs.perficient.com/2025/05/30/inventory-management-25b-path-to-redwood-experience-1-2-3/#respond Fri, 30 May 2025 10:56:28 +0000 https://blogs.perficient.com/?p=382158

I have been writing about the Redwood Experience with Supply Chain Management, especially with Inventory Management. Oracle has gone all-in with the Redwood Experience in Inventory Management in 25B.

The 25B Inventory Management readiness documentation lists all the new features and how to use them, so I will not repeat that well-written document here: https://docs.oracle.com/en/cloud/saas/readiness/scm/25b/inv25b/index.html

For the previous features in Redwood, please consider visiting the Readiness documentation: https://docs.oracle.com/en/cloud/saas/readiness/scm-all.html

This page is my personal favorite since it makes features and the accompanying documentation easy to find.

1. Why?

You may be asking: why is Redwood so hot, and why do I have to transform?

If you are an Oracle customer or you have been in the Oracle space for a while (I have been in the space for almost three decades), you know that once Oracle sets a vision and starts delivering new technology, it becomes the future. We witnessed this when Oracle moved the business applications from 10.7 character mode to 10SC (Smart Client) and 10NCA (Network Computing Architecture). We went from character mode to a GUI. It wasn’t easy or quick, but it happened. Then we moved through major releases in EBS and got used to the Self-Service architecture.

Oracle delivered the Fusion Applications a long time ago, and we have witnessed each quarterly release add more functionality. Since 2024, Oracle has been improving the user interface and adding mobility to the Inventory Management pages, but the most radical improvements have happened in 25A and 25B. Now, almost 100% of Inventory Management is in Redwood, and it is the next generation of Cloud applications.

Redwood brings better usability and a better user interface, as I explained in my past blog https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/, but it also opens the door for Artificial Intelligence (AI).

Oracle is expected to release major AI improvements in 25C, which I plan to cover in a future blog. The Redwood Experience is a prerequisite for all of this cool AI technology to work. Agentic AI features, or AI Agents, will be part of the Fusion Applications, which is a topic for another blog.

So, while the majority of the screens are optional, why not get ahead of the game and start adopting?

2. How

You may be asking: what actions do I need to take to use Redwood?

Read the documentation. In Customer Connect, we are seeing many questions from the Oracle Community about Redwood pages not populating items or screens coming up blank. Please see this documentation for the important considerations:

https://docs.oracle.com/en/cloud/saas/readiness/scm/25a/inv25a/25A-inventory-wn-t65792.htm

By the way, if you have not registered for Oracle Customer Connect, I highly recommend it, so you can get in contact with your peer Oracle Community members and Oracle ACEs like myself who can respond to your questions: https://community.oracle.com/customerconnect/

Then please see the Profile options for the new features. You will have to flip the profile options at site level from No to Yes, so that the features are enabled.

The documents I previously mentioned list the profile option names; to navigate, use the task bar in the Functional Setup Manager and search for Manage Administrative Profile Values.

3. What

You may be asking: which Redwood pages should I use first?

Adoption is critical when changing the user experience, and change management becomes critical when migrating from the traditional Cloud pages to the newly designed Redwood pages. What I would recommend is to first enable the configuration pages, so that the internal Oracle team and business analysts get a feel for the Redwood Experience.

Then there are a few pages that can be beneficial for users, which I mentioned in my prior blog: https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/

One bold move is to flip all features to Redwood and start testing internally first in a lower pod. Oracle has designed this so companies have time to take on as much as they can over an as-yet-unspecified period of time. As of today, Oracle has not announced when the Redwood Experience will be mandatory. Most pages can be switched back and forth, but please read each feature’s release note to see if there is a note that explicitly says that once it is turned on, there is no path to go back.

In conclusion, the future of Oracle Fusion Applications is in the Redwood Experience and built-in AI, so I recommend that you start adopting and using it.

Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.

]]>
https://blogs.perficient.com/2025/05/30/inventory-management-25b-path-to-redwood-experience-1-2-3/feed/ 0 382158