In this blog, we’ll walk through how to use Eventstream in Microsoft Fabric to capture events triggered by Power Automate and store them in a Lakehouse table. Whether you’re building dashboards, triggering insights, or analyzing user interactions, this integration provides a powerful way to bridge business logic with analytics.
Start by creating a Power Automate flow. Here’s what your flow should look like:
You can choose any input you like. In this example, we’re using “Name”.
Add the Send event action to your flow.
Leave the flow as it is for now and move on to Microsoft Fabric.
Go to https://app.fabric.microsoft.com, create a new Workspace (name it as you wish), then:
In the Eventstream, choose Custom Endpoints.
After you publish the Eventstream, the custom endpoint input is updated with its connection details.
Go to Details, then to SAS Key Authentication, and copy the Event Hub Name and Primary Connection String.
Return to Power Automate and:
Click Save to complete the connection.
Make sure to enter the data you want to pass (e.g., Name) in the Content field.
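For example, the Content field can carry a small JSON payload. The attribute names below are illustrative assumptions; use whatever your flow actually collects:

{
  "Name": "Contoso User",
  "SubmittedAt": "2025-06-01T12:00:00Z"
}

Eventstream passes this payload through to whatever destination you configure, such as the Lakehouse table.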
And that’s it! Your data will now be stored in the Lakehouse.
Connecting Power Automate to Microsoft Fabric using Eventstream provides a robust and efficient solution for real-time data integration. Looking ahead, this setup can be extended to include:
This unlocks deeper insights and intelligent automation across business processes.
Wealth management firms are under pressure to deliver more—faster, smarter, and with greater precision. The conversations we’re hearing aren’t about what might happen in the future, but rather about what needs to happen now.
Earlier this year, we published 5 Leading Digital Trends Shaping Wealth Management in 2025, outlining the macro shifts redefining the industry. After an energizing few days at BNY INSITE25, those trends came into sharper focus, and we saw firsthand how firms are beginning to operationalize these shifts—translating insight into action.
Following the event, we sat down with Ken Fishman and John Galifi to reflect on the conversations, insights, and key themes that emerged. The dialogue offered a clear view into where the wealth and asset management industry is headed and how firms are beginning to act on the trends shaping 2025.
While the trends are clear, the path to execution is complex. At BNY INSITE25, several strategic themes emerged that reflect how firms are navigating this complexity. From democratizing access to alternatives to building AI-enabled workflows, these themes reveal where the industry is placing its bets—and how firms are aligning people, processes, and platforms to deliver on the promise of digital transformation.
The focus is shifting toward scalable strategies that drive growth, enhance personalization, and strengthen operational resilience by turning vision into measurable outcomes.
Platforms like Wove and iCapital are transforming how advisors access and deliver alternative investments. These tools are helping to enable greater portfolio personalization and open new distribution channels for asset managers. But with this innovation comes complexity, particularly around data aggregation, governance, and reporting.
As Ken and John noted, the firms that succeed will be those that can scale personalization without sacrificing operational integrity.
Explore More: Future-Proof Your Tech Investment
The AI conversation has matured. It’s no longer about experimentation, it’s about enablement. From automating marketing tear sheets to navigating regulatory complexity, firms are embedding AI into workflows to drive productivity while keeping humans in the loop.
Zico Kolter’s keynote speech emphasized the importance of intentionality in AI adoption. The focus now is on scaling with purpose—ensuring AI enhances the advisor experience rather than complicating it.
You May Also Enjoy: Transform Your Business With Cutting-Edge AI and Automation Solutions
WealthTech innovation is empowering advisors to focus on what matters most: client relationships. But selecting and implementing the right tech stack is more complex than ever. Firms must align technology with business strategy and execute with precision.
At BNY INSITE25, conversations with leaders underscored the importance of this alignment. Whether it’s data lineage, self-service entitlements, or advisor enablement, the message was clear: technology must serve the business, not the other way around.
Success In Action: At the Heart of Financial Services
While the mainstage themes dominated the spotlight, one of the most pressing challenges surfaced in side conversations: advisor onboarding.
Unlike client onboarding, which is largely internal and process-driven, advisor onboarding is a multifaceted challenge involving legal, regulatory, and operational complexity. From licensing and credentialing to technology enablement and book of business transitions, the process is often a bottleneck for growth, especially for RIAs and broker-dealers looking to scale.
Firms are actively searching for solutions to this pain point. And for good reason: a poorly executed onboarding process can lead to compliance risks, client attrition, and advisor disengagement.
Success In Action: Speeding Insights and Powering Investment Experiences
A modern advisor onboarding strategy is critical for scaling growth, ensuring compliance, and delivering a seamless advisor and client experience. As highlighted in our 2025 Wealth Management Trends, Client Advisor Empowerment was our #2 trend, and this framework is a direct reflection of that insight.
Empowered advisors need more than tools; they need a frictionless start.
A well-designed onboarding experience ensures they’re equipped from day one to deliver high-quality, personalized service efficiently and confidently.
Here are the 12 foundational components every firm should consider:
BNY INSITE25 reinforced what we’ve long believed: the future of wealth and asset management is digital, data-driven, and deeply human. It’s about using technology to enhance the advisor-client relationship, and solve the real, often overlooked challenges that stand in the way of growth.
We empower wealth and asset managers with proactive insights, hyper-personalized experiences, and proactive risk management to drive sustainable growth.
Discover why we have been trusted by 16 of the 20 largest wealth management firms. Explore our financial services expertise and contact us to learn more.
In the first part of this series, we covered how to connect a mobile app to Marketing Cloud Personalization using Salesforce’s Mobile SDK. In this post, we’ll explore how to send catalog items from the mobile app to your dataset.
Since the last post, I made some changes to the app. The app now connects to a free NASA API and takes information from the Mars Rover Photos endpoint. This endpoint returns an array of images taken on a specific Earth date; for demo purposes, I’m only using the first record in that array. The API is designed to collect image data gathered by NASA’s Curiosity, Opportunity, and Spirit rovers on Mars and make it easily available to developers, educators, and citizen scientists.
The app has two different views, the main view and the display image view. In the main view, the user picks the Earth’s date and the app sends it to the API to retrieve a picture. The second view displays the picture along with some information (see image below). The goal here is to send the item (the picture and its information) to Personalization.
Marketing Cloud Personalization provides an Event API that sources use to send event data to the platform, where the event pipeline processes it. You can then return campaign data that can be served to the end user. However, developers cannot use this API to handle mobile application events.
“The Personalization Mobile SDK has separate functionality used for mobile app event processing, with built-in features that are currently unavailable through the Event API.”
So, we can rule out the Event API for this use case.
Tracking items is an important part of any Personalization implementation. Configure the catalog object so that it logs an event when a user views a product. For example, imagine you have an app that sells the sweaters you knit. You want to know how many users view the “Red and Blue Sweater” product. With that information, you can promote other products those users might like, so they will be more likely to buy from you.
There are two ways to track items and actions. You can track catalog objects like Products, Articles, Blogs, and Categories (the main catalog objects in Personalization), and you can also add Tags as related catalog objects for the ones named before.
You can also track actions like AddToCart, RemoveFromCart, and Purchase.
In order to process the catalog and item data we are going to send from our mobile application, we need to activate the Process Item Data from Native Mobile Apps setting. This option makes it possible for Personalization to process the data; by default, Personalization ignores all the mobile catalog data it receives.
To activate this functionality, hover over Settings > General Setup > Advanced Options > Activate Process Item Data from Native Mobile Apps.
The SDK currently works with Products, Articles, and Blogs; these are called Items. You can track purchases, comments, or views for them, and relate them to other catalog objects like Brand, Category, and Keyword.
The following methods are used to track users viewing an item or the detail of an item. The web equivalents of these methods are SalesforceInteractions.CatalogObjectInteractionName.ViewCatalogObject and SalesforceInteractions.CatalogObjectInteractionName.ViewCatalogObjectDetail.
These methods track when a user views an item. Personalization will automatically track the time spent viewing the item while the context, app, and user are active. The item will remain the one viewed until this method or viewItemDetail is called again (see the documentation here). The second method has an actionName parameter that supplies a different action name to distinguish this View Item interaction.
evergageScreen?.viewItem(_ item: EVGItem?)
evergageScreen?.viewItem(_ item: EVGItem?, actionName: String?)
View Item Interaction in the Event Stream:
View Item interaction using the actionName parameter:
EVGItem is an abstract base class. An item is something in the app that users can view or otherwise engage with. Classes like EVGProduct or EVGArticle inherit from this class.
The question mark at the end of String and EVGItem means that the value is optional and can be nil; the latter can happen to the EVGItem if it has some invalid value.
These methods track details of a user viewing an item, such as looking at other product images or opening the specifications tab. Personalization will automatically track the time spent viewing the item while the context, app, and user are active. The item will remain the one viewed until this method or viewItem: is called again. The second method has an actionName parameter that supplies a different action name to distinguish this View Item Detail interaction.
evergageScreen?.viewItemDetail(_ item: EVGItem?)
evergageScreen?.viewItemDetail(_ item: EVGItem?, actionName: String?)
View Item Detail interaction in the Event Stream:
View Item Detail interaction using the actionName parameter:
Now we have to define those EVGItem objects with the actual catalog object we want to track: Blog, Category, Article, or Product.
By definition, a Product is an item that a business can sell to users. Products can be added to EVGLineItem objects when they have been ordered by the user.
We have a group of initializers we can use to create an Evergage product and send it back to Personalization. The EVGProduct class has a variety of methods; in this post I will show the most relevant ones.
Something important to remember is that in order to use classes like EVGProduct or EVGArticle, we need to import the Evergage library.
The most basic of them all: we just need to pass the ID of the product and that’s it. This can be useful if we don’t want to provide too much information.
evergageScreen?.viewItem(EVGProduct.init(id: "p123"))
Builds an EVGProduct, including many of the commonly used fields. This constructor uses the id, name, price, url, imageUrl, and description fields of the Product catalog object.
As a reminder, I’m building my Product catalog object using the images from the Mars Rover Photos API, along with some other attributes we got in the response.
For this constructor, the values I’m sending in the parameters are:
All I have to do is pass the new item using any of the methods we use to track item data.
let item : EVGItem = EVGProduct.init(id: String(id), name: name, price: 10, url: url, imageUrl: imageUrl, evgDescription: "This is a photo taken from \(roverName). Earth Date: \(earthDate). Landing Date: \(landingDate). Launch Date: \(launchDate)")
evergageScreen?.viewItemDetail(item)
The item declaration is correct since EVGProduct inherits from EVGItem.
After populating the information, the catalog object will look like this inside Marketing Cloud Personalization:
As the name says, it creates an EVGProduct from the provided JSON dictionary. A JSON dictionary is a collection of key-value pairs in the form [String : Any] to which you add attributes from the Product catalog object.
let productDict : [String : Any] = [
    "_id": String(id),
    "url": url,
    "name": name,
    "imageUrl": imageUrl,
    "description": "This is a photo taken from \(roverName). Earth Date: \(earthDate). Landing Date: \(landingDate). Launch Date: \(launchDate)",
    "price": 10,
    "currency": "USD",
    "inventoryCount": 2
]
let itemJson: EVGItem? = EVGProduct.init(fromJSONDictionary: productDict)
evergageScreen?.viewItemDetail(itemJson, actionName: "User did specific action")
You can then initialize the EVGProduct object with the constructor that takes the fromJSONDictionary parameter. The last step is to send the action with the viewItemDetail method.
This is how the record should look after creation in the dataset.
This is how our class will look with the methods to send the item interactions.
Imagine you also want to set user attributes to send to Personalization, like first name, last name, email address, or zip code. All you need to do is use the setUserAttribute method inside the AppDelegate class or after the user logs in. We used this class to pass the ID of the user and to set the datasetID.
After the user logs in, you can pass the information you need to Personalization. The setUserAttribute:forName: method sets an attribute (a name/value pair) on the user. The next event will send the new value to the Personalization dataset.
evergage.setUserAttribute("attributeValue", forName: "attributeName")

//Following the example
evergage.userId = evergage.anonymousId
evergage.setUserAttribute("Raul", forName: "firstName")
evergage.setUserAttribute("Juliao", forName: "lastName")
evergage.setUserAttribute("raul@gmail.com", forName: "emailAddress")
evergage.setUserAttribute("123456", forName: "zipCode")
The set attributes event:
The Customer’s Profile view
To wrap things up, setting up Articles, Blogs, and Categories works pretty much the same way as setting up Products. The structure stays consistent—you just have to keep in mind that each one belongs to a different class, so you’ll need to tweak things slightly depending on what you’re working with.
That said, one big limitation to note is that you can’t send custom attributes in catalog objects, even if you try using the JSON dictionary method. I tested a few different approaches, and unfortunately, it only supports the default attributes.
Also, the documentation doesn’t really go into detail about using other types of catalog objects outside of Articles, Blogs, Products, and Categories. It’s unclear if custom catalog objects are supported at all through the mobile SDK, which makes things a bit tricky if you’re looking to do something more advanced.
In part 3, we are going to take a look at how to set up push notifications and mobile campaigns.
Karate, according to Karate Labs, is the only open-source tool that unifies API test automation, mocks, performance testing, and UI automation into a single framework. Using Behavior Driven Development (BDD) syntax enables easy scenario writing, even for non-programmers. With built-in assertions, a reporting mechanism, and parallel test execution, Karate streamlines project development and maintenance by offering compile-free, readable code.
| Feature | Rest-Assured | Karate |
| --- | --- | --- |
| Plain Text | No | Yes |
| Parallel Execution | Partial | Yes |
| Data Driven Testing | Not built in | Built in |
| Feature | Cucumber | Karate |
| --- | --- | --- |
| Built-in Step Definitions | No | Yes |
| Parallel Execution | No | Yes |
| Re-use feature files | No | Yes |
For a more detailed comparison, visit Karate VS RestAssured
Karate is worth adopting because it unifies API, UI, mock‑service and performance testing in a single, low‑code framework while remaining fast, readable, and easy for both testers and developers to maintain. Its domain-specific language (DSL) enables even non-Java teams to write plain-text scenarios, while still integrating smoothly with Java and CI/CD pipelines.
Karate is the only open-source tool that combines API automation, UI automation (via a Selenium-free engine), service virtualization mocks, and Gatling-powered performance testing in one framework, eliminating the need for multiple tools.
Within a single feature file, you can switch from calling a REST endpoint to driving a browser, enabling true end‑to‑end scenarios without context‑switching or extra libraries.
Karate lets you reuse functional API tests as Gatling load tests, saving the effort of rewriting user flows in a separate performance tool.
Tests are written in a Gherkin‑like syntax that hides Java boilerplate; glue code is unnecessary, lowering the barrier for non‑programmers.
Because feature files are plain text and do not need compilation, developers iterate faster than with code‑heavy libraries like Rest Assured.
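As a minimal illustration of that syntax, here is a complete Karate feature file; the public placeholder endpoint is used purely as an example:

Feature: sample user lookup

  Scenario: fetch a user and assert on the response
    Given url 'https://jsonplaceholder.typicode.com/users/1'
    When method get
    Then status 200
    And match response.name == '#string'
    And match response.id == 1

Every step above is built into Karate, so no Java glue code or step definitions are required.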
Karate ships with powerful JSON/XML matchers and generates rich HTML reports out of the box, so teams spend zero time wiring external assertion or reporting frameworks.
Parallel execution is built‑in; benchmarks show Karate tests often run faster than equivalent Rest Assured suites, which matters when suites grow large.
No Java prerequisite: Business testers can contribute directly, improving coverage and shared understanding.
Single truth of test logic: API specs, functional checks, mocks, and load profiles live in one place, reducing duplication and drift.
CI/CD ready: Karate runs via JUnit/TestNG and generates standard reports that integrate seamlessly with Jenkins, GitHub Actions, Azure DevOps, and other platforms, eliminating the need for plugins.
| Scenario | Why Karate Helps |
| --- | --- |
| Green‑field API project | Rapid authoring & mocks speed up backend‑frontend co‑development |
| Microservices with contract testing | DSL assertions keep contracts readable; mocks isolate services |
| Teams with mixed skill levels | Non‑coders write tests; engineers extend with Java only when needed |
| Need one tool for API + UI | Avoids juggling Selenium/WebDriver + Rest Assured |
Karate’s power comes from its opinionated DSL—teams needing highly customised Java code or advanced XML handling may prefer lower‑level libraries.
Karate is great for quick, readable API tests, but it has limitations in IDE support, type safety, UI complexity, and community resources. For more advanced scenarios, you may need to combine it with other tools or use more code-centric frameworks.
Eclipse is an Integrated Development Environment (IDE) widely used for Java programming. It serves as a robust platform for developing and managing Karate projects.
Maven is a build automation tool primarily used for Java projects. It facilitates setting up a Karate environment and managing project dependencies. To configure Eclipse with Maven, you can follow the instructions for Maven installation here.
To use Karate with Maven, you’ll need to include the following dependencies in your pom.xml.
<dependencies>
  <dependency>
    <groupId>com.intuit.karate</groupId>
    <artifactId>karate-apache</artifactId>
    <version>0.9.6</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>com.intuit.karate</groupId>
    <artifactId>karate-junit4</artifactId>
    <version>0.9.6</version>
    <scope>test</scope>
  </dependency>
</dependencies>
Note: The latest versions of these dependencies may be available in the Maven repository.
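With the karate-junit4 dependency in place, a minimal runner class is enough to execute every feature file in the same package; the class name here is an arbitrary example:

import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

// Runs all *.feature files found in this class's package.
@RunWith(Karate.class)
public class SampleTestRunner {
}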
If you want to enable Cucumber reporting, the following dependency also needs to be added.
<dependency>
  <groupId>net.masterthought</groupId>
  <artifactId>cucumber-reporting</artifactId>
  <version>5.3.0</version>
</dependency>
You’ll need to set up the JDK (Java Development Kit) and JRE (Java Runtime Environment) on your system to start working with Karate Framework scripts.
With this, we are all set to start creating the Karate framework.
This overview highlights the advantages of the Karate Framework for API testing, offering a simpler and more accessible alternative to other tools, such as Rest-Assured, by reducing the need for advanced programming knowledge and offering powerful built-in features.
Adopting Karate can reduce your test tool stack, speed up automation, and make quality a shared responsibility across technical and non‑technical roles. By covering functional, load, and even UI tests with the same syntax, teams gain faster feedback, simpler maintenance, and a smoother path to continuous delivery.
Microsoft Copilot is a revolutionary AI-powered tool for Power Platform, designed to streamline the development process and enhance the intelligence of your applications. This learning path will take you through the fundamentals of Copilot and its integration with Power Apps, Power Automate, Power Virtual Agents, and AI Builder.
Copilot in Microsoft Power Platform helps app makers quickly solve business problems. A copilot is an AI assistant that can help you perform tasks and obtain information. You interact with a copilot by using a chat experience. Microsoft has added copilots across the different Microsoft products to help users be more productive. Copilots can be generic, such as Microsoft Copilot, and not tied to a specific Microsoft product. Alternatively, a copilot can be context-aware and tailored to the Microsoft product or application that you’re using at the time.
Microsoft Power Platform has several copilots that are available to makers and users.
Use this copilot to help create a canvas app directly from your ideas. Give the copilot a natural language description, such as “I need an app to track my customer feedback.” Afterward, the copilot offers a data structure for you to iterate until it’s exactly what you need, and then it creates pages of a canvas app for you to work with that data. You can edit this information along the way. Additionally, this copilot helps you edit the canvas app after you create it. Power Apps also offers copilot controls for users to interact with Power Apps data, including copilots for canvas apps and model-driven apps.
Use this copilot to create automation that communicates with connectors and improves business outcomes. This copilot can work with cloud flows and desktop flows. Copilot for Power Automate can help you build automation by explaining actions, adding actions, replacing actions, and answering questions.
Use this copilot to describe and create an external-facing website with Microsoft Power Pages. As a result, you have theming options, standard pages to include, and AI-generated stock images and relevant text descriptions for the website that you’re building. You can edit this information as you build your Power Pages website.
You can create a copilot by using a language model, which is like a computer program that can understand and generate human-like language. A language model can perform various natural language processing tasks based on a deep-learning algorithm. The massive amounts of data that the language model processes can help the copilot recognize, translate, predict, or generate text and other types of content.
Despite being trained on a massive amount of data, the language model doesn’t contain information about your specific use case, such as the steps in a Power Automate flow that you’re editing. The copilot shares this information for the system to use when it interacts with the language model to answer your questions. This context is commonly referred to as grounding data. Grounding data is use case-specific data that helps the language model perform better for a specific topic. Additionally, grounding data ensures that your data and IP are never part of training the language model.
Consider the various copilots in Microsoft Power Platform as specialized assistants that can help you become more productive. Copilot can help you accelerate solution building in the following ways:
Prototyping is a way of taking an idea that you discussed with others or drew on a whiteboard and building it in a way that helps someone understand the concept better. You can also use prototyping to validate that an idea is possible. For some people, having access to your app or website can help them become a supporter of your vision, even if the app or website doesn’t have all the features that they want.
Building on the prototyping example, you might need inspiration on how to evolve the basic prototype that you initially proposed. You can ask Copilot for inspiration on how to handle the approval of which ideas to prioritize. Therefore, you might ask Copilot, “How could we handle approval?”
By using a copilot to assist in your solution building in Microsoft Power Platform, you can complete more complex tasks in less time than if you do them manually. Copilot can also help you complete small, tedious tasks, such as changing the color of all buttons in an app.
While building an app, flow, or website, you can open a browser and use your favorite search engine to look up something that you’re trying to figure out. With Copilot, you can learn without leaving the designer. For example, your Power Automate flow has a step to List Rows from Dataverse, and you want to find out how to check if rows are retrieved. You could ask Copilot, “How can I check if any rows were returned from the List rows step?”
Knowing the context of your flow, Copilot would respond accordingly.
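For reference, the kind of answer you’d expect involves an expression along these lines; the step name List_rows is an assumption and must match the actual name of your List rows action:

length(outputs('List_rows')?['body/value'])

Comparing this value to 0 in a Condition action tells the flow whether the Dataverse query returned any rows.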
Copilot can be a powerful way to accelerate your solution-building. However, it’s the maker’s responsibility to know how to interact with it. That interaction includes writing prompts to get the desired results and evaluating the results that Copilot provides.
While asking Copilot to “Help me automate my company to run more efficiently” seems ideal, that prompt is unlikely to produce useful results from Microsoft Power Platform Copilots.
Consider the following example, where you want to automate the approval of intake requests. Without significant design thinking, you might use the following prompt with Copilot for Power Automate.
“Create an approval flow for intake requests and notify the requestor of the result.”
This prompt produces the following suggested cloud flow.
While the prompt is an acceptable start, you should consider more details that can help you create a prompt that might get you closer to the desired flow.
A good way to improve your success is to spend a few minutes on a whiteboard or other visual design tool, drawing out the business process.
A prompt should include as much relevant information as possible. Each prompt should include your intended goal, context, source, and outcome.
When you’re starting to build something with Microsoft Power Platform copilots, the first prompt that you use sets up the initial resource. For Power Apps, this first prompt is to build a table and an app. For Power Automate, this first prompt is to set up the trigger and the initial steps. For Power Pages, this first prompt sets up the website.
Consider the previous example and the sequence of steps in the sample drawing. You might modify your initial prompt to be similar to the following example.
“When I receive a response to my Intake Request form, start and wait for a new approval. If approved, notify the requestor saying so and also notify them if the approval is denied.”
You can iterate with your copilot. After you establish the context, Copilot remembers it.
The key to starting to build an idea with Copilot is to consider how much to include with the first prompt and how much to refine and add after you set up the resource. Knowing this key consideration is helpful because you don’t need to get a perfect first prompt, only one that builds the idea. Then, you can refine the idea interactively with Copilot.
Copilot enables developers to write Power Fx formulas using natural language. For instance, typing /subtract datepicker1 from datepicker2 in a label control prompts Copilot to generate the corresponding formula, such as DateDiff(DatePicker1.SelectedDate, DatePicker2.SelectedDate, Days). This feature simplifies formula creation, especially for those less familiar with coding.
By integrating Copilot with AI Builder, users can automate the extraction of data from documents, such as invoices or approval forms. For example, Copilot can extract approval justifications and auto-generate emails for swift approvals within Outlook. This process streamlines workflows and reduces manual data entry.
Copilot assists users in creating automated workflows by interpreting natural language prompts. For example, a user can instruct Copilot to “Create a flow that sends an email when a new item is added to SharePoint,” and Copilot will generate the corresponding flow. This feature accelerates the automation process without requiring extensive coding knowledge.
In Power Apps Studio, Copilot allows developers to build and edit apps using natural language commands. For instance, typing “Add a button to my header” or “Change my container to align center” enables Copilot to execute these changes, simplifying the development process and making it more accessible.
Copilot facilitates the creation of conversation topics in Power Virtual Agents by generating them from natural language descriptions. For example, describing a topic like “Customer Support” prompts Copilot to create a topic with relevant trigger phrases and nodes, streamlining the bot development process.
Copilot assists in building websites by interpreting natural language descriptions. For example, stating “Create a homepage with a contact form and a product gallery” prompts Copilot to generate the corresponding layout and components, expediting the website development process.
| Limitation | Description | Example |
| --- | --- | --- |
| 1. Limited understanding of business context | Copilot doesn’t always understand your specific business rules or logic. | You ask Copilot to "generate a travel approval form," but your org requires approval from both the team lead and HR. Copilot might only include one level of approval. |
| 2. Restricted to available connectors and data | Copilot can only access data sources that are already connected in your app. | You ask it to "show top 5 sales regions," but haven’t connected your Sales DB — Copilot can't help unless that connection is preconfigured. |
| 3. Not fully customizable output | You might not get exactly the layout, formatting, or logic you want — especially for complex logic. | Copilot generates a form with 5 input fields, but doesn't group them or align them properly; you still need to fine-tune it manually. |
| 4. Model hallucination (AI guessing wrong info) | Like other LLMs, Copilot may “guess” when unsure — and guess incorrectly. | You ask Copilot to create a formula for filtering “Inactive users,” and it writes a filter condition that doesn’t exist in your dataset. |
| 5. English-only or limited language support | Most effective prompts and results come in English; support for other languages is limited or not optimized. | You try to ask Copilot in Hindi, and it misinterprets the logic or doesn't return relevant suggestions. |
| 6. Requires clean, named data structures | Copilot struggles when your tables/columns aren't clearly named. | If you name a field fld001_status instead of Status, Copilot might fail to identify it correctly or generate unreadable code. |
| 7. Security roles not respected by Copilot | Copilot may suggest features that would break your security model if implemented directly. | You generate a data view for all users, but your app is role-based — Copilot won’t automatically apply row-level security filters. |
| 8. No support for complex logic or multi-step workflows | It’s good at simple flows, but not for things like advanced branching, looping, or nested conditions. | You ask Copilot to automate a 3-level approval chain with reminder logic and escalation — it gives a very basic starting point. |
| 9. Limited offline or disconnected use | Copilot and generated logic assume you’re online. | If your app needs to work offline (e.g., for field workers), Copilot-generated logic may not account for offline sync or local caching. |
| 10. Only works inside Microsoft ecosystem | Copilot doesn’t support 3rd-party AI tools natively. | If your company uses Google Cloud or OpenAI directly, Copilot won’t connect unless you build custom connectors or use HTTP calls. |
Knowing how to best interact with the copilot can help get your desired results quickly. When you’re communicating with the copilot, make sure that you’re as clear as you can be with your goals. Review the following dos and don’ts to help guide you to a more successful copilot-building experience.
To have a more successful copilot building experience, do the following:
Copilot in Microsoft Power Platform marks a major step forward in making low-code development truly accessible and intelligent. By enabling users to build apps, automate workflows, analyze data, and create bots using natural language, it empowers both technical and non-technical users to turn ideas into solutions faster than ever.
It transforms how people interact with technology by:
With built-in security, compliance with organizational governance, and continuous improvements from Microsoft’s AI advancements, Copilot is not just a tool—it’s a catalyst for transforming how organizations solve problems and deliver value.
As AI continues to evolve, Copilot will play a central role in democratizing software development and helping organizations move faster and smarter with data-driven, automated tools.
Single Sign-On (SSO) is a crucial part of modern web applications, enabling users to authenticate once and access multiple systems securely. If your organization uses Salesforce as an Identity Provider (IdP) and Drupal as a Service Provider (SP), you can establish a secure SSO connection using the SAML protocol.
In this blog, we’ll walk through how to integrate Drupal with Salesforce for SSO using the SAML Authentication module. We’ll also explore how to dynamically sync user data—like first name, last name, company, and roles—from Salesforce into Drupal during login.
Prerequisites
Before starting, ensure you have the following:
Step 1: Install the SAML Authentication Module in Drupal
You can install the module via Composer:
composer require drupal/saml_auth
Then enable it using Drush or through the Drupal admin interface:
drush en saml_auth
Dependencies (like simplesamlphp) may need to be managed manually or via the simplesamlphp_auth module if you prefer a different approach.
Step 2: Configure Salesforce as an Identity Provider (IdP)
Step 3: Configure the SAML Authentication Module in Drupal
Navigate to: Admin → Configuration → People → SAML Authentication Settings (/admin/config/people/saml)
Fill in the settings:
Step 4: Dynamic User Synchronization
By default, SAML Authentication handles user login and account creation, but we extended this with custom logic to map additional attributes from Salesforce into the Drupal user profile.
Salesforce sends additional user information in the SAML assertion, including:
We’ve extended the default SAML authentication behavior with a custom hook or event subscriber to:
This ensures that user accounts are fully provisioned and kept up-to-date every time a user logs in through SSO.
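As a rough sketch of what that custom logic can look like, the hook below uses hook_user_login from Drupal core. The attribute keys, field machine names, and the service that exposes the SAML assertion are all assumptions that depend on your module and IdP configuration:

<?php

/**
 * Implements hook_user_login().
 *
 * Sketch: copies SAML assertion attributes onto the Drupal account at login.
 * Attribute keys and field names below are hypothetical examples.
 */
function mymodule_user_login(\Drupal\user\UserInterface $account) {
  // Assumption: your SAML module exposes the last assertion's attributes,
  // e.g. through its own service or a user-sync event. Swap in the real source.
  $attributes = \Drupal::service('mymodule.saml_attributes')->getAttributes();

  if (!empty($attributes['FirstName'][0])) {
    $account->set('field_first_name', $attributes['FirstName'][0]);
  }
  if (!empty($attributes['LastName'][0])) {
    $account->set('field_last_name', $attributes['LastName'][0]);
  }

  $account->save();
}

If your module dispatches a dedicated user-sync event, an event subscriber is the cleaner place for this mapping, since it runs before the account is first saved.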
Step 5: Test the SSO Flow
Check that:
If there’s an error, enable debugging logs and inspect the SAML response and assertion for mismatches.
Conclusion
Integrating Salesforce with Drupal using the SAML Authentication module enables a seamless and secure SSO experience. This is particularly useful for organizations using Salesforce as a central identity system. With proper configuration, users can enjoy frictionless access to your Drupal site while benefiting from Salesforce’s authentication infrastructure.
In today’s cloud-first world, building a secure, high-performing, resilient, and efficient infrastructure is more critical than ever. That’s where the AWS Well-Architected Framework comes in: a powerful guide designed to help architects and developers make informed decisions and build better cloud-native solutions.
The AWS Well-Architected Framework provides a consistent approach for evaluating and improving your cloud architecture. It’s built around six core pillars that represent key areas of focus for building robust and scalable systems:
Explore the AWS Well-Architected Framework here https://aws.amazon.com/architecture/well-architected
From time to time, AWS makes changes to the framework and introduces new resources that we can follow to better apply it to our use cases and achieve a better architecture.
To help you apply these principles, AWS offers the Well-Architected Tool—a free service that guides you through evaluating your workloads against the six pillars.
Try the AWS Well-Architected Tool here https://aws.amazon.com/well-architected-tool/
Cost Optimization is not just about cutting costs—it’s about maximizing value. It ensures that your cloud investments align with business goals and scale efficiently.
Why It Matters:
The AWS Well-Architected Framework is more than a checklist—it’s a mindset. By embracing its principles, especially cost optimization, you can build cloud environments that are not only efficient and scalable but also financially sustainable.
In an age of technological advancements happening almost every minute, upgrading a business is essential to surviving competition and offering a customer experience beyond expectations, all while deploying fewer resources to derive value from any process or business.
Platform upgrades, software upgrades, security upgrades, architectural enhancements, and so on are required to ensure stability, agility, and efficiency.
Customers prefer to move from legacy systems to the cloud because of what it offers. Across cost, monitoring, maintenance, operations, ease of use, and landscape, the cloud has transformed D&A businesses significantly over the last decade.
The move from Informatica PowerCenter to IDMC has been perceived as the need of the hour due to the humongous advantages it offers. Developers must understand both flavors to perform this code transition effectively.
This post explains the PWC vs IDMC CDI gaps from different perspectives.
While performing PWC to IDMC conversions, the following Development and Operations workarounds will help avoid rework and save effort, thereby achieving customer satisfaction in delivery.
Every end product must meet and exceed customer expectations. For a successful delivery, it is not just about doing what matters, but also about how it is done by following and implementing the desired standards.
This post outlines the best practices to consider with IDMC CDI ETL during the following phases.
In addition to coding best practices, following these Development and Operations best practices will help avoid rework and save efforts, thereby achieving customer satisfaction with the Delivery.
With the advancement of technology, machine learning, and AI capabilities in the customer care space, customer expectations are evolving faster than ever before. Customers expect smoother, context-aware, personalized, and generally more effective and faster experiences across channels when contacting a support center.
This calls for a need to revisit and redefine the success metrics for a Contact Center as a Service (CCaaS) strategy.
Let’s break this down into two categories. The first category includes key metrics that are still essential to measure, though the standards for these metrics have been raised and the way they are measured has evolved. The second category introduces new metrics that are emerging because of advanced CCaaS capabilities in a modern contact center landscape.
Customer Satisfaction (CSAT) remains a cornerstone success metric. Every improvement a customer service center is looking to make, from improving operational efficiencies to enhancing agent and customer experience, will directly or indirectly impact the customer and is aimed at elevating the customer experience. With automated personalized journeys an important part of modern customer service, it is important to monitor real-time analytics on automated journeys in addition to live agent interactions. This helps you better understand the customer experience and find opportunities to fine-tune friction points and improve customer satisfaction. Customer service is not only about resolving customer issues, but also about providing an effortless experience.
First Contact Resolution is still a key success metric in the CCaaS space, but modern tools can revolutionize how far a customer service center can go to improve this metric, so the standards for it have risen. Passing context effectively across channels, real-time monitoring, predictive analytics and insights, and proactive outreach can increase the likelihood of addressing customer needs on the first contact, sometimes even without a live agent interaction.
Customer Retention Rate metric has been revamped with the advancement of technology in customer service. Advanced predictive analytics can help track the customer experience throughout their journey and shed light on the underlying customer behavior patterns. This will enable proactive engagement strategies personalized to every customer. Real-time sentiment analysis can provide instant feedback to the customer service representatives and their supervisors to give them a chance to course correct immediately in order to shift the sentiment to a positive experience and retain customers.
Agent Experience and Satisfaction has a direct impact on the operation of a contact center and hence the customer experience. Traditionally, this metric was not broadly tracked as an important measure of a successful contact center strategy. However, we know today that agent experience and satisfaction is a key metric for transforming contact centers from cost centers into revenue-generating units. Contact centers can leverage modern tools, from agent performance monitoring, training, and knowledge-gap identification to automated workflows and real-time agent assistance, to elevate the agent experience.
These strategies and tools help agents become more effective and productive while providing service. Satisfied agents are more motivated to help customers effectively. This can improve metrics like First Contact Resolution rate and Average Handle Time. Happy and productive agents are more likely to engage positively with customers to discuss potential cross-sell and upsell opportunities. Moreover, agent turnover and the cost associated with that will be lowered due to the reduced burden of onboarding and training new agents regularly and constantly being short of staff.
Sentiment Analysis and Real-time Interaction Quality provide immediate insights to contact center representatives about the customer’s emotions, the conversation tone, and the effectiveness of their interactions. This helps representatives refine their interaction strategy on the spot to maintain a positive and effective engagement with the customer. It transforms contact centers into emotionally intelligent, customer-focused support centers, which makes a huge difference at a time when the quality of experience matters as much as the outcome.
Predictive Analysis Accuracy represents an entirely new set of metrics for a modern contact center that leverages predictive analytics in its operation. It is crucial to measure this metric and evaluate the accuracy of the forecasts against customer behavior and demands as well as the agent workflow needs. Inaccurate predictions are not only ineffective but can also be harmful to contact center operations. They can lead to poor decision making, confusion, and disappointing customer experiences. Accuracy in the anticipation of customer needs can enable proactive outreach, positive and effective interactions, less friction points and reduced service contacts while facilitating effective automatic upsell and cross-sell initiatives.
Technology Utilization Rate is an important metric to track in a modern and evolving customer care solution. While with the latest technological advancements a lot of intelligent automation and enhancements can be made within a CCaaS solution, a contact center strategy is required to identify the most impactful modern capabilities for every customer service operation. The strategy needs to incorporate tracking the success of the technology adoption through system usage data and adoption metrics. This ensures that technology is being leveraged effectively and is providing value to business. The technology utilization tracking can also reveal training and adoption gaps, ensuring that modern tools are not just implemented for the sake of innovation, but are actively contributing to improved efficiency within a contact center.
The development of advanced native capabilities and integration of modern tools within CCaaS platforms are revolutionizing the customer care industry and reshaping customer expectations. Staying ahead of this shift is crucial. While utilizing these advancements to achieve operational efficiencies, it is equally important to redefine the success metrics that provide businesses with insights and feedback on a modern CCaaS strategic roadmap. Adopting a fresh approach to capturing traditional metrics like Customer Satisfaction Scores and First Contact Resolution, combined with measuring new metrics such as Real-time Interaction Quality and Predictive Analysis Accuracy will offer a comprehensive view of a contact center’s maturity and its progress towards a successful and effective modern CCaaS solution.
We can measure these metrics by utilizing built-in monitoring and analytical tools of modern CCaaS platforms along with AI-powered services integrations for features like Sentiment and Real-time Quality Analysis. We can gather regular feedback and data from agents and automated tracking tools to monitor system usability and efficiency. All this data can be streamed and displayed on a unified custom analytics dashboard, providing a comprehensive view of contact center performance and effectiveness.
Mobile app development is growing rapidly, and so is the expectation of robust support. “Mobile first” is the established paradigm for many application development teams. Unlike web deployment, an app release has to go through the review process via App Store Connect and Google Play. Minor and major releases follow the same app review process, which can take 1-4 days. Hot fixes and critical security patches are also bound by the review cycle restrictions. This can lead to service disruptions and negative app and customer reviews.
Let’s say the latest version of an app is 1.2, but a critical bug was identified in version 1.1. The app developers may release version 1.3, but the challenge is that it may take a while to get the new version out (unless a forced update mechanism is implemented for the app). Another potential challenge is that there is no guarantee the user has auto-updates turned on.
Luckily, “Over the Air” updates come to the rescue in such situations.
The Over the Air (OTA) deployment process for mobile apps allows developers to push updates without going through the traditional review process. The OTA update process enables faster delivery of any hot fix or patch.
While this is very exciting, it does come with a few limitations:
React Native consists of JavaScript and native code. When the app is compiled, it creates the JS bundles for the Android and iOS apps along with the native builds. OTA updates also rely on the JavaScript bundles, which makes React Native apps great candidates for OTA update technology.
One of our clients’ apps had an OTA deployment process implemented using App Center. However, Microsoft decided to retire App Center as of March 31, 2025, so we started exploring alternatives. One of the alternative solutions on the table was provided by App Center, and the other was to find a similar PaaS solution from another provider. Since the back-end stack was AWS, we chose to go with EAS Update.
EAS Update is a hosted service that serves updates for projects using the expo-updates library. Once EAS Update is configured correctly, the app will listen for updates targeting its version on the EAS dev cloud server. Expo provides great documentation on setup and configuration.
In a nutshell;
OTA deployment process
Additional details can be found at https://docs.expo.dev/eas-update/how-it-works/.
If you are new to React Native app development, this article may help Ramp Up On React/React Native In Less Than a Month. And if you are transitioning from React to React Native, you may find this React Native – A Web Developer’s Perspective on Pivoting to Mobile useful.
I am using my existing React Native 0.73.7 app. However, you can start a fresh React Native app for your test.
Project configuration requires us to set up expo-modules. The Expo installation guide provides an installer that handles configuration. Our project needed the SDK 50 version of the installer.
"@expo/vector-icons": "^14.0.0", "expo-asset": "~9.0.2", "expo-file-system": "~16.0.9", "expo-font": "~11.10.3", "expo-keep-awake": "~12.8.2", "expo-modules-autolinking": "1.10.3", "expo-modules-core": "1.11.14", "fbemitter": "^3.0.0", "whatwg-url-without-unicode": "8.0.0-3"
"@expo/code-signing-certificates": "0.0.5", "@expo/config": "~8.5.0", "@expo/config-plugins": "~7.9.0", "arg": "4.1.0", "chalk": "^4.1.2", "expo-eas-client": "~0.11.0", "expo-manifests": "~0.13.0", "expo-structured-headers": "~3.7.0", "expo-updates-interface": "~0.15.1", "fbemitter": "^3.0.0", "resolve-from": "^5.0.0"
EAS_CHANNEL=staging RUNTIME_VERSION="7.13" eas update --message "build:[QA] - 7.13.841 - 25.5.9.4 - OTA Test2 commit"
EAS update screen once OTA deployment is successful.
@rnx-kit/metro-serializer had to be commented out due to a compatibility issue with the EAS Update bundle process.

The headers “expo-runtime-version”, “expo-channel-name”, and “expo-platform” are required. They can also be set with the query parameters “runtime-version”, “channel-name”, and “platform”. Learn more: https://github.com/expo/fyi/blob/main/eas-update-missing-headers.md
The configuration values for the iOS app are maintained in Supporting/Expo.plist. The above error indicates that the EXUpdatesRequestHeaders block in the plist might be missing.
OTA deployment is very useful when a large number of customers are using the app and an urgent hot fix or patch needs to be released. You can set this up for your lower environments as well as production.
In my experience, it is very reliable, and the Expo team is doing a great job maintaining it.
So take advantage of this amazing service and Happy coding!
For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!
I have been writing about the Redwood Experience in Supply Chain Management, especially in Inventory Management. Oracle has gone all-in on the Redwood Experience for Inventory Management in 25B.
The 25B Inventory Management readiness documentation lists all the new features and how to use them, so I will not repeat this well-written document: https://docs.oracle.com/en/cloud/saas/readiness/scm/25b/inv25b/index.html
For the previous features in Redwood, please consider visiting the Readiness documentation: https://docs.oracle.com/en/cloud/saas/readiness/scm-all.html
This page is my personal favorite since it makes features, and the documentation that goes with them, easy to find.
1. Why?
You may be asking: why is Redwood so hot, and why do I have to transform?
If you are an Oracle customer or have been in the Oracle space for a while (I have been in it for almost three decades), you know that once Oracle sets a vision and starts delivering new technology, it becomes the future. We witnessed this when Oracle moved the business applications from 10.7 character mode to 10SC (Smart Client) and 10NCA (Network Computing Architecture). We went from character mode to GUI. It wasn’t easy or quick, but it happened. Then we moved through major releases in EBS and got used to the Self Service architecture.
Oracle delivered Fusion Applications a long time ago, and we have witnessed each quarterly release add more functionality. Since 2024, Oracle has been improving the user interface and adding mobility to the Inventory Management pages, but the most radical improvements have happened in 25A and 25B. Now almost 100% of Inventory Management is in Redwood, and it is the next generation of cloud applications.
Redwood brings better usability and a better user interface, as I explained in my past blog https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/, but it also opens the door for Artificial Intelligence (AI).
Oracle is expected to release major AI improvements in 25C, which I plan to cover in a future blog. The Redwood Experience is a prerequisite for all of this cool AI technology to work. Agentic AI features, or AI Agents, will be part of the Fusion Applications, which is a topic for another blog.
So, while the majority of the screens are optional, why not get ahead of the game and start adopting?
2. How
You may be asking: what actions do I need to take to use Redwood?
Read the documentation. On Customer Connect, we are seeing many questions from the Oracle community about Redwood pages not populating items or screens coming out blank. Please see this documentation for the important considerations:
https://docs.oracle.com/en/cloud/saas/readiness/scm/25a/inv25a/25A-inventory-wn-t65792.htm
By the way, if you have not registered for Oracle Customer Connect, I highly recommend it, so you can get in contact with your peer Oracle community members and Oracle ACEs like myself who can respond to your questions: https://community.oracle.com/customerconnect/
Then please review the profile options for the new features. You will have to flip the profile options at the site level from No to Yes so that the features are enabled.
The documents I mentioned previously list the profile option names. To navigate, use the task bar in the Functional Setup Manager and search for Manage Administrative Profile Values.
3. What
You may be asking: which Redwood pages should I use first?
Adoption is critical when changing the user experience, and change management becomes critical when migrating from the traditional cloud pages to the newly designed Redwood pages. I recommend first enabling the configuration pages, so that the internal Oracle team and business analysts get a feel for the Redwood Experience.
Then there are a few pages that users can benefit from, which I mentioned in my prior blog: https://blogs.perficient.com/2025/05/30/starting-redwood-experience-with-25a-inventory-management/
One bold move is to flip all features to Redwood and start testing internally in a lower pod first. Oracle has designed this so that companies have time to take on as much as they can, over a period that is so far open-ended; as of today, Oracle has not announced when the Redwood Experience will be mandatory. Most pages can be switched back and forth, but please read each feature’s release note to see if there is a note explicitly saying that once it is turned on, there is no path back.
In conclusion, the future of Oracle Fusion Applications lies in the Redwood Experience and built-in AI, so I recommend that you adopt it and start using it.
Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.
]]>