You may have seen this pop-up message appearing on the main page of your EPM instance:
Oracle has released a new Statement of Direction outlining the de-support of several legacy components in Oracle Cloud EPM, including:
These changes will impact all major business processes such as
Applies To: All Oracle Cloud EPM Users
Effective Date: October 2025 (25.10 Release)
In 2023, Oracle introduced Forms 2.0 and Dashboards 2.0 as enhanced successors to their legacy counterparts. Since then, Oracle has delivered continuous improvements based on global user feedback, resulting in widespread adoption of the 2.0 versions.
To streamline the platform and focus on innovation, Oracle will officially de-support Forms 1.0 and Dashboards 1.0 in October 2025.
Oracle encourages customers to submit enhancement ideas for the new pages via Customer Connect Idea Labs.
This transition is a strategic move to modernize the Oracle Cloud EPM experience. While change can be challenging, the new tools offer improved performance, usability, and long-term support. By preparing now, your organization can ensure a smooth and successful migration.
Need help planning your upgrade or training your team? Reach out to your Oracle representative or contact us directly—we’re here to help.
https://support.oracle.com/support/?anchorId=&kmContentId=10164145
https://docs.oracle.com/en/cloud/saas/freeform/freef/converting_dashboards_10_to_20.html
https://docs.oracle.com/en/cloud/saas/freeform/ffuuu/about_forms_versions.html
In today’s cloud-first world, securing your DNS layer is more critical than ever. DNS (Domain Name System) is a foundational element of network infrastructure, but it’s often overlooked as a security risk. Attackers frequently exploit DNS to launch phishing campaigns, exfiltrate data, and communicate with command-and-control servers. Proactive DNS security is no longer optional – it’s essential.
To strengthen DNS-layer security, Amazon Route 53 Resolver DNS Firewall provides robust control over DNS traffic by enabling the use of domain lists, allowing specific domains to be explicitly permitted or denied. Complementing these custom lists are AWS Managed Domain Lists, which autonomously block access to domains identified as malicious, leveraging threat intelligence curated by AWS and its trusted security partners. While this method is highly effective in countering known threats, cyber adversaries are increasingly employing sophisticated evasion techniques that go undetected by conventional blocklists. In this blog, I’ll explore DNS vulnerabilities, introduce Route 53 Resolver DNS Firewall, and walk you through practical strategies to safeguard your cloud resources.
By analyzing attributes such as query entropy, length, and frequency, the service can detect and intercept potentially harmful DNS traffic, even when interacting with previously unknown domains. This proactive approach enhances defense against advanced tactics, such as DNS tunneling and domain generation algorithms (DGAs), which attackers often use to establish covert communication channels or maintain malware connectivity with command-and-control servers.
In this blog, I’ll guide you through a hands-on journey into the world of DNS-layer threats and the tools available to defend against them. You’ll discover how to configure effective Route 53 Resolver DNS Firewall Advanced rules. I’ll also walk through a real-world threat detection scenario, demonstrating how the service seamlessly integrates with AWS Security Hub to provide enhanced visibility and actionable alerts. By the end of this post, you’ll be equipped with the knowledge to implement DNS Firewall rules that deliver intelligent, proactive protection for your AWS workloads.
DNS tunneling and Domain Generation Algorithms (DGAs) are sophisticated techniques employed by cyber adversaries to establish hidden communication channels and evade traditional security measures.
DNS Tunneling: This method exploits the DNS protocol by encapsulating non-DNS data within DNS queries and responses. Since DNS traffic is typically permitted through firewalls and security devices to facilitate normal internet operations, attackers leverage this trust to transmit malicious payloads or exfiltrate sensitive data without detection. The risks associated with DNS tunneling are significant, including unauthorized data transfer, persistent command-and-control (C2) communication, and the potential for malware to bypass network restrictions. Detecting such activity requires vigilant monitoring for anomalies such as unusually large DNS payloads, high-frequency queries to unfamiliar domains, and irregular query patterns.
Domain Generation Algorithms (DGAs): DGAs enable malware to generate a vast number of pseudo-random domain names, which are used to establish connections with Command and Control (C2) servers. This dynamic approach makes it challenging for defenders to block malicious domains using traditional blacklisting techniques, as the malware can swiftly switch to new domains if previous ones are taken down. The primary risks posed by DGAs include the resilience of malware infrastructures, difficulty in predicting and blocking malicious domains, and the potential for widespread distribution of malware updates. Effective mitigation strategies involve implementing advanced threat intelligence, machine learning models to detect anomalous domain patterns, and proactive domain monitoring to identify and block suspicious activities.
Understanding and addressing the threats posed by DNS tunneling and DGAs are crucial for maintaining robust cybersecurity defenses.
Route 53 Resolver DNS Firewall Advanced enhances DNS-layer security by intelligently analyzing DNS queries in real time to detect and block threats that traditional firewalls or static domain blocklists might miss. Here’s a breakdown of how it operates:
When a DNS query is made from resources within your VPC, it is routed through the Amazon Route 53 Resolver. DNS Firewall Advanced inspects each query before it is resolved. It doesn’t just match the domain name against a list—it analyzes the structure, behavior, and characteristics of the domain itself.
The advanced firewall uses machine learning models trained on massive datasets of real-world domain traffic. These models understand what “normal” DNS behavior looks like and can flag anomalies such as:
This allows it to detect suspicious domains, even if they’ve never been seen before.
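To make the idea of query analysis concrete, here is a minimal, illustrative Python sketch of one such signal: the Shannon entropy of a domain's left-most label, which tends to be higher for machine-generated names than for human-readable ones. The threshold and sample domains below are arbitrary assumptions for the demo, not values used by the service.

import math
from collections import Counter

def label_entropy(domain: str) -> float:
    """Shannon entropy (bits per character) of the left-most DNS label."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Arbitrary demo threshold; real detectors combine many features and ML scores.
THRESHOLD = 3.5

for name in ["portal.example.com", "xk2r9fqzt7vb1m.example.com"]:
    score = label_entropy(name)
    verdict = "suspicious" if score > THRESHOLD else "looks normal"
    print(f"{name}: entropy={score:.2f} -> {verdict}")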
Each suspicious query is scored based on how closely it resembles malicious behavior. You can configure confidence levels—High, Medium, or Low:
Based on your configured rules and confidence thresholds, the firewall can:
These controls give you flexibility to tailor the firewall’s behavior to your organization’s risk tolerance.
You can organize rules into rule groups, apply AWS Managed Domain Lists, and define custom rules based on your environment’s needs. You can also associate these rule groups with specific VPCs, ensuring DNS protection is applied at the network boundary.
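To show how these pieces fit together programmatically, here is a hedged boto3 sketch that creates a rule group, adds a BLOCK rule referencing a domain list, and associates the group with a VPC. The domain-list and VPC IDs are placeholders, and this uses the standard DNS Firewall API calls rather than any Advanced-specific parameters, so treat it as a starting point only.

import uuid
import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

# 1. Create a rule group to hold the firewall rules.
rule_group = resolver.create_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    Name="demo-dns-firewall-rule-group",
)["FirewallRuleGroup"]

# 2. Add a BLOCK rule that references a domain list. Replace the placeholder
#    with a custom or AWS Managed Domain List ID from list_firewall_domain_lists().
domain_list_id = "rslvr-fdl-EXAMPLE11111"  # placeholder, not a real ID
resolver.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    FirewallDomainListId=domain_list_id,
    Priority=100,
    Action="BLOCK",
    BlockResponse="NODATA",
    Name="block-known-bad-domains",
)

# 3. Associate the rule group with a VPC so its rules apply at the network boundary.
resolver.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    Priority=101,
    Name="demo-rule-group-association",
)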
Despite performing deep inspections, the firewall processes each DNS request in under a millisecond. This ensures there is no perceptible impact on application performance.
The above figure shows Route 53 DNS Firewall logs ingested into CloudWatch and analyzed through Contributor Insights.
To begin, I’ll demonstrate how to manually create a Route 53 Resolver DNS Firewall Advanced rule using the AWS Management Console. This rule will be configured to block DNS queries identified as high-confidence DNS tunneling attempts.
Route 53 Resolver query logging offers comprehensive visibility into DNS queries originating from resources within your VPCs, allowing you to monitor and analyze DNS traffic for both security and compliance purposes. When enabled, query logging captures key details for each DNS request—such as the queried domain name, record type, response code, and the source VPC or instance. This capability becomes especially powerful when paired with Route 53 Resolver DNS Firewall, as it enables you to track blocked DNS queries and refine your security rules based on real traffic behavior within your environment. Below are sample log entries generated when the DNS Firewall identifies and acts upon suspicious activity, showcasing the depth of information available for threat analysis and incident response.
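As a hedged sketch of how you might pull blocked queries out of those logs with boto3 and CloudWatch Logs Insights, the snippet below assumes a placeholder log group name and uses the firewall_rule_action field from the Resolver query-log format; verify the field names against your own log entries.

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Placeholder; use the log group you configured as the Resolver query-log destination.
LOG_GROUP = "/aws/route53resolver/demo-query-logs"

query = """
fields @timestamp, query_name, query_type, srcaddr, firewall_rule_action
| filter firewall_rule_action = "BLOCK"
| sort @timestamp desc
| limit 20
"""

start = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query finishes, then print the blocked queries.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})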
The following is an example of a DNS tunneling block.
This type of alert is useful in:
Amazon Route 53 Resolver DNS Firewall Advanced marks a significant advancement in protecting organizations against sophisticated DNS-layer threats. As discussed, DNS queries directed to the Route 53 Resolver take a distinct route that bypasses conventional AWS security measures such as security groups, network ACLs, and even AWS Network Firewall, introducing a potential security blind spot within many environments. In this post, I’ve examined how attackers exploit this gap using techniques like DNS tunneling and domain generation algorithms (DGAs), and how Route 53 Resolver DNS Firewall Advanced leverages real-time pattern recognition and anomaly detection to mitigate these risks. You also explored how to set up the service via the AWS Management Console and deploy it using a CloudFormation template that includes pre-configured rules to block high-confidence threats and alert on suspicious activity. Additionally, you saw how enabling query logging enhances visibility into DNS behavior and how integrating with AWS Security Hub consolidates threat insights across your environment. By adopting these capabilities, you can better safeguard your infrastructure from advanced DNS-based attacks that traditional blocklists often miss, strengthening your cloud security posture without compromising performance.
When using cloud-based event-driven systems, it’s essential to respond to changes at the storage level, such as when files are added, modified, or deleted. Google Cloud Platform (GCP) makes this easy by enabling Cloud Storage and Pub/Sub to talk to one another directly. This arrangement lets you send out structured real-time alerts whenever something happens inside a bucket.
This configuration is specifically designed to catch deletion events. When a file is deleted from a GCS bucket, a message is sent to a Pub/Sub topic. That topic becomes the main connection point, delivering alerts to any systems that are listening, such as a Cloud Run service, an external API, or another microservice. These systems can then react by cleaning up data, recording the incident, or sending out alarms.
The architecture also takes care of critical backend needs. It employs IAM roles to set limits on who can access what, has retry rules in case something goes wrong for a short time, and links to a Dead-Letter Queue (DLQ) to keep messages that couldn’t be delivered even after numerous tries. The whole system stays loosely coupled and resilient because it uses technologies that are built into GCP. You can easily add or remove downstream services without changing the original bucket.
This pattern is a dependable and adaptable way to enforce cleanup rules, track changes for auditing, or initiate actions in real time. In this article, we’ll explain the fundamental ideas, show you how to set it up, and talk about the important design choices that make this type of event notification system work with Pub/Sub to keep everything running smoothly.
Pub/Sub makes it easy to respond to changes in Cloud Storage, like when a file is deleted, without having to connect everything closely. You don’t link each service directly to the storage bucket. Instead, you send events using Pub/Sub. This way, logging tools, data processors, and alarm systems may all work on their own without interfering with each other. The best thing? You can count on it. Even if something goes wrong, Pub/Sub makes sure that events don’t get lost. And since you only pay when messages are delivered or received, you don’t have to pay for resources that aren’t being used. This setup lets you be flexible, respond in real time, and evolve, which is great for cloud-native systems that need to be able to adapt and stay strong.
If you don’t already have a bucket, go to the Cloud Storage console, click ‘Create Bucket’, and follow these steps:
– Name: Choose a globally unique bucket name (e.g., demopoc-pubsub)
– Location: Pick a region or multi-region
– Default settings: You can leave the rest unchanged for this demo
Go to Pub/Sub in the Cloud Console and:
1. Click ‘Create Topic’
2. Name it something like demo-poc-pubsub
3. Leave the rest as defaults
4. Click Create
Go to the IAM permissions for your topic:
1. In the Pub/Sub console, go to your topic
2. Click ‘Permissions’
3. Click ‘Grant Access’
4. Add the GCS service account: service-<project-number-sample>@gs-project-accounts.iam.gserviceaccount.com
5. Assign it the role: Pub/Sub Publisher
6. Click Save
Open your cloud shell terminal and run:
gcloud storage buckets notifications create gs://my-delete-audit-bucket --topic=gcs-object-delete-topic --event-types=OBJECT_DELETE --payload-format=json
Explanation of the gcloud command:
gs://my-delete-audit-bucket: Your storage bucket
--topic: Pub/Sub topic name
--event-types=OBJECT_DELETE: Triggers only when objects are deleted
--payload-format=json: Format of the Pub/Sub message
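If you would rather create the same notification from code, here is a hedged sketch using the google-cloud-storage Python client. The bucket and topic names mirror the command above, and the keyword arguments should be checked against the client-library version you use.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-delete-audit-bucket")

# Equivalent to the gcloud command above: publish OBJECT_DELETE events
# to the Pub/Sub topic using the JSON payload format.
notification = bucket.notification(
    topic_name="gcs-object-delete-topic",
    event_types=["OBJECT_DELETE"],
    payload_format="JSON_API_V1",
)
notification.create()

print("Delete notification created on bucket", bucket.name)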
Expected message payload:
{
  "kind": "storage#object",
  "bucket": "my-delete-audit-bucket",
  "name": "test.txt",
  "timeDeleted": "2025-06-05T14:32:29.123Z"
}
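To show what a downstream consumer might look like, here is a hedged sketch of a Pub/Sub-triggered Cloud Function (1st-gen background-function signature) that decodes the payload above and reacts to the deletion. The function name and the protected/ prefix are illustrative assumptions.

import base64
import json
import logging

def handle_gcs_delete(event, context):
    """Triggered by a Pub/Sub message published by the GCS notification config."""
    # Pub/Sub delivers the notification payload base64-encoded in event["data"].
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    bucket = payload.get("bucket")
    obj = payload.get("name")
    deleted_at = payload.get("timeDeleted")

    logging.info("Object %s deleted from bucket %s at %s", obj, bucket, deleted_at)

    # Illustrative policy check: alert on deletions under a protected prefix.
    if obj and obj.startswith("protected/"):
        logging.warning("Protected object %s was deleted; raise an alert here", obj)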
Use Case: Stop people from accidentally deleting data or make sure that data is kept for at least a certain amount of time.
How it works:
When an object is deleted, the system checks to see if it should have been deleted based on its name, metadata, and other factors.
It logs the incident or restores the object from backup (for example, using Nearline or Archive tier) if it finds a violation.
Why it matters:
Helps keep data safe from being lost due to accidental or unauthorized deletion. Makes sure that data lifecycle policies are followed.
3. Start cleaning jobs downstream
Use Case: When an object is removed, connected data in other systems should be cleaned up automatically.
How it works:
When you delete a GCS object, the Pub/Sub message triggers a Cloud Function or Dataflow pipeline. This job deletes linked records in BigQuery or Firestore, or invalidates cache/CDN entries.
Why it matters:
Keeps data systems consistent with one another. Stops orphaned records from being left behind or metadata from going stale.
4. Alerts in Real Time
Use Case: Let teams or monitoring systems know when sensitive or unexpected removals happen.
How it works:
A Cloud Function that listens to Pub/Sub looks at the event. It gives an alert if the deletion meets certain criteria, such as a certain folder or file type.
Why it matters:
Allows for a real-time response. Increases visibility into high-risk operations.
Result:
We created a modular, fault-tolerant, real-time event-driven pipeline by using a Pub/Sub-based notification system for Cloud Storage object deletions. When an object is removed from the specified GCS bucket, a notification is sent to a Pub/Sub topic. That topic makes sure that the message gets to one or more downstream consumers.
Combining Cloud Storage with Pub/Sub for object deletions is a basic building block of modern GCP design. It publishes events to a Pub/Sub topic in near real time when something is deleted. These events can be used for audit trails, enforcing data policies, automatic cleanups, and even alarms.
This method promotes loose coupling by enabling Cloud Storage to send events without having to know who the subscribers are. Subscribers such as Cloud Functions, Dataflow, and custom applications can handle messages on their own, which makes the system easier to scale and manage.
Using Pub/Sub makes production workflows more organized because it adds reliability, parallelism, retries, and other benefits. If GCP engineers want to design cloud systems that are adaptable, responsive, and ready for the future, they need to master event-driven integration.
In this blog, we’ll walk through how to use Eventstream in Microsoft Fabric to capture events triggered by Power Automate and store them in a Lakehouse table. Whether you’re building dashboards, triggering insights, or analyzing user interactions, this integration provides a powerful way to bridge business logic with analytics.
Start by creating a Power Automate flow. Here’s what your flow should look like:
You can choose any input you like. In this example, we’re using “Name”
Add the Send Event trigger to your flow.
Leave the flow as it is for now and move on to Microsoft Fabric.
Go to https://app.fabric.microsoft.com, create a new Workspace (name it as you wish), then:
In the Eventstream, choose Custom Endpoints.
After publishing, your input will be updated.
Go to Details, then to SAS Key Authentication, and copy the Event Hub Name and Primary Connection String.
Return to Power Automate and:
Click Save to complete the connection.
Make sure to enter the data you want to pass (e.g., Name) in the Content field.
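Power Automate’s Send event action is all you need here, but if you want to verify the custom endpoint independently, the following hedged Python sketch sends the same kind of payload using the azure-eventhub package and the connection details copied above (both values are placeholders):

import json
from azure.eventhub import EventHubProducerClient, EventData

# Placeholders: use the Event Hub name and primary connection string copied
# from the Eventstream custom endpoint's SAS Key Authentication details.
CONNECTION_STRING = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=key;SharedAccessKey=<key>"
EVENT_HUB_NAME = "<event-hub-name>"

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STRING, eventhub_name=EVENT_HUB_NAME
)

with producer:
    batch = producer.create_batch()
    # Same shape as the data passed from Power Automate's Content field.
    batch.add(EventData(json.dumps({"Name": "Test from Python"})))
    producer.send_batch(batch)

print("Event sent to the Eventstream custom endpoint.")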
And that’s it! Your data will now be stored in the Lakehouse.
Connecting Power Automate to Microsoft Fabric using Eventstream provides a robust and efficient solution for real-time data integration. Looking ahead, this setup can be extended to include:
This unlocks deeper insights and intelligent automation across business processes.
This section explains how to install, configure, and deploy a Coveo Atomic project using the Coveo CLI.
To get started, install the Coveo CLI globally with npm:
npm install -g @coveo/cli
To ensure you’re always using the latest version, update it anytime with:
npm update -g @coveo/cli
Once the CLI is installed, you will need to authenticate to your Coveo organization. Use the following command, replacing the placeholders with your specific organization details:
coveo auth:login --environment=prod --organization=<your-organization> --region=<your-region>
For example:
coveo auth:login --environment=prod --organization=blogtestorgiekhkuqk --region=us
After logging in, initialize a new atomic project by running:
coveo atomic:init <project-name> --type=app
For example:
coveo atomic:init atomicInterface --type=app
Once the project is ready, build the application:
npm run build
This command compiles your code and prepares it for deployment. It creates a production-ready build inside the dist/ folder.
Then deploy your interface to Coveo using:
coveo ui:deploy
After deployment, your search interface will be hosted on Coveo’s infrastructure, ready to embed anywhere—like Adobe
This section guides you through using and initializing the Atomic-Hosted-Page component of your Coveo project.
If you have customized your Atomic search page locally and deployed it to the Coveo infrastructure, then it will be listed in the Custom Deployment tab of the Search Pages (platform-ca | platform-eu | platform-au) page of the Administration Console. You can use the atomic-hosted-page component to consume it from anywhere on the web.
Once you have installed the atomic-hosted-page or atomic-hosted-ui web component, you’ll need to add a script like the following to initialize the atomic-hosted-page component:
<head>
  <!-- ... -->
  <script>
    (async () => {
      await customElements.whenDefined('atomic-hosted-ui');
      const atomicHostedUIPage = document.querySelector('atomic-hosted-ui');

      await atomicHostedUIPage.initialize({
        accessToken: '<ACCESS_TOKEN>',
        organizationId: '<ORGANIZATION_ID>',
        pageId: '<PAGE_ID>'
      });
    })();
  </script>
  <!-- ... -->
  <atomic-hosted-ui hosted-type="code"></atomic-hosted-ui>
  <!-- ... -->
</head>
In this script, replace the placeholders with Coveo-specific details:
<ACCESS_TOKEN> (string) is an API key or platform token that grants the View all access level on the Search Pages domain in the target Coveo organization.
<ORGANIZATION_ID> (string) is the unique identifier of your organization (for example, mycoveoorganizationa1b23c).
<PAGE_ID> (string) is the unique identifier of the hosted page, which you can copy from the Administration Console.
Login to Adobe AEM Author Instance
Example URL: https://author-555.adobeaemcloud.com/
Navigate to the AEM Sites Console
Go to: https://author-555.adobeaemcloud.com/sites.html/content/blog/us/en/search-results
The Sites Console in AEM is used to manage your website’s pages and structure.
Create or Select the Page
Create a new page or use an existing one, for example: search-results.
Select the page’s checkbox → click Edit (top toolbar).
You’ll be redirected to the Page Editor: https://author-555.adobeaemcloud.com/editor.html/content/blog/us/en/search-results.html.
Embed the Coveo Script:
In the Page Editor, open the Content Tree on the left, select Layout Container, click the Configure (wrench icon) button
Choose Embed Type
Choose Embed → iFrame. Paste your <atomic-hosted-page> script inside the iFrame.
Preview and Publish the Page
Click the Page Information icon → Publish Page; the confirmation indicates that the page will be live.
View the Published Page
Example URL: http://localhost:4502/content/blog/us/en/search-results.html
That’s it—you’ve successfully embedded your Coveo Atomic CLI-based Hosted Search Page inside Adobe!
Use a hosted page in your infrastructure | Coveo Atomic
Microsoft Copilot is a revolutionary AI-powered tool for Power Platform, designed to streamline the development process and enhance the intelligence of your applications. This learning path will take you through the fundamentals of Copilot and its integration with Power Apps, Power Automate, Power Virtual Agents, and AI Builder.
Copilot in Microsoft Power Platform helps app makers quickly solve business problems. A copilot is an AI assistant that can help you perform tasks and obtain information. You interact with a copilot by using a chat experience. Microsoft has added copilots across the different Microsoft products to help users be more productive. Copilots can be generic, such as Microsoft Copilot, and not tied to a specific Microsoft product. Alternatively, a copilot can be context-aware and tailored to the Microsoft product or application that you’re using at the time.
Microsoft Power Platform has several copilots that are available to makers and users.
Use this copilot to help create a canvas app directly from your ideas. Give the copilot a natural language description, such as “I need an app to track my customer feedback.” Afterward, the copilot offers a data structure for you to iterate until it’s exactly what you need, and then it creates pages of a canvas app for you to work with that data. You can edit this information along the way. Additionally, this copilot helps you edit the canvas app after you create it. Power Apps also offers copilot controls for users to interact with Power Apps data, including copilots for canvas apps and model-driven apps.
Use this copilot to create automation that communicates with connectors and improves business outcomes. This copilot can work with cloud flows and desktop flows. Copilot for Power Automate can help you build automation by explaining actions, adding actions, replacing actions, and answering questions.
Use this copilot to describe and create an external-facing website with Microsoft Power Pages. As a result, you have theming options, standard pages to include, and AI-generated stock images and relevant text descriptions for the website that you’re building. You can edit this information as you build your Power Pages website.
You can create a copilot by using a language model, which is like a computer program that can understand and generate human-like language. A language model can perform various natural language processing tasks based on a deep-learning algorithm. The massive amounts of data that the language model processes can help the copilot recognize, translate, predict, or generate text and other types of content.
Despite being trained on a massive amount of data, the language model doesn’t contain information about your specific use case, such as the steps in a Power Automate flow that you’re editing. The copilot shares this information for the system to use when it interacts with the language model to answer your questions. This context is commonly referred to as grounding data. Grounding data is use case-specific data that helps the language model perform better for a specific topic. Additionally, grounding data ensures that your data and IP are never part of training the language model.
Consider the various copilots in Microsoft Power Platform as specialized assistants that can help you become more productive. Copilot can help you accelerate solution building in the following ways:
Prototyping is a way of taking an idea that you discussed with others or drew on a whiteboard and building it in a way that helps someone understand the concept better. You can also use prototyping to validate that an idea is possible. For some people, having access to your app or website can help them become a supporter of your vision, even if the app or website doesn’t have all the features that they want.
Building on the prototyping example, you might need inspiration on how to evolve the basic prototype that you initially proposed. You can ask Copilot for inspiration on how to handle the approval of which ideas to prioritize. Therefore, you might ask Copilot, “How could we handle approval?”
By using a copilot to assist in your solution building in Microsoft Power Platform, you can complete more complex tasks in less time than if you do them manually. Copilot can also help you complete small, tedious tasks, such as changing the color of all buttons in an app.
While building an app, flow, or website, you can open a browser and use your favorite search engine to look up something that you’re trying to figure out. With Copilot, you can learn without leaving the designer. For example, your Power Automate flow has a step to List Rows from Dataverse, and you want to find out how to check if rows are retrieved. You could ask Copilot, “How can I check if any rows were returned from the List rows step?”
Knowing the context of your flow, Copilot would respond accordingly.
Copilot can be a powerful way to accelerate your solution-building. However, it’s the maker’s responsibility to know how to interact with it. That interaction includes writing prompts to get the desired results and evaluating the results that Copilot provides.
While asking Copilot to “Help me automate my company to run more efficiently” seems ideal, that prompt is unlikely to produce useful results from Microsoft Power Platform Copilots.
Consider the following example, where you want to automate the approval of intake requests. Without significant design thinking, you might use the following prompt with Copilot for Power Automate.
“Create an approval flow for intake requests and notify the requestor of the result.”
This prompt produces the following suggested cloud flow.
While the prompt is an acceptable start, you should consider more details that can help you create a prompt that might get you closer to the desired flow.
A good way to improve your success is to spend a few minutes on a whiteboard or other visual design tool, drawing out the business process.
A prompt should include as much relevant information as possible. Each prompt should include your intended goal, context, source, and outcome.
When you’re starting to build something with Microsoft Power Platform copilots, the first prompt that you use sets up the initial resource. For Power Apps, this first prompt is to build a table and an app. For Power Automate, this first prompt is to set up the trigger and the initial steps. For Power Pages, this first prompt sets up the website.
Consider the previous example and the sequence of steps in the sample drawing. You might modify your initial prompt to be similar to the following example.
“When I receive a response to my Intake Request form, start and wait for a new approval. If approved, notify the requestor saying so and also notify them if the approval is denied.”
You can iterate with your copilot. After you establish the context, Copilot remembers it.
The key to starting to build an idea with Copilot is to consider how much to include with the first prompt and how much to refine and add after you set up the resource. Knowing this key consideration is helpful because you don’t need to get a perfect first prompt, only one that builds the idea. Then, you can refine the idea interactively with Copilot.
Copilot enables developers to write Power FX formulas using natural language. For instance, typing /subtract datepicker1 from datepicker2 in a label control prompts Copilot to generate the corresponding formula, such as DateDiff(DatePicker1.SelectedDate, DatePicker2.SelectedDate, Days). This feature simplifies formula creation, especially for those less familiar with coding.
By integrating Copilot with AI Builder, users can automate the extraction of data from documents, such as invoices or approval forms. For example, Copilot can extract approval justifications and auto-generate emails for swift approvals within Outlook. This process streamlines workflows and reduces manual data entry.
Copilot assists users in creating automated workflows by interpreting natural language prompts. For example, a user can instruct Copilot to “Create a flow that sends an email when a new item is added to SharePoint,” and Copilot will generate the corresponding flow. This feature accelerates the automation process without requiring extensive coding knowledge.
In Power Apps Studio, Copilot allows developers to build and edit apps using natural language commands. For instance, typing “Add a button to my header” or “Change my container to align center” enables Copilot to execute these changes, simplifying the development process and making it more accessible.
Copilot facilitates the creation of conversation topics in Power Virtual Agents by generating them from natural language descriptions. For example, describing a topic like “Customer Support” prompts Copilot to create a topic with relevant trigger phrases and nodes, streamlining the bot development process.
Copilot assists in building websites by interpreting natural language descriptions. For example, stating “Create a homepage with a contact form and a product gallery” prompts Copilot to generate the corresponding layout and components, expediting the website development process.
Limitation | Description | Example |
---|---|---|
1. Limited understanding of business context | Copilot doesn’t always understand your specific business rules or logic. | You ask Copilot to "generate a travel approval form," but your org requires approval from both the team lead and HR. Copilot might only include one level of approval. |
2. Restricted to available connectors and data | Copilot can only access data sources that are already connected in your app. | You ask it to "show top 5 sales regions," but haven’t connected your Sales DB — Copilot can't help unless that connection is preconfigured. |
3. Not fully customizable output | You might not get exactly the layout, formatting, or logic you want — especially for complex logic. | Copilot generates a form with 5 input fields, but doesn't group them or align them properly; you still need to fine-tune it manually. |
4. Model hallucination (AI guessing wrong info) | Like other LLMs, Copilot may “guess” when unsure — and guess incorrectly. | You ask Copilot to create a formula for filtering “Inactive users,” and it writes a filter condition that doesn’t exist in your dataset. |
5. English-only or limited language support | Most effective prompts and results come in English; support for other languages is limited or not optimized. | You try to ask Copilot in Hindi, and it misinterprets the logic or doesn't return relevant suggestions. |
6. Requires clean, named data structures | Copilot struggles when your tables/columns aren't clearly named. | If you name a field fld001_status instead of Status, Copilot might fail to identify it correctly or generate unreadable code. |
7. Security roles not respected by Copilot | Copilot may suggest features that would break your security model if implemented directly. | You generate a data view for all users, but your app is role-based — Copilot won’t automatically apply row-level security filters. |
8. No support for complex logic or multi-step workflows | It’s good at simple flows, but not for things like advanced branching, looping, or nested conditions. | You ask Copilot to automate a 3-level approval chain with reminder logic and escalation — it gives a very basic starting point. |
9. Limited offline or disconnected use | Copilot and generated logic assume you’re online. | If your app needs to work offline (e.g., for field workers), Copilot-generated logic may not account for offline sync or local caching. |
10. Only works inside Microsoft ecosystem | Copilot doesn’t support 3rd-party AI tools natively. | If your company uses Google Cloud or OpenAI directly, Copilot won’t connect unless you build custom connectors or use HTTP calls. |
Knowing how to best interact with the copilot can help get your desired results quickly. When you’re communicating with the copilot, make sure that you’re as clear as you can be with your goals. Review the following dos and don’ts to help guide you to a more successful copilot-building experience.
To have a more successful copilot building experience, do the following:
Copilot in Microsoft Power Platform marks a major step forward in making low-code development truly accessible and intelligent. By enabling users to build apps, automate workflows, analyze data, and create bots using natural language, it empowers both technical and non-technical users to turn ideas into solutions faster than ever.
It transforms how people interact with technology by:
With built-in security, compliance with organizational governance, and continuous improvements from Microsoft’s AI advancements, Copilot is not just a tool—it’s a catalyst for transforming how organizations solve problems and deliver value.
As AI continues to evolve, Copilot will play a central role in democratizing software development and helping organizations move faster and smarter with data-driven, automated tools.
To streamline project development and maintenance in any programming language, we need the support of metadata, configuration, and documentation. Project configuration can be done using configuration files, which are easy to use and give developers a friendly way to interact with the project. One such type of configuration file used in DBT is the YAML file.
In this blog, we will go through the required YAML files in DBT.
Let’s first understand what YAML and DBT are.
DBT (Data Build Tool):
Data transformation is an important process in modern analytics. DBT is a tool to transform, clean, and aggregate data within a data warehouse. The power of DBT lies in its use of YAML files for both configuration and transformation.
Note:
Please go through the DBT link for more details.
What is a YAML file:
YAML originally stood for “Yet Another Markup Language” (it is now said to stand for “YAML Ain’t Markup Language”). It is easy to read and understand, and YAML is a superset of JSON.
Common uses of YAML files:
– Configuration Management:
Used to define configuration such as roles and environments.
– CI/CD Pipelines:
CI/CD tools depend on YAML files to describe their pipelines.
– Data Serialization:
YAML can represent complex data types such as linked lists, arrays, etc.
– APIs:
YAML can be used to define API contracts and specifications.
Sample Example of YAML file:
YAML files are the core of defining configuration and transformation in DBT. YAML files have the “.yml” extension.
The most important YAML file is
profiles.yml:
This file needs to be stored locally. It contains sensitive details that are used to connect to the target data warehouse.
Purpose:
It contains the main configuration details used to connect to the data warehouse (Snowflake, Postgres, etc.).
The profile configuration looks like this:
Note:
We should not share the profiles.yml file with anyone because it contains target data warehouse information. This file is used in DBT Core, not in DBT Cloud.
YAML file classification according to DBT component:
Let us go through different components of DBT with corresponding YAML files:
1. dbt_project.yml:
This is the most important configuration file in DBT. It tells DBT what configuration to use for the project. By default, DBT looks for dbt_project.yml in the current directory.
For Example:
name: string
config-version: 2
version: version
profile: profilename

model-paths: [directorypath]
seed-paths: [directorypath]
test-paths: [directorypath]
analysis-paths: [directorypath]
macro-paths: [directorypath]
snapshot-paths: [directorypath]
docs-paths: [directorypath]
asset-paths: [directorypath]
packages-install-path: directorypath

clean-targets: [directorypath]

query-comment: string
require-dbt-version: version-range | [version-range]

flags: <global-configs>

dbt-cloud:
  project-id: project_id # Required
  defer-env-id: environment # Optional

exposures:
  +enabled: true | false

quoting:
  database: true | false
  schema: true | false
  identifier: true | false

metrics: <metric-configs>
models: <model-configs>
seeds: <seed-configs>
semantic-models: <semantic-model-configs>
saved-queries: <saved-queries-configs>
snapshots: <snapshot-configs>
sources: <source-configs>
tests: <test-configs>
vars: <variables>

on-run-start: sql-statement | [sql-statement]
on-run-end: sql-statement | [sql-statement]

dispatch:
  - macro_namespace: packagename
    search_order: [packagename]

restrict-access: true | false
Model:
Models use SQL to define how your data is transformed. In a model configuration file, you define the source and target tables and their transformations. It lives under the models directory of the DBT project, and we can name it as we like.
Below is an example:
This is the YAML file for models, named “schema.yml”.
Purpose of the model YAML file:
It configures model-level metadata such as tags, materialization, name, and columns, which are used when transforming the data.
It looks like this:
version: 2

models:
  - name: my_first_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null

  - name: my_second_dbt_model
    description: "A starter dbt model"
    columns:
      - name: id
        description: "The primary key for this table"
        data_tests:
          - unique
          - not_null
2. Seed:
Seeds are used to load CSV files into the data model. This is useful for staging data before applying any transformation.
Below is an example:
Purpose of the seed YAML file:
To define the CSV files under the seed directory, how their columns are typed, and how they are loaded into the data warehouse tables.
The configuration file looks like this:
version: 2

seeds:
  - name: <name>
    description: Raw data from a source
    config:
      database: <database name>
      schema: <database schema>
      column_types:
        id: integer
        name: varchar(100)
Testing:
Testing is a key step in any project. Similarly, DBT uses a tests folder to test unique constraints, not-null values, and more.
Create a dbtTest.yml file under the tests folder of the DBT project.
Purpose of the test YAML file:
It helps to check data integrity and quality, and keeps these checks separate from the business logic.
It looks like this:
columns:
  - name: order_id
    tests:
      - not_null
      - unique
That covers the different YAML files in DBT and the purpose of each.
Conclusion:
DBT and its YAML files provide a human-readable way to manage data transformations. With DBT, we can easily create, transform, and test data models, making it a valuable tool for data professionals. Together, DBT and YAML empower you to work more efficiently, whether you are a data analyst, data engineer, or business analyst.
Thanks for reading.
Serverless is changing the game—no need to manage servers anymore. In this blog, we’ll see how to build a serverless blogging platform using AWS Lambda and Python. It’s scalable, efficient, and saves cost—perfect for modern apps.
Before starting the demo, make sure you have: an AWS account, basic Python knowledge, AWS CLI and Boto3 installed.
Open the Lambda service and click “Create function.” Choose “Author from scratch,” name it something like BlogPostHandler, select Python 3.x, and give it a role with access to DynamoDB and S3. Then write your code using Boto3 to handle CRUD operations for blog posts stored in DynamoDB.
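As a hedged starting point, here is a minimal sketch of what BlogPostHandler could look like. The table name, routes, and field names are assumptions for this demo rather than a complete implementation.

import json
import uuid
import boto3

# Assumed table name from the DynamoDB step below.
table = boto3.resource("dynamodb").Table("BlogPosts")

def lambda_handler(event, context):
    """Routes API Gateway (Lambda proxy) requests to simple CRUD operations."""
    method = event.get("httpMethod", "GET")
    path_params = event.get("pathParameters") or {}

    if method == "POST":
        post = json.loads(event.get("body") or "{}")
        post["postId"] = str(uuid.uuid4())
        table.put_item(Item=post)
        return _response(201, post)

    if method == "GET" and path_params.get("postId"):
        item = table.get_item(Key={"postId": path_params["postId"]}).get("Item")
        return _response(200 if item else 404, item or {"message": "Not found"})

    if method == "GET":
        return _response(200, table.scan().get("Items", []))

    if method == "DELETE" and path_params.get("postId"):
        table.delete_item(Key={"postId": path_params["postId"]})
        return _response(204, {})

    return _response(400, {"message": "Unsupported request"})

def _response(status, body):
    return {
        "statusCode": status,
        "headers": {"Access-Control-Allow-Origin": "*"},  # CORS header, see the CORS section below
        "body": json.dumps(body),
    }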
First, go to REST API and click “Build.” Choose “New API,” name it something like BlogAPI, and select “Edge optimized” for global access. Then create a resource like /posts, add methods like GET or POST, and link them to your Lambda function (e.g. BlogPostHandler) using Lambda Proxy integration. After setting up all methods, deploy it by creating a stage like prod. You’ll get an Invoke URL which you can test using Postman or curl.
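Once the prod stage is deployed, a quick test from Python confirms the integration end to end. The invoke URL below is a placeholder, and the /posts/{postId} route is an assumption based on the resources described above.

import requests

# Placeholder; replace with the Invoke URL shown for your prod stage.
BASE_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod"

# Create a post, then read it back by its generated id.
created = requests.post(f"{BASE_URL}/posts", json={"title": "Hello", "content": "First post"}).json()
print("Created:", created)

fetched = requests.get(f"{BASE_URL}/posts/{created['postId']}").json()
print("Fetched:", fetched)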
Open DynamoDB and click “Create table.” Name it something like BlogPosts, set postId as the partition key. If needed, add a sort key like category for filtering. Default on-demand capacity is fine—it scales automatically. You can also add extra attributes like timestamp or tags for sorting and categorizing. Once done, hit “Create.”
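If you prefer to script this step, here is a hedged boto3 equivalent of the console setup, using the postId partition key and on-demand capacity described above.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="BlogPosts",
    AttributeDefinitions=[{"AttributeName": "postId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "postId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity, scales automatically
)

# Wait until the table is active before writing to it.
dynamodb.get_waiter("table_exists").wait(TableName="BlogPosts")
print("BlogPosts table is ready.")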
First, make your front-end files—HTML, CSS, maybe some JavaScript. Then go to AWS S3, create a new bucket with a unique name, and upload your files like index.html. This will host your static website.
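A hedged boto3 sketch of the same upload step looks like the following; the bucket and file names are placeholders for your own front-end assets.

import boto3

s3 = boto3.client("s3")
BUCKET = "your-bucket-name"  # placeholder, must be globally unique

# Upload the front-end files with appropriate content types.
files = [
    ("index.html", "text/html"),
    ("styles.css", "text/css"),
    ("app.js", "application/javascript"),
]
for filename, content_type in files:
    s3.upload_file(filename, BUCKET, filename, ExtraArgs={"ContentType": content_type})

# Enable static website hosting on the bucket.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)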
After uploading, set the bucket policy to allow public read access so anyone can view your site. That’s it—your static website will now be live from S3.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "PublicReadGetObject", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::your-bucket-name/*" } ] }
After uploading, don’t forget to replace your-bucket-name in the bucket policy with your actual S3 bucket name. This makes sure the permissions work properly. Now your static site is live—S3 will serve your HTML, CSS, and JS smoothly and reliably.
Go to CloudFront and create a new Web distribution. Set the origin to your S3 website URL (like your-bucket-name.s3-website.region.amazonaws.com, not the ARN). For Viewer Protocol Policy, choose “Redirect HTTP to HTTPS” for secure access. Leave other settings as-is unless you want to tweak cache settings. Then click “Create Distribution”—your site will now load faster worldwide.
To let your frontend talk to the backend, you need to enable CORS in API Gateway. Just open the console, go to each method (like GET, POST, DELETE), click “Actions,” and select “Enable CORS.” That’s it—your frontend and backend can now communicate properly.
Additionally, make sure your Lambda function responses include the following CORS headers (we already added them in our Lambda function).
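Here is a hedged example; the wildcard origin is a permissive placeholder you would normally tighten to your CloudFront domain.

# Headers returned from the Lambda function so browsers accept the responses.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",  # tighten to your CloudFront domain in production
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET,POST,DELETE,OPTIONS",
}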
That’s it—your serverless blogging platform is ready! API Gateway gives you the endpoints, Lambda handles the logic, DynamoDB stores your blog data, and S3 + CloudFront serve your frontend fast and globally. Fully functional, scalable, and no server headaches!
Building a serverless blog with AWS Lambda and Python shows how powerful and flexible serverless really is. It’s low-maintenance, cost-effective, and scales easily perfect for anything from a personal blog to a full content site. A solid setup for modern web apps!
Clinical trial data management is critical to pharmaceutical research, yet it remains a significant challenge for many organizations. The industry faces several persistent hurdles:
Our tailored solution for a top-five life sciences leader integrated data from 13 sources and included bi-directional EDC integration and multiple AI models. Our deep understanding of clinical trial processes, data management, and platforms proved instrumental in delivering a solution that met—and exceeded—expectations.
Want to know more about our approach to clinical trial data collaboration? Check out our guide on the subject.
Discover why the largest life sciences organizations – including 14 of the top 20 pharma/biotech firms, 6 of the top 10 CROs, and 14 of the top 20 medical device organizations – have counted on our world-class industry capabilities and experience with leading technology innovators. Our deep expertise in life sciences and digital technologies, including artificial intelligence and machine learning, helps transform the R&D process and deliver meaningful value to patients and healthcare professionals.
Contact us to learn about our life sciences and healthcare expertise and capabilities, and how we can help you transform your business.
Digital transformation in insurance isn’t slowing down. But here’s the good news: agents aren’t being replaced by technology. They’re being empowered by it. Agents are more essential than ever in delivering value. For insurance leaders making strategic digital investments, the opportunity lies in enabling agents to deliver personalized, efficient, and human-centered experiences at scale.
Drawing from recent industry discussions and real-world case studies, we’ve gathered insights to highlight four key themes where digital solutions are transforming agent effectiveness and unlocking measurable business value.
Customers want to feel seen, and they expect tailored advice with seamless service. When you deliver personalized experiences, you build stronger loyalty, increase engagement, and drive better results.
Look for platforms that bring all your customer data together and enable real-time personalization. This isn’t just about marketing. It’s a growth strategy.
Success In Action: Proving Rapid Value and Creating Better Member Experiences
Agents spend too much time on repetitive, low-value tasks. Automation can streamline these processes, allowing agents to focus on complex, high-value interactions that need a human touch.
Start with automation in the back-office to build confidence and demonstrate ROI. Then expand to customer-facing processes to enhance speed and service without sacrificing the personal feel.
Explore More: Transform Your Business With Cutting-Edge AI and Automation Solutions
Insurance is a document-heavy industry. Unlocking the value trapped in unstructured data is critical to enabling AI and smarter decision-making.
Prioritize digitization as a foundational investment. Without clean, accessible data, personalization and automation efforts will stall.
Related: Data-Driven Companies Move Faster and Smarter
The future of insurance distribution lies in human-AI collaboration. Agentic frameworks empower agents with intelligent prompts, decision support, and operational insights.
Start building toward a connected digital ecosystem where AI supports—not replaces—your teams. That’s how you can deliver empathetic, efficient, and accurate service.
You May Also Enjoy: Top 5 Digital Trends for Insurance in 2025
The carriers seeing the biggest wins are those that blend the precision of machines with human empathy. They’re transforming how agents engage, advise, and deliver value.
“If you don’t have data fabric, platform modernization, and process optimization, you can’t deliver personalization at scale. It’s a crawl, walk, run journey—but the results are real.”
Carriers and brokers count on us to help modernize, innovate, and win in an increasingly competitive marketplace. Our solutions power personalized omnichannel experiences and optimize performance across the enterprise.
We are trusted by leading technology partners and consistently mentioned by analysts. Discover why we have been trusted by 13 of the 20 largest P&C firms and 11 of the 20 largest annuity carriers. Explore our insurance expertise and contact us to learn more.
In today’s cloud-first world, building a secure, high-performing, resilient, and efficient infrastructure is more critical than ever. That’s where the AWS Well-Architected Framework comes in: a powerful guide designed to help architects and developers make informed decisions and build better cloud-native solutions.
The AWS Well-Architected Framework provides a consistent approach for evaluating and improving your cloud architecture. It’s built around six core pillars that represent key areas of focus for building robust and scalable systems:
Explore the AWS Well-Architected Framework here https://aws.amazon.com/architecture/well-architected
From time to time, AWS makes changes to the framework and introduces new resources, which we can follow to apply the framework more effectively to our use cases and build better architectures.
To help you apply these principles, AWS offers the Well-Architected Tool—a free service that guides you through evaluating your workloads against the six pillars.
Try the AWS Well-Architected Tool here https://aws.amazon.com/well-architected-tool/
Cost Optimization is not just about cutting costs—it’s about maximizing value. It ensures that your cloud investments align with business goals and scale efficiently.
Why It Matters:
The AWS Well-Architected Framework is more than a checklist—it’s a mindset. By embracing its principles, especially cost optimization, you can build cloud environments that are not only efficient and scalable but also financially sustainable.
With the advancement of technology, machine learning, and AI capabilities in the customer care space, customer expectations are evolving faster than ever before. Customers expect smoother, context-aware, personalized, and generally more effective and faster experiences across channels when contacting a support center.
This calls for a need to revisit and redefine the success metrics for a Contact Center as a Service (CCaaS) strategy.
Let’s break this down into two categories. The first category includes key metrics that are still essential to measure; the standards for these metrics, though, have been raised, and the way they are measured has evolved. The second category introduces new metrics that are emerging because of advanced CCaaS capabilities in a modern contact center landscape.
Customer Satisfaction (CSAT) remains a cornerstone success metric. Every improvement a customer service center is looking to make, from improving operational efficiencies to enhancing agent and customer experience, will directly or indirectly impact the customer and is aimed at elevating the customer experience. With automated personalized journeys being an important part of modern customer service, it is important to monitor real-time analytics on automated journeys in addition to live agent interactions. This helps better understand the customer experience and find opportunities to fine-tune the friction points to improve customer satisfaction. Customer service is not only about resolving customer issues, but also about providing an effortless experience.
First Contact Resolution is still a key success metric in the CCaaS space, but modern tools can revolutionize how far a customer service center can go to improve this metric, so the standards for this metric have risen. Passing context effectively across channels, real-time monitoring, predictive analytics and insights, and proactive outreach can increase the likelihood of addressing customer needs on the first contact or even sometimes without the need for a live agent interaction.
Customer Retention Rate metric has been revamped with the advancement of technology in customer service. Advanced predictive analytics can help track the customer experience throughout their journey and shed light on the underlying customer behavior patterns. This will enable proactive engagement strategies personalized to every customer. Real-time sentiment analysis can provide instant feedback to the customer service representatives and their supervisors to give them a chance to course correct immediately in order to shift the sentiment to a positive experience and retain customers.
Agent Experience and Satisfaction has a direct impact on the operation of a contact center and hence the customer experience. Traditionally, this metric was not tracked broadly as an important metric to measure a successful contact center strategy. However, we know today that agent experience and satisfaction is a key metric for transforming contact centers from cost centers into revenue generating units. Contact centers can leverage modern tools in different areas from agent performance monitoring, training and identifying knowledge gaps to providing automated workflows and real-time agent assistance, to elevate the agent experience.
These strategies and tools help agents become more effective and productive while providing service. Satisfied agents are more motivated to help customers effectively. This can improve metrics like First Contact Resolution rate and Average Handle Time. Happy and productive agents are more likely to engage positively with customers to discuss potential cross-sell and upsell opportunities. Moreover, agent turnover and the cost associated with that will be lowered due to the reduced burden of onboarding and training new agents regularly and constantly being short of staff.
Sentiment Analysis and Real-time Interaction Quality provide immediate insights to contact center representatives about the customer’s emotions, the conversation tone, and the effectiveness of their interactions. This will help the contact center representatives to refine their interaction strategy on the spot to maintain a positive and effective engagement with the customer. This transforms contact centers into emotionally intelligent, customer-focused support centers. It makes a huge difference at a time when the quality of experience matters as much as the outcome.
Predictive Analysis Accuracy represents an entirely new set of metrics for a modern contact center that leverages predictive analytics in its operation. It is crucial to measure this metric and evaluate the accuracy of the forecasts against customer behavior and demands as well as the agent workflow needs. Inaccurate predictions are not only ineffective but can also be harmful to contact center operations. They can lead to poor decision making, confusion, and disappointing customer experiences. Accuracy in the anticipation of customer needs can enable proactive outreach, positive and effective interactions, less friction points and reduced service contacts while facilitating effective automatic upsell and cross-sell initiatives.
Technology Utilization Rate is an important metric to track in a modern and evolving customer care solution. While with the latest technological advancements a lot of intelligent automation and enhancements can be made within a CCaaS solution, a contact center strategy is required to identify the most impactful modern capabilities for every customer service operation. The strategy needs to incorporate tracking the success of the technology adoption through system usage data and adoption metrics. This ensures that technology is being leveraged effectively and is providing value to business. The technology utilization tracking can also reveal training and adoption gaps, ensuring that modern tools are not just implemented for the sake of innovation, but are actively contributing to improved efficiency within a contact center.
The development of advanced native capabilities and integration of modern tools within CCaaS platforms are revolutionizing the customer care industry and reshaping customer expectations. Staying ahead of this shift is crucial. While utilizing these advancements to achieve operational efficiencies, it is equally important to redefine the success metrics that provide businesses with insights and feedback on a modern CCaaS strategic roadmap. Adopting a fresh approach to capturing traditional metrics like Customer Satisfaction Scores and First Contact Resolution, combined with measuring new metrics such as Real-time Interaction Quality and Predictive Analysis Accuracy will offer a comprehensive view of a contact center’s maturity and its progress towards a successful and effective modern CCaaS solution.
We can measure these metrics by utilizing built-in monitoring and analytical tools of modern CCaaS platforms along with AI-powered services integrations for features like Sentiment and Real-time Quality Analysis. We can gather regular feedback and data from agents and automated tracking tools to monitor system usability and efficiency. All this data can be streamed and displayed on a unified custom analytics dashboard, providing a comprehensive view of contact center performance and effectiveness.