Data + Intelligence Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/data-intelligence/

Getting Started with Python for Automation
https://blogs.perficient.com/2025/12/09/getting-started-with-python-for-automation/ | Tue, 09 Dec 2025 14:00:21 +0000

Automation has become a core part of modern work, allowing teams to reduce repetitive tasks, save time, and improve accuracy. Whether it’s generating weekly reports, organizing files, processing large amounts of data, or interacting with web applications, automation helps individuals and companies operate more efficiently. Among all programming languages used for automation, Python is one of the most widely adopted because of its simplicity and flexibility. 

Why Python Is Perfect for Automation 

Python is known for having a clean and readable syntax, which makes it easy for beginners to start writing scripts without needing deep programming knowledge. The language is simple enough for non-developers, yet powerful enough for complex automation tasks. Another major advantage is the availability of thousands of libraries. These libraries allow Python to handle file operations, manage Excel sheets, interact with APIs, scrape websites, schedule tasks, and even control web browsers – all with minimal code. Because of this, Python becomes a single tool capable of automating almost any repetitive digital task. 

What You Can Automate with Python 

Python can automate everyday tasks that would otherwise require significant manual effort. Simple tasks like renaming multiple files, organizing folders, or converting file formats can be completed instantly using small scripts. It is also commonly used for automating Excel-based workflows, such as cleaning datasets, merging sheets, generating monthly summaries, or transforming data between formats. Python is equally powerful for web-related automation: collecting data from websites, making API calls, sending automated emails, downloading content, and filling out online forms. For more advanced uses, Python can also automate browser testing, server monitoring, and deployment processes. 

Setting Up Your Python Automation Environment 

Getting started is straightforward. After installing Python, you can use an editor like VS Code or PyCharm to write your scripts. Libraries required for automation can be installed using a single command, making setup simple. Once you have your environment ready, writing your first script usually takes only a few minutes. For example, a short script can rename files in a folder, send an email, or run a function at a specific time of the day. Python’s structure is beginner-friendly, so even basic programming knowledge is enough to start automating everyday tasks. 
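As an illustration of how little code a first script needs, here is a minimal sketch that renames every .txt file in a folder. The folder path and naming pattern are placeholders you would adjust for your own machine.

```python
from pathlib import Path

# Folder path and naming pattern are placeholders -- adjust for your machine.
folder = Path("C:/reports")

# Give every .txt file a consistent, numbered name like report_001.txt.
for index, file in enumerate(sorted(folder.glob("*.txt")), start=1):
    new_name = folder / f"report_{index:03d}.txt"
    file.rename(new_name)
    print(f"Renamed {file.name} -> {new_name.name}")
```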

Examples of Simple Automation 

A typical example is a script that automatically renames files. Instead of renaming hundreds of files one by one, Python can loop through the folder and rename them instantly. Another example is an automated email script that can send daily reminders or reports. Python can also schedule tasks so that your code runs every morning, every hour, or at any time you choose. These examples show how even small scripts can add real value to your workflow by reducing repetitive manual tasks. 
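To make the email and scheduling examples concrete, below is a minimal sketch of a daily reminder email. It assumes the third-party schedule library (pip install schedule), and the addresses, server, and credentials are placeholders; in real use, credentials should come from a secure store, not the script itself.

```python
import smtplib
import time
from email.message import EmailMessage

import schedule  # third-party: pip install schedule

def send_daily_reminder() -> None:
    # Addresses, server, and credentials below are placeholders.
    msg = EmailMessage()
    msg["Subject"] = "Daily reminder"
    msg["From"] = "bot@example.com"
    msg["To"] = "team@example.com"
    msg.set_content("This is your automated daily reminder.")

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("bot@example.com", "app-password")
        server.send_message(msg)

# Run the job every morning at 08:00, then keep the script alive.
schedule.every().day.at("08:00").do(send_daily_reminder)

while True:
    schedule.run_pending()
    time.sleep(60)
```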

Best Practices When Building Automation 

As you begin writing automation scripts, it helps to keep the code organized and reliable. Using virtual environments ensures that your project libraries remain clean. Adding error-handling prevents scripts from stopping unexpectedly. Logging enables you to track what your script does and when it executes. Once your automation is ready, you can run it automatically using tools like Task Scheduler on Windows or cron on Linux, so the script works in the background without your involvement. 
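A minimal sketch of these practices is shown below: it wraps each task in try/except so one bad input does not stop the whole run, and it logs every outcome to a file. The file list and line-counting task are placeholders.

```python
import logging

# Write to a log file so every run leaves an audit trail.
logging.basicConfig(
    filename="automation.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def process_file(path: str) -> None:
    # Placeholder task: count the lines in a file.
    with open(path) as f:
        line_count = sum(1 for _ in f)
    logging.info("%s has %d lines", path, line_count)

def main() -> None:
    for path in ["data1.csv", "data2.csv"]:  # placeholder file list
        try:
            process_file(path)
        except FileNotFoundError:
            # Log and continue instead of letting one bad file stop the run.
            logging.error("File not found: %s", path)
        except Exception:
            logging.exception("Unexpected error while processing %s", path)

if __name__ == "__main__":
    main()
```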

How Companies Use Python Automation 

Python automation is widely used across industries. IT teams rely on it to monitor servers, restart services, and handle deployment tasks. Business teams use it to generate reports, clean data, update dashboards, and manage document workflows. Marketing teams use automation for scraping competitor information, scheduling social media posts, or tracking engagement. For developers, Python helps with testing, error checking, and system integration via APIs. Across all these areas, automation improves efficiency and reduces human error. 

Conclusion 

Python is an excellent starting point for anyone who wants to begin automating daily tasks. Its simplicity, combined with its powerful ecosystem of libraries, makes it accessible to beginners and useful for professionals. Even basic automation scripts can save hours of work, and as you grow more comfortable, you can automate more complex processes involving data, web interactions, and system management. Learning Python for automation not only makes your work easier but also adds valuable skills for professional growth. 

 

AI and the Future of Financial Services UX
https://blogs.perficient.com/2025/12/01/ai-banking-transparency-genai-financial-ux/ | Mon, 01 Dec 2025 18:00:28 +0000

I think about the early ATMs now and then. No one knew the “right” way to use them. I imagine a customer in the 1970s standing there, card in hand, squinting at this unfamiliar machine and hoping it would give something back; trying to decide if it really dispensed cash…or just ate cards for sport. That quick panic when the machine pulled the card in is an early version of the same confusion customers feel today in digital banking.

People were not afraid of machines. They were afraid of not understanding what the machine was doing with their money.

Banks solved it by teaching people how to trust the process. They added clear instructions, trained staff to guide customers, and repeated the same steps until the unfamiliar felt intuitive. 

However, the stakes and complexity are much higher now, and AI for financial product transparency is becoming essential to an optimized banking UX.

Today’s banking customer must navigate automated underwriting, digital identity checks, algorithmic risk models, hybrid blockchain components, and disclosures written in a language most people never use. Meanwhile, the average person is still struggling with basic money concepts.

FINRA reports that only 37% of U.S. adults can answer four out of five financial literacy questions (FINRA Foundation, 2022).

Pew Research finds that only about half of Americans understand key concepts like inflation and interest (Pew Research Center, 2024).

Financial institutions are starting to realize that clarity is not a content task or a customer service perk. It is structural. It affects conversion, compliance, risk, and trust. It shapes the entire digital experience. And AI is accelerating the pressure to treat clarity as infrastructure.

When customers don’t understand, they don’t convert. When they feel unsure, they abandon the flow. 

 

How AI Is Improving UX in Banking (and Why Institutions Need It Now)

Financial institutions often assume customers will “figure it out.” They will Google a term, reread a disclosure, or call support if something is unclear. In reality, most customers simply exit the flow.

The CFPB shows that lower financial literacy leads to more mistakes, higher confusion, and weaker decision-making (CFPB, 2019). And when that confusion arises during a digital journey, customers quietly leave without resolving their questions.

This means every abandoned application costs money. Every misinterpreted term creates operational drag. Every unclear disclosure becomes a compliance liability. Institutions consistently point to misunderstanding as a major driver of complaints, errors, and churn (Lusardi et al., 2020).

Sometimes it feels like the industry built the digital bank faster than it built the explanation for it.

Where AI Makes the Difference

Many discussions about AI in financial services focus on automation or chatbots, but the real opportunity lies in real-time clarity. Clarity that improves financial product transparency and streamlines customer experience without creating extra steps.

In-context Explanations That Improve Understanding

Research in educational psychology shows people learn best when information appears the moment they need it. Mayer (2019) demonstrates that in-context explanations significantly boost comprehension. Instead of leaving the app to search unfamiliar terms, customers receive a clear, human explanation on the spot.

Consistency Across Channels

Language in banking is surprisingly inconsistent. Apps, websites, advisors, and support teams all use slightly different terms. Capgemini identifies cross-channel inconsistency as a major cause of digital frustration (Capgemini, 2023). A unified AI knowledge layer solves this by standardizing definitions across the system.

Predictive Clarity Powered by Behavioral Insight

Patterns like hesitation, backtracking, rapid clicking, or form abandonment often signal confusion. Behavioral economists note these patterns can predict drop-off before it happens (Loibl et al., 2021). AI can flag these friction points and help institutions fix them.

24/7 Clarity, Not 9–5 Support

Accenture reports that most digital banking interactions now occur outside of business hours (Accenture, 2023). AI allows institutions to provide accurate, transparent explanations anytime, without relying solely on support teams.

At its core, AI doesn’t simplify financial products. It translates them.

What Strong AI-Powered Customer Experience Looks Like

Onboarding that Explains Itself

  • Mortgage flows with one-sentence escrow definitions.
  • Credit card applications with visual explanations of usage.
  • Hybrid products that show exactly what blockchain is doing behind the scenes.

The CFPB shows that simpler, clearer formats directly improve decision quality (CFPB, 2020).

A Unified Dictionary Across Channels

The Federal Reserve emphasizes the importance of consistent terminology to help consumers make informed decisions (Federal Reserve Board, 2021). Some institutions now maintain a centralized term library that powers their entire ecosystem, creating a cohesive experience instead of fragmented messaging.

Personalization Based on User Behavior

Educational nudges, simplified paths, multilingual explanations. Research shows these interventions boost customer confidence (Kozup & Hogarth, 2008). 

Transparent Explanations for Hybrid or Blockchain-backed Products

Customers adopt new technology faster when they understand the mechanics behind it (University of Cambridge, 2021). AI can make complex automation and decentralized components understandable.

The Urgent Responsibilities That Come With This

 

GenAI can mislead customers without strong data governance and oversight. Poor training data, inconsistent terminology, or unmonitored AI systems create clarity gaps. That’s a problem because those gaps can become compliance issues. The Financial Stability Oversight Council warns that unmanaged AI introduces systemic risk (FSOC, 2023). The CFPB also emphasizes the need for compliant, accurate AI-generated content (CFPB, 2024).

Customers are also increasingly wary of data usage and privacy. Pew Research shows growing fear around how financial institutions use personal data (Pew Research Center, 2023). Trust requires transparency.

Clarity without governance is not clarity. It’s noise.

And institutions cannot afford noise.

What Institutions Should Build Right Now

To make clarity foundational to customer experience, financial institutions need to invest in:

  • Modern data pipelines to improve accuracy
  • Consistent terminology and UX layers across channels
  • Responsible AI frameworks with human oversight
  • Cross-functional collaboration between compliance, design, product, and analytics
  • Scalable architecture for automated and decentralized product components
  • Human-plus-AI support models that enhance, not replace, advisors

When clarity becomes structural, trust becomes scalable.

Why This Moment Matters

I keep coming back to the ATM because it perfectly shows what happens when technology outruns customer understanding. The machine wasn’t the problem. The knowledge gap was. Financial services are reliving that moment today.

Customers cannot trust what they do not understand.

And institutions cannot scale what customers do not trust.

GenAI gives financial organizations a second chance to rebuild the clarity layer the industry has lacked for decades, and not as marketing. Clarity, in this new landscape, truly is infrastructure.


References 

  • Accenture. (2023). Banking top trends 2023. https://www.accenture.com
  • Capgemini. (2023). World retail banking report 2023. https://www.capgemini.com
  • Consumer Financial Protection Bureau. (2019). Financial well-being in America. https://www.consumerfinance.gov
  • Consumer Financial Protection Bureau. (2020). Improving the clarity of mortgage disclosures. https://www.consumerfinance.gov
  • Consumer Financial Protection Bureau. (2024). Supervisory highlights: Issue 30. https://www.consumerfinance.gov
  • Federal Reserve Board. (2021). Consumers and mobile financial services. https://www.federalreserve.gov
  • FINRA Investor Education Foundation. (2022). National financial capability study. https://www.finrafoundation.org
  • Financial Stability Oversight Council. (2023). Annual report. https://home.treasury.gov
  • Kozup, J., & Hogarth, J. (2008). Financial literacy, public policy, and consumers’ self-protection. Journal of Consumer Affairs, 42(2), 263–270.
  • Loibl, C., Grinstein-Weiss, M., & Koeninger, J. (2021). Consumer financial behavior in digital environments. Journal of Economic Psychology, 87, 102438.
  • Lusardi, A., Mitchell, O. S., & Oggero, N. (2020). The changing face of financial literacy. University of Pennsylvania, Wharton School.
  • Mayer, R. (2019). The Cambridge handbook of multimedia learning. Cambridge University Press.
  • Pew Research Center. (2023). Americans and data privacy. https://www.pewresearch.org
  • Pew Research Center. (2024). Americans and financial knowledge. https://www.pewresearch.org
  • University of Cambridge. (2021). Global blockchain benchmarking study. https://www.jbs.cam.ac.uk
Sitecore Content SDK: What It Offers and Why It Matters
https://blogs.perficient.com/2025/11/19/sitecore-content-sdk-what-it-offers-and-why-it-matters/ | Wed, 19 Nov 2025 15:08:05 +0000

Sitecore has introduced the Content SDK for XM Cloud (now Sitecore AI) to streamline fetching content and rendering it in modern JavaScript front-end applications. If you’re building a website on Sitecore AI, the new Content SDK is the modern, recommended tool for your development team.

Think of it as a specialized, lightweight toolkit built for one specific job: getting content from Sitecore AI and displaying it on your modern frontend application (like a site built with Next.js).

Because it’s purpose-built for Sitecore AI, it’s fast, efficient, and doesn’t include a lot of extra baggage. It focuses purely on the essential “headless” task of fetching and rendering content.

What About the JSS SDK?

This is the original toolkit Sitecore created for headless development.

The key difference is that the JSS SDK was designed to be a one-size-fits-all solution. It had to support both the new, headless Sitecore AI and Sitecore’s older, all-in-one platform, Sitecore XP/XM.

To do this, it had to include extra code and dependencies to support older features, like the “Experience Editor”. This makes the JSS SDK “bulkier” and more complex. If you’re only using Sitecore AI, you’re carrying around a lot of extra weight you simply don’t need.

The Sitecore Content SDK is the modern, purpose-built toolkit for developers using Sitecore AI, providing seamless, out-of-the-box integration with the platform’s most powerful capabilities. This includes seamless visual editing that empowers marketers to build and edit pages in real-time, as well as built-in hooks for personalization and analytics that simplify the delivery and tracking of targeted user experiences. For developers, it provides GraphQL utilities to streamline data fetching and is deeply optimized for Next.js, enabling high-performance features like server-side rendering. Furthermore, with the recent introduction of App Router support (in beta), the SDK is evolving to give developers even more granular control over performance, SEO, bundle sizes, and security through a more modern, modular code structure.

What does the Content SDK offer?

1) App Router support (v1.2)

With version 1.2.0, the Sitecore Content SDK introduces App Router support in beta. While the full-fledged stable release is expected soon, developers can already start exploring its benefits and workflow with version 1.2.
This isn’t just a minor update; it’s a huge step toward making your front-end development more flexible and highly optimized.

Why should you care?
The App Router introduces a fantastic change to your starter application’s code structure and how routing works. Everything becomes more modular and declarative, aligning perfectly with modern architecture practices. This means defining routes and layouts is cleaner, content fetching is neatly separated from rendering, and integrating complex Next.js features like dynamic routes is easier than ever. Ultimately, this shift makes your applications much simpler to scale and maintain as they grow on Sitecore AI.

Performance: Developers can fine-tune route handling with nested layouts and more aggressive, granular caching, seriously boosting overall performance and load times.

Bundle Size: Bundles are smaller because the App Router uses React Server Components (RSC), which fetch and render components on the server without shipping their code in the client bundle.

Security: It improves control over access to specific routes and content.

With the starter kit applications, this is what the App Router routing structure looks like:

[Image: App Router routing structure in the starter application]

 

2) New configs – sitecore.config.ts & sitecore.cli.config.ts

The sitecore.config.ts file, located in the root of your application, acts as the central configuration point for Content SDK projects. It replaces the older temp/config file used by the JSS SDK. It exposes properties that can be used throughout the application simply by importing the file, including important settings such as the site name, defaultLanguage, and Edge properties like the context ID. Starter templates include a very lightweight version containing only the mandatory parameters necessary to get started, and developers can easily extend this file as the project grows and requires more specific settings.

Key Aspects:

Environment Variable Support: This file is designed for deployment flexibility using a layered approach. Any configuration property present in this file can be sourced in three ways, listed in order of priority:

  1. Explicitly defined in the configuration file itself.
  2. Fallback to a corresponding environment variable (ideal for deployment pipelines).
  3. Use a default value if neither of the above is provided.

This layered approach ensures flexibility and simplifies deployment across environments.

 

The sitecore.cli.config.ts file is dedicated to defining and configuring the commands and scripts used during the development and build phases of a Content SDK project.

Key Aspects:

CLI Command Configuration: It dictates the commands that execute as part of the build process, such as generateMetadata() and generateSites(), which are essential for generating Sitecore-related data and metadata for the front-end.

Component Map Generation: This file manages the configuration for the automatic component map generation. This process is crucial for telling Sitecore how your front-end components map to the content structure, allowing you to specify file paths to scan and define any files or folders to exclude. Explored further below.

Customization of Build Process: It allows developers to customize the Content SDK’s standard build process by adding their own custom commands or scripts to be executed during compilation.

While sitecore.config.ts handles the application’s runtime settings (like connection details to Sitecore AI), sitecore.cli.config.ts works in conjunction to handle the development-time configuration required to prepare the application for deployment.

[Image: sitecore.cli.config.ts example]

 

3) Component map

In Sitecore Content SDK-based applications, every custom component must be manually registered in the .sitecore/component-map.ts file located in the app’s root. The component map is a registry that explicitly links Sitecore renderings to their corresponding frontend component implementations: it tells the Content SDK which frontend component to render for each rendering it receives from Sitecore. When a rendering is added to a page via presentation details, the component map determines which frontend component renders in its place.

Key Aspects:

Unlike JSS implementations, which map components automatically, the Content SDK’s explicit component map enables better tree-shaking: your final production bundle will only include the components you have actually registered and used, resulting in smaller, more efficient applications.

This is what it looks like (once you create a custom component, you must add its name here to register it):

[Image: component-map.ts example]

 

4) Import map

The import map is a tool used specifically by the Content SDK’s code generation feature. It manages the import paths of components that are generated or used during the build process, acting as a guide for the code generation engine and guaranteeing that any newly created code correctly and consistently references your existing component modules.
Where it is: a generated file, typically found at ./sitecore/import-map.ts, that serves as an internal manifest for the build process. You generally do not need to edit this file manually.

The import map generation process is configurable via the sitecore.cli.config.ts file. This allows developers to customize the directories scanned for components.

 

5) defineMiddleware in the Sitecore Content SDK

defineMiddleware is a utility for composing a middleware chain in your Next.js app. It gives you a clean, declarative way to handle cross-cutting concerns like multi-site routing, personalization, redirects, and security all in one place. This centralization aligns perfectly with modern best practices for building scalable, maintainable applications.

The JSS SDK leverages a “middleware plugin” pattern. This system was effective for its time, allowing logic to be separated into distinct files. However, it often required developers to manually manage the ordering and chaining of multiple files, which could become complex and less transparent as the application grew. The Content SDK streamlines this process by moving the composition logic into a single, highly readable utility that can easily be customized by extending the middleware.

[Image: defineMiddleware configuration example]

 

6) Debug Logging in Sitecore Content SDK

Debug logging helps you see what the SDK is doing under the hood. It is super useful for troubleshooting layout/dictionary fetches, multisite routing, redirects, personalization, and more. The Content SDK uses the standard DEBUG environment variable pattern to enable logging by namespace, so you can selectively turn on logging for only the areas you need to troubleshoot, such as content-sdk:layout (for layout service details) or content-sdk:dictionary (for dictionary service details).
For all available namespaces and parameters, refer to the Sitecore documentation: https://doc.sitecore.com/sai/en/developers/content-sdk/debug-logging-in-content-sdk-apps.html#namespaces

 

7) Editing & Preview

In the context of Sitecore’s development platform, editing and preview render optimization with the Content SDK involves leveraging middleware, architecture, and framework-specific features to improve the performance of rendering content in editing and preview modes. The primary goal is to provide a fast and responsive editing experience for marketers using tools like Sitecore AI Pages and the Design Library.

EditingRenderMiddleware: The Content SDK for Next.js includes optimized middleware for editing scenarios. Instead of a multi-step process involving redirects, the optimized middleware performs an internal, server-side request to return the HTML directly. This reduces overhead and speeds up rendering significantly.

This feature works out of the box in most environments: local container, Vercel / Netlify, and Sitecore AI (defaults to localhost as configured).

For custom setups, override the internal host with: SITECORE_INTERNAL_EDITING_HOST_URL=https://host
This leverages integration with XM Cloud/Sitecore AI Pages for visual editing and testing of components.

 

8) SitecoreClient

The SitecoreClient class in the Sitecore Content SDK is a centralized data-fetching service that simplifies communication with your Sitecore content backend, typically Experience Edge or the preview endpoint, via GraphQL.
Instead of calling multiple services separately, SitecoreClient lets you make one organized request to fetch everything needed for a page: layout, dictionary, redirects, personalization, and more.

Key Aspect:

Unified API: One client to access layout, dictionary, sitemap, robots.txt, redirects, error pages, multi-site, and personalization.
To understand all key methods supported, please refer to the Sitecore documentation: https://doc.sitecore.com/sai/en/developers/content-sdk/the-sitecoreclient-api.html#key-methods

[Image: SitecoreClient key methods]

9) Built-In Capabilities for Modern Web Experiences

  • GraphQL Utilities: Easily fetch content, layout, dictionary entries, and site info from Sitecore AI’s Edge and Preview endpoints.
  • Personalization & A/B/n Testing: Deploy multiple page or component variants to different audience segments (e.g., by time zone or language) with no custom code.
  • Multi-site Support: Seamlessly manage and serve content across multiple independent sites from a single Sitecore AI instance.
  • Analytics & Event Tracking: Integrated support via the Sitecore Cloud SDK for capturing user behavior and performance metrics.
  • Framework-Specific Features: Includes Next.js locale-based routing for internationalization, and supports both SSR and SSG for flexible rendering strategies.

 

10) Cursor for AI development

Starting with Content SDK version 1.1, Sitecore provides comprehensive “Cursor rules” to facilitate AI-powered development.
The integration gives Cursor sufficient context about the Content SDK ecosystem and Sitecore development patterns, and this set of rules and context helps accelerate development. The Cursor rules ship with the Content SDK starter application under the .cursor folder. This enables the AI to better assist developers with tasks specific to building headless Sitecore components, improving development consistency and speed: developers can follow the same patterns just by providing a few commands in generic terms. The screenshot below shows an example for a Hero component, which can act as a pattern for Cursor to create other, similar components.

[Image: Cursor rules used to generate a Hero component]

 

11) Starter Templates and Example Applications

To accelerate development and reduce setup time, the Sitecore Content SDK includes a set of starter templates and example applications designed for different use cases and development styles.
The SDK provides a Next.js JavaScript starter template that enables rapid integration with Sitecore AI. This template is optimized for performance, scalability, and best practices in modern front-end development.

Starter applications in the examples folder:

basic-nextjs - A minimal Next.js application showcasing how to fetch and render content from Sitecore AI using the Content SDK. Ideal for SSR/SSG use cases and developers looking to build scalable, production-ready apps.

basic-spa - A single-page application (SPA) example that demonstrates client-side rendering and dynamic content loading. Useful for lightweight apps or scenarios where SSR is not required.

Other demo sites showcase Sitecore AI capabilities using the Content SDK:

kit-nextjs-article-starter

kit-nextjs-location-starter

kit-nextjs-product-starter

kit-nextjs-skate-park

 

Final Thoughts

The Sitecore Content SDK represents a major leap forward for developers building on Sitecore AI. Unlike the older JSS SDK, which carried legacy dependencies, the Content SDK is purpose-built for modern headless architectures—lightweight, efficient, and deeply optimized for frameworks like Next.js. With features like App Router support, runtime and CLI configuration flexibility, and explicit component mapping, it empowers teams to create scalable, high-performance applications while maintaining clean, modular code structures.

Chandra OCR: The BEST in Open-Source AI Document Parsing
https://blogs.perficient.com/2025/11/19/chandra-ocr-open-source-document-parsing/ | Wed, 19 Nov 2025 13:31:58 +0000

In the specialized field of Optical Character Recognition (OCR), a new open-source model from Datalab is setting a new benchmark for accuracy and versatility. Chandra OCR, released in October 2025, has rapidly ascended to the top of the leaderboards, outperforming even proprietary giants like GPT-4o and Gemini Pro on key benchmarks.

Beyond Simple Text Extraction

Chandra is not just another OCR tool; it’s a comprehensive document AI solution. Unlike traditional pipeline-based approaches that process documents in chunks, Chandra utilizes full-page decoding. This allows it to understand the entire context of a page, leading to significant improvements in accuracy and layout awareness.

Key Capabilities:

  • Layout-Aware Output: Chandra preserves the original document structure, outputting to Markdown, HTML, or JSON with remarkable fidelity.
  • Image & Figure Extraction: It can identify, caption, and extract images and figures from within a document.
  • Advanced Language Support: Chandra supports over 40 languages and can even read handwritten text, making it a truly global solution.
  • Specialized Content: The model excels at handling complex content, including mathematical equations and intricate tables.

Unrivaled Performance

Category       | Score | Rank
Tables         | 88.0  | #1
Old Scans Math | 80.3  | #1
Old Scans      | 50.4  | #1
Long Tiny Text | 92.3  | #1
Base Documents | 99.9  | Near-Perfect

Chandra’s performance on the independent olmOCR benchmark is nothing short of revolutionary. With an overall score of 83.1%, it has established a new state-of-the-art for open-source OCR models.

[Image: Chandra OCR benchmark ranking]
Source: https://medium.com/data-science-in-your-pocket/chandra-ocr-beats-deepseek-ocr-47267b6f4895

Accessible and Production-Ready

Datalab has made Chandra widely accessible. It is available as an open-source project on GitHub and Hugging Face, and also as a hosted API with a free tier for developers to get started. For high-throughput applications, quantized versions of the model are available for on-premises deployment, capable of processing up to 4 pages per second on an H100 GPU.

Why Chandra OCR Matters

The release of Chandra OCR is a watershed moment for document AI. It provides a free, open-source, and commercially viable alternative to expensive proprietary solutions, without compromising on performance. For developers and businesses that rely on accurate and structured data extraction, Chandra OCR is a game-changer.


Cross-posted from https://www.linkedin.com/pulse/chandra-ocr-best-open-source-ai-document-parsing-matthew-aberham-3fx1e

Interview with Prasad Sogalad: Becoming a Databricks Partner Champion
https://blogs.perficient.com/2025/11/18/interview-with-prasad-sogalad-becoming-a-databricks-partner-champion/ | Tue, 18 Nov 2025 15:18:05 +0000

The Databricks Partner Champion Program recognizes individuals who demonstrate technical mastery, thought leadership, and a commitment to advancing the data and AI ecosystem. We sat down with Perficient lead technical consultant Prasad Sogalad, recently named a Databricks Champion, to learn about his journey, insights, and advice for aspiring professionals.

Q: What does it mean to you to be recognized as a Partner Champion?

Prasad: Personally, this recognition validates a sustained commitment to continuous technical excellence and dedicated execution across client engagements.

Professionally, it provides privileged access to strategic intelligence and platform innovation roadmaps. This positioning enables proactive integration of emerging capabilities into client architectures, delivering competitive differentiation through early adoption.

Q: How have you contributed to Databricks’ growth with key clients or markets?

Prasad: My contributions center on driving legacy infrastructure modernization through Lakehouse architecture implementation across strategic vertical markets. For example, I provided architectural leadership for an engagement with a Tier-1 healthcare institution that achieved a 40% improvement in ETL pipeline throughput while reducing costs through compute optimization and Delta Lake strategies.

Q: What technical or business skills were key to achieving this recognition?

Prasad: Recognition as a Databricks Champion requires mastery across both technical and strategic competency dimensions:

Technical Depth in Data Engineering & AI

Comprehensive expertise across the data engineering technology stack, including Apache Spark optimization techniques, Delta Lake transactional architecture, Unity Catalog governance frameworks, and MLOps workflow patterns. This extends to advanced capabilities in performance tuning, cost optimization, cluster configuration, and architectural pattern selection optimized for specific use case requirements and scale characteristics. 

Architectural Vision & Business Alignment

The ability to decompose complex, multi-faceted business challenges—such as fraud detection systems, supply chain visibility platforms, or regulatory compliance reporting—into scalable, production-ready Lakehouse implementations. This requires translating high-level stakeholder requirements and strategic business objectives into technically sound, maintainable architectures that deliver measurable ROI and sustainable competitive advantage.

Q: What advice would you give to someone aiming to follow a similar path?

Prasad: Success requires transcending basic platform utilization to achieve true ecosystem mastery. I recommend a three-pronged approach:

  1. Be a Builder—Develop Engineering Excellence: Move beyond notebook-based experimentation to production-grade engineering practices. This requires developing a comprehensive understanding of Delta Lake internals, mastering advanced Spark optimization techniques for performance and cost efficiency, and implementing robust infrastructure-as-code practices using tools like Terraform and CI/CD pipelines. Focus on building solutions that demonstrate operational excellence, scalability, and maintainability rather than proof-of-concept demonstrations.
  2. Learn Governance—Master Unity Catalog: Develop deep expertise in Unity Catalog architecture, including fine-grained access control patterns, data lineage tracking, and compliance framework implementation. As regulatory requirements intensify and data mesh architectures proliferate across enterprises, governance capabilities become increasingly critical differentiators in client engagements. Demonstrating mastery of security, privacy, and compliance controls positions you as a trusted advisor for enterprise-grade implementations.
  3. Teach What You Know—Engage in Community Leadership: While technical certifications validate knowledge acquisition, Champion recognition requires demonstrated leadership through active knowledge dissemination. Contribute to the community ecosystem through mentorship programs, technical blog posts, conference presentations, or user group facilitation. This external visibility and commitment to elevating others’ capabilities distinguishes practitioners and accelerates the path to Champion recognition.

Q: Are there any recent trends or innovations in Databricks that excite you?

Prasad: I am particularly excited about the convergence of two transformative platform innovation vectors:

LLMs/Generative AI Integration

The integration of advanced AI capabilities within the Databricks platform, particularly through MosaicML and the introduction of native tooling for fine-tuning and deploying large language models directly on the Lakehouse, represents a paradigm shift in enterprise AI development. These capabilities democratize access to Generative AI by enabling organizations to build, customize, and deploy proprietary LLM applications within their existing data infrastructure, eliminating complex cross-platform integrations and data movement overhead while maintaining governance and security controls. This positions the Lakehouse as a comprehensive platform for both traditional analytics and cutting-edge AI workloads.

Databricks Lakebase

The introduction of a fully managed PostgreSQL service represents a fundamental architectural evolution. By providing native transactional database capabilities within the Lakehouse, Databricks eliminates the traditional separation between operational (OLTP) and analytical (OLAP) data stores. This architectural consolidation allows transactional data to reside directly alongside analytical datasets within a unified Lakehouse infrastructure, dramatically simplifying system architecture, reducing data movement latency, and minimizing pipeline complexity. This advancement moves the industry significantly closer to realizing the vision of a truly unified data platform capable of supporting the complete spectrum of enterprise data workloads—from high-velocity transactional systems to complex analytical processing—within a single, governed environment.

Q: Now that you’ve received this recognition, what are your plans?

Prasad: My roadmap focuses on platform enablement and IP adoption. I plan to lead initiatives that drive adoption of proprietary frameworks like ingestion and orchestration systems, and host optimization workshops dedicated to Spark performance and FinOps strategies. These efforts will empower teams and clients to maximize the value of Databricks.

Congratulations Prasad!

We’re proud of Prasad’s achievement and thrilled to add him to our growing list of Databricks Champions. His journey underscores the importance of deep technical expertise, strategic vision, and community engagement.

Perficient and Databricks

Perficient is proud to be a trusted Databricks Elite consulting partner with hundreds of certified consultants. We specialize in delivering tailored data engineering, analytics, and AI solutions that unlock value and drive business transformation.

Learn more about our Databricks partnership.

Use Cases on AWS AI Services
https://blogs.perficient.com/2025/11/09/amazon-web-services-ai/ | Sun, 09 Nov 2025 14:48:42 +0000

In today’s AI-activated world, there is an ample number of AI-related tools that organizations can use to tackle diverse business challenges. In line with this, Amazon has its set of Amazon Web Services for AI and ML to address real-world needs.

This blog provides details on AWS services, but it also shows more generally how AI and ML capabilities can address various business challenges. To illustrate how these services can be leveraged, I have used a few simple, straightforward use cases and mapped the AWS solutions to them.

 

AI Use Cases Using AWS Services

1. Employee Onboarding Process

Any employee onboarding process has its own challenges. It can be improved through better information discovery, shorter onboarding timelines, more flexibility for the new hire, the option to learn and revisit material multiple times, and enhanced security and personalization of the induction experience.

Using natural language queries, the AWS AI service Amazon Kendra enables new hires to easily find HR manuals, IT instructions, leave policies, and company guidelines without needing to know exact file names or bookmark multiple URLs.

Amazon Kendra uses semantic search, which understands the user’s intent and contextual meaning. Semantic search relies on vector embeddings, vector search, pattern matching, and natural language processing.

Real-time data retrieval through Retrieval-augmented Generation (RAG) in Amazon Kendra empowers employees to access up-to-date content securely and efficiently.

Following are examples of a few prompts a new hire can use to retrieve information; a code sketch of such a query follows the list:

  • How can I access my email on my laptop and on my phone?
  • How do I contact IT support?
  • How can I apply for leave, and who do I reach out to for approvals?
  • How do I submit my timesheet?
  • Where can I find the company training portal?
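For illustration, here is a minimal boto3 sketch of such a query, assuming a Kendra index has already been created and populated with the onboarding documents; the index ID, region, and question are placeholders.

```python
import boto3

# Region and index ID below are placeholders, not real values.
kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="YOUR-KENDRA-INDEX-ID",
    QueryText="How do I submit my timesheet?",
)

# Print the title and excerpt of each matching document.
for item in response.get("ResultItems", []):
    title = item.get("DocumentTitle", {}).get("Text", "")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    print(f"{title}\n{excerpt}\n")
```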

Data Security

To protect organizational data and ensure compliance with enterprise security standards, Amazon Kendra supports robust data security measures, including encryption in transit and at rest, and seamless integration with AWS Identity and Access Management (IAM).

Role-based access ensures that sensitive information is only visible to authorized personnel.

Thus, in the onboarding process, the HR team can provide the personalized touch, while the AI agent ensures employees have easy, anytime access to the right information throughout their onboarding journey.


2. Healthcare: Unlocking Insights from Unstructured Clinical Data

Healthcare providers always need to extract critical patient information and support timely decision-making. They face the challenge of rapidly analyzing vast amounts of unstructured medical records, such as physician notes, discharge summaries, and clinical reports.

From a data perspective, two key capabilities are required: entity recognition and attribute detection. Medical entities include symptoms, medications, diagnoses, and treatment plans; attribute detection identifies the dosage, frequency, and severity associated with these entities.

Amazon provides Amazon Comprehend Medical, which uses NLP and ML models to extract such information from the unstructured data available to healthcare organizations.
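As a hedged sketch of what this looks like in code, the boto3 snippet below sends a short clinical note to Comprehend Medical and prints the detected entities and their attributes; the region and sample note are illustrative only.

```python
import boto3

cm = boto3.client("comprehendmedical", region_name="us-east-1")

# The clinical note below is a made-up example.
note = "Patient reports severe headache. Prescribed ibuprofen 200 mg twice daily."

response = cm.detect_entities_v2(Text=note)

# Each entity has a category (e.g., MEDICATION) plus attributes such as dosage.
for entity in response["Entities"]:
    print(entity["Category"], "->", entity["Text"])
    for attribute in entity.get("Attributes", []):
        print("   attribute:", attribute["Type"], "->", attribute["Text"])
```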

One crucial aspect of healthcare is handling security and compliance for patients’ health data. AWS offers Amazon Macie, a security service that employs machine learning and pattern matching to discover, classify, and protect Protected Health Information (PHI) within Amazon S3 buckets. Such a service helps organizations maintain HIPAA compliance through automated data governance.

 

3. Enterprise data insights

Any large enterprise has data spread across various tools like SharePoint, Salesforce, leave management portals, or accounting applications.

From these data sets, executives can extract great insights, evaluate what-if scenarios, check key performance indicators, and use all of this for decision-making.

We can use the AWS AI service Amazon Q Business for this very purpose, using various plugins, database connectors, and Retrieval-Augmented Generation for up-to-date information.

The user can query the system in natural language, and Amazon Q performs semantic search to return contextually appropriate information. It also uses knowledge grounding, which helps provide accurate answers without relying solely on training data.

To ensure that AI-generated responses adhere strictly to approved enterprise protocols and provide accurate, relevant information, we can define built-in guardrails within Amazon Q, such as global controls and topic blocking.
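For illustration, a minimal sketch of querying Amazon Q Business from code is shown below; the application ID, region, and question are placeholders, and the exact chat_sync parameters can vary by SDK version, so treat this as a starting point rather than production code.

```python
import boto3

# Application ID, region, and question are placeholders; verify chat_sync
# parameters against the current boto3 documentation for your SDK version.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="YOUR-Q-BUSINESS-APP-ID",
    userMessage="What were our top three expense categories last quarter?",
)

# The assistant's grounded answer is returned as a system message.
print(response.get("systemMessage", "No answer returned"))
```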

 

4. Retail company use cases

a) Reading receipts and invoices

The company wants to automate its financial auditing process. To achieve this, we can use Amazon Textract to read receipts and invoices; it uses machine learning to accurately identify and extract key information like product names, prices, and totals.
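A minimal boto3 sketch of this flow is shown below, assuming the scanned receipt or invoice already sits in S3; the bucket and object names are placeholders.

```python
import boto3

textract = boto3.client("textract", region_name="us-east-1")

# Bucket and object names are placeholders for a scanned invoice stored in S3.
response = textract.analyze_expense(
    Document={"S3Object": {"Bucket": "my-invoice-bucket", "Name": "invoice-001.png"}}
)

# Summary fields carry labeled values such as VENDOR_NAME and TOTAL.
for document in response["ExpenseDocuments"]:
    for field in document["SummaryFields"]:
        label = field.get("Type", {}).get("Text", "")
        value = field.get("ValueDetection", {}).get("Text", "")
        print(f"{label}: {value}")
```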

b) Analyse customer purchasing patterns

The company intends to analyze customer purchasing patterns and predict future sales trends from its large datasets of historical sales data. For these analyses, the company wants to build, train, and deploy machine learning models quickly and efficiently.

Amazon SageMaker is the ideal service for this kind of development.
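As a rough sketch of that workflow, the snippet below trains a built-in XGBoost model on historical sales data staged in S3 and deploys it to a real-time endpoint; the role ARN, bucket paths, and hyperparameters are placeholders for your own environment.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

# Role ARN and S3 paths are placeholders for your own AWS environment.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
session = sagemaker.Session()

# Use the built-in XGBoost image for tabular sales data.
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-sales-bucket/models/",
    sagemaker_session=session,
)
# Hyperparameters are illustrative, not tuned values.
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Train on historical sales data staged in S3, then deploy an endpoint.
estimator.fit({"train": TrainingInput("s3://my-sales-bucket/train/", content_type="text/csv")})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```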

c) Customer support Bot

The firm receives thousands of customer calls daily. To streamline the process, it is looking to create a conversational AI bot that can take text inputs and voice commands.

We can use Amazon Bedrock to create a custom AI application from a catalog of ready-to-use foundation models. These models can process large volumes of customer data, generate personalized responses, and integrate with other AWS services like Amazon SageMaker for additional processing and analytics.

We can use Amazon Lex to create the bot and Amazon Polly for text-to-speech.
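For example, the voice side with Polly might look like the following minimal sketch; the sample text, region, and voice are illustrative.

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Sample reply text and voice are illustrative.
response = polly.synthesize_speech(
    Text="Your order has shipped and should arrive within two business days.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# Save the generated speech so the bot can play it back to the caller.
with open("reply.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```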

d) Image analysis

The company might want to identify and categorize its products based on uploaded images. To implement this, we can use Amazon S3 and Amazon Rekognition to analyze images as soon as a new product image lands in the storage service.
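A minimal sketch of the image-analysis step is below; in practice you would trigger it from an S3 upload event (for example via Lambda), and the bucket and object names here are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Bucket and object names are placeholders; in production this call would run
# inside a Lambda function triggered by the S3 upload event.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "product-images-bucket", "Name": "new-arrival.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

# Each label suggests a product category with a confidence score.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```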

 

AWS Services for Compliance & Regulations

[Image: AWS services supporting compliance and regulations]

Managing complex customer requirements and handling large volumes of sensitive data makes it essential to adhere to various regulations.

Key AWS services supporting these compliance and governance needs include:

  1. AWS Config
    Continuously monitors and records resource configurations to help assess compliance.
  2. AWS Artifact
    Centralized repository for on-demand access to AWS compliance reports and agreements.
  3. AWS CloudTrail
    Logs and tracks all user activity and API calls within your AWS environment for audit purposes.
  4. AWS Inspector
    Automated security assessment service that identifies vulnerabilities and deviations from best practices.
  5. AWS Audit Manager
    Simplifies audit preparation by automating evidence collection and compliance reporting.
  6. AWS Trusted Advisor
    Provides real-time recommendations to optimize security, performance, and cost efficiency.

 

Security and Privacy Risks: Vulnerabilities in LLMs

[Image: Common vulnerabilities in LLMs]

When dealing with LLMs, there are ways to attack the prompts, although various safeguards exist against them. With these attacks in mind, I am noting down some vulnerabilities that are useful for understanding the risks around your LLMs.

S.No | Vulnerability                | Description
1    | Prompt injection             | User input crafted to manipulate the LLM.
2    | Insecure output handling     | The model's output is consumed without validation.
3    | Training data poisoning      | Malicious data introduced into the training set.
4    | Model denial of service      | Disrupting availability by exploiting architectural weaknesses.
5    | Supply chain vulnerabilities | Weaknesses in the software, hardware, or services used to build or deploy the model.
6    | Sensitive data leakage       | Leakage of sensitive data through model responses.
7    | Insecure plugins             | Flaws in model components or plugins.
8    | Excessive autonomy           | Granting the model too much decision-making autonomy.
9    | Over-reliance                | Relying too heavily on the model's capabilities.
10   | Model theft                  | Unauthorized copying and re-use of the model.

 

Can you correlate the above use cases with any of the challenges you have at hand? Have you been able to use any of the AWS services or other AI platforms to deal with such challenges?

References:

https://aws.amazon.com/ai/services/
https://www.udemy.com/share/10bvuD/

Building for Humans – Even When Using AI
https://blogs.perficient.com/2025/10/29/building-for-humans-even-when-using-ai/ | Thu, 30 Oct 2025 01:03:55 +0000

Artificial Intelligence (AI) is everywhere. Every month brings new features promising “deeper thinking” and “agentic processes.” Tech titans are locked in trillion-dollar battles. Headlines scream about business, economic, and societal concerns. Skim the news and you’re left excited and terrified!

Here’s the thing: we’re still human – virtues, flaws, quirks, and all. We’ve always had our agency, collectively shaping our future. Even now, while embracing AI, we need to keep building for us.

We Fear What We Do Not Know

“AI this… AI that…” Even tech leaders admit they don’t fully understand it. Sci-fi stories warn us with cautionary tales. News cycles fuel anxiety about job loss, disconnected human relationships, and cognitive decline.

Luckily, this round of innovation is surprisingly transparent. You can read the Attention is All You Need paper (2017) that started it all. You can even build your own AI if you want! This isn’t locked behind a walled garden. That’s a good thing.

What the Past Can Tell Us

I like to look at the past to gauge what we can expect from the future. Humans have feared every major invention and technological breakthrough. We expect the worst, but most have proven to improve life.

We’ve always had distractions from books, movies, games, to TikTok brain-rot. Some get addicted and go too deep, while others thrive. People favor entertainment and leisure activities – this is nothing new – so I don’t feel like cognitive decline is anything to worry about. Humanity has overcome all of it before and will continue to do so.

 


 

Humans are Simple (and Complicated) Creatures

We look for simplicity and speed. Easy to understand, easy to look at, easy to interact with, easy to buy from. We skim read, we skip video segments, we miss that big red CTA button. The TL;DR culture rules. Even so, I don’t think we’re at risk of the future from Idiocracy (2006).

That’s not to say that we don’t overcomplicate things. The Gods Must Be Crazy movie (1980) has a line that resonates, “The more [we] improved [our] surroundings to make life easier, the more complicated [we] made it.” We bury our users (our customers) in detail when they just want to skim, skip, and bounce.

Building for Computers

The computer revolution (1950s-1980s) started with machines serving humans. Then came automation. And eventually, systems talking to systems.

Fast-forward to the 2010s, where marketers gamed the algorithms to win at SEO, SEM, and social networking. Content was created for computers, not humans. Now we have the dead internet theory. We were building without humans in mind.

We will still have to build for systems to talk to systems. That won’t change. APIs are more important than ever, and agentic AI relies on them. Because of this, it is crucial to make sure what you are building “plays well with others”. But AIs and APIs are tools, not the audience.

Building for Humans

Google used to tell us all to build what people want, as opposed to gaming their systems. I love that advice. However, at first it felt unrealistic…gaming the system worked. Then after many updates, for a short bit, it felt like Google was getting there! Then it got worse and feels like pay-to-play recently.

Now AI is reshaping search and everything else. You can notice the gap between search results and AI recommendations. They don’t match. AI assistants aim to please humans, which is great, until it inevitably changes.

Digital teams must build for AI ingestion, but if you neglect the human aspect and the end user experience, then you will only see short-term wins.

Examples of Building for Humans

  • Make it intuitive and easy. Simple for end users means a lot of work for builders, but it is worth it! Reduce their cognitive load.
  • Build with empathy. Appeal to real people, not just personas and bots. Include feedback loops so they can feel heard.
  • Get to the point. Don’t overwhelm users, instead help them take action! Delight your customers by saving them time.
  • Add humor when appropriate. Don’t be afraid to be funny, weird, or real…it connects on a human level.
  • Consider human bias. Unlike bots and crawlers, humans aren’t always logical. Design for human biases.
  • Watch your users. Focus groups or digital tracking tools are great for observing. Learn from real users and iterate.

Conclusion

Building for humans never goes out of style. Whatever comes after AI will still need to serve people. So as tech evolves, let’s keep honing systems that work with and around our human nature.

……

If you are looking for that extra human touch (built with AI), reach out to your Perficient account manager or use our contact form to begin a conversation.

See Perficient’s Amarender Peddamalku at the Microsoft 365, Power Platform & Copilot Conference
https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/ | Thu, 23 Oct 2025 17:35:19 +0000

As the year wraps up, so does an incredible run of conferences spotlighting the best in Microsoft 365, Power Platform, and Copilot innovation. We’re thrilled to share that Amarender Peddamalku, Microsoft MVP and Practice Lead for Microsoft Modern Work at Perficient, will be speaking at the Microsoft 365, Power Platform & Copilot Conference in Dallas, November 3–7.

Amarender has been a featured speaker at every TechCon365, DataCon, and PWRCon event this year—and Dallas marks the final stop on this year’s tour. If you’ve missed him before, now’s your chance to catch his insights live!

With over 15 years of experience in Microsoft technologies and a deep focus on Power Platform, SharePoint, and employee experience, Amarender brings practical, hands-on expertise to every session. Here’s where you can find him in Dallas:

Workshops & Sessions

  • Power Automate Bootcamp: From Basics to Brilliance
    Mon, Nov 3 | 9:00 AM – 5:00 PM | Room G6
    A full-day, hands-on workshop for Power Automate beginners.

 

  • Power Automate Multi-Stage Approval Workflows
    Tue, Nov 4 | 9:00 AM – 5:00 PM | Room G2
    Wed, Nov 5 | 3:50 PM – 5:00 PM | Room G6
    Learn how to build dynamic, enterprise-ready approval workflows.

 

  • Ask the Experts
    Wed, Nov 5 | 12:50 PM – 2:00 PM | Expo Hall
    Bring your questions and get real-time answers from Amarender and other experts.

 

  • Build External-Facing Websites Using Power Pages
    Thu, Nov 6 | 1:00 PM – 2:10 PM | Room D
    Discover how to create secure, low-code websites with Power Pages.

 

  • Automate Content Processing Using AI & SharePoint Premium
    Thu, Nov 6 | 4:20 PM – 5:30 PM | Room G6
    Explore how AI and SharePoint Premium (formerly Syntex) can transform content into knowledge.


Whether you’re just getting started with Power Platform or looking to scale your automation strategy, Amarender’s sessions will leave you inspired and equipped to take action.

Register now!

Datadog Synthetic Monitoring Integration with Azure DevOps Pipeline for Sitecore (October 23, 2025)
https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/

Datadog Synthetic Monitoring provides automated, simulated user journeys to proactively confirm the health and performance of websites and APIs, helping detect issues before users experience them. Integrating this into our Azure DevOps pipeline ensures that only builds where core site functionality is verified get promoted, reducing the risk of regressions in production. This approach is especially valuable in Sitecore projects, where critical web journeys and API endpoints are essential to user experience.

Why Use This Approach?

  • Immediate feedback: Failing releases are blocked before merging, saving post-release firefighting.
  • Coverage: Synthetic tests simulate real browser actions and API calls over real user flows.
  • Reliability: Automated testing delivers consistent, repeatable validation without manual steps.
  • Visibility: Results are unified within both Datadog and Azure DevOps for full traceability.
  • Scalability: As Sitecore projects grow, synthetic tests can be expanded to cover new endpoints and user scenarios without significant pipeline changes.
  • Environment parity: Tests can run against staging, UAT, or pre-production environments ahead of live rollout, making releases safer.

Prerequisites

  • Active Datadog account with Synthetic Monitoring enabled.
  • Datadog API and Application keys created with the appropriate access scope.
  • Azure DevOps project with a working YAML-based CI/CD pipeline.
  • Secure variable storage in Azure DevOps (e.g., Variable Groups, Secret Variables) for credentials.
  • Stable and accessible endpoint URLs for Sitecore environment(s) under test.

High-Level Integration Process

1. Datadog Synthetic Test Creation

  • Create Browser and/or HTTP Synthetic Tests in Datadog tailored for key Sitecore application flows, such as:
    • Homepage load and rendering
    • Login flow and user dashboard navigation
    • Core API calls (search, content retrieval)
    • Critical commerce or form submissions
  • Use relevant tags (e.g., premerge) for search/query filtering by the CI pipeline.
  • Configure assertions to confirm critical elements (an abridged, illustrative test definition follows this list):
    • Content correctness
    • HTTP status codes
    • Redirect targets
    • Response time SLAs
  • Validate tests in Datadog’s UI with multiple runs before pipeline integration.
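
For context on what such a test looks like under the hood, the abridged payload below mirrors the general shape of an HTTP test created through Datadog’s Synthetics API. The name, URL, body string, thresholds, and location are invented placeholders rather than values from this project.

```json
{
  "name": "Sitecore homepage - premerge",
  "type": "api",
  "subtype": "http",
  "tags": ["premerge", "sitecore"],
  "config": {
    "request": { "method": "GET", "url": "https://uat.example-sitecore.com/" },
    "assertions": [
      { "type": "statusCode",   "operator": "is",       "target": 200 },
      { "type": "responseTime", "operator": "lessThan", "target": 3000 },
      { "type": "body",         "operator": "contains", "target": "Welcome" }
    ]
  },
  "locations": ["aws:us-east-1"],
  "options": { "tick_every": 900 }
}
```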

(Screenshot: Datadog dashboard listing the synthetic tests)

2. Azure DevOps Pipeline Configuration

The Azure DevOps YAML pipeline is set up to invoke Datadog CI, run all tests matching our tag criteria, and fail the pipeline if any test fails. A minimal sketch of the stage appears after the list of key steps below.

Key Pipeline Steps

  • Install Datadog CI binary: Downloads and installs the CLI in the build agent.
  • Run Synthetic Tests: Uses the environment variables and search tags to pick synthetic tests (e.g., all with type:browser tag:premerge) and runs them directly.
  • JUnit Reporting & Artifacts: The CLI output is saved, and a JUnit-formatted result file is generated for Azure DevOps’ Tests UI. All test outputs are attached as build artifacts.
  • Conditional Fast-forward Merge: The pipeline proceeds to a gated merge to release/production only if all synthetics pass.
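
As a rough illustration of how these steps fit together, here is a minimal sketch of the stage. The stage, job, and variable-group names are hypothetical, the tag query assumes the premerge tagging convention described earlier, and the whole thing should be adapted to your pipeline rather than copied as-is.

```yaml
# Illustrative sketch only: names, the variable group, and the tag query are placeholders.
stages:
  - stage: Premerge_Datadog_Synthetics
    jobs:
      - job: RunSynthetics
        pool:
          vmImage: ubuntu-latest
        variables:
          - group: datadog-keys        # holds secret DATADOG_API_KEY / DATADOG_APP_KEY
        steps:
          - script: npm install -g @datadog/datadog-ci
            displayName: Install Datadog CI

          - script: >
              datadog-ci synthetics run-tests
              --search 'tag:premerge'
              --jUnitReport $(Build.ArtifactStagingDirectory)/synthetics-junit.xml
            displayName: Run premerge synthetic tests
            env:
              DATADOG_API_KEY: $(DATADOG_API_KEY)
              DATADOG_APP_KEY: $(DATADOG_APP_KEY)
              DATADOG_SITE: datadoghq.com

          - task: PublishTestResults@2
            condition: succeededOrFailed()   # surface results even when tests fail
            inputs:
              testResultsFormat: JUnit
              testResultsFiles: $(Build.ArtifactStagingDirectory)/synthetics-junit.xml

          - task: PublishBuildArtifacts@1
            condition: succeededOrFailed()
            inputs:
              pathToPublish: $(Build.ArtifactStagingDirectory)
              artifactName: synthetics-results
```

If every selected test passes, datadog-ci exits 0 and the gated merge can proceed; any failure returns a non-zero exit code and fails the stage, which is the behavior described in the flow sections below.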

How Results and Flow Work

When All Tests Pass

  • The pipeline completes the Premerge_Datadog_Synthetics stage successfully.
  • Test summaries (JUnit) and CLI outputs are attached to the pipeline run.
  • Approval-gated merge to the Release branch is unblocked; approvers can verify test results before promotion.

Build artifacts include full logs for further review.

(Screenshot: pipeline run with all synthetic tests passing)

When Any Test Fails

  • If any synthetic (browser/API) test fails, the CLI exits with a non-zero exit code.
  • The JUnit summary will contain failure info and a link to the log details.
  • The pipeline stage fails (Premerge_Datadog_Synthetics), halting the fast-forward merge.
  • Approvers can review the failure in test results and attached artifacts within Azure DevOps.

Only successful resolution and green reruns allow code promotion.

(Screenshot: pipeline run failing at the Premerge_Datadog_Synthetics stage)

Best Practices for Datadog Synthetic Monitoring

  • Run tests in parallel to reduce wait times.
  • Use separate synthetic tests per microservice or major Sitecore area to isolate failures.
  • Monitor test trends in Datadog to detect gradual performance regression over time.
  • Limit sensitive data in synthetic flows by avoiding the storage of actual credentials.
  • Schedule periodic synthetic runs outside CI/CD to catch environment fluctuations unrelated to code changes.

Security Considerations

  • Store Datadog keys as secret variables in Azure DevOps.
  • Restrict permission for synthetic test management to trusted CI/CD admins.
  • Avoid embedding credentials or sensitive payloads in test scripts.

Conclusion

By integrating Datadog Synthetic Monitoring directly into our CI/CD pipeline with Azure DevOps, Sitecore teams gain a safety net that blocks faulty builds before they hit production, while keeping a detailed audit trail. Combined with careful test design, secure key management, and continuous expansion of coverage, this approach becomes a cornerstone of proactive web application quality assurance.


Perficient at Microsoft Ignite 2025: Let’s Talk AI Strategy (October 21, 2025)
https://blogs.perficient.com/2025/10/21/perficient-at-microsoft-ignite-2025-lets-talk-ai-strategy/

Microsoft Ignite 2025 is right around the corner—and Perficient is showing up with purpose and a plan to help you unlock real results with AI.

As a proud member of Microsoft’s Inner Circle for AI Business Solutions, we’re at the forefront of helping organizations accelerate their AI transformation. Whether you’re exploring custom copilots, modernizing your data estate, or building secure, responsible AI solutions, our team is ready to meet you where you are—and help you get where you want to go.

Here’s where you can connect with us during Ignite:

Join Us for Happy Hour
Unwind and connect with peers, Microsoft leaders, and the Perficient team at our exclusive happy hour just steps from the Moscone Center.
📍 Fogo de Chão | 🗓 November 17 | 🕔 6:00–9:00 PM
Reach out to our team to get registered!


Book a Strategy Session
Need a quiet space to talk AI strategy? We’ve secured a private meeting space across from the venue—perfect for 1:1 conversations about your AI roadmap.
📍 Ember Lounge — 201 3rd St, 8th floor, Suite 8016 | 🗓 November 18-20
Reserve Your Time


From copilots to cloud modernization, we’re helping clients across industries turn AI potential into measurable impact. Let’s connect at Ignite and explore what’s possible.

Salesforce AI for Financial Services: Practical Capabilities That Move the Organization Forward (October 20, 2025)
https://blogs.perficient.com/2025/10/20/salesforce-ai-for-financial-services-practical-capabilities-that-move-the-organization-forward/

Turn on CNBC during almost any trading day and you’ll see and hear plenty of AI buzz that sounds great, and may look great in a deck, but falls short in regulated industries. For financial services firms, AI must do two things at once: unlock genuine business value and satisfy strict compliance, privacy, and audit requirements. Salesforce’s AI stack — led by Einstein GPT and Data Cloud and integrated with MuleSoft, Slack, and robust security controls — is engineered to meet that dual mandate. Here’s a practical look at what Salesforce AI delivers for banks, insurers, credit unions, wealth managers, and capital markets firms, and how to extract measurable value without trading away controls or governance.

What Salesforce AI actually is (and why it matters for Financial Services)

Salesforce is widely adopted by financial services firms, with over 150,000 companies worldwide using its CRM, including a significant portion of the U.S. market, where 83% of businesses opt for its Financial Services Cloud (“FSC”). Major financial institutions like Wells Fargo, Bank of America Merrill Lynch, and The Bank of New York are among its users, demonstrating its strong presence within the industry. Salesforce combines generative AI, predictive models, and enterprise data plumbing into a single ecosystem. Key capabilities include:

  • Einstein GPT: Generative AI tailored for CRM workflows — draft client communications, summarize notes, and surface contextual insights using your internal data.
  • Data Cloud: A real-time customer data platform that ingests, unifies, and models customer profiles at scale, enabling AI to operate on a trusted single source of truth.
  • Tableau + CRM Analytics: Visualize model outcomes, monitor performance, and create operational dashboards that align AI outputs with business KPIs.
  • MuleSoft: Connectors and APIs to bring core banking, trading, and ledger systems into the loop securely.
  • Slack & Flow (and Flow Orchestrator): Operationalize AI outputs into workflows, approvals, and human-in-the-loop processes.

For financial services, that integration matters more than flashy demos: accuracy, traceability, and context are non-negotiable. Salesforce’s ecosystem lets you apply AI where it impacts revenue, risk, and customer retention — and keep audit trails for everything.

High-value financial services use cases

Here are the pragmatic use cases where Salesforce AI delivers measurable ROI:

Client advisory and personalization

Generate personalized portfolio reviews, client outreach, or renewal communications using Einstein GPT combined with up-to-date holdings and risk profiles from Data Cloud. The result: more relevant outreach and higher conversion rates with less advisor time.

Wealth management — scalable advice and relationship mining

AI-driven summarization of client meetings, automated risk-tolerance classifiers, and opportunity scoring help advisors prioritize high-value clients and surface cross-sell opportunities without manual data wrangling.

Commercial lending — faster decisioning and better risk controls

Combine predictive credit risk models with document ingestion (via MuleSoft and integrated OCR) to auto-populate loan applications, flag exceptions, and route for human review where model confidence is low.

Fraud, AML, and compliance augmentation

Use real-time customer profiles and anomaly detection to surface suspicious behaviors. AI can triage alerts and summarize evidence for investigators, improving throughput while preserving explainability for regulators. AI can also reduce the volume of false alerts, the bane of every compliance officer.
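
The triage idea itself is easy to prototype outside Salesforce. The sketch below uses scikit-learn’s IsolationForest to rank transactions for investigator review; the feature set, synthetic data, and review cutoff are all invented for illustration and are not part of any Salesforce API.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount, hour of day, days since last login
rng = np.random.default_rng(42)
normal = rng.normal([50, 14, 2], [20, 4, 1], size=(1000, 3))
odd = rng.normal([5000, 3, 30], [500, 1, 5], size=(10, 3))
transactions = np.vstack([normal, odd])

# Fit an unsupervised anomaly detector and score every transaction
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
scores = model.score_samples(transactions)  # lower score = more anomalous

# Queue only the most anomalous 1% for human investigators
cutoff = np.quantile(scores, 0.01)
flagged = np.where(scores <= cutoff)[0]
print(f"{len(flagged)} of {len(transactions)} transactions queued for review")
```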

Customer support and claims

RAG-enabled virtual assistants (Einstein + Data Cloud) pull from policy language, transaction history, and client notes to answer common questions or auto-draft claims responses — reducing service time and improving consistency. The virtual assistants can also interact in multiple languages, which helps reduce turnover among clients who write in languages other than English.

Sales and pipeline acceleration

Predictive lead scoring, propensity-to-buy models, and AI-suggested next-best actions increase win rates and shorten sales cycles. Integrated workflows push suggestions to reps in Slack or the Salesforce console, making adoption frictionless.

Why Salesforce’s integrated approach reduces risk

Financial firms can’t treat AI as a separate experiment. Salesforce’s value proposition is that AI is embedded into systems that already handle customer interactions, security, and governance. That produces the following practical advantages:

Single source of truth

Data Cloud reduces conflicting customer records and stale insights, which directly lowers the risk of AI producing inappropriate or inaccurate outputs.

Controlled model access and hosting options

Enterprises can choose where data and model inference occur, including private or managed-cloud options, helping meet residency and confidentiality requirements.

Explainability and audit trails

Salesforce logs user interactions, AI-generated outputs, and data lineage into the platform. That creates the documentation regulators ask for and lets financial services executives trace how models reached their decisions.

Human-in-the-loop and confidence thresholds

Workflows can be configured so that high-risk or low-confidence outputs require human approval. That’s essential for credit decisions, compliance actions, and investment advice.
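
The gating pattern is simple to express in code. The sketch below is generic Python pseudologic, not Salesforce Flow syntax; the threshold value and queue names are placeholders meant only to show the shape of a human-in-the-loop gate.

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 0.85  # hypothetical; tune per use case and risk appetite

@dataclass
class ModelOutput:
    action: str        # e.g., "approve_credit_line_increase"
    confidence: float  # model-reported confidence, 0.0 to 1.0
    high_risk: bool    # flagged by policy: credit, compliance, investment advice

def route(output: ModelOutput) -> str:
    """Return the queue a model output should flow to."""
    # High-risk actions always require a human, regardless of confidence.
    if output.high_risk:
        return "human_review_queue"
    # Low-confidence outputs also escalate to a person.
    if output.confidence < APPROVAL_THRESHOLD:
        return "human_review_queue"
    # Everything else proceeds automatically, with full audit logging.
    return "automated_queue"

print(route(ModelOutput("draft_client_email", 0.92, high_risk=False)))          # automated_queue
print(route(ModelOutput("approve_credit_line_increase", 0.97, high_risk=True))) # human_review_queue
```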

Implementation considerations for regulated firms

To assist in your planned deployment of Salesforce AI in financial services, here’s a checklist of practical guardrails and steps:

Start with business outcomes, not models

  • Identify high-frequency, low-risk tasks for pilots (e.g., document summarization, inquiry triage) and measure lift on KPIs like turnaround time, containment rate, and advisor productivity.

Clean and govern your data

Invest in customer identity resolution, canonicalization, and metadata tagging in Data Cloud. Garbage in, garbage out is especially painful when compliance hangs on a model’s output.

Create conservative guardrails

Hard-block actions that have material customer impact (e.g., account closure, fund transfers) from automated flows. Use AI to assist drafting and recommendation, not to execute high-risk transactions autonomously.

Establish model testing and monitoring

Implement A/B tests, accuracy benchmarks, and drift detection. Integrate monitoring into Tableau dashboards and set alerts for performance degradation or unusual patterns.
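
One concrete drift check is the Population Stability Index (PSI), which compares the score distribution a model was validated on against recent production scores. The sketch below uses synthetic data, and the 0.2 alert threshold is a common rule of thumb rather than a Salesforce or regulatory standard.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range production scores
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)           # avoid log(0) and divide-by-zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 50_000)      # scores at validation time
production = rng.beta(2.6, 5, 50_000)  # recent scores, slightly shifted

print(f"PSI = {psi(baseline, production):.3f}")  # > 0.2 commonly triggers investigation
```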

Document everything for auditors and regulators

Maintain clear logs of training data sources, prompt templates, model versions, and human overrides. Salesforce’s native logging plus orchestration records from Flow help with this.

Train users and change-manage

Advisors, compliance officers, and client service reps should be part of prompt tuning and feedback loops. Incentivize flagging bad outputs — their corrections will dramatically improve model behavior.

Measurable outcomes to expect

When implemented with discipline, financial services firms typically see improvements including:

  • Reduced average handling time and faster loan turnaround
  • Higher client engagement and improved cross-sell conversion
  • Fewer false positives and faster investigator resolution times
  • Better advisor productivity via automated notes and suggested actions

Those outcomes translate into cost savings, improved regulatory posture, and revenue lift — the hard metrics CFOs, CROs, and CCOs require.

Final thoughts — pragmatic AI adoption

Salesforce gives financial institutions a practical path to embed AI into customer-facing and operational workflows without ripping up existing systems. The power isn’t just in the model; it’s in the combination of unified data (Data Cloud), generative assistance (Einstein GPT), secure connectors (MuleSoft), and operationalization (Flows and Slack). If you treat governance, monitoring, and human oversight as first-class citizens, AI becomes an accelerant — not a liability.

To help financial services firms install or expand their Salesforce capability, Perficient has a 360-degree strategic partnership with Salesforce. Salesforce provides the platform and technology; Perficient, as a global digital consultancy, partners with Salesforce to deliver consulting, implementation, customization, and integration services built on Salesforce’s AI-first technologies. Working together, Salesforce and Perficient help mutual clients build customer-centric solutions and operate as “agentic enterprises.”


Navigating the AI Frontier: Data Governance Controls at SIFIs in 2025 (October 13, 2025)
https://blogs.perficient.com/2025/10/13/navigating-the-ai-frontier-data-governance-controls-at-sifis-in-2025/

The Rise of AI in Banking

AI adoption in banking has accelerated dramatically. Predictive analytics, generative AI, and autonomous agentic systems are now embedded in core banking functions such as loan underwriting, compliance (including fraud detection and AML), and customer engagement.

A recent white paper by Perficient affiliate Virtusa (Agentic Architecture in Banking – White Paper | Virtusa) documented that, when designed with modularity, composability, Human-in-the-Loop (HITL) oversight, and governance, agentic AI enables a more responsive, data-driven, and human-aligned approach in financial services.

However, rolling out agentic and generative AI tools without proper controls poses significant risks. Without a unified strategy and governance structure, Systemically Important Financial Institutions (“SIFIs”) risk deploying AI in ways that are opaque, biased, or non-compliant. As AI becomes the engine of next-generation banking, institutions must move beyond experimentation and establish enterprise-wide controls.

Key Components of AI Data Governance

Modern AI data governance in banking encompasses several critical components:

1. Data Quality and Lineage: Banks must ensure that the data feeding AI models is accurate, complete, and traceable.

Please refer to Perficient’s recent blog on this topic here:

AI-Driven Data Lineage for Financial Services Firms: A Practical Roadmap for CDOs / Blogs / Perficient

2. Model Risk Management: AI models must be rigorously tested for fairness, accuracy, and robustness. Lending decision software has repeatedly been shown to inherit its builders’ biases and produce biased lending decisions. (A toy fairness check follows this list.)

3. Third-Party Risk Oversight: Governance frameworks now include vendor assessments and continuous monitoring. Large financial institutions do not have to develop AI technology solutions themselves (Buy vs Build) but they do need to monitor the risks of having key technology infrastructure owned and/or controlled by third parties.

4. Explainability and Accountability: Banks are investing in explainable AI (XAI) techniques. Not everyone is a tech expert, and models need to be easily explainable to auditors, regulators, and when required, customers.

5. Privacy and Security Controls: Encryption, access controls, and anomaly detection are essential. These controls are already standard in legacy systems, and whether the AI environment involves narrow AI, machine learning, or more advanced agentic or generative AI, it is natural to extend these proven controls to the new platforms.
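
As a concrete illustration of the fairness testing in component 2, a minimal check can compare approval rates across a protected attribute. The data here is invented, and the demographic parity gap is just one of many fairness metrics a model risk team might track.

```python
import numpy as np

# Hypothetical model decisions (True = approve) and a protected attribute (0/1)
rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=10_000)
approved = rng.random(10_000) < np.where(group == 0, 0.62, 0.55)

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
gap = abs(rate_a - rate_b)

print(f"approval rate A: {rate_a:.3f}, B: {rate_b:.3f}, parity gap: {gap:.3f}")
# A gap above an agreed tolerance (say 0.05) would trigger model review and
# documentation for auditors before the model is promoted.
```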

Industry Collaboration and Standards

The FINOS Common Controls for AI Services initiative is a collaborative, cross-industry effort led by the Fintech Open Source Foundation (FINOS) to develop open-source, technology-neutral baseline controls for safe, compliant, and trustworthy AI adoption in financial services. By pooling resources from major banks, cloud providers, and technology vendors, the initiative produces standardized controls, peer-reviewed governance frameworks, and real-time validation mechanisms that help financial institutions meet complex regulatory requirements for AI.

Key FINOS participants include financial institutions such as BMO, Citibank, Morgan Stanley, and RBC, alongside technology and cloud providers, including Perficient technology partners Microsoft, Google Cloud, and Amazon Web Services (AWS). Together they aim to create vendor-neutral standards for secure AI adoption in financial services.

At Perficient, we have seen leading financial institutions, including some of the largest SIFIs, establishing formal governance structures to oversee AI initiatives. Broadly, these governance structures typically include:

– Executive Steering Committees at the legal entity level
– Working Groups, at the legal entity as well as the divisional, regional and product levels
– Real-Time Dashboards that allow customizable reporting for boards, executives, and auditors

This multi-tiered governance model promotes transparency, agility, and accountability across the organization.

Regulatory Landscape in 2025

Regulators worldwide are intensifying scrutiny of artificial intelligence in banking. The EU AI Act, the U.S. SEC’s cybersecurity disclosure rules, and the National Institute of Standards and Technology (“NIST”) AI Risk Management Framework are shaping how financial institutions must govern AI systems.

Key regulatory expectations include:

– Risk-Based Classification
– Human Oversight
– Auditability
– Bias Mitigation

Some of these, and other regulatory regimes have been documented and summarized by Perficient at the following links:

AI Regulations for Financial Services: Federal Reserve / Blogs / Perficient

AI Regulations for Financial Services: European Union / Blogs / Perficient 

(Figure: the EU AI Act’s risk-based approach)

The Road Ahead

As AI becomes integral to banking operations, data governance will be the linchpin of responsible innovation. Banks must evolve from reactive compliance to proactive risk management, embedding governance into every stage of the AI lifecycle.

The journey begins with data—clean, secure, and well-managed. From there, institutions must build scalable frameworks that support ethical AI development, align with regulatory mandates, and deliver tangible business value.

Readers are urged to review the links in this blog and then contact Perficient, a global AI-first digital consultancy, to discuss how a partnership can deliver a tailored assessment and pilot design that maps directly to your audit and governance priorities and ensures all new tools roll out within a well-designed data governance environment.
