Part 1: Mobile AI 2026: Why On-Device Intelligence is the New Standard | https://blogs.perficient.com/2026/01/19/part-1-mobile-ai-2026-why-on-device-intelligence-is-the-new-standard/ | Mon, 19 Jan 2026

Subtitle: From Critical Medical Hardware to the Apple Ecosystem, the future of mobile intelligence is local, instant, and unified.

We are standing at a hardware tipping point. For the last decade, “AI” on mobile effectively meant one thing: sending data to the cloud and waiting for an answer. For chatbots especially, adding AI to an app meant integrating a slow, spinning loading indicator while data traveled to a server, waited in a queue, and eventually returned text. Users are tired of waiting. They are overwhelmed by generic bots that feel disconnected from the app they are actually using.

But as we move toward 2026, the script is flipping. Phone manufacturers are shipping devices with neural engines (NPUs) so powerful they rival the desktop GPUs of just a few years ago. This shift isn’t just about faster chatbots or smoother animations; it is reshaping critical industries like healthcare and unifying the mobile ecosystem under a single dominant model family: Google Gemini.

The Hardware Revolution: The “Brain” in Your Pocket

The defining trend of the 2025-2026 cycle is the explosion of Hardware Acceleration. Modern mobile processors—whether it’s the latest Snapdragons powering Android flagships or the A-series chips in iPhones—are no longer just Central Processing Units (CPUs). They are dedicated AI powerhouses capable of “always-on” generative tasks.

This hardware leap means we can now run surprisingly capable generative models (like Gemini Nano) directly on the device. The benefits are immediate and transformative:

  • Zero Latency: No network round-trips. The intelligence feels instantaneous.
  • Total Privacy: Sensitive data never leaves the phone’s secure enclave.
  • Offline Reliability: Intelligence works in elevators, basements, and airplanes.

The Critical Use Case: Android in Healthcare

Nowhere is this shift more vital than in the rapidly expanding world of Medical Devices. Android has quietly become the operating system of choice for specialized medical hardware, from handheld ultrasound scanners to patient vitals monitors.

Why is the edge critical here? Because medical environments are unforgiving. A doctor in a rural clinic or a paramedic in a speeding ambulance cannot rely on spotty 5G connections to process a patient’s vitals or analyze an X-ray.

  • Privacy Compliance: Processing sensitive patient data (like facial analysis for pain detection) strictly on-device removes complex regulatory cloud compliance hurdles. The data stays with the patient.
  • Reliability: An Android-based diagnostic tool must work instantly, 100% of the time, regardless of Wi-Fi status.
  • Adoption: We are seeing a massive surge in smart, connected medical tools that rely on commodity Android hardware to deliver hospital-grade diagnostics at a fraction of the cost.

The “One AI” Future: Gemini on iOS & Android

Perhaps the most compelling reason to bet on Gemini is the upcoming unification of the mobile AI landscape. Reports indicate that Apple is partnering with Google to integrate Gemini models into iOS and macOS for complex reasoning tasks and summaries, a rollout expected to mature by Spring 2026.

While Apple will handle basic tasks with its own on-device models, it is leaning on Gemini’s superior reasoning for the “heavy lifting.” This creates a unique opportunity for developers:

  • Unified Intelligence: Learning to engineer prompts and integrations for Gemini means you are effectively targeting the entire mobile market—both the Android medical devices and the premium iPhone user base.
  • Cross-Platform Consistency: A feature built on Gemini’s logic will behave consistently whether it’s running on a Samsung Galaxy Tab in a hospital or an iPhone 17 in a consumer’s hand.
  • Future-Proofing: With these updates expected shortly, building expertise in Gemini now puts us ahead of the curve when the feature goes mainstream across billions of Apple devices.

In Part 2, we will leave the strategy behind and dive into the code to see how we are already building this future today on iOS and Android.

Model Context Protocol (MCP) – Simplified | https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/ | Thu, 08 Jan 2026

What is MCP?

Model Context Protocol (MCP) is an open-source standard for integrating AI applications with external systems. As AI use cases gain more and more traction, it has become evident that AI applications need to connect to multiple data sources to provide intelligent, relevant responses.

Early AI systems interacted with users through Large Language Models (LLMs) that relied on pre-trained datasets. Then, in larger organizations, business users working with AI applications and agents began to expect more relevant responses drawn from enterprise datasets, which is where Retrieval Augmented Generation (RAG) came into play.

Now, AI applications and agents are expected to produce even more accurate responses using the latest data, which requires them to interact with multiple data sources and fetch accurate information. Once multi-system interactions are established, the communication protocol needs to be standardized and scalable. That is where MCP comes in: it enables a standardized way to connect AI applications to external systems.

 

Architecture

[Image: MCP architecture]

Using MCP, AI applications can connect to data sources (e.g., local files, databases), tools, and workflows, enabling them to access key information and perform tasks. In enterprise scenarios, AI applications and agents can connect to multiple databases across the organization, empowering users to analyze data through natural language chat.

Benefits of MCP

MCP offers a wide range of benefits:

  • Development: MCP reduces development time and complexity when building or integrating an AI application or agent. Its built-in capability discovery makes it simple to connect an MCP host to multiple MCP servers.
  • AI applications or agents: MCP provides access to an ecosystem of data sources, tools, and apps, which enhances capabilities and improves the end-user experience.
  • End users: MCP results in more capable AI applications and agents that can access your data and take actions on your behalf when necessary.

MCP – Concepts

At the top level, MCP is organized around three concepts:

  • Participants
  • Layers
  • Data Layer Protocol

 

Participants

MCP follows a client-server architecture in which an MCP host – an AI application such as an enterprise chatbot – establishes connections to one or more MCP servers. The MCP host accomplishes this by creating an MCP client for each MCP server. Each MCP client maintains a dedicated connection with its MCP server.

The key participants of MCP architecture are:

  • MCP Host: The AI application that coordinates and manages one or more MCP clients
  • MCP Client: A component that maintains a dedicated connection to an MCP server and obtains context from it for the MCP host to use
  • MCP Server: A program that provides context to MCP clients (i.e., generates responses or performs actions on the user's behalf)

[Image: MCP client-server connections]

Layers

MCP consists of two layers:

  • Data layer – Defines a JSON-RPC based protocol for client-server communication, including:
    • Lifecycle management – connection initiation, capability discovery and negotiation, and connection termination
    • Core primitives – server features such as tools for AI actions, resources for context data, and prompt templates for client-server interaction, plus client features such as asking the client to sample from the host LLM and logging messages to the client
    • Utility features – additional capabilities such as real-time notifications and progress tracking for long-running operations
  • Transport layer – Manages communication channels and authentication between clients and servers. It handles connection establishment, message framing, and secure communication between MCP participants

Data Layer Protocol

The core of MCP is the schema and semantics of communication between MCP clients and MCP servers; this is the part of MCP that defines how developers share context from MCP servers to MCP clients.

MCP uses JSON-RPC 2.0 as its underlying RPC protocol. Clients and servers send requests to each other and respond accordingly. Notifications can be used when no response is required.
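To make the wire format concrete, the snippet below shows a hypothetical tools/call exchange expressed as Python dictionaries. The method names follow the MCP specification, while the tool name, arguments, and values are placeholders borrowed from the demo later in this post.

```python
# Hypothetical JSON-RPC 2.0 messages for an MCP "tools/call" exchange,
# shown as Python dicts for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "get_account_details",
        "arguments": {"account_number": "ACC-1001"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 42,  # matches the request id
    "result": {
        "content": [
            {"type": "text", "text": '{"account_number": "ACC-1001", "balance": 2500.00}'}
        ]
    },
}

# A notification carries no "id" because no response is expected.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}
```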

Life Cycle Management

MCP is a stateful protocol that requires lifecycle management. The purpose of lifecycle management is to negotiate the capabilities (i.e. functionalities) that both client and server support.

Primitives

Primitives define what clients and servers can offer each other. These primitives specify the types of contextual information that can be shared with AI applications and the range of actions that can be performed. MCP defines three core primitives that servers can expose:

  • Tools: Executable functions that AI applications can invoke to perform actions (e.g., API calls, database queries)
  • Resources: Data sources that provide contextual information to AI applications (e.g., file contents, API responses, database records)
  • Prompts: Reusable templates that help structure interactions with language models (e.g., system prompts, few-shot examples)
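To ground these three primitives, here is a minimal sketch using the FastMCP class from the official MCP Python SDK, which exposes each primitive as a decorator. The server name, URI, and function bodies are illustrative assumptions, not part of this article's demo.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")  # illustrative server name

# Tool: an executable function the AI application can invoke.
@mcp.tool()
def convert_currency(amount: float, rate: float) -> float:
    """Convert an amount using a supplied exchange rate."""
    return amount * rate

# Resource: read-only contextual data addressed by a URI.
@mcp.resource("config://app-version")
def app_version() -> str:
    return "1.4.2"

# Prompt: a reusable template that structures an interaction with the LLM.
@mcp.prompt()
def summarize_account(account_number: str) -> str:
    return f"Summarize recent activity for account {account_number} in plain language."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```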

 

Notifications

The protocol supports real-time notifications to enable dynamic updates between servers and clients. For example, when a server’s available tools change – such as when new functionalities are added or existing functionality is updated – the server can send tool update notifications to all its connected clients about these changes.

 

Security in Data Accessing

As AI applications communicate with multiple enterprise data sources through MCP and fetch real-time, sensitive data such as customer information and financial data to serve users, data security becomes an absolutely critical factor to address.

MCP deployments address this through several layers of control.

Authentication and Authorization

MCP implements server-side authentication where each MCP server validates who is making the request. The enterprise system controls access through:

  • User-specific credentials – Each user connecting through MCP has their own authentication tokens
  • Role-based access control (RBAC) – Users only access data that the role permits
  • Session management – Time-limited sessions that expire automatically

Data Access Controls

MCP server acts as a security gateway that enforces the same access policies as direct system access:

    • Users can only query data that they are authorized to access
    • The server validates every request against permission rules
    • Sensitive information can be masked or filtered based on user privileges (a minimal sketch of such a check follows this list)
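As a minimal sketch of what such checks might look like inside a tool implementation (the role model, permission names, and masking rule are assumptions, not part of the MCP specification):

```python
ROLE_PERMISSIONS = {  # assumed, illustrative role model
    "teller": {"accounts:read"},
    "analyst": {"accounts:read", "transactions:read"},
}

def mask_account_number(value: str) -> str:
    """Show only the last four characters of an account number."""
    return "*" * max(len(value) - 4, 0) + value[-4:]

def fetch_account_for_user(account_number: str, user_role: str) -> dict:
    """Enforce RBAC and masking before returning data to the AI client."""
    if "accounts:read" not in ROLE_PERMISSIONS.get(user_role, set()):
        raise PermissionError("Role is not authorized to read account data")
    record = {"account_number": account_number, "balance": 2500.00}  # stand-in for a database lookup
    record["account_number"] = mask_account_number(record["account_number"])
    return record

print(fetch_account_for_user("ACC-1001", "teller"))  # returns the record with a masked account number
```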

Secure Communication

      • Encrypted connections – All data transmissions use TLS/HTTPS encryption
      • No data storage in AI – AI systems do not store the financial data they access; they only process it during the conversation session

Audit and Monitoring

MCP implementations in an enterprise ecosystem should include:

      • Complete audit logs – Every data access request is logged with user, timestamp and data accessed
      • Anomaly detection – Engage mechanisms that monitor unusual access patterns and trigger alerts
      • Compliance tracking – All interactions meet regulatory requirements like GDPR, PCI-DSS

Architecture Isolation

Enterprises typically deploy MCP using:

      • Private network deployment – MCP servers stay within the enterprise secure firewall boundary
      • API gateway integration – Requests go through existing security infrastructure
      • No direct database access – MCP connects to and accesses data through secure APIs rather than directly against the database

The main idea is that MCP does not bypass existing security. It works within the same security framework as other enterprise applications; it simply exposes a smarter interface.

 

MCP Implementation & Demonstration

In this section, I will demonstrate a simple use case in which an MCP client (Claude Desktop) interacts with a "Finance Manager" MCP server that can fetch financial information from a database.

Financial data is maintained in Postgres database tables. The MCP client (the Claude Desktop app) requests information about a customer account; the MCP host discovers the appropriate capability based on the user prompt and invokes the corresponding MCP tool function, which fetches data from the database table.

To put the MCP client and server into action, three parts need to be configured:

      • Backend Database
      • MCP server implementation
      • MCP server registration in MCP Host

Backend Database

The Postgres "accounts" table maintains account data with the information below; the "transactions" table maintains the transactions performed on those accounts.

[Image: the "accounts" table]

[Image: the "transactions" table]

MCP server implementation

[Image: MCP server implementation (Python)]

The FastMCP class implements the MCP server components; creating an instance of it initializes those components and makes them available for building enterprise MCP server capabilities.

The "@mcp.tool()" decorator marks a function as an MCP capability. Decorated functions are exposed to AI applications and are invoked from the MCP host to perform their designated actions.

For clients to invoke MCP capabilities, the MCP server must be up and running. In this example, two functions are defined as MCP tool capabilities:

      • get_account_details – Accepts an account number as an input parameter, queries the "accounts" table, and returns the account information
      • add_transaction – Accepts an account number and a transaction amount as parameters and inserts a record into the "transactions" table (a minimal sketch of both tools follows this list)
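Since the implementation screenshot is not reproduced here, the following is a minimal sketch of what such a server could look like, assuming the official MCP Python SDK (FastMCP) and the psycopg2 Postgres driver. The table names match the demo, but the column names and connection details are assumptions.

```python
import psycopg2
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("finance-manager")

def get_connection():
    # Placeholder credentials; in practice, pull these from a secrets manager.
    return psycopg2.connect(host="localhost", dbname="finance", user="mcp_user", password="change-me")

@mcp.tool()
def get_account_details(account_number: str) -> dict:
    """Return account information for the given account number."""
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT account_number, account_holder, balance FROM accounts WHERE account_number = %s",
            (account_number,),
        )
        row = cur.fetchone()
    if row is None:
        return {"error": f"No account found for {account_number}"}
    return {"account_number": row[0], "account_holder": row[1], "balance": float(row[2])}

@mcp.tool()
def add_transaction(account_number: str, amount: float) -> str:
    """Insert a transaction record for the given account."""
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO transactions (account_number, amount) VALUES (%s, %s)",
            (account_number, amount),
        )
    return f"Transaction of {amount} recorded for account {account_number}"

if __name__ == "__main__":
    mcp.run()  # stdio transport, which is what Claude Desktop launches
```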

 

MCP Server Registration in MCP Host

For AI applications to invoke MCP server capabilities, the MCP server must be registered with the MCP host on the client side. For this demonstration, I am using Claude Desktop as the MCP client to interact with the MCP server.

First, the MCP server is registered with the MCP host in Claude Desktop as follows:

Claude Desktop -> Settings -> Developer -> Local MCP Servers -> Click “Edit Config”

[Image: Claude Desktop developer settings]

Open the "claude_desktop_config" JSON file in Notepad and add the configuration shown below. The configuration tells the MCP host where the MCP server implementation is located and which command to run to start it. Save the file and close it.

[Image: registering the MCP server in claude_desktop_config.json]
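For reference, a typical entry in claude_desktop_config.json looks roughly like this; the server name matches the demo, while the command and script path are placeholders for your own environment.

```json
{
  "mcpServers": {
    "finance-manager": {
      "command": "python",
      "args": ["C:\\mcp-servers\\finance_manager_server.py"]
    }
  }
}
```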

Restart the "Claude Desktop" application and go to Settings -> Developer -> Local MCP servers. The newly added MCP server (finance-manager) will be shown in a running state:

[Image: finance-manager MCP server in a running state]

Go to the chat window in Claude Desktop. Issue a prompt to fetch the details of an account in the "accounts" table and review the response:

 

[Image: Claude Desktop invoking the MCP tool]

User Prompt: The user issues a prompt to fetch the details of an account.

MCP Discovery & Invoke: The client (Claude Desktop) processes the prompt, interacts with the MCP host, automatically discovers the relevant capability – the get_account_details function in this case – without the user explicitly mentioning the function name, and invokes the function with the necessary parameter.

Response: The MCP server processes the request, fetches the account details from the table, and returns them to the client. The client formats the response and presents it to the user.

Here is another example, adding a transaction to the backend table for an account:

[Image: adding a transaction through the MCP server]

Here, the "add_transaction" capability has been invoked to add a transaction record to the "transactions" table. In the chat window, you can see which MCP function is being invoked, along with the request and response bodies.

The record has been successfully added into the table,

[Image: the new row in the Postgres "transactions" table]

Impressive, isn’t it..!!

There is a wide range of use cases for implementing MCP servers and integrating them with enterprise AI systems, adding an intelligent layer for interacting with enterprise data sources.

You may also wonder, as I did, how MCP (Model Context Protocol) differs from RAG (Retrieval Augmented Generation). Based on my research, I curated a comparison matrix of features to add more clarity:

 

Aspect | RAG (Retrieval Augmented Generation) | MCP (Model Context Protocol)
Purpose | Retrieve unstructured docs to improve LLM responses | AI agents access structured data/tools dynamically
Data Type | Unstructured text (PDFs, docs, web pages) | Structured data (JSON, APIs, databases)
Workflow | Retrieve → Embed → Prompt injection → Generate | AI requests context → Protocol delivers → AI reasons
Context Delivery | Text chunks stuffed into prompt | Structured objects via standardized interface
Token Usage | High (full text in context) | Low (references/structured data)
Action Capability | Read-only (information retrieval) | Read + Write (tools, APIs, actions)
Discovery | Pre-indexed vector search | Runtime tool/capability discovery
Latency | Retrieval + embedding time | Real-time protocol calls
Use Case | Q&A over documents, chatbots | AI agents, tool calling, enterprise systems
Maturity | Widely adopted, mature ecosystem | Emerging standard (2025+)
Complexity | Vector DB + embedding pipeline | Protocol implementation + AI agent

 

Conclusion

MCP servers extend the capabilities of AI assistants by allowing them to interact with external services and data sources using natural language commands. The Model Context Protocol (MCP) has a wide range of use cases, and several enterprises have already implemented and hosted MCP servers for AI clients to integrate with.

Some of the prominent MCP servers include:

GitHub MCP Server: Allows AI to manage repositories, issues, pull requests, and monitor CI/CD workflows directly within the development environment.

Azure DevOps MCP Server: Integrates AI with Azure DevOps services for managing pipelines, work items, and repositories, ideal for teams within the Microsoft ecosystem.

PostgreSQL MCP Server: Bridges the gap between AI and databases, allowing natural language queries, schema exploration, and data analysis without manual SQL scripting.

Slack MCP Server: Turns Slack into an AI-powered collaboration hub, enabling message posting and channel management.

AI and the Future of Financial Services UX | https://blogs.perficient.com/2025/12/01/ai-banking-transparency-genai-financial-ux/ | Mon, 01 Dec 2025

I think about the early ATMs now and then. No one knew the “right” way to use them. I imagine a customer in the 1970s standing there, card in hand, squinting at this unfamiliar machine and hoping it would give something back; trying to decide if it really dispensed cash…or just ate cards for sport. That quick panic when the machine pulled the card in is an early version of the same confusion customers feel today in digital banking.

People were not afraid of machines. They were afraid of not understanding what the machine was doing with their money.

Banks solved it by teaching people how to trust the process. They added clear instructions, trained staff to guide customers, and repeated the same steps until the unfamiliar felt intuitive. 

However, the stakes and complexity are much higher now, and AI for financial product transparency is becoming essential to an optimized banking UX.

Today’s banking customer must navigate automated underwriting, digital identity checks, algorithmic risk models, hybrid blockchain components, and disclosures written in a language most people never use. Meanwhile, the average person is still struggling with basic money concepts.

FINRA reports that only 37% of U.S. adults can answer four out of five financial literacy questions (FINRA Foundation, 2022).

Pew Research finds that only about half of Americans understand key concepts like inflation and interest (Pew Research Center, 2024).

Financial institutions are starting to realize that clarity is not a content task or a customer service perk. It is structural. It affects conversion, compliance, risk, and trust. It shapes the entire digital experience. And AI is accelerating the pressure to treat clarity as infrastructure.

When customers don’t understand, they don’t convert. When they feel unsure, they abandon the flow. 

 

How AI is Improving UX in Banking (And Why Institutions Need it Now)

Financial institutions often assume customers will “figure it out.” They will Google a term, reread a disclosure, or call support if something is unclear. In reality, most customers simply exit the flow.

The CFPB shows that lower financial literacy leads to more mistakes, higher confusion, and weaker decision-making (CFPB, 2019). And when that confusion arises during a digital journey, customers quietly leave without resolving their questions.

This means every abandoned application costs money. Every misinterpreted term creates operational drag. Every unclear disclosure becomes a compliance liability. Institutions consistently point to misunderstanding as a major driver of complaints, errors, and churn (Lusardi et al., 2020).

Sometimes it feels like the industry built the digital bank faster than it built the explanation for it.

Where AI Makes the Difference

Many discussions about AI in financial services focus on automation or chatbots, but the real opportunity lies in real-time clarity. Clarity that improves financial product transparency and streamlines customer experience without creating extra steps.

In-context Explanations That Improve Understanding

Research in educational psychology shows people learn best when information appears the moment they need it. Mayer (2019) demonstrates that in-context explanations significantly boost comprehension. Instead of leaving the app to search unfamiliar terms, customers receive a clear, human explanation on the spot.

Consistency Across Channels

Language in banking is surprisingly inconsistent. Apps, websites, advisors, and support teams all use slightly different terms. Capgemini identifies cross-channel inconsistency as a major cause of digital frustration (Capgemini, 2023). A unified AI knowledge layer solves this by standardizing definitions across the system.

Predictive Clarity Powered by Behavioral Insight

Patterns like hesitation, backtracking, rapid clicking, or form abandonment often signal confusion. Behavioral economists note these patterns can predict drop-off before it happens (Loibl et al., 2021). AI can flag these friction points and help institutions fix them.
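As a simplified, hypothetical illustration of the idea (the event names, weights, and threshold are assumptions, not a reference to any specific analytics product):

```python
from collections import Counter

# Assumed weights for how strongly each behavior tends to signal confusion.
FRICTION_WEIGHTS = {"backtrack": 2, "rapid_click": 1, "field_edit_loop": 2, "idle_hesitation": 1}

def friction_score(events: list[str]) -> int:
    """Score a session's clickstream events against the friction weights."""
    counts = Counter(events)
    return sum(FRICTION_WEIGHTS.get(event, 0) * n for event, n in counts.items())

def needs_clarity_intervention(events: list[str], threshold: int = 5) -> bool:
    """Flag a session whose behavior suggests the customer may abandon the flow."""
    return friction_score(events) >= threshold

# Example: repeated backtracking plus hesitation trips the threshold.
session = ["backtrack", "idle_hesitation", "backtrack", "rapid_click"]
print(needs_clarity_intervention(session))  # True (score = 6)
```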

24/7 Clarity, Not 9–5 Support

Accenture reports that most digital banking interactions now occur outside of business hours (Accenture, 2023). AI allows institutions to provide accurate, transparent explanations anytime, without relying solely on support teams.

At its core, AI doesn’t simplify financial products. It translates them.

What Strong AI-Powered Customer Experience Looks Like

Onboarding that Explains Itself

  • Mortgage flows with one-sentence escrow definitions.
  • Credit card applications with visual explanations of usage.
  • Hybrid products that show exactly what blockchain is doing behind the scenes.

The CFPB shows that simpler, clearer formats directly improve decision quality (CFPB, 2020).

A Unified Dictionary Across Channels

The Federal Reserve emphasizes the importance of consistent terminology to help consumers make informed decisions (Federal Reserve Board, 2021). Some institutions now maintain a centralized term library that powers their entire ecosystem, creating a cohesive experience instead of fragmented messaging.

Personalization Based on User Behavior

Educational nudges, simplified paths, multilingual explanations. Research shows these interventions boost customer confidence (Kozup & Hogarth, 2008). 

Transparent Explanations for Hybrid or Blockchain-backed Products

Customers adopt new technology faster when they understand the mechanics behind it (University of Cambridge, 2021). AI can make complex automation and decentralized components understandable.

The Urgent Responsibilities That Come With This

 

GenAI can mislead customers without strong data governance and oversight. Poor training data, inconsistent terminology, or unmonitored AI systems create clarity gaps. That’s a problem because those gaps can become compliance issues. The Financial Stability Oversight Council warns that unmanaged AI introduces systemic risk (FSOC, 2023). The CFPB also emphasizes the need for compliant, accurate AI-generated content (CFPB, 2024).

Customers are also increasingly wary of data usage and privacy. Pew Research shows growing fear around how financial institutions use personal data (Pew Research Center, 2023). Trust requires transparency.

Clarity without governance is not clarity. It’s noise.

And institutions cannot afford noise.

What Institutions Should Build Right Now

To make clarity foundational to customer experience, financial institutions need to invest in:

  • Modern data pipelines to improve accuracy
  • Consistent terminology and UX layers across channels
  • Responsible AI frameworks with human oversight
  • Cross-functional collaboration between compliance, design, product, and analytics
  • Scalable architecture for automated and decentralized product components
  • Human-plus-AI support models that enhance, not replace, advisors

When clarity becomes structural, trust becomes scalable.

Why This Moment Matters

I keep coming back to the ATM because it perfectly shows what happens when technology outruns customer understanding. The machine wasn’t the problem. The knowledge gap was. Financial services are reliving that moment today.

Customers cannot trust what they do not understand.

And institutions cannot scale what customers do not trust.

GenAI gives financial organizations a second chance to rebuild the clarity layer the industry has lacked for decades, and not as marketing. Clarity, in this new landscape, truly is infrastructure.

References 

  • Accenture. (2023). Banking top trends 2023. https://www.accenture.com
  • Capgemini. (2023). World retail banking report 2023. https://www.capgemini.com
  • Consumer Financial Protection Bureau. (2019). Financial well-being in America. https://www.consumerfinance.gov
  • Consumer Financial Protection Bureau. (2020). Improving the clarity of mortgage disclosures. https://www.consumerfinance.gov
  • Consumer Financial Protection Bureau. (2024). Supervisory highlights: Issue 30. https://www.consumerfinance.gov
  • Federal Reserve Board. (2021). Consumers and mobile financial services. https://www.federalreserve.gov
  • FINRA Investor Education Foundation. (2022). National financial capability study. https://www.finrafoundation.org
  • Financial Stability Oversight Council. (2023). Annual report. https://home.treasury.gov
  • Kozup, J., & Hogarth, J. (2008). Financial literacy, public policy, and consumers’ self-protection. Journal of Consumer Affairs, 42(2), 263–270.
  • Loibl, C., Grinstein-Weiss, M., & Koeninger, J. (2021). Consumer financial behavior in digital environments. Journal of Economic Psychology, 87, 102438.
  • Lusardi, A., Mitchell, O. S., & Oggero, N. (2020). The changing face of financial literacy. University of Pennsylvania, Wharton School.
  • Mayer, R. (2019). The Cambridge handbook of multimedia learning. Cambridge University Press.
  • Pew Research Center. (2023). Americans and data privacy. https://www.pewresearch.org
  • Pew Research Center. (2024). Americans and financial knowledge. https://www.pewresearch.org
  • University of Cambridge. (2021). Global blockchain benchmarking study. https://www.jbs.cam.ac.uk
Chandra OCR: The BEST in Open-Source AI Document Parsing | https://blogs.perficient.com/2025/11/19/chandra-ocr-open-source-document-parsing/ | Wed, 19 Nov 2025

In the specialized field of Optical Character Recognition (OCR), a new open-source model from Datalab is setting a new benchmark for accuracy and versatility. Chandra OCR, released in October 2025, has rapidly ascended to the top of the leaderboards, outperforming even proprietary giants like GPT-4o and Gemini Pro on key benchmarks.

Beyond Simple Text Extraction

Chandra is not just another OCR tool; it’s a comprehensive document AI solution. Unlike traditional pipeline-based approaches that process documents in chunks, Chandra utilizes full-page decoding. This allows it to understand the entire context of a page, leading to significant improvements in accuracy and layout awareness.

Key Capabilities:

  • Layout-Aware Output: Chandra preserves the original document structure, outputting to Markdown, HTML, or JSON with remarkable fidelity.
  • Image & Figure Extraction: It can identify, caption, and extract images and figures from within a document.
  • Advanced Language Support: Chandra supports over 40 languages and can even read handwritten text, making it a truly global solution.
  • Specialized Content: The model excels at handling complex content, including mathematical equations and intricate tables.

Unrivaled Performance

Category | Score | Rank
Tables | 88.0 | #1
Old Scans Math | 80.3 | #1
Old Scans | 50.4 | #1
Long Tiny Text | 92.3 | #1
Base Documents | 99.9 | Near-Perfect

Chandra’s performance on the independent olmOCR benchmark is nothing short of revolutionary. With an overall score of 83.1%, it has established a new state-of-the-art for open-source OCR models.

[Image: Chandra OCR benchmark ranking] Source: https://medium.com/data-science-in-your-pocket/chandra-ocr-beats-deepseek-ocr-47267b6f4895

Accessible and Production-Ready

Datalab has made Chandra widely accessible. It is available as an open-source project on GitHub and Hugging Face, and also as a hosted API with a free tier for developers to get started. For high-throughput applications, quantized versions of the model are available for on-premises deployment, capable of processing up to 4 pages per second on an H100 GPU.

Why Chandra OCR Matters

The release of Chandra OCR is a watershed moment for document AI. It provides a free, open-source, and commercially viable alternative to expensive proprietary solutions, without compromising on performance. For developers and businesses that rely on accurate and structured data extraction, Chandra OCR is a game-changer.

Read more

Cross-posted from https://www.linkedin.com/pulse/chandra-ocr-best-open-source-ai-document-parsing-matthew-aberham-3fx1e

Building for Humans – Even When Using AI | https://blogs.perficient.com/2025/10/29/building-for-humans-even-when-using-ai/ | Thu, 30 Oct 2025

Artificial Intelligence (AI) is everywhere. Every month brings new features promising “deeper thinking” and “agentic processes.” Tech titans are locked in trillion-dollar battles. Headlines scream about business, economic, and societal concerns. Skim the news and you’re left excited and terrified!

Here’s the thing: we’re still human – virtues, flaws, quirks, and all. We’ve always had our agency, collectively shaping our future. Even now, while embracing AI, we need to keep building for us.

We Fear What We Do Not Know

“AI this… AI that…” Even tech leaders admit they don’t fully understand it. Sci-fi stories warn us with cautionary tales. News cycles fuel anxiety about job loss, disconnected human relationships, and cognitive decline.

Luckily, this round of innovation is surprisingly transparent. You can read the Attention is All You Need paper (2017) that started it all. You can even build your own AI if you want! This isn’t locked behind a walled garden. That’s a good thing.

What the Past Can Tell Us

I like to look at the past to gauge what we can expect from the future. Humans have feared every major invention and technological breakthrough. We expect the worst, but most have proven to improve life.

We’ve always had distractions from books, movies, games, to TikTok brain-rot. Some get addicted and go too deep, while others thrive. People favor entertainment and leisure activities – this is nothing new – so I don’t feel like cognitive decline is anything to worry about. Humanity has overcome all of it before and will continue to do so.

 


 

Humans are Simple (and Complicated) Creatures

We look for simplicity and speed. Easy to understand, easy to look at, easy to interact with, easy to buy from. We skim read, we skip video segments, we miss that big red CTA button. The TL;DR culture rules. Even so, I don’t think we’re at risk of the future from Idiocracy (2006).

That’s not to say that we don’t overcomplicate things. The Gods Must Be Crazy movie (1980) has a line that resonates, “The more [we] improved [our] surroundings to make life easier, the more complicated [we] made it.” We bury our users (our customers) in detail when they just want to skim, skip, and bounce.

Building for Computers

The computer revolution (1950s-1980s) started with machines serving humans. Then came automation. And eventually, systems talking to systems.

Fast-forward to the 2010s, where marketers gamed the algorithms to win at SEO, SEM, and social networking. Content was created for computers, not humans. Now we have the dead internet theory. We were building without humans in mind.

We will still have to build for systems to talk to systems. That won’t change. APIs are more important than ever, and agentic AI relies on them. Because of this, it is crucial to make sure what you are building “plays well with others”. But AIs and APIs are tools, not the audience.

Building for Humans

Google used to tell us all to build what people want, as opposed to gaming their systems. I love that advice. However, at first it felt unrealistic…gaming the system worked. Then after many updates, for a short bit, it felt like Google was getting there! Then it got worse and feels like pay-to-play recently.

Now AI is reshaping search and everything else. You can notice the gap between search results and AI recommendations. They don’t match. AI assistants aim to please humans, which is great, until it inevitably changes.

Digital teams must build for AI ingestion, but if you neglect the human aspect and the end user experience, then you will only see short-term wins.

Examples of Building for Humans

  • Make it intuitive and easy. Simple for end users means a lot of work for builders, but it is worth it! Reduce their cognitive load.
  • Build with empathy. Appeal to real people, not just personas and bots. Include feedback loops so they can feel heard.
  • Get to the point. Don’t overwhelm users, instead help them take action! Delight your customers by saving them time.
  • Add humor when appropriate. Don’t be afraid to be funny, weird, or real…it connects on a human level.
  • Consider human bias. Unlike bots and crawlers, humans aren’t always logical. Design for human biases.
  • Watch your users. Focus groups or digital tracking tools are great for observing. Learn from real users and iterate.

Conclusion

Building for humans never goes out of style. Whatever comes after AI will still need to serve people. So as tech evolves, let’s keep honing systems that work with and around our human nature.

……

If you are looking for that extra human touch (built with AI), reach out to your Perficient account manager or use our contact form to begin a conversation.

Salesforce AI for Financial Services: Practical Capabilities That Move the Organization Forward | https://blogs.perficient.com/2025/10/20/salesforce-ai-for-financial-services-practical-capabilities-that-move-the-organization-forward/ | Mon, 20 Oct 2025

Turn on CNBC during almost any trading day and you’ll see and hear plenty of AI buzz that sounds great, and may look great in a deck, but falls short in regulated industries. For financial services firms, AI must do two things at once: unlock genuine business value and satisfy strict compliance, privacy, and audit requirements. Salesforce’s AI stack — led by Einstein GPT, Data Cloud, and integrated with MuleSoft, Slack, and robust security controls — is engineered to meet that dual mandate. Here’s a practical look at what Salesforce AI delivers for banks, insurers, credit unions, wealth managers, and capital markets firms, and how to extract measurable value without trading off controls and/or governance.

What Salesforce AI actually is (and why it matters for Financial Services)

Salesforce is widely adopted by financial services firms, with over 150,000 companies worldwide using its CRM, including a significant portion of the U.S. market, where 83% of businesses opt for its Financial Services Cloud ("FSC"). Major financial institutions like Wells Fargo, Bank of America Merrill Lynch and The Bank of New York are among its users, demonstrating its strong presence within the industry. Salesforce combines generative AI, predictive models, and enterprise data plumbing into a single ecosystem. Key capabilities include:

  • Einstein GPT: Generative AI tailored for CRM workflows — draft client communications, summarize notes, and surface contextual insights using your internal data.
  • Data Cloud: A real-time customer data platform that ingests, unifies, and models customer profiles at scale, enabling AI to operate on a trusted single source of truth.
  • Tableau + CRM Analytics: Visualize model outcomes, monitor performance, and create operational dashboards that align AI outputs with business KPIs.
  • MuleSoft: Connectors and APIs to bring core banking, trading, and ledger systems into the loop securely.
  • Slack & Flow (and Flow Orchestrator): Operationalize AI outputs into workflows, approvals, and human-in-the-loop processes.

For financial services, that integration matters more than flashy demos: accuracy, traceability, and context are non-negotiable. Salesforce’s ecosystem lets you apply AI where it impacts revenue, risk, and customer retention — and keep audit trails for everything.

High-value financial services use cases

Here are the pragmatic use cases where Salesforce AI delivers measurable ROI:

Client advisory and personalization

Generate personalized portfolio reviews, client outreach, or renewal communications using Einstein GPT combined with up-to-date holdings and risk profiles from Data Cloud. The result: more relevant outreach and higher conversion rates with less advisor time.

Wealth management — scalable advice and relationship mining

AI-driven summarization of client meetings, automated risk-tolerance classifiers, and opportunity scoring help advisors prioritize high-value clients and surface cross-sell opportunities without manual data wrangling.

Commercial lending — faster decisioning and better risk controls

Combine predictive credit risk models with document ingestion (via MuleSoft and integrated OCR) to auto-populate loan applications, flag exceptions, and route for human review where model confidence is low.

Fraud, AML, and compliance augmentation

Use real-time customer profiles and anomaly detection to surface suspicious behaviors. AI can triage alerts and summarize evidence for investigators, improving throughput while preserving explainability for regulators. AI can also reduce the volume of false alerts, which is the bane of every compliance officer ever.

Customer support and claims

RAG-enabled virtual assistants (Einstein + Data Cloud) pull from policy language, transaction history, and client notes to answer common questions or auto-draft claims responses — reducing service time and improving consistency. The virtual assistants can also interact in multiple languages, which helps reduce customer turnover for non-English writing clients.

Sales and pipeline acceleration

Predictive lead scoring, propensity-to-buy models, and AI-suggested next-best actions increase win rates and shorten sales cycles. Integrated workflows push suggestions to reps in Slack or the Salesforce console, making adoption frictionless.

Why Salesforce’s integrated approach reduces risk

Financial firms can’t treat AI as a separate experiment. Salesforce’s value proposition is that AI is embedded into systems that already handle customer interactions, security, and governance. That produces the following practical advantages:

Single source of truth

Data Cloud reduces conflicting customer records and stale insights, which directly lowers the risk of AI producing inappropriate or inaccurate outputs.

Controlled model access and hosting options

Enterprises can choose where data and model inference occur, including private or managed-cloud options, helping meet residency and confidentiality requirements.

Explainability and audit trails

Salesforce logs user interactions, AI-generated outputs, and data lineage into the platform. That creates the documentation regulators ask for and lets financial services executives investigate where models made decisions.

Human-in-the-loop and confidence thresholds

Workflows can be configured so that high-risk or low-confidence outputs require human approval. That’s essential for credit decisions, compliance actions, and investment advice.
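A minimal, platform-agnostic sketch of that pattern is shown below; the action names and threshold are assumptions, and in a Salesforce implementation this logic would typically live in Flow or Apex rather than standalone Python.

```python
# Actions considered too risky to ever auto-execute (assumed list).
HIGH_RISK_ACTIONS = {"account_closure", "fund_transfer", "credit_decision"}

def route_ai_output(action: str, confidence: float, threshold: float = 0.85) -> str:
    """Decide whether an AI-suggested action proceeds automatically or goes to a human reviewer."""
    if action in HIGH_RISK_ACTIONS:
        return "human_review"  # hard block on actions with material customer impact
    if confidence < threshold:
        return "human_review"  # low model confidence requires approval
    return "auto_execute"

print(route_ai_output("draft_client_email", 0.92))  # auto_execute
print(route_ai_output("fund_transfer", 0.99))       # human_review
```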

Implementation considerations for regulated firms

To assist in your planned deployment of Salesforce AI in financial services, here’s a checklist of practical guardrails and steps:

Start with business outcomes, not models

  • Identify high-frequency, low-risk tasks for pilots (e.g., document summarization, inquiry triage) and measure lift on KPIs like turnaround time, containment rate, and advisor productivity.

Clean and govern your data

Invest in customer identity resolution, canonicalization, and metadata tagging in Data Cloud. Garbage in, garbage out is especially painful when compliance hangs on a model’s output.

Create conservative guardrails

Hard-block actions that have material customer impact (e.g., account closure, fund transfers) from automated flows. Use AI to assist drafting and recommendation, not to execute high-risk transactions autonomously.

Establish model testing and monitoring

Implement A/B tests, accuracy benchmarks, and drift detection. Integrate monitoring into Tableau dashboards and set alerts for performance degradation or unusual patterns.
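One common, tool-agnostic way to quantify input drift is the Population Stability Index (PSI). The sketch below is a generic example with synthetic data, not a Salesforce or Tableau API.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline distribution and current data; > 0.2 is a common drift threshold."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)  # avoid log(0) on empty bins
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(650, 50, 10_000)  # e.g., credit scores at model training time
current = rng.normal(630, 60, 10_000)   # scores observed in production
psi = population_stability_index(baseline, current)
print(f"PSI = {psi:.3f}")
if psi > 0.2:
    print("Drift alert: investigate model inputs or retrain")
```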

Document everything for auditors and regulators

Maintain clear logs of training data sources, prompt templates, model versions, and human overrides. Salesforce’s native logging plus orchestration records from Flow help with this.

Train users and change-manage

Advisors, compliance officers, and client service reps should be part of prompt tuning and feedback loops. Incentivize flagging bad outputs — their corrections will dramatically improve model behavior.

Measurable outcomes to expect

When implemented with discipline, financial services firms typically see improvements including:

  • Reduced average handling time and faster loan turnaround
  • Higher client engagement and improved cross-sell conversion
  • Fewer false positives and faster investigator resolution times
  • Better advisor productivity via automated notes and suggested actions

Those outcomes translate into cost savings, improved regulatory posture, and revenue lift — the hard metrics CFOs, CROs, and CCOs require.

Final thoughts — pragmatic AI adoption

Salesforce gives financial institutions a practical path to embed AI into customer-facing and operational workflows without ripping up existing systems. The power isn’t just in the model; it’s in the combination of unified data (Data Cloud), generative assistance (Einstein GPT), secure connectors (MuleSoft), and operationalization (Flows and Slack). If you treat governance, monitoring, and human oversight as first-class citizens, AI becomes an accelerant — not a liability.

To help financial services firms either install or expand their Salesforce capability, Perficient has a 360-degree strategic partnership with Salesforce. While Salesforce itself is the provider of the platform and technology, Perficient, as a global digital consultancy, partners with Salesforce to offer its expertise in implementation, customization, and optimization of Salesforce solutions, leveraging Salesforce's AI-first technologies and platform to deliver consulting, implementation, and integration services. Working together, the Salesforce and Perficient partnership helps mutual clients build customer-centric solutions and operate as "agentic enterprises."

 

 

Navigating the AI Frontier: Data Governance Controls at SIFIs in 2025 | https://blogs.perficient.com/2025/10/13/navigating-the-ai-frontier-data-governance-controls-at-sifis-in-2025/ | Mon, 13 Oct 2025

The Rise of AI in Banking

AI adoption in banking has accelerated dramatically. Predictive analytics, generative AI, and autonomous agentic systems are now embedded in core banking functions such as loan underwriting, compliance including fraud detection and AML, and customer engagement. 

A recent White Paper by Perficient affiliate Virtusa Agentic Architecture in Banking – White Paper | Virtusa documented that when designed with modularity, composability, Human-in-the-Loop (HITL), and governance, agentic AI agents empower a more responsive, data-driven, and human-aligned approach in financial services.

However, the rollout of agentic and generative AI tools without proper controls poses significant risks. Without a unified strategy and governance structure, Systemically Important Financial Institutions ("SIFIs") risk deploying AI in ways that are opaque, biased, or non-compliant. As AI becomes the engine of next-generation banking, institutions must move beyond experimentation and establish enterprise-wide controls.

Key Components of AI Data Governance

Modern AI data governance in banking encompasses several critical components:

1. Data Quality and Lineage: Banks must ensure that the data feeding AI models is accurate, complete, and traceable.

Please refer to Perficient’s recent blog on this topic here:

AI-Driven Data Lineage for Financial Services Firms: A Practical Roadmap for CDOs / Blogs / Perficient

2. Model Risk Management: AI models must be rigorously tested for fairness, accuracy, and robustness. It has been documented many times that the bias of those who code lending decision-making software can result in biased lending decisions (a simple fairness check is sketched after this list).

3. Third-Party Risk Oversight: Governance frameworks now include vendor assessments and continuous monitoring. Large financial institutions do not have to develop AI technology solutions themselves (Buy vs Build) but they do need to monitor the risks of having key technology infrastructure owned and/or controlled by third parties.

4. Explainability and Accountability: Banks are investing in explainable AI (XAI) techniques. Not everyone is a tech expert, and models need to be easily explainable to auditors, regulators, and when required, customers.

5. Privacy and Security Controls: Encryption, access controls, and anomaly detection are essential. These controls already exist in legacy systems, and it is natural to extend these proven controls to the AI environment, whether that means narrow AI, machine learning, or more advanced agentic and/or generative AI.
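As one simple illustration of the kind of fairness test a model risk team might run (demographic parity difference, computed here on synthetic, assumed data):

```python
import numpy as np

def demographic_parity_difference(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between two applicant groups (0 = equal rates)."""
    rate_a = approved[group == "A"].mean()
    rate_b = approved[group == "B"].mean()
    return abs(rate_a - rate_b)

# Synthetic decisions from a hypothetical lending model.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
gap = demographic_parity_difference(approved, group)
print(f"Approval-rate gap: {gap:.2f}")  # 0.20 here; flag for review if above the agreed tolerance
```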

Industry Collaboration and Standards

The FINOS Common Controls for AI Services initiative is a collaborative, cross-industry effort led by the Fintech Open Source Foundation (FINOS) to develop open-source, technology-neutral baseline controls for safe, compliant, and trustworthy AI adoption in financial services. By pooling resources from major banks, cloud providers, and technology vendors, the initiative creates standardized, open-source, technology-neutral controls, peer-reviewed governance frameworks, and real-time validation mechanisms to help financial institutions meet complex regulatory requirements for AI.

Key participants of FINOS include financial institutions such as BMO, Citibank, Morgan Stanley, and RBC, and key Technology & Cloud Providers include Perficient’s technology partners including Microsoft, Google Cloud, and Amazon Web Services (AWS). The FINOS Common Controls for AI Services initiative aims to create vendor-neutral standards for secure AI adoption in financial services.

At Perficient, we have seen leading financial institutions, including some of the largest SIFIs, establishing formal governance structures to oversee AI initiatives. Broadly, these governance structures typically include:

– Executive Steering Committees at the legal entity level
– Working Groups, at the legal entity as well as the divisional, regional and product levels
– Real-Time Dashboards that allow customizable reporting for boards, executives, and auditors

This multi-tiered governance model promotes transparency, agility, and accountability across the organization.

Regulatory Landscape in 2025

Regulators worldwide are intensifying scrutiny of Artificial Intelligence in banking. The EU AI Act, the U.S. SEC's cybersecurity disclosure rules, and the National Institute of Standards and Technology ("NIST") AI Risk Management Framework are shaping how financial institutions must govern AI systems.

Key regulatory expectations include:

– Risk-Based Classification
– Human Oversight
– Auditability
– Bias Mitigation

Some of these, and other regulatory regimes have been documented and summarized by Perficient at the following links:

AI Regulations for Financial Services: Federal Reserve / Blogs / Perficient

AI Regulations for Financial Services: European Union / Blogs / Perficient 

[Image: EU AI Act risk-based approach]

The Road Ahead

As AI becomes integral to banking operations, data governance will be the linchpin of responsible innovation. Banks must evolve from reactive compliance to proactive risk management, embedding governance into every stage of the AI lifecycle.

The journey begins with data—clean, secure, and well-managed. From there, institutions must build scalable frameworks that support ethical AI development, align with regulatory mandates, and deliver tangible business value.

Readers are urged to review the links in this blog and then contact Perficient, a global AI-first digital consultancy, to discuss how a tailored assessment and pilot design can map directly to your audit and governance priorities and ensure that all new tools are rolled out in a well-designed data governance environment.

AI-Driven Data Lineage for Financial Services Firms: A Practical Roadmap for CDOs | https://blogs.perficient.com/2025/10/06/ai-driven-data-lineage-for-financial-services-firms-a-practical-roadmap-for-cdos/ | Mon, 06 Oct 2025

Introduction

Imagine: just as you're sipping your Monday morning coffee and looking forward to a hopefully quiet week in the office, your Outlook dings and you see that your bank's primary federal regulator is demanding the full input-to-report lineage for dozens of numbers on both sides of the balance sheet and the income statement in your latest regulatory filing. The full first-day-letter responses are due next Monday, and as your headache starts you remember that the spreadsheet owner is on leave, the ETL developer is debugging a separate pipeline, and your overworked and understaffed reporting team has three different ad hoc diagrams that neither match nor reconcile.

If you can relate to that scenario, or your back starts to tighten in empathy, you're not alone. Artificial Intelligence ("AI") driven data lineage for banks is no longer a nice-to-have. At Perficient, working with our clients in banking, insurance, credit unions, and asset management, we find that it's the practical answer to audit pressure, model risk (remember Lehman Brothers and Bear Stearns), and the brittle manual processes that create blind spots. This blog post explains what AI-driven lineage actually delivers, why it matters for banks today, and a phased roadmap Chief Data Officers ("CDOs") can use to get from pilot to production.

Why AI-driven data lineage for banks matters today

Regulatory pressure and real-world consequences

Regulators and supervisors emphasize demonstrable lineage, timely reconciliation, and governance evidence. In practice, financial services firms must show not just who touched data, but what data enrichment and/or transformations happened, why decisions used specific fields, and how controls were applied—especially under BCBS 239 guidance and evolving supervisory expectations.

In addition, as a former Risk Manager, the author knows he would have wanted this, and he has spoken with a plethora of financial services executives who want to know that the decisions they're making on liquidity funding, investments, recording P&L, and hedging trades are based on the correct numbers. This is especially challenging at global firms that operate in a transaction-heavy environment with constantly changing political, interest rate, foreign exchange, and credit risk conditions.

Operational risks that keep CDOs up at night

Manual lineage—spreadsheets, tribal knowledge, and siloed code—creates slow audits, delayed incident response, and fragile model governance. AI-driven lineage automates discovery and keeps lineage living and queryable, turning reactive fire drills into documented, repeatable processes that will greatly shorten the time QA tickets are closed and reduce compensation costs for misdirected funds. It also provides a scalable foundation for governed data practices without sacrificing traceability.

What AI-driven lineage and controls actually do (written by and for non-tech staff)

At its core, AI-driven data lineage combines automated scanning of code, SQL, ETL jobs, APIs, and metadata with semantic analysis that links technical fields to business concepts. Instead of a static map, executives using AI-driven data lineage get a living graph that shows data provenance at the field level: where a value originated, which transformations touched it, and which reports, models, or downstream services consume it.

AI adds value by surfacing hidden links. Natural language processing reads table descriptions, SQL comments, and even README files (yes they do still exist out there) to suggest business-term mappings that close the business-IT gap. That semantic layer is what turns a technical lineage graph into audit-ready evidence that regulators or auditors can understand.

How AI fixes the pain points keeping CDOs up at night

  • Faster audits: As a consultant at Perficient, I have seen AI-driven lineage that, after implementation, allowed executives to answer traceability questions in hours rather than weeks. Automated evidence packages—exportable lineage views and transformation logs—provide auditors with a reproducible trail.
  • Root-cause and incident response: When a report or model spikes, impact analysis highlights which datasets and pipelines are involved, clarifying responsibility and accountability, speeding remediation, and limiting downstream impact.
  • Model safety and feature provenance: Lineage that includes training datasets and feature transformations enables validation of model inputs, reproducibility of training data, and enforcement of data controls—supporting explainability and governance requirements. That allows your P&L to be more "R&S" (a slogan one client used: R&S P&L, meaning rock-solid profit and loss).

Tooling, architecture, and vendor considerations

When evaluating vendors, demand field-level lineage, semantic parsing (NLP across SQL, code, and docs), auditable diagram exports, and policy enforcement hooks that integrate with data protection tools. Deployment choices matter in regulated banking environments; hybrid architectures that keep sensitive metadata on-prem while leveraging cloud analytics often strike a pragmatic balance.

A practical, phased roadmap for CDOs

Phase 0 — Align leadership and define success: Engage the CRO, COO, and Head of Model Risk. Define 3–5 KPIs (e.g., lineage coverage, evidence time, mean time to root cause) and what “good” will look like; a minimal KPI sketch follows this roadmap. Perficient often does this during an evidence-gathering phase with clients who are just starting their Artificial Intelligence journey.
Phase 1 — Inventory and quick wins: Target a high-risk area such as regulatory reporting, a few production models, or a critical data domain. Validate inventory manually to establish baseline credibility.
Phase 2 — Pilot AI lineage and controls: Run automated discovery, measure accuracy and false positives, and quantify time savings. Expect iterations as the model improves with curated mappings.
Phases 1 and 2 are usually run by Perficient with clients as a Proof-of-Concept to show that lineage for the key feeds into and out of existing technology platforms can be captured.
Phase 3 — Operationalize and scale: Integrate lineage into release workflows, assign lineage stewards, set SLAs, and connect with ticketing and monitoring systems to embed lineage into day-to-day operations.
Phase 4 — Measure, refine, expand: Track KPIs, adjust models and rules, and broaden scope to additional reports, pipelines, and models as confidence grows.
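As an illustration of the Phase 0 KPIs, here is a minimal sketch using hypothetical baseline numbers; the only goal is to agree up front on how lineage coverage, evidence time, and mean time to root cause will be measured and tracked through the later phases.

```python
# Minimal sketch: the Phase 0 KPIs computed from hypothetical baseline figures.
def lineage_coverage(fields_with_lineage, total_critical_fields):
    """Share of critical data elements with documented, field-level lineage."""
    return fields_with_lineage / total_critical_fields

def mean_time_to_root_cause(hours_per_incident):
    """Average hours from a reporting incident to an identified root cause."""
    return sum(hours_per_incident) / len(hours_per_incident)

baseline = {
    "lineage_coverage": lineage_coverage(120, 600),        # 20% of critical fields documented today
    "evidence_time_days": 10,                               # days to assemble an audit evidence package
    "mttrc_hours": mean_time_to_root_cause([36, 52, 18]),   # roughly 35 hours per incident
}
print(baseline)  # re-run each quarter and compare against the Phase 0 targets
```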

Risks, human oversight, and governance guardrails

AI reduces toil but does not remove accountability. Executives, auditors, and regulators do, or should, require deterministic evidence and human-reviewed lineage. Treat AI outputs as recommendations subject to curator approval; this guards against what many financial services executives are now contending with: AI hallucinations.

Guardrails include exception-processing workflows for disputed outputs and tollgates that ensure security and privacy are baked into the design: DSPM, masking, and appropriate IAM controls should be integral, not afterthoughts.

Conclusion and next steps

AI data lineage for banks is a pragmatic control that directly addresses regulatory expectations, speeds audits, and reduces model and reporting risk. Start small, prove value with a focused pilot, and embed lineage into standard data stewardship processes. If you’re a CDO looking to move quickly with minimal risk, contact Perficient to run a tailored assessment and pilot design that maps directly to your audit and governance priorities. We’ll help translate proof into firm-wide control and confidence.

Trust, Data, and the Human Side of AI: Lessons From a Lifelong Automotive Leader https://blogs.perficient.com/2025/10/02/customer-experience-automotive-wally-burchfield/ https://blogs.perficient.com/2025/10/02/customer-experience-automotive-wally-burchfield/#respond Thu, 02 Oct 2025 17:05:47 +0000 https://blogs.perficient.com/?p=387540

In this episode of “What If? So What?”, Jim Hertzfeld sits down with Wally Burchfield, former senior executive at GM, Nissan, and Nissan United, to explore what’s driving transformation in the automotive industry and beyond. 

 Wally’s perspective is clear: in a world obsessed with automation and data, the companies that win will be the ones that stay human. 

 From “Build and Sell” to “Know and Serve” 

 The old model was simple: build a car, sell a car, repeat. But as Wally explains it, that formula no longer works in a world where customer expectations are shaped by digital platforms and instant personalization. “It’s not just about selling a product,” he said. “It’s about retaining the customer through a high-quality experience, one that feels personal, respectful, and effortless.” Every interaction matters, and every brand is in the experience business.

 Data Alone Doesn’t Build Loyalty – Trust Does 

 It’s true that organizations have more data than ever before. But as Wally points out, it’s not how much data you have, it’s what you do with it. The real differentiator is how responsibly, transparently, and effectively you use that data to improve the customer experience. 

 “You can have a truckload of data but if it doesn’t help you deliver value or build trust, it’s wasted,” Wally said. 

 When used carelessly, data can feel manipulative. When used well, it creates clarity, relevance, and long-term relationships. 

 AI Should Remove Friction, Not Feeling 

 Wally’s take on AI is refreshingly grounded. He sees it as a tool to reduce friction, not replace human connection. Whether it’s scheduling service appointments via SMS or filtering billions of digital signals, the best AI is invisible, working quietly in the background to make the customer feel understood. 

 Want to Win? Listen Better and Faster 

 At the end of the day, the brands that thrive won’t be the ones with the biggest data sets; they’ll be the ones that move fast, use data responsibly, and never lose sight of the customer at the center.

🎧 Listen to the full conversation with Wally Burchfield for more on how trust, data, and AI can work together to build lasting customer relationships—and why the best strategies are still the most human. 

Subscribe Where You Listen

Apple | Spotify | Amazon | Overcast | Watch the full video episode on YouTube

Meet our Guest – Wally Burchfield

Wally Burchfield is a veteran automotive executive with deep experience across retail, OEM operations, marketing, aftersales, dealer networks, and HR. 

He spent 20 years at General Motors before joining Nissan, where he held multiple VP roles across regional operations, aftersales, and HR. He later served as COO of Nissan United (TBWA), leading Tier 2/3 advertising and field marketing programs to support dealer and field team performance. Today, Wally runs a successful consulting practice helping OEMs, partners, and dealer groups solve complex challenges and drive results. A true “dealer guy”, he’s passionate about improving customer experience, strengthening OEM-dealer partnerships, and challenging the status quo to unlock growth. 

Follow Wally on LinkedIn  

Learn More about Wally Burchfield

 

Meet our Host

Jim Hertzfeld

Jim Hertzfeld is Area Vice President, Strategy for Perficient.

For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.

Connect with Jim:

LinkedIn | Perficient

 

 

Beyond Denial: How AI Concierge Services Can Transform Healthcare from Reactive to Proactive https://blogs.perficient.com/2025/09/24/beyond-denial-how-ai-concierge-services-can-transform-healthcare-from-reactive-to-proactive/ https://blogs.perficient.com/2025/09/24/beyond-denial-how-ai-concierge-services-can-transform-healthcare-from-reactive-to-proactive/#respond Wed, 24 Sep 2025 14:39:32 +0000 https://blogs.perficient.com/?p=387380

The headlines are troubling but predictable. The Trump administration will launch a program next year to find out how much money an artificial intelligence algorithm could save the federal government by denying care to Medicare patients. Meanwhile, a survey of physicians published by the American Medical Association in February found that 61% think AI is “increasing prior authorization denials, exacerbating avoidable patient harms and escalating unnecessary waste now and into the future.”

We’re witnessing the healthcare industry’s narrow vision of AI in action: algorithms designed to say “no” faster and more efficiently than ever before. But what if we’re missing the bigger opportunity?

The Current AI Problem: Built to Deny, Not to Help

The recent expansion of AI-powered prior authorization reveals a fundamental flaw in how we’re approaching healthcare technology. “The more expensive it is, the more likely it is to be denied,” said Jennifer Oliva, a professor at the Maurer School of Law at Indiana University-Bloomington, whose work focuses on AI regulation and health coverage.

This approach creates a vicious cycle: patients don’t understand their benefits, seek inappropriate or unnecessary care, trigger costly prior authorization processes, face denials, appeal those denials, and ultimately either give up or create even more administrative burden for everyone involved.

The human cost is real. Nearly three-quarters of respondents thought prior authorization was a “major” problem in a July poll published by KFF, and we’ve seen how public displeasure with insurance denials dominated the news in December, when the shooting death of UnitedHealthcare’s CEO led many to anoint his alleged killer as a folk hero.

A Better Vision: The AI Concierge Approach

What if instead of using AI to deny care more efficiently, we used it to help patients access the right care more effectively? This is where the AI Concierge concept transforms the entire equation.

An AI Concierge doesn’t wait for a claim to be submitted to make a decision. Instead, it proactively:

  • Educates patients about their benefits before they need care
  • Guides them to appropriate providers within their network
  • Explains coverage limitations in plain language before appointments
  • Suggests preventive alternatives that could avoid more expensive interventions
  • Streamlines pre-authorization by ensuring patients have the right documentation upfront
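As a sketch of how that proactive guidance might be encoded, the toy rule check below uses entirely hypothetical plan data and service names. A production concierge would sit on top of real benefits, network, and clinical systems, usually with a conversational model in front; this only illustrates steering a member toward a covered option before a claim is ever filed.

```python
# Minimal sketch: steer a member before a claim exists, using hypothetical plan rules.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlanRule:
    covered: bool
    prior_auth_required: bool
    preventive_alternative: Optional[str] = None  # a covered, lower-cost option if one exists

HYPOTHETICAL_PLAN = {
    "mri_lower_back": PlanRule(covered=True, prior_auth_required=True,
                               preventive_alternative="physical_therapy_eval"),
    "physical_therapy_eval": PlanRule(covered=True, prior_auth_required=False),
}

def concierge_guidance(requested_service):
    rule = HYPOTHETICAL_PLAN.get(requested_service)
    if rule is None:
        return "That service is not listed in your plan; let's review options with a care navigator."
    if rule.prior_auth_required and rule.preventive_alternative:
        return (f"{requested_service} needs prior authorization. A covered alternative, "
                f"{rule.preventive_alternative}, is available now with no authorization required.")
    if rule.prior_auth_required:
        return f"{requested_service} needs prior authorization; here is the documentation to gather first."
    return f"{requested_service} is covered; here are in-network providers near you."

print(concierge_guidance("mri_lower_back"))
```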

The Quantified Business Case

The financial argument for AI Concierge services is compelling:

Star Ratings Revenue Impact: A half-star increase in Medicare Star Ratings is valued at approximately $500 per member. For a 75,000-member plan, that translates to $37.5 million in additional funding. An AI Concierge directly improves patient satisfaction scores that drive these ratings.

Operational Efficiency Gains: Healthcare providers implementing AI-powered patient engagement systems report 15-20% boosts in clinic revenue and 10-20% reductions in overall operational costs. Clinics using AI tools see 15-25% increases in patient retention rates.

Cost Avoidance Through Prevention: Utilizing AI to help patients access appropriate care could save up to 50% on treatment costs while improving health outcomes by up to 40%. This happens by preventing more expensive interventions through proper preventive care utilization.
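Treating the figures above as planning assumptions rather than guarantees, a quick back-of-the-envelope model shows how they combine; the member count and operating cost base below are hypothetical.

```python
# Back-of-the-envelope model using the assumptions quoted above.
def star_rating_uplift(members, half_star_gain, value_per_member=500.0):
    """Roughly $500 of value per member per half-star improvement."""
    return members * half_star_gain * value_per_member

def operational_savings(annual_op_cost, low=0.10, high=0.20):
    """Reported 10-20% reduction range applied to an operating cost base."""
    return annual_op_cost * low, annual_op_cost * high

print(star_rating_uplift(75_000, 1))    # 37500000.0 -> matches the $37.5M figure above
print(operational_savings(40_000_000))  # hypothetical $40M cost base -> (4.0M, 8.0M)
```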

The HEDIS Connection

HEDIS measures provide the perfect framework for demonstrating AI Concierge value. With 235 million people enrolled in plans that report HEDIS results, improving these scores directly impacts revenue through bonus payments and competitive positioning.

An AI Concierge naturally improves HEDIS performance in:

  • Preventive Care Measures: Proactive guidance increases screening and immunization rates
  • Care Gap Closure: Identifies and addresses gaps before they become expensive problems
  • Patient Engagement: Improves medication adherence and chronic disease management

Beyond the Pilot Programs

While government initiatives like the WISeR pilot program focus on “Wasteful and Inappropriate Service Reduction” through AI-powered denials, forward-thinking healthcare organizations have an opportunity to differentiate themselves with AI-powered patient empowerment.

The math is simple: preventing a $50,000 hospitalization through proactive care coordination delivers better ROI than efficiently denying the claim after it’s submitted.

AI Healthcare Concierge Implementation Strategy

For healthcare leaders considering AI Concierge implementation:

  • Phase 1: Deploy AI-powered benefit explanation tools that reduce call center volume and improve patient understanding
  • Phase 2: Integrate predictive analytics to identify patients at risk for expensive interventions and guide them to preventive alternatives
  • Phase 3: Expand to comprehensive care navigation that optimizes both patient outcomes and organizational performance

The Competitive Advantage

While competitors invest in AI to process denials faster, organizations implementing AI Concierge services are investing in:

  • Member satisfaction and retention (15-25% improvement rates)
  • Star rating improvements ($500 per member value per half-star)
  • Operational cost reduction (10-20% typical savings)
  • Revenue protection through better member experience

Conclusion: Choose Your AI Future

The current trajectory of AI in healthcare—focused on denial optimization—represents a massive missed opportunity. As one physician noted about the Medicare pilot: “I will always, always err on the side that doctors know what’s best for their patients.”

AI Healthcare Concierge services align with this principle by empowering both patients and providers with better information, earlier intervention, and more effective care coordination. The technology exists. The business case is proven. The patient need is urgent.

The question isn’t whether AI will transform healthcare—it’s whether we’ll use it to build walls or bridges between patients and the care they need.

The choice is ours. Let’s choose wisely.

Perficient’s “What If? So What?” Podcast Wins Gold Stevie® Award for Technology Podcast https://blogs.perficient.com/2025/09/08/what-if-so-what-podcast-gold-stevie-award/ https://blogs.perficient.com/2025/09/08/what-if-so-what-podcast-gold-stevie-award/#comments Mon, 08 Sep 2025 16:32:32 +0000 https://blogs.perficient.com/?p=386592

We’re proud to share that Perficient’s What If? So What? podcast has been named a Gold Stevie® Award winner in the Technology Podcast category at the 22nd Annual International Business Awards®. These awards are among the world’s top honors for business achievement, celebrating innovation, impact, and excellence across industries.

Winners were selected by more than 250 executives worldwide, whose feedback praised the podcast’s ability to translate complex digital trends into practical, high-impact strategies for business and technology leaders.

Hosted by Jim Hertzfeld, Perficient’s AVP of Strategy, the podcast explores the business impact of digital transformation, AI, and disruption. With guests like Mark Cuban, Neil Hoyne (Google), May Habib (WRITER), Brian Solis (ServiceNow), and Chris Duffey (Adobe), we dive into the possibilities of What If?, the practical impact of So What?, and the actions leaders can take with Now What?

The Stevie judges called out what makes the show stand out:

  • “What If? So What? Podcast invites experts from different industries, which is important to make sure that audiences are listening and gaining valuable information.”
  • “A sharp, forward-thinking podcast that effectively translates complex digital trends into actionable insights.”
  • “With standout guests like Mark Cuban, Brian Solis, and Google’s Neil Hoyne, the podcast demonstrates exceptional reach, relevance, and editorial curation.”

In other words, we’re not just talking about technology for technology’s sake. We’re focused on real business impact, helping leaders make smarter, faster decisions in a rapidly changing digital world.

We’re honored by this recognition and grateful to our listeners, guests, and production team who make each episode possible.

If you haven’t tuned in yet, now’s the perfect time to hear why the judges called What If? So What? a “high-quality, future-forward show that raises the standard for business podcasts.”

🎧 Catch the latest episodes here: What If? So What? Podcast

Subscribe Where You Listen

APPLE PODCASTS | SPOTIFY | AMAZON MUSIC | OTHER PLATFORMS 

Watch Full Video Episodes on YouTube

Meet our Host

Jim Hertzfeld

Jim Hertzfeld is Area Vice President, Strategy for Perficient.

For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.

Connect with Jim: LinkedIn | Perficient

 

 

Drupal 11’s AI Features: What They Actually Mean for Your Team https://blogs.perficient.com/2025/09/04/drupal-11s-ai-features-what-they-actually-mean-for-your-team/ https://blogs.perficient.com/2025/09/04/drupal-11s-ai-features-what-they-actually-mean-for-your-team/#comments Thu, 04 Sep 2025 14:04:33 +0000 https://blogs.perficient.com/?p=386893


If you’ve been following the Drupal community lately, you’ve probably heard about the excitement around AI in Drupal 11 and the new Drupal AI Initiative. With over $100,000 in funding and 290+ AI modules already available, this will be a game changer.

But here’s the thing: AI in Drupal isn’t about replacing your team. It’s about making everyone more effective at what they already do best. Let’s talk through some of these new capabilities and what they mean for different teams in your organization.

Content Teams: Finally, An Assistant That Actually Helps

Creating quality content quickly has always been a challenge, but Drupal 11’s AI features tackle this head-on. The AI CKEditor integration gives content creators real-time assistance right in the editing interface: spelling corrections, translations, and contextual suggestions as you type.

The AI Content module is where things get interesting. It can automatically adjust your content’s tone for different audiences, summarize long content, and even suggest relevant taxonomy terms. For marketing teams juggling multiple campaigns, this means maintaining brand consistency without the usual back-and-forth reviews.

One feature that’s already saving teams hours is the AI Image Alt Text module. Instead of manually writing alt text for accessibility compliance, it generates descriptions automatically. The AI Translate feature is another game-changer for organizations with global reach—one-click multilingual content creation that actually understands context.

The bottom line? Your content team can focus on strategy and creativity instead of getting bogged down in routine tasks.

Developers: Natural Language Site Building

Here’s where Drupal 11 gets really exciting for a dev team. The AI Agents module introduces something we haven’t seen before: text-to-action capabilities. Developers can now modify Drupal configurations, create content types, and manage taxonomies just by describing what they need in plain English.

Instead of clicking through admin interfaces, you can literally tell Drupal what you want: “Create a content type for product reviews with fields for rating, pros, cons, and reviewer information.” The system understands and executes these commands.
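As a conceptual sketch only, and not the actual Drupal AI Agents implementation (which hands the instruction to a language model that then drives Drupal’s configuration), the snippet below shows the text-to-action idea of turning that sentence into a structured spec an agent could act on; the parsing rule and naming conventions are illustrative assumptions.

```python
# Conceptual sketch of text-to-action: instruction in, structured site-building spec out.
# This is NOT the Drupal AI Agents API; it only illustrates the shape of the idea.
import re

def parse_content_type_command(command):
    """Turn 'Create a content type for X with fields for a, b, and c' into a spec dict."""
    m = re.search(r"content type for (.+?) with fields for (.+)", command, re.I)
    if not m:
        raise ValueError("Command not understood; a real agent would ask a follow-up question.")
    name, field_text = m.group(1).strip(), m.group(2)
    fields = [f.strip().rstrip(".") for f in re.split(r",| and ", field_text) if f.strip()]
    return {
        "machine_name": name.lower().replace(" ", "_"),
        "label": name.title(),
        "fields": [f.lower().replace(" ", "_") for f in fields],
    }

print(parse_content_type_command(
    "Create a content type for product reviews with fields for rating, "
    "pros, cons, and reviewer information."
))
# {'machine_name': 'product_reviews', 'label': 'Product Reviews',
#  'fields': ['rating', 'pros', 'cons', 'reviewer_information']}
```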

The AI module ecosystem supports over 21 major providers: OpenAI, Claude, AWS Bedrock, Google Vertex, and more. This means you’re not locked into any single AI provider and can choose the best model for specific tasks. The AI Explorer gives you a testing ground to experiment with prompts before pushing anything live.

For complex workflows, AI Automators let you chain multiple AI systems together. Think automated content transformation, field population, and business logic handling with minimal custom code.

The other great aspect of Drupal AI is that Drupal’s open source backbone lets your dev team extend, add to, and build upon these agents in any way it sees fit.

Marketing Teams: Data-Driven Campaign Planning

Marketing teams might be the biggest winners here. The AI Content Strategy module analyzes your existing content and provides recommendations for what to create next based on actual data, not guesswork. It identifies gaps in your content strategy and suggests targeted content based on audience behavior and industry trends.

The AI Search functionality means visitors can find content quickly, with no more keyword guessing games. The integrated chatbot framework provides intelligent customer service that can access your site’s content to give accurate responses.

For SEO, the AI SEO module generates reports with user recommendations, reviewing content and metadata automatically. This reduces the need for separate SEO tools while giving insights right where you can act on them.

Why This Matters Right Now

The Drupal AI Initiative represents something more than just new features. With dedicated teams from leading agencies and serious funding behind it, this is Drupal positioning itself as the go-to platform for AI-powered content management.

For IT executives evaluating CMS options, Drupal 11’s approach is a great fit. You maintain complete control over your data and AI interactions while getting enterprise-grade governance with approval workflows and audit trails. It’s AI augmentation rather than AI replacement.

The practical benefits are clear: faster campaign launches, consistent brand voice across all content, and teams freed from manual tasks to focus on strategic work. In today’s competitive landscape, that kind of operational efficiency can make the difference between leading your market and playing catch-up.

The Reality Check

We all know no technology is perfect. The success of these AI features, especially within the open source community, depends heavily on implementation and team adoption. You’ll need to invest time in training and process development to see real benefits. Like any new technology, there will be a learning curve as your team figures out the best ways to leverage these new features.

Based on what we are seeing from groups that adopted the AI features early, they are realizing a good ROI through improved team efficiency, faster marketing turnaround, and reduced SEO churn.

If you’re considering how Drupal 11’s AI features might fit your organization, it’s worth having a conversation with an experienced implementation partner like Perficient. We can help you navigate the options and develop an AI strategy that makes sense for your specific situation.
