Data + Intelligence Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/data-intelligence/

Optimize Snowflake Compute: Dynamic Table Refreshes
https://blogs.perficient.com/2026/03/07/optimize-snowflake-compute-dynamic-table-refreshes/ (Sat, 07 Mar 2026)

In this blog, we will discuss one common problem: Snowflake refreshes a dynamic table on its target_lag schedule even when no new data has arrived in the source tables. Most of the time nothing has changed, which means we’re wasting compute for no reason.

If your data does not change, your compute should not run either. Here is how to optimize your dynamic tables to save resources.

Core concepts used in this blog:
Snowflake:
Snowflake is a fully managed cloud data warehouse that lets you store data and run SQL queries at massive scale, without managing servers.

Compute Resources:
Compute resources in Snowflake are the processing power (virtual warehouses) that Snowflake uses to run your queries, load data, and perform calculations.
In simple terms:
Storage = where data lives
Compute = the power used to process the data

Dynamic table:
In Snowflake, a Dynamic Table acts as a self-managing data container that bridges the gap between a query and a physical table. Instead of you manually inserting records, you provide Snowflake with a “blueprint” (a SQL query), and the system ensures the table’s physical content always matches that blueprint.

Stream:
A Stream in Snowflake is a tool that keeps track of all changes made to a table so you can process only the updated data instead of scanning the whole table.

Task:
Tasks can run at specific times you choose, or they can automatically start when something happens — for example, when new data shows up in a stream.
Scenario:

The client has requested that the data be refreshed every hour, but sometimes no new data arrives in the source tables.

Steps:
First, let’s walk through the traditional approach.
1. Create source data:

-- Choose a role/warehouse you can use
USE ROLE SYSADMIN;
USE WAREHOUSE SNOWFLAKE_LEARNING_WH;

-- Create database/schema for the demo
CREATE DATABASE IF NOT EXISTS DEMO_DB;
CREATE SCHEMA IF NOT EXISTS DEMO_DB.DEMO_SCHEMA;
USE SCHEMA DEMO_DB.DEMO_SCHEMA;

-- Base table: product_changes
CREATE OR REPLACE TABLE product_changes (
    product_code     VARCHAR(50),
    product_name     VARCHAR(200),
    price            NUMBER(10, 2),
    price_start_date TIMESTAMP_NTZ(9),
    last_updated     TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
);

-- Seed sample rows
INSERT INTO product_changes (product_code, product_name, price, price_start_date, last_updated)
SELECT
    'PC-' || LPAD(TO_VARCHAR(MOD(SEQ4(), 10000) + 1), 3, '0')      AS product_code,
    'Product ' || LPAD(TO_VARCHAR(MOD(SEQ4(), 10000) + 1), 3, '0') AS product_name,
    ROUND(10.00 + (MOD(SEQ4(), 10000) * 5) + (SEQ4() * 0.01), 2)   AS price,
    DATEADD(MINUTE, SEQ4() * 5, '2025-01-01 00:00:00')             AS price_start_date,
    CURRENT_TIMESTAMP()                                            AS last_updated
FROM TABLE(GENERATOR(ROWCOUNT => 100000000));

-- Create dynamic table
CREATE OR REPLACE DYNAMIC TABLE product_current_price_v1
    TARGET_LAG = '1 hour'
    WAREHOUSE = SNOWFLAKE_LEARNING_WH
    INITIALIZE = ON_SCHEDULE
    REFRESH_MODE = INCREMENTAL
AS
SELECT
    h.product_code,
    h.product_name,
    h.price,
    h.price_start_date
FROM product_changes h
INNER JOIN (
    SELECT product_code, MAX(price_start_date) AS max_price_start_date
    FROM product_changes
    GROUP BY product_code
) m ON h.price_start_date = m.max_price_start_date
   AND h.product_code = m.product_code;

 

-- Manually refresh
ALTER DYNAMIC TABLE product_current_price_v1 REFRESH;

With this approach, we still have to check after each hourly refresh whether any new data actually landed in the table.

Because Snowflake uses a pay‑as‑you‑go credit model for compute, keeping a dynamic table refreshed every hour means compute resources are running continuously. Over time, this constant usage can drive up costs, making frequent refresh intervals less cost‑effective for customers.
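A back-of-envelope calculation makes the waste concrete. The change frequency below is an illustrative assumption, not a measured value:

```python
# How many refreshes does an hourly target_lag schedule trigger per year,
# and how many of them are avoidable if the data changes far less often?
# The `triggered` figure is an assumed change frequency for illustration.

def annual_refreshes(interval_hours: float) -> int:
    """Scheduled refreshes in a 365-day year at a fixed interval."""
    return int(365 * 24 / interval_hours)

scheduled = annual_refreshes(1)          # hourly refresh: 8760 runs/year
triggered = 1200                         # assumption: data changes ~1200 times/year
print(scheduled, scheduled - triggered)  # 8760 total, 7560 of them avoidable
```

Every avoided refresh is warehouse time, and therefore credits, not spent.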

To tackle this problem in a smarter and more cost‑efficient way, we follow a few simple steps that make the entire process smoother and more optimized:
First, we set the target_lag to 365 days when creating the dynamic table. This ensures Snowflake doesn’t continually consume compute resources for frequent refreshes, helping us optimize costs right from the start.

-- Create dynamic table
CREATE OR REPLACE DYNAMIC TABLE product_current_price_v1
    TARGET_LAG = '365 days'
    WAREHOUSE = SNOWFLAKE_LEARNING_WH
    INITIALIZE = ON_SCHEDULE
    REFRESH_MODE = INCREMENTAL
AS
SELECT
    h.product_code,
    h.product_name,
    h.price,
    h.price_start_date
FROM product_changes h
INNER JOIN (
    SELECT product_code, MAX(price_start_date) AS max_price_start_date
    FROM product_changes
    GROUP BY product_code
) m ON h.price_start_date = m.max_price_start_date
   AND h.product_code = m.product_code;

-- A) Stream to detect changes in the source data
CREATE OR REPLACE STREAM STR_PRODUCT_CHANGES ON TABLE PRODUCT_CHANGES;

-- B) Stored procedure: refresh only when the stream has data
CREATE OR REPLACE PROCEDURE SP_REFRESH_DT_IF_NEW()
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS OWNER
AS
$$
DECLARE
    v_has_data BOOLEAN;
BEGIN
    SELECT SYSTEM$STREAM_HAS_DATA('STR_PRODUCT_CHANGES') INTO :v_has_data;
    IF (v_has_data) THEN
        ALTER DYNAMIC TABLE DEMO_DB.DEMO_SCHEMA.PRODUCT_CURRENT_PRICE_V1 REFRESH;
        RETURN 'Refreshed dynamic table PRODUCT_CURRENT_PRICE_V1 (new data detected).';
    ELSE
        RETURN 'Skipped refresh (no new data).';
    END IF;
END;
$$;

-- C) Create a task; schedule it as your requirements dictate
CREATE OR REPLACE TASK T_REFRESH_DT_IF_NEW
    WAREHOUSE = SNOWFLAKE_LEARNING_WH
    SCHEDULE = '5 MINUTE'
AS
    CALL SP_REFRESH_DT_IF_NEW();

ALTER TASK T_REFRESH_DT_IF_NEW RESUME;
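The stream-gated refresh above boils down to a simple conditional. As a language-neutral illustration (the names are stand-ins, not Snowflake APIs), the control flow is:

```python
# Plain-Python illustration of the procedure's control flow: refresh the
# dynamic table only when the stream reports pending changes.
# `stream_has_data` stands in for SYSTEM$STREAM_HAS_DATA, and `refresh`
# for the ALTER DYNAMIC TABLE ... REFRESH statement.

def refresh_if_new(stream_has_data: bool, refresh) -> str:
    if stream_has_data:
        refresh()  # compute is spent only on this branch
        return "Refreshed dynamic table (new data detected)."
    return "Skipped refresh (no new data)."

refresh_calls = []
print(refresh_if_new(True, lambda: refresh_calls.append(1)))
print(refresh_if_new(False, lambda: refresh_calls.append(1)))
print(len(refresh_calls))  # 1: the second run spent no refresh compute
```

The task fires every five minutes, but the warehouse only does real work when the stream has captured changes.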
Conclusion:
Optimizing Snowflake compute isn’t just about reducing costs—it’s about making your data pipelines smarter, faster, and more efficient. By carefully managing how and when dynamic tables refresh, teams can significantly cut down on unnecessary compute usage while still maintaining reliable, up‑to‑date data.

Adjusting refresh intervals, thoughtfully using features like target_lag, and designing workflows that trigger updates only when needed can turn an expensive, always‑running process into a cost‑effective, well‑tuned system. With the right strategy, Snowflake’s powerful dynamic tables become not just a convenience, but a competitive advantage in building lean, scalable data platforms.

 

IngestIQ – Hadoop to Databricks AI-powered Migration
https://blogs.perficient.com/2026/02/05/ai-driven-hadoop-migration-databricks/ (Thu, 05 Feb 2026)

Organizations are migrating from their on-premise, legacy Hadoop Data Lake to a more modern data architecture to take advantage of AI to fulfill the long-awaited promise of unlocking business value from semi- and unstructured data. Databricks tends to be the modern platform of choice for Hadoop migrations due to core architectural similarities. Apache Spark has its roots in Hadoop, and its developers founded Databricks. There is a pretty good chance you are using Parquet as your file format in HDFS. They even share the Hive Metastore for data abstraction and discovery.

Teams tasked with migrating from their legacy Hadoop platforms to Databricks face unique and unexpected challenges, since Hadoop is a platform, not just a database. In fact, approaching this as a database migration hides most of the technical challenges and can lead to a fundamental misunderstanding of the scope of the project. This is particularly true when you consider Hive only as a lift-and-shift to Databricks. In many cases, it makes more sense to focus on the data movement rather than the data storage. Imagine an Oozie-first approach to a Hadoop migration.

Change your mindset from a data platform migration to a business process modernization, and read on.

Introducing IngestIQ

IngestIQ leverages cutting-edge AI models available in Databricks to ingest and translate a variety of workflows across the Hadoop ecosystem into an innovative Intermediate Domain Specific Language (iDSL). This AI-centric transformation yields a business-first perspective on data workflows, uncovering the underlying business intents and dataset value. With AI at its core, IngestIQ empowers a human-on-the-loop (HOTL) model to make precise, informed decisions that prioritize modernization and high-impact migratory strategies.

  • Traditional tools like Oozie, Airflow, and NiFi often encode complex operational logic rather than business rules, obscuring the true business value. By utilizing AI-driven insights, IngestIQ transforms these workflows into an iDSL that highlights business relevance, enabling stakeholders to make strategic, value-driven decisions. AI enhances the HOTL’s ability to discern critical, redundant, or obsolete jobs, focusing efforts on strategically significant modernization. This prioritization prevents misallocation of resources towards low-impact migrations, optimizing computational and storage costs while emphasizing data security, compliance, and business-critical areas.

Why this matters

  • Oozie deployments often encode operational logic, not business intent. Translating to an iDSL makes intent explicit, enabling business owners to triage what matters.
  • Human review reduces risk of incorrectly migrating jobs that are no longer needed or that embed obsolete business rules.
  • Column-level prioritization prevents over-migration of low-value data and focuses security, lineage, and Unity Catalog efforts where business impact is highest.
  • Provides auditable, repeatable decisioning and a clear path from discovery to production cutover in Databricks.

IngestIQ’s AI-Driven Capabilities

  1. Comprehensive Ingestion & AI-Powered Analysis:
    • AI algorithms process diverse inputs from Oozie XML workflows, Apache Airflow DAGs, and Apache NiFi flows. Both static analyses and AI-enhanced runtime assessments map job dependencies, execution metrics, and data lineage.
  2. Business-First AI Representation with iDSL:
    • The iDSL leverages AI to generate concise, business-centric representations of data workflows. This AI-driven translation surfaces transformation intents and dataset significance clearly, ensuring decisions align closely with strategic goals.
  3. AI-Based Triage & Workflow Optimization:
    • IngestIQ uses AI and machine learning classifiers to intelligently identify and optimize redundant, outdated, or misaligned workflows, supported by AI-derived evidence and confidence metrics.
  4. AI-Enhanced HOTL Interface:
    • Equipped with AI-powered dashboards and predictive analytics, the HOTL interface enables stakeholders to navigate prioritized actions efficiently.
  5. Data-Driven Business-Priority AI Ranking:
    • A sophisticated AI model evaluates workflows across multiple criteria—business criticality, usage patterns, technical debt, cost, and compliance pressures. This advanced AI prioritization focuses on the most impactful areas first.
  6. Automated AI Workflow Generation:
    • From AI-optimized iDSL inputs, IngestIQ automates the generation of Spark templates, migration scripts, and compliance documents that seamlessly integrate into CI/CD pipelines for robust, secure implementation.

Example flow (end-to-end)

  1. Ingest Oozie metadata and execution logs => parse into ASTs and runtime profiles.
  2. Generate iDSL artifacts representing jobs and transforms, store in Git.
  3. Run triage models and rules => produce candidate list with evidence and priority scores.
  4. HOTL reviews, annotates, and approves actions via UI; approvals create commits.
  5. Approved artifacts trigger code & migration artifact generation (Spark templates, Delta migration scripts, Unity Catalog manifests).
  6. CI pipeline runs tests (unit, differential), security checks, and human approval gates.
  7. Deploy to Databricks staging; run parallel validation with Hadoop outputs; upon pass, cutover per schedule.
  8. Capture telemetry to refine triage models and priority weighting.
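For illustration, the priority scoring in step 3 might look like a weighted sum over the criteria the post lists (business criticality, usage, technical debt, cost, compliance). The weights and job scores below are invented for the sketch, not IngestIQ's actual model:

```python
# Hypothetical multi-criteria priority ranking. Criteria names follow the
# post; weights and per-job scores are illustrative assumptions.

WEIGHTS = {
    "business_criticality": 0.35,
    "usage": 0.25,
    "technical_debt": 0.15,
    "cost": 0.15,
    "compliance": 0.10,
}

def priority_score(scores: dict) -> float:
    """Weighted sum of 0-1 criterion scores; missing criteria count as 0."""
    return round(sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS), 4)

jobs = {
    "daily_revenue_rollup": {"business_criticality": 0.9, "usage": 0.8, "compliance": 0.7},
    "legacy_tmp_export":    {"business_criticality": 0.1, "usage": 0.05},
}
ranked = sorted(jobs, key=lambda j: priority_score(jobs[j]), reverse=True)
print(ranked[0])  # daily_revenue_rollup ranks first for migration effort
```

The HOTL reviewer then works the ranked list top-down, so high-impact workflows are triaged before marginal ones.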

Conclusion

The IngestIQ Accelerator provides a pragmatic, auditable bridge between legacy Hadoop operational workflows and business-led modernization. By making intent explicit and placing a human-on-the-loop for final decisions, organizations get the speed and repeatability of automated translation without sacrificing governance or business risk management. Column-level prioritization ensures effort and controls focus on data that matters most—reducing cost, improving security posture, and accelerating value realization on Databricks.

Perficient is a Databricks Elite Partner. Contact us to learn more about how to empower your teams with the right tools, processes, and training to unlock your data’s full potential across your enterprise.

Moving to CJA? Sunset Adobe Analytics Without Causing Chaos
https://blogs.perficient.com/2026/01/27/moving-to-cja-sunset-adobe-analytics-without-causing-chaos/ (Tue, 27 Jan 2026)

Adobe Experience Platform (AEP) and Customer Journey Analytics (CJA) continue to emerge as the preferred solutions for organizations seeking a unified, 360‑degree view of customer behavior. For organizations requiring HIPAA compliance, AEP and CJA are a necessity. Many organizations are now discussing whether to retool or retire their legacy Adobe Analytics implementations. The transition from Adobe Analytics to CJA is far more complex than simply disabling an old tool. Teams must carefully plan, perform detailed analysis, and develop a structured approach to ensure that reporting continuity, data integrity, and downstream dependencies remain intact.

Adobe Analytics remains a strong platform for organizations focused exclusively on web and mobile app measurement; however, enterprises that are prioritizing cross‑channel data activation, real‑time profiles, and detailed journey analysis should embrace AEP as the future. Of course, you won’t be maintaining two platforms after building out CJA so you must think about how to move on from Adobe Analytics.

Decommissioning Options and Key Considerations

You can approach decommissioning Adobe Analytics in several ways. Your options include: 1) disabling the extension; 2) adding an s.abort at the top of the AppMeasurement custom‑code block to prevent data from being sent to Adobe Analytics; 3) deleting all legacy rules; or 4) discarding Adobe Analytics entirely and creating a new Launch property for CJA. Although multiple paths exist, the best approach almost always involves preserving your data‑collection methods and keeping the historical Adobe Analytics data. You have likely collected that data for years, and you want it to remain meaningful after migration. Instead of wiping everything out, you can update Launch by removing rules you no longer need or by eliminating references to Adobe Analytics.

Recognizing the challenges involved in going through the data to make the right decisions during this process, I have developed a specialized tool – Analytics Decommissioner (AD) — designed to support organizations as they decommission Adobe Analytics and transition fully to AEP and CJA. The tool programmatically evaluates Adobe Platform Launch implementations using several Adobe API endpoints, enabling teams to quickly identify dependencies, references, and potential risks associated with disabling Adobe Analytics components.

Why Decommissioning Requires More Than a Simple Shutdown

One of the most significant obstacles in decommissioning Adobe Analytics is identifying where legacy tracking still exists and where removing Adobe Analytics could potentially break the website or cause errors. Over the years, many organizations accumulate layers of custom code, extensions, and tracking logic that reference Adobe Analytics variables—often in places that are not immediately obvious. These references may include s. object calls, hard‑coded AppMeasurement logic, or conditional rules created over the course of several years. Without a systematic way to surface dependencies, teams risk breaking critical data flows that feed CJA or AEP datasets.

Missing or outdated documentation makes the problem even harder. Many organizations fail to maintain complete or current solution design references (SDRs), especially for older implementations. As a result, teams rely on tribal knowledge, attempts to recall discussions from years ago, or a manual inspection of data collected to understand how the system collects data. This approach moves slowly, introduces errors, and cannot support large‑scale environments. When documentation lacks clarity, teams struggle to identify which rules, data elements, or custom scripts still matter and which they can safely remove. Now imagine repeating this process for every one of your Launch properties.

This is where Perficient and the AD tool provide significant value.
The AD tool programmatically scans Launch properties and uncovers dependencies that teams may have forgotten or never documented. A manual analysis might easily overlook these dependencies. AD also pinpoints where custom code still references Adobe Analytics variables, highlights rules that have been modified or disabled since deployment, and surfaces AppMeasurement usage that could inadvertently feed into CJA or AEP data ingestion. This level of visibility is essential for ensuring that the decommissioning process does not disrupt data collection or reporting.

How Analytics Decommissioner (AD) Works

The tool begins by scanning all Launch properties across your organization and asking the user to select a property. This is necessary because decommissioning must be done on each property individually, the same way data is set for Adobe Analytics: one Launch property at a time. Once a property is selected, the tool retrieves all production‑level data elements, rules, and rule components, including their revision histories, ignoring rules and data‑element revisions that developers disabled or never published to production. The tool then performs a comprehensive search for AppMeasurement references and Adobe Analytics‑specific code patterns. These findings show teams exactly where legacy tracking persists, what needs to be updated or modified, and which items can be safely removed. If no dependencies exist, AD can disable the rules and create a development library for testing. When AD cannot confirm whether a dependency exists, it reports the rule names and components where potential issues exist and relies on development experts to make that call. The user always makes the final decisions.

This tool is especially valuable for large or complex implementations. In one recent engagement, a team used it to scan nearly 100 Launch properties, some of which included more than 300 data elements and 125 active rules. Reviewing this level of complexity manually would have taken weeks, and the risk of missing critical dependencies would remain. Programmatic scanning ensures accuracy, completeness, and efficiency, allowing teams to move forward with confidence.

A Key Component of a Recommended Decommissioning Approach

The AD tool and a comprehensive review are essential parts of a broader, recommended decommissioning framework. A structured approach typically includes:

  • Inventory and Assessment – Identifying all Adobe Analytics dependencies across Launch, custom code, and environments.
  • Mapping to AEP/CJA – Ensuring all required data is flowing into the appropriate schemas and datasets.
  • Gap Analysis – Determining where additional configuration or migration work needs to be done.
  • Remediation and Migration – Updating Launch rules, removing legacy code, and addressing undocumented dependencies.
  • Validation and QA – Confirming that reporting remains accurate in CJA after removal of Launch rules and data elements created for Adobe Analytics.
  • Sunset and Monitoring – Disabling AppMeasurement, removing Adobe Analytics extensions, and monitoring for errors.

Conclusion

Decommissioning Adobe Analytics is a strategic milestone in modernizing the digital data ecosystem. Using the right tools and having the right processes are essential.  The Analytics Decommissioner tool allows organizations to confidently transition to AEP and CJA. This approach to migration preserves data quality, reduces operational costs, and strengthens governance when teams execute it properly. By using the APIs and allowing the AD tool to handle the heavy lifting, teams ensure that they don’t overlook any dependencies.  This will enable a smooth and risk‑free transition with robust customer experience analytics.

Base Is Loaded: Bridging OLTP and OLAP with Lakebase and PySpark
https://blogs.perficient.com/2026/01/25/bridging-oltp-olap-databricks-lakebase-python/ (Sun, 25 Jan 2026)

For years, the Lakehouse paradigm has successfully collapsed the wall between Data Warehouses and Data Lakes. We have unified streaming and batch, structured and unstructured data, all under one roof. Yet we often find ourselves hitting a familiar, frustrating wall: the gap between the analytical plane (OLAP) and the transactional plane (OLTP). In my latest project, the client wanted to use Databricks to serve as both an analytic platform and power their front-end React web app. There is a sample Databricks App that uses NodeJS for a front end and FastAPI for a Python backend that connects to Lakebase. The sample ToDo app provides a sample front end that performs CRUD operations out of the box. I opened a new Databricks Query object, connected to the Lakebase compute, and verified the data. It’s hard to overstate how cool this seemed.

The next logical step was to build a declarative pipeline that would flow the data Lakebase received from the POST, PUT and GET requests through the Bronze layer, for data quality checks, into Silver for SCD2-style history and then into Gold where it would be available to end users through AI/BI Genie and PowerBI reports as well as being the source for a sync table back to Lakebase to serve GET statements. I created a new declarative pipeline in a source-controlled asset bundle and started building. Then I stopped building. That’s not supported. You actually need to communicate with Lakebase from a notebook using the SDK. A newer SDK than Serverless provides, no less.

A couple of caveats. At the time of this writing, I’m using Azure Databricks, so I only have access to Lakebase Provisioned and not Lakebase Autoscaling. And it’s still in Public Preview; maybe GA is different. Or, not. Regardless, I have to solve the problem on my desk today, and simply having the database isn’t enough. We need a robust way to interact with it programmatically from our notebooks and pipelines.

In this post, I want to walk through a Python architectural pattern I’ve developed, BaseIsLoaded. This includes pipeline configurations, usage patterns, and a PySpark class: LakebaseClient. This class serves two critical functions: it acts as a CRUD wrapper for notebook-based application logic, and, more importantly, it functions as a bridge that turns a standard Postgres table into a streaming source for declarative pipelines.

The Connectivity Challenge: Identity-Native Auth

The first hurdle in any database integration is authentication. In the enterprise, we are moving away from hardcoded credentials and .pgpass files. We want identity-native authentication. The LakebaseClient handles this by leveraging the databricks.sdk. Instead of managing static secrets, the class generates short-lived tokens on the fly.

Look at the _ensure_connection_info method in the provided code snippet:

def _ensure_connection_info(self, spark: SparkSession, value: Any):
    # Populate ``self._conn_info`` with the Lakebase endpoint and a temporary token
    if self._conn_info is None:
        w = WorkspaceClient()
        instance_name = "my_lakebase"  # Example instance
        instance = w.database.get_database_instance(name=instance_name)
        cred = w.database.generate_database_credential(
            request_id=str(uuid.uuid4()), instance_names=[instance_name]
        )
        self._conn_info = {
            "host": instance.read_write_dns,
            "dbname": "databricks_postgres",
            "password": cred.token,  # Ephemeral token
            # ...
        }

This encapsulates the complexity of finding the endpoint and authenticating and allows us to enforce a “zero-trust” model within our code. The notebook or job running this code inherits the permissions of the service principal or user executing it, requesting a token valid only for that session.

Operationalizing DDL: Notebooks as Migration Scripts

One of the strongest use cases for Lakebase is managing application state or configuration for data products. However, managing the schema of a Postgres database usually requires an external migration tool (like Flyway or Alembic).

To keep the development lifecycle contained within Databricks, I extended the class to handle safe DDL execution. The class includes methods like create_table, alter_table_add_column, and create_index.

These methods use psycopg2.sql to handle identifier quoting safely. In a multi-tenant environment where table names might be dynamically generated based on business units or environments, either by human or agentic developers, SQL injection via table names is a real risk.

def create_table(self, schema: str, table: str, columns: List[str]):
    ddl = psql.SQL("CREATE TABLE IF NOT EXISTS {}.{} ( {} )").format(
        psql.Identifier(schema),
        psql.Identifier(table),
        psql.SQL(", ").join(psql.SQL(col) for col in columns)
    )
    self.execute_ddl(ddl.as_string(self._get_connection()))

This allows a Databricks Notebook to serve as an idempotent deployment script. You can define your schema in code and execute it as part of a “Setup” task in a Databricks Workflow, ensuring the OLTP layer exists before the ETL pipeline attempts to read from or write to it.
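To see why identifier quoting matters, here is a pure-Python illustration of the behavior psycopg2.sql.Identifier provides (a simplified stand-in, not the library itself): wrap the name in double quotes and double any embedded quotes, so a hostile schema or table name cannot escape the identifier position.

```python
# Simplified stand-in for psycopg2.sql.Identifier quoting: double-quote
# the name and escape embedded double quotes by doubling them.

def quote_ident(name: str) -> str:
    return '"' + name.replace('"', '""') + '"'

def create_table_ddl(schema: str, table: str, columns: list) -> str:
    # Mirrors the create_table method's template with quoted identifiers.
    return 'CREATE TABLE IF NOT EXISTS {}.{} ( {} )'.format(
        quote_ident(schema), quote_ident(table), ", ".join(columns)
    )

print(create_table_ddl("tenant_a", 'orders";DROP TABLE x;--', ["id bigint"]))
# The malicious name stays trapped inside a single quoted identifier.
```

Dynamically built names, whether generated by humans or agents, never reach the DDL string unquoted.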

The Core Innovation: Turning Postgres into a Micro-Batch Stream

The most significant value of this architecture is the load_new_data method.

Standard JDBC connections in Spark are designed for throughput, not politeness. They default to reading the entire table or, if you attempt to parallelize reads via partitioning, they spawn multiple executors that can quickly exhaust the connection limit of Lakebase. By contrast, LakebaseClient runs intentionally on the driver using a single connection.

This solves a common dilemma we run into with our enterprise clients: if you have a transactional table (e.g., an orders table or a pipeline_audit log) in Lakebase and want to ingest it into Delta Lake incrementally, you usually have to introduce Kafka, Debezium, or complex CDC tools. If you have worked for a large, regulated company, you can appreciate the value of not asking for things.

Instead, LakebaseClient implements a lightweight “Client-Side CDC” pattern. It relies on a monotonic column (a checkpoint_column, such as an auto-incrementing ID or a modification_timestamp) to fetch only what has changed since the last run.

1. State Management with Delta

The challenge with custom polling logic is: where do you store the offset? If the cluster restarts, how does the reader know where it left off?

I solved this by using Delta Lake itself as the state store for the Postgres reader. The _persist_checkpoint and _load_persisted_checkpoint methods use a small Delta table to track the last_checkpoint for every source.

def _persist_checkpoint(self, spark: SparkSession, value: Any):
    # ... logic to create table if not exists ...
    # Upsert (merge) last checkpoint into a Delta table
    spark.sql(f"""
        MERGE INTO {self.checkpoint_store} t
        USING _cp_upsert_ s
        ON t.source_id = s.source_id
        WHEN MATCHED THEN UPDATE SET t.last_checkpoint = s.last_checkpoint
        WHEN NOT MATCHED THEN INSERT ...
    """)

This creates a robust cycle: The pipeline reads from Lakebase, processes the data, and commits the offset to Delta. This ensures exactly-once processing semantics (conceptually) for your custom ingestion logic.

2. The Micro-Batch Logic

The load_new_data method brings it all together. It creates a psycopg2 cursor, queries only the rows where checkpoint_col > last_checkpoint, limits the fetch size (to prevent OOM errors on the driver), and converts the result into a Spark DataFrame.

    if self.last_checkpoint is not None:
        query = psql.SQL(
            "SELECT * FROM {} WHERE {} > %s ORDER BY {} ASC{}"
        ).format(...)
        params = (self.last_checkpoint,)

By enforcing an ORDER BY on the monotonic column, we ensure that if we crash mid-batch, we simply resume from the last successfully processed ID.
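The whole micro-batch cycle can be simulated in a few lines of plain Python. The in-memory list stands in for the Postgres table and the loop for repeated task runs; this is a sketch of the pattern, not the LakebaseClient implementation:

```python
# Client-side CDC sketch: fetch only rows whose monotonic checkpoint
# column exceeds the last committed offset, in ascending order, and
# advance the offset only after the batch is processed.

rows = [{"id": i, "payload": f"row-{i}"} for i in range(1, 11)]

def load_new_data(table, last_checkpoint, limit=3):
    batch = sorted(
        (r for r in table if last_checkpoint is None or r["id"] > last_checkpoint),
        key=lambda r: r["id"],
    )[:limit]
    new_checkpoint = batch[-1]["id"] if batch else last_checkpoint
    return batch, new_checkpoint

cp = None
seen = []
while True:
    batch, cp = load_new_data(rows, cp)  # one micro-batch per iteration
    if not batch:
        break
    seen.extend(r["id"] for r in batch)  # "process" then commit offset

print(seen)  # [1, 2, ..., 10], each row exactly once across batches
```

A crash between batches simply means the next run re-reads from the last committed offset, which is exactly the resume behavior the ORDER BY guarantees.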

Integration with Declarative Pipelines

So, how do we use this in a real-world enterprise scenario?

Imagine you have a “Control Plane” app running on a low-cost cluster that allows business users to update “Sales Targets” via a Streamlit app (backed by Lakebase). You want these targets to immediately impact your “Sales Reporting” Delta Live Table (DLT) pipeline.

Instead of a full refresh of the sales_targets table every hour, you can run a continuous or scheduled job using LakebaseClient.

The Workflow:

  1. Instantiation:
    lb_source = LakebaseClient(
        table_name="public.sales_targets",
        checkpoint_column="updated_at",
        checkpoint_store="system.control_plane.ingestion_offsets"
    )
    
  2. Ingestion Loop: You can wrap load_new_data in a simple loop or a scheduled task.
    # Fetch micro-batch
    df_new_targets = lb_source.load_new_data()
    
    if not df_new_targets.isEmpty():
        # Append to Bronze Delta Table
        df_new_targets.write.format("delta").mode("append").saveAsTable("bronze.sales_targets")
    
  3. Downstream DLT: Your main ETL pipeline simply reads from bronze.sales_targets as a standard streaming source. The LakebaseClient acts as the connector, effectively “streaming” changes from the OLTP layer into the Bronze layer.

Architectural Considerations and Limitations

While this class provides a powerful bridge, as architects, we must recognize the boundaries.

  1. It is not a Debezium Replacement: This approach relies on “Query-based CDC.” It cannot capture hard deletes (unless you use soft-delete flags), and it relies on the checkpoint_column being strictly monotonic. If your application inserts data with past timestamps, this reader will miss them. My first use case was pretty simple; just a single API client performing CRUD operations. For true transaction log mining, you still need logical replication slots (which Lakebase supports, but requires a more complex setup).
  2. Schema Inference: The _postgres_type_to_spark method in the code provides a conservative mapping. Postgres has rich types (like JSONB, HSTORE, and custom enums). This class defaults unknown types to StringType. This is intentional design: it shifts the schema validation burden to the Bronze-to-Silver transformation in Delta, preventing the ingestion job from failing due to exotic Postgres types. I can see adding support for JSONB before this project is over, though.
  3. Throughput: This runs on the driver or a single executor node (depending on how you parallelize calls). It is designed for “Control Plane” data—thousands of rows per minute, not millions of rows per second. Do not use this to replicate a high-volume trading ledger; use standard ingestion tools for that.
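To make the schema-inference point concrete, here is a sketch of what a conservative type map like _postgres_type_to_spark can look like. The mapping entries are illustrative (the class's actual table is not reproduced here), and Spark type names are kept as strings so the snippet stays dependency-free.

```python
# Illustrative sketch of a conservative Postgres -> Spark type map.
# Unknown or exotic types (JSONB, HSTORE, custom enums) deliberately fall
# back to StringType so ingestion never fails on them; the Bronze-to-Silver
# transformation takes over schema enforcement downstream.
_PG_TO_SPARK = {
    "integer": "IntegerType",
    "bigint": "LongType",
    "numeric": "DecimalType(38, 18)",
    "boolean": "BooleanType",
    "timestamp without time zone": "TimestampType",
    "text": "StringType",
}

def postgres_type_to_spark(pg_type: str) -> str:
    # Default to StringType for anything we don't explicitly recognize.
    return _PG_TO_SPARK.get(pg_type.lower(), "StringType")

print(postgres_type_to_spark("jsonb"))   # StringType (conservative fallback)
print(postgres_type_to_spark("BIGINT"))  # LongType
```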

Conclusion

Lakebase fills the critical OLTP void in the Databricks ecosystem. However, a database is isolated until it is integrated. The BaseIsLoaded pattern demonstrated here offers a lightweight, Pythonic way to knit this transactional layer into your analytical backbone.

By abstracting authentication, safely handling DDL, and implementing stateful micro-batching via Delta-backed checkpoints, we can build data applications that are robust, secure, and entirely contained within the Databricks control plane. It allows us to stop treating application state as an “external problem” and start treating it as a native part of the Lakehouse architecture. Because, at the end of the day, adding Apps plus Lakebase to your toolbelt is too much fun to let a little glue code stand in your way.

Perficient is a Databricks Elite Partner. Contact us to learn more about how to empower your teams with the right tools, processes, and training to unlock your data’s full potential across your enterprise.

Part 1: Mobile AI 2026: Why On-Device Intelligence is the New Standard https://blogs.perficient.com/2026/01/19/part-1-mobile-ai-2026-why-on-device-intelligence-is-the-new-standard/ https://blogs.perficient.com/2026/01/19/part-1-mobile-ai-2026-why-on-device-intelligence-is-the-new-standard/#comments Mon, 19 Jan 2026 20:15:36 +0000 https://blogs.perficient.com/?p=389691

Subtitle: From Critical Medical Hardware to the Apple Ecosystem, the future of mobile intelligence is local, instant, and unified.

We are standing at a hardware tipping point. For the last decade, “AI” on mobile effectively meant one thing: sending data to the cloud and waiting for an answer. For chatbots especially, adding AI to an app meant integrating a slow, spinning loading indicator while data traveled to a server, waited in a queue, and eventually returned as text. Users are tired of waiting. They are overwhelmed by generic bots that feel disconnected from the app they are actually using.

But as we move toward 2026, the script is flipping. Phone manufacturers are shipping devices with neural engines (NPUs) so powerful they rival the desktop GPUs of just a few years ago. This shift isn’t just about faster chatbots or smoother animations; it is reshaping critical industries like healthcare and unifying the mobile ecosystem under a single dominant model family: Google Gemini.

The Hardware Revolution: The “Brain” in Your Pocket

The defining trend of the 2025-2026 cycle is the explosion of Hardware Acceleration. Modern mobile processors—whether it’s the latest Snapdragons powering Android flagships or the A-series chips in iPhones—are no longer just Central Processing Units (CPUs). They are dedicated AI powerhouses capable of “always-on” generative tasks.

This hardware leap means we can now run massive models (like Gemini Nano) directly on the device. The benefits are immediate and transformative:

  • Zero Latency: No network round-trips. The intelligence feels instantaneous.
  • Total Privacy: Sensitive data never leaves the phone’s secure enclave.
  • Offline Reliability: Intelligence works in elevators, basements, and airplanes.

The Critical Use Case: Android in Healthcare

Nowhere is this shift more vital than in the rapidly expanding world of Medical Devices. Android has quietly become the operating system of choice for specialized medical hardware, from handheld ultrasound scanners to patient vitals monitors.

Why is the edge critical here? Because medical environments are unforgiving. A doctor in a rural clinic or a paramedic in a speeding ambulance cannot rely on spotty 5G connections to process a patient’s vitals or analyze an X-ray.

  • Privacy Compliance: Processing sensitive patient data (like facial analysis for pain detection) strictly on-device removes complex regulatory cloud compliance hurdles. The data stays with the patient.
  • Reliability: An Android-based diagnostic tool must work instantly, 100% of the time, regardless of Wi-Fi status.
  • Adoption: We are seeing a massive surge in smart, connected medical tools that rely on commodity Android hardware to deliver hospital-grade diagnostics at a fraction of the cost.

The “One AI” Future: Gemini on iOS & Android

Perhaps the most compelling reason to bet on Gemini is the upcoming unification of the mobile AI landscape. Reports indicate that Apple is partnering with Google to integrate Gemini models into iOS 18 and macOS Sequoia for complex reasoning tasks and summaries, a rollout expected to mature by Spring 2026.

While Apple will handle basic tasks with its own on-device models, it is leaning on Gemini’s superior reasoning for the “heavy lifting.” This creates a unique opportunity for developers:

  • Unified Intelligence: Learning to engineer prompts and integrations for Gemini means you are effectively targeting the entire mobile market—both the Android medical devices and the premium iPhone user base.
  • Cross-Platform Consistency: A feature built on Gemini’s logic will behave consistently whether it’s running on a Samsung Galaxy Tab in a hospital or an iPhone 17 in a consumer’s hand.
  • Future-Proofing: With these updates expected shortly, building expertise in Gemini now puts us ahead of the curve when the feature goes mainstream across billions of Apple devices.

In Part 2, we will leave the strategy behind and dive into the code to see how we are already building this future today on iOS and Android.

Model Context Protocol (MCP) – Simplified https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/ https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/#comments Thu, 08 Jan 2026 07:50:15 +0000 https://blogs.perficient.com/?p=389415

What is MCP?

Model Context Protocol (MCP) is an open-source standard for connecting AI applications to external systems. As AI use cases gain more and more traction, it becomes evident that AI applications need to connect to multiple data sources to provide intelligent and relevant responses.

Earlier AI systems interacted with users through Large Language Models (LLMs) that leveraged pre-trained datasets. Then, in larger organizations, business users working with AI applications/agents began to expect more relevant responses grounded in enterprise data, which is where Retrieval Augmented Generation (RAG) came into play.

Now, AI applications/agents are expected to produce more accurate responses by leveraging the latest data, which requires AI systems to interact with multiple data sources and fetch accurate information. When multi-system interactions are established, the communication protocol needs to be more standardized and scalable. That is where MCP comes in, enabling a standardized way to connect AI applications to external systems.

 

Architecture

Mcp Architecture

Using MCP, AI applications can connect to data sources (e.g., local files, databases), tools, and workflows – enabling them to access key information and perform tasks. In enterprise scenarios, AI applications/agents can connect to multiple databases across the organization, empowering users to analyze data using natural language chat.

Benefits of MCP

MCP offers a wide range of benefits:

  • Development: MCP reduces development time and complexity when building, or integrating with, an AI application/agent. It makes integrating an MCP host with multiple MCP servers simple by leveraging the built-in capability discovery feature.
  • AI applications or agents: MCP provides access to an ecosystem of data sources, tools, and apps, which enhances capabilities and improves the end-user experience.
  • End-users: MCP results in more capable AI applications or agents that can access your data and take actions on the user’s behalf when necessary.

MCP – Concepts

At the top level of MCP concepts, there are three entities:

  • Participants
  • Layers
  • Data Layer Protocol

 

Participants

MCP follows a client-server architecture where an MCP host – an AI application like an enterprise chatbot – establishes connections to one or more MCP servers. The MCP host accomplishes this by creating an MCP client for each MCP server. Each MCP client maintains a dedicated connection with its MCP server.

The key participants of MCP architecture are:

  • MCP Host: The AI application that coordinates and manages one or more MCP clients
  • MCP Client: A component that maintains a dedicated connection to an MCP server and obtains context from it for the MCP host
  • MCP Server: A program that provides context to MCP clients (i.e., the data, tools, and prompts used to generate responses or perform actions on the user’s behalf)

Mcp Client Server

Layers

MCP consists of two layers:

  • Data layer – Defines a JSON-RPC-based protocol for client-server communication, including:
    • Lifecycle management – connection initiation, capability discovery & negotiation, and connection termination
    • Core primitives – server features such as tools for AI actions, resources for context data, and prompt templates for client-server interaction, plus client features such as asking the client to sample from the host LLM and logging messages to the client
    • Utility features – additional capabilities like real-time notifications and progress tracking for long-running operations
  • Transport Layer – Manages communication channels and authentication between clients and servers. It handles connection establishment, message framing and secure communication between MCP participants

Data Layer Protocol

The core part of MCP is defining the schema and semantics between MCP clients and MCP servers. It is the part of MCP that defines the ways developers can share context from MCP servers to MCP clients.

MCP uses JSON-RPC 2.0 as its underlying RPC protocol. Clients and servers send requests to each other and respond accordingly. Notifications can be used when no response is required.
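The three JSON-RPC 2.0 message shapes look like this in practice. The method names here ("tools/list" and "notifications/tools/list_changed") follow the published MCP specification at the time of writing, but check the current spec revision before relying on them.

```python
import json

# A JSON-RPC 2.0 request carries an id, so the peer must answer it.
# "tools/list" is the method an MCP client uses for capability discovery.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The matching response echoes the same id and carries a result (or an error).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{"name": "get_account_details"}]},
}

# A notification has no id and expects no response; servers use this one
# to tell connected clients that the available tools have changed.
notification = {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}

for message in (request, response, notification):
    print(json.dumps(message))
```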

Life Cycle Management

MCP is a stateful protocol that requires lifecycle management. The purpose of lifecycle management is to negotiate the capabilities (i.e. functionalities) that both client and server support.

Primitives

Primitives define what clients and servers can offer each other. These primitives specify the types of contextual information that can be shared with AI applications and the range of actions that can be performed. MCP defines three core primitives that servers can expose:

  • Tools: Executable functions that AI applications can invoke to perform actions (e.g., API calls, database queries)
  • Resources: Data sources that provide contextual information to AI applications (e.g., file contents, API responses, database records)
  • Prompts: Reusable templates that help structure interactions with language models (e.g., system prompts, few-shot examples)

 

Notifications

The protocol supports real-time notifications to enable dynamic updates between servers and clients. For example, when a server’s available tools change – such as when new functionalities are added or existing functionality is updated – the server can send tool update notifications to all its connected clients about these changes.

 

Security in Data Accessing

While AI applications communicate with multiple enterprise data sources through MCP and fetch real-time sensitive data – like customer information and financial data – to serve users, data security becomes an absolutely critical factor to address.

MCP ensures secure access through several mechanisms.

Authentication and Authorization

MCP implements server-side authentication where each MCP server validates who is making the request. The enterprise system controls access through:

  • User-specific credentials – Each user connecting through MCP has their own authentication tokens
  • Role-based access control (RBAC) – Users only access data that their role permits
  • Session management – Time-limited sessions that expire automatically

Data Access Controls

The MCP server acts as a security gateway that enforces the same access policies as direct system access:

    • Users can only query data that they are authorized to access
    • The server validates every request against permission rules
    • Sensitive information can be masked or filtered based on user privileges
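Conceptually, the controls above amount to a gate the server runs before returning data. Everything in this sketch (the role table, field names, and masking rule) is invented for illustration; a real deployment would delegate these checks to the enterprise's IAM and policy systems.

```python
# Toy policy tables: which roles may call which capability, and which
# fields must be masked for non-privileged roles. All names are illustrative.
ROLE_PERMISSIONS = {
    "analyst": {"get_account_details"},
    "admin": {"get_account_details", "add_transaction"},
}
SENSITIVE_FIELDS = {"ssn", "card_number"}

def authorize_and_filter(role, capability, record):
    # 1) Validate every request against the permission rules.
    if capability not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {capability!r}")
    # 2) Mask sensitive values unless the caller is privileged.
    if role != "admin":
        return {k: ("***" if k in SENSITIVE_FIELDS else v)
                for k, v in record.items()}
    return record

row = {"account": "ACC-1", "balance": 250.0, "ssn": "123-45-6789"}
print(authorize_and_filter("analyst", "get_account_details", row))
# {'account': 'ACC-1', 'balance': 250.0, 'ssn': '***'}
```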

Secure Communication

      • Encrypted connections – All data transmissions use TLS/HTTPS encryption
      • No data storage in AI – AI systems do not store the financial data they access; they only process it during the conversation session

Audit and Monitoring

MCP implementations in an enterprise ecosystem should include:

      • Complete audit logs – Every data access request is logged with user, timestamp and data accessed
      • Anomaly detection – Engage mechanisms that monitor unusual access patterns and trigger alerts
      • Compliance tracking – All interactions meet regulatory requirements like GDPR, PCI-DSS

Architecture Isolation

Enterprises typically deploy MCP using:

      • Private network deployment – MCP servers stay within the enterprise’s secure firewall boundary
      • API gateway integration – Requests go through existing security infrastructure
      • No direct database access – MCP connects and accesses data through secure APIs, not through direct access to the database

The main idea is that MCP does not bypass existing security. It works within the same security controls as other enterprise applications, simply exposing a smarter interface.

 

MCP Implementation & Demonstration

In this section, I will demonstrate a simple use case where an MCP client (Claude Desktop) interacts with a “Finance Manager” MCP server that can fetch financial information from a database.

Financial data is maintained in Postgres database tables. The MCP client (Claude Desktop app) requests information about a customer account; the MCP host discovers the appropriate capability based on the user prompt and invokes the respective MCP tool function that fetches data from the database table.

To see the MCP client-server setup in action, three parts need to be configured:

      • Backend Database
      • MCP server implementation
      • MCP server registration in MCP Host

Backend Database

The Postgres table “accounts” maintains account data with the information below; the “transactions” table maintains the transactions performed on the accounts.

Accounts Table

Transactions Table

MCP server implementation

Mcp Server Implementation

The FastMCP class implements the MCP server components; creating an object of it initializes and enables access to those components to build enterprise MCP server capabilities.

The annotation “@mcp.tool()” defines a capability, and the decorated function is recognized as an MCP capability. These functions are exposed to AI applications and are invoked from the MCP host to perform designated actions.

In order to invoke MCP capabilities from the client, the MCP server should be up and running. In this example, there are two functions defined as MCP tool capabilities:

      • get_account_details – Accepts an account number as an input parameter, queries the “accounts” table, and returns the account information
      • add_transaction – Accepts an account number and a transaction amount as parameters, and makes an entry in the “transactions” table
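Since the server code appears only as an image above, here is a minimal pure-Python stand-in for the registration pattern it describes. To be clear, this is not the actual FastMCP API: a plain decorator registers functions as discoverable capabilities, and an in-memory dict takes the place of the Postgres “accounts” and “transactions” tables.

```python
# Minimal stand-in for the @mcp.tool() registration pattern. Not the real
# FastMCP API -- just the discovery mechanism it enables, with an in-memory
# dict in place of the Postgres "accounts"/"transactions" tables.
TOOLS = {}

def tool(fn):
    TOOLS[fn.__name__] = fn  # the registered name becomes the capability name
    return fn

ACCOUNTS = {"ACC-1": {"owner": "Asha", "balance": 500.0}}
TRANSACTIONS = []

@tool
def get_account_details(account_number: str) -> dict:
    """Return account information for the given account number."""
    return ACCOUNTS[account_number]

@tool
def add_transaction(account_number: str, amount: float) -> dict:
    """Record a transaction against the account."""
    entry = {"account": account_number, "amount": amount}
    TRANSACTIONS.append(entry)
    return entry

# The "host" discovers capabilities by name and invokes them dynamically,
# which is what happens when a user prompt is matched to a tool.
print(sorted(TOOLS))                          # ['add_transaction', 'get_account_details']
print(TOOLS["get_account_details"]("ACC-1"))  # {'owner': 'Asha', 'balance': 500.0}
```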

 

MCP Server Registration in MCP Host

For AI applications to invoke an MCP server capability, the MCP server should be registered in the MCP host on the client side. For this demonstration, I am using Claude Desktop as the MCP client from which I interact with the MCP server.

First, MCP server is registered with MCP host in Claude Desktop as below,

Claude Desktop -> Settings -> Developer -> Local MCP Servers -> Click “Edit Config”

Developer Settings

Open the “claude_desktop_config” JSON file in Notepad. Add configurations to the JSON as below. The configurations define the path where the MCP server implementation is located and the command the MCP host should run. Save the file and close it.

Register Mcp Server

Restart the “Claude Desktop” application and go to Settings -> Developer -> Local MCP servers tab. The newly added MCP server (finance-manager) will be in the running state as below:

Mcp Server Running

Go to the chat window in Claude Desktop. Issue a prompt to fetch details of an account in the “accounts” table and review the response.

 

Claude Mcp Invocation

User Prompt: User issues a prompt to fetch details of an account.

MCP Discovery & Invoke: The client (Claude Desktop) processes the prompt, interacts with the MCP host, automatically discovers the relevant capability – the get_account_details function in this case – without the function name being explicitly mentioned, and invokes the function with the necessary parameters.

Response: The MCP server processes the request, fetches the account details from the table, and returns them to the client. The client formats the response and presents it to the user.

Here is another example, adding a transaction to the backend table for an account:

Mcp Server Add Transaction

Here, the “add_transaction” capability has been invoked to add a transaction record to the “transactions” table. In the chat window, you can see which MCP function is being invoked, along with the request & response body.

The record has been successfully added into the table,

Add Transaction Postgres Table

Impressive, isn’t it?!

There is a wide range of use cases for implementing MCP servers and integrating them with enterprise AI systems, bringing an intelligent layer that interacts with enterprise data sources.

You may also wonder in what ways MCP (Model Context Protocol) differs from RAG (Retrieval Augmented Generation), as I did. Based on my research, I curated a comparison matrix of features that should add more clarity:

 

| Aspect | RAG (Retrieval Augmented Generation) | MCP (Model Context Protocol) |
|---|---|---|
| Purpose | Retrieve unstructured docs to improve LLM responses | AI agents access structured data/tools dynamically |
| Data Type | Unstructured text (PDFs, docs, web pages) | Structured data (JSON, APIs, databases) |
| Workflow | Retrieve → Embed → Prompt injection → Generate | AI requests context → Protocol delivers → AI reasons |
| Context Delivery | Text chunks stuffed into prompt | Structured objects via standardized interface |
| Token Usage | High (full text in context) | Low (references/structured data) |
| Action Capability | Read-only (information retrieval) | Read + Write (tools, APIs, actions) |
| Discovery | Pre-indexed vector search | Runtime tool/capability discovery |
| Latency | Retrieval + embedding time | Real-time protocol calls |
| Use Case | Q&A over documents, chatbots | AI agents, tool calling, enterprise systems |
| Maturity | Widely adopted, mature ecosystem | Emerging standard (2025+) |
| Complexity | Vector DB + embedding pipeline | Protocol implementation + AI agent |

 

Conclusion

MCP servers extend the capabilities of AI assistants by allowing them to interact with external services and data sources using natural language commands. The Model Context Protocol (MCP) has a wide range of use cases, and several enterprises have already implemented and hosted MCP servers for AI clients to integrate with.

Some of the prominent MCP servers include:

GitHub MCP Server: Allows AI to manage repositories, issues, pull requests, and monitor CI/CD workflows directly within the development environment.

Azure DevOps MCP Server: Integrates AI with Azure DevOps services for managing pipelines, work items, and repositories, ideal for teams within the Microsoft ecosystem.

PostgreSQL MCP Server: Bridges the gap between AI and databases, allowing natural language queries, schema exploration, and data analysis without manual SQL scripting.

Slack MCP Server: Turns Slack into an AI-powered collaboration hub, enabling message posting, channel management, and more.

Bruno : The Developer-Friendly Alternative to Postman https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/ https://blogs.perficient.com/2026/01/02/bruno-the-developer-friendly-alternative-to-postman/#respond Fri, 02 Jan 2026 08:25:16 +0000 https://blogs.perficient.com/?p=389232

If you’re knee-deep in building apps, you already know APIs are the backbone of everything. Testing them? That’s where the real magic happens. For years, we’ve relied on tools like Postman and Insomnia to send requests, debug issues, and keep things running smoothly. But lately, there’s a buzz about something new: Bruno. It’s popping up everywhere, and developers are starting to make the switch. Why? Let’s dive in.

What Exactly is Bruno?

Picture this: an open-source, high-performance API client that puts your privacy first. Bruno isn’t some bloated app that shoves your stuff into the cloud. No, it keeps everything right on your local machine. Your API collections, requests, all of it? Safe and sound where you control it, no cloud drama required.

Bruno is built for developers who want:

  • Simplicity without compromise
  • High performance without unnecessary extras
  • Complete freedom with open-source flexibility

It’s like the minimalist toolbox you’ve been waiting for.

Why is Bruno Suddenly Everywhere?

Bruno solves the pain points that frustrate us with other API tools:

  • Privacy First: No forced cloud uploads, your collections stay local. No hidden syncing; your data stays completely under your control.
  • Fast and Lightweight: Loads quickly and handles requests without lag. Perfect for quick tests on the go.
  • Open-Source Freedom: No fees, no lock-in. Collections are Git-friendly and saved as plain text for easy version control.
  • No Extra Bloat: Focused on what matters, API testing without unnecessary features.

Bottom line: Bruno fits the way we work today, collaboratively, securely, and efficiently. It’s not trying to do everything; it’s just good at API testing.

Key Features

Bruno keeps it real with features that matter. Here are the highlights:

  1. Totally Open-Source

  • No sneaky costs or paywalls.
  • Peek under the hood anytime—the code’s all there.
  • A community of developers contributes on GitHub, making it better every day. Wanna join? Hit up their repo and contribute.

  2. Privacy from the Ground Up

  • Everything lives locally.
  • No accounts, no cloud pushes—your requests don’t leave your laptop.
  • Ideal if you’re handling sensitive APIs and don’t want Big Tool Company snooping.
  • Bonus: Those plain-text files integrate well with Git, so team handoffs are seamless.

  3. Light as a Feather, Fast as Lightning

  • Clean UI, no extra bells and whistles slowing you down.
  • Starts up quickly and zips through responses.
  • Great for solo endpoint tweaks or managing large workflows without your machine slowing down.

Getting Bruno Up and Running

Installing Bruno is simple. It works on Windows, macOS, and Linux. Just choose your platform, and you’re good to go.

Quick Install Guide

Windows

  1. Head to Bruno’s GitHub Releases page.
  2. Grab the latest .exe file.
  3. Run it and follow the prompts.
  4. Boom—find it in your Start Menu.

macOS

  1. Download the .dmg from Releases.
  2. Drag it to Applications.
  3. Fire it up and get testing.

Linux

  1. Snag the .AppImage or .deb from Releases.
  2. For AppImage: chmod +x Bruno.AppImage then ./Bruno.AppImage.
  3. For .deb: sudo dpkg -i bruno.deb and sudo apt-get install -f.

GUI or CLI? Your Call

  • GUI: Feels like Postman but cleaner. Visual, easy-to-build requests on the fly.
  • CLI: For the terminal lovers. Automate tests, integrate with CI/CD, or run collections: bru run collection.bru --env dev.

Build Your First Collection in Minutes

Bruno makes organizing APIs feel effortless. Here’s a no-sweat walkthrough.

Step 1: Fire It Up

Launch Bruno. You’ll see a simple welcome screen prompting you to create a new collection.

Step 2: New Collection Time

  1. Hit “New Collection.”
  2. Name it (say, “My API Playground”).
  3. Pick a folder—it’s all plain text, so Git loves it.

Step 3: Add a Request

  1. Inside the collection, click “New Request.”
  2. Pick your method (GET, POST, etc.).
  3. Enter the URL: https://jsonplaceholder.typicode.com/posts.

Step 4: Headers and Body Magic

  • Add the header: Content-Type: application/json.
  • For POSTs, add a body like:

JSON

{
  "title": "Bruno Blog",
  "body": "Testing Bruno API Client",
  "userId": 1
}

Step 5: Hit Send

Click it, and watch the response pop: status, timing, pretty JSON—all right there.

Step 6: Save and Sort

Save the request, create folders for environments or APIs, and use variables to switch setups.
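Because collections are plain text, the request from steps 3–4 lands on disk as a .bru file, roughly like the following (illustrative only; the exact blocks Bruno writes can vary by version):

```
meta {
  name: Create Post
  type: http
  seq: 1
}

post {
  url: https://jsonplaceholder.typicode.com/posts
  body: json
  auth: none
}

headers {
  Content-Type: application/json
}

body:json {
  {
    "title": "Bruno Blog",
    "body": "Testing Bruno API Client",
    "userId": 1
  }
}
```

This human-readable format is what makes diffs and code reviews of API collections practical in Git.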

Bruno vs. Postman: Head-to-Head

Postman’s the OG, but Bruno’s the scrappy challenger winning hearts. Let’s compare.

  1. Speed

  • Bruno: Lean and mean—quick loads, not a resource hog.
  • Postman: Packed with features, but it can feel sluggish on big projects.

  Edge: Bruno

  2. Privacy

  • Bruno: Local only, no cloud creep.
  • Postman: Syncs to their servers—handy for teams, sketchy for secrets.

  Edge: Bruno

  3. Price Tag

  • Bruno: Free forever, open-source vibes.
  • Postman: Free basics, but teams and extras? Pay up.

  Edge: Bruno

 

| Feature | Bruno | Postman |
|---|---|---|
| Open Source | ✅ Yes | ❌ No |
| Cloud Sync | ❌ No | ✅ Yes |
| Performance | ✅ Lightweight | ❌ Heavy |
| Privacy | ✅ Local Storage | ❌ Cloud-Based |
| Cost | ✅ Free | ❌ Paid Plans |

Level up With Advanced Tricks

Environment Variables

Swap envs easy-peasy:

  • Make files for dev/staging/prod.
  • Use {{baseUrl}} in requests.
  • Example:

{
  "baseUrl": "https://api.dev.example.com",
  "token": "your-dev-token"
}
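Under the hood, placeholders like {{baseUrl}} are resolved by simple template interpolation against the active environment. The sketch below is a toy version of the idea, not Bruno's actual implementation:

```python
import re

# Active environment, mirroring the example file above.
env = {"baseUrl": "https://api.dev.example.com", "token": "your-dev-token"}

def interpolate(template: str, env: dict) -> str:
    # Replace each {{name}} with its value from the active environment.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: env[m.group(1)], template)

print(interpolate("{{baseUrl}}/posts", env))  # https://api.dev.example.com/posts
```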

 

Scripting Smarts

Add pre/post scripts for:

  • Dynamic auth: request.headers["Authorization"] = "Bearer " + env.token;
  • Response checks or automations.

Community & Contribution

It’s community-driven: development happens in the open on GitHub, where anyone can report issues, suggest features, and contribute code.

Conclusion

Bruno isn’t just another API testing tool; it’s designed for developers who want simplicity and control. With local-first privacy, fast performance, open-source flexibility, and built-in Git support, Bruno delivers everything you need without unnecessary complexity.
If you’re tired of heavy, cloud-based clients, it’s time to switch. Download Bruno today and experience the difference: Download here.

 

Perficient Included in IDC Market Glance: Digital Engineering and Operational Technology Services https://blogs.perficient.com/2025/12/23/perficient-included-in-idc-market-glance-digital-engineering-and-operational-technology-services/ https://blogs.perficient.com/2025/12/23/perficient-included-in-idc-market-glance-digital-engineering-and-operational-technology-services/#respond Tue, 23 Dec 2025 18:31:09 +0000 https://blogs.perficient.com/?p=389312

We’re excited to announce that Perficient has been included in the “DEOT Services Provider with Other IT Services” category in the IDC Market Glance: Digital Engineering and Operational Technology Services, 4Q25 report (Doc# US53142225, December 2025). This segment includes service providers whose offerings and value proposition are focused primarily on Digital Engineering and OT services.

We believe our inclusion in this Market Glance reflects our deep commitment to helping organizations navigate the complex intersection of digital innovation and operational technology with confidence, agility, and cutting-edge engineering capabilities.

According to IDC, Digital Engineering and Operational Technology Services encompass three critical areas: product engineering services that support an enterprise’s existing software, hardware, and semiconductor product lifecycle from concept to end of life; operational technology services including plant engineering, manufacturing engineering services, asset modernization, and IIoT services; and digital engineering innovation accelerator services that leverage next-generation technologies like IoT, AI/ML, Generative AI, AR/VR, Digital Twins, Robotics, and more to transform product engineering and operational technology capabilities.

To us, this inclusion validates Perficient’s comprehensive approach to digital engineering and our ability to integrate emerging technologies with operational excellence to drive measurable business outcomes for our clients.

“Being included in the IDC Market Glance for Digital Engineering and Operational Technology Services is a testament to our team’s expertise in bridging the gap between digital innovation and operational reality,” said Justin Huckins, Perficient Director, AI Digital and Martech Strategy. “We’re proud to help organizations modernize their product engineering capabilities, optimize their operational technology infrastructure, and leverage AI and other advanced technologies to accelerate innovation and drive competitive advantage.”

Solutions Informed by Industry Expertise

We have found that one of the keys to real impact is to deeply understand the challenges and complexities of the industry for which digital engineering and operational technology solutions are created. This approach is especially important for complex manufacturing environments, energy infrastructure, and automotive innovation where off-the-shelf solutions fall short.

We believe our automotive expertise has been further validated by our inclusion as a Major Player in the IDC MarketScape: Worldwide IT and Engineering Services for Software-Defined Vehicles 2025 Vendor Assessment (Doc # US51813124, September 2025). We believe this recognition underscores our leadership in strategic vision for the automotive industry and our AI-first approach across the SDV lifecycle, promoting innovation in areas such as digital twins, augmented and virtual reality, smart personalization throughout the buyer and ownership experience, and the monetization of connected vehicle data.

Ingesting, Reporting, and Monetizing Telemetry Data

We helped a top Japanese multinational automobile manufacturer build a comprehensive cloud data platform to securely store, orchestrate, and analyze proprietary electric vehicle battery data for strategic decision-making and monetization. By designing and implementing a Microsoft Azure architecture using Terraform, we delivered robust data pipelines that clean, enrich, and ingest complex telematics datasets. Our solution integrated Databricks for advanced analytics and Power BI for visualization, creating a centralized, secure, and scalable platform that leverages data governance and strategy to maximize the monetization of EV batteries.

Explore our strategic position on battery passports and Cloud capabilities.

Transforming EV Adoption Through Subscription Model Innovation

We partnered with a major automotive manufacturer to enhance their EV subscription plan and mobile app, addressing customer concerns about charging accessibility and infrastructure fragmentation. Our application modernization and Adobe teams created seamless customer journeys that enabled subscribers to order complimentary supercharger adapters and integrated the mobile app with shipping and billing systems to track adapter delivery and process charging transactions. This innovative approach resulted in over 105,000 adapter orders and 55,000 new subscribers in the first month of launch, generating significant media attention as the industry’s first offering of this type.

Read more about our automotive industry solutions or discover our application modernization services.

Building Advanced Metering Infrastructure for Operational Excellence

We developed an advanced metering infrastructure solution for a leading nationwide diversified energy provider to provide real-time outage information from meters, routers, transformers, and substations. Using Databricks on Azure integrated with ArcGIS and Azure API Management, we built a comprehensive data foundation to ingest IoT data from all network components while overlaying hyperspectral imagery to visualize vegetation, poles, and terrain. This AMI implementation created a single source of truth for operational teams, reduced operational costs, and enabled accurate, timely information delivery to customers, fundamentally improving maintenance, engineering, and customer service workflows.

Read more about our energy and utilities expertise or explore our IoT and operational technology capabilities.

Let’s Engineer the Future Together

As organizations continue to digitally transform their engineering and operational capabilities, Perficient remains a trusted partner for companies seeking to lead in the era of smart products, connected operations, and AI-driven innovation.

Learn more about how Perficient is shaping the future of digital engineering and operational technology.

About IDC Market Glance:

This IDC study is a vendor assessment of the 2025 IT and engineering services market for software-defined vehicles (SDVs) using the IDC MarketScape model. This assessment discusses both the quantitative and qualitative characteristics for success in the software-defined vehicle life-cycle services market and covers a variety of vendors operating in this market. The evaluation is based on a comprehensive and rigorous framework that compares vendors, assesses them based on certain criteria, and highlights the factors expected to be most important for market success in the short and long terms.

 

Purpose-Driven AI in Insurance: What Separates Leaders from Followers
https://blogs.perficient.com/2025/12/19/purpose-driven-ai-in-insurance-what-separates-leaders-from-followers/
Fri, 19 Dec 2025 17:57:54 +0000

Reflecting on this year’s InsureTech Connect Conference 2025 in Las Vegas, one theme stood out above all others: the insurance industry has crossed a threshold from AI experimentation to AI expectation. With over 9,000 attendees and hundreds of sessions, the world’s largest insurance innovation gathering became a reflection of where the industry stands—and where it’s heading.

What became clear: the carriers pulling ahead aren’t just experimenting with AI—they’re deploying it with intentional discipline. AI is no longer optional, and the leaders are anchoring every investment in measurable business outcomes.

The Shift Is Here: AI in Insurance Moves from Experimentation to Expectation

This transformation isn’t happening in isolation, though. Each shift represents a fundamental change in how carriers approach, deploy, and govern AI—and together, they reveal why some insurers are pulling ahead while others struggle to move beyond proof-of-concept.

Here’s what’s driving the separation:

  • Agentic AI architectures that move beyond monolithic models to modular, multi-agent systems capable of autonomous reasoning and coordination across claims, underwriting, and customer engagement. Traditional models aren’t just slow—they’re competitive liabilities that can’t deliver the coordinated intelligence modern underwriting demands.
  • AI-first strategies that prioritize trust, ethics, and measurable outcomes—especially in underwriting, risk assessment, and customer experience.
  • A growing emphasis on data readiness and governance. The brutal reality: carriers are drowning in data while starving for intelligence. Legacy architectures can’t support the velocity AI demands.

Success In Action: Automating Insurance Quotes with Agentic AI

Why Intent Matters: Purpose-Driven AI Delivers Measurable Results

What stood out most this year was the shift from “AI for AI’s sake” to AI with purpose. Working with insurance leaders across every sector, we’ve seen the industry recognize that without clear intent—whether it’s improving claims efficiency, enhancing customer loyalty, or enabling embedded insurance—AI initiatives risk becoming costly distractions.

Conversations with leaders at ITC and other industry events reinforced this urgency. Leaders consistently emphasize that purpose-driven AI must:

  • Align with business outcomes. The value is measurable: new-agent success rates increase by up to 20%, premium growth rises by 15%, and customer onboarding costs fall by up to 40%.

  • Be ethically grounded. Trust is a competitive differentiator—AI governance isn’t compliance theater, it’s market positioning.

  • Deliver tangible value to both insurers and policyholders. From underwriting to claims, AI enables real-time decisions, sharpens risk modeling, and delivers personalized interactions at scale. Generative AI accelerates content creation, enables smarter agent support, and transforms customer engagement. Together, these capabilities thrive on modern, cloud-native platforms designed for speed and scalability.

Learn More: Improving CSR Efficiency With a GenAI Assistant

Building the AI-Powered Future: How We’re Accelerating AI in Insurance

So, how do carriers actually build this future? That’s where strategic partnerships and proven frameworks become essential.

At Perficient, we’ve made this our focus. We help clients advance AI capabilities through virtual assistants, generative interfaces, agentic frameworks, and product development, enhancing team velocity by integrating AI team members.

Through our strategic partnerships with industry-leading technology innovators—including AWS, Microsoft, Salesforce, Adobe, and more—we accelerate insurance organizations’ ability to modernize infrastructure, integrate data, and deliver intelligent experiences. Together, we shatter boundaries so you have the AI-native solutions you need to boldly advance business.

But technology alone isn’t enough. We take it even further by ensuring responsible AI governance and ethical alignment with our PACE framework—Policies, Advocacy, Controls, and Enablement—to ensure AI is not only innovative, but also rooted in trust. This approach ensures AI is deployed with purpose, aligned to business goals, and embedded with safeguards that protect consumers and organizations.

Because every day your data architecture isn’t AI-ready is a day you’re subsidizing your competitors’ advantage.

You May Also Enjoy: 3 Ways Insurers Can Lead in the Age of AI

Ready to Lead? Partner with Perficient to Accelerate Your AI Transformation

Are you building your AI capabilities at the speed the market demands?

From insight to impact, our insurance expertise helps leaders modernize, personalize, and scale operations. We power AI-first transformation that enhances underwriting, streamlines claims, and builds lasting customer trust.

  • Business Transformation: Activate strategy and innovation ​within the insurance ecosystem.​
  • Modernization: Optimize technology to boost agility and ​efficiency across the value chain.​
  • Data + Analytics: Power insights and accelerate ​underwriting and claims decision-making.​
  • Customer Experience: Ease and personalize experiences ​for policyholders and producers.​

We are trusted by leading technology partners and consistently mentioned by analysts. Discover why we have been trusted by 13 of the 20 largest P&C firms and 11 of the 20 largest annuity carriers. Explore our insurance expertise and contact us to learn more.

Improve Healthcare Quality With Data + AI: Key Takeaways for Industry Leaders [Webinar]
https://blogs.perficient.com/2025/12/18/improve-healthcare-quality-with-data-ai-key-takeaways-for-industry-leaders-webinar/
Thu, 18 Dec 2025 23:41:42 +0000

As healthcare organizations accelerate toward value-based care, the ability to turn massive data volumes into actionable insights is no longer optional—it’s mission-critical.

In a recent webinar, Improve Healthcare Quality with Data + AI, experts from Databricks, Excellus BlueCross BlueShield, and Perficient shared how leading organizations are using unified data and AI to improve outcomes, enhance experiences, and reduce operational costs.

Below are the key themes and insights you need to know.

1. Build a Unified, AI-Ready Data Foundation

Fragmented data ecosystems are the biggest barrier to scaling AI. Claims, clinical records, social determinants of health (SDOH), and engagement data often live in silos. This creates inefficiencies and incomplete views of your consumers (e.g., members, patients, providers, and brokers).

What leaders are doing:

  • Unify all data sources—structured and unstructured—into a single, secure platform.
  • Adopt open formats and governance by design (e.g., Unity Catalog) to ensure compliance and interoperability.
  • Move beyond piecemeal integrations to an enterprise data strategy that supports real-time insights.

✅ Why it matters: A unified foundation enables predictive models, personalized engagement, and operational efficiency—all essential for success in value-based care.

2. Shift from Reactive to Proactive Care

Healthcare is moving from anecdotal, reactive interactions to data-driven, proactive engagement. This evolution requires prioritizing interventions based on risk, cost, and consumer preferences.

Key capabilities:

  • Predict risk and close gaps in care before they escalate.
  • Use AI to prioritize next-best actions, balancing population-level insights with individual needs.
  • Incorporate feedback loops to refine outreach strategies and improve satisfaction.

✅ North Star: Deliver care that is timely, personalized, and measurable, improving both individual outcomes and population health.

3. Personalize Engagement at Scale

Consumers expect the “Amazon experience”—personalized, seamless, and proactive. Achieving this requires flexible activation strategies.

Best practices:

  • Decouple message, channel, and recommendation for modular outreach.
  • Use AI-driven segmentation to tailor interventions across email, SMS, phone, PCP coordination, and more.
  • Continuously optimize based on response and engagement data.

✅ Result: Higher quality scores, improved retention, and stronger consumer trust.

4. Operationalize AI for Measurable Impact

AI has moved beyond experimentation and is now delivering tangible ROI. Excellus BlueCross BlueShield’s AI-powered call summarization is a prime example:

  • Reduced call handle time by one to two minutes, saving thousands of hours annually
  • Improved audit quality scores from ~85% to 95–100%
  • Achieved real-time summarization in under seven seconds, enhancing advocate productivity and member experience

✅ Lesson: Start with high-impact workflows, not isolated tasks. Quick wins build confidence and pave the way for enterprise-scale transformation.

5. Scale Strategically—Treat AI as Business Transformation

Perficient emphasized that scaling AI is not a tech project—it’s a business transformation. Success depends on:

  • Clear KPIs tied to business outcomes (e.g., CMS Stars, HEDIS measures)
  • Governed, explainable, and continuously monitored data
  • Change management and workforce enablement to drive adoption
  • Modular, composable architecture for flexibility and speed

✅ Pro tip: Begin with an MVP approach—prioritize workflows, prove value quickly, and expand iteratively.

Final Thought: Data and AI are Redefining Health Care Delivery

Healthcare leaders face mounting pressure to deliver better outcomes, lower costs, and exceptional experiences. The insights shared in this webinar make one thing clear: success starts with a unified, AI-ready data foundation and a strategic approach to scaling AI across workflows—not just isolated tasks.

Organizations that act now will be positioned to move from reactive care to proactive engagement, personalize experiences at scale, and unlock measurable ROI. The opportunity is here. How you act on it will define your competitive edge.

Ready to reimagine healthcare with data and AI?

If you’re exploring how to modernize care delivery and consumer engagement, start with a strategic assessment. Align your goals, evaluate your data readiness, and identify workflows that deliver the greatest business and health impact. That first step sets the stage for meaningful transformation, and it’s where the right partner can accelerate progress from strategy to measurable impact.

Our healthcare expertise equips leaders to modernize, personalize, and scale care. We drive resilient, AI-powered transformation to shape the experiences and engagement of healthcare consumers, streamline operations, and improve the cost, quality, and equity of care.

  • Business Transformation: Activate strategy for transformative outcomes and health experiences.
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability.
  • Data + Analytics: Power enterprise agility and accelerate healthcare insights.
  • Consumer Experience: Connect, ease, and elevate impactful health journeys.

We understand that every organization is on a unique AI journey. Whether you’re starting from scratch, experimenting with pilots, or scaling AI across your enterprise, we meet you where you are. Our structured approach delivers value at every stage, helping you turn AI from an idea into a business advantage. Plus, as a Databricks Elite consulting partner, we build end-to-end solutions that empower you to drive more value from your data.

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to get started today.

Watch the on-demand webinar now:

Getting Started with Python for Automation
https://blogs.perficient.com/2025/12/09/getting-started-with-python-for-automation/
Tue, 09 Dec 2025 14:00:21 +0000

Automation has become a core part of modern work, allowing teams to reduce repetitive tasks, save time, and improve accuracy. Whether it’s generating weekly reports, organizing files, processing large amounts of data, or interacting with web applications, automation helps individuals and companies operate more efficiently. Among all programming languages used for automation, Python is one of the most widely adopted because of its simplicity and flexibility. 

Why Python Is Perfect for Automation 

Python is known for having a clean and readable syntax, which makes it easy for beginners to start writing scripts without needing deep programming knowledge. The language is simple enough for non-developers, yet powerful enough for complex automation tasks. Another major advantage is the availability of thousands of libraries. These libraries allow Python to handle file operations, manage Excel sheets, interact with APIs, scrape websites, schedule tasks, and even control web browsers – all with minimal code. Because of this, Python becomes a single tool capable of automating almost any repetitive digital task. 

What You Can Automate with Python 

Python can automate everyday tasks that would otherwise require significant manual effort. Simple tasks like renaming multiple files, organizing folders, or converting file formats can be completed instantly using small scripts. It is also commonly used for automating Excel-based workflows, such as cleaning datasets, merging sheets, generating monthly summaries, or transforming data between formats. Python is equally powerful for web-related automation: collecting data from websites, making API calls, sending automated emails, downloading content, and filling out online forms. For more advanced uses, Python can also automate browser testing, server monitoring, and deployment processes. 

Setting Up Your Python Automation Environment 

Getting started is straightforward. After installing Python, you can use an editor like VS Code or PyCharm to write your scripts. Libraries required for automation can be installed with a single pip install command, making setup simple. Once your environment is ready, writing your first script usually takes only a few minutes. For example, a short script can rename files in a folder, send an email, or run a function at a specific time of day. Python’s structure is beginner-friendly, so even basic programming knowledge is enough to start automating everyday tasks.

Examples of Simple Automation 

A typical example is a script that automatically renames files. Instead of renaming hundreds of files one by one, Python can loop through the folder and rename them instantly. Another example is an automated email script that can send daily reminders or reports. Python can also schedule tasks so that your code runs every morning, every hour, or at any time you choose. These examples show how even small scripts can add real value to your workflow by reducing repetitive manual tasks. 
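The renaming example above can be sketched in just a few lines using only the standard library. This is a minimal illustration, not a production script: the folder path and prefix in the usage comment are placeholders you would replace with your own.

```python
from pathlib import Path

def rename_with_prefix(folder: str, prefix: str) -> list:
    """Add `prefix` to the name of every file in `folder`.

    Returns the new file names so the caller can verify or log them.
    """
    renamed = []
    # sorted() materializes the listing first, so renames don't affect iteration
    for path in sorted(Path(folder).iterdir()):
        if path.is_file():
            new_path = path.with_name(f"{prefix}{path.name}")
            path.rename(new_path)
            renamed.append(new_path.name)
    return renamed

# Example usage (illustrative paths): prefix every file in ./reports
# rename_with_prefix("reports", "2025-12-")
```

Instead of renaming hundreds of files by hand, one call handles the whole folder; the same loop structure works for moving files, converting formats, or any other per-file task.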

Best Practices When Building Automation 

As you begin writing automation scripts, it helps to keep the code organized and reliable. Using virtual environments ensures that your project libraries remain clean. Adding error-handling prevents scripts from stopping unexpectedly. Logging enables you to track what your script does and when it executes. Once your automation is ready, you can run it automatically using tools like Task Scheduler on Windows or cron on Linux, so the script works in the background without your involvement. 
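The error-handling and logging practices above can be combined into one small pattern. The sketch below assumes a task function you supply yourself; process_report here is a hypothetical placeholder for your real automation step.

```python
import logging

# Timestamped log lines make it easy to see what ran and when
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("automation")

def run_safely(task, *args, **kwargs):
    """Run one automation step, logging success or failure instead of crashing."""
    try:
        result = task(*args, **kwargs)
        log.info("%s finished: %r", task.__name__, result)
        return result
    except Exception:
        # log.exception records the full traceback, then the script continues
        log.exception("%s failed, moving on", task.__name__)
        return None

# Hypothetical example task: replace with your real workflow step.
def process_report(name):
    return f"processed {name}"

# run_safely(process_report, "monthly.xlsx")
```

Wrapping each step this way means one failing task cannot silently kill a scheduled run, and the log file becomes a record of every execution for Task Scheduler or cron jobs running unattended.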

How Companies Use Python Automation 

Python automation is widely used across industries. IT teams rely on it to monitor servers, restart services, and handle deployment tasks. Business teams use it to generate reports, clean data, update dashboards, and manage document workflows. Marketing teams use automation for scraping competitor information, scheduling social media posts, or tracking engagement. For developers, Python helps with testing, error checking, and system integration via APIs. Across all these areas, automation improves efficiency and reduces human error. 

Conclusion 

Python is an excellent starting point for anyone who wants to begin automating daily tasks. Its simplicity, combined with its powerful ecosystem of libraries, makes it accessible to beginners and useful for professionals. Even basic automation scripts can save hours of work, and as you grow more comfortable, you can automate more complex processes involving data, web interactions, and system management. Learning Python for automation not only makes your work easier but also adds valuable skills for professional growth. 

 

AI and the Future of Financial Services UX
https://blogs.perficient.com/2025/12/01/ai-banking-transparency-genai-financial-ux/
Mon, 01 Dec 2025 18:00:28 +0000

I think about the early ATMs now and then. No one knew the “right” way to use them. I imagine a customer in the 1970s standing there, card in hand, squinting at this unfamiliar machine and hoping it would give something back; trying to decide if it really dispensed cash…or just ate cards for sport. That quick panic when the machine pulled the card in is an early version of the same confusion customers feel today in digital banking.

People were not afraid of machines. They were afraid of not understanding what the machine was doing with their money.

Banks solved it by teaching people how to trust the process. They added clear instructions, trained staff to guide customers, and repeated the same steps until the unfamiliar felt intuitive. 

However, the stakes and complexity are much higher now, and AI for financial product transparency is becoming essential to an optimized banking UX.

Today’s banking customer must navigate automated underwriting, digital identity checks, algorithmic risk models, hybrid blockchain components, and disclosures written in a language most people never use. Meanwhile, the average person is still struggling with basic money concepts.

FINRA reports that only 37% of U.S. adults can answer four out of five financial literacy questions (FINRA Foundation, 2022).

Pew Research finds that only about half of Americans understand key concepts like inflation and interest (Pew Research Center, 2024).

Financial institutions are starting to realize that clarity is not a content task or a customer service perk. It is structural. It affects conversion, compliance, risk, and trust. It shapes the entire digital experience. And AI is accelerating the pressure to treat clarity as infrastructure.

When customers don’t understand, they don’t convert. When they feel unsure, they abandon the flow. 

 

How AI is Improving UX in Banking (And Why Institutions Need it Now)

Financial institutions often assume customers will “figure it out.” They will Google a term, reread a disclosure, or call support if something is unclear. In reality, most customers simply exit the flow.

The CFPB shows that lower financial literacy leads to more mistakes, higher confusion, and weaker decision-making (CFPB, 2019). And when that confusion arises during a digital journey, customers quietly leave without resolving their questions.

This means every abandoned application costs money. Every misinterpreted term creates operational drag. Every unclear disclosure becomes a compliance liability. Institutions consistently point to misunderstanding as a major driver of complaints, errors, and churn (Lusardi et al., 2020).

Sometimes it feels like the industry built the digital bank faster than it built the explanation for it.

Where AI Makes the Difference

Many discussions about AI in financial services focus on automation or chatbots, but the real opportunity lies in real-time clarity. Clarity that improves financial product transparency and streamlines customer experience without creating extra steps.

In-context Explanations That Improve Understanding

Research in educational psychology shows people learn best when information appears the moment they need it. Mayer (2019) demonstrates that in-context explanations significantly boost comprehension. Instead of leaving the app to search unfamiliar terms, customers receive a clear, human explanation on the spot.

Consistency Across Channels

Language in banking is surprisingly inconsistent. Apps, websites, advisors, and support teams all use slightly different terms. Capgemini identifies cross-channel inconsistency as a major cause of digital frustration (Capgemini, 2023). A unified AI knowledge layer solves this by standardizing definitions across the system.

Predictive Clarity Powered by Behavioral Insight

Patterns like hesitation, backtracking, rapid clicking, or form abandonment often signal confusion. Behavioral economists note these patterns can predict drop-off before it happens (Loibl et al., 2021). AI can flag these friction points and help institutions fix them.

24/7 Clarity, Not 9–5 Support

Accenture reports that most digital banking interactions now occur outside of business hours (Accenture, 2023). AI allows institutions to provide accurate, transparent explanations anytime, without relying solely on support teams.

At its core, AI doesn’t simplify financial products. It translates them.

What Strong AI-Powered Customer Experience Looks Like

Onboarding that Explains Itself

  • Mortgage flows with one-sentence escrow definitions.
  • Credit card applications with visual explanations of usage.
  • Hybrid products that show exactly what blockchain is doing behind the scenes.

The CFPB shows that simpler, clearer formats directly improve decision quality (CFPB, 2020).

A Unified Dictionary Across Channels

The Federal Reserve emphasizes the importance of consistent terminology to help consumers make informed decisions (Federal Reserve Board, 2021). Some institutions now maintain a centralized term library that powers their entire ecosystem, creating a cohesive experience instead of fragmented messaging.

Personalization Based on User Behavior

Educational nudges, simplified paths, multilingual explanations. Research shows these interventions boost customer confidence (Kozup & Hogarth, 2008). 

Transparent Explanations for Hybrid or Blockchain-backed Products

Customers adopt new technology faster when they understand the mechanics behind it (University of Cambridge, 2021). AI can make complex automation and decentralized components understandable.

The Urgent Responsibilities That Come With This

 

GenAI can mislead customers without strong data governance and oversight. Poor training data, inconsistent terminology, or unmonitored AI systems create clarity gaps. That’s a problem because those gaps can become compliance issues. The Financial Stability Oversight Council warns that unmanaged AI introduces systemic risk (FSOC, 2023). The CFPB also emphasizes the need for compliant, accurate AI-generated content (CFPB, 2024).

Customers are also increasingly wary of data usage and privacy. Pew Research shows growing fear around how financial institutions use personal data (Pew Research Center, 2023). Trust requires transparency.

Clarity without governance is not clarity. It’s noise.

And institutions cannot afford noise.

What Institutions Should Build Right Now

To make clarity foundational to customer experience, financial institutions need to invest in:

  • Modern data pipelines to improve accuracy
  • Consistent terminology and UX layers across channels
  • Responsible AI frameworks with human oversight
  • Cross-functional collaboration between compliance, design, product, and analytics
  • Scalable architecture for automated and decentralized product components
  • Human-plus-AI support models that enhance, not replace, advisors

When clarity becomes structural, trust becomes scalable.

Why This Moment Matters

I keep coming back to the ATM because it perfectly shows what happens when technology outruns customer understanding. The machine wasn’t the problem. The knowledge gap was. Financial services are reliving that moment today.

Customers cannot trust what they do not understand.

And institutions cannot scale what customers do not trust.

GenAI gives financial organizations a second chance to rebuild the clarity layer the industry has lacked for decades, and not as marketing. Clarity, in this new landscape, truly is infrastructure.

Related Reading

References 

  • Accenture. (2023). Banking top trends 2023. https://www.accenture.com
  • Capgemini. (2023). World retail banking report 2023. https://www.capgemini.com
  • Consumer Financial Protection Bureau. (2019). Financial well-being in America. https://www.consumerfinance.gov
  • Consumer Financial Protection Bureau. (2020). Improving the clarity of mortgage disclosures. https://www.consumerfinance.gov
  • Consumer Financial Protection Bureau. (2024). Supervisory highlights: Issue 30. https://www.consumerfinance.gov
  • Federal Reserve Board. (2021). Consumers and mobile financial services. https://www.federalreserve.gov
  • FINRA Investor Education Foundation. (2022). National financial capability study. https://www.finrafoundation.org
  • Financial Stability Oversight Council. (2023). Annual report. https://home.treasury.gov
  • Kozup, J., & Hogarth, J. (2008). Financial literacy, public policy, and consumers’ self-protection. Journal of Consumer Affairs, 42(2), 263–270.
  • Loibl, C., Grinstein-Weiss, M., & Koeninger, J. (2021). Consumer financial behavior in digital environments. Journal of Economic Psychology, 87, 102438.
  • Lusardi, A., Mitchell, O. S., & Oggero, N. (2020). The changing face of financial literacy. University of Pennsylvania, Wharton School.
  • Mayer, R. (2019). The Cambridge handbook of multimedia learning. Cambridge University Press.
  • Pew Research Center. (2023). Americans and data privacy. https://www.pewresearch.org
  • Pew Research Center. (2024). Americans and financial knowledge. https://www.pewresearch.org
  • University of Cambridge. (2021). Global blockchain benchmarking study. https://www.jbs.cam.ac.uk