Perficient Blogs – Expert Digital Insights (https://blogs.perficient.com/)

Moving to CJA? Sunset Adobe Analytics Without Causing Chaos
https://blogs.perficient.com/2026/01/27/moving-to-cja-sunset-adobe-analytics-without-causing-chaos/ (Tue, 27 Jan 2026)

Adobe Experience Platform (AEP) and Customer Journey Analytics (CJA) continue to emerge as the preferred solutions for organizations seeking a unified, 360‑degree view of customer behavior. For organizations requiring HIPAA compliance, AEP and CJA are a necessity. Many organizations are now weighing whether to retool or retire their legacy Adobe Analytics implementations. The transition from Adobe Analytics to CJA is far more complex than simply disabling an old tool. Teams must plan carefully, perform detailed analysis, and follow a structured approach to ensure that reporting continuity, data integrity, and downstream dependencies remain intact.

Adobe Analytics remains a strong platform for organizations focused exclusively on web and mobile app measurement; however, enterprises that are prioritizing cross‑channel data activation, real‑time profiles, and detailed journey analysis should embrace AEP as the future. Of course, you won’t be maintaining two platforms after building out CJA, so you must plan how to move on from Adobe Analytics.

Decommissioning Options and Key Considerations

You can approach decommissioning Adobe Analytics in several ways. Your options include: 1) disabling the extension; 2) adding an s.abort at the top of the AppMeasurement custom‑code block to prevent data from being sent to Adobe Analytics; 3) deleting all legacy rules; or 4) discarding Adobe Analytics entirely and creating a new Launch property for CJA. Although multiple paths exist, the best approach almost always involves preserving your data‑collection methods and keeping the historical Adobe Analytics data. You have likely collected that data for years, and you want it to remain meaningful after migration. Instead of wiping everything out, you can update Launch by removing rules you no longer need or by eliminating references to Adobe Analytics.

Recognizing the challenges involved in going through the data to make the right decisions during this process, I have developed a specialized tool – Analytics Decommissioner (AD) — designed to support organizations as they decommission Adobe Analytics and transition fully to AEP and CJA. The tool programmatically evaluates Adobe Platform Launch implementations using several Adobe API endpoints, enabling teams to quickly identify dependencies, references, and potential risks associated with disabling Adobe Analytics components.

Why Decommissioning Requires More Than a Simple Shutdown

One of the most significant obstacles in decommissioning Adobe Analytics is identifying where legacy tracking still exists and where removing Adobe Analytics could break the website or cause errors. Over the years, many organizations accumulate layers of custom code, extensions, and tracking logic that reference Adobe Analytics variables—often in places that are not immediately obvious. These references may include calls on the AppMeasurement s object, hard‑coded AppMeasurement logic, or conditional rules created over the course of several years. Without a systematic way to surface dependencies, teams risk breaking critical data flows that feed CJA or AEP datasets.

Missing or outdated documentation makes the problem even harder. Many organizations fail to maintain complete or current solution design references (SDRs), especially for older implementations. As a result, teams rely on tribal knowledge, attempts to recall discussions from years ago, or manual inspection of the collected data to understand how the implementation works. This approach is slow, introduces errors, and cannot support large‑scale environments. When documentation lacks clarity, teams struggle to identify which rules, data elements, or custom scripts still matter and which they can safely remove. Now imagine repeating this process for every one of your Launch properties.

This is where Perficient and the AD tool provide significant value.
The AD tool programmatically scans Launch properties and uncovers dependencies that teams may have forgotten or never documented. A manual analysis might easily overlook these dependencies. AD also pinpoints where custom code still references Adobe Analytics variables, highlights rules that have been modified or disabled since deployment, and surfaces AppMeasurement usage that could inadvertently feed into CJA or AEP data ingestion. This level of visibility is essential for ensuring that the decommissioning process does not disrupt data collection or reporting.

How Analytics Decommissioner (AD) Works

The tool begins by listing all Launch properties across your organization and asking the user to select one. This is necessary because decommissioning must be done on each property individually, which mirrors how Adobe Analytics data collection is configured: one Launch property at a time. Once a property is selected, the tool retrieves all production‑level data elements, rules, and rule components, including their revision histories, and ignores revisions that developers disabled or never published to production. The tool then performs a comprehensive search for AppMeasurement references and Adobe Analytics‑specific code patterns. These findings show teams exactly where legacy tracking persists, what needs to be updated or modified, and which items can be safely removed. If no dependencies exist, AD can disable the rules and create a development library for testing. When AD cannot confirm whether a dependency exists, it reports the rule names and components where potential issues exist and defers to development experts to make the call. The user always makes the final decisions.
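To illustrate the kind of scan AD performs, here is a minimal Python sketch that pulls a property's rules and flags components whose settings contain common AppMeasurement patterns. It is not the AD tool's code; the Reactor (Tags) API endpoints, headers, and response fields shown are assumptions to verify against Adobe's current API documentation, the credentials are placeholders, and pagination is ignored for brevity.

# Minimal sketch of a Launch (Tags) rule scan for AppMeasurement references.
# Endpoints, headers, and response fields are assumptions -- not the AD tool's code.
import re
import requests

REACTOR = "https://reactor.adobe.io"
HEADERS = {
    "Authorization": "Bearer <access_token>",   # placeholder OAuth token
    "x-api-key": "<client_id>",                 # placeholder API key
    "x-gw-ims-org-id": "<ims_org_id>",          # placeholder IMS org
    "Accept": "application/vnd.api+json",
}
# Common Adobe Analytics patterns: s.t()/s.tl() calls, s.abort, eVars, props, events
AA_PATTERN = re.compile(r"\bs\.(t|tl|abort|events|eVar\d+|prop\d+)\b")

def rules_referencing_analytics(property_id: str) -> list[str]:
    """Return names of rules whose components appear to contain Analytics code."""
    flagged = []
    rules = requests.get(
        f"{REACTOR}/properties/{property_id}/rules", headers=HEADERS
    ).json().get("data", [])
    for rule in rules:
        components = requests.get(
            f"{REACTOR}/rules/{rule['id']}/rule_components", headers=HEADERS
        ).json().get("data", [])
        for comp in components:
            settings = comp.get("attributes", {}).get("settings") or ""
            if AA_PATTERN.search(settings):
                flagged.append(rule["attributes"]["name"])
                break
    return flagged

The same pattern extends to data elements and extension configurations; the value is in running it consistently across every property rather than in any one regular expression.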

This tool is especially valuable for large or complex implementations. In one recent engagement, a team used it to scan nearly 100 Launch properties, some of which included more than 300 data elements and 125 active rules. Reviewing that level of complexity manually would have taken weeks, and critical dependencies could still have been missed. Programmatic scanning ensures accuracy, completeness, and efficiency, allowing teams to move forward with confidence.

A Key Component of a Recommended Decommissioning Approach

The AD tool and a comprehensive review are essential parts of a broader, recommended decommissioning framework. A structured approach typically includes:

  • Inventory and Assessment – Identifying all Adobe Analytics dependencies across Launch, custom code, and environments.
  • Mapping to AEP/CJA – Ensuring all required data is flowing into the appropriate schemas and datasets.
  • Gap Analysis – Determining where additional configuration or migration work needs to be done.
  • Remediation and Migration – Updating Launch rules, removing legacy code, and addressing undocumented dependencies.
  • Validation and QA – Confirming that reporting remains accurate in CJA after removal of Launch rules and data elements created for Adobe Analytics.
  • Sunset and Monitoring – Disabling AppMeasurement, removing Adobe Analytics extensions, and monitoring for errors.

Conclusion

Decommissioning Adobe Analytics is a strategic milestone in modernizing the digital data ecosystem, and having the right tools and processes is essential. The Analytics Decommissioner tool allows organizations to confidently transition to AEP and CJA. Executed properly, this approach to migration preserves data quality, reduces operational costs, and strengthens governance. By using the APIs and letting the AD tool handle the heavy lifting, teams ensure that no dependencies are overlooked, enabling a smooth, low‑risk transition with robust customer experience analytics.

Perficient Recognized as a Leader in the Next Wave of Manufacturing Innovation
https://blogs.perficient.com/2026/01/26/perficient-recognized-as-a-leader-in-the-next-wave-of-manufacturing-innovation/ (Mon, 26 Jan 2026)

Manufacturing is undergoing a profound transformation, driven by the convergence of smart digital technologies, connected operations, and customer-centric strategies. Perficient is uniquely positioned to guide this evolution, applying deep industry knowledge and a suite of next-gen solutions, from IoT-powered manufacturing ecosystems to AI-driven supply chain and aftermarket optimization.

An independent industry analysis recently recognized Perficient as a leader in delivering IT consulting and technology services for manufacturers, affirming our ability to deliver substantial, measurable results. This honor validates what we’ve long known: Perficient delivers high-value outcomes by blending deep manufacturing domain expertise with cutting-edge technology. 

“We empower manufacturers to embrace the speed at which digital innovation is happening, accelerating their efforts, streamlining processes, enhancing experiences, and sustaining a competitive advantage via AI.” – Kevin Espinosa, Director of Manufacturing & Strategy, Perficient

Elevating Factory Operations Through Smart Innovation 

Perficient empowers manufacturers to overcome fragmented IT and OT environments, data silos, and legacy infrastructure limitations. By deploying digital twins, AI-enhanced predictive maintenance, and real-time asset monitoring, we enable clients to make smarter, faster decisions on the plant floor. For example, we’ve enabled access to equipment telemetry data and streamlined data ingestion into a top 10 automaker’s future-proof and AI/ML-ready infrastructure to perform analytics and proactively identify issues. Further, we designed a container-based cloud-agnostic solution to support the real-time collection and processing of telemetry data from millions of connected vehicles and enabled scaling on demand while containing costs.

Reimagining Aftermarket and Field Service 

Competitive advantage in manufacturing often hinges on post-sale service. At a Fortune 500 manufacturing conglomerate, Perficient reengineered customer service using AI and machine learning alongside Amazon Web Services. The initiative automated email responses, introduced intelligent chatbots, and gave agent productivity a significant boost. These enhancements slashed service costs by $25 million annually and unlocked an additional $61 million in projected revenue, transforming service into a strategic business driver. In addition, we’ve streamlined dealer service reporting with Microsoft AI and automation at a global industrial machinery manufacturer by enabling technicians to generate service reports from shorthand notes, saving them 1.5 hours on average per service report and reducing warranty claim denials.

Building Extended Ecosystems, Forging Supply Chain Resilience 

Perficient helps manufacturers smooth out extended enterprise complexities, aligning systems like PLM, PIM, OMS, and ERP to create consistent product content, inventory accuracy, and seamless buyer journeys. As featured in a recent industry report, our consultants shed light on real-time content sync, reducing abandoned carts, and improving conversions, essential moves amid growing digital sophistication in B2B and B2C channels. In addition, Perficient recently transformed Roeslein’s supply chain with Oracle Cloud to unite all locations and provide a flexible, scalable foundation for growth.

Industry Recognition That Matters 

Our manufacturing leadership extends beyond project results. This past year, Perficient has been awarded and recognized by several publications, firms, and organizations for expertise and contributions in the industry, namely:  

  • Recognized as a 2025 CRN IoT Innovator for our role in building connected, autonomous, and sensor-driven solutions across industrial environments

Orchestrating the Future of Manufacturing 

Innovation in manufacturing is no longer optional. It’s essential. Perficient’s expertise spans cloud-to-edge infrastructure, integrated digital twins, real-time analytics, and AI-first strategies. Whether driving smart factory initiatives, optimizing aftermarket service, modernizing IT and OT systems, or improving the operation of and experience with connected products and equipment, we empower manufacturers to achieve operational agility, customer-centric experiences, and sustainable growth. 

Explore how we can help your organization lead in connectivity, resilience, and innovation. 

Discover Perficient’s approach to manufacturing transformation. 

 

The Desktop LLM Revolution Left Mobile Behind
https://blogs.perficient.com/2026/01/26/the-desktop-llm-revolution-left-mobile-behind/ (Mon, 26 Jan 2026)

Large Language Models have fundamentally transformed how we work on desktop computers. From simple ChatGPT conversations to sophisticated coding assistants like Claude and Cursor, from image generation to CLI-based workflows—LLMs have become indispensable productivity tools.

[Image: Desktop with multiple windows versus iPhone single-app limitation]
On desktop, LLMs integrate seamlessly into multi-window workflows. On iPhone? Not so much.

On my Mac, invoking Claude is a keyboard shortcut away. I can keep my code editor, browser, and AI assistant all visible simultaneously. The friction between thought and action approaches zero.

But on iPhone, that seamless experience crumbles.

The App-Switching Problem

iOS enforces a fundamental constraint: one app in the foreground at a time. This creates a cascade of friction every time you want to use an LLM:

  1. You’re browsing Twitter and encounter text you want translated
  2. You must leave Twitter (losing your scroll position)
  3. Find and open your LLM app
  4. Wait for it to load
  5. Type or paste your query
  6. Get your answer
  7. Switch back to Twitter
  8. Try to find where you were

This workflow is so cumbersome that many users simply don’t bother. The activation energy required to use an LLM on iPhone often exceeds the perceived benefit.

“Opening an app is the biggest barrier to using LLMs on iPhone.”

Building a System-Level LLM Experience

Rather than waiting for Apple Intelligence to mature, I built my own solution using iOS Shortcuts. The goal: make LLM access feel native to iOS, not bolted-on.

[Image: iOS Shortcuts workflow diagram for LLM integration]
The complete workflow: Action Button → Shortcut → API → Notification → Notes

The Architecture

My system combines three key components:

  • Trigger: iPhone’s Action Button for instant, one-press access
  • Backend: Multiple LLM providers via API calls (Siliconflow’s Qwen, Nvidia’s models, Google’s Gemini Flash)
  • Output: System notifications for quick answers, with automatic saving to Bear for detailed responses
[Image: iPhone Action Button triggering AI assistant]
One press of the Action Button brings AI assistance without leaving your current app.
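Under the hood, the Shortcut's "Get Contents of URL" action simply posts a chat-completion request to an OpenAI-compatible endpoint. The Python sketch below shows the equivalent request; the endpoint URL, model identifier, and API key are placeholder assumptions—substitute whichever provider you configure.

# Python equivalent of the HTTP request the Shortcut sends via "Get Contents of URL".
# The endpoint, model id, and API key are placeholders -- adjust for your provider.
import requests

API_URL = "https://api.siliconflow.cn/v1/chat/completions"  # assumed OpenAI-compatible endpoint
API_KEY = "<your_api_key>"

def quick_answer(question: str) -> str:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "Qwen/Qwen2.5-7B-Instruct",  # illustrative model id
            "messages": [
                {"role": "system", "content": "Answer concisely; the reply must fit in a notification."},
                {"role": "user", "content": question},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

The same request shape covers the translation and voice-todo modes described below; only the system prompt and the output handling change.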

Three Core Functions

I configured three preset modes accessible through the shortcut:

Function       Use Case                            Output
Quick Q&A      General questions, fact-checking    Notification popup
Translation    English ↔ Chinese conversion        Notification + clipboard
Voice Todo     Capture tasks via speech            Formatted list in Bear app

Why This Works

The magic isn’t in the LLM itself—it’s in the integration points:

  • No app switching required: Shortcuts run as an overlay, preserving your current context
  • Sub-second invocation: Action Button is always accessible, even from the lock screen
  • Persistent results: Answers are automatically saved, so you never lose important responses
  • Model flexibility: Using APIs means I can switch providers based on speed, cost, or capability

The Bigger Picture

Apple Intelligence promises to bring system-level AI to iOS, but its rollout has been slow and its capabilities limited. By building with Shortcuts and APIs, I’ve created a more capable system that:

  • Works today, not “sometime next year”
  • Uses state-of-the-art models (not Apple’s limited on-device options)
  • Costs pennies per query (far less than subscription apps)
  • Respects my workflow instead of demanding I adapt to it

Try It Yourself

The iOS Shortcuts app is more powerful than most users realize. Combined with free or low-cost API access from providers like Siliconflow, Groq, or Google AI Studio, you can build your own system-level AI assistant in an afternoon.

The best interface is no interface at all. When AI assistance is a single button press away—without leaving what you’re doing—you’ll actually use it.

Build, Govern, Measure: Agentforce Done Right
https://blogs.perficient.com/2026/01/26/build-govern-measure-agentforce-done-right/ (Mon, 26 Jan 2026)

Part 1 of our Salesforce Outcomes Playbook made the case for measurable value and orchestrated workflows. In this next post, we move from strategy to execution and show how to put Agentforce to work on a real business KPI.

Perficient is recognized in Forrester’s Salesforce Consulting Services Landscape, Q4 2025 for our North America focus and industry depth in Financial Services, Healthcare, and Manufacturing. We bring proven capabilities across Agentforce, Data 360 (Data Cloud), and Industry Clouds to help clients turn trusted data and well designed workflows into outcomes you can verify.

Forrester asked each provider included in the Landscape to select the top business scenarios for which clients choose them, and from those responses determined the extended business scenarios that highlight differentiation among the providers. Perficient is shown in the report as having selected Agentforce, Data 360 (Data Cloud), and Industry Clouds, among those extended business scenarios, as top reasons clients work with us. These proven capabilities help clients achieve measurable outcomes from their Salesforce investments.

Here, we walk through a practical operating model to launch one production agent, govern by design, and measure lift with real users. The goal is confidence without complexity: a visible improvement in a specific KPI and a repeatable pattern you can scale as results compound.

What Success Looks Like

  • Build: A visible lift in one KPI, such as reduced time to resolution in Service or improved conversion in Sales.
  • Govern: Role‑based access with data minimization, accuracy checks, and audit trails in place.
  • Measure: Observability that traces agent decisions and reports performance, adoption, and error rates.
  • Scale: A prioritized backlog and a scale plan that extends the win without unnecessary build.

The Operating Model: Build, Govern, Measure

1) Build one agent for one KPI

Choose a single use case with a business‑visible metric. Ship a working slice and measure against an agreed baseline. Examples:

  • Agent‑assisted case triage that reduces average handle time in Service
  • Quote‑to‑order agent in Agentforce Revenue Management (formerly Revenue Cloud) that shrinks cycle time and errors
  • Renewal‑risk agent that flags at‑risk accounts and improves retention
  • Field service parts availability agent that improves first‑time fix rate

Ground the agent in trustworthy data. Unify records, events, and identities so decisions are consistent and auditable. Use Data 360 foundations to give agents clean context across teams and channels.

2) Govern by Design

Put guardrails in at the start. Define who can access what, how accuracy is checked, and where audit trails are stored.

  • Role‑based access and data minimization
  • Accuracy checks and human‑in‑the‑loop for high‑impact actions
  • Prompt and policy versioning with change tracking
  • Audit trails that capture inputs, decisions, and outcomes
  • Backout controls with pause and rollback procedures

Governance belongs inside your delivery lifecycle, not as an afterthought.

3) Measure and iterate

Use observability to trace decisions, monitor performance, and tune safely.

  • Baseline the KPI before launch and track lift after launch
  • Monitor adoption, satisfaction, and error rates
  • Identify drift, hallucination, or policy violations quickly
  • Iterate prompts, policies, and integrations based on data

Expand capabilities only once the first KPI moves. This keeps momentum high, risk low, and aligns investment to tangible results.

Why This Matters

Most teams already believe in AI. The question is how to make it work here, safely and repeatably. Salesforce continues to expand what you can do with AI, data, and integration. When foundations are solid, those capabilities turn into outcomes you can measure. Agentforce gives you practical building blocks for trusted AI at scale. You get observability to understand how agents perform, governance controls to protect data and accuracy, and low code configuration so business and IT can move together faster.

“Enterprises often underestimate the need for structured enablement, adoption planning, and sustained evolution….” – The Salesforce Consulting Services Landscape, Q4 2025

Partners help translate powerful platform features into everyday outcomes. That is how you reduce risk and accelerate value.

Orchestrate The Workflow, Not Just the Feature

Real value shows up when workflows span systems. Map the end‑to‑end process across Salesforce and adjacent platforms. Eliminate the handoffs that slow customers down. Use reference architectures and integration patterns so the process is portable and resilient. Agentforce is most effective when agents can act across the flow rather than bolt onto a single step.

Ready to translate strategy into a working Agentforce use case that moves a KPI?

Book an Agentforce workshop. We will help you choose one KPI, define data sources, set guardrails and observability, and stand up a working slice you can scale.

Forrester does not endorse any company, product, brand, or service included in its research publications and does not advise any person to select the products or services of any company or brand based on the ratings included in such publications. Information is based on the best available resources. Opinions reflect judgment at the time and are subject to change. For more information, read about Forrester’s objectivity here.

Perficient included in IDC ServiceScape U.S. Midmarket Salesforce Implementation Services 2025–2026
https://blogs.perficient.com/2026/01/26/perficient-included-in-idc-servicescape-u-s-midmarket-salesforce-implementation-services-2025-2026/ (Mon, 26 Jan 2026)

Perficient is proud to be included in the IDC ServiceScape: U.S. Midmarket Salesforce Implementation Services 2025–2026 (Doc# US54222726, January 2026). Led by Jason Bremner, Research Vice President, IT Consulting and Systems Integration Services at IDC, this IDC ServiceScape provides buyers with a structured view of Salesforce services capabilities across the industry.

Why we believe this matters for Salesforce leaders

Organizations are asking for measurable outcomes on Salesforce, not bigger projects. The questions have shifted:

  • How do we modernize without disrupting what works?
  • How do we orchestrate workflows across Salesforce and adjacent platforms for end-to-end impact?
  • How do we adopt AI with confidence so accuracy, access, and auditability are protected?
  • How do we fund what works based on KPI movement rather than effort?

How we help on Salesforce

  • Sales Cloud and Revenue Cloud  – Opportunity to quote, quote to order, renewals, and pricing accuracy
  • Service Cloud and Field Service – Case triage, knowledge curation, parts availability, and first time fix
  • Data Cloud – Unified customer profiles, identity resolution, and event driven context
  • Agentforce – One agent, one KPI patterns with governance and observability by design
  • Integration – Reusable API patterns for portable, resilient end to end workflows
  • Org consolidation and tech debt cleanup – License alignment, reduction of unsupported customizations, native first design

“Our clients ask for clarity, speed, and confidence. We align to a single KPI, orchestrate the workflow, and build in governance so value is visible and repeatable.”
Hunter Austin, Managing Director, Perficient Salesforce Practice

Getting started

  • Explore Perficient’s Salesforce services. Perficient is a trusted Salesforce partner helping enterprises lead AI-powered transformation. We specialize in CRM, data, and personalization—using real-time intelligence to deliver relevant experiences with Agentforce, Data 360, and Agentforce Marketing.

 

Base Is Loaded: Bridging OLTP and OLAP with Lakebase and PySpark
https://blogs.perficient.com/2026/01/25/bridging-oltp-olap-databricks-lakebase-python/ (Sun, 25 Jan 2026)

For years, the Lakehouse paradigm has successfully collapsed the wall between Data Warehouses and Data Lakes. We have unified streaming and batch, structured and unstructured data, all under one roof. Yet we often find ourselves hitting a familiar, frustrating wall: the gap between the analytical plane (OLAP) and the transactional plane (OLTP). In my latest project, the client wanted Databricks to serve as both the analytics platform and the backend powering their front-end React web app. There is a sample Databricks App that uses NodeJS for the front end and FastAPI for a Python backend that connects to Lakebase; the ToDo sample performs CRUD operations out of the box. I opened a new Databricks Query object, connected to the Lakebase compute, and verified the data. It’s hard to overstate how cool this seemed.

The next logical step was to build a declarative pipeline that would flow the data Lakebase received from POST, PUT, and GET requests through the Bronze layer for data quality checks, into Silver for SCD2-style history, and then into Gold, where it would be available to end users through AI/BI Genie and Power BI reports as well as serving as the source for a sync table back to Lakebase to serve GET requests. I created a new declarative pipeline in a source-controlled asset bundle and started building. Then I stopped building. That’s not supported. You actually need to communicate with Lakebase from a notebook using the SDK. A newer SDK than Serverless provides, no less.

A couple of caveats. At the time of this writing, I’m using Azure Databricks, so I only have access to Lakebase Provisioned and not Lakebase Autoscaling. And it’s still in Public Preview; maybe GA is different. Or, not. Regardless, I have to solve the problem on my desk today, and simply having the database isn’t enough. We need a robust way to interact with it programmatically from our notebooks and pipelines.

In this post, I want to walk through a Python architectural pattern I’ve developed—BaseIsLoaded. This includes pipeline configurations, usage patterns, and a PySpark class: LakebaseClient. This class serves two critical functions: it acts as a CRUD wrapper for notebook-based application logic, and, more importantly, it functions as a bridge that turns a standard Postgres table into a streaming source for declarative pipelines.

The Connectivity Challenge: Identity-Native Auth

The first hurdle in any database integration is authentication. In the enterprise, we are moving away from hardcoded credentials and .pgpass files. We want identity-native authentication. The LakebaseClient handles this by leveraging the databricks.sdk. Instead of managing static secrets, the class generates short-lived tokens on the fly.

Look at the _ensure_connection_info method in the provided code snippet:

def _ensure_connection_info(self, spark: SparkSession, value: Any):
    # Populate ``self._conn_info`` with the Lakebase endpoint and a temporary token
    if self._conn_info is None:
        w = WorkspaceClient()
        instance_name = "my_lakebase"  # Example instance
        instance = w.database.get_database_instance(name=instance_name)
        cred = w.database.generate_database_credential(
            request_id=str(uuid.uuid4()), instance_names=[instance_name]
        )
        self._conn_info = {
            "host": instance.read_write_dns,
            "dbname": "databricks_postgres",
            "password": cred.token,  # Ephemeral token
            # ...
        }

This encapsulates the complexity of finding the endpoint and authenticating and allows us to enforce a “zero-trust” model within our code. The notebook or job running this code inherits the permissions of the service principal or user executing it, requesting a token valid only for that session.

Operationalizing DDL: Notebooks as Migration Scripts

One of the strongest use cases for Lakebase is managing application state or configuration for data products. However, managing the schema of a Postgres database usually requires an external migration tool (like Flyway or Alembic).

To keep the development lifecycle contained within Databricks, I extended the class to handle safe DDL execution. The class includes methods like create_table, alter_table_add_column, and create_index.

These methods use psycopg2.sql to handle identifier quoting safely. In a multi-tenant environment where table names might be dynamically generated based on business units or environments, either by human or agentic developers, SQL injection via table names is a real risk.

def create_table(self, schema: str, table: str, columns: List[str]):
    ddl = psql.SQL("CREATE TABLE IF NOT EXISTS {}.{} ( {} )").format(
        psql.Identifier(schema),
        psql.Identifier(table),
        psql.SQL(", ").join(psql.SQL(col) for col in columns)
    )
    self.execute_ddl(ddl.as_string(self._get_connection()))

This allows a Databricks Notebook to serve as an idempotent deployment script. You can define your schema in code and execute it as part of a “Setup” task in a Databricks Workflow, ensuring the OLTP layer exists before the ETL pipeline attempts to read from or write to it.
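As a usage sketch (assuming the LakebaseClient class and the constructor arguments shown later in this post), a setup-task cell might look like the following; the table and column definitions are examples only:

# Example setup-task cell: create the OLTP table if it doesn't exist yet.
# Table name, columns, and constructor arguments are illustrative.
lb = LakebaseClient(
    table_name="public.sales_targets",
    checkpoint_column="updated_at",
    checkpoint_store="system.control_plane.ingestion_offsets",
)

lb.create_table(
    schema="public",
    table="sales_targets",
    columns=[
        "id BIGSERIAL PRIMARY KEY",
        "region TEXT NOT NULL",
        "target_amount NUMERIC(12,2)",
        "updated_at TIMESTAMPTZ NOT NULL DEFAULT now()",
    ],
)

Because the DDL uses CREATE TABLE IF NOT EXISTS, re-running the setup task is safe, which is exactly what you want from a workflow step that runs before every deployment.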

The Core Innovation: Turning Postgres into a Micro-Batch Stream

The most significant value of this architecture is the load_new_data method.

Standard JDBC connections in Spark are designed for throughput, not politeness. They default to reading the entire table or, if you attempt to parallelize reads via partitioning, they spawn multiple executors that can quickly exhaust the connection limit of Lakebase. By contrast, LakebaseClient runs intentionally on the driver using a single connection.

This solves a common dilemma we run into with our enterprise clients: if you have a transactional table (e.g., an orders table or a pipeline_audit log) in Lakebase and want to ingest it into Delta Lake incrementally, you usually have to introduce Kafka, Debezium, or complex CDC tools. If you have worked for a large, regulated company, you can appreciate the value of not asking for things.

Instead, LakebaseClient implements a lightweight “Client-Side CDC” pattern. It relies on a monotonic column (a checkpoint_column, such as an auto-incrementing ID or a modification_timestamp) to fetch only what has changed since the last run.

1. State Management with Delta

The challenge with custom polling logic is: where do you store the offset? If the cluster restarts, how does the reader know where it left off?

I solved this by using Delta Lake itself as the state store for the Postgres reader. The _persist_checkpoint and _load_persisted_checkpoint methods use a small Delta table to track the last_checkpoint for every source.

def _persist_checkpoint(self, spark: SparkSession, value: Any):
    # ... logic to create table if not exists ...
    # Upsert (merge) last checkpoint into a Delta table
    spark.sql(f"""
        MERGE INTO {self.checkpoint_store} t
        USING _cp_upsert_ s
        ON t.source_id = s.source_id
        WHEN MATCHED THEN UPDATE SET t.last_checkpoint = s.last_checkpoint
        WHEN NOT MATCHED THEN INSERT ...
    """)

This creates a robust cycle: The pipeline reads from Lakebase, processes the data, and commits the offset to Delta. This ensures exactly-once processing semantics (conceptually) for your custom ingestion logic.
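The read side is symmetrical. A simplified, standalone version of loading the persisted offset might look like the sketch below; the table and column names follow the snippet above, and this is an illustration rather than the class's actual method.

# Illustrative counterpart to _persist_checkpoint: fetch the last committed offset
# for a given source from the Delta-backed checkpoint store.
from typing import Any, Optional
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def load_persisted_checkpoint(
    spark: SparkSession, checkpoint_store: str, source_id: str
) -> Optional[Any]:
    rows = (
        spark.table(checkpoint_store)
        .where(F.col("source_id") == source_id)
        .select("last_checkpoint")
        .limit(1)
        .collect()
    )
    return rows[0]["last_checkpoint"] if rows else None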

2. The Micro-Batch Logic

The load_new_data method brings it all together. It creates a psycopg2 cursor, queries only the rows where checkpoint_col > last_checkpoint, limits the fetch size (to prevent OOM errors on the driver), and converts the result into a Spark DataFrame.

    if self.last_checkpoint is not None:
        query = psql.SQL(
            "SELECT * FROM {} WHERE {} > %s ORDER BY {} ASC{}"
        ).format(...)
        params = (self.last_checkpoint,)

By enforcing an ORDER BY on the monotonic column, we ensure that if we crash mid-batch, we simply resume from the last successfully processed ID.
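To make the pattern concrete outside the class, here is a stripped-down, standalone version of the same idea (not the LakebaseClient internals): read the bounded batch past the last offset, build a Spark DataFrame, and return the new checkpoint. For brevity it interpolates identifiers directly, whereas the real class uses psycopg2.sql.Identifier, and it lets Spark infer column types.

# Standalone illustration of query-based CDC against a Postgres table.
# Identifier quoting and type mapping are simplified relative to LakebaseClient.
from typing import Any, Optional, Tuple
import psycopg2
from pyspark.sql import DataFrame, SparkSession

def fetch_incremental(
    spark: SparkSession,
    conn: "psycopg2.extensions.connection",
    table: str,
    checkpoint_col: str,
    last_checkpoint: Optional[Any],
    limit: int = 10_000,
) -> Tuple[Optional[DataFrame], Optional[Any]]:
    where = f"WHERE {checkpoint_col} > %s" if last_checkpoint is not None else ""
    params = (last_checkpoint,) if last_checkpoint is not None else ()
    with conn.cursor() as cur:
        cur.execute(
            f"SELECT * FROM {table} {where} ORDER BY {checkpoint_col} ASC LIMIT {limit}",
            params,
        )
        rows = cur.fetchall()
        columns = [desc[0] for desc in cur.description]
    if not rows:
        return None, last_checkpoint  # nothing new; keep the previous offset
    df = spark.createDataFrame(rows, schema=columns)  # column types inferred from the data
    new_checkpoint = rows[-1][columns.index(checkpoint_col)]  # rows are ordered ascending
    return df, new_checkpoint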

Integration with Declarative Pipelines

So, how do we use this in a real-world enterprise scenario?

Imagine you have a “Control Plane” app running on a low-cost cluster that allows business users to update “Sales Targets” via a Streamlit app (backed by Lakebase). You want these targets to immediately impact your “Sales Reporting” Delta Live Table (DLT) pipeline.

Instead of a full refresh of the sales_targets table every hour, you can run a continuous or scheduled job using LakebaseClient.

The Workflow:

  1. Instantiation:
    lb_source = LakebaseClient(
        table_name="public.sales_targets",
        checkpoint_column="updated_at",
        checkpoint_store="system.control_plane.ingestion_offsets"
    )
    
  2. Ingestion Loop: You can wrap load_new_data in a simple loop or a scheduled task.
    # Fetch micro-batch
    df_new_targets = lb_source.load_new_data()
    
    if not df_new_targets.isEmpty():
        # Append to Bronze Delta Table
        df_new_targets.write.format("delta").mode("append").saveAsTable("bronze.sales_targets")
    
  3. Downstream DLT: Your main ETL pipeline simply reads from bronze.sales_targets as a standard streaming source. The LakebaseClient acts as the connector, effectively “streaming” changes from the OLTP layer into the Bronze layer.
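For example, the downstream declarative table could be as small as the sketch below. Table names follow the example above, and in a declarative pipeline notebook the spark session is provided for you; treat this as an illustration rather than a finished pipeline.

# Minimal declarative pipeline table that treats the Bronze table as a streaming source.
# Names are illustrative; adjust catalog and schema to your environment.
import dlt
from pyspark.sql import functions as F

@dlt.table(name="silver_sales_targets", comment="Sales targets ingested from Lakebase")
def silver_sales_targets():
    return (
        spark.readStream.table("bronze.sales_targets")
        .withColumn("_ingested_at", F.current_timestamp())
    )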

Architectural Considerations and Limitations

While this class provides a powerful bridge, as architects, we must recognize the boundaries.

  1. It is not a Debezium Replacement: This approach relies on “Query-based CDC.” It cannot capture hard deletes (unless you use soft-delete flags), and it relies on the checkpoint_column being strictly monotonic. If your application inserts data with past timestamps, this reader will miss them. My first use case was pretty simple: just a single API client performing CRUD operations. For true transaction log mining, you still need logical replication slots (which Lakebase supports, but requires a more complex setup).
  2. Schema Inference: The _postgres_type_to_spark method in the code provides a conservative mapping. Postgres has rich types (like JSONB, HSTORE, and custom enums). This class defaults unknown types to StringType. This is intentional design—it shifts the schema validation burden to the Bronze-to-Silver transformation in Delta, preventing the ingestion job from failing due to exotic Postgres types. I can see adding support for JSONB before this project is over, though.
  3. Throughput: This runs on the driver or a single executor node (depending on how you parallelize calls). It is designed for “Control Plane” data—thousands of rows per minute, not millions of rows per second. Do not use this to replicate a high-volume trading ledger; use standard ingestion tools for that.

Conclusion

Lakebase fills the critical OLTP void in the Databricks ecosystem. However, a database is isolated until it is integrated. The BaseIsLoaded pattern demonstrated here offers a lightweight, Pythonic way to knit this transactional layer into your analytical backbone.

By abstracting authentication, safely handling DDL, and implementing stateful micro-batching via Delta-backed checkpoints, we can build data applications that are robust, secure, and entirely contained within the Databricks control plane. It allows us to stop treating application state as an “external problem” and start treating it as a native part of the Lakehouse architecture. Because, at the end of the day, adding Apps plus Lakebase to your toolbelt is too much fun to let a little glue code stand in your way.

Perficient is a Databricks Elite Partner. Contact us to learn more about how to empower your teams with the right tools, processes, and training to unlock your data’s full potential across your enterprise.

Perficient Included in the IDC Market Glance: Healthcare Ecosystem, 4Q25
https://blogs.perficient.com/2026/01/22/perficient-included-in-idc-market-glance-healthcare-ecosystem/ (Thu, 22 Jan 2026)

Healthcare organizations are managing many challenges at once: consumers expect digital experiences that feel as personalized as other industries, fragmented data in silos slows strategic decision-making, and AI and advanced technologies must integrate seamlessly into existing care models. 

Meeting these demands requires more than incremental change—it calls for digital solutions that unify access to care, trusted data, and advanced technologies to deliver transformative outcomes and operational efficiency. 

IDC Market Glance: Healthcare Ecosystem, 4Q25

We’re proud to share that Perficient has been included in the “IT Services” category in the IDC Market Glance: Healthcare Ecosystem, 4Q25 report (Doc# US54010025, December 2025). This segment includes systems integration organizations providing advisory, consulting, development, and implementation services, as well as products or solutions. 

We believe this inclusion reinforces our expertise in leveraging AI, data, and technology to deliver intelligent tools and intuitive, compliant care experiences that drive measurable value across the health journey.  

We believe this commitment aligns with critical shifts IDC Market Glance highlights in its latest report, which emphasizes how healthcare organizations are activating advanced technology and AI. IDC Market Glance shares, “Health systems and payers are moving more revenue into value-based care and capitated risk, pushing tech buyers to favor solutions that improve quality metrics, lower total cost of care, and help hit incentive thresholds.” 

As the industry evolves, IDC predicts: “Technology buyers will likely favor vendors that align revenue models to customer risk arrangements, plug seamlessly into large platforms, and demonstrate human-centered design that supports clinicians rather than replacing them.” 

To us, this inclusion validates our ability to help healthcare organizations maximize technology and AI to drive transformative outcomes, power enterprise agility, and create seamless, consumer-centric experiences that build lasting trust.

Intelligent Solutions for Transformative Outcomes 

These shifts are actively transforming the healthcare ecosystem, challenging leaders to rethink how they deliver care and create value. Our partnerships with leading organizations show what’s possible: moving AI from pilot to production, building interoperable data foundations that accelerate insights, and designing human-centered solutions that empower care teams and improve the cost, quality, and equity of care. 

Easing Access to Care With a Commerce-Like Experience 

We helped Rochester Regional Health reimagine its digital front door to triage like a clinician, personalize like a concierge, and convert like a commerce platform—creating a seamless experience that improves access, trust, and outcomes. The mobile-first redesign introduced smart search, dynamic filters, and real-time booking, driving a 26% increase in appointment scheduling and saving $79K+ monthly in call center costs. As a result, this transformative work earned three industry awards, recognizing the solution’s innovation in accessibility, engagement, and measurable impact on patient care.

Consumers expect frictionless access to care, personalized experiences, and real-time engagement. Our recent Access to Care Report reveals more than 45% of consumers aged 18–64 have used digital-first care instead of their regular provider—and 92% of them believe the quality is equal or better. To deliver on consumers’ expectations, leaders need a unified digital strategy that connects systems, streamlines workflows, and gives consumers simple, reliable ways to find and schedule care.

Explore how our Access to Care research continues to earn industry awards or learn more about our strategic position on find care experiences.

Empowering Care Ecosystems Through Interoperable Data Foundations 

We helped a healthcare insurance leader build a single, interoperable source of truth that turns healthcare data into a true strategic asset. Our FHIR-enabled solution ingests, normalizes, and validates data from internal and external systems and shares a consolidated, reliable dataset through API connectors, gateways, and extracts, grounded in data governance. Ultimately, this interoperable data foundation accelerates time to market, minimizes downtime through EDI and API modernization, and ensures the right data reaches the right hands at the right time to power consumer-grade experiences, while confidently meeting interoperability standards.

Discover our platform modernization and data management capabilities.  

Accelerating Member Support With Human-Centered GenAI Innovation 

We helped a leading Blue Cross Blue Shield health insurer transform CSR support by deploying a natural language Generative AI benefits assistant powered by AWS’s AI foundation models and APIs. The intelligent assistant mines a library of ingested documents to deliver tailored, member-specific answers in real time, eliminating cumbersome manual processes and PDF downloads that previously slowed resolution times. Beyond faster answers, this human-centered solution accelerates benefits education, equips agents to provide relevant information with greater speed and accuracy, and demonstrates how generative AI can move from pilots into core infrastructure to support staff rather than replace them.

Read more about our AI expertise or explore our human-centered design services. 

Build Your Scalable, Data-Driven Future 

From insight to impact, our healthcare expertise  equips leaders to modernize, personalize, and scale care. We drive resilient, AI-powered transformation to shape the experiences and engagement of healthcare consumers, streamline operations, and improve the cost, quality, and equity of care.

We have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S., and Modern Healthcare consistently ranks us as one of the largest healthcare consulting firms.

Our strategic partnerships with industry-leading technology innovators—including AWS, Microsoft, Salesforce, Adobe, and  more—accelerate healthcare organizations’ ability to modernize infrastructure, integrate data, and deliver intelligent experiences. Together, we shatter boundaries so you have the AI-native solutions you need to boldly advance business.

Ready to Turn Fragmentation Into Strategic Advantage? 

We’re here to help you move beyond disconnected systems and toward a unified, data-driven future—one that delivers better experiences for patients, caregivers, and communities. Let’s connect  and explore how you can lead with empathy, intelligence, and impact. 

Build a Custom Accordion Component in SPFx Using React – SharePoint
https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/ (Thu, 22 Jan 2026)

When building modern SharePoint Framework (SPFx) solutions, reusable UI components play a crucial role in keeping your code clean, scalable, and maintainable. In particular, interactive components help improve the user experience without cluttering the interface.

Among these components, the Accordion is a commonly used UI element. It allows users to expand and collapse sections, making it easier to display large amounts of information in a compact and organized layout. In this blog, we’ll walk through how to create a custom accordion component in SPFx using React.


Create the Accordion Wrapper Component

To begin with, we’ll create a wrapper component that acts as a container for multiple accordion items. At a high level, this component’s responsibility is intentionally simple: it renders child accordion items while keeping styling and layout consistent across the entire accordion. This approach allows individual accordion items to remain focused on their own behavior, while the wrapper handles structure and reusability.

Accordion.tsx

import * as React from 'react';
import styles from './Accordion.module.scss';
import classNames from 'classnames';
import { IAccordionItemProps } from './subcomponents/AccordionItem';

import { ReactElement } from 'react';

export interface IAccordionProps {
  children?:
    | ReactElement<IAccordionItemProps>
    | ReactElement<IAccordionItemProps>[];
  className?: string;
}


const Accordion: React.FunctionComponent<
  React.PropsWithChildren<IAccordionProps>
> = (props) => {
  const { children, className } = props;
  return (
    <div className={classNames(styles.accordionSubcomponent, className)}>
      {children}
    </div>
  );
};

export default Accordion;

Styling with SCSS Modules

Next, let’s focus on styling. SPFx supports SCSS modules, which are ideal for avoiding global CSS conflicts and keeping styles scoped to individual components. Let’s look at the styling for the accordion and its items.

Accordion.module.scss

.accordionSubcomponent {
    margin-bottom: 12px;
    .accordionTitleRow {
        display: flex;
        flex-direction: row;
        align-items: center;
        padding: 5px;
        font-size: 18px;
        font-weight: 600;
        cursor: pointer;
        -webkit-touch-callout: none;
        -webkit-user-select: none;
        -khtml-user-select: none;
        -moz-user-select: none;
        -ms-user-select: none;
        user-select: none;
        border-bottom: 1px solid;
        border-color: "[theme: neutralQuaternaryAlt]";
        background: "[theme: neutralLighter]";
    }
    .accordionTitleRow:hover {
        opacity: .8;
    }
    .accordionIconCol {
        padding: 0px 5px;
    }
    .accordionHeaderCol {
        display: inline-block;
        width: 100%;
    }
    .iconExpandCollapse {
        margin-top: -4px;
        font-weight: 600;
        vertical-align: middle;
    }
    .accordionContent {
        margin-left: 12px;
        display: grid;
        grid-template-rows: 0fr;
        overflow: hidden;
        transition: grid-template-rows 200ms;
        &.expanded {
          grid-template-rows: 1fr;
        }
        .expandableContent {
          min-height: 0;
        }
    }
}

Styling Highlights

  • Grid‑based animation for expand/collapse
  • SharePoint theme tokens
  • Hover effects for better UX

Creating Accordion Item Component

Each expandable section is managed by AccordionItem.tsx.

import * as React from 'react';
import styles from '../Accordion.module.scss';
import classNames from 'classnames';
import { Icon, Stack } from '@fluentui/react';
import { useState } from 'react';


export interface IAccordionItemProps {
  iconCollapsed?: string;
  iconExpanded?: string;
  headerText?: string;
  headerClassName?: string;
  bodyClassName?: string;
  isExpandedByDefault?: boolean;
}
const AccordionItem: React.FunctionComponent<React.PropsWithChildren<IAccordionItemProps>> = (props: React.PropsWithChildren<IAccordionItemProps>) => {
  const {
    iconCollapsed,
    iconExpanded,
    headerText,
    headerClassName,
    bodyClassName,
    isExpandedByDefault,
    children
  } = props;
  const [isExpanded, setIsExpanded] = useState<boolean>(!!isExpandedByDefault);
  const _toggleAccordion = (): void => {
    setIsExpanded((prevIsExpanded) => !prevIsExpanded);
  }
  return (
    <Stack>
    <div className={styles.accordionTitleRow} onClick={_toggleAccordion}>
        <div className={styles.accordionIconCol}>
            <Icon
                iconName={isExpanded ? iconExpanded : iconCollapsed}
                className={styles.iconExpandCollapse}
            />
        </div>
        <div className={classNames(styles.accordionHeaderCol, headerClassName)}>
            {headerText}
        </div>
    </div>
    <div className={classNames(styles.accordionContent, bodyClassName, {[styles.expanded]: isExpanded})}>
      <div className={styles.expandableContent}>
        {children}
      </div>
    </div>
    </Stack>
  )
}
AccordionItem.defaultProps = {
  iconExpanded: 'ChevronDown',
  iconCollapsed: 'ChevronUp'
};
export default AccordionItem;

Example Usage in SPFx Web Part

<Accordion>
  <AccordionItem headerText="What is SPFx?">
    <p>SPFx is a development model for SharePoint customizations.</p>

  </AccordionItem>

  <AccordionItem
    headerText="Why use custom controls?"
    isExpandedByDefault={true}
  >
    <p>Custom controls improve reusability and UI consistency.</p>
  </AccordionItem>
</Accordion>


Conclusion

By building a custom accordion component in SPFx using React, you gain:

  • Full control over UI behavior
  • Lightweight and reusable code
  • Native SharePoint theming

This pattern is perfect for:

  • FAQ sections
  • Configuration panels
  • Dashboard summaries
Perficient Drives Agentic Automation Solutions with SS&C Blue Prism Partner Certification
https://blogs.perficient.com/2026/01/21/perficient-drives-agentic-automation-solutions-with-ssc-blue-prism-partner-certification/ (Wed, 21 Jan 2026)

We’re excited to announce that Perficient has officially attained SS&C Blue Prism Implementation Partner Certification at the Silver level. As we begin 2026, this achievement reflects our commitment to delivering world-class intelligent automation solutions and driving measurable value for our clients. 

What This Certification Means 

The SS&C Blue Prism Silver Implementation Partner Certification is a hallmark of quality, expertise, and consistency. It recognizes partners who meet rigorous standards across personnel, support, and delivery requirements. By earning this certification, Perficient has demonstrated its ability to implement intelligent RPA solutions that set a benchmark for customer success. 

About the SS&C Partner Program 

The SS&C Partner Program is designed to give customers access to the best partners and technology in the rapidly evolving world of intelligent automation and AI. As part of this program, Perficient joins a global ecosystem focused on helping businesses: 

  • Deliver end-to-end transformation through full-stack automation. 
  • Implement strategic governance tools for deployment success. 
  • Expand into high-growth markets where demand for automation is accelerating. 

This recognition positions Perficient at the forefront of intelligent automation, enabling us to help clients streamline processes, reduce complexity, and unlock new efficiencies. 

A Testament to Teamwork and Vision 

This achievement underscores Perficient’s commitment to excellence and innovation in intelligent automation. It reflects not only the technical expertise required to meet SS&C Blue Prism’s rigorous standards but also our strategic focus on helping clients accelerate transformation.  

“This milestone reflects the hard work and dedication of our team,” said Mwandama Mutanuka, Vice President of AI Platforms. “We see this partnership as a key in our AI Platforms go-to-market this year.” 

Driving Intelligent Automation Forward 

As a Silver Implementation Partner, Perficient is proud to be part of SS&C Blue Prism’s 5-star rated Partner Program. This recognition strengthens our ability to help organizations become more intelligently connected through agentic automation, enabling businesses to scale responsibly and deliver meaningful outcomes. 

Learn More 

Explore how Perficient’s AI Automation expertise can help your organization embrace next-generation automation solutions. Visit https://www.perficient.com/contact to start your transformation journey. 

OmniStudio Expression Set Action – A Beginner‑Friendly Guide
https://blogs.perficient.com/2026/01/21/omnistudio-expression-set-action-a-beginner-friendly-guide/ (Wed, 21 Jan 2026)

OmniStudio Expression Set Action is a powerful feature in Salesforce Industries. It lets you perform calculations and apply rule-based decisions within guided processes such as OmniScripts and Integration Procedures. Instead of writing rules in many places, you can define your business rules once in an Expression Set and use them wherever you need them. This improves consistency, reduces errors, and simplifies maintenance.

What Is an Expression Set Action?

An Expression Set Action acts as a bridge between:

  • OmniScripts / Integration Procedures, and
  • Expression Sets, which are part of the Business Rules Engine (BRE)

In simple terms:

  • Your OmniScript or Integration Procedure sends inputs (like OrderValue or DeliveryType).
  • The Expression Set processes this data using calculations, conditions, or lookups.
  • The result is returned as a structured JSON output, which you can display or use for further logic.

What Can Expression Sets Do?

Expression Sets are Designed to Handle:

  • Mathematical calculations
  • Conditional logic (if/else situations)
  • Lookups using decision matrices
  • Data transformations

Common Real‑World Use Cases

  • Calculating shipping or delivery charges.
  • Determining customer eligibility.
  • Applying discounts or fees.
  • Computing taxes or surcharges

Because Expression Sets work with JSON, they are lightweight, fast, and ideal for complex rule processing.

Creating an Expression Set – Step by Step

Step 1: Navigate to Expression Sets

  1. Go to Salesforce Setup
  2. Search for Expression Sets (under OmniStudio / Industries features)
  3. Click New

Step 2: Basic Setup

  • Name: Example – ShippingChargesExp
  • Usage Type: Select Default
  • Save the record

This automatically creates Version 1 of the Expression Set.

Building Logic Using Expression Set Builder

After saving, open the Expression Set Builder, which provides a visual canvas for designing logic.

Step 3: Define Variables (Resource Manager)

Variables represent the data your Expression Set uses and produces.

Example Variables:

  • DeliveryType – Input (e.g., Standard or Express)
  • OrderValue – Input (order amount)
  • ExpectedDeliveryCharges – Intermediate result
  • TotalCharges – Final output

Each Variable Should Have:

  • A clear name
  • Data type (number, text, boolean, etc.)
  • Input or Output configuration
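
For reference, the example variables can be summarized as follows. This is only an illustrative summary of names, data types, and roles, not an export of the platform’s metadata:

      [
        { "name": "DeliveryType",            "type": "Text",   "role": "Input" },
        { "name": "OrderValue",              "type": "Number", "role": "Input" },
        { "name": "ExpectedDeliveryCharges", "type": "Number", "role": "Intermediate" },
        { "name": "TotalCharges",            "type": "Number", "role": "Output" }
      ]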

Step 4: Use Decision Matrices (Optional but Powerful)

If your charges depend on predefined rules keyed on an input such as DeliveryType, you can use a Decision Matrix.

  1. Drag the Lookup Table element onto the canvas
  2. Associate it with an existing Decision Matrix, such as DeliveryCharges
  3. Use inputs like DeliveryType to return ExpectedDeliveryCharges

This keeps your logic external and easy to update without modifying the code.
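
To make that concrete, a DeliveryCharges matrix might contain rows like the following. The charge amounts are purely illustrative; your matrix holds whatever values your business defines:

      [
        { "DeliveryType": "Standard", "ExpectedDeliveryCharges": 50 },
        { "DeliveryType": "Express",  "ExpectedDeliveryCharges": 150 }
      ]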

Step 5: Add Calculations

To perform arithmetic operations:

  1. Drag the Calculation element from the Elements panel
  2. Define a formula such as:
    TotalCharges = ExpectedDeliveryCharges + OrderValue

This element performs the actual math and stores the result in a variable.
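
Using the illustrative matrix values above, a Standard delivery on an order of 1000 works out as:

      TotalCharges = ExpectedDeliveryCharges + OrderValue
                   = 50 + 1000
                   = 1050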

Step 6: Sequence and Test

  • Arrange elements on the canvas in logical order
  • Use the Simulate option to test with a sample JSON input:
      {
        "DeliveryType": "Standard",
        "OrderValue": 1000
      }
Verify that the output JSON returns the expected TotalCharges.

Step 7: Activate the Expression Set

Before using it:

  • Set Active Start Date
  • Define Rank (for rule priority)
  • Select Output Variables
  • Click Activate

Your Expression Set is now ready for use.

Using Expression Set Action in an OmniScript

OmniScripts are user-facing guided flows, and Expression Set Actions allow logic to run automatically in the background.

Step 1: Prepare Inputs

In the OmniScript:

  • Create fields such as DeliveryType and OrderValue
  • Capture values from user input or previous steps

Step 2: Add Expression Set Action

  • Open OmniScript Designer
  • Drag Expression Set Action between steps
  • Select your Expression Set (ShippingChargesExp)

Step 3: Configure Input Mapping

Map inputs using JSON paths, for example:

  • %Step:CustomerDetails:DeliveryType%
  • %Step:CustomerDetails:OrderValue%
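
These merge-field paths assume the two fields were captured on a step named CustomerDetails, so the OmniScript’s data JSON would contain a node roughly like this (step and element names are whatever you chose in your own design):

      {
        "CustomerDetails": {
          "DeliveryType": "Express",
          "OrderValue": 2500
        }
      }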

Step 4: Use Output Values

In the next step:

  • Use Set Values or Display Text elements
  • Reference returned outputs like TotalCharges

Step 5: Test

Preview the OmniScript with different inputs to ensure calculations work correctly.
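
For example, switching to an Express delivery should change the result accordingly. Assuming the illustrative Express charge of 150, an OrderValue of 2500 should come back as:

      {
        "TotalCharges": 2650
      }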

Using Expression Set Action in an Integration Procedure

Integration Procedures handle server-side processing and are ideal for performance-heavy logic.

Step 1: Create Integration Procedure

  1. Go to Integration Procedures
  2. Click New
  3. Add an Expression Set Action from the Actions palette

Step 2: Configure the Action

  • Select the Expression Set
  • Map inputs such as DeliveryType and OrderValue

Step 3: Return Outputs

  • Add a Response Action
  • Include output variables
  • Save and execute to validate results
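
Assuming the output variable was included in the Response Action, the Integration Procedure’s response would carry it back to the caller, for example (values follow the illustrative Standard scenario):

      {
        "TotalCharges": 1050
      }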

Step 4: Call from OmniScript

Use an Integration Procedure Action inside OmniScript to invoke this logic.

This approach improves scalability and keeps OmniScripts lightweight.

Key Learning Resources

If you’re new to OmniStudio, these resources are highly recommended:

]]>
https://blogs.perficient.com/2026/01/21/omnistudio-expression-set-action-a-beginner-friendly-guide/feed/ 0 389487
An Example Brainstorming Session https://blogs.perficient.com/2026/01/20/example-brainstorming-session/ https://blogs.perficient.com/2026/01/20/example-brainstorming-session/#respond Tue, 20 Jan 2026 23:42:15 +0000 https://blogs.perficient.com/?p=389807

In my last blog post I addressed how to prepare your team for a unique experience and have them primed and ready for brainstorming.

Now I want to cover what actually happens INSIDE the brainstorming session itself. What activities should be included? How do you keep the energy up throughout the session?

Here’s a detailed brainstorming framework and agenda you can follow to generate real results. It works whether you have 90 minutes or a full day; whether you are tackling product innovation, process improvement, strategic planning, or problem solving; and whether you have 4 people on the team or 12 (try not to do more than that). Feel free to pick and choose what you like and adjust to fit your team and desired depth.

Pre-Session Checklist

  • Room Setup: Seating arranged to encourage collaboration (avoid traditional conference setups), background music playing softly, and room to move around freely. Being offsite is best!
  • Materials: Whiteboards, sticky notes, markers, small and large paper pads, dot stickers for voting, projector/screen.
  • Helpers: Enlist volunteers to capture ideas, manage breakout groups, and tally votes. Ensure they know their roles ahead of time.
  • Technology: If you’re using digital tools, screen sharing, or virtual whiteboards, test everything before the team arrives.
  • Breaks: Make sure you plan for breaks. People need mental and physical break periods.
  • Food: Have snacks and beverages ready. If you have a session over 3 hours, plan lunch and/or supper.

1. Welcome the Team (5-20 minutes)

As people arrive, keep things light to set the tone. Try to keep a casual conversation going; laughs are ideal! This isn’t another meeting; it’s a space for creative thinking.

If anyone tried a personal disruption ahead of the meeting, see if they’ll share (no pressure). As the facilitator, have your own ready to share and also explain the room disruptions you’ve set up.

2. Mental Warmups (5-20 minutes)

The personal disruptions mentioned in my other post are meant to break people out of their mental ruts. This warm-up period is meant to achieve the same thing.

Many facilitators do this with ice breakers. I personally don’t like them and have had better luck with other approaches. Consider sharing some optical illusions or brain teasers that stretch their minds rather than putting them on the spot with forced socialization.

That said, ice breakers that get people up and building something together can work too, if you have one you like. A common one that works well is having small teams build the tallest tower they can out of toothpicks and mini-marshmallows.

3. Cover the Brainstorming Ground Rules (2-10 minutes)

  • No Bad Ideas: Save negativity for later. Right now, we’re generating, not judging.
  • Quantity Over Quality: More ideas mean more chances for success. Aim for volume.
  • Wild Ideas Welcome: Suspend reality temporarily. One impossible idea can spark a feasible one.
  • No Ownership Battles: Ideas belong to the team. Collaboration beats competition.
  • Build on Others: Use “Yes, and…” thinking. Evolve, merge, and improve ideas together.
  • Stay Present: No emails, no phones. Even during breaks, don’t get distracted.

These rules should be available throughout the session. Consider hanging a poster with them or sharing an attendee packet that includes it. If anyone is attending remotely, share these in the chat area.

As the facilitator, you should be prepared to enforce these rules!

4. Frame the Challenge (5-20 minutes)

Why are we here today? What’s the goal of this brainstorming session? What do we hope to achieve after spending hours together?

This is a critical time to ensure everyone’s head is in the right place before diving into the actual brainstorming. We’re not here just to have fun; we’re here to solve a business problem. Use whatever information you have to enlighten the team: current state, desired state, competition, business data, customer feedback, and so on.

Now that we have everyone mentally prepared, consider a short break after this.

5.A. Individual Ideation (5-15 minutes)

This time is well spent whether or not you had your team generate ideas ahead of time. Even if you asked them to, you cannot expect everyone to have devoted time to your business objective beforehand. You will end up with more diverse ideas if you keep this individual time in the agenda.

Here, provide your attendees with paper, pens, and/or sticky notes, and set a timer. Remind them that quantity of ideas is the goal.

Ask each person to come up with 10+ ideas on their own in 5 minutes. They can compete to see who comes up with the most. Keep some soft instrumental music playing in the background. Consider dropping a “crazy bomb of an idea” as an example… something completely unrealistic and surprising, just to jar their minds one last time before they start. Show them that it’s OK to be wild in their suggestions.

When the round is done, optionally, you can take the next 5-10 minutes hearing some of the team’s favorites. Not all, just the favorites. Write them on a board, or post the sticky notes up.

5.B. Second Round of Individual Ideation (10-20 minutes)

If you have time, do a second round of individual idea creation, but this time introduce lateral thinking. Use random entry to show them that ideas can be triggered through associations. Have snippets of paper with random words for each person to draw from a bowl or hat. Give them an additional 5 or 10 minutes to come up with another set of ideas that relates to the word they selected.

For this second round you should be prepared to help anyone who struggles. You can suggest connections to their selected word, or push them to explore synonyms, antonyms, or other associations. For instance, if they draw “tiger”, you can associate animal, cat, jungle, teeth, claws, stripes, fur, orange, black, white, predator, aggression, primal, mascot, camouflage, Frosted Flakes, breakfast, sports, Detroit, baseball, Cincinnati, football, apparel, clothing, costume, Halloween, and more!

The associations are endless. They draw “tiger”, associate “stripe”, and relate that to the objective in how “striping” could mean updating parts of a system, and not all of it. Or they associate “baseball” and relate that to the objective in how a “bunt” is a strategic move that averts expectations and gets you on base.

6. Idea Sharing (10-60 minutes)

This portion of brainstorming is where ideas start to come together. When people start sharing their initial ideas, others get inspired. Remind everyone that we’re not after ownership, we’re collectively trying to solve the business problem. Your helpers can take notes on who was involved in an idea, so they can later be tagged for additional input or the project team.

This step can be nerve-wracking. Professionals may be uncertain about sharing half-baked ideas, but this is what we need! Don’t pressure anyone; instead, as the facilitator, offer to share ideas on their behalf if they would like that.

As part of this step, begin identifying patterns and themes. People’s first ideas are generally the easy ones that multiple people will have (including your competitors). There will be similarities. Group those ideas now and try to give the groupings easy-to-reference names.

The bulk of the ideas are now in everyone’s heads, so consider a short break after this.

7. Idea Expansion (20-60 minutes)

As the team comes back from a break, do a round of dot voting. Your ideas are pasted up and grouped, and the team has had some time to let those ideas settle in their minds. Now we’re ready to start driving the focus of the rest of this session.

There should be a set of concepts that are most intriguing to the team. Now, encourage the team to push some ideas further, create spin-offs, and cross-pollinate. Even flipping ideas to their opposite is still welcome. SCAMPER (Substitute, Combine, Adapt, Modify, Put to another use, Eliminate, Reverse) is a creative-thinking checklist you might print out and display for your session today.

Like comedy improv, we still do not want to be negative about any idea. Use “yes, and…” to elaborate on someone’s idea. “I really like this idea, now imagine if we spin it as…” Make sure these expansions are being written down and captured.

8. Wild Card Rounds (10-60 minutes)

If you have a larger group, this time is ideal for break-out sessions. If your group is small, it can be another individual ideation round.

Take the top contending themes and divvy them out to groups or individuals. Then you can run 1-3 speed rounds, rotating themes between rounds.

  1. Role Play: Ask them to expand on their theme as if they were Steve Jobs, Jeff Bezos, Einstein, your competitor, or SpongeBob. This makes them think differently.
  2. Constraints: Consider how they would have to change the idea if they were limited by budget, time, quality, or approach. Poetry is beautiful because of its constraints.
  3. Wishful Thinking: What could you do if all constraints were lifted? If you were writing a fictional book, how would you make this happen?
  4. Exaggeration: Take the idea to the extreme. If the idea as stated is 10%, what does 100% look like? What does 10-times look like?

This level of pushing creativity can be exhausting, so consider a break after this.

9. Bring it Together (10-60 minutes)

Update your board with the latest ideas and iterations, if you haven’t already. Give the attendees a few minutes to peruse the posted ideas and reflect. Refresh the favorites list with another round of dot voting.

If time allows, move on from all this divergent thinking, and ask the attendees to list some constraints or areas that need to be investigated for these favorite ideas to work. Keep in mind this is still a “no bad ideas” session, so this effort should be a means to identify next steps for the idea and how to ensure it is successful if it is selected to move forward.

If you still have more time available, start some discussion that could help create a priority matrix after the meeting (like How/Now/Wow). Venture into identifying the following for each of the favorite ideas. We’re just looking for broad strokes and wide ranges today. On a scale of 1-10, where do these fall?

  • Impact: How much would this change the story for the business?
  • Effort: How much effort from business resources might be required?
  • Timeline: What would the timeline look like?
  • Cost: Would there be outside costs?

10. Next Steps (5-10 minutes)

This is the last step of this brainstorming session, but this is not the end. Now we fill the team in on what happens next and give them confidence that today’s effort will be useful. Start by asking the team what excited or surprised them the most today, and what they’d like to do again sometime.

Explain to the team how these ideas will be documented and shared out. The team should already be excited about at least one of today’s ideas; they’ll sleep on these ideas and continue thinking. So, let them know that there will be an opportunity to add additional thoughts to their favorites in the days/weeks to come.

Explain if you have any further plans to get feedback from stakeholders, leaders, or customers. If there are decision makers that are not in this meeting, then help your team understand what you’ll be doing to share these collective ideas with those who will make the final call.

Lastly, thank them for their time today. Express your own satisfaction and excitement for what’s to come. Try to squeeze in a few more laughs and build a feeling of teamwork. Consider remarking on something from this meeting as a “you had to be there” type of joke, even if it is the unrealistic bombshell of an idea that gets a laugh.

Tips for the Facilitator

  • Energy Management: Watch the room’s energy. If it dips, inject movement. Stand up, stretch, take a quick walk, change the pace with a speed round.
  • Protect the Quiet Voices: Don’t let extroverts dominate. Use techniques like written brainstorming and round-robin sharing to ensure everyone contributes.
  • Embrace the Awkward Silence: When you ask a question and get silence, resist the urge to fill it. Give people time to think. Count to ten in your head before jumping in, and don’t make them feel like it was a failure to not say anything.
  • Document Everything: Assign helpers to photograph whiteboards, capture sticky notes, and record key insights. You’ll lose valuable ideas if you rely on memory alone.
  • Keep Your “Crazy Idea Bomb” Ready: If the room gets stuck, be prepared to throw out something intentionally wild to break the pattern. Sometimes the group needs permission to think bigger.
  • Stay Neutral: As facilitator, your job is to guide the process, not advocate for specific ideas. You can participate, if you want to, but save your own advocacy for later. No idea is a bad idea in this session.

Conclusion

I hope you find this example brainstorming session agenda helpful! It’s one of my favorite things to run through. Get your team prepped and ready, then deliver an amazing workshop to drive creativity and innovation!

……

If you are looking for a partner to run brainstorming with, reach out to your Perficient account manager or use our contact form to begin a conversation.

]]>
https://blogs.perficient.com/2026/01/20/example-brainstorming-session/feed/ 0 389807
NRF 2026 and the Human-Centered Future of Retail https://blogs.perficient.com/2026/01/20/nrf-2026-and-the-human-centered-future-of-retail/ https://blogs.perficient.com/2026/01/20/nrf-2026-and-the-human-centered-future-of-retail/#respond Tue, 20 Jan 2026 20:31:44 +0000 https://blogs.perficient.com/?p=389828

In the lead-up to NRF opening its doors in New York City, industry leaders were already debating how artificial intelligence, shifting consumer expectations, and economic uncertainty would shape the year ahead. One theme rose above the rest. Retail is entering a renewed era where human connection is once again the driving force behind meaningful customer experiences.

To better understand these conversations, Justin Racine, principal and associate vice president of commerce, attended Retail’s Big Show. He walked away with a clear message that he later shared in CMSWire. Below are several of his key takeaways.

A Return to Basics With Advanced Tools

Justin described NRF 2026 as a return to something retailers have always known. People want to feel understood, valued, and connected. While the industry has spent recent years racing toward automation and efficiency, the conversations at NRF signaled a shift back to the emotional core of retail. This time, the tools supporting that shift are more advanced than ever.

Across sessions and show floor discussions, leaders emphasized that technology is not replacing the human element. It is amplifying it. Retailers are using modern capabilities to create experiences that feel more personal and more intuitive. Innovation and empathy are beginning to work together rather than compete for attention.

Artificial Intelligence as an Intimacy Engine

One of Justin’s strongest observations was the reframing of artificial intelligence. Instead of viewing AI as a tool for automation alone, many speakers described it as a way to deepen relationships. AI can help associates anticipate needs, respond with greater accuracy, and create interactions that feel tailored and thoughtful.

This perspective marks a meaningful shift. Retailers are no longer asking how AI can replace tasks. They are asking how it can elevate people. When employees are empowered with better insights, they can deliver service that feels more human.

The Power of Physical and Emotional Proximity

Justin also noted a renewed focus on closeness. Brands are working to get physically closer to their customers through store design and layout choices that feel warm and intuitive. They are also striving for emotional closeness by building trust and demonstrating empathy in every interaction.

This emphasis on proximity is becoming a competitive advantage. When customers feel seen and understood, loyalty grows. NRF made it clear that the brands winning in 2026 will be those that invest in relationships as intentionally as they invest in technology.

Consumers Still Seek Joy

Even with economic uncertainty shaping buying behavior, Justin observed that consumers continue to seek moments of joy. They are willing to spend when an experience feels meaningful. Retailers are responding by creating moments that spark delight, whether through a warm conversation with a sales associate or a surprisingly relevant recommendation powered by AI.

A Future Built on People and Technology Working Together

Justin’s reflections on NRF 2026 paint a picture of an industry recalibrating. The future of retail is not defined by technology alone. It is defined by the way technology and humanity support one another. NRF reinforced that the strongest brands will be those that use innovation to elevate people and create experiences that feel authentic and personal.

Retail thrives when it stays close to its roots. NRF 2026 was a reminder that human connection remains the heart of the industry and that the path forward is one where people and technology move together with purpose.

To read Justin’s full article, head over to CMSWire.

Learn more about our commerce expertise.

Learn more about our retail + distribution expertise.

]]>
https://blogs.perficient.com/2026/01/20/nrf-2026-and-the-human-centered-future-of-retail/feed/ 0 389828