Perficient Blogs https://blogs.perficient.com/ Expert Digital Insights

Insight into Oracle Cloud IPM Insights https://blogs.perficient.com/2026/02/20/insight-into-oracle-cloud-ipm-insights/ https://blogs.perficient.com/2026/02/20/insight-into-oracle-cloud-ipm-insights/#respond Fri, 20 Feb 2026 23:00:31 +0000 https://blogs.perficient.com/?p=390542

Why Intelligent Insights Matter in Modern Finance

In today’s data‑driven economy, success isn’t just about keeping up – it’s about anticipating change and acting decisively. Oracle IPM Insights, a powerful capability within Oracle EPM Cloud, empowers organizations to uncover critical anomalies, forecast emerging trends, and recommend actions that drive performance. With AI‑driven narratives and real‑time intelligence embedded directly into financial workflows, IPM Insights transforms raw data into strategic guidance – helping businesses improve forecast accuracy, control costs, and stay ahead in a rapidly evolving market.

 

Transforming Data into Actionable Intelligence

Oracle IPM Insights is designed to move finance teams beyond static reporting. It continuously monitors your EPM data, detects anomalies, and forecasts trends – all embedded within your planning and reporting workflows. This means insights aren’t just visible, they’re actionable, enabling proactive decision‑making across the enterprise.

By surfacing emerging risks and opportunities earlier, finance leaders can shift from reactive analysis to strategic guidance. The platform also reduces time spent on manual data investigation, allowing teams to focus on value‑added analysis rather than routine variance checks. Ultimately, IPM Insights helps organizations elevate forecasting accuracy, strengthen operational agility, and drive more confident decision‑making at scale.

 

Key Features of Oracle IPM Insights

  1. Anomaly Detection: Spot Issues Before They Escalate – IPM Insights identifies unusual patterns in your data, such as unexpected variances in budgets or forecasts. By catching anomalies early, finance teams can investigate root causes and correct issues before they affect performance, ensuring alignment with strategic objectives.
  2. Predictive & Prescriptive Analytics: From Forecast to Action – Beyond forecasting, IPM Insights provides guidance on corrective actions based on detected patterns. For example, if forecast accuracy begins to drift, the system can recommend refining key drivers or adjusting planning assumptions—helping teams stay ahead of potential risks.
  3. Forecast Variance & Bias Detection: Strengthening Forecast Reliability – IPM Insights continuously evaluates actuals vs. forecasted results to identify variance trends and detect systemic bias – whether forecasts are consistently optimistic, conservative, or misaligned with drivers. This helps finance teams improve forecast reliability, refine planning models, and increase confidence in future projections.
  4. Generative AI Narratives: Simplifying Complexity – IPM Insights automatically generates narrative explanations for anomalies, trends, and underlying drivers in plain language. These AI‑generated summaries make insights easy to share with stakeholders, improving understanding and reducing time spent preparing reports.

 

Integrating IPM Insights Across EPM

IPM Insights works natively across Oracle Cloud EPM solutions – Planning, Financial Consolidation and Close, Enterprise Profitability and Cost Management, Tax Reporting, and FreeForm Planning. This integration eliminates silos and ensures consistency across processes. By connecting insights across the full financial lifecycle, organizations can trace the impact of assumptions, drivers, and anomalies from planning through consolidation and final reporting. This unified view reduces reconciliation effort, improves data reliability, and accelerates the close‑to‑forecast cycle.

For finance teams, this integration delivers significant value: manual effort drops as data flows automatically across modules, enabling teams to focus on higher‑value analysis rather than time‑consuming data validation. Forecasts become more accurate thanks to a consistent, connected data foundation that minimizes discrepancies and increases trust in the numbers. Cross‑functional collaboration also improves, as FP&A, accounting, and operations all work from the same source of truth—leading to faster decisions and a more agile finance organization.

Best Practices for Optimization

Unlocking the full potential of Oracle IPM Insights requires more than activation – it demands a disciplined approach. Follow these best practices to maximize value:

  1. Define Insight Scope Strategically – Configure Insight Definitions for specific data slices aligned with business priorities to keep insights actionable.
  2. Incorporate Calendars & Event Context – Annotate insights with business events to distinguish expected fluctuations from true anomalies.
  3. Embed Insights into Everyday Workflows – Use Smart View and the Insights dashboard to make insights accessible where planners work.
  4. Use Narratives to Strengthen Commentary and Executive Reporting – Incorporate AI‑generated explanations into management decks, close packages, and forecast summaries to improve speed and consistency. This reduces time spent drafting commentary while increasing clarity and precision.
  5. Establish Governance & Ongoing Review – Create a monitoring team to fine-tune thresholds, validate models, and drive continuous improvement.

 

Future Trends in Enterprise Performance Management

  1. Driver-Based Forecasting with AutoMLx – Trends are shifting toward intelligent, driver-based forecasting. Oracle EPM leads with Advanced Predictions powered by AutoMLx, enabling multivariate models that incorporate key business drivers for greater accuracy and transparency.
  2. Conversational AI Agents for Finance – AI-driven assistants will allow finance teams to query insights in natural language and receive instant recommendations – making planning more intuitive and collaborative. This shift will not only accelerate decision‑making but will also empower organizations to respond to market changes with greater agility, improving both financial accuracy and overall business performance.
  3. Self-Learning Models and Continuous Improvement – Future models will learn from user actions and outcomes, improving accuracy over time. This adaptive capability ensures businesses stay ahead in an ever-changing market.

 

Why Insights Matter

The ability to detect, predict, and act on insights is no longer optional – it’s a competitive and existential necessity. In an environment where markets shift rapidly, budgets tighten, and expectations for accuracy increase, finance teams must operate with real‑time intelligence rather than backward‑looking reports. Organizations that can rapidly translate data into decisions gain measurable advantages in agility, cost control, and strategic alignment.

Oracle IPM Insights equips finance teams with the advanced analytics, automation, and predictive capabilities needed to stay ahead of uncertainty. By delivering timely insights directly within planning, close, and reporting workflows, IPM Insights turns raw data into actionable intelligence—empowering teams to respond faster, improve forecast reliability, and drive stronger business outcomes. The result is a finance function that doesn’t just report on performance—it actively shapes it, becoming a strategic partner to the entire enterprise.

 

Ready to unlock the power of Oracle IPM Insights? Leave a comment or contact us to explore how Oracle EPM Cloud can help you anticipate change, optimize performance, and lead with confidence.

 

2026 Regulatory Reporting for Asset Managers: Navigating the New Era of Transparency https://blogs.perficient.com/2026/02/20/2026-regulatory-reporting-for-asset-managers-navigating-the-new-era-of-transparency/ https://blogs.perficient.com/2026/02/20/2026-regulatory-reporting-for-asset-managers-navigating-the-new-era-of-transparency/#respond Fri, 20 Feb 2026 20:01:52 +0000 https://blogs.perficient.com/?p=390547

The regulatory landscape for asset managers is shifting beneath our feet. It’s no longer just about filing forms; it’s about data granularity, frequency, and the speed at which you can deliver it. As we move into 2026, the Securities and Exchange Commission (SEC) has made its intentions clear: they want more data, they want it faster, and they want it to be more transparent than ever before.

For financial services executives and compliance professionals, this isn’t just a compliance headache—it’s a data infrastructure challenge. The days of manual spreadsheets and last-minute scrambles are over. The new requirements demand a level of agility and precision that legacy systems simply cannot support. If you’re still relying on manual processes to meet these evolving standards, you’re not just risking non-compliance; you’re risking your firm’s operational resilience.

The Shifting Landscape: More Data, More Often

The theme for 2026 is “more.” More frequent filings, more detailed disclosures, and more scrutiny. The SEC’s push for modernization is driven by a desire to better monitor systemic risk and protect investors, but for asset managers, it translates to a significant operational burden.

Take Form N-PORT, for example. What was once a quarterly obligation with a 60-day lag is transitioning to a monthly filing requirement due within 30 days of month-end. This tripling of filing frequency doesn’t just mean three times the work; it means your data governance and reporting engines must be “always-on,” capable of aggregating and validating portfolio data on a continuous cycle.

The “Big Three” for 2026: Form PF, 13F, and N-PORT

While there are numerous reports to manage, three stand out as critical focus areas for 2026: Form PF, Form 13F, and Form N-PORT. Each has undergone significant changes or is subject to new scrutiny that demands your attention.

Form PF: The Private Fund Data Deep Dive

The amendments to Form PF, adopted in February 2024, represent a sea change for private fund advisers. With a compliance date of October 1, 2026, these changes require more granular reporting on fund structures, exposures, and performance. Large hedge fund advisers must now report within 60 days of quarter-end, and the scope of data required—from detailed asset class breakdowns to counterparty exposures—has expanded significantly. This isn’t just another new report. It’s a comprehensive audit of your fund’s risk profile, delivered quarterly.

Form 13F: The Institutional Standard

For institutional investment managers exercising discretion over $100 million or more in 13(f) securities, Form 13F remains a cornerstone of transparency. Filed quarterly within 45 days of quarter-end, this report now requires the companion filing of Form N-PX to disclose proxy votes on executive compensation. This linkage between holdings and voting records adds a new layer of complexity, requiring firms to seamlessly integrate data from their portfolio management and proxy voting systems.

Form N-PORT: The Monthly Sprint

A shift to monthly N-PORT filings is a game-changer for registered investment companies. The requirement to file within 30 days of month-end means that your month-end close process must be tighter than ever. Any delays in data reconciliation or validation will eat directly into your filing window, leaving little margin for error.

The Operational Burden: Hidden Costs of Manual Processes

It’s easy to underestimate the time and effort required to produce these reports. A “simple” quarterly update can easily consume a week or more of a compliance officer’s time when you factor in data gathering, reconciliation, and review.

For a large hedge fund adviser, we at Perficient have seen a full Form PF filing take two weeks or more of dedicated effort from multiple teams. When you multiply this across all your reporting obligations, the cost of manual processing becomes staggering. And that’s before you consider the opportunity cost—time your team spends wrangling data is time they aren’t spending on strategic initiatives or risk management.

The Solution: Automation and Cloud Migration

The only viable path forward is automation. To meet the demands of 2026, asset managers must treat regulatory reporting as a data engineering problem, not just a compliance task. This means moving away from siloed spreadsheets and towards a centralized, cloud-native data platform.

By migrating your data infrastructure to the cloud, you gain the scalability and flexibility needed to handle large datasets and complex calculations. Automated data pipelines can ingest, validate, and format your data in real-time, reducing the “production time” from weeks to hours. This isn’t just about efficiency; it’s about accuracy and peace of mind. When your data is governed and your processes are automated, you can file with confidence, knowing that your numbers are right.

Key Regulatory Reports at a Glance

To help you navigate the 2026 reporting calendar, we’ve compiled a summary of the key reports, their purpose, and what it takes to get them across the finish line.

SEC Forms Asset Managers Must File

Your Next Move

If your firm would like assistance designing or adopting regulatory reporting processes or migrating your data infrastructure to the cloud with a consulting partner that has deep industry expertise – reach out to us here.

Perficient Earns Databricks Brickbuilder Specialization for Healthcare & Life Sciences https://blogs.perficient.com/2026/02/18/perficient-earns-databricks-brickbuilder-specialization-for-healthcare-life-sciences/ https://blogs.perficient.com/2026/02/18/perficient-earns-databricks-brickbuilder-specialization-for-healthcare-life-sciences/#respond Wed, 18 Feb 2026 17:59:11 +0000 https://blogs.perficient.com/?p=390471

Perficient is proud to announce that we have earned the Databricks Brickbuilder Specialization for Healthcare & Life Sciences, a distinction awarded to select partners who consistently demonstrate excellence in using the Databricks Data Intelligence Platform to solve the industry’s most complex data challenges.

This specialization reflects both our strategic commitment to advancing health innovation through data and AI, and our proven track record of helping clients modernize with speed, responsibility, and measurable outcomes.

Our combined expertise in Healthcare & Life Sciences and the Databricks platform uniquely positions us to help customers achieve meaningful impact, whether improving patient outcomes or accelerating the clinical data review process. This specialization underscores the strength of our capabilities across both the platform and within this highly complex industry. – Nick Passero, Director Data and Analytics

How We Earned the Specialization

Achieving the Databricks Brickbuilder Specialization requires a deep and sustained investment in technical expertise, customer delivery, and industry innovation.

Technical Expertise: Perficient met Databricks’ stringent certification thresholds, ensuring that dozens of our data engineers, architects, and AI practitioners maintain active Pro and Associate certifications across key domains. This level of technical enablement ensures that our teams not only understand the Databricks platform, but can apply it to clinical trials, healthcare claims management, and real world evidence, leading to AI-driven decisioning.

Delivery Excellence: Equally important, we demonstrated consistent success delivering healthcare and life sciences use cases in production. From enhancing omnichannel member services to migrating complex Hadoop workloads to Databricks for a large midwest payer, building a modern lakehouse on Azure for a leading children’s research hospital, and modernizing enterprise data architecture with Lakehouse and DataOps for a national payer, our client work demonstrates both scale and repeatability.

Thought Leadership: Our achievement also reflects ongoing thought leadership, another core requirement of Databricks’ specialization framework. Perficient continues to publish research-driven perspectives (Agentic AI Closed-Loop Systems for N-of-1 Treatment Optimization, and Agentic AI for RealTime Pharmacovigilance) that help executives navigate the evolving interplay of AI, regulatory compliance, clinical innovation, and operational modernization across the industry.

Why This Matters to You

Healthcare and life sciences organizations face unprecedented complexity as they seek to unify and activate data from sensitive datasets (EMR/EHR, imaging, genomics, clinical trial data). Leaders must make decisions that balance innovation with security, scale with precision, and AI-driven speed with regulatory responsibility.

The Databricks specialization matters because it signals that Perficient has both the technical foundation and the industry expertise to guide organizations through this transformation. Whether the goal is to accelerate drug discovery, reduce clinical trial timelines, personalize therapeutic interventions, or surface real-time operational insights, Databricks provides the engine and Perficient provides the strategy, implementation, and healthcare context needed to turn potential into outcomes.

A Thank You to Our Team

This accomplishment is the result of extraordinary commitment across Perficient’s Databricks team. Each certification earned, each solution architected, and each successful client outcome reflects the passion and expertise of people who believe deeply in improving healthcare through better data.

We’re excited to continue shaping the future of healthcare and life sciences with Databricks as a strategic partner.

To learn more about our Databricks practice and how we support healthcare and life sciences organizations, visit our partner page.

 

Agentforce Financial Services Use Cases: Modernizing Banking, Wealth, and Asset Management https://blogs.perficient.com/2026/02/18/agentforce-financial-services-use-cases-modernizing-banking-wealth-and-asset-management/ https://blogs.perficient.com/2026/02/18/agentforce-financial-services-use-cases-modernizing-banking-wealth-and-asset-management/#respond Wed, 18 Feb 2026 15:16:42 +0000 https://blogs.perficient.com/?p=390461

Editor’s Note: We are thrilled to feature this guest post by Tracy Julian, Financial Services Industry Lead & Architect at Perficient. With over 20 years of experience across retail banking, wealth management, and fintech, Tracy is a systems architect who specializes in turning complex data hurdles into high-velocity, future-ready AI solutions.

Executive Summary 

Financial services organizations face mounting pressure to deliver highly personalized client experiences while navigating increasingly complex regulatory requirements. At the same time, relationship managers and advisors spend a significant portion of their week searching for client information across disconnected systems. This administrative burden reduces time available for strategic client engagement and limits the ability to proactively identify cross-sell, retention, and risk management opportunities. 

Agentforce, Salesforce’s enterprise-grade agentic AI platform, addresses these challenges head-on. By automating data aggregation, surfacing real-time insights, and embedding compliance-aware intelligence directly into workflows, Agentforce helps financial services teams operate more efficiently and intelligently. 

This article explores real-world Agentforce financial services use cases and provides a practical implementation roadmap for organizations evaluating AI agent deployment. 

Key Takeaways 

  • Agentforce reduces client research time through automated, multi-source data aggregation 
  • Four proven Agentforce financial services use cases across banking, wealth, and asset management 
  • A 4–6 week implementation timeline is achievable with proper planning 
  • Built-in compliance automation aligned with SOC 2 and financial services standards 

The Challenge: Data Fragmentation in Modern Financial Services 

Financial services teams across B2B banking, wealth management, registered investment advisors (RIAs), and workplace services face a shared set of challenges that directly impact revenue, efficiency, and client satisfaction. 

  1. Information Silos Create Operational Inefficiency
  • Client data is scattered across multiple Salesforce orgs, legacy core banking systems, portfolio management platforms, and document repositories 
  • Financial advisors manage information across many different systems 
  • There is no single, unified view of client relationships, risk indicators, or cross-sell opportunities 
  2. Time-Intensive Meeting Preparation
  • Client-facing teams spend disproportionate time on administrative tasks rather than strategic interactions 
  • Relationship managers manually compile company summaries, account histories, and risk assessments before each meeting 
  • Information retrieval delays slow response times to client inquiries 
  3. Escalating Regulatory Complexity
  • Increasing regulations around data privacy (GDPR, CCPA, GLBA), personally identifiable information (PII), and record retention 
  • Manual compliance reviews create operational bottlenecks and increase the risk of human error 
  • Document scanning for sensitive data (SSNs, account numbers, tax IDs) is often reactive rather than preventive 
  4. Missed Revenue Opportunities
  • Without unified intelligence, leaders struggle to identify upsell, cross-sell, and retention risks in real time 
  • Fragmented data limits proactive account planning and relationship management 
  • Inconsistent visibility into consultant and intermediary relationships reduces partner channel effectiveness 

Real-World Example: Multi-Org Complexity 

A Perficient financial services client operates 20+ production Salesforce orgs across marketing, sales, and service. This complexity has resulted in: 

  • Significant manual effort by relationship managers searching for client information 
  • Inconsistent data interpretation across sales and service teams 
  • Compliance vulnerabilities caused by manual PII identification processes 
  • Delayed opportunity identification due to siloed account intelligence 

This scenario is common across enterprise financial services organizations—and represents one of the most compelling Agentforce financial services use cases. 

How Salesforce Agentforce Helps 

Agentforce is Salesforce’s next-generation AI platform, combining: 

  • Natural language processing (NLP) for conversational interfaces 
  • Multi-source data aggregation across Salesforce objects, external systems, and documents 
  • Workflow automation triggered by agent-driven insights and actions 
  • Compliance-aware processing with PII detection and security controls 
  • Real-time intelligence generated from both structured and unstructured data 

Unlike traditional chatbots or rule-based automation, Agentforce agents: 

  • Understand context and intent from natural language queries 
  • Access and synthesize information from multiple data sources simultaneously 
  • Generate actionable insights and recommendations—not just raw data 
  • Learn from user interactions to improve relevance over time 
  • Integrate seamlessly with existing Salesforce workflows and third-party systems 

Agentforce leverages Salesforce Einstein AI, Data 360 for unified data access, and the Hyperforce infrastructure to deliver enterprise-grade security, compliance, and trust for financial services use cases. 

Four High-Impact Agentforce Financial Services Use Cases 

The following Agentforce use cases have been developed specifically for financial services and can typically be implemented within four weeks. 

Client Intelligence Agent: Gain 360-Degree Relationship Insights 

The Client Summary Agent consolidates comprehensive client intelligence in seconds, eliminating manual data gathering. It aggregates: 

  • Company & Contact Details: Legal entity structure, key decision-makers, organizational hierarchy 
  • Financial Position: Account balances, asset allocation, liabilities, portfolio performance 
  • Relationship Health: Engagement scores, activity frequency, NPS data, retention risk indicators 
  • Opportunity Pipeline: Active deals, proposal status, estimated close dates, win probability 
  • Service History: Open and closed cases, resolution times, satisfaction ratings 
  • Interaction Timeline: Meetings, calls, emails, and all historical touchpoints 

Business Outcome
Relationship managers can prepare for meetings faster, personalize conversations, and proactively identify engagement and retention risks. Time previously spent gathering data is redirected to strategic client interactions. This represents one of the foundational Agentforce financial services use cases. 

Account Relationship Agent: Manage Complex Accounts & Client Risk 

For firms that work with consultants, brokers, or intermediaries, the Account Relationship Agent provides a unified view of partner relationships by consolidating: 

  • Partner Profile: Firm details, key contacts, AUM/AUA influenced, areas of specialization 
  • Referral History: Opportunities sourced, conversion rates, deal size, revenue attribution 
  • Engagement Metrics: Meeting cadence, co-marketing activity, webinar participation, content engagement 
  • Pipeline Analysis: Active referrals by stage, forecasted revenue, deal aging 
  • Collaboration Activity: Shared plans, joint calls, tasks, and communication history 

Business Outcome
Sales teams gain clarity into partner performance and potential, enabling better territory planning, stronger collaboration, and more strategic channel investment. 

Client Prospect Agent: Optimize Sales Intelligence & Next Best Action 

The Client Prospect Agent transforms raw data into actionable sales intelligence by analyzing: 

  • Company Intelligence: Industry position, competitive landscape, growth signals, news mentions 
  • Buying Signals: Website engagement, content consumption, event attendance, RFP activity 
  • Relationship Mapping: Existing connections, decision-makers, organizational structure 
  • Whitespace Analysis: Current services versus product catalog, cross-sell and upsell opportunities 
  • Next Best Actions: Prioritized recommendations based on engagement and firmographic data 

Business Outcome
Sales teams can prioritize accounts more effectively, uncover whitespace opportunities, and focus on actions that accelerate deal progression. This Agentforce financial services use case is most beneficial for acquisition teams. 

Document Scanning Agent: Automate PII Compliance Safeguards 

Regulatory compliance is non-negotiable in financial services. The Document Scanning Agent provides automated, pre-upload document scanning for: 

  • Social Security Numbers (SSNs): Multiple formats (XXX-XX-XXXX, XXXXXXXXX) 
  • Tax Identification Numbers (TINs/EINs): Business and individual identifiers 
  • Account Numbers: Bank, credit card, and brokerage accounts 
  • Passport Numbers: Government-issued identification 
  • Custom PII Patterns: Configurable regex for institution-specific data types 

Business Outcome
Organizations reduce human error, strengthen compliance posture, and protect sensitive client data—automatically and proactively. 

Getting Started: Next Steps for Your Organization 

If your organization is evaluating Agentforce, consider the following steps: 

  1. Assess Your Current State
  • Map data fragmentation across systems and/or across objects within the Salesforce org 
  • Quantify time spent on manual data gathering 
  • Identify high-impact pain points 
  • Establish baseline metrics for measuring improvement 
  2. Define Success Criteria
  • Business outcomes: Efficiency gains, revenue impact, compliance risk reduction 
  • Adoption targets: Percentage of users actively engaging with agents 
  • Technical performance: Accuracy, response time, data completeness 
  • ROI expectations: Payback period and time to value 
  3. Prioritize Use Cases
  • Identify quick-win Agentforce financial services use cases that deliver value in 30–60 days 
  • Assess team readiness and change appetite 
  • Evaluate data availability and quality 
  • Align use cases to regulatory risk and compliance priorities 
  4. Engage Expert Partners
  • Schedule a discovery workshop with Perficient 
  • Review reference architectures and live demonstrations 
  • Develop a phased implementation roadmap 
  • Establish governance, KPIs, and success metrics 

AI Agents as a Competitive Advantage in Financial Services 

The financial services industry is at an inflection point. Organizations that successfully deploy Agentforce financial services use cases to augment human expertise will gain durable competitive advantages, including: 

  • Superior client experiences through faster, more personalized, and proactive service 
  • Improved operational efficiency by shifting effort from administration to relationship management 
  • Revenue growth through earlier identification of cross-sell, upsell, and retention opportunities 
  • Increased compliance confidence with automated safeguards that reduce regulatory risk 
  • Data-driven decision-making powered by unified, real-time intelligence 

Agentforce represents Salesforce’s most significant AI advancement for financial services—combining trusted CRM data with cutting-edge agentic AI capabilities. Organizations that move quickly, but strategically, will establish lasting advantages in client relationships, operational efficiency, and market leadership. 

Meet Your Expert 


Tracy Julian
Financial Services Industry Lead & Architect, Salesforce Practice 

Tracy brings more than 20 years of financial services experience in retail banking, wealth management, capital markets, and fintech, spanning both industry and consulting roles with firms including the Big 4 across the U.S. and EMEA. 

She leads Perficient’s financial services industry efforts within the Salesforce practice, partnering with clients to define the vision and goals behind their transformation. She then uses that foundation to build smarter, future-ready solutions that are business-first and scalable across strategy, cloud migration, and innovation in marketing, sales, and service. 

A systems architect by trade, Tracy is known for aligning teams around a shared vision and solving complex problems with measurable impact. 

An Ultimate Guide to the Toast Notification in Salesforce LWC https://blogs.perficient.com/2026/02/18/an-ultimate-guide-to-the-toast-notification-in-salesforce-lwc/ https://blogs.perficient.com/2026/02/18/an-ultimate-guide-to-the-toast-notification-in-salesforce-lwc/#respond Wed, 18 Feb 2026 07:56:51 +0000 https://blogs.perficient.com/?p=390323

Hello Trailblazers!

Imagine a scenario where you create a record in Salesforce and receive no confirmation of whether the record was created successfully, or whether an error or warning occurred. For situations like this, Salesforce provides a feature called “Toast Notifications”.

Toast notifications are an effective way to provide users with feedback about their actions in Salesforce Lightning Web Components (LWC). They appear as pop-up messages at the top of the screen and automatically fade away after a few seconds.

So in this blog post, we are going to learn everything about Toast Notifications and their types in Salesforce Lightning Web Components (LWC), along with real-world examples.

So, let’s get started…

 

In Lightning Web Components (LWC), you can display Toast Notifications using the Lightning Platform’s ShowToastEvent. Salesforce provides four types of toast notifications:

  1. Success – Indicates that the operation was successful.
    • Example: “Record has been saved successfully.”
  2. Error – Indicates that something went wrong.
    • Example: “An error occurred while saving the record.”
  3. Warning – Warns the user about a potential issue.
    • Example: “You have unsaved changes.”
  4. Info – Provides informational messages to the user.
    • Example: “Your session will expire soon.”

 


 

Example Code for a Toast Notification in LWC:

import { LightningElement } from 'lwc';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';

export default class ToastDemo extends LightningElement {
    showSuccessToast() {
        // Build the toast event with a title, message, and variant
        const event = new ShowToastEvent({
            title: 'Success!',
            message: 'Record has been created successfully.',
            variant: 'success' // Can be 'success', 'error', 'warning', or 'info'
        });
        // Dispatch the event so Lightning Experience displays the toast
        this.dispatchEvent(event);
    }
}

When this event is dispatched, the toast notification appears at the top of the screen and fades away automatically after a few seconds.

 

So this way, you can write toast notification code and make changes according to your requirements.
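
For instance, here is a minimal sketch of one such variation (the handler name and error shape are illustrative, not part of the original example): an error toast that stays on screen until the user dismisses it, using the optional mode property.

handleError(error) {
    // Hypothetical handler inside the same component class shown above
    this.dispatchEvent(new ShowToastEvent({
        title: 'Error creating record',
        message: (error && error.body && error.body.message) || 'Something went wrong.',
        variant: 'error',
        mode: 'sticky' // 'dismissible' (default), 'pester', or 'sticky'
    }));
}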

In the next part of this blog series, we will explore what a success toast notification is and demonstrate how to implement it through a practical, real-world example.

Until then, Keep Reading !!

“Consistency is the quiet architect of greatness—progress so small it’s often unnoticed, yet powerful enough to reshape your entire future.”

Related Posts:

  1. Toast Notification in Salesforce
  2. Toast Event: Lightning Design System (LDS)

You Can Also Read:

1. Introduction to the Salesforce Queues – Part 1
2. Mastering Salesforce Queues: A Step-by-Step Guide – Part 2
3. How to Assign Records to Salesforce Queue: A Complete Guide
4. An Introduction to Salesforce CPQ
5. Revolutionizing Customer Engagement: The Salesforce Einstein Chatbot

 

Common Machine Learning Concepts and Algorithms https://blogs.perficient.com/2026/02/18/common-machine-learning-concepts-and-algorithms/ https://blogs.perficient.com/2026/02/18/common-machine-learning-concepts-and-algorithms/#comments Wed, 18 Feb 2026 06:05:09 +0000 https://blogs.perficient.com/?p=390337

Machine Learning (ML) may sound technical; however, once you break it down, it’s simply about teaching computers to learn from data—just like humans learn from experience.

In this blog, we’ll explore ML in simple words: its types, important concepts, and popular algorithms.

What Is Machine Learning?

Machine Learning is a branch of artificial intelligence; in essence, it allows models to learn from data and make predictions or decisions without the need for explicit programming.

Every ML system involves two things:

  • Input (Features)
  • Output (Label)

With the right data and algorithms, ML systems can recognize patterns, make predictions, and automate tasks.

Types of Machine Learning

1.1 Supervised Learning

Supervised learning uses labeled data, meaning the correct answers are already known.

Definition

Training a model using data that already contains the correct output.

Examples

  • Email spam detection
  • Predicting house prices

Key Point

The model learns the mapping from input → output.

1.2 Unsupervised Learning

Unsupervised learning works with unlabeled data. No answers are provided—the model must find patterns by itself.

Definition

The model discovers hidden patterns or groups in the data.

Examples

  • Customer segmentation
  • Market basket analysis (bread buyers also buy butter)

Key Point

No predefined labels. The focus is on understanding data structure.

1.3 Reinforcement Learning

This type of learning works like training a pet—reward for good behavior, penalty for wrong actions.

Definition

The model learns by interacting with its environment and receiving rewards or penalties.

Examples

  • Self-driving cars
  • Game‑playing AI (Chess, Go)

Key Point

Learning happens through trial and error over time.

2. Core ML Concepts

2.1 Features

Input variables used to predict the outcome.

Examples:

  • Age, income
  • Pixel values in an image

2.2 Labels

The output or target value.

Examples:

  • “Spam” or “Not Spam”
  • Apple in an image

2.3 Datasets

When training a model, data is usually split into:

  • Training Dataset
    Used to teach the model (e.g., 50% of data)
  • Testing Dataset
    Used to check performance (the remaining 50%)
  • Validation Dataset
    Fresh unseen data for final evaluation
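
As a minimal sketch of such a split (plain JavaScript with toy data; the 50/50 ratio simply mirrors the example figures above and is not a recommendation):

// Toy data: one feature (hours studied) and one label (pass = 1, fail = 0)
const features = [[1], [2], [3], [4], [5], [6], [7], [8]];
const labels = [0, 0, 0, 1, 0, 1, 1, 1];

// Shuffle row indices, then cut the list in half for a 50/50 split
const indices = features.map((_, i) => i).sort(() => Math.random() - 0.5);
const cut = Math.floor(indices.length * 0.5);

const trainX = indices.slice(0, cut).map(i => features[i]);
const trainY = indices.slice(0, cut).map(i => labels[i]);
const testX = indices.slice(cut).map(i => features[i]);
const testY = indices.slice(cut).map(i => labels[i]);

console.log(`${trainX.length} training rows, ${testX.length} testing rows`);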

2.4 Overfitting & Underfitting

Overfitting

The model learns the training data too well—even the noise.
✔ Good performance on training data
✘ Poor performance on new data

Underfitting

The model fails to learn patterns.
✔ Fast learning
✘ Poor accuracy on both training and new data

3. Common Machine Learning Algorithms

Below is a simple overview:

  • Classification: Decision Tree, Logistic Regression
  • Regression: Linear Regression, Ridge Regression
  • Clustering: K-Means, DBSCAN

 

3.1 Regression

Used when predicting numerical values.

Examples

  • Predicting sea level in meters
  • Forecasting number of gift cards to be sold next month

Not an example:
Finding an apple in an image → That’s classification, not regression.

3.2 Classification

Used when predicting categories or labels.

Examples

  • Identifying an apple in an image
  • Predicting whether a loan will be repaid

3.3 Clustering

Used to group data based on similarity.
No labels are provided.

Examples

  • Grouping customers by buying behavior
  • Grouping news articles by topic

4. Model Evaluation Metrics

To measure the model’s performance, we use:

Basic Terms

  • True Positive
  • False Negative
  • True Negative
  • False Positive

Important Metrics

  • Accuracy – How often the model is correct
  • Precision – Of the predicted positives, how many were correct?
  • Recall – How many actual positives were identified correctly?

These metrics ensure that the model is trustworthy and reliable.
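
As a simple illustration (plain JavaScript with hypothetical confusion-matrix counts), the three metrics are computed from the four basic terms like this:

// Hypothetical counts for a binary classifier
const tp = 40, fp = 10, tn = 35, fn = 15;

const accuracy = (tp + tn) / (tp + fp + tn + fn); // how often the model is correct
const precision = tp / (tp + fp); // predicted positives that were actually correct
const recall = tp / (tp + fn); // actual positives that were identified

console.log(`Accuracy: ${accuracy.toFixed(2)}, Precision: ${precision.toFixed(2)}, Recall: ${recall.toFixed(2)}`);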

Conclusion:

Machine learning may seem complex; however, once you understand the core concepts—features, labels, datasets, and algorithms—it quickly becomes a powerful tool for solving real‑world problems. Furthermore, whether you are predicting prices, classifying emails, grouping customers, or training self‑driving cars, ML is consistently present in the technology we use every day.

With foundational knowledge and clear understanding, anyone can begin their ML journey.

Additional Reading

Language Mastery as the New Frontier of Software Development https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/ https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/#respond Mon, 16 Feb 2026 17:23:54 +0000 https://blogs.perficient.com/?p=390355
In the current technological landscape, the interaction between human developers and Large Language Models (LLMs) has transitioned from a peripheral experiment into a core technical competency. We are witnessing a fundamental shift in software development: the evolution from traditional code logic to language logic. This discipline, known as Prompt Engineering, is not merely about “chatting” with an AI; it is the structured ability to translate human intent into precise machine action. For the modern software engineer, designing and refining instructions is now as critical as writing clean, executable code.

1. Technical Foundations: From Prediction to Instruction

To master AI-assisted development, one must first understand the nature of the model. An LLM, at its core, is a probabilistic prediction engine. When given a sequence of text, it calculates the most likely next word (or token) based on vast datasets.
Base Models vs. Instruct Models
Technical proficiency requires a distinction between Base Models and Instruct Models. A Base LLM is designed for simple pattern completion or “autocomplete.” If asked to classify a text, a base model might simply provide another example of a text rather than performing the classification. Professional software development relies almost exclusively on Instruct Models. These models have been aligned through Reinforcement Learning from Human Feedback (RLHF) to follow explicit directions rather than just continuing a text pattern.
The fundamental paradigm of this interaction is simple but absolute: the quality of the input (the prompt) directly dictates the quality and accuracy of the output (the response).

2. The Two Pillars of Effective Prompting

Every successful interaction with an LLM rests on two non-negotiable principles. Neglecting either leads to unpredictable, generic, or logically flawed results.
1. Clarity and Specificity

Ambiguity is the primary enemy of quality AI output. Models cannot read a developer’s mind or infer hidden contexts that are omitted from the prompt. When an instruction is vague, the model is forced to “guess,” often resulting in a generic “average response” that fails to meet specific technical requirements. A specific prompt must act as an explicit manual. For instance, rather than asking to “summarize an email,” a professional prompt specifies the role (Executive Assistant), the target audience (a Senior Manager), the focus (required actions and deadlines), and the formatting constraints (three key bullet points).

Vague prompt (avoid): “Summarize this email.”
Specific prompt (corporate standard): “Act as an executive assistant. Summarize the following email in 3 key bullet points for my manager. Focus on required actions and deadlines. Omit greetings.”

Vague prompt (avoid): “Do something about marketing.”
Specific prompt (corporate standard): “Generate 5 Instagram post ideas for the launch of a new tech product, each including an opening hook and a call-to-action.”

 

 

2. Allowing Time for Reasoning
LLMs are prone to logical errors when forced to provide a final answer immediately—a phenomenon described as “impulsive reasoning.” This is particularly evident in mathematical logic or complex architectural problems. The solution is to explicitly instruct the model to “think step-by-step.” This technique, known as Chain-of-Thought (CoT), forces the model to calculate intermediate steps and verify its own logic before concluding. By breaking a complex task into a sequence of simpler sub-tasks, the reliability of the output increases exponentially.
3. Precision Structuring Tactics
To transform a vague request into a high-precision technical order, developers should utilize five specific tactics.
• Role Assignment (Persona): Assigning a persona—such as “Software Architect” or “Cybersecurity Expert”—activates specific technical vocabularies and restricts the model’s probabilistic space toward expert-level responses. It moves the AI away from general knowledge toward specialized domain expertise.
• Audience and Tone Definition: It is imperative to specify the recipient of the information. Explaining a SQL injection to a non-technical manager requires a completely different lexicon and level of abstraction than explaining it to a peer developer.
• Task Specification: The central instruction must be a clear, measurable action. A well-defined task eliminates ambiguity regarding the expected outcome.
• Contextual Background: Because models lack access to private internal data or specific business logic, developers must provide the necessary background information, project constraints, and specific data within the prompt ecosystem.
• Output Formatting: For software integration, leaving the format to chance is unacceptable. Demanding predictable structures—such as JSON arrays, Markdown tables, or specific code blocks—is critical for programmatic parsing and consistency.
Technical Delimiters Protocol
To prevent “Prompt Injection” and ensure application robustness, instructions must be isolated from data using:
• Triple quotes (“””): For large blocks of text.
• Triple backticks (`): For code snippets or technical data.
• XML tags (<tag>): Recommended standard for organizing hierarchical information.
• Hash symbols (###): Used to separate sections of instructions.
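A minimal sketch (hypothetical helper, plain JavaScript) of how these tactics and delimiters can be combined into one structured prompt:

// Assemble a structured prompt from persona, audience, task, format, and delimited data.
// All names and the sample email are illustrative.
function buildPrompt(role, audience, task, outputFormat, data) {
    return [
        `Act as ${role}. The reader is ${audience}.`,
        `Task: ${task}`,
        `Output format: ${outputFormat}`,
        'The text to process is delimited by triple quotes:',
        '"""',
        data,
        '"""'
    ].join('\n');
}

const prompt = buildPrompt(
    'an executive assistant',
    'a senior manager with no technical background',
    'Summarize the email in 3 bullet points, focusing on actions and deadlines.',
    'Markdown bullet list',
    'Hi team, the Q3 budget review moved to Friday 10am; please send your numbers by Thursday noon.'
);
console.log(prompt);
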
Once the basic structure is mastered, the standard should address highly complex tasks using advanced reasoning.
4. Advanced Reasoning and In-Context Learning
Advanced development requires moving beyond simple “asking” to “training in the moment,” a concept known as In-Context Learning.
Shot Prompting: Zero, One, and Few-Shot
• Zero-Shot: Requesting a task directly without examples. This works best for common, direct tasks the model knows well.
• One-Shot: Including a single example to establish a basic pattern or format.
• Few-Shot: Providing multiple examples (usually 2 to 5). This allows the model to learn complex data classification or extraction patterns by identifying the underlying rule from the history of the conversation.
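For example, a few-shot prompt for sentiment classification might be assembled like this (a sketch; the reviews and labels are made up):

// Two labeled examples establish the input/output pattern before the real item.
const examples = [
    { text: 'The update fixed my crash on startup, great release!', label: 'positive' },
    { text: 'Support never answered my ticket and the bug is still there.', label: 'negative' }
];
const newReview = 'Installation was smooth but the app drains my battery.';

const lines = ['Classify each review as positive or negative.', ''];
for (const ex of examples) {
    lines.push(`Review: ${ex.text}`, `Label: ${ex.label}`, '');
}
lines.push(`Review: ${newReview}`, 'Label:');

console.log(lines.join('\n'));
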
Task Decomposition
This involves breaking down a massive, complex process into a pipeline of simpler, sequential actions. For example, rather than asking for a full feature implementation in one go, a developer might instruct the model to: 1. Extract the data requirements, 2. Design the data models, 3. Create the repository logic, and 4. Implement the UI. This grants the developer superior control and allows for validation at each intermediate step.
ReAct (Reasoning and Acting)
ReAct is a technique that combines reasoning with external actions. It allows the model to alternate between “thinking” and “acting”—such as calling an API, performing a web search, or using a specific tool—to ground its final response in verifiable, up-to-date data. This drastically reduces hallucinations by ensuring the AI doesn’t rely solely on its static training data.
5. Context Engineering: The Data Ecosystem
Prompting is only one component of a larger system. Context Engineering is the design and control of the entire environment the model “sees” before generating a response, including conversation history, attached documents, and metadata.
Three Strategies for Model Enhancement
1. Prompt Engineering: Designing structured instructions. It is fast and cost-free but limited by the context window’s token limit.
2. RAG (Retrieval-Augmented Generation): This technique retrieves relevant documents from an external database (often a vector database) and injects that information into the prompt. It is the gold standard for handling dynamic, frequently changing, or private company data without the need to retrain the model.
3. Fine-Tuning: Retraining a base model on a specific dataset to specialize it in a particular style, vocabulary, or domain. This is a costly and slow strategy, typically reserved for cases where prompting and RAG are insufficient.
The industry “Golden Rule” is to start with Prompt Engineering, add RAG if external data is required, and use Fine-Tuning only as a last resort for deep specialization.
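To make the RAG step concrete, here is a toy sketch (plain JavaScript; the word-overlap scoring is a stand-in for real embeddings and a vector database):

// Rank documents by naive word overlap with the query, then inject the
// best matches into the prompt. Real systems use embeddings instead.
const documents = [
    'Refund policy: customers may request refunds within 30 days of purchase.',
    'Shipping policy: orders ship within 2 business days.',
    'Security policy: passwords rotate every 90 days.'
];
const query = 'How many days does a customer have to request a refund?';

const score = (q, doc) => {
    const words = new Set(q.toLowerCase().split(/\W+/));
    return doc.toLowerCase().split(/\W+/).filter(w => words.has(w)).length;
};

const topDocs = [...documents].sort((a, b) => score(query, b) - score(query, a)).slice(0, 2);

const prompt = `Answer using only the context below.\n<context>\n${topDocs.join('\n')}\n</context>\nQuestion: ${query}`;
console.log(prompt);
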
6. Technical Optimization and the Context Window
The context window is the “working memory” of the model, measured in tokens. A token is roughly equivalent to 0.75 words in English or 0.25 words in Spanish. Managing this window is a technical necessity for four reasons:
• Cost: Billing is usually based on the total tokens processed (input plus output).
• Latency: Larger contexts require longer processing times, which is critical for real-time applications.
• Forgetfulness: Once the window is full, the model begins to lose information from the beginning of the session.
• Lost in the Middle: Models tend to ignore information located in the center of extremely long contexts, focusing their attention only on the beginning and the end.
Optimization Strategies
Effective context management involves progressive summarization of old messages, utilizing “sliding windows” to keep only the most recent interactions, and employing context caching to reuse static information without incurring reprocessing costs.
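A minimal sketch of the sliding-window idea (plain JavaScript; the summarize helper is a hypothetical stand-in for a real summarization call):

// Keep the last N turns verbatim and fold older turns into a summary placeholder.
function summarize(messages) {
    return `[summary of ${messages.length} earlier messages]`;
}

function buildContext(history, windowSize = 4) {
    if (history.length <= windowSize) return history;
    const older = history.slice(0, history.length - windowSize);
    const recent = history.slice(history.length - windowSize);
    return [summarize(older), ...recent];
}

const history = Array.from({ length: 10 }, (_, i) => `turn ${i + 1}`);
console.log(buildContext(history));
// -> [ '[summary of 6 earlier messages]', 'turn 7', 'turn 8', 'turn 9', 'turn 10' ]
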
7. Markdown: The Communication Standard

Markdown has emerged as the de facto standard for communicating with LLMs. It is preferred over HTML or XML because of its token efficiency and clear visual hierarchy. Its predictable syntax makes it easy for models to parse structure automatically. In software documentation, Markdown facilitates the clear separation of instructions, code blocks, and expected results, enhancing the model’s ability to understand technical specifications.

Token Efficiency Analysis

The choice of format directly impacts cost and latency:

  • Markdown (# Title): 3 tokens.
  • HTML (<h1>Title</h1>): 7 tokens.
  • XML (<title>...</title>): 10 tokens.

Corporate Syntax Manual

• Hierarchy (# / ## / ###): Defines information architecture.
• Emphasis (**bold**): Highlights critical constraints.
• Isolation (```): Separates code and data from instructions.

 

8. Contextualization for AI Coding Agents
AI coding agents like Cursor or GitHub Copilot require specific files that function as “READMEs for machines.” These files provide the necessary context regarding project architecture, coding styles, and workflows to ensure generated code integrates seamlessly into the repository.
• AGENTS.md: A standardized file in the repository root that summarizes technical rules, folder structures, and test commands.
• CLAUDE.md: Specific to Anthropic models, providing persistent memory and project instructions.
• INSTRUCTIONS.md: Used by tools like GitHub Copilot to understand repository-specific validation and testing flows.
By placing these files in nested subdirectories, developers can optimize the context window; the agent will prioritize the local context of the folder it is working in over the general project instructions, reducing noise.
9. Dynamic Context: Anthropic Skills
One of the most powerful innovations in context management is the implementation of “Skills.” Instead of saturating the context window with every possible instruction at the start, Skills allow information to be loaded in stages as needed.
A Skill consists of three levels:
1. Metadata: Discovery information in YAML format, consuming minimal tokens so the model knows the skill exists.
2. Instructions: Procedural knowledge and best practices that only enter the context window when the model triggers the skill based on the prompt.
3. Resources: Executable scripts, templates, or references that are launched automatically on demand.
This dynamic approach allows for a library of thousands of rules—such as a company’s entire design system or testing protocols—to be available without overwhelming the AI’s active memory.
10. Workflow Context Typologies
To structure AI-assisted development effectively, three types of context should be implemented:
1. Project Context (Persistent): Defines the tech stack, architecture, and critical dependencies (e.g., PROJECT_CONTEXT.md).
2. Workflow Context (Persistent): Specifies how the AI should act during repetitive tasks like bug fixing, refactoring, or creating new features (e.g., WORKFLOW_FEATURE.md).
3. Specific Context (Temporary): Information created for a specific session or a single complex task (e.g., an error analysis or a migration plan) and deleted once the task is complete.
A practical example of this is the migration of legacy code. A developer can define a specific migration workflow that includes manual validation steps, turning the AI into a highly efficient and controlled refactoring tool rather than a source of technical debt.
Conclusion: The Role of the Context Architect
In the era of AI-assisted programming, success does not rely solely on the raw power of the models. It depends on the software engineer’s ability to orchestrate dialogue and manage the input data ecosystem. By mastering prompt engineering tactics and the structures of context engineering, developers transform LLMs from simple text assistants into sophisticated development companions. The modern developer is evolving into a “Context Architect,” responsible for directing the generative capacity of the AI toward technical excellence and architectural integrity. Mastery of language logic is no longer optional; it is the definitive tool of the Software Engineer 2.0.
Simplifying API Testing: GET Requests Using Karate Framework https://blogs.perficient.com/2026/02/16/simplifying-api-testing-get-requests-using-karate-framework/ https://blogs.perficient.com/2026/02/16/simplifying-api-testing-get-requests-using-karate-framework/#respond Mon, 16 Feb 2026 06:07:23 +0000 https://blogs.perficient.com/?p=369929

The GET HTTP method is commonly used to retrieve data from a server. In this blog, we’ll explore how to automate GET requests using the Karate testing framework, a powerful tool for API testing that supports both BDD-style syntax and rich validation capabilities.

We’ll cover multiple scenarios starting from a simple GET call to advanced response validation using files and assertions.

Step 1: Creating the Feature File

To begin, create a new feature file named GetApi.feature in the directory:

/src/test/java/features

Ensure the file has a valid .feature extension. We’ll use the ReqRes API, a public API that provides dummy user data.

Scenario 1: A Simple GET Request

Feature: Get Api feature

  Scenario: Get API Request
    Given url 'https://reqres.in/api/users?page=2'
    When method GET
    Then status 200
    And print response

Step-by-Step Breakdown:

  • Sends a GET request to https://reqres.in/api/users?page=2
  • Asserts the status code is 200
  • Prints the response for visibility.

Sample Response:

{
  "page": 2,
  "per_page": 6,
  "total": 12,
  "data": [
    {
      "id": 7,
      "email": "michael.lawson@reqres.in",
      "first_name": "Michael",
      "last_name": "Lawson",
      "avatar": "https://reqres.in/img/faces/7-image.jpg"
    },
    {
      "id": 8,
      "email": "lindsay.ferguson@reqres.in",
      "first_name": "Lindsay",
      "last_name": "Ferguson",
      "avatar": "https://reqres.in/img/faces/8-image.jpg"
    }
    // additional data here...
  ]
}

If the expected status is changed, for example to 201, the scenario will fail:

GetApi.feature:11 - status code was: 200, expected: 201, response time: 1786, url: https://reqres.in/api/users?page=2

Scenario 2: GET Request with Background

Feature: Get Api feature

  Background:
    Given url 'https://reqres.in/api'
    And header Accept = 'application/json'

  Scenario: Get API Request with Background 
    Given path '/users?page=2'
    When method GET
    Then status 200
    And print responseStatus

The Background section allows us to define common settings such as URLs and headers once and reuse them across multiple scenarios.

Scenario 3: GET Request with Query Parameter

Feature: Get Api feature

  Background:
    Given url 'https://reqres.in/api'
    And header Accept = 'application/json'

  Scenario: Get API Request with Query Parameter 
    Given path '/users'
    And param page = 2
    When method GET
    Then status 200
    And match header Connection == 'keep-alive'
    And print "response time: " + responseTime

This approach explicitly adds query parameters using param, improving flexibility.

Output:

[print] response time: 1319

Scenario 4: Verifying the Response with Assertions

In Karate, assertions are used to validate the behavior of APIs and other test scenarios. These assertions help verify the response values, structure, status codes, and more. Karate provides several built-in functions for assertions, making it easy to validate complex scenarios.

The match keyword is the most common assertion method in Karate. It can be used to match:

  • Simple values (strings, numbers, etc.)
  • Complex objects (arrays, nested structures)

Karate allows rich assertion syntax using the match keyword. It supports:

  • Exact Matching

  • Partial Matching

  • Fuzzy Matching

  • Boolean Assertions

Examples:

Exact Matching

Scenario: Validate exact match
  Given url 'https://reqres.in/api/users/2'
  When method GET
  Then match response.data.first_name == 'Janet'

Partial Matching

Scenario: Validate partial match
  Given url 'https://reqres.in/api/users/2'
  When method GET
  Then match response contains deep { "data": { "first_name": "Janet" } }

Fuzzy Matching:

Fuzzy matching uses marker placeholders to validate the type or structure of a field rather than its exact value:

Scenario: Fuzzy match example
  Given url 'https://reqres.in/api/users/2'
  When method GET
  Then match response.data == { id: '#number', email: '#string', first_name: '#string', last_name: '#string', avatar: '#string' }

  • #number and #string are placeholders that match any numeric or string value, respectively
  • Karate also supports using assert for boolean expressions; the expression is evaluated and the step fails unless it is true (see the example below)
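
For example, building on the sample response shown earlier, assert can check boolean conditions directly (the expected values below simply mirror that sample response):

Scenario: Boolean assertions with assert
  Given url 'https://reqres.in/api/users?page=2'
  When method GET
  Then status 200
  And assert response.page == 2
  And assert response.data.length == 6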

Combined Assertions

Feature: Get Api feature

  Background:
    * Given url 'https://reqres.in/api'
    * And header Accept = 'application/json'

  Scenario: Get API Request with Assertions
    Given path '/users'
    And param page = 2
    When method GET
    Then status 200
    And match response.data[0].first_name != null
    And assert response.data.length == 6
    And match response.data contains deep { "first_name": "Michael" }
    And match response.support contains { "text": "To keep ReqRes free, contributions towards server costs are appreciated!" }
    And match header Content-Type contains 'application/json'
    And match header Connection == 'keep-alive'

Scenario 5: Validating Responses Using External Files

File validation using the Karate framework can be achieved by verifying the content, existence, and attributes of files in various formats (e.g., JSON, XML, CSV, or even binary files). Karate provides built-in capabilities to handle file operations, such as reading files, validating file contents, and comparing files.

In this scenario:

  • We use the read function to load the expected payload from an external JSON file (JsonResponse.json) located relative to the feature file.
  • The match step is used to validate the content of the response against the expected data in the file.
Feature: Get Api feature
  
  Background: set up the base url
     * Given url 'https://reqres.in'  
     * And header Accept = 'application/json'
  
  Scenario: Get API Request with File Validation
    Given path '/api/users?page=2'
    When method get
    Then status 200
    * def actualResponse = read("../JsonResponse.json") // store the expected data from the external JSON file in a variable
    And print 'File -->', actualResponse
    And match response == actualResponse
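
For reference, JsonResponse.json simply holds the full payload we expect back. A truncated sketch based on the sample response shown earlier might look like the following; for the exact == match to pass, the real file must mirror every field the API returns:

{
  "page": 2,
  "per_page": 6,
  "total": 12,
  "data": [
    {
      "id": 7,
      "email": "michael.lawson@reqres.in",
      "first_name": "Michael",
      "last_name": "Lawson",
      "avatar": "https://reqres.in/img/faces/7-image.jpg"
    }
    // additional users and any other response fields here...
  ]
}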

In this post, we’ve demonstrated how to:

  • Create a basic GET request in Karate

  • Use background steps for reusability

  • Pass query parameters

  • Assert and validate API responses

  • Leverage file-based validation

The Karate framework makes it easy to write expressive, maintainable, and powerful API tests. By combining built-in features like match, assert, and read, you can create robust test suites to ensure your APIs behave as expected.

By leveraging scenarios, parameters, and assertions, we can effectively automate and validate API requests.

]]>
https://blogs.perficient.com/2026/02/16/simplifying-api-testing-get-requests-using-karate-framework/feed/ 0 369929
Cesar Martinez Hernandez Builds Confidence Through Salesforce Innovation and AI https://blogs.perficient.com/2026/02/13/cesar-martinez-hernandez-builds-confidence-through-salesforce-innovation-and-ai/ https://blogs.perficient.com/2026/02/13/cesar-martinez-hernandez-builds-confidence-through-salesforce-innovation-and-ai/#respond Fri, 13 Feb 2026 18:14:14 +0000 https://blogs.perficient.com/?p=390227

Meet Cesar Martinez Hernandez, a technical architect in our Salesforce Business Unit, whose dedication to innovation, mentorship, and client success drives meaningful impact across teams and projects. Cesar combines deep technical expertise with a collaborative mindset, helping clients unlock the full potential of Salesforce while fostering growth and knowledge-sharing within our Latin America and U.S. practices.

He is passionate about continuous learning and emerging technologies like AI and ensures solutions are not only effective but forward-thinking. In this People of Perficient profile, we’ll explore Cesar’s journey, his approach to problem-solving, and how he is building client confidence through Salesforce solutions.

What is your role? Describe a typical day in the life.

I’m a technical architect, and my day-to-day work varies depending on the project. My mornings start with coffee and reviewing emails from the previous day. After checking and replying to emails, I set priorities and outline tasks to accomplish.

Outside of projects, I’m working with our Latin America Salesforce practice, supporting training activities and coordinating talks on topics like Experience Cloud and Salesforce Field Service. I also work to bring colleagues from other practices into Salesforce, showcasing that the platform’s breadth goes far beyond what many people expect.

For my project, I typically meet with the client to discuss functionality and implementation. A large part of my role involves setting clear expectations. I research solutions, present ideas with pros and cons, and ensure alignment. While much of my work involves analysis and research, I also enjoy coding. The project I’m currently working on allows me to implement features as well as research. I work with new Salesforce tools not only in a technical way, but also in a practical way.

Whether big or small, how do you make a difference for our clients, colleagues, communities, or teams?

I enjoy mentoring junior colleagues and encouraging them to find joy in working with Salesforce. When I joined Salesforce from a Java environment, it wasn’t a planned move, but I quickly grew to love it. I support my teammates by answering their questions and helping them succeed. For clients, I strive to deliver what they want—and more—by finding the best solutions and guiding them toward success.

What are your proudest accomplishments, personally and professionally? Any milestone moments at Perficient?

Every project that I’ve worked on has left a positive mark. Often, I’ve come back to the project to assist, and the clients are genuinely happy to see me return. They really appreciate the work I’ve done. That recognition and appreciation for my work makes me proud.

How have you helped advance Perficient’s business during your career here?

Based on our quality of work, clients have built trust in Perficient. I’m part of a strong team. Clients recognize my work and have confidence that our team at Perficient will deliver. Our long-term commitment and hard work form the foundation of our clients’ trust, as they know they can rely on Perficient for consistent excellence.

 

 

What goals do you have for personal and professional development?

I aim to become a senior technical architect and am using Perficient’s career path resource to identify the skills I need. I’m building experience by seeking out challenging projects that involve the features and experiences relevant to that role. Working with the Latin America practice also helps me build relationships with colleagues in Latin America and U.S. regions.

READ MORE: Our Colleagues Are Embracing Career Development Through Growth for Everyone

How does the team approach complex problem-solving and innovative solutions in Salesforce?

One of our projects required exporting a big dataset, but the client didn’t have a hierarchical relationship with their objects. We built that relationship, generated a report, and exported the report to an Excel file. At the time, Salesforce didn’t support compressing files into a single export, so I split the data into multiple files within Salesforce file size limits. Despite the challenge, we delivered successfully, and the client was happy with the final implementation.

What’s unique about the Salesforce team culture at Perficient?

The people at Perficient are exceptional. Every time I need support or have a question, there is always someone with expertise who can support me. Our team is collaborative and willing to help, no matter where they’re located.

With Perficient’s focus as an AI-first company, how do you use AI professionally?

It is exciting to use AI, and there’s something new to learn every day. Tasks that would have taken hours, like building a component, can now be done in seconds. You’re still responsible for the logic and foundation of your work, but it’s a great tool to find issues in a piece of code. AI can provide ideas, hints, or even solutions. It helps consultants build smarter and focus on higher-value tasks.

READ MORE: Learn How Perficient’s Salesforce Practice is Revolutionizing Customer Engagement with AI

Could you share more about your experience with AI and its impact on Perficient’s Salesforce practice?

Salesforce is investing heavily in Agentforce. I feel that companies don’t fully understand how to implement Agentforce effectively yet to improve their daily business. Our role at Perficient is to provide that extra knowledge so they can improve their results. Staying current with these features will be key to helping companies achieve their goals and deliver measurable business outcomes.

READ MORE: Understand How Perficient is Building an AI-First Enterprise

What do you like to do outside of work?

I love being outdoors. We live by a lot of trees, and my son is starting to ride his bike. I try to be with him as much as I can and visit the parks nearby.

How has moving to Maryland impacted your personal and professional experience?

Moving from Mexico to the U.S. was a great opportunity. My family decided it was the best thing for us, and we don’t regret it at all. It has been great to be able to work while enjoying the outdoors and bike trails with my family. Professionally, the move has opened doors to new projects and connections within Perficient’s Salesforce team.

SEE MORE PEOPLE OF PERFICIENT 

It’s no secret our success is because of our people. No matter the technology or time zone, our colleagues are committed to delivering innovative, end-to-end digital solutions for the world’s most innovative companies, and we bring a collaborative spirit to every interaction. We’re always seeking the best and brightest to work with us. Join our team and experience a culture that challenges, champions, and celebrates our people.

Learn more about what it’s like to work at Perficient at our Careers page. See open jobs or join our talent community for career tips, job openings, company updates, and more!

Go inside Life at Perficient and connect with us on LinkedIn, YouTube, X, Facebook, and Instagram.

]]>
https://blogs.perficient.com/2026/02/13/cesar-martinez-hernandez-builds-confidence-through-salesforce-innovation-and-ai/feed/ 0 390227
Building a Marketing Cloud Custom Activity Powered by MuleSoft https://blogs.perficient.com/2026/02/12/building-a-marketing-cloud-custom-activity-powered-by-mulesoft/ https://blogs.perficient.com/2026/02/12/building-a-marketing-cloud-custom-activity-powered-by-mulesoft/#comments Thu, 12 Feb 2026 17:37:13 +0000 https://blogs.perficient.com/?p=390190

The Why…

Salesforce Marketing Cloud Engagement is incredibly powerful at orchestrating customer journeys, but it was never designed to be a system of record. Too often, teams work around that limitation by copying large volumes of data from source systems into Marketing Cloud data extensions—sometimes nightly, sometimes hourly—just in case the data might be needed in a journey. This approach works, but it comes at a cost: increased data movement, synchronization challenges, latency, and ongoing maintenance that grows over time.

Custom Activities, which are surfaced in Journey Builder, open the door to a different model. Instead of forcing all relevant data into Marketing Cloud ahead of time, a journey can request exactly what it needs at the moment it needs it. When you pair a Custom Activity with MuleSoft, Marketing Cloud can tap into real-time, orchestrated data across your enterprise—without becoming another place where that data has to live.

Example 1: Weather

Consider a simple example like weather-based messaging. Rather than pre-loading weather data for every subscriber into a data extension, a Custom Activity can call an API at decision time, retrieve the current conditions for a customer’s location, and immediately branch the journey or personalize content based on the response. The data is used once, in context, and never stored unnecessarily inside Marketing Cloud.

Example 2: Enterprise Data

The same pattern becomes even more compelling with enterprise data. Imagine a post-purchase journey that needs to know the current status of an order, a shipment, or a service case stored in a system like Data 360. Instead of replicating that operational data into Marketing Cloud—and keeping it in sync—a Custom Activity can call MuleSoft, which in turn retrieves and aggregates the data from the appropriate back-end systems and returns only what the journey needs to proceed.

Example 3: URL Shortener for SMS (Real-Time)

While Marketing Cloud Engagement provides its own form of a URL shortener, some companies want to use Bitly. Typically, using a Bitly URL meant moving our logic to Server-Side JavaScript (SSJS) so the API call to Bitly could be made there, and then using the returned URL in the text message. SSJS forces us into Automation Studio, which cannot run in real time and must be scheduled. This is an important point: being able to make API calls within the flow of a Journey is very powerful and helps meet more real-time use cases. With these Custom Activities we can ask MuleSoft to call the Bitly API, which returns the shortened URL so it can be used in the email or SMS message.
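
To make this concrete, the call MuleSoft makes on the journey’s behalf is just a standard Bitly v4 shorten request. The sketch below is purely illustrative — the shortenUrl function and the BITLY_TOKEN environment variable are hypothetical placeholders, not part of the MuleSoft flow itself:

// Illustrative only: the kind of request MuleSoft would make to Bitly on behalf of the journey.
async function shortenUrl(longUrl) {
  const res = await fetch('https://api-ssl.bitly.com/v4/shorten', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.BITLY_TOKEN}`, // hypothetical env variable
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ long_url: longUrl }),
  });
  if (!res.ok) throw new Error(`Bitly call failed: ${res.status}`);
  const { link } = await res.json();
  return link; // the shortened URL, returned to the journey as an outArgument
}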

This is where MuleSoft truly shines. It acts as a clean abstraction layer between Marketing Cloud and your enterprise landscape, handling authentication, transformation, orchestration, and governance. Marketing Cloud stays focused on customer engagement, while MuleSoft owns the complexity of integrating with source systems. The result is a more scalable, real-time, and maintainable architecture—one that reduces data duplication, respects system boundaries, and enables richer, more contextual customer experiences.

The How….

So how does this actually work in practice? In the next section, we’ll walk through how a Marketing Cloud Custom Activity can call a MuleSoft API in the middle of a Journey, receive a response in real time, and use that data to drive decisions or personalization. We’ll focus on the key building blocks—what lives in Marketing Cloud, what belongs in MuleSoft, and how the two communicate—so you can see how this pattern comes together without turning Marketing Cloud into yet another integration layer.

Part 1 – Hosted Files

Every Marketing Cloud Custom Activity starts with hosted files. These files provide the user interface and configuration that Journey Builder interacts with, making them the foundation of the entire solution. At a minimum, this includes five main files/folders.

  1. index.html – This is what you see in Journey Builder when you click on the Custom Activity to configure it.
  2. config.json – This holds the Mulesoft endpoint to call and what output arguments will be used.
  3. customactivity.js – The JavaScript running behind the index.html page.
  4. postmonger.js – More JavaScript to support the index.html page.
  5. A folder called images must exist and a single icon.png image should exist in it.  This image is shown within Journey Builder.

Screenshot: the Custom Activity hosted files

These files tell Marketing Cloud how the activity behaves, what endpoints it uses, and how it appears to users when they drag it onto a journey. While the business logic ultimately lives elsewhere, within Mulesoft in our example, hosted files are what make the Custom Activity feel native inside Journey Builder.

In this pattern, hosted files are intentionally lightweight. Their primary responsibility is to capture configuration input from the marketer—such as which API operation to call, optional parameters, or behavior flags—and pass that information along when the journey executes. They are not responsible for complex transformations, orchestration, or direct system-to-system integrations. By keeping the hosted files focused on presentation and configuration, you reduce coupling with backend systems and make the Custom Activity easier to maintain, update, and reuse across different journeys.

If you want to try this yourself, GitHub is a convenient place for a simple proof of concept. You can easily create these four files and one folder in a repo. If you use GitHub, you do have to use the Pages functionality to make that repo publicly accessible. This public URL will then be used when we configure the ‘Installed Package’ in Marketing Cloud Engagement later.

In production, Custom Activity config.json and UI assets should be hosted on an enterprise‑grade HTTPS platform like Azure App Service, AWS CloudFront/S3, or Heroku—not GitHub.

One thing I had to overcome is that the config.json gets cached at the Marketing Cloud server level as talked about in this post.  So when I had to make changes to my config.json, I would create a new folder (v2 / v3) in my repository and then use that path in my Installed Package in the Component added in Journey Builder.

Part 2 – API Server – Mulesoft

This is really the beauty of the pattern. Instead of building API calls in SSJS that are hard to debug, difficult to scale, and hard to secure, we get to pass all of that off to an enterprise API platform like MuleSoft. It really is the best of both worlds. There are two main pieces on the MuleSoft side: A) five endpoints to develop and B) security.

The Five Endpoints.

Journey Builder uses four lifecycle endpoints to manage the activity and one execute endpoint to process each contact and return outArguments used for decisioning and personalization.

The five endpoints that have to be developed in Mulesoft are…

Endpoint  | Called When            | Per Contact? | Returns outArguments?
/save     | User saves config      | ❌           | ❌
/validate | User publishes         | ❌           | ❌
/publish  | Journey goes live      | ❌           | ❌
/execute  | Contact hits activity  | ✅           | ✅
/stop     | Journey stops          | ❌           | ❌

The save, validate, publish, and stop endpoints in MuleSoft need to return a 200 status code and, in the most basic example, can return an empty JSON body of {}.

The execute endpoint should also return a 200 status code along with simple JSON for any outArguments, for example: { "status": "myStatus" }
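
To make that contract explicit, here is a minimal sketch of the same five routes as a Node/Express app — not MuleSoft code, just an illustration of what each endpoint must return, which can also be handy for local prototyping before the MuleSoft flows exist:

// Sketch only: the response contract Journey Builder expects from each endpoint.
const express = require('express');
const app = express();
app.use(express.json());

// Lifecycle endpoints: a 200 with an empty JSON object is enough.
['save', 'validate', 'publish', 'stop'].forEach((name) => {
  app.post(`/marketingCloud/${name}`, (req, res) => res.status(200).json({}));
});

// Execute endpoint: runs once per contact and must return the outArguments.
app.post('/marketingCloud/execute', (req, res) => {
  // req.body carries the activity payload from Journey Builder (inArguments, etc.)
  res.status(200).json({ status: 'myStatus' });
});

app.listen(3000);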

The Security.

The first piece of security is configured in the config.json file. There is a useJwt key that can be either true or false for each endpoint. If it is true, MuleSoft will receive an encoded string signed with the JWT Signing Secret that was created from the Installed Package in Marketing Cloud. If useJwt is false, MuleSoft will just receive the plain JSON. For production-level work we should make sure useJwt is true.
We can also use an OAuth 2.0 Bearer Token. Either way, we want to make sure that our MuleSoft endpoints only respond to calls coming from Marketing Cloud Engagement.
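
To illustrate what that JWT check involves, here is a minimal sketch in Node.js using the jsonwebtoken package — not MuleSoft code, and the environment variable name is just an example. Verification fails for any caller that does not hold the signing secret, which is what keeps the endpoints closed to anything but Marketing Cloud Engagement:

// Sketch only: verifying the signed payload Journey Builder sends when useJwt is true.
const jwt = require('jsonwebtoken');

// The JWT Signing Secret generated by the Installed Package (env var name is an example).
const JWT_SIGNING_SECRET = process.env.JB_JWT_SECRET;

function decodeJourneyPayload(rawBody) {
  // jwt.verify throws if the token was not signed with our secret,
  // so only requests originating from our Marketing Cloud package are accepted.
  return jwt.verify(rawBody, JWT_SIGNING_SECRET, { algorithms: ['HS256'] });
}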

Part 3 – Journey Builder – Custom Activities

Once the configuration details from the previous parts are set up, creating the Custom Activity and adding it to the Journey is pretty quick.
  1. Go to the ‘Installed Package’ in setup and create a new app following these steps.
    1. When you add your ‘Component’ to the Installed App, selecting ‘Customer Updates’ in the ‘Category’ drop-down worked for me.
    2. My ‘Endpoint URL’ had a format like this:  https://myname.github.io/my_repo_name/v3/
      Screenshot: the Installed Package component configuration
  2. Create a new Journey
  3. Your new Custom Activity will show up in the Components panel on the left-hand side. Since we selected ‘Customer Updates’ in step 1 above, our ‘Send to MuleSoft V3a’ Custom Activity shows in that section. The name under the icon comes from the config.json file. The image is the icon.png from the images folder.
    Screenshot: the Custom Activity in the Journey Builder components panel
  4. Once you drag your Custom Activity onto the Journey Builder page you will be able to click on it to configure it.
  5. The user interface from the index.html will display when you click on it so you can configure your Custom Activity.  Note that this user interface could be changed to collect whatever configuration needs to be collected.
    Screenshot: the Custom Activity configuration UI rendered from index.html
  6. When the ‘Done’ buttons are clicked on the page, the JavaScript runs and saves the configuration details into Journey Builder itself. In my example the gray and blue ‘Done’ buttons are hooked to the same JavaScript and do the same thing.

Part 4 – How to use the Custom Activity

outArguments

Now that we have our Custom Activity configured and in our journey, the integration with MuleSoft becomes a configuration detail, which is great for admins. In the config.json file there are two places where the outArguments are placed.
The first is in the arguments section towards the top. Here I can provide a default value for my status field, which in this case is the very intuitive “DefaultStatus”. 🙂
"arguments": {
   "execute": {
     "inArguments": [],
     "outArguments": [
       {
         "status": "DefaultStatus"
       }
     ],
     "url": "https://mymuleAPI.partofurl.usa-e1.cloudhub.io/api/marketingCloud/execute",
     "useJwt": false,
     "timeout": 60000,
     "retryCount": 3,
     "retryDelay": 3000,
     "concurrentRequests": 5
   }
 },

The second place is lower in the config.json file, in the schema section, and describes the actual data type for my output variable. We can see the status variable is a ‘Text’ field with access = visible and direction = out.

"schema":{
      "arguments":{
          "execute":{
              "inArguments": [],
              "outArguments":[
                  {
                      "status":{
                          "dataType":"Text",
                          "isNullable":true,
                          "access":"visible",
                          "direction":"out"
                      }
                  }
              ]
          }
      }
  }

Note in the example below that I did not use a typical status value like ‘Not Started’, ‘In Progress’, and ‘Done’. That would have made more sense. 🙂 Instead I was running five records through my journey with various versions of my last name: Luschen, Luschen2, Luschen3, Luschen4 and Luschen5. So MuleSoft basically received these different spellings through the JSON being passed over, parsed them out of the incoming JSON, and then injected them into the response JSON in the status field. This is what the incoming data extension looked like.

Screenshot: the incoming data extension

An important piece of JavaScript turned out to be setting the isConfigured flag to true in the customActivity.js file. This makes sure Journey Builder understands that the node has been configured when the journey is ‘Validated’ before it is ‘Activated’.

activity.metaData = activity.metaData || {};
activity.metaData.isConfigured = true;

Now that we have our ‘status’ field as an output from Mulesoft via the Custom Activity, I will describe how it can be used in either a Decision Split or some AmpScript.

Decision Split

The outArguments show up under the ‘Journey Data’ portion of the configuration screen.  Once you select the ‘status’ outArgument you configure the rest of the decision split like any other one you have built before.
Screenshots: configuring the Decision Split with the status outArgument

AmpScript

These outArguments are also available as send context attributes so they are easy to use in any manner you want within your AmpScript for either email or SMS personalization.
%%[
SET @status = AttributeValue("status")
]%%
%%=v(@status)=%%

The Wrap-up…

As the flexibility of these Custom Activities sinks in, you can see it opens up a lot of useful patterns. The more data we can surface to our marketing team, the more dynamic, personalized, and engaging the content will become. While we all see more campaigns and use cases being developed on the new Agentforce Marketing, we all know that Marketing Cloud Engagement has some legs to it yet. I hope this post has given you some ideas to make your Marketing team look like heroes as they use Journey Builder to its fullest potential!

I want to thank my Mulesoft experts Anusha Danda and Jana Pagadala for all of their help!

Please connect with me on LinkedIn for more conversations!  I am here to help make you a hero with your next Salesforce project.

Example Files…

Config.JSON

{  
  "workflowApiVersion": "1.1",
  "metaData": {
    "icon": "images/icon.png",
    "category": "customer",
    "isConfigured": true,
    "configOnDrop": false
  },
  "type": "REST",
  "lang": {
    "en-US": {
      "name": "Send to MuleSoft V3a",
      "description": "Calls MuleSoft to orchestrate downstream systems V3a."
    }
  },
  "arguments": {
    "execute": {
      "inArguments": [],
      "outArguments": [
        {
          "status": "DefaultStatus"
        }
      ],
      "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/execute",
      "useJwt": true,
      "timeout": 60000,
      "retryCount": 3,
      "retryDelay": 3000,
      "concurrentRequests": 5
    }
  },
  "configurationArguments": {
    "applicationExtensionKey": "MY_KEY_ANYTHING_I_WANT_MULESOFT_TEST",
    "save":    { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/save",    "useJwt": true },
    "publish": { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/publish", "useJwt": true },
    "validate":{ "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/validate","useJwt": true },
    "stop":    { "url": "https://myMuleAPI.rajrd4-1.usa-e1.cloudhub.io/api/marketingCloud/stop",    "useJwt": true }
  },
  "userInterfaces": {
    "configModal": { "height": 480, "width": 480 }
  },
  "schema":{
      "arguments":{
          "execute":{
              "inArguments": [],
              "outArguments":[
                  {
                      "status":{
                          "dataType":"Text",
                          "isNullable":true,
                          "access":"visible",
                          "direction":"out"
                      }
                  }
              ]
          }
      }
  }
}

Index.html

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Terry – JB → Mule Custom Activity</title>
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <style>
    body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial, sans-serif; margin: 24px; }
    label { display:block; margin-top: 16px; font-weight:600; }
    input, select, button { padding: 8px; font-size: 14px; }
    button { margin-top: 20px; }
    .hint { color:#666; font-size:12px; }
  </style>
</head>
<body>
  <h2>Send to MuleSoft – Custom Activity</h2>
  <p class="hint">Configure the API URL and (optionally) bind a Journey field.</p>

  <label for="apiUrl">MuleSoft API URL</label>
  <input id="apiUrl" type="url" placeholder="https://api.example.com/journey/execute2" style="width:100%" />

  <label for="fieldPicker">Bind a field from Entry Source (optional)</label>
  <select id="fieldPicker">
    <option value="">— none —</option>
  </select>

  <button id="done">Done</button>

  <!-- Postmonger must be local in your repo -->
  <script src="./postmonger.js"></script>
  <!-- Your Postmonger client logic -->
  <script src="./customActivity.js?v=2026-02-02v1"></script>
</body>
</html>

 

CustomActivity.js

/* global Postmonger */
(function () {
  'use strict';

  // Create the Postmonger session (bridge to Journey Builder)
  const connection = new Postmonger.Session();

  // Journey Builder supplies this payload when we call 'ready'
  let activity = {};
  let schema = [];
  let pendingSelectedField = null;  // holds saved token until options exist

  document.addEventListener('DOMContentLoaded', () => {
    // Listen to JB lifecycle events
    connection.on('initActivity', onInitActivity);
    connection.on('requestedTokens', onTokens);
    connection.on('requestedEndpoints', onEndpoints);
    connection.on('requestedSchema', onRequestedSchema); // common pattern in field pickers
    connection.on('clickedNext', onDone);

    // Signal readiness and request useful context
    connection.trigger('ready');
    connection.trigger('requestTokens');
    connection.trigger('requestEndpoints');

    // Optionally, ask for Entry Source schema (undocumented but widely used in the field)
    connection.trigger('requestSchema');

    // Bind UI
    document.getElementById('done').addEventListener('click', onDone);
  });

  function onInitActivity (payload) {
    activity = payload || {};
    // Re-hydrate UI if the activity is being edited
    try {
      const args = (activity.arguments?.execute?.inArguments || [])[0] || {};
      if (args.apiUrl) document.getElementById('apiUrl').value = args.apiUrl;
      if (args.selectedField) document.getElementById('fieldPicker').value = args.selectedField;
      pendingSelectedField = args.selectedField;
    } catch (e) {}
  }

  function onTokens (tokens) {
    // If you ever need REST/SOAP tokens, they arrive here
    // console.log('JB tokens:', tokens);
  }

  function onEndpoints (endpoints) {
    // REST base URL for BU, if you need it
    // console.log('JB endpoints:', endpoints);
  }

  function onRequestedSchema (payload) {
    schema = payload?.schema || [];
    const select = document.getElementById('fieldPicker');

    // Keep current value if re-opening
    const current = select.value;
    // Reset options (leave the first '— none —')
    select.length = 1;

    // Populate with Entry Source keys (e.g., {{Event.APIEvent-UUID.Email}})
    schema.forEach(col => {
      const opt = document.createElement('option');
      opt.value = `{{${col.key}}}`;
      opt.textContent = col.key.split('.').pop();
      select.appendChild(opt);
    });

    if (current) select.value = current;
    if (pendingSelectedField) select.value = pendingSelectedField;
    
  }

  function onDone () {
    const apiUrl = document.getElementById('apiUrl').value?.trim() || '';
    const selectedField = document.getElementById('fieldPicker').value || '';

    // Validate minimal config
    if (!apiUrl) {
      alert('Please provide a MuleSoft API URL.');
      return;
    }
    // alert(selectedField);

    // Build inArguments that JB will POST to /execute at run time
    const inArguments = [{
      apiUrl,            // static value from UI
      selectedField      // optional mustache ref to Journey Data
    }];

    // Mutate the activity payload we received and hand back to JB
    activity.arguments = activity.arguments || {};
    activity.arguments.execute = activity.arguments.execute || {};
    activity.arguments.execute.inArguments = inArguments;

    activity.metaData = activity.metaData || {};
    activity.metaData.isConfigured = true;

    // Tell Journey Builder to save this configuration
    connection.trigger('updateActivity', activity);
  }
})();

 

]]>
https://blogs.perficient.com/2026/02/12/building-a-marketing-cloud-custom-activity-powered-by-mulesoft/feed/ 3 390190
EDS Adventures – Integrating External Data and Building Custom Feature Blocks https://blogs.perficient.com/2026/02/11/eds-adventures-integrating-external-data-and-building-custom-feature-blocks/ https://blogs.perficient.com/2026/02/11/eds-adventures-integrating-external-data-and-building-custom-feature-blocks/#respond Wed, 11 Feb 2026 16:23:43 +0000 https://blogs.perficient.com/?p=390252

In Edge Delivery Services, you have good options for putting together engaging content. Adobe’s block collection has a considerable number of content shapes, providing a good base or starting point for your project. This is similar in purpose to the Sling + HTL-driven components provided by WCM Core Components. While similar in purpose, they are very different in design. EDS provides a more simplified process for creating authorable content, backed by an architecture that always executes optimally. EDS blocks can enable similar features to what a Sling-driven component might deliver. In this post, I’ll walk through how to build a custom block with unique capabilities and integrate third-party APIs, while keeping everything at the edge!

What We Will Build

Defining the Block

We’ll develop a block that retrieves data and uses it to directly change the rendered output. We’re using a simple use case for demonstration, but the same technique could be used to retrieve API data from any database, data warehouse, or repository.

The block we’re developing is for a fictional paint company’s color previewer. It allows users to preview different paint colors in a fictional coffee shop. This type of content would be useful for customers wanting to visualize how the paint colors might look in their real-life home or business.

The paint colors will be provided from an API, internally managed by our fictional paint company. The block consumes this data and uses it to render swatches. Upon click or tap of a swatch color, the walls in the image will update to render the selected color.

 

Most of this can be contained in a single GitHub repository, based on Adobe’s EDS boilerplate repo. The API will be provided as a Cloudflare worker (as I said, keeping everything at the edge).

 

This walkthrough assumes you already have an established AEM EDS project, with a provisioned Cloud Service tenant, programs, environments, and deploy pipelines. Please find details on setting up an EDS site and the Universal Editor here: https://www.aem.live/developer/ue-tutorial.

Architecture Quick Summary

We’re leveraging the pattern of authoring EDS pages in the AEM as a cloud service author tier, and publishing to EDS preview and publish tiers. All third-party API requests happen client-side.

High Level EDS and 3rd Party API View

Block Definition and Model

For an EDS project, particularly one based on Adobe’s EDS boilerplate, 3 key files are needed for defining a block and where it may be authored.

The component-definition.json file defines a block’s display name, id, resource type, and the name of its data model. For our block, we need to add the following object to the array in this file:

{
  "title": "Paint Room Preview",
  "id": "paint-room-preview",
  "plugins": {
    "xwalk": {
      "page": {
        "resourceType": "core/franklin/components/block/v1/block",
        "template": {
          "name": "Paint Room Preview",
          "model": "paint-room-preview"
        }
      }
    }
  }
}

The component-models.json file describes the block’s data model and authorable field types. For our block, we need to add the following to the array in this file:

{
  "id": "paint-room-preview",
  "fields": [
    {
      "component": "reference",
      "valueType": "url",
      "name": "baseImage",
      "label": "Paint Preview Base Image",
      "description": "The base image to recolor.",
      "multi": false
    },
    {
      "component": "reference",
      "valueType": "url",
      "name": "maskImage",
      "label": "Paint Preview Mask Image",
      "description": "Black/white mask defining which areas to recolor.",
      "multi": false
    },
    {
      "component": "reference",
      "valueType": "url",
      "name": "shadingImage",
      "label": "Paint Preview Shading Image",
      "description": "Shading image defining where to apply lights and shadows.",
      "multi": false
    }
  ]
 }

This configuration defines 3 image selection fields, allowing authors to pick one image as the base image, a layer mask version of that base image, and a shading version of that base image. This base image is changed by the color selection, with the colors applied in the specific areas defined by the mask, namely, the room’s walls. The shading image ensures the existing shadows and highlights are retained, so nothing is flattened or washed out. These 3 images are used by our block script to build a composite image based on the color selection. To the user, the paint color changes as if the walls were always the selected color.

Relating this to Sling, the component-filters.json is akin to a responsivegrid/layout-container allowed components policy. Our block id is “paint-room-preview”. In the file, we can add this into any block’s array list of components to allow our block to be added to that section of a page. This is sensible for blocks designed to contain other blocks, such as sections, lists, embeds, carousels, etc. We’ll add “paint-room-preview” to the section block’s filter list:

{
  "id": "section",
  "components": [
    "text",
    "image",
    "button",
    "title",
    "hero",
    "cards",
    "columns",
    "fragment",
    "paint-room-preview"
  ]
},

Block Functionality

Ok, now for the block itself, we need to create a JavaScript and CSS file. We’ll also create a helper method in the scripts/aem.js file to abstract API calls and allow for better re-use. In the project’s blocks folder, create a new folder named paint-room-preview. Then, in this folder, create a new file called paint-room-preview.js with the following contents:

import { fetchFromApi } from '../../scripts/aem.js';

export default async function decorate(block) {
  const COLORS_URL = 'https://yourdomain.com/colorapi/colors.json';
  const PAGE_SIZE = 30;
  const VISIBLE = 5;

  function ensureMarkup() {
    let root = block.querySelector('.paint-room-preview');
    if (!root) {
      root = document.createElement('div');
      root.className = 'paint-room-preview';
      block.appendChild(root);
    }

    let canvas = root.querySelector('#room-canvas');
    if (!canvas) {
      canvas = document.createElement('canvas');
      canvas.id = 'room-canvas';
      root.appendChild(canvas);
    }

    let nav = root.querySelector('.bm-nav');
    if (!nav) {
      nav = document.createElement('div');
      nav.className = 'bm-nav';
      nav.innerHTML = `
        <button id="bm-prev">Prev</button>
        <div id="bm-colors"></div>
        <button id="bm-next">Next</button>
      `;
      root.appendChild(nav);
    }
    return root;
  }

  const root = ensureMarkup();

  function findImageFromMarkup(prop) {
    const img = block.querySelector(`img[data-aue-prop="${prop}"]`);
    return img ? img.getAttribute('src') : '';
  }

  const baseImage = (block.dataset.baseImage?.trim())
    || (root.dataset.baseImage?.trim())
    || findImageFromMarkup('baseImage') || '';

  const maskImage = (block.dataset.maskImage?.trim())
    || (root.dataset.maskImage?.trim())
    || findImageFromMarkup('maskImage') || '';

  const shadingImage = (block.dataset.shadingImage?.trim())
    || (root.dataset.shadingImage?.trim())
    || findImageFromMarkup('shadingImage') || '';

  if (!baseImage || !maskImage || !shadingImage) {
    root.innerHTML = `
      <div style="border:1px dashed #ddd;padding:12px;border-radius:6px;color:#666;">
        Paint Room Preview requires Base Image, Mask Image, and Shading Image.
      </div>`;
    return;
  }

  const canvas = root.querySelector('#room-canvas');
  const ctx = canvas.getContext('2d');
  if (!ctx) return;

  const prevBtn = root.querySelector('#bm-prev');
  const nextBtn = root.querySelector('#bm-next');
  const colorsContainer = root.querySelector('#bm-colors');

  canvas.style.width = '100%';
  colorsContainer.style.display = 'flex';
  colorsContainer.style.gap = '10px';
  colorsContainer.style.flexWrap = 'wrap';
  colorsContainer.style.justifyContent = 'center';

  function loadImage(src) {
    return new Promise((resolve, reject) => {
      const img = new Image();
      img.crossOrigin = 'anonymous';
      img.onload = () => resolve(img);
      img.onerror = () => reject(new Error(`Failed loading image ${src}`));
      img.src = src;
    });
  }

  let imgBase;
  let imgMask;
  let imgShade;
  try {
    [imgBase, imgMask, imgShade] = await Promise.all([
      loadImage(baseImage),
      loadImage(maskImage),
      loadImage(shadingImage),
    ]);

    block.querySelectorAll('img[data-aue-prop]').forEach((img) => {
      const wrap = img.closest('picture,div') || img;
      wrap.style.display = 'none';
    });
  } catch (e) {
    // eslint-disable-next-line no-console
    console.error(e);
    root.innerHTML = '<div style="color:#b00">Error loading images.</div>';
    return;
  }

  canvas.width = imgBase.width;
  canvas.height = imgBase.height;
  ctx.drawImage(imgBase, 0, 0);

  function getMaskData() {
    const temp = document.createElement('canvas');
    temp.width = canvas.width;
    temp.height = canvas.height;
    const tctx = temp.getContext('2d');
    tctx.drawImage(imgMask, 0, 0, temp.width, temp.height);
    return tctx.getImageData(0, 0, temp.width, temp.height).data;
  }
  const maskData = getMaskData();

  function getShadeData() {
    const temp = document.createElement('canvas');
    temp.width = canvas.width;
    temp.height = canvas.height;
    const tctx = temp.getContext('2d');
    tctx.drawImage(imgShade, 0, 0, temp.width, temp.height);
    return tctx.getImageData(0, 0, temp.width, temp.height).data;
  }
  const shadeData = getShadeData();

  function hexToRgb(hex) {
    const h = hex.replace('#', '');
    return {
      r: parseInt(h.substring(0, 2), 16),
      g: parseInt(h.substring(2, 4), 16),
      b: parseInt(h.substring(4, 6), 16),
    };
  }
  function blend(base, target, amt) {
    return Math.round(base * (1 - amt) + target * amt);
  }

  function applyPaintHex(hex) {
    const tgt = hexToRgb(hex.startsWith('#') ? hex : `#${hex}`);

    // Step 1: reset base
    ctx.drawImage(imgBase, 0, 0);
    const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    const { data } = imgData;

    // Step 2: apply flat paint using alpha mask
    for (let i = 0; i < data.length; i += 4) {
      const maskVal = maskData[i] / 255;
      if (maskVal > 0.03) {
        data[i] = blend(data[i], tgt.r, maskVal);
        data[i + 1] = blend(data[i + 1], tgt.g, maskVal);
        data[i + 2] = blend(data[i + 2], tgt.b, maskVal);
      }
    }

    // Step 3: multiply wall shading (lighting pass)
    for (let i = 0; i < data.length; i += 4) {
      const maskVal = maskData[i] / 255;
      if (maskVal > 0.03) {
        const shade = shadeData[i] / 255; // grayscale
        data[i] = Math.round(data[i] * shade);
        data[i + 1] = Math.round(data[i + 1] * shade);
        data[i + 2] = Math.round(data[i + 2] * shade);
      }
    }

    ctx.putImageData(imgData, 0, 0);
  }

  let apiPage = 1;
  let pageIndex = 0;
  let colors = [];

  async function loadApiPage(p = 1) {
    try {
      const json = await fetchFromApi(COLORS_URL, {
        page: p,
        pageSize: PAGE_SIZE,
      });
      colors = Array.isArray(json.data) ? json.data : [];
      apiPage = json.page || p;
      pageIndex = 0;
    } catch (e) {
      // eslint-disable-next-line no-console
      console.error(e);
      colors = [];
    }
  }

  function renderSwatches() {
    colorsContainer.innerHTML = '';
    const start = pageIndex * VISIBLE;
    const slice = colors.slice(start, start + VISIBLE);

    if (slice.length === 0) {
      colorsContainer.innerHTML = '<div>No colors</div>';
      return;
    }

    slice.forEach((c, idx) => {
      const hex = (c.hex || '').replace('#', '');
      const name = c.name || `Color ${idx + 1}`;
      const sw = document.createElement('button');
      sw.style.width = '48px';
      sw.style.height = '48px';
      sw.style.borderRadius = '6px';
      sw.style.border = '1px solid #ddd';
      sw.style.background = `#${hex}`;
      sw.addEventListener('click', () => applyPaintHex(hex));

      const wrap = document.createElement('div');
      wrap.style.display = 'flex';
      wrap.style.flexDirection = 'column';
      wrap.style.alignItems = 'center';
      wrap.style.fontSize = '12px';
      wrap.style.color = '#333';
      wrap.style.minWidth = '64px';
      wrap.style.gap = '4px';

      const lbl = document.createElement('div');
      lbl.textContent = name;
      lbl.style.maxWidth = '72px';
      lbl.style.textOverflow = 'ellipsis';
      lbl.style.overflow = 'hidden';

      wrap.appendChild(sw);
      wrap.appendChild(lbl);
      colorsContainer.appendChild(wrap);
    });
  }

  if (pageIndex < 1) {
    prevBtn.disabled = true;
  }

  prevBtn.addEventListener('click', async () => {
    const maxIndex = Math.floor((colors.length - 1) / VISIBLE);

    if (pageIndex > 0) {
      pageIndex -= 1;
      renderSwatches();
      if (pageIndex < 1) {
        prevBtn.disabled = true;
      }
      if (pageIndex < maxIndex) {
        nextBtn.disabled = false;
      }
      return;
    }
    if (apiPage > 1) {
      await loadApiPage(apiPage - 1);
      pageIndex = Math.floor((colors.length - 1) / VISIBLE);
      renderSwatches();
    }
  });

  nextBtn.addEventListener('click', async () => {
    const maxIndex = Math.floor((colors.length - 1) / VISIBLE);

    if (pageIndex < maxIndex) {
      pageIndex += 1;
      if (pageIndex >= 1) {
        prevBtn.disabled = false;
      }
      renderSwatches();
      if (pageIndex === (maxIndex - 1)) {
        nextBtn.disabled = true;
      }
      return;
    }
    await loadApiPage(apiPage + 1);
    if (colors.length > 0) renderSwatches();
  });

  await loadApiPage(apiPage);
  renderSwatches();
  if (colors.length > 0 && colors[0].hex) applyPaintHex(colors[0].hex);
}

This script provides a decorate function, which is used to initialize and define the HTML DOM structure of the block. Within decorate we have methods and fields unique to this block’s custom functionality.

The ensureMarkup() method guarantees that required HTML is created, namely a root container div, a canvas element for our image previews, and a navigation div for paging through color swatches and selecting colors.

Several constants are also defined to ensure the required images are available. These attempt to pull the image URI values from the block’s data attributes, ensureMarkup’s containing div, or from img elements containing a specific attribute with a value matching the image type. If any one of the base, mask, or shading images is missing, the block renders text indicating that all are required. This is like Sling/HTL default content that may be rendered if a component instance is not yet authored.

Then details of the canvas are defined based on the Canvas API, to set up our photo manipulation in a 2D context.

The images are rendered from the previously defined URIs via a loadImage() method, which asynchronously loads an image and returns a Promise that resolves with the loaded image element. The base, mask, and shading images are simultaneously loaded. The author-selected images are hidden, as the canvas will render them as a combined composite image. The canvas width and height are defined, and the base image is drawn to it.

The getMaskData() and getShadeData() methods extract the pixel data from the mask and shade images using the Canvas API context’s getImageData() method. This returns an array of RGBA-formatted pixels for each of these images. These are drawn in off-screen canvases, and the pixel arrays are computed and cached once, then reused for every color change.

The hexToRgb() method converts hex color codes to RGB color values. The blend() method performs a smooth blending between a base and target value. These are each used in the applyPaintHex() method, which is where the key functionality takes place for painting! The base image is redrawn to obtain its pixel data (again as an array of RGBA pixels), and the mask data is used to determine which parts of the base image are “paintable”.

The blend() method is called to mix the original base image pixel data with the selected paint color, within the paintable areas derived from the mask data. Pixel data from the shading array is then applied to ensure the shadows and highlights of the base image are retained, so no depth is lost.

The loadApiPage() method is used to call for available colors from a third-party API service and uses a utility method from scripts/aem.js to make the request. The renderSwatches() method renders the colors as sets of swatches that the user can page through to select a color for painting. Buttons for this pagination are set up with click event handlers.

Third-party API requests

The previous section went over a substantial amount of the details for rendering the block.

While we could have contained everything there, it’s helpful in any modern project to modularize your code for reuse when possible. With that mindset, a utility method has been added to the aem.js file in the scripts directory:

async function fetchFromApi(url, { page, pageSize, params = {} } = {}) {
  const query = new URLSearchParams();

  if (page !== undefined) query.set('page', page);
  if (pageSize !== undefined) query.set('pageSize', pageSize);

  Object.entries(params).forEach(([k, v]) => {
    if (v !== undefined && v !== null) {
      query.set(k, v);
    }
  });

  const fullUrl = query.toString()
    ? `${url}?${query.toString()}`
    : url;

  const res = await fetch(fullUrl, {
    headers: { Accept: 'application/json' },
  });

  if (!res.ok) {
    throw new Error(`fetchAPI failed: ${res.status} ${res.statusText}`);
  }

  return res.json();
}

This fetchFromApi() method was also added to the aem.js export object so that we can call it in our blocks (like we did in the import statement of paint-room-preview.js).

This method makes paginated API requests, though the pagination is optional when calling it. This takes a provided URL, page, page size (the number of items per page), and any additional parameters. For our block, we use this to call our third-party API on page 1. The API offers 30 colors in total. We make a single request for all of them and then page between sets of 5 when the user clicks the next or previous buttons.
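
As a quick usage sketch, calling the helper on its own (from inside an async function such as decorate()) looks like this; the extra palette parameter is hypothetical and only shows how the optional params object is forwarded:

// Example call to the shared helper; page and pageSize mirror what loadApiPage() sends.
const json = await fetchFromApi('https://yourdomain.com/colorapi/colors.json', {
  page: 1,
  pageSize: 30,
  params: { palette: 'interior' }, // hypothetical extra query-string parameter
});
// json.data is the array of colors; json.totalPages tells us how many API pages exist
console.log(`Loaded ${json.data.length} of ${json.total} colors`);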

You’ll notice in the decorate() method of our block script, we defined the API URL via:

const COLORS_URL = 'https://yourdomain.com/colorapi/colors.json';
  

This should be updated to match the domain and path of your API, based on your implementation. As for that API, we’ll cover it in the next section.

Third Party Colors API

For my EDS site, I’m using the bring your own CDN approach via a Cloudflare worker. Adobe documentation provides a worker script that you can use for requests to your configured EDS domain. To enable our colors API, we just need to make a few minor updates to the script.

In the handleRequests() method, we first define constants for pages and colors returned in API requests, then we define a JSON object containing the page, page size, total number of pages, and most importantly, the array of colors!

const page = parseInt(url.searchParams.get("page") || "1", 10);
const pageSize = parseInt(url.searchParams.get("pageSize") || "30", 10);

const colors = [
  { name: "White", hex: "FFFFFF" },
  { name: "Black", hex: "000000" },
  { name: "Red", hex: "FF0000" },
  { name: "Green", hex: "00FF00" },
  { name: "Blue", hex: "0000FF" },
  { name: "Cyan", hex: "00FFFF" },
  { name: "Magenta", hex: "FF00FF" },
  { name: "Yellow", hex: "FFFF00" },
  { name: "Gray", hex: "808080" },
  { name: "Orange", hex: "FFA500" },
  { name: "Purple", hex: "800080" },
  { name: "Brown", hex: "A52A2A" },
  { name: "Pink", hex: "FFC0CB" },
  { name: "Lime", hex: "32CD32" },
  { name: "Teal", hex: "008080" },
  { name: "Navy", hex: "000080" },
  { name: "Olive", hex: "808000" },
  { name: "Maroon", hex: "800000" },
  { name: "Silver", hex: "C0C0C0" },
  { name: "Gold", hex: "FFD700" },
  { name: "Coral", hex: "FF7F50" },
  { name: "Indigo", hex: "4B0082" },
  { name: "Turquoise", hex: "40E0D0" },
  { name: "Lavender", hex: "E6E6FA" },
  { name: "Beige", hex: "F5F5DC" },
  { name: "Mint", hex: "98FF98" },
  { name: "Peach", hex: "FFDAB9" },
  { name: "Sky Blue", hex: "87CEEB" },
  { name: "Chocolate", hex: "D2691E" },
  { name: "Crimson", hex: "DC143C" }
];

const total = colors.length;
const start = (page - 1) * pageSize;
const end = start + pageSize;
const pageColors = colors.slice(start, end);

const json = JSON.stringify({
  page,
  pageSize,
  total,
  totalPages: Math.ceil(total / pageSize),
  data: pageColors,
});

Lastly, above the condition checking if the path starts with /drafts/, add the following:

if (url.pathname.startsWith('/colorapi/')) {
  return new Response(json, {
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type"
    },
  });
}

This sets up our API as path-based, supporting requests to /colorapi/colors.json. Once we deploy our worker changes, the JSON response to API requests will resemble the following:

JSON Response to API Requests

I will mention that while this works, our API does leave some things to be desired. In a true, production-ready implementation, the worker should only act as a proxy to a separate data service (with its own dedicated redundancy and fault tolerance). The colors data could be enriched with details such as product codes, applications where each color is supported (works on drywall vs wood), and split into different sets of color palettes based on the paint quality (economy, super, deluxe, etc.). There might even be a review process where certain colors are filtered out based on inventory or other factors. The primary goal of this post is to demonstrate block building and, secondly, to keep the entire implementation at the edge, not to provide a best practice API implementation.
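
As a rough sketch of that proxy idea — with the upstream URL being entirely hypothetical — the worker branch could forward the request to a dedicated data service instead of returning hard-coded colors:

// Sketch only: proxy /colorapi/ requests to a (hypothetical) dedicated data service.
if (url.pathname.startsWith('/colorapi/')) {
  const upstream = new URL('https://colors.internal.example.com/v1/colors');
  upstream.search = url.search; // pass page / pageSize straight through
  const apiRes = await fetch(upstream.toString(), {
    headers: { Accept: 'application/json' },
  });
  return new Response(apiRes.body, {
    status: apiRes.status,
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*',
    },
  });
}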

Block Design

With the functional aspects of our block complete, we need to add some styles to make our image previewer, color options, and paging work cohesively on varying client devices. So, in the /blocks/paint-room-preview folder, create a file called paint-room-preview.css and add the following contents:

.paint-room-preview {
  max-width: 800px;
  margin: auto;
  text-align: center;
}

#room-canvas {
  width: 100%;
  border-radius: 8px;
  margin-bottom: 20px;
}

.bm-controls {
  display: flex;
  justify-content: space-between;
  margin-bottom: 20px;
}

.bm-nav {
  display: flex;
  align-items: center;
  justify-content: center;
  gap: 16px;
  margin-bottom: 20px;
}

#bm-colors {
  display: flex;
  gap: 12px;
  justify-content: center;
  flex-wrap: nowrap;
  margin: 0 10%;
}

.bm-color {
  display: flex;
  flex-direction: column;
  align-items: center;
}

.bm-swatch {
  width: 60px;
  height: 60px;
  border-radius: 6px;
  border: 1px solid #ccc;
  cursor: pointer;
  margin-bottom: 8px;
  transition: transform .2s;
}

.bm-swatch:hover {
  transform: scale(1.1);
}

@media (width <= 900px) {    
  .bm-nav {
    flex-wrap: wrap;
  }

  #bm-colors {
    order: 3;
    width: 100%;
    justify-content: center;
    margin: 12px 0 0;
    flex-wrap: wrap;
  }

  #bm-prev {
    order: 1;
  }

  #bm-next {
    order: 2;
  }
}

With that, we should merge or commit our changes. Then we can author our block on a page, via the universal editor:

Editing in the Universal Editor

If you want to test drive this using the coffee shop example above, please find the base image, mask, and shading image at the links below. Upload these images to AEM Assets to select them in your block’s authorable fields.

https://blogs.perficient.com/files/coffee-shop-shading.png

https://blogs.perficient.com/files/coffee-shop-mask-1.png

https://blogs.perficient.com/files/coffee-shop.png

With all 3 images authored, you can publish the page and see your changes in action.

Closing Thoughts

As you can see, EDS blocks can be as specific as you need them to be. All the block and utility code is contained in JavaScript and CSS. Authoring fields are easily enabled in component-models.json. The block is easily enabled for use in pages via component-filters.json. Using just browser APIs, event handlers, and DOM selectors, we built a compelling experience for our fictional paint company. Using just ES6+ modular code, we built a serverless API to provide simple color options. The best part of this is that it’s all delivered at the edge, for an optimally fast application. There are many possibilities for block customization. The speed and flexibility of Edge Delivery Services should be considered for your project.

]]>
https://blogs.perficient.com/2026/02/11/eds-adventures-integrating-external-data-and-building-custom-feature-blocks/feed/ 0 390252
Perficient Earns Recognition on Forbes’ 2026 List of America’s Best Midsize Employers https://blogs.perficient.com/2026/02/11/perficient-earns-recognition-on-forbes-2026-list-of-americas-best-midsize-employers/ https://blogs.perficient.com/2026/02/11/perficient-earns-recognition-on-forbes-2026-list-of-americas-best-midsize-employers/#comments Wed, 11 Feb 2026 15:50:52 +0000 https://blogs.perficient.com/?p=390213

We’re thrilled to announce that Perficient has been included on Forbes’ 2026 list of America’s Best Midsize Employers. This prestigious recognition celebrates companies nationwide that excel in fostering supportive, empowering workplaces where colleagues can thrive.

This is the first year Perficient has earned an America’s Best Employers ranking, reflecting our ongoing commitment to building a culture that challenges, champions, and celebrates every colleague. It also reinforces our previous recognition as a 2025 USA Today Top Workplace and 2025 Professional Development Top Workplace, underscoring our dedication to building a people-centric culture. 

The Methodology Behind the List

To determine the America’s Best Midsize Employers list, Forbes partnered with market research firm Statista to independently survey more than 217,000 U.S. employees working at companies with at least 1,000 colleagues nationwide. The survey captures employees’ personal assessments of their current employer, as well as their evaluations of other organizations informed by previous roles, industry exposure, and secondhand knowledge from family and friends who have worked there. Their assessments spanned key areas such as professional development, compensation, company image, culture, and work environment. 

The results were combined with survey data collected over the past three years, prioritizing the most recent data and feedback from current employees to provide a holistic evaluation of each organization. Companies were grouped by size, with midsize organizations defined as having 1,000 to 5,000 employees and large organizations exceeding 5,000 employees.

Championing Our People and Empowering Our Communities

Perficient’s award-winning culture is shaped by the passion, collaboration, and diverse perspectives of our colleagues around the world. We foster genuine connections across our global organization through our Employee Resource Groups (ERGs), which empower every colleague to thrive both personally and professionally. These communities create an environment of inclusion and belonging across different interest areas while supporting colleague well-being and professional development.

READ MORE: Perficient Wins EX Impact Award for Diversity, Equity, Inclusion, and Belonging

Along with our ERGs, we are committed to making a meaningful difference in the communities where we live and work. Our corporate giving philosophy centers on advancing science, technology, engineering, and mathematics (STEM) education and improving health and well-being. By investing in these initiatives, we are building brighter futures across our global communities and empowering the next generation of technology leaders.

Leading the Way in AI-First Expertise and Development

Perficient’s global team brings deep expertise across industries, technologies, and ecosystem partnerships to help the world’s most admired brands accelerate their AI-first future. Our Generative AI Innovation Group (IG) strengthens this momentum by connecting our clients, colleagues, partners, and investors around emerging generative AI tools and capabilities and the impact these technologies have on modern businesses.

READ MORE: Perficient Honored as a 2025 Technology Top Workplaces Winner

Our trusted partnerships with industry-leading technology innovators are shaping the next generation of agentic enterprises. These relationships not only elevate the value we deliver to clients but also enhance our own operations as we advance our AI-first mindset. Through our 360-degree partnerships with WRITER and Salesforce, we integrate our expertise with their agentic AI platforms to co-create tailored client solutions and deploy intelligent agents across our own organization.

Alongside our clients and partners, we are committed to investing in our people. From AI-first training programs to partner-specific bootcamps, we equip every colleague with the skills they need to succeed in an AI-driven world. We continue to evaluate and evolve our approach, providing development programming focused on four core areas: AI Technologies, Professional and Consulting Skills, Project Management and Delivery Excellence, and Leadership Development. 

Celebrating Our Culture and Shaping the Future Together

Our people-centric culture, commitment to our communities, and leadership in AI-driven innovation are the foundation of our award-winning workplace. Earning a place among America’s Best Midsize Employers affirms this and motivates us to continue fostering a culture of excellence where every colleague is empowered to grow and thrive.

This commitment extends to our AI-first mindset, which is grounded in agility, pragmatism, and collaborative innovation. We remain deeply invested in our colleagues’ professional development and in empowering our people to lead at the cutting edge of the rapidly evolving AI landscape.


READY TO GROW YOUR CAREER?  

It’s no secret our success is because of our people. No matter the technology or time zone, our colleagues are committed to delivering innovative, end-to-end digital solutions for the world’s most innovative companies, and we bring a collaborative spirit to every interaction. We’re always seeking the best and brightest to work with us. Join our team and experience a culture that challenges, champions, and celebrates our people.  

Visit our Careers page to see career opportunities and more!  

Go inside Life at Perficient and connect with us on LinkedIn, YouTube, X, Facebook, TikTok, and Instagram.

]]>
https://blogs.perficient.com/2026/02/11/perficient-earns-recognition-on-forbes-2026-list-of-americas-best-midsize-employers/feed/ 1 390213