Data + Intelligence Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/data-intelligence/

Use Cases on AWS AI Services | https://blogs.perficient.com/2025/11/09/amazon-web-services-ai/ | Sun, 09 Nov 2025

In today's AI-activated world, there is an ample number of AI-related tools that organizations can use to tackle diverse business challenges. In line with this, Amazon offers its set of Amazon Web Services for AI and ML to address real-world needs.

This blog provides details on AWS services, but through this writeup you can also see how AI and ML capabilities can be applied to various business challenges. To illustrate how these services can be leveraged, I have used a few simple, straightforward use cases and mapped AWS solutions to them.

 

AI Use Cases: Using AWS Services

1. Employee Onboarding Process

Any employee onboarding process has its own challenges, which can be addressed through better information discovery, shorter onboarding timelines, more flexibility for the new hire, the option to revisit learning materials multiple times, and an induction experience that is both more secure and more personalized.

Amazon Kendra, an AWS AI service, enables new hires to use natural language queries to easily find HR manuals, IT instructions, leave policies, and company guidelines without needing to know exact file names or bookmark multiple URLs.

Amazon Kendra uses semantic search, which understands the user's intent and contextual meaning. Semantic search relies on vector embeddings, vector search, pattern matching, and natural language processing.

Real-time data retrieval through Retrieval-augmented Generation (RAG) in Amazon Kendra empowers employees to access up-to-date content securely and efficiently.

The following are examples of prompts a new hire can use to retrieve information:

  • How can I access my email on my laptop and on my phone?
  • How do I contact IT support?
  • How can I apply for leave, and who do I reach out to for approvals?
  • How do I submit my timesheet?
  • Where can I find the company training portal?
  • …and so on.
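
To make this concrete, here is a minimal boto3 sketch of how a portal or chat backend might submit one of these prompts to a Kendra index. The index ID and region are placeholders, and a production version would also pass the signed-in user's context so Kendra can enforce document-level access.

```python
import boto3

# A minimal sketch, assuming an existing Kendra index; the index ID and region
# below are placeholders, not real values.
kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="YOUR-KENDRA-INDEX-ID",          # placeholder index ID
    QueryText="How do I submit my timesheet?",
)

# Print the top answers/documents Kendra returns for the natural-language question.
for item in response.get("ResultItems", []):
    title = item.get("DocumentTitle", {}).get("Text", "")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    print(f"{item.get('Type')}: {title}\n  {excerpt}\n")
```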

Data Security

To protect organizational data and ensure compliance with enterprise security standards, Amazon Kendra supports robust data security measures, including encryption in transit and at rest, and seamless integration with AWS Identity and Access Management (IAM).

Role-based access ensures that sensitive information is only visible to authorized personnel.

Thus, in the onboarding process, the HR team can provide the personalized touch, while the AI agent ensures employees have easy, anytime access to the right information throughout their onboarding journey.


2. Healthcare: Unlocking Insights from Unstructured Clinical Data

Healthcare providers constantly need to extract critical patient information to support timely decision-making, yet they face the challenge of rapidly analyzing vast amounts of unstructured medical records such as physician notes, discharge summaries, and clinical reports.

From a data perspective, two key capabilities are required: entity recognition and attribute detection. Medical entities include symptoms, medications, diagnoses, and treatment plans, while attribute detection identifies the dosage, frequency, and severity associated with those entities.

Amazon offers Amazon Comprehend Medical, a service that uses NLP and ML models to extract this information from the unstructured data held by healthcare organizations.
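
As a rough illustration of the entity and attribute detection described above, the boto3 sketch below sends a short, invented clinical note to Amazon Comprehend Medical and prints the entities and their attributes.

```python
import boto3

# A minimal sketch of entity and attribute extraction with Amazon Comprehend Medical.
# The sample note is invented for illustration only.
cm = boto3.client("comprehendmedical", region_name="us-east-1")

note = "Patient reports severe headache. Prescribed ibuprofen 400 mg every 8 hours."

result = cm.detect_entities_v2(Text=note)

for entity in result["Entities"]:
    print(entity["Category"], "-", entity["Text"])
    # Attributes carry details such as dosage, frequency, or severity tied to the entity.
    for attr in entity.get("Attributes", []):
        print("   attribute:", attr["Type"], "=", attr["Text"])
```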

One of the crucial aspects in healthcare is handling security and compliance for patients' health data. AWS offers Amazon Macie, a security service that employs machine learning and pattern matching to discover, classify, and protect Protected Health Information (PHI) stored in Amazon S3 buckets. Such a service helps organizations maintain HIPAA compliance through automated data governance.

 

3. Enterprise data insights

Any large enterprise has data spread across various tools such as SharePoint, Salesforce, leave management portals, and accounting applications.

From these datasets, executives can extract insights, evaluate what-if scenarios, track key performance indicators, and use all of this for decision making.

We can use the AWS AI service Amazon Q Business for this purpose, using its plugins, database connectors, and Retrieval-Augmented Generation (RAG) for up-to-date information.

Users can query the system in natural language, and Amazon Q performs semantic search to return contextually appropriate information. It also uses knowledge grounding, which helps provide accurate answers rather than relying solely on training data.

To ensure that AI-generated responses adhere strictly to approved enterprise protocols and provide accurate and relevant information, we can define built-in guardrails within Amazon Q, such as global controls and topic blocking.
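
The sketch below is a rough illustration of querying such an Amazon Q Business application from code using boto3. The application ID is a placeholder, and the exact request and response field names should be verified against the current qbusiness API, so treat them as assumptions.

```python
import boto3

# A rough sketch, assuming an Amazon Q Business application is already configured
# with its data source connectors and guardrails. The application ID is a placeholder,
# and parameter/field names should be checked against the current boto3 qbusiness API.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="YOUR-Q-BUSINESS-APP-ID",   # placeholder application ID
    userMessage="What were our top three expense categories last quarter?",
)

# systemMessage holds the generated answer; sourceAttributions point back to the
# documents the answer was grounded on.
print(response.get("systemMessage"))
for source in response.get("sourceAttributions", []):
    print("source:", source.get("title"), source.get("url"))
```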

 

4. Retail company use cases

a) Reading receipts and invoices

The company wants to automate its financial auditing process. To achieve this, we can use Amazon Textract to read receipts and invoices; it uses machine learning to accurately identify and extract key information such as product names, prices, and totals.
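
As an illustration, the boto3 sketch below runs Amazon Textract's expense analysis on a receipt image stored in S3; the bucket and object key are placeholders.

```python
import boto3

# A minimal sketch using Amazon Textract's expense analysis API on a receipt
# stored in S3. The bucket and object key are placeholders.
textract = boto3.client("textract", region_name="us-east-1")

response = textract.analyze_expense(
    Document={"S3Object": {"Bucket": "my-receipts-bucket", "Name": "receipts/invoice-001.png"}}
)

# Each ExpenseDocument contains summary fields (vendor, total, date) and line items.
for doc in response["ExpenseDocuments"]:
    for field in doc.get("SummaryFields", []):
        label = field.get("Type", {}).get("Text", "")
        value = field.get("ValueDetection", {}).get("Text", "")
        print(f"{label}: {value}")
```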

b) Analyze customer purchasing patterns

The company intends to analyze customer purchasing patterns to predict future sales trends from its large datasets of historical sales data. For this analysis, the company wants to build, train, and deploy machine learning models quickly and efficiently.

Amazon SageMaker is the ideal service for such a development.

c) Customer support Bot

The firm receives thousands of customer calls daily. To streamline the process, the firm is looking to create a conversational AI bot that can handle both text inputs and voice commands.

We can use Amazon Bedrock to create a custom AI application from a selection of ready-to-use foundation models. These models can process large volumes of customer data, generate personalized responses, and integrate with other AWS services such as Amazon SageMaker for additional processing and analytics.

We can use Amazon Lex to create the bot and Amazon Polly for text-to-speech.
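
As a small illustration of the voice side of this bot, the boto3 sketch below converts a text reply to speech with Amazon Polly; the reply text and voice are example choices, and the Lex conversation handling is omitted.

```python
import boto3

# A minimal sketch of the voice-response piece: converting a bot's text reply to
# speech with Amazon Polly. The reply text and voice are illustrative choices.
polly = boto3.client("polly", region_name="us-east-1")

reply_text = "Your order has shipped and should arrive on Friday."

speech = polly.synthesize_speech(
    Text=reply_text,
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# AudioStream is a streaming body; write it to a file the contact-center app can play.
with open("reply.mp3", "wb") as audio_file:
    audio_file.write(speech["AudioStream"].read())
```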

d) Image analysis

The company might want to identify and categorize its products based on uploaded images. To implement this, we can use Amazon S3 and Amazon Rekognition to analyze each new product image as soon as it is uploaded to the storage service.
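
A minimal sketch of that flow, assuming placeholder bucket and key names; in practice the call would typically run in a Lambda function triggered by the S3 upload event.

```python
import boto3

# A minimal sketch of label detection on a newly uploaded product image in S3.
# The bucket and key below are placeholders.
rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "product-images-bucket", "Name": "uploads/sku-9876.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

# Use the returned labels (e.g., "Shoe", "Sneaker") to assign the product to a category.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```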

 

AWS Services for Compliance & Regulations

To manage complex customer requirements and handle large volumes of sensitive data, it is essential to adhere to various regulations.

Key AWS services supporting these compliance and governance needs include:

  1. AWS Config
    Continuously monitors and records resource configurations to help assess compliance.
  2. AWS Artifact
    Centralized repository for on-demand access to AWS compliance reports and agreements.
  3. AWS CloudTrail
    Logs and tracks all user activity and API calls within your AWS environment for audit purposes.
  4. Amazon Inspector
    Automated security assessment service that identifies vulnerabilities and deviations from best practices.
  5. AWS Audit Manager
    Simplifies audit preparation by automating evidence collection and compliance reporting.
  6. AWS Trusted Advisor
    Provides real-time recommendations to optimize security, performance, and cost efficiency.

 

Security and Privacy Risks: Vulnerabilities in LLMs

When working with LLMs, there are known ways to attack prompts, as well as various safeguards against those attacks. With these attacks in mind, I am noting down the following vulnerabilities, which are useful for understanding the risks around your LLMs.

  1. Prompt Injection: User input crafted to manipulate the LLM.
  2. Insecure Output Handling: Un-validated model output.
  3. Training Data Poisoning: Malicious data introduced into the training set.
  4. Model Denial of Service: Disrupting availability by exploiting architecture weaknesses.
  5. Supply Chain Vulnerabilities: Weaknesses in the software, hardware, or services used to build or deploy the model.
  6. Leakage: Leakage of sensitive data.
  7. Insecure Plugins: Flaws in model components or plugins.
  8. Excessive Autonomy: Too much autonomy given to the model in decision making.
  9. Over-reliance: Relying too heavily on the model's capabilities.
  10. Model Theft: Unauthorized copying or reuse of the model.

 

Can you relate the above use cases to any of the challenges at hand? Have you been able to use any of the AWS services or other AI platforms for dealing with such challenges?

References:

https://aws.amazon.com/ai/services/
https://www.udemy.com/share/10bvuD/

Building for Humans – Even When Using AI | https://blogs.perficient.com/2025/10/29/building-for-humans-even-when-using-ai/ | Thu, 30 Oct 2025

Artificial Intelligence (AI) is everywhere. Every month brings new features promising “deeper thinking” and “agentic processes.” Tech titans are locked in trillion-dollar battles. Headlines scream about business, economic, and societal concerns. Skim the news and you’re left excited and terrified!

Here’s the thing: we’re still human – virtues, flaws, quirks, and all. We’ve always had our agency, collectively shaping our future. Even now, while embracing AI, we need to keep building for us.

We Fear What We Do Not Know

“AI this… AI that…” Even tech leaders admit they don’t fully understand it. Sci-fi stories warn us with cautionary tales. News cycles fuel anxiety about job loss, disconnected human relationships, and cognitive decline.

Luckily, this round of innovation is surprisingly transparent. You can read the Attention is All You Need paper (2017) that started it all. You can even build your own AI if you want! This isn’t locked behind a walled garden. That’s a good thing.

What the Past Can Tell Us

I like to look at the past to gauge what we can expect from the future. Humans have feared every major invention and technological breakthrough. We expect the worst, but most have proven to improve life.

We’ve always had distractions from books, movies, games, to TikTok brain-rot. Some get addicted and go too deep, while others thrive. People favor entertainment and leisure activities – this is nothing new – so I don’t feel like cognitive decline is anything to worry about. Humanity has overcome all of it before and will continue to do so.

 


 

Humans are Simple (and Complicated) Creatures

We look for simplicity and speed. Easy to understand, easy to look at, easy to interact with, easy to buy from. We skim read, we skip video segments, we miss that big red CTA button. The TL;DR culture rules. Even so, I don’t think we’re at risk of the future from Idiocracy (2006).

That’s not to say that we don’t overcomplicate things. The Gods Must Be Crazy movie (1980) has a line that resonates, “The more [we] improved [our] surroundings to make life easier, the more complicated [we] made it.” We bury our users (our customers) in detail when they just want to skim, skip, and bounce.

Building for Computers

The computer revolution (1950s-1980s) started with machines serving humans. Then came automation. And eventually, systems talking to systems.

Fast-forward to the 2010s, where marketers gamed the algorithms to win at SEO, SEM, and social networking. Content was created for computers, not humans. Now we have the dead internet theory. We were building without humans in mind.

We will still have to build for systems to talk to systems. That won’t change. APIs are more important than ever, and agentic AI relies on them. Because of this, it is crucial to make sure what you are building “plays well with others”. But AIs and APIs are tools, not the audience.

Building for Humans

Google used to tell us all to build what people want, as opposed to gaming their systems. I love that advice. However, at first it felt unrealistic…gaming the system worked. Then after many updates, for a short bit, it felt like Google was getting there! Then it got worse and feels like pay-to-play recently.

Now AI is reshaping search and everything else. You can notice the gap between search results and AI recommendations. They don’t match. AI assistants aim to please humans, which is great, until it inevitably changes.

Digital teams must build for AI ingestion, but if you neglect the human aspect and the end user experience, then you will only see short-term wins.

Examples of Building for Humans

  • Make it intuitive and easy. Simple for end users means a lot of work for builders, but it is worth it! Reduce their cognitive load.
  • Build with empathy. Appeal to real people, not just personas and bots. Include feedback loops so they can feel heard.
  • Get to the point. Don’t overwhelm users, instead help them take action! Delight your customers by saving them time.
  • Add humor when appropriate. Don’t be afraid to be funny, weird, or real…it connects on a human level.
  • Consider human bias. Unlike bots and crawlers, humans aren’t always logical. Design for human biases.
  • Watch your users. Focus groups or digital tracking tools are great for observing. Learn from real users and iterate.

Conclusion

Building for humans never goes out of style. Whatever comes after AI will still need to serve people. So as tech evolves, let’s keep honing systems that work with and around our human nature.

……

If you are looking for that extra human touch (built with AI), reach out to your Perficient account manager or use our contact form to begin a conversation.

See Perficient's Amarender Peddamalku at the Microsoft 365, Power Platform & Copilot Conference | https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/ | Thu, 23 Oct 2025

As the year wraps up, so does an incredible run of conferences spotlighting the best in Microsoft 365, Power Platform, and Copilot innovation. We’re thrilled to share that Amarender Peddamalku, Microsoft MVP and Practice Lead for Microsoft Modern Work at Perficient, will be speaking at the Microsoft 365, Power Platform & Copilot Conference in Dallas, November 3–7.

Amarender has been a featured speaker at every TechCon365, DataCon, and PWRCon event this year—and Dallas marks the final stop on this year’s tour. If you’ve missed him before, now’s your chance to catch his insights live!

With over 15 years of experience in Microsoft technologies and a deep focus on Power Platform, SharePoint, and employee experience, Amarender brings practical, hands-on expertise to every session. Here’s where you can find him in Dallas:

Workshops & Sessions

  • Power Automate Bootcamp: From Basics to Brilliance
    Mon, Nov 3 | 9:00 AM – 5:00 PM | Room G6
    A full-day, hands-on workshop for Power Automate beginners.

 

  • Power Automate Multi-Stage Approval Workflows
    Tue, Nov 4 | 9:00 AM – 5:00 PM | Room G2
    Wed, Nov 5 | 3:50 PM – 5:00 PM | Room G6
    Learn how to build dynamic, enterprise-ready approval workflows.

 

  • Ask the Experts
    Wed, Nov 5 | 12:50 PM – 2:00 PM | Expo Hall
    Bring your questions and get real-time answers from Amarender and other experts.

 

  • Build External-Facing Websites Using Power Pages
    Thu, Nov 6 | 1:00 PM – 2:10 PM | Room D
    Discover how to create secure, low-code websites with Power Pages.

 

  • Automate Content Processing Using AI & SharePoint Premium
    Thu, Nov 6 | 4:20 PM – 5:30 PM | Room G6
    Explore how AI and SharePoint Premium (formerly Syntex) can transform content into knowledge.

 

Whether you’re just getting started with Power Platform or looking to scale your automation strategy, Amarender’s sessions will leave you inspired and equipped to take action.

Register now!

Datadog Synthetic Monitoring Integration with Azure DevOps Pipeline for Sitecore | https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/ | Thu, 23 Oct 2025

Datadog Synthetic Monitoring provides automated, simulated user journeys to proactively confirm the health and performance of websites and APIs, helping detect issues before users experience them. Integrating this into our Azure DevOps pipeline ensures that only builds where core site functionality is verified get promoted, reducing the risk of regressions in production. This approach is especially valuable in Sitecore projects, where critical web journeys and API endpoints are essential to user experience.

Why Use This Approach?

  • Immediate feedback: Failing releases are blocked before merging, saving post-release firefighting.
  • Coverage: Synthetic tests simulate real browser actions and API calls over real user flows.
  • Reliability: Automated testing delivers consistent, repeatable validation without manual steps.
  • Visibility: Results are unified within both Datadog and Azure DevOps for full traceability.
  • Scalability: As Sitecore projects grow, synthetic tests can be expanded to cover new endpoints and user scenarios without significant pipeline changes.
  • Environment parity: Tests can be run against staging, UAT, or pre-production environments before the live rollouts for safer releases.

Prerequisites

  • Active Datadog account with Synthetic Monitoring enabled.
  • Datadog API and Application keys created with the appropriate access scope.
  • Azure DevOps project with a working YAML-based CI/CD pipeline.
  • Secure variable storage in Azure DevOps (e.g., Variable Groups, Secret Variables) for credentials.
  • Stable and accessible endpoint URLs for Sitecore environment(s) under test.

High-Level Integration Process

1. Datadog Synthetic Test Creation

  • Create Browser and/or HTTP Synthetic Tests in Datadog tailored for key Sitecore application flows, such as:
    • Homepage load and rendering
    • Login flow and user dashboard navigation
    • Core API calls (search, content retrieval)
    • Critical commerce or form submissions
  • Use relevant tags (e.g., premerge) for search/query filtering by the CI pipeline.
  • Configure assertions to confirm critical elements:
    • Content correctness
    • HTTP status codes
    • Redirect targets
    • Response time SLAs
  • Validate tests in Datadog’s UI with multiple runs before pipeline integration.

[Screenshot: Datadog Synthetics dashboard]

2. Azure DevOps Pipeline Configuration

The Azure DevOps YAML pipeline is set up to invoke Datadog CI, run all tests matching our tag criteria, and fail the pipeline if any test fails.

Key Pipeline Steps

  • Install Datadog CI binary: Downloads and installs the CLI in the build agent.
  • Run Synthetic Tests: Uses the environment variables and search tags to pick synthetic tests (e.g., all with type: browser tag: premerge) and runs them directly (a script sketch of this step follows this list).
  • JUnit Reporting & Artifacts: The CLI output is saved, and a JUnit-formatted result file is generated for Azure DevOps’ Tests UI. All test outputs are attached as build artifacts.
  • Conditional Fast-forward Merge: The pipeline proceeds to a gated merge to release/production only if all synthetics pass.
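
As a rough sketch of the "Run Synthetic Tests" step, the Python script below shells out to the datadog-ci CLI from a pipeline script task. The secret variable names and the JUnit flag are assumptions and should be checked against the datadog-ci version installed on the agent.

```python
import os
import subprocess
import sys

# A rough sketch of the "Run Synthetic Tests" step as a pipeline script task.
# It assumes the datadog-ci binary is already installed on the agent and that the
# API/application keys are injected as secret pipeline variables (DD_API_KEY and
# DD_APP_KEY are assumed names). The search query and JUnit flag are also assumptions
# to verify against the datadog-ci version in use.
env = {
    **os.environ,
    "DATADOG_API_KEY": os.environ["DD_API_KEY"],   # secret variable from Azure DevOps
    "DATADOG_APP_KEY": os.environ["DD_APP_KEY"],   # secret variable from Azure DevOps
}

result = subprocess.run(
    [
        "datadog-ci", "synthetics", "run-tests",
        "-s", "type:browser tag:premerge",          # pick tests by tag, as configured in Datadog
        "--jUnitReport", "synthetics-junit.xml",    # assumed flag name for JUnit output
    ],
    env=env,
    capture_output=True,
    text=True,
)

print(result.stdout)
# A non-zero exit code means at least one synthetic test failed; fail the stage so the
# fast-forward merge is blocked.
sys.exit(result.returncode)
```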

How Results and Flow Work

When All Tests Pass

  • The pipeline completes the Premerge_Datadog_Synthetics stage successfully.
  • Test summaries (JUnit) and CLI outputs are attached to the pipeline run.
  • Approval-gated merge to the Release branch is unblocked; approvers can verify test results before promotion.

Build artifacts include full logs for further review.

[Screenshot: pipeline run passed]

When Any Test Fails

  • If any synthetic (browser/API) test fails, the CLI exits with a non-zero exit code.
  • The JUnit summary will contain failure info and a link to the log details.
  • The pipeline stage fails (Premerge_Datadog_Synthetics), halting the fast-forward merge.
  • Approvers can review the failure in test results and attached artifacts within Azure DevOps.

Only successful resolution and green reruns allow code promotion.

[Screenshot: pipeline run failed]

Best Practices for Datadog Synthetic Monitoring

  • Run tests in parallel to reduce wait times.
  • Use separate synthetic tests per microservice or major Sitecore area to isolate failures.
  • Monitor test trends in Datadog to detect gradual performance regression over time.
  • Limit sensitive data in synthetic flows by avoiding the storage of actual credentials.
  • Schedule periodic synthetic runs outside CI/CD to catch environment fluctuations unrelated to code changes.

Security Considerations

  • Store Datadog keys as secret variables in Azure DevOps.
  • Restrict permission for synthetic test management to trusted CI/CD admins.
  • Avoid embedding credentials or sensitive payloads in test scripts.

Conclusion

By integrating Datadog Synthetic Monitoring directly into our CI/CD pipeline with Azure DevOps, Sitecore teams gain a safety net that blocks faulty builds before they hit production, while keeping a detailed audit trail. Combined with careful test design, secure key management, and continuous expansion of coverage, this approach becomes a cornerstone of proactive web application quality assurance.

 

Perficient at Microsoft Ignite 2025: Let's Talk AI Strategy | https://blogs.perficient.com/2025/10/21/perficient-at-microsoft-ignite-2025-lets-talk-ai-strategy/ | Tue, 21 Oct 2025

Microsoft Ignite 2025 is right around the corner—and Perficient is showing up with purpose and a plan to help you unlock real results with AI.

As a proud member of Microsoft’s Inner Circle for AI Business Solutions, we’re at the forefront of helping organizations accelerate their AI transformation. Whether you’re exploring custom copilots, modernizing your data estate, or building secure, responsible AI solutions, our team is ready to meet you where you are—and help you get where you want to go.

Here’s where you can connect with us during Ignite:

Join Us for Happy Hour
Unwind and connect with peers, Microsoft leaders, and the Perficient team at our exclusive happy hour just steps from the Moscone Center.
📍 Fogo de Chão | 🗓 November 17 | 🕔 6:00–9:00 PM
Reach out to our team to get registered!

 

Book a Strategy Session
Need a quiet space to talk AI strategy? We’ve secured a private meeting space across from the venue—perfect for 1:1 conversations about your AI roadmap.
📍 Ember Lounge — 201 3rd St, 8th floor, Suite 8016 | 🗓 November 18-20
Reserve Your Time

 

From copilots to cloud modernization, we’re helping clients across industries turn AI potential into measurable impact. Let’s connect at Ignite and explore what’s possible.

Salesforce AI for Financial Services: Practical Capabilities That Move the Organization Forward | https://blogs.perficient.com/2025/10/20/salesforce-ai-for-financial-services-practical-capabilities-that-move-the-organization-forward/ | Mon, 20 Oct 2025

Turn on CNBC during almost any trading day and you’ll see and hear plenty of AI buzz that sounds great, and may look great in a deck, but falls short in regulated industries. For financial services firms, AI must do two things at once: unlock genuine business value and satisfy strict compliance, privacy, and audit requirements. Salesforce’s AI stack — led by Einstein GPT, Data Cloud, and integrated with MuleSoft, Slack, and robust security controls — is engineered to meet that dual mandate. Here’s a practical look at what Salesforce AI delivers for banks, insurers, credit unions, wealth managers, and capital markets firms, and how to extract measurable value without trading off controls and/or governance.

What Salesforce AI actually is (and why it matters for Financial Services)

Salesforce is widely adopted by financial services firms, with over 150,000 companies worldwide using its CRM, including a significant portion of the U.S. market, where 83% of businesses opt for its Financial Services Cloud ("FSC"). Major financial institutions like Wells Fargo, Bank of America Merrill Lynch, and The Bank of New York are among its users, demonstrating its strong presence within the industry. Salesforce has combined generative AI, predictive models, and enterprise data plumbing into a single ecosystem. Key capabilities include:

  • Einstein GPT: Generative AI tailored for CRM workflows — draft client communications, summarize notes, and surface contextual insights using your internal data.
  • Data Cloud: A real-time customer data platform that ingests, unifies, and models customer profiles at scale, enabling AI to operate on a trusted single source of truth.
  • Tableau + CRM Analytics: Visualize model outcomes, monitor performance, and create operational dashboards that align AI outputs with business KPIs.
  • MuleSoft: Connectors and APIs to bring core banking, trading, and ledger systems into the loop securely.
  • Slack & Flow (and Flow Orchestrator): Operationalize AI outputs into workflows, approvals, and human-in-the-loop processes.

For financial services, that integration matters more than flashy demos: accuracy, traceability, and context are non-negotiable. Salesforce’s ecosystem lets you apply AI where it impacts revenue, risk, and customer retention — and keep audit trails for everything.

High-value financial services use cases

Here are the pragmatic use cases where Salesforce AI delivers measurable ROI:

Client advisory and personalization

Generate personalized portfolio reviews, client outreach, or renewal communications using Einstein GPT combined with up-to-date holdings and risk profiles from Data Cloud. The result: more relevant outreach and higher conversion rates with less advisor time.

Wealth management — scalable advice and relationship mining

AI-driven summarization of client meetings, automated risk-tolerance classifiers, and opportunity scoring help advisors prioritize high-value clients and surface cross-sell opportunities without manual data wrangling.

Commercial lending — faster decisioning and better risk controls

Combine predictive credit risk models with document ingestion (via MuleSoft and integrated OCR) to auto-populate loan applications, flag exceptions, and route for human review where model confidence is low.

Fraud, AML, and compliance augmentation

Use real-time customer profiles and anomaly detection to surface suspicious behaviors. AI can triage alerts and summarize evidence for investigators, improving throughput while preserving explainability for regulators. AI can also reduce the volume of false alerts, which is the bane of every compliance officer ever.

Customer support and claims

RAG-enabled virtual assistants (Einstein + Data Cloud) pull from policy language, transaction history, and client notes to answer common questions or auto-draft claims responses — reducing service time and improving consistency. The virtual assistants can also interact in multiple languages, which helps reduce customer turnover for non-English writing clients.

Sales and pipeline acceleration

Predictive lead scoring, propensity-to-buy models, and AI-suggested next-best actions increase win rates and shorten sales cycles. Integrated workflows push suggestions to reps in Slack or the Salesforce console, making adoption frictionless.

Why Salesforce’s integrated approach reduces risk

Financial firms can’t treat AI as a separate experiment. Salesforce’s value proposition is that AI is embedded into systems that already handle customer interactions, security, and governance. That produces the following practical advantages:

Single source of truth

Data Cloud reduces conflicting customer records and stale insights, which directly lowers the risk of AI producing inappropriate or inaccurate outputs.

Controlled model access and hosting options

Enterprises can choose where data and model inference occur, including private or managed-cloud options, helping meet residency and confidentiality requirements.

Explainability and audit trails

Salesforce logs user interactions, AI-generated outputs, and data lineage into the platform. That creates the documentation regulators ask for and lets financial services executives investigate where models made decisions.

Human-in-the-loop and confidence thresholds

Workflows can be configured so that high-risk or low-confidence outputs require human approval. That’s essential for credit decisions, compliance actions, and investment advice.

Implementation considerations for regulated firms

To assist in your planned deployment of Salesforce AI in financial services, here’s a checklist of practical guardrails and steps:

Start with business outcomes, not models

  • Identify high-frequency, low-risk tasks for pilots (e.g., document summarization, inquiry triage) and measure lift on KPIs like turnaround time, containment rate, and advisor productivity.

Clean and govern your data

Invest in customer identity resolution, canonicalization, and metadata tagging in Data Cloud. Garbage in, garbage out is especially painful when compliance hangs on a model’s output.

Create conservative guardrails

Hard-block actions that have material customer impact (e.g., account closure, fund transfers) from automated flows. Use AI to assist drafting and recommendation, not to execute high-risk transactions autonomously.

Establish model testing and monitoring

Implement A/B tests, accuracy benchmarks, and drift detection. Integrate monitoring into Tableau dashboards and set alerts for performance degradation or unusual patterns.

Document everything for auditors and regulators

Maintain clear logs of training data sources, prompt templates, model versions, and human overrides. Salesforce’s native logging plus orchestration records from Flow help with this.

Train users and change-manage

Advisors, compliance officers, and client service reps should be part of prompt tuning and feedback loops. Incentivize flagging bad outputs — their corrections will dramatically improve model behavior.

Measurable outcomes to expect

When implemented with discipline, financial services firms typically see improvements including:

  • Reduced average handling time and faster loan turnaround
  • Higher client engagement and improved cross-sell conversion
  • Fewer false positives and faster investigator resolution times
  • Better advisor productivity via automated notes and suggested actions

Those outcomes translate into cost savings, improved regulatory posture, and revenue lift — the hard metrics CFOs, CROs, and CCOs require.

Final thoughts — pragmatic AI adoption

Salesforce gives financial institutions a practical path to embed AI into customer-facing and operational workflows without ripping up existing systems. The power isn’t just in the model; it’s in the combination of unified data (Data Cloud), generative assistance (Einstein GPT), secure connectors (MuleSoft), and operationalization (Flows and Slack). If you treat governance, monitoring, and human oversight as first-class citizens, AI becomes an accelerant — not a liability.

To help financial services firms either install or expand their Salesforce capability, Perficient has a 360-degree strategic partnership with Salesforce. While Salesforce provides the platform and technology, Perficient, as a global digital consultancy, partners with Salesforce to offer expertise in the implementation, customization, and optimization of Salesforce solutions, leveraging Salesforce's AI-first technologies and platform to deliver consulting, implementation, and integration services. Working together, Salesforce and Perficient help mutual clients build customer-centric solutions and operate as "agentic enterprises."

 

 

Navigating the AI Frontier: Data Governance Controls at SIFIs in 2025 | https://blogs.perficient.com/2025/10/13/navigating-the-ai-frontier-data-governance-controls-at-sifis-in-2025/ | Mon, 13 Oct 2025

The Rise of AI in Banking

AI adoption in banking has accelerated dramatically. Predictive analytics, generative AI, and autonomous agentic systems are now embedded in core banking functions such as loan underwriting, compliance including fraud detection and AML, and customer engagement. 

A recent white paper by Perficient affiliate Virtusa, Agentic Architecture in Banking – White Paper | Virtusa, documented that when designed with modularity, composability, human-in-the-loop (HITL) oversight, and governance, agentic AI empowers a more responsive, data-driven, and human-aligned approach in financial services.

However, the rollout of agentic and generative AI tools without proper controls poses significant risks. Without a unified strategy and governance structure, Systemically Important Financial Institutions ("SIFIs") risk deploying AI in ways that are opaque, biased, or non-compliant. As AI becomes the engine of next-generation banking, institutions must move beyond experimentation and establish enterprise-wide controls.

Key Components of AI Data Governance

Modern AI data governance in banking encompasses several critical components:

1. Data Quality and Lineage: Banks must ensure that the data feeding AI models is accurate, complete, and traceable.

Please refer to Perficient’s recent blog on this topic here:

AI-Driven Data Lineage for Financial Services Firms: A Practical Roadmap for CDOs / Blogs / Perficient

2. Model Risk Management: AI models must be rigorously tested for fairness, accuracy, and robustness. It has been documented many times in lending decision-making software that the bias of coders can result in biased lending decisions.

3. Third-Party Risk Oversight: Governance frameworks now include vendor assessments and continuous monitoring. Large financial institutions do not have to develop AI technology solutions themselves (Buy vs Build) but they do need to monitor the risks of having key technology infrastructure owned and/or controlled by third parties.

4. Explainability and Accountability: Banks are investing in explainable AI (XAI) techniques. Not everyone is a tech expert, and models need to be easily explainable to auditors, regulators, and when required, customers.

5. Privacy and Security Controls: Encryption, access controls, and anomaly detection are essential. These controls already exist in legacy systems, and it is natural to extend these proven measures to the AI environment, whether that means narrow AI, machine learning, or more advanced agentic and/or generative AI platforms.

Industry Collaboration and Standards

The FINOS Common Controls for AI Services initiative is a collaborative, cross-industry effort led by the Fintech Open Source Foundation (FINOS) to develop open-source, technology-neutral baseline controls for safe, compliant, and trustworthy AI adoption in financial services. By pooling resources from major banks, cloud providers, and technology vendors, the initiative creates standardized controls, peer-reviewed governance frameworks, and real-time validation mechanisms to help financial institutions meet complex regulatory requirements for AI.

Key participants in FINOS include financial institutions such as BMO, Citibank, Morgan Stanley, and RBC, while key technology and cloud providers include Perficient's technology partners Microsoft, Google Cloud, and Amazon Web Services (AWS). The FINOS Common Controls for AI Services initiative aims to create vendor-neutral standards for secure AI adoption in financial services.

At Perficient, we have seen leading financial institutions, including some of the largest SIFIs, establishing formal governance structures to oversee AI initiatives. Broadly, these governance structures typically include:

– Executive Steering Committees at the legal entity level
– Working Groups, at the legal entity as well as the divisional, regional and product levels
– Real-Time Dashboards that allow customizable reporting for boards, executives, and auditors

This multi-tiered governance model promotes transparency, agility, and accountability across the organization.

Regulatory Landscape in 2025

Regulators worldwide are intensifying scrutiny of artificial intelligence in banking. The EU AI Act, the U.S. SEC's cybersecurity disclosure rules, and the National Institute of Standards and Technology ("NIST") AI Risk Management Framework are shaping how financial institutions must govern AI systems.

Key regulatory expectations include:

– Risk-Based Classification
– Human Oversight
– Auditability
– Bias Mitigation

Some of these, and other regulatory regimes have been documented and summarized by Perficient at the following links:

AI Regulations for Financial Services: Federal Reserve / Blogs / Perficient

AI Regulations for Financial Services: European Union / Blogs / Perficient 

[Figure: EU AI Act risk-based approach]

The Road Ahead

As AI becomes integral to banking operations, data governance will be the linchpin of responsible innovation. Banks must evolve from reactive compliance to proactive risk management, embedding governance into every stage of the AI lifecycle.

The journey begins with data—clean, secure, and well-managed. From there, institutions must build scalable frameworks that support ethical AI development, align with regulatory mandates, and deliver tangible business value.

Readers are urged to review the links in this blog and then contact Perficient, a global AI-first digital consultancy, to discuss how partnering with Perficient can help you run a tailored assessment and pilot design that maps directly to your audit and governance priorities and ensures all new tools are rolled out in a well-designed data governance environment.

AI-Driven Data Lineage for Financial Services Firms: A Practical Roadmap for CDOs | https://blogs.perficient.com/2025/10/06/ai-driven-data-lineage-for-financial-services-firms-a-practical-roadmap-for-cdos/ | Mon, 06 Oct 2025

Introduction

Imagine: just as you're sipping your Monday morning coffee and looking forward to a hopefully quiet week in the office, your Outlook dings and you see that your bank's primary federal regulator is demanding the full input-to-regulatory-report lineage for dozens of numbers on both sides of the balance sheet and the income statement in your latest financial report filed with the regulator. The full first day letter responses are due next Monday, and as your headache starts you remember that the spreadsheet owner is on leave, the ETL developer is debugging a separate pipeline, and your overworked and understaffed reporting team has three different ad hoc diagrams that neither match nor reconcile.

If you can relate to that scenario, or your back starts to tighten in empathy, you're not alone. Artificial Intelligence ("AI")-driven data lineage for banks is no longer a nice-to-have. We at Perficient, working with our clients in banking, insurance, credit unions, and asset management, find that it's the practical answer to audit pressure, model risk (remember Lehman Brothers and Bear Stearns), and the brittle manual processes that create blind spots. This blog post explains what AI-driven lineage actually delivers, why it matters for banks today, and a phased roadmap Chief Data Officers ("CDOs") can use to get from pilot to production.

Why AI-driven data lineage for banks matters today

Regulatory pressure and real-world consequences

Regulators and supervisors emphasize demonstrable lineage, timely reconciliation, and governance evidence. In practice, financial services firms must show not just who touched data, but what data enrichment and/or transformations happened, why decisions used specific fields, and how controls were applied—especially under BCBS 239 guidance and evolving supervisory expectations.

In addition, as a former risk manager, the author knows he would have wanted this, and he has spoken with a plethora of financial services executives who want to know that the decisions they're making on liquidity funding, investments, recording P&L, and hedging trades are based on the correct numbers. This is especially challenging at global firms that operate in a transaction-heavy environment with constantly changing political, interest rate, foreign exchange, and credit risks.

Operational risks that keep CDOs up at night

Manual lineage—spreadsheets, tribal knowledge, and siloed code—creates slow audits, delayed incident response, and fragile model governance. AI-driven lineage automates discovery and keeps lineage living and queryable, turning reactive fire drills into documented, repeatable processes that will greatly shorten the time QA tickets are closed and reduce compensation costs for misdirected funds. It also provides a scalable foundation for governed data practices without sacrificing traceability.

What AI-driven lineage and controls actually do (written by and for non-tech staff)

At its core, AI-driven data lineage combines automated scanning of code, SQL, ETL jobs, APIs, and metadata with semantic analysis that links technical fields to business concepts. Instead of a static map, executives using AI-driven data lineage get a living graph that shows data provenance at the field level: where a value originated, which transformations touched it, and which reports, models, or downstream services consume it.

AI adds value by surfacing hidden links. Natural language processing reads table descriptions, SQL comments, and even README files (yes they do still exist out there) to suggest business-term mappings that close the business-IT gap. That semantic layer is what turns a technical lineage graph into audit-ready evidence that regulators or auditors can understand.

How AI fixes the pain points keeping CDOs up at night

Faster audits: As a consultant at Perficient, I have seen AI-driven lineage that, after implementation, allowed executives to answer traceability questions in hours rather than weeks. Automated evidence packages—exportable lineage views and transformation logs—provide auditors with a reproducible trail.
Root-cause and incident response: When a report or model spikes, impact analysis highlights which datasets and pipelines are involved, clarifying responsibility and accountability, speeding remediation, and limiting downstream impact.
Model safety and feature provenance: Lineage that includes training datasets and feature transformations enables validation of model inputs, reproducibility of training data, and enforcement of data controls—supporting explainability and governance requirements. That allows your P&L to be more R&S (a slogan one client used, with "R&S P&L" meaning rock-solid profit and loss).

Tooling, architecture, and vendor considerations

When evaluating vendors, demand field-level lineage, semantic parsing (NLP across SQL, code, and docs), auditable diagram exports, and policy enforcement hooks that integrate with data protection tools. Deployment choices matter in regulated banking environments; hybrid architectures that keep sensitive metadata on-prem while leveraging cloud analytics often strike a pragmatic balance.

A practical, phased roadmap for CDOs

Phase 0 — Align leadership and define success: Engage the CRO, COO, and Head of Model Risk. Define 3–5 KPIs (e.g., lineage coverage, evidence time, mean time to root cause) and what "good" will look like. This is often done during an evidence-gathering phase that Perficient runs with clients who are just starting their artificial intelligence journey.
Phase 1 — Inventory and quick wins: Target a high-risk area such as regulatory reporting, a few production models, or a critical data domain. Validate inventory manually to establish baseline credibility.
Phase 2 — Pilot AI lineage and controls: Run automated discovery, measure accuracy and false positives, and quantify time savings. Expect iterations as the model improves with curated mappings.
Phases 1 and 2 are usually run by Perficient with clients as a proof-of-concept to show that the key feeds into and out of existing technology platforms can be captured.
Phase 3 — Operationalize and scale: Integrate lineage into release workflows, assign lineage stewards, set SLAs, and connect with ticketing and monitoring systems to embed lineage into day-to-day operations.
Phase 4 — Measure, refine, expand: Track KPIs, adjust models and rules, and broaden scope to additional reports, pipelines, and models as confidence grows.

Risks, human oversight, and governance guardrails

AI reduces toil but does not remove accountability. Executives, auditors, and regulators either do or should require deterministic evidence and human-reviewed lineage. Treat AI outputs as recommendations subject to curator approval. This will help avoid what many financial services executives are now dealing with: AI hallucinations.

Guardrails include the establishment of exception processing workflows for disputed outputs and toll gates to ensure security and privacy are baked into design—DSPM, masking, and appropriate IAM controls should be integral, not afterthoughts.

Conclusion and next steps

AI data lineage for banks is a pragmatic control that directly addresses regulatory expectations, speeds audits, and reduces model and reporting risk. Start small, prove value with a focused pilot, and embed lineage into standard data stewardship processes. If you’re a CDO looking to move quickly with minimal risk, contact Perficient to run a tailored assessment and pilot design that maps directly to your audit and governance priorities. We’ll help translate proof into firm-wide control and confidence.

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 2 | https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/ | Fri, 03 Oct 2025

Introduction:

Custom code in Talend offers a powerful way to enhance batch processing efficiency by allowing developers to implement specialized logic that is not available through Talend's standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files to meet specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Talend Components:

Key components for batch processing are mentioned below:

  • tDBConnection: Establish and manage database connections within a job & allow configuration with single connection to reuse within Talend job.
  • tFileInputDelimited: For reading data from flat files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified.
  • tMap: Data transformation allows you to map input data with output data and enables you to perform data filtering, complex data manipulation, typecasting, and multiple input source joins.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tPreJob, tPostJob: PreJob start the execution before the job & PostJob at the end of the job.
  • tDBOutput: Supports wide range of databases & used to write data to various databases.
  • tDBCommit: Commits the changes made to a connected database during a Talend job, permanently recording the data changes.
  • tDBClose: Explicitly closes a database connection that was opened by a tDBConnection component.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.
  • tDie: We can stop the job execution explicitly if it fails. In addition, we can create a customized warning message and exit code.

Workflow with example:

To process bulk data in Talend, we can implement batch processing to handle flat file data with minimal execution time. We could read the flat file data and, after processing, insert it into a MySQL database table as the target without batch processing, but that data flow would take considerably longer to execute. Using batch processing with custom code, the entire source file is written to the MySQL database table in batches of records with minimal execution time.

Talend Job Design

Solution:

  • Establish the database connection at the start of the execution so that we can reuse.
  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and then divide the result by the batch size. Take the nearest whole number, which indicates the total number of batches (chunks).

    Calculate the batch size from total row count

  • Now use tFileInputDelimited component to read the source file content. In the tMap component, utilize the sequence Talend function to generate row numbers for your data mapping and transformation tasks. Then, load all of the data into the tHashOutput component, which stores the data into a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range will be 1 to 100; for the third iteration, it will be 201 to 300.
    In general, for iteration i the range runs from ((i - 1) * batchSize) + 1 through i * batchSize. For example, for iteration 3: (3 - 1) * 100 + 1 = 201 and 3 * 100 = 300, so the dataset range for the third iteration is 201 to 300 (a short Python sketch of this arithmetic appears after this list).
  • Finally extract the dataset range between the rowNo column & write the batch data MySQL database table using tDBOutput
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows for complex data transformation, and offers unique match, first match, and all matches options for looking up data within the component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
  • At the end of the job execution, we are committing the database modification & closing the connection to release the database resource.
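
To make the batch arithmetic above easier to follow, here is a plain-Python illustration (not Talend code) of the same calculation the tJava/tMap expressions and tFilterRow conditions perform. The counts are example values, and rounding up is one reasonable reading of "take the whole number nearby" so that a final partial batch is not dropped.

```python
import math

# A plain-Python illustration (not Talend code) of the batch arithmetic described above.
# total_row_count would come from tFileRowCount, header_count and batch_size from
# context variables; the values here are examples.
total_row_count = 1001   # includes 1 header row
header_count = 1
batch_size = 100

data_rows = total_row_count - header_count
number_of_batches = math.ceil(data_rows / batch_size)    # "whole number" of batches/chunks

for iteration in range(1, number_of_batches + 1):         # what tLoop iterates over
    start_row = (iteration - 1) * batch_size + 1           # lower bound used by tFilterRow
    end_row = iteration * batch_size                        # upper bound used by tFilterRow
    print(f"Iteration {iteration}: rowNo {start_row} to {end_row}")
```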

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With the batch processing, it can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 1

Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 1 | https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/ | Fri, 03 Oct 2025

Introduction:

Custom code in Talend offers a powerful way to enhance batch processing efficiency by allowing developers to implement specialized logic that is not available through Talend's standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files to meet specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Understand Batch Processing:

Batch processing is a method of running high-volume, repetitive data loads within Talend jobs. The batch method allows users to process sets of data when computing resources are available, with little or no user interaction.

Through batch processing, users gather and retain data, subsequently processing it during a designated period referred to as a “batch window.” This method enhances efficiency by establishing processing priorities and executing data tasks in a timeframe that is optimal.

Here, the Talend job takes the total row count from the source file, loads the data from the flat file, processes it in batches using input provided through context variables, and then writes the data into smaller flat files. This implementation makes it possible to process enormous amounts of data more precisely and quickly than alternative implementations.

Batch processing is a method of executing a series of jobs sequentially without user interaction, typically used for handling large volumes of data efficiently. Talend, a prominent and extensively employed ETL (Extract, Transform, Load) tool, utilizes batch processing to facilitate the integration, transformation, and loading of data into data warehouse and various other target systems.

Talend Components:

Key components for batch processing are mentioned below:

  • tFileInputDelimited, tFileOutputDelimited: For reading & writing data from/to files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Used for data transformation; it maps input data to output data and supports data filtering, complex data manipulation, typecasting, and joins across multiple input sources.
  • tJavaRow: Used as an intermediate component; it provides access to the input flow so the data can be transformed using custom Java code (a minimal standalone sketch of this kind of row-level code follows this list).
  • tJava: Has no input or output data flow and can be used independently to integrate custom Java code.
  • tLogCatcher: Used for error handling within a Talend job by adding runtime logging information. It catches the exceptions and warnings raised by the tWarn and tDie components during job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.
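Below is a minimal standalone sketch of the kind of row-level transformation that typically goes into a tJavaRow component. The Row class and the id/name fields are illustrative assumptions; inside Talend, the same logic would be written in the tJavaRow editor against the generated input_row and output_row variables for the actual schema.

```java
// Standalone illustration of tJavaRow-style logic; the Row class and its
// fields (id, name) are assumptions made for this example only.
public class TJavaRowSketch {

    static class Row {
        int id;
        String name;
    }

    // Equivalent of a tJavaRow body: read from the input flow, transform,
    // and assign to the output flow.
    static Row transform(Row inputRow) {
        Row outputRow = new Row();
        outputRow.id = inputRow.id;                      // pass the key through unchanged
        outputRow.name = inputRow.name == null
                ? ""                                     // simple data-quality fix for nulls
                : inputRow.name.trim().toUpperCase();    // custom transformation logic
        return outputRow;
    }

    public static void main(String[] args) {
        Row in = new Row();
        in.id = 1;
        in.name = "  sample value ";
        Row out = transform(in);
        System.out.println(out.id + " -> " + out.name);  // prints: 1 -> SAMPLE VALUE
    }
}
```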

Workflow with example:

To process bulk data in Talend, we can implement batch processing to handle flat file data with minimal execution time. We could read the flat file data and write it into chunks of another flat file as the target without batch processing, but that data flow would take considerably longer to execute. If we use batch processing with custom code, the entire source file is written into chunks of files at the target location in far less time.
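Before looking at the Talend job design below, here is a minimal plain-Java sketch of the same idea: splitting a delimited source file into fixed-size chunk files. The file names and batch size are illustrative assumptions; in the actual job this work is performed by tFileInputDelimited, tLoop, tFilterRow, and tFileOutputDelimited rather than hand-written code.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FileChunkSketch {
    public static void main(String[] args) throws IOException {
        Path source = Path.of("source.csv");  // illustrative source flat file
        int batchSize = 100;                  // illustrative chunk size (a context variable in Talend)

        List<String> lines = Files.readAllLines(source);
        String header = lines.get(0);                       // keep the header out of the data rows
        List<String> dataRows = lines.subList(1, lines.size());

        // Round up so a partial final chunk still gets written.
        int totalBatches = (int) Math.ceil((double) dataRows.size() / batchSize);

        for (int batch = 0; batch < totalBatches; batch++) {
            int from = batch * batchSize;
            int to = Math.min(from + batchSize, dataRows.size());
            Path target = Path.of("target_chunk_" + (batch + 1) + ".csv");
            try (BufferedWriter writer = Files.newBufferedWriter(target)) {
                writer.write(header);                       // repeat the header in every chunk
                writer.newLine();
                for (String row : dataRows.subList(from, to)) {
                    writer.write(row);
                    writer.newLine();
                }
            }
        }
    }
}
```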

Talend job design


Solution:

  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and divide the result by the batch size. Round the result up to the next whole number so that a partial final chunk is not lost; this gives the total number of batches (chunks). A short sketch after this list shows the same calculation.


    Calculate the batch size from total row count

  • Now use the tFileInputDelimited component to read the source file content. In the tMap component, use the Talend sequence routine to generate row numbers for the data mapping and transformation tasks. Then load all of the data into the tHashOutput component, which stores the data in a cache.
  • Iterate the loop based on the calculated whole number using tLoop.
  • Retrieve all the data from the tHashInput component.
  • Filter the dataset retrieved from the tHashInput component based on the rowNo column in the schema using tFilterRow.


    Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range is 1 to 100; if the third iteration is in progress, the range is 201 to 300.
    In general, the lower bound is ((iteration - 1) * batch size) + 1 and the upper bound is iteration * batch size. For example, if the current iteration is 3, then ((3 - 1) * 100) + 1 = 201 and 3 * 100 = 300, so the final dataset range for the third iteration is 201 to 300.
  • Finally, extract the rows whose rowNo falls within the calculated range and write them into a chunked output target file using tFileOutputDelimited.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows for complex data transformations, and offers unique-match, first-match, and all-matches options for looking up data within tMap.
  • Storing temporary data in cache memory with the tHashInput and tHashOutput components improves runtime performance.
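To make the batch and range arithmetic from the steps above concrete, the sketch below reproduces the same calculation in plain Java. The row count, header count, and batch size are illustrative assumptions; in the actual job these values come from tFileRowCount and context variables, the loop is driven by tLoop, and the range filtering is performed by tFilterRow.

```java
// Minimal sketch of the batch arithmetic described above; the input values
// are illustrative assumptions, not taken from a real job.
public class BatchRangeSketch {

    public static void main(String[] args) {
        int totalRowCount = 1050;   // from tFileRowCount (including the header row)
        int headerCount   = 1;      // header rows to exclude
        int batchSize     = 100;    // from a context variable

        int dataRows = totalRowCount - headerCount;
        // Round up so a partial final chunk still gets its own iteration.
        int totalBatches = (int) Math.ceil((double) dataRows / batchSize);
        System.out.println("Total batches: " + totalBatches);   // 11

        // tLoop iterates from 1 to totalBatches; tFilterRow keeps the rows whose
        // generated rowNo falls inside the range for the current iteration.
        for (int iteration = 1; iteration <= totalBatches; iteration++) {
            int lowerBound = (iteration - 1) * batchSize + 1;
            int upperBound = Math.min(iteration * batchSize, dataRows);
            System.out.println("Iteration " + iteration
                    + ": rowNo " + lowerBound + " to " + upperBound);
        }
        // Iteration 3 prints: rowNo 201 to 300, matching the worked example above.
    }
}
```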

 

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • Processing time remains low even when data transformations are applied.
  • Grouping records from a large dataset and processing them as a single unit significantly improves performance.
  • Batch processing scales easily to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 2

Trust, Data, and the Human Side of AI: Lessons From a Lifelong Automotive Leader https://blogs.perficient.com/2025/10/02/customer-experience-automotive-wally-burchfield/ https://blogs.perficient.com/2025/10/02/customer-experience-automotive-wally-burchfield/#respond Thu, 02 Oct 2025 17:05:47 +0000 https://blogs.perficient.com/?p=387540

In this episode of “What If? So What?”, Jim Hertzfeld sits down with Wally Burchfield, former senior executive at GM, Nissan, and Nissan United, to explore what’s driving transformation in the automotive industry and beyond. 

 Wally’s perspective is clear: in a world obsessed with automation and data, the companies that win will be the ones that stay human. 

 From “Build and Sell” to “Know and Serve” 

 The old model was simple: build a car, sell a car, repeat. But as Wally explains, that formula no longer works in a world where customer expectations are shaped by digital platforms and instant personalization. “It’s not just about selling a product,” he said. “It’s about retaining the customer through a high-quality experience, one that feels personal, respectful, and effortless.” Every interaction matters, and every brand is in the experience business. 

 Data Alone Doesn’t Build Loyalty – Trust Does 

 It’s true that organizations have more data than ever before. But as Wally points out, it’s not how much data you have, it’s what you do with it. The real differentiator is how responsibly, transparently, and effectively you use that data to improve the customer experience. 

 “You can have a truckload of data but if it doesn’t help you deliver value or build trust, it’s wasted,” Wally said. 

 When used carelessly, data can feel manipulative. When used well, it creates clarity, relevance, and long-term relationships. 

 AI Should Remove Friction, Not Feeling 

 Wally’s take on AI is refreshingly grounded. He sees it as a tool to reduce friction, not replace human connection. Whether it’s scheduling service appointments via SMS or filtering billions of digital signals, the best AI is invisible, working quietly in the background to make the customer feel understood. 

 Want to Win? Listen Better and Faster 

 At the end of the day, the brands that thrive won’t be the ones with the biggest data sets; they’ll be the ones that move fast, use data responsibly, and never lose sight of the customer at the center. 

🎧 Listen to the full conversation with Wally Burchfield for more on how trust, data, and AI can work together to build lasting customer relationships—and why the best strategies are still the most human. 

Subscribe Where You Listen

Apple | Spotify | Amazon | Overcast | Watch the full video episode on YouTube

Meet our Guest – Wally Burchfield

Wally Burchfield is a veteran automotive executive with deep experience across retail, OEM operations, marketing, aftersales, dealer networks, and HR. 

He spent 20 years at General Motors before joining Nissan, where he held multiple VP roles across regional operations, aftersales, and HR. He later served as COO of Nissan United (TBWA), leading Tier 2/3 advertising and field marketing programs to support dealer and field team performance. Today, Wally runs a successful consulting practice helping OEMs, partners, and dealer groups solve complex challenges and drive results. A true “dealer guy”, he’s passionate about improving customer experience, strengthening OEM-dealer partnerships, and challenging the status quo to unlock growth. 

Follow Wally on LinkedIn  

Learn More about Wally Burchfield

 

Meet our Host

Jim Hertzfeld

Jim Hertzfeld is Area Vice President, Strategy for Perficient.

For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.

Connect with Jim:

LinkedIn | Perficient

 

 

Perficient Included in IDC Market Glance: Enterprise Intelligence Services Report https://blogs.perficient.com/2025/10/01/perficient-included-in-idc-market-glance-enterprise-intelligence-services-report-2/ https://blogs.perficient.com/2025/10/01/perficient-included-in-idc-market-glance-enterprise-intelligence-services-report-2/#respond Wed, 01 Oct 2025 19:13:40 +0000 https://blogs.perficient.com/?p=387611

Enterprise intelligence is redefining the future of business. For modern organizations, the ability to harness information and turn it into strategic insight is no longer optional; it’s essential. Organizations increasingly recognize that enterprise intelligence is the catalyst for smarter decisions, accelerated innovation, and transformative customer experiences. As the pace of digital transformation quickens, those who invest in intelligent technologies are positioning themselves to lead.

IDC Market Glance: Enterprise Intelligence Services, 3Q25

We’re proud to share that Perficient has once again been included in IDC’s Market Glance: Enterprise Intelligence Services report (doc #US52792625, September 2025). This marks our second consecutive year being included in the “IT Services Providers with Enterprise Intelligence Services offerings” category, which we believe reinforces our commitment to delivering innovative, data-driven solutions that empower enterprise transformation.

IDC defines Enterprise Intelligence as “an organization’s capacity to learn combined with its ability to synthesize the information it needs in order to learn and to apply the resulting insights at scale by establishing a strong data culture.”

We believe our inclusion highlights Perficient’s continued investment in enterprise intelligence capabilities and our ability to embed these technologies into traditional IT services to drive smarter, faster outcomes for our clients.

We’re honored to be included alongside other providers and remain committed to helping organizations harness the power of enterprise intelligence to unlock new opportunities and accelerate growth.

Engineering the Future of Enterprise Intelligence

IDC notes: “Many IT services providers with heritage in systems integration, application development and management, and IT infrastructure services have practices focusing on technical advice, implementation and integration, management, and support of enterprise intelligence technology solutions.”

We don’t just deliver data services. We engineer intelligent ecosystems powered by AI that bring your data strategy to life and accelerate enterprise transformation. Our Data practice integrates every facet of enterprise intelligence, with a focus on AI-driven strategy, implementation, integration, and support of advanced, end-to-end technologies that reshape how businesses think, operate, and grow.

The future of enterprise intelligence is more than data collection. It’s about building adaptive, AI-enabled frameworks that learn, evolve, and empower smarter, faster decision-making.

To discover how Perficient can help you harness the power of enterprise intelligence and stay ahead of digital disruption, visit Data Solutions/Perficient.
