AI at the Service of Talent: Building Evaluator Agents with Copilot and n8n

Abstract

This article explores how artificial intelligence can transform technical evaluation processes in talent acquisition, particularly in the software development field. Through the Dessert & Learn session, two complementary approaches are presented: a basic one using prompts in Copilot for automated code analysis, and a more advanced one using intelligent agents built with n8n and the RAG (Retrieval-Augmented Generation) pattern. These tools demonstrate how bias can be reduced, evaluation accuracy improved, and hiring processes scaled. Additionally, the fundamentals of RAG and n8n are explained, highlighting their potential to build contextualized automation workflows that are replicable across multiple industries.


Introduction

In a world where tech talent is increasingly scarce and competitive, traditional hiring processes face major challenges: subjective evaluations, lack of scalability, unconscious biases, and difficulties in identifying the true technical potential of candidates.

In this context, Artificial Intelligence (AI) emerges as a powerful ally. In this Dessert & Learn session, we explore how to build intelligent evaluator agents using tools like Copilot, n8n, and the RAG (Retrieval-Augmented Generation) pattern to transform the way we assess technical talent.

 

The Challenge of Traditional Hiring

In traditional hiring processes—particularly in the software development field—companies face multiple challenges:

  • Subjective evaluations, often based more on intuition than on objective technical criteria.
  • Lack of scalability, as manually reviewing dozens or hundreds of resumes consumes significant time and resources.
  • Difficulty identifying true technical potential, due to poorly structured interviews or misaligned assessments.
  • Unconscious biases, which can undermine fairness in candidate selection.

These challenges not only slow down the hiring process but also increase the risk of poor decisions, negatively impacting productivity, organizational culture, and talent retention. In this context, artificial intelligence emerges as a powerful tool to transform and optimize the recruitment workflow.

From Prompts to Agents: The Automation Journey

Automation begins with something as simple as a prompt in Copilot, but it evolves into autonomous agents capable of reasoning, retrieving contextual information, and making decisions. This journey involves integrating tools like OpenAI, Pinecone, and orchestration platforms such as n8n, enabling the creation of intelligent workflows that analyze, classify, and generate automated reports.

Basic Agent with Copilot

The first step was to design a specific prompt in Copilot to analyze technical development tests. This basic agent evaluates the source code submitted by a candidate, identifying best practices, common mistakes, and compliance with technical criteria. It’s an agile solution for obtaining an initial automated assessment.


Figure 1. Technical Interviews Agent

This agent is designed to assist in the technical evaluation of candidates applying for .NET developer positions. Its main role is to analyze the responses given during technical interviews and generate a detailed report based on widely accepted principles such as Clean Code, Object-Oriented Programming (OOP), SOLID principles, Entity Framework, software architecture, REST services, software testing (unit, integration, TDD), and experience with both relational and NoSQL databases. Through this analysis, the agent identifies technical strengths, applied best practices, and highlighted areas of knowledge, as well as potential improvement opportunities or topics the candidate should reinforce. This process provides a clear and structured view of the interviewee’s technical readiness, facilitating well-informed decision-making during the selection process. The ultimate goal is to ensure that the candidate meets the organization’s expected technical standards.


Figure 2. Configuring the Agent in Copilot
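
To make this more concrete, below is a minimal Python sketch of what an evaluation prompt of this kind could look like. The criteria list and report structure are illustrative assumptions drawn from the description above, not the exact instructions configured in the Copilot agent.

# Illustrative evaluation prompt; criteria and report format are assumptions.
EVALUATION_PROMPT = """
You are a technical evaluator for .NET developer candidates.
Analyze the interview answers below against these criteria:
- Clean Code, OOP, and SOLID principles
- Entity Framework and data access
- REST services and software architecture
- Testing (unit, integration, TDD)
- Relational and NoSQL databases

Return a structured report with:
1. Technical strengths and applied best practices
2. Improvement opportunities or topics to reinforce
3. An overall readiness assessment

Candidate answers:
{candidate_answers}
"""

def build_prompt(candidate_answers: str) -> str:
    """Fill the template with the candidate's interview answers."""
    return EVALUATION_PROMPT.format(candidate_answers=candidate_answers)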

Agent with n8n and RAG

In this demonstration, I will showcase an advanced evaluator agent built using n8n and the Retrieval-Augmented Generation (RAG) pattern. This agent goes beyond simple prompts by integrating multiple tools and services to perform intelligent evaluations of software developer profiles.

The agent uses n8n as the orchestration platform, combining OpenAI for language processing, Pinecone for vector-based retrieval of previously indexed CVs or technical documents, and custom logic for decision-making. The workflow includes:

  • Semantic understanding of input (e.g., technical skills, project history)
  • Contextual search across a vector store to enrich the analysis
  • Rule-based scoring and classification (Junior, Mid, Senior)
  • Generation of structured, automated technical reports

This example illustrates how to build scalable and intelligent automation flows capable of interpreting, reasoning, and deciding in real-world hiring processes.
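
As a rough illustration of the rule-based scoring and classification mentioned above, the sketch below combines a few simple signals into a seniority label. The signals, weights, and thresholds are invented for demonstration purposes; the real rules are encoded inside the n8n workflow itself.

# Simplified scoring sketch; weights and thresholds are illustrative assumptions.
def classify_candidate(years_experience: float, skills_matched: int, test_score: float) -> str:
    """Combine simple signals into a Junior / Mid / Senior label."""
    score = years_experience * 10 + skills_matched * 5 + test_score
    if score >= 120:
        return "Senior"
    if score >= 70:
        return "Mid"
    return "Junior"

print(classify_candidate(years_experience=4, skills_matched=6, test_score=78))  # -> Senior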

First Flow on n8n

This automated workflow in n8n is designed to process candidate folders based on their technical profile (FullStack React.js, Angular, BackEnd with C#, or Java), stored in Dropbox. Upon execution, the workflow lists all available folders, splits them individually, and routes each one using a Switch Folders node according to the detected stack. It then lists and downloads the corresponding files from each folder, which may include resumes, interview notes, or technical responses. These documents are sent to the Candidates Vectorization module, which uses Azure OpenAI to generate semantic embeddings. The resulting vectors are further processed by a Default Data Loader and text segmentation tools (Token Splitter), optimizing the format for future queries or analysis. This flow enables structured and enriched processing of candidate data, facilitating intelligent searches and contextual evaluations, ultimately supporting more efficient and accurate technical hiring decisions.


Figure 3. CV Vectorization Flow
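
The chunking step performed by the Token Splitter node can be reproduced in code with LangChain's token-based splitter. The snippet below is only an approximation of that node: the chunk size, overlap, and sample resume text are assumptions, not values taken from the actual workflow.

from langchain_text_splitters import TokenTextSplitter

# Sample resume text (invented) standing in for a downloaded CV.
resume_text = (
    "FullStack React.js developer with 5 years of experience building REST APIs "
    "in C#, writing unit tests with xUnit, and maintaining CI/CD pipelines..."
)

# Token-based chunking comparable to the Token Splitter node; sizes are assumptions.
splitter = TokenTextSplitter(chunk_size=256, chunk_overlap=32)
chunks = splitter.split_text(resume_text)
print(f"{len(chunks)} chunk(s) ready for vectorization")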

Second Flow on n8n – Candidate Assistant

This flow represents an intelligent agent designed to answer technical questions about candidates categorized by technology stack: FullStack React.js, FullStack Angular, BackEnd with C#, and BackEnd with Java. The process begins with an incoming HTTP request via a webhook. The request fields are then edited and sent to the central AI Agent node, which combines simple memory (for session context) with an Azure OpenAI chat model.

The agent evaluates which specific tool to use based on the mentioned stack and redirects the query to one of four custom tools (one per stack). Each tool is connected to a dedicated vector store containing previously generated embeddings from resumes, interview responses, or technical assessments.

For each stack, there is a dedicated chat model that interacts directly with its respective vector store, enabling contextualized and accurate responses. This modular design allows highly specialized interaction tailored to the candidate’s profile. Finally, the generated response is returned via webhook to the original request source.

This flow enables smart querying of technical information, providing a semantic search system powered by embeddings and language models—ideal for automating the evaluation process in technical talent recruitment.


Figure 4. Intelligent Agent using RAG architecture
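
For readers who want to try the flow end to end, the webhook can be called from any HTTP client. The snippet below is a hypothetical example: the URL path and the payload field names depend entirely on how the Webhook node is configured in your own n8n instance.

import requests

# Placeholder URL; the actual path depends on your Webhook node configuration.
N8N_WEBHOOK_URL = "https://your-n8n-host/webhook/candidate-assistant"

payload = {
    "sessionId": "demo-session-001",  # assumed field used for the agent's simple memory
    "question": "Which BackEnd C# candidates have Entity Framework experience?",
}

response = requests.post(N8N_WEBHOOK_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json())  # structured answer produced by the agent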

Interactive Web Chat with AI Agent

The image shows a web chat interface connected to an automation flow in n8n via a webhook. The purpose of this chatbot is to assist in technical recruitment processes by allowing users to request detailed information about candidates based on their technology stack or specific skills. In this case, the user inquires about profiles with C# experience, and the agent responds by listing three candidates: Camilo Herrera, Natalia Paredes, and Valentina Ríos, noting that none explicitly mention Entity Framework expertise. When asked for details about Camilo Herrera, the agent replies with a professional summary describing his growth from Junior Developer to Tech Lead, highlighting his experience in maintenance, refactoring, and technical leadership at CloudForge Ltda. (2022–2023). This solution combines natural language processing with retrieval-augmented generation (RAG), enabling contextualized responses based on previously vectorized information. The workflow streamlines the selection process by providing accurate, automated, and easily accessible candidate insights through a user-friendly interface.


Figure 5. Web Chat Endpoint with Embedded Agent

Conclusions

AI is a powerful ally in technical talent evaluation
Intelligent agents help reduce bias, streamline processes, and improve decision-making during developer selection.

From simple prompts to advanced agents: the journey is progressive
It’s possible to start with basic automated evaluations (like those in Copilot) and gradually evolve toward more robust, scalable solutions with n8n and RAG.

n8n enables complex workflows without sacrificing technical control
By connecting tools like OpenAI, Pinecone, and custom rules, we can build contextual evaluators that generate structured reports and make autonomous decisions.

RAG enables more informed and accurate evaluations
By retrieving relevant information in real time, agents don’t just “respond”—they understand and contextualize, improving the precision of the analysis.

This approach is replicable across multiple domains
While the use case focused on talent evaluation, the same principles apply to support, education, document analysis, healthcare, and more.

Cognitive automation is no longer the future—it’s the present
Implementing intelligent agents is already viable, accessible, and a clear competitive advantage for tech, HR, and analytics teams.

Intelligent Automation in the Healthcare Sector with n8n, OpenAI, and Pinecone

Abstract

In today’s digital-first world, healthcare organizations face increasing pressure to modernize operations and improve service delivery. Intelligent automation is no longer a luxury (it’s the foundation for scalable, efficient, and personalized healthcare systems). At Perficient, we’re driving innovation by integrating tools like n8n, Azure OpenAI, and Pinecone to develop smarter, context-aware solutions for the medical field.

This blog explores how we built an automation pipeline that connects clinical data ingestion, semantic search, and conversational interfaces (without the need for complex infrastructure). Using n8n as the orchestration engine, we retrieve medical records, process them through Azure OpenAI to generate embeddings, and store them in Pinecone, a high-performance vector database.

To complete the experience, we added an AI-powered Telegram assistant. This bot interacts with users in real time (patients or staff), answers questions, retrieves medical data, and checks doctor availability by leveraging our semantic layer.

This architecture proves how low-code platforms combined with enterprise AI and vector tools can deliver conversational and data-driven healthcare experiences. Whether you’re a provider, architect, or innovator, this solution offers a real glimpse into the future (where decisions are supported by smart, contextual agents and users get meaningful, accurate answers).

If you need more information about chunks, embeddings, and vector databases, you can visit this previous post:

https://blogs.perficient.com/2025/07/07/turn-your-database-into-a-smart-chatbot-with-openai-langchain-and-chromadb/

Proof of Concept

A simple proof of concept (POC) was developed to demonstrate how an automation and AI-based solution can effectively integrate into real clinical environments. This prototype allows users to quickly and contextually check a patient’s recorded medical visits (including relevant data such as weight, height, consultation date, and clinical notes) and verify whether a healthcare professional is available for a new appointment. The solution, built using visual workflows in n8n and connected to a structured medical database, shows how accurate and helpful responses can be delivered through a channel like Telegram (without the need for complex apps or multiple steps). All of this was achieved by combining tools like Pinecone for semantic search and Azure OpenAI for natural language understanding, resulting in a smooth, user-centered conversational experience.

Embedding Creation Flow

For the AI assistant to provide useful and contextual responses, all information related to medical appointments and clinical records must be transformed into a format that allows it to truly understand the content (not just read it literally). That’s why the first automation focuses on converting this data into embeddings (numerical representations that capture the meaning of the text).

This process runs automatically every hour, ensuring that any new data (such as a recent appointment or updated clinical note) is quickly processed and indexed. The workflow begins with an API call that retrieves the most recent clinical records or, if it’s the first run, the entire medical history. The data then goes through a processing and cleanup stage before being sent to Azure OpenAI, where the corresponding embeddings are generated.

These vectors are stored in Pinecone, a system specialized in semantic search, allowing the AI assistant to retrieve relevant information accurately (even when the user doesn’t phrase the question exactly as it was recorded).

Thanks to this preparation step, the assistant can respond with specific details about diagnoses, consultation dates, or previously recorded information (all without the user having to manually search through their history). This approach not only improves the user experience, it also ensures that the information is available at the right time and communicated in a natural way.


Figure 1: Embedding Creation Flow

Once the SQL query is executed and the clinical data is retrieved (including patient name, appointment date, medical notes, professional details, and vital signs), the records go through a transformation process to prepare them for embedding generation. This step converts the content of each appointment into a numerical format (one that the language model can understand), enabling more accurate contextual searches and more relevant responses when the assistant queries the information.


Figure 2: Clinical Data Embedding Pipeline, SQL Section

The vectorization process was carried out using the Pinecone Vector Store node, which handles the storage of the generated embeddings in a database specifically designed for high-speed semantic searches. This step ensures that the clinical information is organized in a format the assistant can easily query (even when the user’s questions don’t exactly match the original wording), significantly improving the accuracy and usefulness of each response.


Figure 3: Clinical Data Embedding Pipeline, Creation of the Chunks


Figure 4: Clinical Data Embedding Pipeline, Creation of the embeddings
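
Outside of n8n, the same indexing step can be sketched in a few lines of Python with the Azure OpenAI and Pinecone SDKs. Everything below (the resource names, the index name, the deployment name, and the sample clinical record) is an illustrative assumption, not the actual configuration of the workflow.

from openai import AzureOpenAI
from pinecone import Pinecone

openai_client = AzureOpenAI(
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com/",
)
index = Pinecone(api_key="<pinecone-key>").Index("clinical-records")  # hypothetical index

# Example clinical record (invented data) flattened into a single text fragment.
record_text = (
    "Appointment 2025-06-12 - Patient: Ana Torres - Dr. Ruiz (Cardiology). "
    "Weight 68 kg, height 1.65 m. Notes: routine control, stable blood pressure."
)

# Generate the embedding with the Azure OpenAI deployment and upsert it to Pinecone.
embedding = openai_client.embeddings.create(
    model="text-embedding-3-small",  # deployment name in Azure (assumption)
    input=record_text,
).data[0].embedding

index.upsert(vectors=[{
    "id": "appointment-2025-06-12-ana-torres",
    "values": embedding,
    "metadata": {"patient": "Ana Torres", "doctor": "Dr. Ruiz", "text": record_text},
}])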

Telegram Assistant Flow

This second workflow allows users to interact directly with the system through Telegram, using a conversational assistant connected to language models and external tools. When a message is received, the AI Agent analyzes the request (supported by an Azure OpenAI model and an internal memory that maintains context) and decides what action to take. If the user asks about medical history, the agent queries the vector store stored in Pinecone (via the Medical_History node) to retrieve relevant information. If the request is related to a doctor’s availability, the agent connects to the medical database through the Agenda_Doctors node. Finally, the response is sent back through Telegram in natural language (clear and to the point), allowing for a conversational experience that is agile, helpful, and aligned with the needs of a clinical environment.


Figure 5: AI-Powered Telegram Assistant for Clinical Queries
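
The retrieval side used by the Medical_History tool can be approximated in the same way: embed the incoming question and query Pinecone for the closest records. The resource and deployment names below reuse the same assumptions as the indexing sketch above.

from openai import AzureOpenAI
from pinecone import Pinecone

openai_client = AzureOpenAI(
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com/",
)
index = Pinecone(api_key="<pinecone-key>").Index("clinical-records")

# Embed the user's question and look for the closest clinical records.
question = "When was Ana Torres's last cardiology appointment?"
query_vector = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input=question,
).data[0].embedding

results = index.query(vector=query_vector, top_k=3, include_metadata=True)
for match in results.matches:
    print(round(match.score, 3), match.metadata["text"])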

This image shows a real example of the assistant working within Telegram. Through a natural conversation, the bot is able to identify the patient by last name, retrieve their full name, and then provide the date and time of their last medical appointment (including the doctor’s name and specialty). All of this happens within seconds and without the need to navigate through portals or forms, demonstrating how the integration of AI, semantic search, and instant messaging can streamline access to clinical information in a fast and accurate way.


Figure 6: Real-Time Patient Query via Telegram Assistant
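
In the workflow, the reply is delivered by n8n's Telegram node, but under the hood it is a plain Bot API call. The hypothetical snippet below shows the equivalent HTTP request for anyone replicating this last step outside n8n; the token, chat id, and answer text are placeholders.

import requests

BOT_TOKEN = "<telegram-bot-token>"  # placeholder
CHAT_ID = "<chat-id>"               # placeholder

answer = "Ana Torres's last appointment was on 2025-06-12 with Dr. Ruiz (Cardiology)."

# sendMessage is the standard Telegram Bot API method used to deliver the reply.
resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    json={"chat_id": CHAT_ID, "text": answer},
    timeout=30,
)
resp.raise_for_status()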

Conclusions

  • Intelligent automation improves efficiency in clinical environments
    By combining tools like n8n, Azure OpenAI, and Pinecone, it’s possible to build workflows that reduce repetitive tasks and provide faster access to medical information (without constant manual intervention).

  • Vectorizing clinical data enables more accurate queries
    Transforming medical records into embeddings allows for more effective semantic searches (even when users don’t phrase their questions exactly as written in the original text).

  • Conversational assistants offer a natural and accessible experience
    Integrating the workflow into platforms like Telegram lets users interact with the system in an intuitive and direct way (without technical barriers or complex interfaces).

  • Hourly updates ensure information is always current
    Running the embedding process every hour keeps the system in sync with the latest records (which improves the accuracy and relevance of the assistant’s responses).

  • A well-structured POC shows the real value of AI in healthcare
    Even as a prototype, this case demonstrates how artificial intelligence can be applied in a concrete and functional way in the healthcare sector (enhancing both user experience and internal processes).

Turn Your Database into a Smart Chatbot with Azure OpenAI, LangChain, and ChromaDB

Abstract

In recent years, language models and AI have reshaped how we interact with data. Yet, a significant portion of business knowledge still resides in relational databases (highly structured but not exactly user-friendly for those unfamiliar with SQL). So what if you could query that data using natural language, as if you were speaking to a smart assistant?

In this post, we’ll walk through a practical proof of concept (POC) using the well-known Northwind database as our data source. We’ll apply Retrieval-Augmented Generation (RAG) techniques to convert structured data into meaningful, searchable knowledge. By combining tools like LangChain, OpenAI Embeddings, and ChromaDB, we’ll build a system that can answer real business questions about customers, orders, and sales (all without writing a single SQL query).

This hands-on example will demonstrate how to turn raw database records into descriptive text, generate semantic embeddings, and store them in a vector database optimized for intelligent retrieval. Finally, we’ll connect a conversational AI model to deliver precise, context-aware answers in plain language.

Our goal isn’t just to show a technical integration (it’s much more than that). We aim to explore a new way of accessing business data that’s intuitive, scalable, and aligned with the future of intelligent interfaces. If you’ve ever wondered how to bring your data to life through conversational AI, this is the roadmap you’ve been looking for.

Introduction

Relational databases have been the backbone of enterprise systems for decades, offering structured and reliable storage for vast amounts of business information. However, accessing that data still relies heavily on technical users who are comfortable writing SQL queries (which limits its utility for many other roles within an organization). This post aims to demonstrate how we can transform structured data into accessible knowledge through AI-powered conversational interfaces.

To do this, we’ll build a practical proof of concept using the classic Northwind database (a well-known dataset that includes customers, orders, products, and more). Instead of querying the data directly with SQL, we’ll generate readable and semantically rich descriptions that can be interpreted by a language model. These textual fragments will be converted into embeddings using OpenAI, stored in a vector database powered by ChromaDB, and made retrievable through LangChain, all using Python as the orchestration layer.

Why apply embeddings to tabular data? Because it allows us to move beyond the rigid structure of SELECT statements and JOINs (toward a system where users can ask questions like “Which customer bought the most in 1997?” and receive clear, context-aware answers in natural language). This approach does not replace traditional querying techniques (instead, it complements them), making data access more inclusive and aligned with modern AI-driven experiences.

Ultimately, this opens the door to a new way of working with data (one where conversation replaces complexity and insight becomes truly accessible).

What Are Embeddings?

One of the fundamental challenges in working with language models or textual data is representation. Computers operate on numbers, but words have no inherent numerical value. To bridge this gap, we need a way to transform text into a format that machines can understand while still preserving its semantic meaning. This is precisely where embeddings come into play.

Embeddings are dense vector representations of words, sentences, or even entire documents. Each piece of text is mapped to a real-valued vector—often in a space with hundreds or thousands of dimensions—where semantic relationships can be modeled geometrically. Unlike older methods like one-hot encoding, embeddings allow us to capture similarity: words such as “king” and “queen” appear close together, while unrelated terms like “king” and “lettuce” remain far apart.
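
To make that geometric intuition concrete, here is a toy example that compares made-up three-dimensional vectors with cosine similarity. Real embeddings have hundreds or thousands of dimensions, and the numbers below are invented purely for illustration.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented 3-dimensional "embeddings"; real vectors have far more dimensions.
king = [0.80, 0.65, 0.10]
queen = [0.78, 0.70, 0.12]
lettuce = [0.05, 0.10, 0.95]

print(cosine_similarity(king, queen))    # close to 1.0 -> semantically similar
print(cosine_similarity(king, lettuce))  # much lower  -> unrelated concepts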

The real power of embeddings lies in their ability to reflect meaning. This enables models to reason not just about the surface of the text, but about what it implies—unlocking applications in translation, sentiment analysis, document classification, recommendation systems, semantic search, and, crucially, retrieval-augmented conversational agents.

In the context of this blog, we use embeddings to convert structured database records into descriptive, semantically rich text fragments. These fragments are embedded using a model like OpenAI’s and stored in a vector database. When a user asks a question, it too is embedded, and the system retrieves the most relevant fragments to generate a natural-language answer. This technique is part of what’s known as Retrieval-Augmented Generation (RAG).

Embeddings are commonly produced by pre-trained models from providers like OpenAI, Hugging Face, or Cohere. In our case, we rely on OpenAIEmbeddings, which leverage large-scale transformer models trained on diverse, multilingual datasets optimized for semantic understanding.

One of the greatest advantages of embeddings is their ability to generalize. For instance, a user might ask “Who was the top customer in 1997?” and the system can infer related notions like “highest purchase volume” or “most frequent buyer” without needing exact word matches. This goes far beyond traditional keyword-based search.

Modern embeddings are also contextual. In models like BERT, ELMo, or GPT, the vector for a word depends on its surrounding text. The word “bank” in “sat on the bank” and “deposited money in the bank” will generate entirely different embeddings. This dynamic understanding of context is one reason why these models perform so well in complex language tasks.

In our use case, we apply embeddings to fragments derived from SQL queries, effectively transforming structured information into semantically searchable knowledge. This enables a more natural interaction with data, where users don’t need to understand database schemas or SQL syntax to retrieve meaningful insights.

The pipeline involves embedding each text chunk, storing the resulting vectors in a vector store like ChromaDB, and embedding the user’s query to perform a similarity search. The most relevant matches are passed to a language model, which uses them as context to generate an intelligent, context-aware response.

This method not only streamlines access to information but also enhances accuracy by leveraging the semantic proximity between questions and data fragments.

Let's understand: what is a chunk?

In the context of language models and semantic search, chunks refer to small, meaningful segments of text that have been split from a larger document. Rather than processing an entire file or paragraph at once, the system breaks down the content into manageable pieces (usually a few hundred characters long with some overlap between them). This technique allows the model to better understand and retrieve relevant information during a query.

Chunking is essential when working with long documents or structured data transformed into natural language. It ensures that each piece maintains enough context to be useful while staying within the token limits of the language model. For example, an entire order history from a database might be divided into chunks that describe individual transactions, making it easier for the system to locate and reason over specific details.

This process not only improves the efficiency of embedding generation and similarity search but also enhances the relevance of responses provided by the conversational agent.

LangChain and ChromaDB: Connecting Language Models to Meaningful Data

To build a system where users can ask questions in natural language and receive intelligent, relevant answers, we need more than just a powerful language model (we need a framework that can manage context, memory, and retrieval). That’s where LangChain comes in.

LangChain is an open-source framework designed to help developers integrate large language models (LLMs) with external data sources and workflows. Rather than treating the model as a black box, LangChain provides structured components (like prompt templates, memory modules, agents, and chains) that make it easier to build dynamic, stateful, and context-aware applications. One of its most popular use cases is Retrieval-Augmented Generation (RAG) (where the model uses external knowledge retrieved from a document store to improve its responses).

To make this retrieval process efficient and accurate, LangChain works seamlessly with ChromaDB.

ChromaDB is a lightweight, high-performance vector database optimized for storing and querying embeddings. It enables fast similarity searches (allowing the system to retrieve the most semantically relevant pieces of information based on a user’s query). This makes Chroma ideal for use in search engines, recommendation systems, and conversational agents.

In a LangChain workflow, ChromaDB serves as the brain’s long-term memory. It stores the embedded representations of documents or data fragments, and returns the most relevant ones when queried. These fragments are then injected into the language model as context (resulting in more accurate and grounded responses).

Together, LangChain and ChromaDB bridge the gap between raw data and intelligent conversation.

Proof of Concept: From SQL Rows to Smart Conversations

In this section, we’ll walk through the steps of building a fully functional proof of concept. Our goal: enable users to ask questions in plain language (such as “Which customers placed the most orders in 1997?”) and get accurate answers generated using data from a relational database.

We’ll use the classic Northwind database, which contains tables for customers, orders, products, and more. Instead of querying it directly with SQL, we’ll extract meaningful data, turn it into descriptive text fragments, generate semantic embeddings, and store them in ChromaDB. Then, we’ll use LangChain to retrieve relevant chunks and feed them to OpenAI’s language model (turning structured data into natural conversation).

For this proof of concept, you must follow these steps:

Step 0: Environment setup

Before diving into building the intelligent assistant, it’s essential to prepare a clean and isolated development environment. This ensures that all dependencies are properly aligned, and avoids conflicts with other global Python packages or projects. Here’s how to set up everything from scratch.

Create an Embedding Deployment

Before we can generate embeddings from our text data, we need to create a deployment for an embedding model within our Azure OpenAI resource. This deployment acts as the gateway through which we send text and receive vector representations in return.

 


Figure 1: Azure OpenAI – Embedding and Chat Model Deployments.

To begin, navigate to your Azure OpenAI resource in the Azure Portal. Select the Deployments tab, then click + Create to initiate a new deployment. Choose the model text-embedding-3-small from the dropdown list (this is one of the most efficient and semantically rich models currently available). Assign a unique name to your deployment—for example, text-embedding-3-small—and ensure you take note of this name, as it will be required later in your code.

 


Figure 2: Selecting the Text Embedding Model (text-embedding-3-small) in Azure OpenAI.

Once deployed, Azure will expose a dedicated endpoint along with your API key. These credentials will allow your application to communicate securely with the deployed model. Be sure to also confirm the API version associated with the model (such as 2024-02-01) and verify that this matches the version specified in your code or environment variables.

By completing this step, you set up the foundation for semantic understanding in your application. The embedding model will convert text into high-dimensional vectors that preserve the meaning and context of the input, enabling powerful similarity search and retrieval capabilities later on in your pipeline.

LLM Model Deployment

Don’t forget to configure the LLM model as well (such as gpt-4.1-mini), since it will be responsible for generating responses during the interaction phase of the implementation.


Figure 3: Deploying the LLM Base Model in Azure OpenAI (gpt-4.1-mini).


Figure 4: Selecting a Chat completion Model from Azure OpenAI Catalog.

To connect your application with the deployed LLM, you will need the endpoint URL and the API key shown in the deployment details. This information is essential for authenticating your requests and sending prompts to the model. In this case, we are using the gpt-4.1-mini deployment with the Azure OpenAI SDK and API key authentication. Once retrieved, these credentials allow your code to securely interact with the model and generate context-aware responses as part of the proof of concept.


Figure 5: Accessing Endpoint and API Key for gpt-4.1-mini Deployment.

The key information we need from this screenshot to correctly configure our code in the Proof of Concept is the following:

  1. Endpoint URL (Target URI)
    (Located under the “Endpoint” section)
    This is the base URL you will use to send requests to the deployed model. It’s required when initializing the client in your code.

  2. API Key
    (Hidden under the “Key” field)
    This is your secret authentication token. You must pass it securely in your code to authorize requests to the Azure OpenAI service.

  3. Deployment Name
    (Shown as “gpt‑4.1‑mini” in the “Deployment info” section)
    You will need this name when specifying which model deployment your client should interact with (e.g., when using LangChain or the OpenAI SDK).

  4. Provisioning Status
    (Shows “Succeeded”)
    Confirms that the deployment is ready to use. If this status is anything other than “Succeeded,” the model is not yet available.

  5. Model Version and Creation Timestamp
    (Optional, for auditing or version control)
    Useful for documentation, debugging, or future migration planning.

Create a requirements.txt file

Start by listing all the libraries your project will depend on. Save the following content as requirements.txt in your project root:

pyodbc
langchain==0.3.25
openai==1.82.0
chromadb==1.0.10
tiktoken
pydantic
langchain-core
langchain-community
langchain-text-splitters
langchain-openai==0.1.8

This file defines the exact versions needed for everything—from LangChain and ChromaDB to the Azure OpenAI integration.

Create a virtual environment

To avoid interfering with other Python installations on your machine, use a virtual environment. Run the following command in your terminal:

python -m venv venv

This creates a dedicated folder called venv that holds a self-contained Python environment just for this project.

Activate the virtual environment

Next, activate the environment:

venv\Scripts\activate

Once activated, your terminal prompt should change to reflect the active environment.

Install dependencies

Now install all required libraries in one go by running:

pip install -r requirements.txt

This will install all the packages listed in your requirements file, ensuring your environment is ready to connect to the Northwind database and work with LangChain and Azure OpenAI. With everything in place, you’re ready to move on to building the assistant—from querying structured data to transforming it into natural, intelligent responses.

Step 1: Azure OpenAI Configuration

Before diving into code, we need to configure the environment so that our application can access Azure OpenAI services securely and correctly. This involves setting three essential environment variables:

  • AZURE_OPENAI_API_KEY (your Azure OpenAI API key)

  • AZURE_OPENAI_ENDPOINT (the full endpoint URL of your Azure OpenAI resource)

  • AZURE_OPENAI_API_VERSION (the specific API version compatible with your deployed models)

These variables are defined directly in Python using the os.environ method:

# --- 1. Environment setup ---
os.environ["AZURE_OPENAI_API_KEY"] = "<ApiKey>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "<Endpoint>"
os.environ["AZURE_OPENAI_API_VERSION"] = "<AzureOpenIAVersion>"

By setting these values, LangChain will know how to connect to your Azure deployment and access the correct models for embeddings and chat completion.

It’s important to ensure that the deployment names used in your code match exactly the ones configured in your Azure portal. With this step complete, you’re now ready to start connecting to your database and transforming structured data into natural language knowledge.

Step 2: Connecting to the database

With the environment ready, the next step is to connect to the Northwind database and retrieve meaningful records. Northwind is a well-known sample dataset that contains information about customers, employees, orders, products, and their relationships. It offers a rich source of structured data for demonstrating how to turn database rows into conversational context.

To begin, we establish a connection with a local SQL Server instance using pyodbc (a Python driver for ODBC-based databases). Once connected, we execute a SQL query that joins several related tables (Orders, Customers, Employees, Order Details, and Products). This query returns detailed records for each order (including the customer who placed it, the salesperson involved, the date, and the specific products purchased with their quantities, prices, and discounts).

By retrieving all of this information in a single query, we ensure that each order contains enough context to be transformed later into meaningful text that a language model can understand.

# --- 2. Database connection ---
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=Northwind;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("""
    SELECT 
        o.OrderID,
        c.CompanyName AS Customer,
        e.FirstName + ' ' + e.LastName AS Salesperson,
        o.OrderDate,
        p.ProductName,
        od.Quantity,
        od.UnitPrice,
        od.Discount
    FROM Orders o
    JOIN Customers c ON o.CustomerID = c.CustomerID
    JOIN Employees e ON o.EmployeeID = e.EmployeeID
    JOIN [Order Details] od ON o.OrderID = od.OrderID
    JOIN Products p ON od.ProductID = p.ProductID
""")
records = cursor.fetchall()

Step 3: Transforming records into text

Relational databases are optimized for storage and querying, not for natural language understanding. To bridge that gap, we need to convert structured rows into readable, descriptive sentences that capture the meaning behind the data.

In this step, we take the SQL query results and group them by order. Each order includes metadata (such as customer name, date, and salesperson) along with a list of purchased products. We format this information into short narratives that resemble human-written descriptions.

For example, an entry might read:
“Order 10250 was placed by ‘Ernst Handel’ on 1996-07-08. Salesperson: Nancy Davolio. Items included: 10 x Camembert Pierrot at $34.00 each with 0% discount.”

By doing this, we make the data semantically rich and accessible to the language model. Instead of dealing with abstract IDs and numeric values, the model now sees contextually meaningful information about who bought what, when, and under what conditions. These text fragments are the foundation for generating accurate embeddings and useful answers later on.

The next part of the code, which performs this task, is as follows:

# --- 3. Transform records into text ---
from collections import defaultdict

orders_data = defaultdict(list)
for row in records:
    key = (row.OrderID, row.Customer, row.Salesperson, row.OrderDate)
    orders_data[key].append((row.ProductName, row.Quantity, row.UnitPrice, row.Discount))

documents = []
for (order_id, customer, seller, date), items in orders_data.items():
    lines = [f"Order {order_id} was placed by '{customer}' on {date:%Y-%m-%d}. Salesperson: {seller}."]
    lines.append("Items included:")
    for product, qty, price, discount in items:
        lines.append(f" - {qty} x {product} at ${price:.2f} each with {discount*100:.0f}% discount.")
    documents.append(" ".join(lines))


Step 4: Splitting text into chunks

Once we have our text-based descriptions, the next challenge is to prepare them for embedding. Language models (and vector databases) perform best when working with manageable segments of text rather than long, unstructured paragraphs.

To achieve this, we break each document into smaller chunks using a character-based splitter. In this case, we set a chunk size of 300 characters with an overlap of 50 characters. The overlap ensures that important information near the edges of a chunk isn’t lost when transitioning between segments.

For example, if an order includes many products or a particularly detailed description, it may span multiple chunks. These overlapping fragments preserve continuity and improve the accuracy of downstream retrieval.

# --- 4. Split texts into chunks ---
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=50)
docs = splitter.create_documents(documents)

This process not only improves the quality of the embeddings but also enhances retrieval performance later on, when a user asks a question and the system needs to locate the most relevant context quickly.

By preparing clean, consistent input in this way, we’re setting the stage for a robust and semantically aware assistant that understands each order at a granular level.

Step 5: Creating embeddings with Azure OpenAI

With our text chunks ready, the next step is to convert them into numerical representations that capture their semantic meaning. These representations, known as embeddings, allow the system to measure how similar one piece of text is to another—not by exact words, but by meaning.

To generate these embeddings, we use the text-embedding-3-small model deployed on Azure OpenAI. This model transforms each chunk into a high-dimensional vector, where semantically similar chunks are positioned close together in vector space.

# --- 5. Create embeddings with Azure OpenAI ---
embeddings = AzureOpenAIEmbeddings(
    deployment="text-embedding-3-small",  # Deployment correcto para embeddings
    model="text-embedding-3-small",
    api_version="2024-02-01"
)

For instance, two orders that include similar products or are placed by the same customer will produce embeddings that are close in distance. This similarity is what allows the assistant to later retrieve relevant information based on a natural language query, even if the wording is different.

Using Azure OpenAI for embeddings offers both scalability and enterprise-grade integration. It also ensures compatibility with the rest of our LangChain pipeline, as the embeddings can be seamlessly stored and queried within a vector database like Chroma.

This step essentially transforms our structured business data into a format that the language model can reason over—making it a critical part of the entire retrieval-augmented workflow.

Step 6: Storing embeddings in Chroma

Once the embeddings are generated, they need to be stored in a way that allows for fast and accurate retrieval. This is where Chroma comes in—a lightweight, high-performance vector database built specifically for handling semantic search.

Each text chunk, along with its corresponding embedding, is stored in Chroma using a local persistence directory. By doing so, we’re creating a searchable memory that allows the system to quickly find the most relevant fragments when a question is asked.

# --- 6. Store the embeddings in Chroma ---
db = Chroma.from_documents(docs, embeddings, persist_directory="./northwind_db")

Chroma supports similarity search out of the box. When a user submits a query, the system converts that query into its own embedding and searches the database for nearby vectors (in other words, the most semantically related pieces of content).

This design mimics how our own memory works—we don’t recall entire databases, just the most relevant bits based on context and meaning. Storing the embeddings in Chroma gives our assistant the same ability.

By the end of this step, we’ve effectively turned structured business data into a knowledge base that can be queried using natural language, enabling more intelligent and human-like interactions.
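
Before wiring in the full question-answering chain, it can be useful to sanity-check the store directly. The short snippet below reuses the db object created above; the query text is just an example.

# Quick sanity check of the vector store; db is the Chroma instance created above.
hits = db.similarity_search("Which orders did Ernst Handel place in 1996?", k=3)
for doc in hits:
    print(doc.page_content[:120], "...")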

Step 7: Configuring the question-answering engine with AzureChatOpenAI

At this stage, we have a searchable knowledge base ready to go. Now it’s time to build the brain of the assistant—the component that takes a user’s question, retrieves the relevant context, and generates a natural, intelligent response.

We use AzureChatOpenAI, a LangChain-compatible wrapper for Azure-hosted GPT models. In this example, we configure it to use the gpt-4.1-mini deployment. This model serves as the core reasoning engine, capable of understanding user queries and formulating answers based on the data retrieved from Chroma.

LangChain’s RetrievalQA chain orchestrates the interaction. When a question is submitted, the process works as follows:

  1. The system converts the query into an embedding.

  2. Chroma searches for the most relevant chunks.

  3. The retrieved chunks are passed as context to the GPT model.

  4. The model generates a concise and informative response.

# --- 7. Configure RetrievalQA with AzureChatOpenAI ---
retriever = db.as_retriever()
llm = AzureChatOpenAI(
    deployment_name="gpt-4.1-mini",  # Deployment LLM
    model="gpt-4",                   # Model LLM
    api_version="2024-02-01",
    temperature=0
)
qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)

This architecture is what makes Retrieval-Augmented Generation (RAG) so effective. Rather than relying solely on the model’s training data, it supplements it with real, dynamic business information—allowing it to give accurate and up-to-date answers.

By combining a high-quality language model with focused contextual data, we give our assistant the tools to reason, explain, and even summarize complex order information without writing a single SQL query.

Step 8: Question loop

With everything in place, the final step is to set up a simple interaction loop that allows users to engage with the assistant through natural language. This loop waits for a user’s input, processes the question, retrieves the most relevant data from Chroma, and generates an answer using Azure OpenAI.

The experience is intuitive—users don’t need to know SQL or the structure of the database. Instead, they can simply ask questions like:

  • Which employee achieved the highest total unit sales?
  • Which discounted products were the most frequently sold?
  • Which customer purchased the widest variety of products?
  • Which order had the highest overall value?
  • Who processed the most orders in July 1996?

Behind the scenes, the assistant interprets the question, finds the best-matching entries from the embedded knowledge base, and composes a response based on actual transactional data.

# --- 8. Question loop ---
print("Ask me about any order from the Northwind database.")
while True:
    question = input("\nYour question: ")
    if question.lower() == "exit":
        break
    result = qa(question)
    print("\n💡 Aswer:\n", result["result"])

This approach makes data exploration conversational. It lowers the barrier for interacting with structured information, opening new possibilities for customer support, sales analysis, and executive reporting—all through plain language.

At the end of this loop, we’ve built something more than just a chatbot. It’s a practical proof of concept showing how large language models can bring structured data to life in real time, transforming static records into dynamic, human-centered knowledge.

Northwind Assistant in Action: Sample Questions and Answers

This section showcases the assistant in action. By combining data from the Northwind database with Azure OpenAI and Chroma, we’ve built a system that understands natural language and responds with precise, contextual answers. Instead of writing complex SQL queries, users can now explore business insights simply by asking questions. Below are some example queries and the kind of intelligent responses the assistant is capable of generating.


Figure 6: User prompt asking which discounted products were sold the most.


Figure 7: AI-generated answer showing the top-selling products with discounts applied.


Figure 8: Natural language question about which order had the highest total value.


Figure 9: Response calculating and displaying the order with the highest overall value.

Conclusions

Combining SQL with LLMs unlocks new value from legacy data
By extracting and transforming structured information from a traditional database like Northwind, we demonstrated how even decades-old datasets can become fuel for modern AI-driven insights (without rewriting backend systems).

Semantic search enhances how we ask and answer questions
Using embeddings and a vector store (in this case, ChromaDB), the assistant is able to retrieve contextually relevant chunks of information instead of relying on rigid keyword matches. This allows for more flexible and intuitive interactions.

Natural language becomes the new interface for analytics
Thanks to Azure OpenAI’s chat capabilities, users no longer need to write complex SQL queries to understand data. Instead, they can simply ask questions in plain English (and get coherent answers backed by structured sources).

Modularity and scalability are built into the architecture
Each step of the assistant—data extraction, transformation, embedding, storage, and retrieval—is modular. This makes it easy to extend to new datasets, scale up in the cloud, or integrate into enterprise tools and workflows.

This approach bridges the gap between business users and data
Perhaps most importantly, this proof of concept shows that language models can act as intelligent intermediaries (allowing non-technical users to access meaningful insights from complex databases, instantly and conversationally).

References

Microsoft. (2024). What is Azure OpenAI Service? Microsoft Learn.
https://learn.microsoft.com/en-us/azure/ai-services/openai/overview

Chroma. (2024). ChromaDB: The AI-native open-source embedding database.
https://docs.trychroma.com/

OpenAI. (2024). Text embeddings documentation. OpenAI API Reference.
https://platform.openai.com/docs/guides/embeddings/what-are-embeddings

FutureSmart AI. (2024). ChromaDB: An open-source vector embedding database. FutureSmart AI Blog. https://blog.futuresmart.ai/chromadb-an-open-source-vector-embedding-database

FutureSmart AI. (2023). Master RAG with LangChain: A practical guide. FutureSmart AI Blog. https://blog.futuresmart.ai/master-rag-with-langchain-a-practical-guide

Jurafsky, D., & Martin, J. H. (2023). Speech and Language Processing (3rd ed., Draft). Stanford University. Retrieved from https://web.stanford.edu/~jurafsky/slp3/

Intelligent Agents with n8n: AI-Powered Automation

Abstract

We live in a time when automating processes is no longer a luxury, but a necessity for any team that wants to remain competitive. But automation has evolved. It is no longer just about executing repetitive tasks, but about creating solutions that understand context, learn over time, and make smarter decisions. In this blog, I want to show you how n8n (a visual and open-source automation tool) can become the foundation for building intelligent agents powered by AI.

We will explore what truly makes an agent “intelligent,” including how modern AI techniques allow agents to retrieve contextual information, classify tickets, or automatically respond based on prior knowledge.

I will also show you how to connect AI services and APIs from within a workflow in n8n, without the need to write thousands of lines of code. Everything will be illustrated with concrete examples and real-world applications that you can adapt to your own projects.

This blog is an invitation to go beyond basic bots and start building agents that truly add value. If you are exploring how to take automation to the next level, this journey will be of great interest to you.

 

Introduction

Automation has moved from being a trend to becoming a foundational pillar for development, operations, and business teams. But amid the rise of tools that promise to do more with less, a key question emerges: how can we build truly intelligent workflows that not only execute tasks but also understand context and act with purpose? This is where AI agents begin to stand out.

This blog was born from that very need. Over the past few months, I’ve been exploring how to take automation to the next level by combining two powerful elements: n8n (a visual automation platform) and the latest advances in artificial intelligence. This combination enables the design of agents capable of understanding, relating, and acting based on the content they receive—with practical applications in classification, search, personalized assistance, and more.

In the following sections, I’ll walk you through how these concepts work, how they connect with each other, and most importantly, how you can apply them yourself (without needing to be an expert in machine learning or advanced development). With clear explanations and real-world examples built with n8n, this blog aims to be a practical, approachable guide for anyone looking to go beyond basic automation and start building truly intelligent solutions.

What are AI Agents?

An AI agent is an autonomous system (software or hardware) that perceives its environment, processes information, and makes decisions to achieve specific goals. It does not merely react to basic events; it can analyze context, query external sources, and select the most appropriate action. Unlike traditional bots, intelligent agents integrate reasoning and sometimes memory, allowing them to adapt and make decisions based on accumulated experience (Wooldridge & Jennings, 1995; Cheng et al., 2024).

In the context of n8n, an AI agent translates into workflows that not only execute tasks but also interpret data using language models and act according to the situation, enabling more intelligent and flexible processes.

From Predictable to Intelligent: Traditional Bot vs. Context-Aware AI Agent

A traditional bot operates based on a set of rigid rules and predefined responses, which limits its ability to adapt to unforeseen situations or understand nuances in conversation. Its interaction is purely reactive: it responds only to specific commands or keywords, without considering the conversation’s history or the context in which the interaction occurs. In contrast, a context-aware artificial intelligence agent uses advanced natural language processing techniques and conversational memory to adapt its responses according to the flow of the conversation and the previous information provided by the user. This allows it to offer a much more personalized, relevant, and coherent experience, overcoming the limitations of traditional bots. Context-aware agents significantly improve user satisfaction, as they can understand intent and dynamically adapt to different conversational scenarios (Chen, Xu, & Wang, 2022).


Figure 1: Architecture of an intelligent agent with hybrid memory in n8n (Dąbrowski, 2024).

How Does n8n Facilitate the Creation of Agents?

n8n is an open-source automation platform that enables users to design complex workflows visually, without the need to write large amounts of code. It simplifies the creation of intelligent agents by seamlessly integrating language models (such as OpenAI or Azure OpenAI), vector databases, conditional logic, and contextual memory storage.

With n8n, an agent can receive text input, process it using an AI model, retrieve relevant information from a vector store, and respond based on conversational history. All of this is configured through visual nodes within a workflow, making advanced solutions accessible even to those without a background in artificial intelligence.
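
To make that flow concrete, below is a minimal Python sketch of the retrieve-then-respond loop such a workflow performs. It is illustrative only: embed(), vector_store.search(), and llm.chat() are hypothetical stand-ins for the embedding, vector store, and chat model nodes you would wire together visually in n8n.

def handle_message(user_message, history, vector_store, llm, embed):
    # 1. Embed the incoming text
    query_vector = embed(user_message)
    # 2. Retrieve the most relevant documents from the vector store
    context_docs = vector_store.search(query_vector, k=3)
    # 3. Ask the language model for an answer, grounded in the retrieved context and the history
    messages = history + [
        {"role": "system", "content": "Context:\n" + "\n".join(context_docs)},
        {"role": "user", "content": user_message},
    ]
    answer = llm.chat(messages)
    # 4. Keep conversational memory for the next turn
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": answer})
    return answer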

Thanks to its modular and flexible design, n8n has become an ideal platform for building agents that not only automate tasks but also understand, learn, and act autonomously.

 

Imagen2

Figure 2: Automated workflow in n8n for onboarding and permission management using Slack, Jira, and ServiceNow (TextCortex, 2025).

Integrations with OpenAI, Python, External APIs, and Conditional Flows

One of n8n’s greatest strengths is its ability to connect with external tools and execute custom logic. Through native integrations, it can interact with OpenAI (or Azure OpenAI), enabling the inclusion of language models for tasks such as text generation, semantic classification, or automated responses.

Additionally, n8n supports custom code execution through Python or JavaScript nodes, expanding its capabilities and making it highly adaptable to different use cases. It can also communicate with any external service that provides a REST API, making it ideal for enterprise-level integrations.
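
As a rough illustration of the kind of logic a Python code node might contain, the sketch below tags incoming items so that a downstream IF or Switch node can route them. It assumes the workflow hands the node a plain list of dictionaries; the exact item-access helpers vary by n8n version, so treat this as a pattern rather than copy-paste code.

def route_items(items):
    routed = []
    for item in items:
        text = str(item.get("message", "")).lower()
        # Tag each item so a downstream conditional node can branch on the "route" field
        if any(word in text for word in ("error", "failed", "exception")):
            item["route"] = "alerts"
        elif "invoice" in text:
            item["route"] = "billing"
        else:
            item["route"] = "general"
        routed.append(item)
    return routed

print(route_items([{"message": "Payment failed for order 1042"}]))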

Lastly, its conditional flow system allows for dynamic branching within workflows, evaluating logical conditions in real time and adjusting the agent’s behavior based on the context or incoming data.

Imagen3

Figure 3: Basic conversational agent flow in n8n with language model and contextual memory.

This basic flow in n8n represents the core logic of a conversational intelligent agent. The process begins when a message is received through the “When chat message received” node. That message is then passed to the AI Agent node, which serves as the central component of the system.

The agent is connected to two key elements: a language model (OpenAI Chat Model) that interprets and generates responses, and a simple memory that allows it to retain the context or conversation history. This combination enables the agent not only to produce relevant responses but also to remember previous information and maintain coherence across interactions.

This type of architecture demonstrates how, with just a few nodes, it is possible to build agents with contextual behavior and basic reasoning capabilities—ideal for customer support flows, internal assistants, or automated conversational interfaces.

Before the agent can interact with users, it needs to be connected to a language model. The following shows how to configure this integration in n8n.

Configuring the Language Model in the AI Agent

As developers at Perficient, we have the advantage of accessing OpenAI services through the Azure platform. This integration allows us to leverage advanced language models in a secure, scalable manner, fully aligned with corporate policies, and facilitates the development of artificial intelligence solutions tailored to our needs.

One of the fundamental steps in building an AI agent in n8n is to define the language model that will be used to process and interpret user inputs. In this case, we use the OpenAI Chat Model node, which enables the agent to connect with advanced language models available through the Azure OpenAI API.

When configuring this node, n8n will require an access credential, which is essential for authenticating the connection between n8n and your Azure OpenAI service. If you do not have one yet, you can create it from the Azure portal by following these steps:

  • Go to the Azure portal. If you do not yet have an Azure OpenAI resource, create one by selecting “Create a resource”, searching for “Azure OpenAI”, and following the setup wizard to configure the service with your subscription parameters. Then open the deployed resource.
  • Go to https://ai.azure.com and sign in with your Azure account. Select the Azure OpenAI resource you created and, from the side menu, navigate to the “Deployments” section. There you must create a new deployment, selecting the language model you want to use (for example, GPT-3.5 or GPT-4) and assigning it a unique deployment name. You can also click the “Go to Azure AI Foundry portal” option, as shown in the image.

Imagen4

Figure 4: Access to the Azure AI Foundry portal from the Azure OpenAI resource.

  • Once the deployment is created, go to “API Keys & Endpoints” to copy the access key (API Key) and the endpoint corresponding to your resource.

Imagen5

Figure 5: Visualization of model deployments in Azure AI Foundry.

Once the model deployment has been created in Azure AI Foundry, it is essential to access the deployment details in order to obtain the necessary information for integrating and consuming the model from external applications. This view provides the API endpoint, the access key (API Key), as well as other relevant technical details of the deployment, such as the assigned name, status, creation date, and available authentication parameters.

This information is crucial for correctly configuring the connection from tools like n8n, ensuring a secure and efficient integration with the language model deployed in Azure OpenAI.

Imagen6

Figure 6: Azure AI Foundry deployment and credentialing details.
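
Before wiring these values into n8n, it can be helpful to sanity-check the endpoint, API key, and deployment name with a short Python script. This is an optional, minimal sketch using the openai package's Azure client; the endpoint, key, API version, and deployment name are placeholders to replace with the values from your own resource.

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",  # endpoint from Azure AI Foundry
    api_key="<your-api-key>",                                         # key from "API Keys & Endpoints"
    api_version="2024-02-01",                                         # pick the version your resource supports
)

# "model" must be the deployment name you assigned in Azure, not the base model name
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Say hello from Azure OpenAI"}],
)
print(response.choices[0].message.content)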

Step 1:

In n8n, you will select the “+ Create new credential” option in the node configuration and enter the endpoint, the API key, and the deployment name you configured. Before that, however, we must first create the AI Agent:

Imagen8

Figure 7: Chat with AI Agent.

Step 2:

After the creation of the Agent, the model is added, as shown in the figure above. Within the n8n environment, integration with language models is accomplished through specialized nodes for each AI provider.

To connect our agent with Azure OpenAI services, it is necessary to select the Azure OpenAI Chat Model node in the Language Models section.

This node enables you to leverage the advanced capabilities of language models deployed in Azure, making it easy to build intelligent and customizable workflows for various corporate use cases. Its configuration is straightforward and, once properly authenticated, the agent will be ready to process requests using the selected model from Azure’s secure and scalable infrastructure.

Imagen7

Figure 8: Selection of the Azure OpenAI Chat Model node in n8n.

Step 3:

Once the Azure OpenAI Chat Model node has been selected, the next step is to integrate it into the n8n workflow as the primary language model for the AI agent.

The following image illustrates how this model is connected to the agent, allowing chat inputs to be processed intelligently by leveraging the capabilities of the model deployed in Azure. This integration forms the foundation for building more advanced and personalized conversational assistants in enterprise environments.

Imagen9

Figure 9: Connecting the Azure OpenAI Chat Model node to the AI agent in n8n.

Step 4:

When configuring the Azure OpenAI Chat Model node in n8n, it is necessary to select the access credential that will allow the connection to the Azure service.

If a credential has not yet been created, you can do so directly from this panel by selecting the “Create new credential” option.

This step is essential to authenticate and authorize the use of language models deployed in Azure within your automation workflows.

Imagen10

Figure 10: Credential selection for the Azure OpenAI Chat Model node in n8n.

Step 5:

To complete the integration with Azure OpenAI in n8n, it is necessary to properly configure the access credentials.

The following screen shows the required fields, where you must enter the API Key, resource name, API version, and the corresponding endpoint.

This information ensures that the connection between n8n and Azure OpenAI is secure and functional, enabling the use of language models deployed in the Azure cloud.

Imagen11

Figure 11: Azure OpenAI credentials configuration in n8n.

Step 6:

After creating and selecting the appropriate credentials, the next step is to configure the model deployment name in the Azure OpenAI Chat Model node.

In this field, you must enter exactly the name assigned to the model deployment in Azure, which allows n8n to use the deployed instance to process natural language requests. Remember to select the model that matches the deployment you created in Azure OpenAI, in this case gpt-4.1-mini:

Imagen12

Figure 12: Configuration of the deployment name in the Azure OpenAI Chat Model node.

Step 7:

Once the language model is connected to the AI agent in n8n, you can enhance its capabilities by adding memory components.

Integrating a memory system allows the agent to retain relevant information from previous interactions, which is essential for building more intelligent and contextual conversational assistants.

In the following image, the highlighted area shows where a memory module can be added to enrich the agent’s behavior.

Imagen13

Figure 13: Connecting the memory component to the AI agent in n8n.

Step 8:

To start equipping the AI agent with memory capabilities, n8n offers different options for storing conversation history.

The simplest alternative is Simple Memory, which stores the data directly in n8n’s internal memory without requiring any additional credentials.

There are also more advanced options available, such as storing the history in external databases like MongoDB, Postgres, or Redis, which provide greater persistence and scalability depending on the project’s requirements.

Imagen14

Figure 14: Memory storage options for AI agents in n8n.

Step 9:

The configuration of the Simple Memory node in n8n allows you to easily define the parameters for managing the conversational memory of the AI agent.

In this interface, you can specify the session identifier, the field to be used as the conversation tracking key, and the number of previous interactions the model will consider as context.

These settings are essential for customizing information retention and improving continuity in the user’s conversational experience.

Imagen15

Figure 15: Configuration of the Simple Memory node in n8n.

Step 10:

The following image shows the successful execution of a conversational workflow in n8n, where the AI agent responds to a chat message using a language model deployed on Azure and manages context through a memory component.

You can see how each node in the workflow performs its function and how the conversation history is stored, enabling the agent to provide more natural and contextual responses.

Imagen16

Figure 16: Execution of a conversational workflow with Azure OpenAI and memory in n8n.

Once a valid credential has been added and selected, the node will be ready to send requests to the chosen language model (such as GPT-3.5 or GPT-4) and receive natural language responses, allowing the agent to continue the conversation or execute actions automatically.

With this integration, n8n becomes a powerful automation tool, enabling use cases such as conversational assistants, support bots, intelligent classification, and much more.

Integration of the AI Agent into the Web Application through an n8n Workflow Triggered by a Webhook

Before integrating the AI agent into a web application, it is essential to have a ready-to-use n8n workflow that receives and responds to messages via a Webhook. Below is a typical workflow example where the main components for conversational processing are connected.

For the purposes of this blog, we will assume that both the Webhook node (which receives HTTP requests) and the Set/Edit Fields node (which prepares the data for the agent) have already been created. As shown in the following image, the workflow continues with the configuration of the language model (Azure OpenAI Chat Model), memory management (Simple Memory), processing via the AI Agent node, and finally, sending the response back to the user using the Respond to Webhook node.

Imagen17

Figure 17: n8n Workflow for AI Agent Integration with Webhook.

Before connecting the web interface to the AI agent deployed in n8n, it is essential to validate that the Webhook is working correctly. The following image shows how, using a tool like Postman, you can send an HTTP POST request to the Webhook endpoint, including the user’s message and the session identifier. As a result, the flow responds with the message generated by the agent, demonstrating that the end-to-end integration is functioning properly.

Imagen7

Figure 18: Testing the n8n Webhook with Postman.
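
If you prefer scripting the same check, a few lines of Python can stand in for Postman. The webhook URL and the message/sessionId field names below are assumptions based on this example flow; adjust them to match the Webhook and Set/Edit Fields nodes in your own workflow.

import requests

# Test webhook URL shown in the n8n Webhook node (n8n's default local port is 5678)
url = "http://localhost:5678/webhook-test/<your-webhook-path>"

payload = {
    "message": "Hello agent, can you summarize our onboarding policy?",
    "sessionId": "demo-session-001",   # used by the Simple Memory node to track the conversation
}

response = requests.post(url, json=payload, timeout=60)
print(response.status_code)
print(response.json())   # the reply generated by the AI Agent node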

    1. Successful Test of the n8n Chatbot in a WebApp: The following image shows the functional integration between a chatbot built in n8n and a custom web interface using Bootstrap. By sending messages through the application, responses from the AI agent deployed on Azure OpenAI are displayed in real time, enabling a seamless and fully automated conversational experience directly from the browser.
      Imagen19
      Figure 19: n8n Chatbot Web Interface Working with Azure OpenAI.
    2. Activating the Workflow: Before consuming the agent from a web page or an external application, it is essential to ensure that the flow in n8n is activated. As shown in the image, the “Active” button must be enabled (green) so that the webhook works continuously and can receive requests at any time. Additionally, remember that when deploying to a production environment, you must change the webhook URL, using the appropriate public address instead of “localhost”, ensuring external access to the flow.

      Imagen20
      Figure 20: Activation and Execution Tracking of the Flow in n8n.

Conclusions

Intelligent automation is essential for today’s competitiveness

Automating tasks is no longer enough; integrating intelligent agents allows teams to go beyond simple repetition, adding the ability to understand context, learn from experience, and make informed decisions to deliver real value to business processes.

Intelligent agents surpass the limitations of traditional bots

Unlike classic bots that respond only to rigid rules, contextual agents can analyze the flow of conversation, retain memory, adapt to changing situations, and offer personalized and coherent responses, significantly improving user satisfaction.

n8n democratizes the creation of intelligent agents

Thanks to its low-code/no-code approach and more than 400 integrations, n8n enables both technical and non-technical users to design complex workflows with artificial intelligence, without needing to be experts in advanced programming or machine learning.

The integration of language models and memory in n8n enhances conversational workflows

Easy connection with advanced language models (such as Azure OpenAI) and the ability to add memory components makes n8n a flexible and scalable platform for building sophisticated and customizable conversational agents.

Proper activation and deployment of workflows ensures the availability of AI agents

To consume agents from external applications, it is essential to activate workflows in n8n and use the appropriate production endpoints, thus ensuring continuous, secure, and scalable responses from intelligent agents in real-world scenarios.

References

  • Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115–152.
  • Cheng, Y., Zhang, C., Zhang, Z., Meng, X., Hong, S., Li, W., Zhao, J. (2024). Exploring Large Language Model based Intelligent Agents: Definitions, Methods, and Prospects. arXiv.
  • Chen, C., Xu, Y., & Wang, Z. (2022). Context-Aware Conversational Agents: A Review of Methods and Applications. IEEE Transactions on Artificial Intelligence, 3(4), 410-425.
  • Zamani, H., Sadoughi, N., & Croft, W. B. (2023). Intelligent Workflow Automation: Integrating Memory-Augmented Agents in Business Processes. Journal of Artificial Intelligence Research, 76, 325-348.
  • Dąbrowski, D. (2024). Day 67 of 100 Days Agentic Engineer Challenge: n8n Hybrid Long-Term Memory. Medium. https://damiandabrowski.medium.com/day-67-of-100-days-agentic-engineer-challenge-n8n-hybrid-long-term-memory-ce55694d8447
  • n8n. (2024). Build your first AI Agent – powered by Google Gemini with memory. https://n8n.io/workflows/4941-build-your-first-ai-agent-powered-by-google-gemini-with-memory
  • Luo, Y., Liang, P., Wang, C., Shahin, M., & Zhan, J. (2021). Characteristics and challenges of low code development: The practitioners’ perspective. arXiv. http://dx.doi.org/10.48550/arXiv.2107.07482
  • TextCortex. (2025). N8N Review: Features, pricing & use cases. Cybernews. https://cybernews.com/ai-tools/n8n-review/
]]>
https://blogs.perficient.com/2025/07/04/intelligent-agents-with-n8n-ai-powered-automation/feed/ 0 384044
Databricks Lakebase – Database Branching in Action https://blogs.perficient.com/2025/07/04/databricks-lakebase-database-branching-in-action/ https://blogs.perficient.com/2025/07/04/databricks-lakebase-database-branching-in-action/#respond Fri, 04 Jul 2025 07:17:16 +0000 https://blogs.perficient.com/?p=383982

What is Databricks Lakebase?

Databricks Lakebase is a Postgres OLTP engine, integrated into Databricks Data Intelligence Platform. A database instance is a compute type that provides fully managed storage and compute resources for a postgres database. Lakebase leverages an architecture that separates compute and storage, which allows independent scaling while supporting low latency (<10ms) and high concurrency transactions.

Databricks has integrated this powerful Postgres engine along with sophisticated capabilities gained through Databricks’ recent acquisition of Neon. Lakebase is fully managed by Databricks, which means no infrastructure has to be provisioned and maintained separately. In addition to being a traditional OLTP engine, Lakebase comes with the following features:

  • Openness: Lakebase is built on open-source standards.
  • Storage and compute separation: Lakebase stores data in data lakes in an open format, enabling storage and compute to scale independently.
  • Serverless: Lakebase is lightweight, meaning it can scale up and down instantly based on the load. It can scale down to zero, at which point the cost of Lakebase is for data storage only; no compute cost is applied.
  • Modern development workflow: Branching a database is as simple as branching a code repository, and it happens near instantly.
  • Built for AI agents: Lakebase is designed to support a large number of AI agents. Its branching and checkpointing capabilities enable AI agents to experiment and rewind to any point in time.
  • Lakehouse integration: Lakebase makes it easy to combine operational, analytical, and AI systems without complex ETL pipelines.

In this article, we discuss in detail how the database branching feature works in Lakebase.

Database Branching

Database branching is one of the unique features introduced in Lakebase: it lets you branch out a database in much the same way a code branch is created from an existing branch in a repository.

Branching a database is useful for creating an isolated test environment or for point-in-time recovery. Lakebase uses a copy-on-write branching mechanism to create an instant zero-copy clone of the database, with dedicated compute to operate on that branch. Because the clone is zero-copy, a branch of a parent database of any size can be created instantly.

The child branch is managed independently of the parent branch. With an isolated child branch, one can perform testing and debugging against a copy of production data. Though the parent and child databases appear separate, both instances physically point to the same data pages. Under the hood, the child database points to the same data pages the parent points to. When data changes in the child branch, a new data page is created with the changes, and it is visible only to that branch. Changes made in the child branch are never reflected in the parent branch.

How branching works

The diagrams below represent how database branching works under the hood:

Database Branching

Database Branching Updates

Lakebase in action

Here is a demonstration of how a Lakebase instance can be created, how an instance can be branched out, and how table changes behave:

To create a Lakebase instance, log in to Databricks and navigate to Compute -> OLTP Database tab -> click the “Create New Instance” button:

Create New Instance 01

Create New Instance Success 02

Click “New Query” to launch the SQL Editor for the PostgreSQL database. In the current instance, let’s create a new table and add some records.

Instance1 Create Table 03

Instance1 Query Table 04
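
If you prefer a scripted client over the built-in SQL editor, any standard Postgres driver works against the Lakebase instance. The sketch below uses psycopg2 with placeholder connection details (copy the real host, database, user, and password from the instance's connection panel) and illustrative columns to create the tbl_user_profile table and load the three sample records used in this demo.

import psycopg2

# Placeholder connection details - copy the actual values from the pginstance1 connection panel
conn = psycopg2.connect(
    host="<lakebase-instance-host>",
    port=5432,
    dbname="<your-database>",
    user="<your-user>",
    password="<your-token-or-password>",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # Illustrative schema; the table in the screenshots may differ
    cur.execute("""
        CREATE TABLE IF NOT EXISTS tbl_user_profile (
            user_id   INT PRIMARY KEY,
            user_name TEXT,
            email     TEXT
        )
    """)
    cur.executemany(
        "INSERT INTO tbl_user_profile (user_id, user_name, email) VALUES (%s, %s, %s) ON CONFLICT DO NOTHING",
        [(1, "Alice", "alice@example.com"),
         (2, "Bob", "bob@example.com"),
         (3, "Carol", "carol@example.com")],
    )
    cur.execute("SELECT * FROM tbl_user_profile ORDER BY user_id")
    for row in cur.fetchall():
        print(row)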

Let’s create a database branch “pginstance2” from instance “pginstance1”. Go to Compute -> OLTP Database -> Create Database instance.

Enter the new instance name and expand “Advanced Settings” -> enable the “Create from parent” option -> enter the source instance name “pginstance1”.

Under “Include data from parent up to”, select the “Current point in time” option. Here, we could also choose any specific point in time.

Create Instance2 05

Instance2 Create Success 06

Launch the SQL Editor from the pginstance2 database instance and query the tbl_user_profile table.

Instance2 Query Table 07

Now, let’s insert a new record and update an existing record in the tbl_user_profile table in pginstance2:

Instance2 Update Table 08

Now, let’s switch back to the parent database instance pginstance1 and query the tbl_user_profile table. The table in pginstance1 should still contain only 3 records; all the changes made to the tbl_user_profile table should be visible only in pginstance2.

Instance1 Query Table 09

Conclusion

Database changes made in one branch do not impact or appear in another branch, thereby providing clear isolation of databases at scale. Currently, Lakebase does not have a feature to merge database branches; however, Databricks is committed to delivering database merge capability in the near future.

]]>
https://blogs.perficient.com/2025/07/04/databricks-lakebase-database-branching-in-action/feed/ 0 383982
Monitoring Object Creation/Deletion in Cloud Storage with GCP Pub-Sub https://blogs.perficient.com/2025/07/02/monitoring-object-creation-deletion-in-cloud-storage-with-gcp-pub-sub/ https://blogs.perficient.com/2025/07/02/monitoring-object-creation-deletion-in-cloud-storage-with-gcp-pub-sub/#respond Wed, 02 Jul 2025 09:20:48 +0000 https://blogs.perficient.com/?p=383879

When using cloud-based event-driven systems, it’s essential to respond to changes at the storage level, such as when files are added, modified, or deleted. Google Cloud Platform (GCP) makes this easy by enabling Cloud Storage and Pub/Sub to talk to one another directly. This arrangement lets you send out structured, real-time alerts whenever something happens inside a bucket.

This configuration is specifically designed to catch deletion events. When a file is deleted from a GCS bucket, a message is sent to a Pub/Sub topic. That topic becomes the central connection point, delivering alerts to any systems that are listening, such as a Cloud Run service, an external API, or another microservice. These systems can then react by cleaning up data, recording the incident, or sending out alarms.

The architecture also takes care of critical backend needs. It employs IAM roles to control who can access what, includes retry rules for transient failures, and links to a Dead-Letter Queue (DLQ) to keep messages that couldn’t be delivered even after numerous tries. The whole system stays loosely coupled and robust because it uses tools built into GCP: you can easily add or remove downstream services without changing the original bucket. This pattern is a dependable and adaptable way to enforce cleanup rules, track changes for auditing, or initiate actions in real time.

In this article, we’ll explain the fundamental ideas, show you how to set it up, and discuss the important design choices that make this type of event notification system work with Pub/Sub to keep everything running smoothly.

Why Use Pub/Sub for Storage Notifications?

Pub/Sub makes it easy to respond to changes in Cloud Storage, like when a file is deleted, without having to connect everything closely. You don’t link each service directly to the storage bucket. Instead, you send events using Pub/Sub. This way, logging tools, data processors, and alarm systems may all work on their own without interfering with each other. The best thing? You can count on it. Even if something goes wrong, Pub/Sub makes sure that events don’t get lost. And since you only pay when messages are delivered or received, you don’t have to pay for resources that aren’t being used. This setup lets you be flexible, respond in real time, and evolve, which is great for cloud-native systems that need to be able to adapt and stay strong.

Architecture Overview

Archi Ovr

Step 1: Create a Cloud Storage Bucket

If you don’t already have a bucket, go to the Cloud Storage console, click ‘Create Bucket’, and follow these steps:
– Name: Choose a globally unique bucket name (e.g., my-delete-audit-bucket)
– Location: Pick a region or multi-region
– Default settings: You can leave the rest unchanged for this demo

S1

Step 2: Create a Pub/Sub Topic

Go to Pub/Sub in the Cloud Console and:
1. Click ‘Create Topic’
2. Name it gcs-object-delete-topic (this is the topic name used in the following steps)
3. Leave the rest as defaults
4. Click Create

Step 3: Create a Pub/Sub Subscription (Pull-based)

  1. Click on the topic gcs-object-delete-topic
  2. Click ‘Create Subscription’
  3. Choose a pull subscription
  4. Name it gcs-delete-sub
  5. Leave other options as default
  6. Click Create

Step 4: Grant Pub/Sub Permission to Publish to the Topic

Go to the IAM permissions for your topic:
1. In the Pub/Sub console, go to your topic
2. Click ‘Permissions’
3. Click ‘Grant Access’
4. Add the GCS service account: service-<project-number-sample>@gs-project-accounts.iam.gserviceaccount.com
5. Assign it the role: Pub/Sub Publisher
6. Click Save

Step 5: Connect Cloud Storage to Pub/Sub via Shell

Open your cloud shell terminal and run:

gcloud storage buckets notifications create gs://my-delete-audit-bucket --topic=gcs-object-delete-topic --event-types=OBJECT_DELETE --payload-format=json

Explanation of the gcloud command:

gs://my-delete-audit-bucket: the storage bucket to monitor

--topic: the Pub/Sub topic name

--event-types=OBJECT_DELETE: triggers only when objects are deleted

--payload-format=json: the format of the Pub/Sub message
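
If you would rather configure the notification from code than from Cloud Shell, the google-cloud-storage Python client exposes the same capability. The sketch below is an assumed-equivalent setup; the project ID is a placeholder, and the bucket and topic names match the ones used in this walkthrough.

from google.cloud import storage

client = storage.Client(project="<your-project-id>")   # placeholder project ID
bucket = client.bucket("my-delete-audit-bucket")

# Equivalent of the gcloud command above: publish OBJECT_DELETE events to the topic as JSON
notification = bucket.notification(
    topic_name="gcs-object-delete-topic",
    event_types=["OBJECT_DELETE"],
    payload_format="JSON_API_V1",
)
notification.create()
print("Notification created:", notification.notification_id)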

 

Step 6: Test the Notification System
Upload a test object (for example, test.txt) to the bucket and then delete it. Then pull the message from the Pub/Sub console.

Expected message payload:
{
  "kind": "storage#object",
  "bucket": "my-delete-audit-bucket",
  "name": "test.txt",
  "timeDeleted": "2025-06-05T14:32:29.123Z"
}
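
For a quick look at how a downstream consumer would read these events outside the console, here is a small Python sketch using the google-cloud-pubsub client. The project ID is a placeholder, and the subscription name matches the gcs-delete-sub pull subscription created earlier.

import json
from google.cloud import pubsub_v1

project_id = "<your-project-id>"       # placeholder
subscription_id = "gcs-delete-sub"

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)

# Pull up to 10 pending deletion events and acknowledge them once processed
response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 10})
for received in response.received_messages:
    event = json.loads(received.message.data.decode("utf-8"))
    print(f"Object {event.get('name')} was deleted from {event.get('bucket')} at {event.get('timeDeleted')}")
    subscriber.acknowledge(request={"subscription": subscription_path, "ack_ids": [received.ack_id]})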

Sample Use Cases

  1. Audit Trail for Object Deletion
     Use Case: Keep track of every deletion for compliance or internal audits.
     How it works: When a file is deleted, the Cloud Storage trigger sends an event to Pub/Sub. A Cloud Function or Dataflow job subscribes to the topic and writes metadata (such as the file name, timestamp, user, and bucket) to BigQuery.
     Why it matters: Keeps an immutable audit trail and supports compliance (such as HIPAA and SOC 2), security audits, and internal investigations.

  2. Enforcement of Data Retention
     Use Case: Prevent accidental deletions or make sure that data is kept for at least a minimum retention period.
     How it works: When an object is deleted, the system checks whether it should have been deleted based on its name, metadata, and other factors. If it finds a violation, it logs the incident or restores the object from backup (for example, from the Nearline or Archive storage tier).
     Why it matters: Protects data from accidental or unauthorized deletion and ensures data lifecycle policies are followed.

  3. Trigger Downstream Cleanup Jobs
     Use Case: When an object is removed, connected data in other systems should be cleaned up automatically.
     How it works: Deleting a GCS object triggers a Cloud Function or Dataflow pipeline via the Pub/Sub message. That job deletes linked records in BigQuery or Firestore, or invalidates cache/CDN entries.
     Why it matters: Keeps data systems consistent and prevents orphaned records or stale metadata.

  4. Real-Time Alerts
     Use Case: Notify teams or monitoring systems when sensitive or unexpected deletions happen.
     How it works: A Cloud Function listening to Pub/Sub inspects the event and raises an alert if the deletion matches certain criteria, such as a specific folder or file type.
     Why it matters: Enables a real-time response and increases visibility into high-risk operations.

Result:

We created a modular, fault-tolerant, real-time, event-driven pipeline by using a Pub/Sub-based notification system for Cloud Storage object deletions. When an object is removed from the specified GCS bucket (the same mechanism supports creation events via OBJECT_FINALIZE), a notification is sent to a Pub/Sub topic, and that topic makes sure the message reaches one or more downstream consumers.

Conclusion

Combining Cloud Storage with Pub/Sub for object deletions is a foundational pattern in today’s GCP designs. When something is deleted, an event is published to a Pub/Sub topic in near real time, and these events can be used for audit trails, enforcing data policies, automatic cleanups, and even alarms. This method promotes loose coupling by enabling Cloud Storage to send events without having to know who the subscribers are. Subscribers such as Cloud Functions, Dataflow, and custom applications can handle messages on their own, which makes the system easier to scale and manage. Using Pub/Sub also makes production workflows more robust because it adds reliability, parallelism, retries, and other benefits. GCP engineers who want to design cloud systems that are adaptable, responsive, and ready for the future need to master event-driven integration.

]]>
https://blogs.perficient.com/2025/07/02/monitoring-object-creation-deletion-in-cloud-storage-with-gcp-pub-sub/feed/ 0 383879
Using AI to Compare Retail Product Performance https://blogs.perficient.com/2025/06/30/using-ai-to-compare-retail-product-performance/ https://blogs.perficient.com/2025/06/30/using-ai-to-compare-retail-product-performance/#respond Mon, 30 Jun 2025 13:00:12 +0000 https://blogs.perficient.com/?p=383632

AI this, AI that. It seems like everyone is trying to shoehorn AI into everything even if it doesn’t make sense. Many of the use cases I come across online are either not a fit for AI or could be easily done without it. However, below I explore a use case that is not only a good fit, but also very much accelerated by the use of AI.

The Use Case

In the retail world, sometimes you have products that don’t seem to sell well even though they might be very similar to another product that does. Being able to group these products and analyze them as a cohort is the first useful step in understanding why.

The Data and Toolset

For this particular exercise I will be using a retail sales dataset from Zara that I got from Kaggle. It contains information about sales as well as the description of the items.

The tools I will be using are:

    • Python
        • Pandas
        • Langchain

High-level actions

I spend a lot of my time designing solutions, and one thing I’ve learned is that creating a high-level workflow is crucial in the early stages of solutioning. It allows for quick critique, communication, and change, if needed. This particular solution is not very complex; nevertheless, below are the high-level actions we will be performing.

  1. Load the CSV data into memory using Pandas
  2. Create a Vector Store to store our embeddings.
    1. Embed the description of the products
  3. Modify the Pandas dataframe to accommodate the results we want to get.
  4. Create a template that will be sent to the LLM for analysis
  5. Process each product on its own
    1. Get a list of comparable products based on the description. (This is where we leverage the LLM)
    2. Capture comparable products
    3. Rank the comparable products based on sales volume
  6. Output the data onto a new CSV
  7. Load the CSV onto PowerBI for visualization
    1. Add thresholding and filters.

The Code

All of the code for this exercise can be found here

The Template

Creating a template to send to the LLM is crucial. You can play around with it to see what works best and modify it to fit your scenario. What I used was this:

 
template = """
    You are an expert business analyst that specializes in retail sales analysis.
    The data you need is provided below. It is in dictionary format including:
    "Product Position": Where the product is positioned within the store,
    "Sales Volume": How many units of a given product were sold,
    "Product Category": The category for the product,
    "Promotion": Whether or not the product was sold during a promotion.
    There is additional information such as the name of the product, price, description, and more.
    Here is all the data you need to answer questions: {data}
    Here is the question to answer: {question}
    When referencing products, add a list of the Product IDs at the end of your response in the following format: 'product_ids = [<id1>, <id2>, ... ]'.
"""

When we iterate, we will use the following as the question:

 
question = f"Look for 5 products that loosely match this description: {product['description']}?"
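
To show how the template and question come together, here is a condensed sketch of the per-product loop. The full script in the repository may differ; the LangChain model class and the data_dict variable here are assumptions used purely for illustration, and the product_ids list is parsed from the response with a regular expression matching the format requested in the template.

import re
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI   # assumption: any LangChain chat model can be swapped in

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # placeholder model name
chain = ChatPromptTemplate.from_template(template) | llm

def find_comparables(product, data_dict):
    """Ask the LLM for comparable products and parse the product_ids list from its reply."""
    question = f"Look for 5 products that loosely match this description: {product['description']}?"
    answer = chain.invoke({"data": data_dict, "question": question}).content
    match = re.search(r"product_ids\s*=\s*\[([^\]]*)\]", answer)
    return [pid.strip() for pid in match.group(1).split(",")] if match else []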

The output

Once python does its thing and iterates over all the products we get something like this:

| Product ID | Product Name | Product Description | Sales Volume | Comparable Product 1 | Comparable Product 2 | Group Ranking |
|---|---|---|---|---|---|---|
| 185102 | BASIC PUFFER JACKET | Puffer jacket made of tear-resistant… | 2823 | 133100 | 128179 | 1 |
| 187234 | STRETCH POCKET OVERSHIRT | Overshirt made of stretchy fabric…. | 2575 | 134104 | 182306 | 0.75 |

Power BI

We then load the data into Power BI to visualize it better. This allows us not only to analyze the data using filtering and conditional formatting, but also to explore it even further with Copilot.

Look at the screenshot below. I initially set up conditional formatting so that all the products that rank low within their group are highlighted.

I then used Copilot to ask how all of these relate to each other. It was quick to point out that all of them were jackets.

Pbi Copilot

This arms us with enough information to go down a narrower search to figure out why the products are not performing. Some other questions we could ask are:

  1. Is this data seasonal and only includes summer sales?
  2. How long have these jackets been on sale?
  3. Are they all sold within a specific region or along all the stores?
  4. etc.

Conclusion

Yes, there are many, many use cases that don’t make sense for AI; however, there are many that do! I hope that what you just read sparks some creativity in how you can use AI to further analyze data. The one thing to remember is that in order for AI to work as it should, it needs contextual information about the data. That can be accomplished via semantic layers. To learn more, go to my post on semantic layers.

Do you have a business problem and need to talk to an expert about how to go about it? Are you unsure how AI can help? Reach out and we can talk about it!

]]>
https://blogs.perficient.com/2025/06/30/using-ai-to-compare-retail-product-performance/feed/ 0 383632
Revisiting Semantic Layers: Lessons from Their Rise and Return https://blogs.perficient.com/2025/06/26/revisiting-semantic-layers/ https://blogs.perficient.com/2025/06/26/revisiting-semantic-layers/#respond Thu, 26 Jun 2025 15:09:07 +0000 https://blogs.perficient.com/?p=383473

For most of my data-focused career, I’ve been dealing with semantic layers one way or another. Either because the tool I was using to present data required it explicitly, or because the solution itself needed data to have relationships defined to make sense and be better organized.

With the recent focus and hype on AI-infused solutions, there has been an increasing amount of chatter around semantic layers. What is it? What is it used for? Does my organization need one? And, what does it have to do with AI?

What are semantic layers?

In its simplest form, a semantic layer is a collection of rules that define the relationships between different data concepts. For example, your organization may have an idea of office locations and territories. Each office location belongs to one (and only one) territory. A semantic layer would contain the definition that a group of office locations constitutes a territory. Similarly, a Person may have a current address assigned to them. These definitions are typically established by the business and its operational practices. A typical business analyst would be able to define this in your organization.

The semantic layer bridges the gap between how the data is stored and how it’s used by the business.

History of semantic layers

Pre 2000s

1970s-1980s: As relational databases started to become conceptualized, there was a need to create high-level, business-oriented, views. These sometimes included business logic in the form of rollups, simple aggregations, and other functions. These concepts started laying the groundwork for modern-day semantic layers.

1980s-1990s: Data warehousing started to become common and we saw the emergence of OLAP cubes. The primary purpose of data warehousing was to support analytical processing, primarily for business use. We saw the rise of Ralph Kimball’s modeling approach (which is still very much relevant today). This started to focus on business needs when relating data tables in a warehouse.

Additionally, we saw the invention of the Online Analytical Processing (OLAP) cube. This took data warehousing a step further because multi-dimensional “cubes” allowed data to be accessed at any intersection of the dimensions it is related to. You can visualize a 3-dimensional cube that hosts transaction data with the axes being Time, Cashier, and Product, and the intersection being the Sales Price. Any point in the cube holds the sales value for one specific combination of those dimensions.

Prior to 2000, accessing data still required a high level of technical skill in addition to understanding how the business would use the data in order to solve problems or perform day-to-day operations.

Early 2000s: The Rise of Semantic Layers

The early 2000s saw a significant increase in the popularity of semantic layers. This was primarily driven by the adoption of business intelligence tools. Companies like Business Objects, Cognos, Hyperion, and MicroStrategy all had their own semantic layers. The aim was to make it easier for business users to access data.

Business Intelligence tools utilize their own semantic layers to provide:

  • Consistency and governance
  • Performance utilization
    • Caching and precalculated aggregates
    • Some tools had their own in-memory layer that served as a quicker way to store aggregated data for quick retrieval
  • Dashboards and reporting
    • Users could create their own reporting and dashboards without involving IT by leveraging business-friendly entities, without worrying about the underlying data structure.

The Fall of Semantic Layers

As BI tools (and semantic layers) gained popularity, a new professional emerged: the Business Intelligence Professional. These were highly analytical individuals who sat between IT and business users, translating business requirements into IT requirements. Additionally, they were able to create semantic models and configure various business intelligence platforms to extract the necessary business value from the stored data.

As business intelligence tools became more monolithic and harder to maintain, we started to see the emergence of departmental business intelligence tools. The most notable example is Tableau. 

In 2005, Tableau launched globally with the promise of “eliminating IT” from business intelligence. Users had the ability to connect directly to databases, spreadsheets, and other data sources, eliminating the need for an organization’s IT staff to provide connectivity or curate the data.

Because business users could connect to data and manipulate it so easily, there was no “single version of the truth”, no governance on the data being consumed, and certainly no centralized semantic layer that housed the enterprise’s business rules. Instead, each business user or department had their own view (and presentation) of the data. The time from requiring data to be presented or reported on to the time it actually happened was reduced dramatically. It was during this period that enterprise-wide semantic layers became less popular.

In parallel, many of the business rules started to become more and more incorporated into the ETL and ELT processes. This allowed some of the semantics to be precalculated before being consumed by the business intelligence layer. This had many drawbacks that were not apparent to the typical Data Engineer, but were very apparent (and important) to the business intelligence professionals.

The rise (again) of Semantic Layers

As time passed, we began to see business executives, business operators, and other data consumers questioning the veracity of the data. Since there was no centralized location, there was no central owner of the data. This is when the industry started seeing the creation of the Chief Data Officer role, which, among other things, typically has responsibility for data governance.

For some years, the battle between centralized BI and departmental BI continued. Agility versus uniformity constantly fueled arguments, and as companies started to force centralized BI, shadow IT groups began popping up within organizations. You can likely see this in your own organization, where departments run part of their operations in Excel because they lack access to proper data.

We also saw the popularity of Analytics Centers of Excellence increasing. They took care of data governance and the single version of the truth. The greatest tool at their disposal was the mighty semantic layer.

Enter Gen AI

No doubt, generative AI has taken the world by storm. Everyone is trying to make sense of it: how do I use it? Am I doing it correctly? What do I not know? One thing is certain: for Gen AI to work properly, it needs to understand how users utilize the data and what it means to them. This is accomplished by semantic layers. This little concept that has been sticking with us for decades is suddenly even more important than it was in the past.

There is a current push for smaller, purpose-built LLMs. This will undoubtedly increase the importance of semantic layers in feeding necessary metadata to the application that utilizes them.

What’s going on right now?

Currently, we are seeing an increasing number of semantic-layer-only tools that are decoupled from business intelligence platforms. Companies like AtScale, Denodo, and Dremio promise to host the business rules and apply them to queries issued by business intelligence and visualization tools. They act as a broker between such tools and the underlying data. This, in theory, has the great benefit of letting users consume the semantics built into the data from their favorite tool of choice, whether that is a command-line SQL interface, a REST API call, or a visualization tool like Tableau. Additionally, companies like Tableau, which previously lacked semantic layers, are now incorporating semantic layer capabilities into their suite of tools. Others, such as Strategy (formerly MicroStrategy), are decoupling their powerful semantic layer from their BI suite to provide it as a standalone product.

Does my organization need one?

By now, you probably already have an idea of the answer to the question of whether your organization could benefit from a semantic layer. If you want your organization to succeed in its quest to leverage AI properly and derive proper business insight from it, you should think about what is telling AI how your business operates and how that data is organized.

What do I do now?

Contact Perficient for a conversation around how we can help your organization leverage analytical tools (including artificial intelligence) properly through our experience with semantic models.

 

]]>
https://blogs.perficient.com/2025/06/26/revisiting-semantic-layers/feed/ 0 383473
An Introduction to PAPSS – Pan African Payment and Settlement System https://blogs.perficient.com/2025/06/19/an-introduction-to-papss-pan-african-payment-and-settlement-system/ https://blogs.perficient.com/2025/06/19/an-introduction-to-papss-pan-african-payment-and-settlement-system/#respond Thu, 19 Jun 2025 13:17:19 +0000 https://blogs.perficient.com/?p=382819

In existence since just July of 2019, the Pan African Payment and Settlement System (PAPSS) has in many ways surpassed the payment and settlement processes of the Western banking world. PAPSS enables money to flow efficiently and securely across African borders, minimizing risk and thereby contributing to the financial integration of the African continent. PAPSS’s core service is provided by the PAPSS Instant Payment system (“PIP”). PIP offers:

  • real time/near real time and irrevocable credits to customer accounts
  • 24/7/365 immediate confirmation to both originator and the beneficiary
  • ISO 20022 global message standard enabling interoperability
  • Cyber security and payment fraud prevention powered by Artificial Intelligence.

PAPSS fits into the payment strategies of several Perficient clients who either currently are making or receiving payments to/from Africa or are exploring ways to execute and settle financial transactions in Africa. This blog also speaks to Perficient’s clients who are focused on modernizing the payment experience.

How It Works

A. Pre-funding by participating banks

  1. Direct Participants issue credit instructions to settlement account at Central Bank.
  2. Central Bank credits pre-funded account of Direct Participant and notifies PAPSS.
  3. PAPSS credits Direct Participant’s clearing account.
  4. Indirect Participants leverage Sponsorship Agreements to issue funding instructions via Direct Participants.

Papss Pre Funding

diagram from the PAPSS corporate website

B. Instant Payment

  1. The originator issues a payment instruction in their local currency to its bank or payment service provider.
  2. The payment instruction is sent to PAPSS.
  3. PAPSS carries out all validation checks on the payment instruction.
  4. The payment instruction is forwarded to the beneficiary’s bank or payment service provider.
  5. The beneficiary’s bank clears the payment in their local currency.

How Papss Works

diagram from the PAPSS corporate website

End of Day

  1. PAPSS determines the multilateral net positions in local currency for participating central banks in their agreed settlement currencies (a simplified netting illustration follows below).
  2. PAPSS issues aggregated net settlement instructions to the African Export-Import Bank (“Afreximbank”) to debit/credit the respective bank accounts.
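
To illustrate the netting arithmetic only (not PAPSS’s actual engine), the simplified Python sketch below nets a handful of gross obligations into one end-of-day position per participant. The participant names and amounts are made up, and real settlement additionally involves local-currency conversion and the central banks’ pre-funded accounts.

from collections import defaultdict

# Illustrative gross obligations between participants, already expressed in a common settlement unit
payments = [
    ("Bank_A", "Bank_B", 120.0),
    ("Bank_B", "Bank_A", 80.0),
    ("Bank_A", "Bank_C", 50.0),
    ("Bank_C", "Bank_B", 30.0),
]

net_positions = defaultdict(float)
for payer, payee, amount in payments:
    net_positions[payer] -= amount   # payer owes this amount
    net_positions[payee] += amount   # payee is owed this amount

for bank, position in sorted(net_positions.items()):
    action = "receives" if position >= 0 else "pays"
    print(f"{bank} {action} {abs(position):.2f} at end-of-day settlement")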

Advantages of PAPSS for banks and other financial intermediaries

  • PAPSS enables cross-border payments in local currencies without the need for transactions to pass through USD-denominated correspondent banks. Traditional cross-currency clearing involves converting funds to US dollars or other foreign currencies.
  • Banks gain a simplified process that reduces the costs and complexities of foreign exchange for cross-border transactions between African markets.
  • Banks can provide an instant and secure cross-border payment capability to their customers across Africa.
  • Banks now have a platform that enables innovation in cross-border trade and access to new African markets.

Corporates, SMEs and individuals can benefit from:

  • instant/near instant payments of cross-border transactions without the hassle of currency conversion
  • improved working capital through payment certainty and faster transactions
  • access to various payment facilitating options through a growing network of financial intermediaries

The logo of PAPSS, which was on the homepage of this blog, comes from the cowrie shell, which is one of the oldest known currencies. The shell takes center stage, surrounded by radial lines that signify the connectivity of the technology of the PAPSS digital platform and the key partnerships that enable the payment system.

Ready to explore your firm’s payments strategy? 

Our financial services experts continuously monitor the regulatory landscape and deliver pragmatic, scalable solutions that meet the mandate and more. Reach out to Perficient’s Financial Services Managing Director David Weisel to discover why we’ve been trusted by 18 of the top 20 banks, 16 of the 20 largest wealth and asset management firms, and are regularly recognized by leading analyst firms.

 

]]>
https://blogs.perficient.com/2025/06/19/an-introduction-to-papss-pan-african-payment-and-settlement-system/feed/ 0 382819
Master Data Management: The Key to Improved Analytics Reporting https://blogs.perficient.com/2025/06/12/master-data-management-the-key-to-improved-analytics-reporting/ https://blogs.perficient.com/2025/06/12/master-data-management-the-key-to-improved-analytics-reporting/#respond Thu, 12 Jun 2025 14:50:34 +0000 https://blogs.perficient.com/?p=382763

In today’s data-driven business environment, organizations rely heavily on analytics to make strategic decisions. However, the effectiveness of analytics reporting depends on the quality, consistency, and reliability of data. This is where Master Data Management (MDM) plays a crucial role. By establishing a single, authoritative source of truth for critical data domains, MDM ensures that analytics reporting is built on a foundation of high-quality, trustworthy information.

The Role of MDM in Accurate Data Insights

  1. Ensuring Data Consistency and Quality

One of the biggest challenges organizations face is inconsistent and poor-quality data. Disparate systems often contain duplicate, outdated, or conflicting records, leading to inaccurate analytics. MDM addresses this by creating a golden record—a unified, clean version of each critical data entity. Through robust data governance and validation processes, MDM ensures that data used for reporting is accurate, consistent, and complete, fostering trust among the user community.

  2. Eliminating Data Silos and Enabling Systems Consolidation

Enterprises often struggle with fragmented data stored across multiple systems. This creates inefficiencies, as teams must reconcile conflicting records manually. MDM plays a pivotal role in systems consolidation by eliminating data silos and harmonizing information across the organization. By integrating data from various sources into a single, authoritative repository, MDM ensures that analytics tools and business intelligence platforms access consistent, up-to-date information.

  3. Acting as a Bridge Between Enterprise Systems

MDM does not operate in isolation—it seamlessly integrates with enterprise systems through APIs and connectors. By syndicating critical data across platforms, MDM acts as a bridge between disparate applications, ensuring a smooth flow of reliable information. This integration enhances operational efficiency and empowers businesses to leverage advanced analytics and AI-driven insights more effectively.

  4. Enhancing Data-Driven Decision-Making

With a reliable MDM framework in place, organizations can confidently use analytics to drive strategic decisions. High-quality data leads to more accurate reporting, allowing businesses to identify trends, optimize processes, and uncover new opportunities. By maintaining clean and governed master data, companies can fully realize the potential of data-driven decision-making.

Why Organizations Should Implement MDM

Organizations that invest in MDM gain a competitive edge by ensuring that their analytics and reporting efforts are based on trustworthy data. Key benefits include:

  • Improved operational efficiency through reduced manual data reconciliation
  • Higher confidence in analytics due to consistent and accurate data
  • Streamlined data integration across enterprise systems
  • Better compliance and governance with regulated data policies

By implementing MDM, businesses create a strong data foundation that fuels accurate analytics, fosters collaboration, and drives informed decision-making. In an era where data is a strategic asset, MDM is not just an option—it’s a necessity for organizations aiming to maximize their analytics potential.

Reference Data Management (RDM) plays a vital role in ensuring that standardized data—such as country codes, product classifications, industry codes, and currency symbols—remains uniform across all systems and applications. Without effective RDM, businesses risk inconsistencies that can lead to reporting errors, compliance issues, and operational inefficiencies. By centralizing the management of reference data, companies can enhance data quality, improve decision-making, and ensure seamless interoperability between different departments and software systems.

Beyond maintaining consistency, RDM is essential for regulatory compliance and risk management. Many industries, such as finance, healthcare, and manufacturing, depend on accurate reference data to meet regulatory requirements and adhere to global standards. Incorrect or outdated reference data can result in compliance violations, financial penalties, or operational disruptions. A well-structured RDM strategy not only helps businesses stay compliant but also enables greater agility by ensuring data integrity across evolving business landscapes. As organizations continue to embrace digital transformation, investing in robust Reference Data Management practices is no longer optional—it’s a necessity for maintaining competitive advantage and operational excellence.

]]>
https://blogs.perficient.com/2025/06/12/master-data-management-the-key-to-improved-analytics-reporting/feed/ 0 382763
Revolutionizing Clinical Trial Data Management with AI-Powered Collaboration https://blogs.perficient.com/2025/06/10/revolutionizing-clinical-trial-data-management-with-ai-powered-collaboration/ https://blogs.perficient.com/2025/06/10/revolutionizing-clinical-trial-data-management-with-ai-powered-collaboration/#respond Tue, 10 Jun 2025 20:51:08 +0000 https://blogs.perficient.com/?p=363367

Clinical trial data management is critical to pharmaceutical research, yet it remains a significant challenge for many organizations. The industry faces several persistent hurdles:

  • Data fragmentation: Research teams often struggle with siloed information across departments, hindering collaboration and comprehensive analysis.
  • Outdated systems: Many organizations rely on legacy data management tools that fail to meet the demands of modern clinical trials.
  • Incomplete or inaccurate data: Ensuring data completeness and accuracy is an ongoing battle, potentially compromising trial integrity and patient safety.
  • Limited data accessibility: Researchers frequently lack efficient ways to access and interpret the specific data relevant to their roles.
  • Collaboration barriers: Disparate teams often struggle to share insights and work cohesively, slowing down the research process.
  • Regulatory compliance: Keeping up with evolving data management regulations adds another layer of complexity to clinical trials.

These challenges not only slow down the development of new treatments but also increase costs and potentially impact patient outcomes. As clinical trials grow more complex and data-intensive, addressing these pain points in data management becomes increasingly crucial for researchers and product teams.

A Unified Clinical Trial Data Management Platform 

Life sciences leaders are engaging our industry experts to reimagine the clinical data review process. We recently embarked on a journey with a top-five life sciences organization that shared a similar clinical collaboration vision and, together, moved from vision to global production use of this unified platform. This cloud-based, client-tailored solution leverages AI, rich integrations, and collaborative tools to streamline the clinical trial data management process. 

Key Features of Our Client-Tailored Clinical Data Review Solution: 

  1. Data Review Whiteboard: A centralized module providing access to clean, standardized data with customized dashboards for different team needs.
  2. Patient Profiles: Easily track individual trial participants across multiple data domains, ensuring comprehensive patient monitoring.
  3. EDC Integration: Seamlessly integrate Electronic Data Capture system queries, enabling interactive conversations between clinical team members.
  4. Study Setup: Centralize and manage all metadata, facilitating efficient study design and execution.
  5. AI-Powered Insights: Leverage artificial intelligence to analyze vast amounts of clinical trial data, automatically identify anomalies, and support improved decision-making.

The Impact: Enhanced Collaboration and Faster Results 

By implementing our clinical trial data management solution, organizations can: 

  • Ensure patient safety through comprehensive data visibility
  • Break down data silos, promoting collaboration across teams 
  • Accelerate the development of new treatments 
  • Improve decision-making with AI-driven insights 
  • Streamline the clinical data review process 

Breaking Down Clinical Data Silos for Better Outcomes

Leveraging a modern, cloud-based architecture and open-source technologies to create a unified clinical data repository, the clinical data review solution takes aim at the silos that have historically plagued the clinical review process. By breaking down these silos, researchers can avoid duplicating efforts, share insights earlier, and ultimately accelerate the development of new treatments.

AI Drives Clinical Data Insights 

Clinical trials produce vast amounts of data, all of it useful but potentially cumbersome to sort and examine. That’s where artificial intelligence (AI) models can step in, analyzing and extracting meaning from mountains of raw information. They can also be deployed to automatically identify anomalies, alerting researchers that further action is needed. By embedding AI directly into its main data pipelines, our tailored clinical data review solution supports improved decision-making.
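
As a minimal illustration of the kind of anomaly flagging described above (not the platform's actual models or pipeline), the following Python sketch applies scikit-learn's Isolation Forest to simulated lab values; the data, features, and contamination rate are all made up for the example.

    # Illustrative only: flag unusual patient visits based on two simulated lab measurements.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal_visits = rng.normal(loc=[100.0, 4.5], scale=[10.0, 0.4], size=(200, 2))
    outlier_visits = np.array([[180.0, 4.6], [95.0, 9.0]])   # implausible values
    visits = np.vstack([normal_visits, outlier_visits])

    # The model learns what "typical" visits look like and scores every visit against that.
    model = IsolationForest(contamination=0.01, random_state=0)
    labels = model.fit_predict(visits)        # -1 = flagged as anomalous, 1 = normal

    flagged = np.where(labels == -1)[0]
    print(f"Flagged {len(flagged)} of {len(visits)} visits for review: rows {flagged.tolist()}")

In a production setting, flags like these would feed the review dashboards and alerts described above rather than a print statement.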

Data Puts Patients First 

Patient safety must be the number one concern of any ethical trial, and clinical research data can play a key role in ensuring it. With a clinical data hub offering unparalleled visibility into every piece of data generated for the trial, from lab results and anomalies to adverse reactions, teams can track the well-being of each patient in their study. Users can flag potential issues, making it easy for collaborators to review any concerns.

Success In Action

Our tailored solution for a top-five life sciences leader integrated data from 13 sources and included bi-directional EDC integration and multiple AI models. Our deep understanding of clinical trial processes, data management, and platforms proved instrumental in delivering a solution that met—and exceeded—expectations. 

Want to know more about our approach to clinical trial data collaboration? Check out our guide on the subject.

Transform Clinical Data Review With An Expert Partner

Discover why the largest life sciences organizations – including 14 of the top 20 pharma/biotech firms, 6 of the top 10 CROs, and 14 of the top 20 medical device organizations – have counted on our world-class industry capabilities and experience with leading technology innovators. Our deep expertise in life sciences and digital technologies, including artificial intelligence and machine learning, helps transform the R&D process and deliver meaningful value to patients and healthcare professionals.

Contact us to learn about our life sciences and healthcare expertise and capabilities, and how we can help you transform your business.

Empower Healthcare With AI-Driven Insights

PWC-IDMC Migration Gaps https://blogs.perficient.com/2025/06/05/pwc-idmc-migration-gaps/ https://blogs.perficient.com/2025/06/05/pwc-idmc-migration-gaps/#respond Thu, 05 Jun 2025 05:26:54 +0000 https://blogs.perficient.com/?p=382445

With technology advancing almost by the minute, upgrading is essential for a business to stay competitive, deliver a customer experience beyond expectations, and derive value from every process with fewer resources.

Platform upgrades, software upgrades, security upgrades, architectural enhancements, and similar initiatives are required to ensure stability, agility, and efficiency.

Customers prefer to move from legacy systems to the cloud because of what it offers. Across cost, monitoring, maintenance, operations, ease of use, and overall landscape, the cloud has transformed D&A businesses significantly over the last decade.

Moving from Informatica PowerCenter (PWC) to IDMC is widely seen as the need of the hour because of the significant advantages it offers. Developers must understand both flavors to perform this code transition effectively.

This post explains the PWC vs. IDMC CDI gaps from three perspectives:

  • Development
  • Data
  • Operations

Development

  • Differences in native datatypes can be observed in IDMC when importing a Source, Target, or Lookup. Workaround as follows:
    • If any inconsistency is observed in IDMC mappings with the native datatype/precision/scale, edit the metadata to keep it in sync between the DDL and the CDI mappings.
  • In CDI, taskflow workflow parameter values experience read and consumption issues. Workaround as follows:
    • Create a dummy mapping task that defines the list of parameters/variables to be consumed by tasks within the taskflows (e.g., Command task, Email task).
    • Limit the number of dummy mapping tasks created during this process.
    • Best practice is to create one dummy mapping task per folder to capture all the parameters/variables required for that entire folder.
    • For variables whose values must persist into the next taskflow run, map the variable value to the dummy mapping task via an Assignment task. This dummy mapping task is used at the start and end of the taskflow so that the overall taskflow processing is enabled for incremental data processing.
  • All mapping tasks/sessions in IDMC are reusable and can be used in any taskflow. If some audit sessions are expected to run concurrently within other taskflows, ensure that the property “Allow the mapping task to be executed simultaneously” is enabled.
  • Sequence generator: data overlap issues occur in CDI. Workaround as follows:
    • If a sequence generator is likely to be used in multiple sessions/workflows, make it a reusable/shared sequence.
  • VSAM sources/Normalizer are not available in CDI. Workaround as follows:
    • Use the Sequential File connector type for mappings that use mainframe VSAM sources/Normalizer.
  • Sessions configured with STOP ON ERRORS > 0. Workaround as follows:
    • Set the LINK condition for the next task to “PreviousTask.TaskStatus – STARTS WITH ANY OF 1, 2” within CDI taskflows.
  • Partitions are not supported with sources in query mode. Workaround as follows (a generic sketch of the manual partitioning idea appears after this list):
    • Create multiple sessions and run them in parallel.
  • Currently, parameterization of schema/table is not possible for mainframe DB2. Workaround as follows:
    • Use an ODBC-type connection to access DB2 with schema/table parameterization.
  • A mapping with a LOOKUP transformation used across two sessions cannot be overridden at the session or mapping task level to enable or disable caching. Workaround as follows:
    • Use two different mappings with LOOKUP transformations if one mapping/session needs the cache enabled and the other needs it disabled.
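
The partitioning workaround above amounts to splitting one large query into non-overlapping key ranges and running each range as its own parallel session. The Python sketch below illustrates only that general idea, outside of IDMC; the table, key column, and ranges are hypothetical.

    # Generic illustration of manual partitioning: split a query-mode source into
    # key ranges and run the extracts in parallel (here with threads; in CDI these
    # would be separate sessions scheduled to run concurrently).
    from concurrent.futures import ThreadPoolExecutor

    KEY_RANGES = [(1, 100_000), (100_001, 200_000), (200_001, 300_000)]  # hypothetical split


    def extract_range(bounds):
        low, high = bounds
        # Each range would become its own session/query in the real workaround.
        query = f"SELECT * FROM orders WHERE order_id BETWEEN {low} AND {high}"
        print(f"Running partition: {query}")
        return f"partition {low}-{high} done"


    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=len(KEY_RANGES)) as pool:
            for result in pool.map(extract_range, KEY_RANGES):
                print(result)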

Data

  • IDMC output data contains additional double quotes. Workaround as follows:
    • Session level – use the property __PMOV_FFW_ESCAPE_QUOTE=No
    • Administrator settings level – use the property UseCustomSessionConfig = Yes
  • IDMC output data contains additional scale digits with the Decimal datatype (e.g., 11.00). Workaround as follows (equivalent logic is sketched after this list):
    • Use an IF-THEN-ELSE expression to remove the unwanted trailing zeros (output: 11.00 -> 11).
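
The trailing-zero cleanup is implemented as an IF-THEN-ELSE expression inside the mapping; the short Python sketch below only shows the equivalent logic for reference.

    # Equivalent logic for the decimal cleanup described above (illustrative only).
    from decimal import Decimal


    def strip_trailing_zeros(value: str) -> str:
        """Render '11.00' as '11' and '11.50' as '11.5'."""
        d = Decimal(value)
        if d == d.to_integral_value():
            return str(d.to_integral_value())   # 11.00 -> 11
        return str(d.normalize())               # 11.50 -> 11.5


    if __name__ == "__main__":
        for raw in ["11.00", "11.50", "0.07"]:
            print(raw, "->", strip_trailing_zeros(raw))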

Operations

  • CDI doesn’t store logs beyond 1,000 mapping task runs in 3 days on Cloud (it does store logs in the Secure Agent). Workaround as follows (a hypothetical API-based sketch of this audit pattern appears after this list):
    • To retain Cloud job run stats, create audit tables and use the Data Marketplace utility to load the audit info (volumes processed, start/end time, etc.) into the audit tables, scheduling this job at regular intervals (hourly or daily).
  • Generic restartability issues occur during IDMC operations. Workaround as follows:
    • Introduce a dummy Assignment task whenever the code contains a custom error-handling flow.
  • SKIP FAILED TASK and RESUME FROM NEXT TASK operations have issues in IDMC. Workaround as follows:
    • Append an additional condition to every LINK condition: “Mapping task. Fault.Detail.ErrorOutputDetail.TaskStatus=1”
  • In PWC, any task can be run from anywhere within a workflow; however, this is not possible in IDMC. Workaround as follows:
    • A feature request is being worked on by GCS to update the software.
  • The IDMC mapping task configuration level cannot handle parameter concatenation (for example, to suffix session log file names). Workaround as follows:
    • Use a separate parameter within the parameter file so that mapping task log file names are suffixed with the concurrent-run workflow instance name.
  • IDMC doesn’t honour the “Save Session log for these runs” property set at the mapping task level when the session log file name is parameterized. Workaround as follows:
    • Copy the mapping task log files on the Secure Agent server after the job run.
  • If the session log directory path contains a / (slash) when used along with parameters (e.g., $PMSessionLogDir/ABC) under Session Log Directory Path, every run log is appended to the same log file. Workaround as follows:
    • Use a separate parameter within the parameter file for $PMSessionLogDir.
  • In IDMC, the @numAffectedRows statistic is not available to get the source and target success rows for loading into the audit table. Workaround as follows:
    • Use @numAppliedRows instead of @numAffectedRows.
  • Concurrent runs cannot be performed on taskflows from the CDI Data Integration UI. Workaround as follows:
    • Use the Paramset utility to upload concurrent paramsets and the runAJobCli utility to run taskflows with multiple concurrent run instances from the command prompt.
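
The audit-table workaround in the first bullet above relies on the Data Marketplace utility. As one possible alternative, run statistics can also be pulled programmatically and loaded into an audit table; the Python sketch below assumes the IICS REST API v2 login and activity/activityLog endpoints, a hypothetical RUN_AUDIT table, and response field names that should all be verified against your organization's API documentation before use.

    # Hypothetical sketch: pull Cloud job run statistics and persist them locally.
    # Endpoint URLs, response fields, and the RUN_AUDIT schema are assumptions; verify
    # them against your org's Informatica Cloud documentation.
    import sqlite3
    import requests

    LOGIN_URL = "https://dm-us.informaticacloud.com/ma/api/v2/user/login"   # region-specific


    def fetch_activity_log(username: str, password: str, row_limit: int = 200) -> list:
        login = requests.post(
            LOGIN_URL,
            json={"@type": "login", "username": username, "password": password},
            timeout=30,
        )
        login.raise_for_status()
        body = login.json()
        session_id, server_url = body["icSessionId"], body["serverUrl"]

        resp = requests.get(
            f"{server_url}/api/v2/activity/activityLog",
            headers={"icSessionId": session_id},
            params={"rowLimit": row_limit},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()   # expected: a list of run entries


    def load_audit(rows: list) -> None:
        con = sqlite3.connect("audit.db")
        con.execute(
            "CREATE TABLE IF NOT EXISTS RUN_AUDIT "
            "(task_name TEXT, start_time TEXT, end_time TEXT, rows_processed INTEGER)"
        )
        con.executemany(
            "INSERT INTO RUN_AUDIT VALUES (?, ?, ?, ?)",
            [(r.get("objectName"), r.get("startTime"), r.get("endTime"),
              r.get("successTargetRows")) for r in rows],
        )
        con.commit()
        con.close()


    if __name__ == "__main__":
        load_audit(fetch_activity_log("user@example.com", "secret"))

Scheduled hourly or daily, a job like this keeps run history well beyond the Cloud retention window, which is the point of the audit-table pattern described above.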

Conclusion

While performing PWC to IDMC conversions, the Development, Data, and Operations workarounds above will help avoid rework and save effort, improving delivery quality and customer satisfaction.
