Development Articles / Blogs / Perficient https://blogs.perficient.com/category/services/innovation-product-development/development/ Expert Digital Insights Mon, 09 Mar 2026 06:52:13 +0000 en-US hourly 1 https://blogs.perficient.com/files/favicon-194x194-1-150x150.png Development Articles / Blogs / Perficient https://blogs.perficient.com/category/services/innovation-product-development/development/ 32 32 30508587 Optimize Snowflake Compute: Dynamic Table Refreshes https://blogs.perficient.com/2026/03/07/optimize-snowflake-compute-dynamic-table-refreshes/ https://blogs.perficient.com/2026/03/07/optimize-snowflake-compute-dynamic-table-refreshes/#respond Sat, 07 Mar 2026 10:46:14 +0000 https://blogs.perficient.com/?p=390653

In this blog, we will discuss one of the problems: the system refreshes as per the target_lag even if no new data in the downstream tables. Most of the time, nothing has changed, which means we’re wasting compute for no reason.

If your data does not change, your compute should not either. Here is how to optimize your Dynamic table to save resources.

Core concepts use in this blog: –
Snowflake:
Snowflake is a fully managed cloud data warehouse that lets you store data and SQL queries at massive scale—without managing servers.

Compute Resources:
Compute resources in Snowflake are the processing power (virtual warehouses) that Snowflake uses to run your queries, load data, and perform calculations.
In simple way:
Storage = where data lives
Compute = the power used to process the data

Dynamic table:
In Snowflake, a Dynamic Table acts as a self-managing data container that bridges the gap between a query and a physical table. Instead of you manually inserting records, you provide Snowflake with a “blueprint” (a SQL query), and the system ensures the table’s physical content always matches that blueprint.

Stream:
A Stream in Snowflake is a tool that keeps track of all changes made to a table so you can process only the updated data instead of scanning the whole table.

Task:
Tasks can run at specific times you choose, or they can automatically start when something happens — for example, when new data shows up in a stream.
Scenario:

The client has requested that data be inserted every 1 hour, but sometimes there may be no new data coming into the downstream tables.

Steps: –
First, we go through the traditional approach and below are the steps.
1. Create source data:

— Choose a role/warehouse you can use

USE ROLE SYSADMIN;

USE WAREHOUSE SNOWFLAKE_LEARNING_WH;

 

— Create database/schema for the demo

CREATE DATABASE IF NOT EXISTS DEMO_DB;

CREATE SCHEMA IF NOT EXISTS DEMO_DB.DEMO_SCHEMA;

USE SCHEMA DEMO_DB.DEMO_SCHEMA;

— Base table: product_changes

CREATE OR REPLACE TABLE product_changes (

product_code VARCHAR(50),

product_name VARCHAR(200),

price NUMBER(10, 2),

price_start_date TIMESTAMP_NTZ(9),

last_updated    TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
);

— Seed a few rows

INSERT INTO product_changes (product_code, product_name, price, price_start_date,last_updated)

SELECT

‘PC-‘ || LPAD(TO_VARCHAR(MOD(SEQ4(), 10000) + 1), 3, ‘0’) AS product_code,

‘Product ‘ || LPAD(TO_VARCHAR(MOD(SEQ4(), 10000) + 1), 3, ‘0’) AS product_name,

ROUND(10.00 + (MOD(SEQ4(), 10000) * 5) + (SEQ4() * 0.01), 2) AS price,

DATEADD(MINUTE, SEQ4() * 5, ‘2025-01-01 00:00:00’) AS PRICE_START_DATE,

CURRENT_TIMESTAMP() AS last_updated

FROM

TABLE(GENERATOR(ROWCOUNT => 100000000));

— Create dynamic table

CREATE OR REPLACE DYNAMIC TABLE product_current_price_v1

TARGET_LAG = ‘1 hour’

WAREHOUSE = SNOWFLAKE_LEARNING_WH

INITIALIZE = ON_SCHEDULE

REFRESH_MODE = INCREMENTAL

AS

SELECT

h.product_code,

h.product_name,

h.price,

h.price_start_date

FROM product_changes h

INNER JOIN (

SELECT product_code, MAX(price_start_date) max_price_start_date

FROM product_changes

GROUP BY product_code

) m ON h.price_start_date = m.max_price_start_date AND h.product_code = m.product_code;

 

–Manually Refresh

ALTER DYNAMIC TABLE product_current_price_v1 REFRESH;
Always, we need to do manual refresh after an hour to check the new data is in table
Picture3

Because Snowflake uses a pay‑as‑you‑go credit model for compute, keeping a dynamic table refreshed every hour means compute resources are running continuously. Over time, this constant usage can drive up costs, making frequent refresh intervals less cost‑effective for customers.

To tackle this problem in a smarter and more cost‑efficient way, we follow a few simple steps that make the entire process smoother and more optimized:
First, we set the target_lag to 365 days when creating the dynamic table. This ensures Snowflake doesn’t continually consume compute resources for frequent refreshes, helping us optimize costs right from the start.

— Create dynamic table

CREATE OR REPLACE DYNAMIC TABLE product_current_price_v1

TARGET_LAG = ‘365 days’

WAREHOUSE = SNOWFLAKE_LEARNING_WH

INITIALIZE = ON_SCHEDULE

REFRESH_MODE = INCREMENTAL

AS

SELECT

h.product_code,

h.product_name,

h.price,

h.price_start_date

FROM product_changes h

INNER JOIN (

SELECT product_code, MAX(price_start_date) max_price_start_date

FROM product_changes

GROUP BY product_code

) m ON h.price_start_date = m.max_price_start_date AND h.product_code = m.product_code;

— A) Stream to detect changes in data
CREATE OR REPLACE STREAM STR_PRODUCT_CHANGES ON TABLE PRODUCT_CHANGES;

—  Stored procedure: refresh only when stream has data

CREATE OR REPLACE PROCEDURE SP_REFRESH_DT_IF_NEW()
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS OWNER
AS
$$
DECLARE
v_has_data BOOLEAN;
BEGIN
SELECT SYSTEM$STREAM_HAS_DATA(‘STR_PRODUCT_CHANGESS’) INTO :v_has_data;
IF (v_has_data) THEN
ALTER DYNAMIC TABLE DEMO_DB.DEMO_SCHEMA.PRODUCT_CURRENT_PRICE_V1
REFRESH;
RETURN ‘Refreshed dynamic table DT_SALES (new data detected).’;
ELSE
RETURN ‘Skipped refresh (no new data).’;

END IF;

END;
$$;

— Create TASK
Here, we can schedule as per requirement

   EXAMPLE:
CREATE OR REPLACE TASK PUBLIC.T_REFRESH_DT_IF_NEW
WAREHOUSE = SNOWFLAKE_LEARNING_WH
SCHEDULE = ‘5 MINUTE’
AS
CALL PUBLIC.SP_REFRESH_DT_IF_NEW();

ALTER TASK PUBLIC.T_REFRESH_DT_IF_NEW RESUME;
Conclusion:
Optimizing Snowflake compute isn’t just about reducing costs—it’s about making your data pipelines smarter, faster, and more efficient. By carefully managing how and when dynamic tables refresh, teams can significantly cut down on unnecessary compute usage while still maintaining reliable, up‑to‑date data.

Adjusting refresh intervals, thoughtfully using features like target_lag, and designing workflows that trigger updates only when needed can turn an expensive, always‑running process into a cost‑effective, well‑tuned system. With the right strategy, Snowflake’s powerful dynamic tables become not just a convenience, but a competitive advantage in building lean, scalable data platforms.

 

]]>
https://blogs.perficient.com/2026/03/07/optimize-snowflake-compute-dynamic-table-refreshes/feed/ 0 390653
Performance optimization techniques for React https://blogs.perficient.com/2026/03/06/performance-optimization-techniques-for-react/ https://blogs.perficient.com/2026/03/06/performance-optimization-techniques-for-react/#respond Fri, 06 Mar 2026 08:16:03 +0000 https://blogs.perficient.com/?p=390649

Performance plays a vital role in developing modern React applications. React by default is developed by keeping performance in mind, it offers virtual DOM, efficient reconciliation, and a component-based architecture. But as application grows, we start facing performance issues. This blog will help you with some performance optimization tips to consider before release.

1. memo to Prevent Unnecessary Re-renders:

React.memo is a build in component memorization technic. When we wrap a component within React.memo, it shallowly compares previous and current props to avoid unnecessary re-renders. Since this comparison has a cost, it should be applied selectively rather than to every component.

Example:

const getEmployeeId= React.memo(({emp}) => {
  return <div>{emp.empId }</div>;
});

Bonus Tip: Make use of useCallback (memorizes a function reference) and useMemo (memorizes a expensive computed value) hooks to prevent unnecessary re-renderings.

2. lazy to load component on demand:

React.lazy is to load component only on demand instead of including it in the initial JavaScript bundle. This is mainly useful, if it is a large application with many screens.
Example:

const UserProfile= React.lazy(() => import("./userProfile"));

Bonus Tip: Make use of < Suspense> to show fallback loading component, while the actual component loads. This gives better user experience.

<Suspense fallback={<Loader />}>
  < UserProfile /></Suspense>

3. Avoid using array index as element key

Using proper stable and unique identifier is always important. Below are few examples which look harmless but have a serious performance impact.

{users.map((user, index) => (

  <li key={index}> {user.name} </li>

))}

{ users.map(user => (

  <li key={Math.random()} > {user.name} </li>

))}

This causes below performance issues:

  • keys change every render
  • React treats every item as new
  • full list remounts on every update

Correct Usage:

{users.map((user) => (<li key={user.id}>{user.name}</li>
))}

4. Use Debounce & Throttle for expensive operations

When user activity interacts with applications like typing scrolling or dragging, multiple API calls can happen per second unintentionally. Debouncing and throttling are two core techniques used to limit how often these operations should be executed, hence help in improving the performance.

const getData = useCallback(
debounce((value) => {
axios.get(`https://api.sample.in/employee /${empId }`)
.then(response => {
console.log(response.data[0]);
});
}, 2000),
[]
);

In above example, the debounce function from Lodash is used to delay the API call until 2 seconds after every user interaction.

fromEvent(window, 'resize').pipe(
startWith(null),
throttleTime(2000, undefined, { leading: true, trailing: true }),
map(() => ({ w: window.innerWidth, h: window.innerHeight })),
takeUntilDestroyed(this.destroyRef)
).subscribe(({ w, h }) => {
this.width = w;
this.height = h;
});}

In the above example operation runs every 2000 milliseconds, even if event fire continuously (scroll, resize, dragging).

Conclusion:

React comes with devtools support which provides a Profiler tab. This helps us in understanding key area where performance dips. It gives a list of slow components, identifies wasted renders and real bottlenecks in applications. Start by identifying the performance issues, then address them with the appropriate solutions to build a super‑speed application. Happy learning! 🚀

Reference:

React Performance Optimization: 15 Best Practices for 2025 – DEV Community
React Optimization Techniques to Help You Write More Performant Code

]]>
https://blogs.perficient.com/2026/03/06/performance-optimization-techniques-for-react/feed/ 0 390649
From Coding Assistants to Agentic IDEs https://blogs.perficient.com/2026/02/26/from-coding-assistants-to-agentic-ides/ https://blogs.perficient.com/2026/02/26/from-coding-assistants-to-agentic-ides/#respond Fri, 27 Feb 2026 03:38:25 +0000 https://blogs.perficient.com/?p=390580

The difference between a coding assistant and an agentic IDE is not just a matter of capability — it’s architectural. A coding assistant responds to prompts. An agentic system operates in a closed loop: it reads the current state of the codebase, plans a sequence of changes, executes them, and verifies the result before reporting completion. That loop is what makes the tooling genuinely useful for non-trivial work.

Agentic CLIs

Most of the conversation around agentic AI focuses on graphical IDEs, but the CLI tools are worth understanding separately. They integrate more naturally into existing scripts and automation pipelines, and in some cases offer capabilities the GUI tools don’t.

The main options currently available:

Claude Code (Anthropic) works with the Claude Sonnet and Opus model families. It handles multi-file reasoning well and tends to produce more explanation alongside its changes, which is useful when the reasoning behind a decision matters as much as the decision itself.

OpenAI Codex CLI is more predictable for tasks requiring strict adherence to a specification — business logic, security-sensitive code, anything where creative interpretation is a liability rather than an asset.

Gemini CLI is notable mainly for its context window, which reaches 1–2 million tokens depending on the model. Large enough to load a substantial codebase without chunking, which changes what kinds of questions are practical to ask.

OpenCode is open-source and accepts third-party API keys, including mixing providers. Relevant for environments with restrictions on approved vendors.

Configuration and Permission Levels

Configuration is stored in hidden directories under the user home folder — ~/.claude/ for Claude Code, ~/.codex/ for Codex. Claude uses JSON; Codex uses TOML. The parameter that actually matters day-to-day is the permission level.

By default, most tools ask for confirmation before destructive operations: file deletion, script execution, anything irreversible. There’s also typically a mode where the agent executes without asking. It’s faster, and it will occasionally remove something that shouldn’t have been removed. The appropriate context for that mode is throwaway branches and isolated environments where the cost of a mistake is low.


Structuring a Development Session

Jumping straight to code generation tends to produce output that looks correct but requires significant rework. The agent didn’t have enough context to make the right decisions, so it made assumptions — and those assumptions have to be found and corrected manually.

Plan Mode

Before any code is written, the agent should decompose the task and surface ambiguities. This is sometimes called Plan Mode or Chain of Thought mode. The output is a list of verifiable subtasks and a set of clarifying questions, typically around:

  • Tech stack and framework choices
  • Persistence strategy (local storage, SQL, vector database)
  • Scope boundaries — what’s in and what’s explicitly out

It feels like overhead. The time is recovered during implementation because the agent isn’t making assumptions that have to be corrected later.

Repository Setup via GitHub CLI

The GitHub CLI (gh) integrates cleanly with agentic workflows. Repository initialization, .gitignore configuration, and GitHub issue creation with acceptance criteria and implementation checklists can all be handled by the agent. Having the backlog populated automatically keeps work visible without manual overhead.


Context Management

The context window is finite. How it’s used determines whether the agent stays coherent across a long session or starts producing inconsistent output. Three mechanisms matter here: rules, skills, and MCP.

Rule Hierarchy

Rules operate at three levels:

User-level rules are global preferences that apply across all projects — language requirements, style constraints, operator restrictions. Set once.

Project rules (.cursorrules or AGENTS.md) are repository-specific: naming conventions, architectural patterns, which shared components to reuse before creating new ones. In a team context, this file deserves the same review process as any other documentation. It tends to get neglected and then blamed when the agent produces inconsistent output.

Conditional rules activate only for specific file patterns. Testing rules that only load when editing .test.ts files, for example. This keeps the context lean when those rules aren’t relevant to the current task.

Skills

Skills are reusable logic packages that the agent loads on demand. Each skill lives in .cursor/skills/ and consists of a skill.md file with frontmatter metadata, plus any executable scripts it needs (Python, Bash, or JavaScript). The agent discovers them semantically or they can be invoked explicitly.

The practical value is context efficiency — instead of re-explaining a pattern every session, the skill carries it and only loads when the task requires it.

Model Context Protocol (MCP)

MCP is the standard for giving agents access to external systems. An MCP server exposes Tools (functions the agent can call) and Resources (data it can query). Configuration is added to the IDE’s config file, after which the agent can interact with connected systems directly.

Common integrations: Slack for notifications, Sentry for querying recent errors related to code being modified, Chrome DevTools for visual validation. The Figma MCP integration is particularly useful — design context can be pulled directly without manual translation of specs into implementation requirements.


Validation

A task isn’t complete until there’s evidence it works. The validation sequence should cover four things:

Compilation and static analysis. The build runs, linters pass. Errors get fixed before the agent reports done.

Test suite. Unit and integration tests for the affected logic must pass. Existing tests must stay green. This sounds obvious and is frequently skipped.

Runtime verification. The agent launches the application in a background process and monitors console output. Runtime errors that don’t surface in tests are common enough that skipping this step is a real risk.

Visual validation. With a browser MCP server, the agent can take a screenshot and compare it against design requirements. Layout and styling issues won’t be caught by any automated test.


Security Configuration

Two files, different purposes, frequently confused:

.cursorignore is a hard block. The agent cannot read files listed here. Use it for .env files, credentials, secrets — anything that shouldn’t leave the local environment. This is the primary security layer.

.cursorindexingignore excludes files from semantic indexing but still allows the agent to read them if explicitly requested. The appropriate use is performance optimization: node_modules, build outputs, generated files that would pollute the index without adding useful signal.

For corporate environments, Privacy Mode should be explicitly verified as enabled rather than assumed. This prevents source code from being stored by the provider or used for model training. Most enterprise tiers include it; the default state varies by tool and version.


Hooks

Hooks are event-driven triggers that run custom scripts at specific points in the agent’s lifecycle. Not necessary for small projects, but worth the setup as the codebase grows.

beforeSubmitPrompt runs before a prompt is sent. Useful for injecting dynamic context — current branch name, recent error logs — or for auditing what’s about to be sent.

afterFileEdit fires immediately after the agent modifies a file. The natural use is triggering auto-formatting or running the test suite, catching regressions as they’re introduced.

pre-compact fires when the context window is about to be trimmed. Allows prioritization of what information should be retained. Relevant for long sessions where important context has accumulated, and the default trimming behavior would discard it.


Parallel Development with Git Worktrees

Sequential work on a single branch is a bottleneck when multiple tasks are running in parallel. Git worktrees allow different branches to exist as separate working directories simultaneously:

git worktree add ../wt-feature-name -b feature/branch-name

Each worktree should have its own .env with unique local ports (PORT=3001, PORT=3002) to prevent dev server collisions. The agent can handle rebases and straightforward merge conflicts autonomously. Complex conflicts still require human judgment — the agent will flag them rather than guess.


The model itself is less of a determining factor than it might seem. Rule configuration, context management, and validation coverage drive the actual quality of the output. A well-configured environment with a mid-tier model will consistently outperform a poorly configured one with a better model. The engineering work shifts toward writing the constraints and verification steps that govern how code gets produced, which is a different skill than writing the code directly, but the productivity difference once it’s in place is significant.

 

]]>
https://blogs.perficient.com/2026/02/26/from-coding-assistants-to-agentic-ides/feed/ 0 390580
Language Mastery as the New Frontier of Software Development https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/ https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/#respond Mon, 16 Feb 2026 17:23:54 +0000 https://blogs.perficient.com/?p=390355
In the current technological landscape, the interaction between human developers and Large Language Models (LLMs) has transitioned from a peripheral experiment into a core technical competency. We are witnessing a fundamental shift in software development: the evolution from traditional code logic to language logic. This discipline, known as Prompt Engineering, is not merely about “chatting” with an AI; it is the structured ability to translate human intent into precise machine action. For the modern software engineer, designing and refining instructions is now as critical as writing clean, executable code.

1. Technical Foundations: From Prediction to Instruction

To master AI-assisted development, one must first understand the nature of the model. An LLM, at its core, is a probabilistic prediction engine. When given a sequence of text, it calculates the most likely next word (or token) based on vast datasets.
Base Models vs. Instruct Models
Technical proficiency requires a distinction between Base Models and Instruct Models. A Base LLM is designed for simple pattern completion or “autocomplete.” If asked to classify a text, a base model might simply provide another example of a text rather than performing the classification. Professional software development relies almost exclusively on Instruct Models. These models have been aligned through Reinforcement Learning from Human Feedback (RLHF) to follow explicit directions rather than just continuing a text pattern.
The fundamental paradigm of this interaction is simple but absolute: the quality of the input (the prompt) directly dictates the quality and accuracy of the output (the response).

2. The Two Pillars of Effective Prompting

Every successful interaction with an LLM rests on two non-negotiable principles. Neglecting either leads to unpredictable, generic, or logically flawed results.
1. Clarity and Specificity

Ambiguity is the primary enemy of quality AI output. Models cannot read a developer’s mind or infer hidden contexts that are omitted from the prompt. When an instruction is vague, the model is forced to “guess,” often resulting in a generic “average response” that fails to meet specific technical requirements. A specific prompt must act as an explicit manual. For instance, rather than asking to “summarize an email,” a professional prompt specifies the role (Executive Assistant), the target audience (a Senior Manager), the focus (required actions and deadlines), and the formatting constraints (three key bullet points).

Vague Prompt (Avoid) Specific Prompt (Corporate Standard)
“Summarize this email.” “Act as an executive assistant. Summarize the following email in 3 key bullet points for my manager. Focus on required actions and deadlines. Omit greetings.”
“Do something about marketing.” “Generate 5 Instagram post ideas for the launch of a new tech product, each including an opening hook and a call-to-action.”

 

 

2. Allowing Time for Reasoning
LLMs are prone to logical errors when forced to provide a final answer immediately—a phenomenon described as “impulsive reasoning.” This is particularly evident in mathematical logic or complex architectural problems. The solution is to explicitly instruct the model to “think step-by-step.” This technique, known as Chain-of-Thought (CoT), forces the model to calculate intermediate steps and verify its own logic before concluding. By breaking a complex task into a sequence of simpler sub-tasks, the reliability of the output increases exponentially.
3. Precision Structuring Tactics
To transform a vague request into a high-precision technical order, developers should utilize five specific tactics.
• Role Assignment (Persona): Assigning a persona—such as “Software Architect” or “Cybersecurity Expert”—activates specific technical vocabularies and restricts the model’s probabilistic space toward expert-level responses. It moves the AI away from general knowledge toward specialized domain expertise.
• Audience and Tone Definition: It is imperative to specify the recipient of the information. Explaining a SQL injection to a non-technical manager requires a completely different lexicon and level of abstraction than explaining it to a peer developer.
• Task Specification: The central instruction must be a clear, measurable action. A well-defined task eliminates ambiguity regarding the expected outcome.
• Contextual Background: Because models lack access to private internal data or specific business logic, developers must provide the necessary background information, project constraints, and specific data within the prompt ecosystem.
• Output Formatting: For software integration, leaving the format to chance is unacceptable. Demanding predictable structures—such as JSON arrays, Markdown tables, or specific code blocks—is critical for programmatic parsing and consistency.
Technical Delimiters Protocol
To prevent “Prompt Injection” and ensure application robustness, instructions must be isolated from data using:
• Triple quotes (“””): For large blocks of text.
• Triple backticks (`): For code snippets or technical data.
• XML tags (<tag>): Recommended standard for organizing hierarchical information.
• Hash symbols (###): Used to separate sections of instructions.
Once the basic structure is mastered, the standard should address highly complex tasks using advanced reasoning.
4. Advanced Reasoning and In-Context Learning
Advanced development requires moving beyond simple “asking” to “training in the moment,” a concept known as In-Context Learning.
Shot Prompting: Zero, One, and Few-Shot
• Zero-Shot: Requesting a task directly without examples. This works best for common, direct tasks the model knows well.
• One-Shot: Including a single example to establish a basic pattern or format.
• Few-Shot: Providing multiple examples (usually 2 to 5). This allows the model to learn complex data classification or extraction patterns by identifying the underlying rule from the history of the conversation.
Task Decomposition
This involves breaking down a massive, complex process into a pipeline of simpler, sequential actions. For example, rather than asking for a full feature implementation in one go, a developer might instruct the model to: 1. Extract the data requirements, 2. Design the data models, 3. Create the repository logic, and 4. Implement the UI. This grants the developer superior control and allows for validation at each intermediate step.
ReAct (Reasoning and Acting)
ReAct is a technique that combines reasoning with external actions. It allows the model to alternate between “thinking” and “acting”—such as calling an API, performing a web search, or using a specific tool—to ground its final response in verifiable, up-to-date data. This drastically reduces hallucinations by ensuring the AI doesn’t rely solely on its static training data.
5. Context Engineering: The Data Ecosystem
Prompting is only one component of a larger system. Context Engineering is the design and control of the entire environment the model “sees” before generating a response, including conversation history, attached documents, and metadata.
Three Strategies for Model Enhancement
1. Prompt Engineering: Designing structured instructions. It is fast and cost-free but limited by the context window’s token limit.
2. RAG (Retrieval-Augmented Generation): This technique retrieves relevant documents from an external database (often a vector database) and injects that information into the prompt. It is the gold standard for handling dynamic, frequently changing, or private company data without the need to retrain the model.
3. Fine-Tuning: Retraining a base model on a specific dataset to specialize it in a particular style, vocabulary, or domain. This is a costly and slow strategy, typically reserved for cases where prompting and RAG are insufficient.
The industry “Golden Rule” is to start with Prompt Engineering, add RAG if external data is required, and use Fine-Tuning only as a last resort for deep specialization.
6. Technical Optimization and the Context Window
The context window is the “working memory” of the model, measured in tokens. A token is roughly equivalent to 0.75 words in English or 0.25 words in Spanish. Managing this window is a technical necessity for four reasons:
• Cost: Billing is usually based on the total tokens processed (input plus output).
• Latency: Larger contexts require longer processing times, which is critical for real-time applications.
• Forgetfulness: Once the window is full, the model begins to lose information from the beginning of the session.
• Lost in the Middle: Models tend to ignore information located in the center of extremely long contexts, focusing their attention only on the beginning and the end.
Optimization Strategies
Effective context management involves progressive summarization of old messages, utilizing “sliding windows” to keep only the most recent interactions, and employing context caching to reuse static information without incurring reprocessing costs.
7. Markdown: The Communication Standard

Markdown has emerged as the de facto standard for communicating with LLMs. It is preferred over HTML or XML because of its token efficiency and clear visual hierarchy. Its predictable syntax makes it easy for models to parse structure automatically. In software documentation, Markdown facilitates the clear separation of instructions, code blocks, and expected results, enhancing the model’s ability to understand technical specifications.

Token Efficiency Analysis

The choice of format directly impacts cost and latency:

  • Markdown (# Title): 3 tokens.
  • HTML (<h1>Title</h1>): 7 tokens.
  • XML (<title>...</title>): 10 tokens.

Corporate Syntax Manual

Element Syntax Impact on LLM
Hierarchy # / ## / ### Defines information architecture.
Emphasis **bold** Highlights critical constraints.
Isolation ``` Separates code and data from instructions.

 

8. Contextualization for AI Coding Agents
AI coding agents like Cursor or GitHub Copilot require specific files that function as “READMEs for machines.” These files provide the necessary context regarding project architecture, coding styles, and workflows to ensure generated code integrates seamlessly into the repository.
• AGENTS.md: A standardized file in the repository root that summarizes technical rules, folder structures, and test commands.
• CLAUDE.md: Specific to Anthropic models, providing persistent memory and project instructions.
• INSTRUCTIONS.md: Used by tools like GitHub Copilot to understand repository-specific validation and testing flows.
By placing these files in nested subdirectories, developers can optimize the context window; the agent will prioritize the local context of the folder it is working in over the general project instructions, reducing noise.
9. Dynamic Context: Anthropic Skills
One of the most powerful innovations in context management is the implementation of “Skills.” Instead of saturating the context window with every possible instruction at the start, Skills allow information to be loaded in stages as needed.
A Skill consists of three levels:
1. Metadata: Discovery information in YAML format, consuming minimal tokens so the model knows the skill exists.
2. Instructions: Procedural knowledge and best practices that only enter the context window when the model triggers the skill based on the prompt.
3. Resources: Executable scripts, templates, or references that are launched automatically on demand.
This dynamic approach allows for a library of thousands of rules—such as a company’s entire design system or testing protocols—to be available without overwhelming the AI’s active memory.
10. Workflow Context Typologies
To structure AI-assisted development effectively, three types of context should be implemented:
1. Project Context (Persistent): Defines the tech stack, architecture, and critical dependencies (e.g., PROJECT_CONTEXT.md).
2. Workflow Context (Persistent): Specifies how the AI should act during repetitive tasks like bug fixing, refactoring, or creating new features (e.g., WORKFLOW_FEATURE.md).
3. Specific Context (Temporary): Information created for a specific session or a single complex task (e.g., an error analysis or a migration plan) and deleted once the task is complete.
A practical example of this is the migration of legacy code. A developer can define a specific migration workflow that includes manual validation steps, turning the AI into a highly efficient and controlled refactoring tool rather than a source of technical debt.
Conclusion: The Role of the Context Architect
In the era of AI-assisted programming, success does not rely solely on the raw power of the models. It depends on the software engineer’s ability to orchestrate dialogue and manage the input data ecosystem. By mastering prompt engineering tactics and the structures of context engineering, developers transform LLMs from simple text assistants into sophisticated development companions. The modern developer is evolving into a “Context Architect,” responsible for directing the generative capacity of the AI toward technical excellence and architectural integrity. Mastery of language logic is no longer optional; it is the definitive tool of the Software Engineer 2.0.
]]>
https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/feed/ 0 390355
Enhancing Fluent UI DetailsList with Custom Sorting, Filtering, Lazy Loading and Filter Chips https://blogs.perficient.com/2026/02/04/enhancing-fluent-ui-detailslist-with-custom-sorting-filtering-lazy-loading-and-filter-chips/ https://blogs.perficient.com/2026/02/04/enhancing-fluent-ui-detailslist-with-custom-sorting-filtering-lazy-loading-and-filter-chips/#respond Wed, 04 Feb 2026 07:48:24 +0000 https://blogs.perficient.com/?p=390027

Fluent UI DetailsList custom sorting and filtering can transform how structured data is displayed. While the default DetailsList component is powerful, it doesn’t include built‑in features like advanced sorting, flexible filtering, lazy loading, or selection‑driven filter chips. In this blog, we’ll show you how to extend Fluent UI DetailsList with these enhancements, making it more dynamic, scalable, and user‑friendly.

We’ll also introduce simple, reusable hooks that allow you to implement your own filtering and sorting logic, which will be perfect for scenarios where the default behavior doesn’t quite fit your needs. By the end, you’ll have a flexible, feature-rich Fluent UI DetailsList setup with sorting and filtering that can handle complex data interactions with ease.

Here’s what our wrapper brings to the table:

  • Context‑aware column menus that enable sorting beyond simple A↔Z ordering
  • Filter interfaces designed for each data type (.i.e. freeform text, choice lists, numeric ranges, or time values)
  • Selection chips that display active filters and allow quick deselection with a single click
  • Lazy loading with infinite scroll, seamlessly integrated with your API or pagination pipeline
  • One orchestrating component that ties all these features together, eliminating repetitive boilerplate

Core Architecture

The wrapper includes:

  • Column Definitions: To control how each column sorts/filters
  • State & Refs: To manage final items, full dataset, and UI flags
  • Default Logic By overriding hooks – onSort, onFilter
  • Selection: Powered by Fluent UI Selection API
  • Lazy Loading: Using IntersectionObserver
  • Filter Chips: Reflect selected rows

Following are the steps to achieve these features:

Step 1: Define Column Metadata

Each column in the DetailsList must explicitly describe its data type, sort behavior, and filtering behavior. This metadata helps the wrapper render the correct UI elements such as combo boxes, number inputs, or time pickers.

Each column needs metadata describing:

  • Field type
  • Sort behavior
  • Filter behavior
  • UI options (choice lists, icons, etc.)
export interface IDetailsListColumnDefinition {
  fieldName: string;
  displayName: string;
  columnType?: DetailsListColumnType; // Text, Date, Time, etc.
  sortDetails?: { fieldType: SortFilterType };
  filterDetails?: {
    fieldType: SortFilterType;
    filterOptions?: IComboBoxOption[];
    appliedFilters?: any[];
  };
}

Following is the example:

const columns = [{
  fieldName: 'status',
  displayName: 'Status',
  columnType: DetailsListColumnType.Text,
  sortDetails: {
    fieldType: SortFilterType.Choice
  },
  filterDetails: {
    fieldType: SortFilterType.Choice,
    filterOptions: [{
      key: 'Active',
      text: 'Active'
    },
    {
      key: 'Inactive',
      text: 'Inactive'
    }]
  }
}];

Step 2: Implement Type-Aware Fluent UI DetailsList Custom Sorting

The sorting mechanism dynamically switches based on the column’s data type. Time fields are converted to minutes to ensure consistent sorting, while text and number fields use their native values. It supports following:

  • Supports Text, Number, NumberRange, Date, and Time (custom handling for time via minute conversion).
  • Sort direction is controlled from the column’s context menu.
  • Works with default sorting or lets you inject custom sorting via onSort.
  • Default sorting uses lodash orderBy unless onSort is provided

Sample code for its implementation can be written as follows:

switch (sortColumnType) {
case SortFilterType.Time:
  sortedItems = orderBy(sortedItems, [item = >getTimeForField(item, column.key)], column.isSortedDescending ? ['desc'] : ['asc']);
  break;
default:
  sortedItems = orderBy(sortedItems, column.fieldName, column.isSortedDescending ? 'desc': 'asc');
}

Step 3: Implement Fluent UI DetailsList Custom Filtering (Text/Choice/Range/Time)

Filtering inputs change automatically based on column type. Text and choice filters use combo boxes, while numeric fields use range inputs. Time filters extract and compare HH:mm formatted values.

Text & Choice Filters

Implemented using Fluent UI ComboBox as follows:

<ComboBox

    allowFreeform={!isChoiceField}
    
    multiSelect={true}
    
    options={comboboxOptions}
    
    onChange={(e, option, index, value) =>
    
    _handleFilterDropdownChange(e, column, option, index, value)
    
    }
/>

Number Range Filter

Implemented as two input boxes, min & max for defining number range.

  • Min/Max chips are normalized in order [min, max].
  • Only applied if present; absence of either acts as open‑ended range.

Time Filter

For filtering time, we are ignoring date part and just considering time part.

  • Times are converted to minutes since midnight(HH:mm) to sort reliably regardless of display format.
  • Filtering uses date-fns format() for display and matching.

Step 4: Build the Filtering Pipeline

This step handles the filtering logic as capturing user-selected values, updating filter states, re-filtering all items, and finally applying the active sorting order. If custom filter logic is provided, it overrides the defaults. It will work as follows:

  1. User changes filter
  2. Update column.filterDetails.appliedFilters
  3. Call onFilter (if provided)
  4. Otherwise run default filter pipeline as follows:

allItems → apply filter(s) → apply current sort → update UI

Following are some helper functions that can be created for handing filter/sort logic:

  • _filterItems
  • _applyDefaultFilter
  • _applyDefaultSort

Step 5: Display Filter Chips

When selection is enabled, each selected row appears as a dismissible chip above the grid. Removing the chip automatically deselects the row, ensuring tight synchronization between UI and data.

<FilterChip key={filterValue.key} filterValue={filterValue} onRemove={_handleChipRemove} />

Note: This is a custom subcomponent used to handle filter chips. Internally it display selected values in chip form and we can control its values and functioning using onRemove and filterValue props.

Chip removal:

  • Unselects row programmatically
  • Updates the selection object

Step 6: Implementing Lazy Loading (IntersectionObserver)

The component makes use of IntersectionObserver, to detect if the user reaches the end of the list. Once triggered, it calls the lazy loading callback to fetch the next batch of items from the server or state.

  • An additional row at the bottom triggers onLazyLoadTriggered() as it enters the viewport.
  • Displays a spinner while loading; attaches the observer when more data is available.

A sentinel div at the bottom triggers loading:

observer.current = new IntersectionObserver(async entries => {
  const entry = entries[0];
  if (entry.isIntersecting) {
    observer.current ? .unobserve(lazyLoadRef.current!);
    await lazyLoadDetails.onLazyLoadTriggered();
  }
});

Props controlling behavior:

lazyLoadDetails ? :{
  enableLazyLoad: boolean;
  onLazyLoadTriggered: () => void;
  isLoading: boolean;
  moreItems: boolean;
};

Step 7: Sticky Headers

Sticky headers keep the column titles visible as the user scrolls through large datasets, improving readability and usability. Following is the code where, maxHeight property determines the scrollable container height:

const stickyHeaderStyle = {
  root: {
    maxHeight: stickyHeaderDetails ? .maxHeight ? ?450
  },
  headerWrapper: {
    position: 'sticky',
    top: 0,
    zIndex: 1
  }
};

Step 8: Putting It All Together — Minimal Example for Fluent UI DetailsList custom filtering and sorting

Following is an example where we are calling our customizes details list component:

<CustomDetailsList
  columnDefinitions={columns}
  items={data}
  allItems={data}
  checkboxVisible={CheckboxVisibility.always}
  initialSort={{ fieldName: "name", direction: SortDirection.Asc }}
  filterChipDetails={{
    filterChipKeyColumnName: "key",

    filterChipColumnName: "name",
  }}
  stickyHeaderDetails={{ enableStickyHeader: true, maxHeight: 520 }}
  lazyLoadDetails={{
    enableLazyLoad: true,

    isLoading: false,

    moreItems: true,

    onLazyLoadTriggered: async () => {
      // load more
    },
  }}
/>;

Accessibility & UX Notes

  • Keyboard: Enter key applies text/number inputs instantly; menu remains open so users can stack filters.
  • Clear filter: Context menu shows “Clear filter” action only when a filter exists; there’s also a “Clear Filters (n)” button above the grid that resets all columns at once.
  • Selection cap: To begin, maxSelectionCount helps prevent accidental bulk selections; next, it provides immediate visual feedback so users can clearly see their limits in action.

Performance Guidelines

  • Virtualization: For very large datasets, you can enable virtualization and validate both menu positioning and performance. For current example, onShouldVirtualize={() => false} is used to maintain a predictable menu experience.
  • Server‑side filtering/sorting: If your dataset is huge, pass onSort/onFilter and do the heavy lifting server‑side, then feed the component the updated page through items.
  • Lazy loading: Use moreItems to hide the sentinel when the server reports the last page; set isLoading to true to show the spinner row.

Conclusion

Finally, we have created a fully customized Fluent UI DetailsList with custom filtering and sorting which condenses real‑world list interactions into one drop‑in component. CustomDetailsList provides a production-ready, extensible, developer-friendly data grid wrapper with following enhanced features:

  • Clean context menus for type‑aware sort & filter
  • Offers selection chips for quick, visual interaction and control
  • Supports lazy loading that integrates seamlessly with your API
  • Allows you to keep headers sticky to maintain clarity in long lists
  • Delivers a ready‑to‑use design while allowing full customization when needed

GitHub repository

Please refer to the GitHub repository below for the full code. A sample has been provided within to illustrate its usage:

https://github.com/pk-tech-dev/customdetailslist

 

 

 

]]>
https://blogs.perficient.com/2026/02/04/enhancing-fluent-ui-detailslist-with-custom-sorting-filtering-lazy-loading-and-filter-chips/feed/ 0 390027
Kube Lens: The Visual IDE for Kubernetes https://blogs.perficient.com/2026/02/02/kube-lens/ https://blogs.perficient.com/2026/02/02/kube-lens/#comments Mon, 02 Feb 2026 15:37:47 +0000 https://blogs.perficient.com/?p=389778

Kube Lens — The Visual IDE for Kubernetes

Kube Lens is a desktop Kubernetes IDE that gives you a single, visual control plane for clusters, resources, logs and metrics—so you spend less time wrestling with kubectl output and more time solving real problems. In this post I’ll walk through installing Lens, adding clusters, and the everyday workflows I actually use, the features that speed up debugging, and practical tips to get teams onboarded safely.

Prerequisites

A valid kubeconfig (~/.kube/config) with the cluster contexts you need (or point Lens at alternate kubeconfig files).

What is Lens (Lens IDE / Kube Lens)

Lens is a cross-platform desktop application that connects to one or many Kubernetes clusters and presents a curated, interactive UI for exploring workloads, nodes, pods, services, and configuration. Think of it as your cluster’s cockpit—visual, searchable, and stateful—without losing the ability to run kubectl commands when you need them.

Kube Lens features

Kube Lens shines by packaging common operational tasks into intuitive views:

  • Multi-cluster visibility and quick context switching so you can compare clusters without copying kubeconfigs.
  • Live metrics and health signals (CPU, memory, pod counts, events) visible on a cluster overview for fast triage.
  • Built-in terminal scoped to the selected cluster/context so CLI power is always one click away.
  • Log viewing, searching, tailing, and exporting right next to pod details — no more bouncing between tools.
  • Port-forwarding and local access to cluster services for debugging apps in-situ.
  • Helm integration for discovering, installing, and managing releases from the UI.
  • CRD inspection and custom resource management so operators working with controllers and operators aren’t blind to their resources.
  • Team and governance features (SSO, RBAC-aware views, CVE reporting) for secure enterprise use.

Install Lens (short how-to)

Kube Lens runs on macOS, Windows, and Linux. Download the installer from the Lens site,

 

Lens installer window on desktop

 

After installing it, launch Lens and complete the initial setup, and create/sign in with a Lens ID (for syncing and team features)

Add your cluster(s)

  • Lens automatically scans default kubeconfig locations (~/.kube/config).
  • To add a cluster manually: go to the Catalog or Clusters view → Add Cluster → paste kubeconfig or point to a file.
  • You can rename clusters and tag them (e.g., dev, staging, prod) for easier filtering.

Klens Clusters

Main UI walkthrough

Klens Overview

  • Overview shows your cluster health assessment. This is where you get visibility into node status, resource utilization, and workload distribution

Klens Cluster Overview

  • Nodes show you data about your cluster nodes

Klens Nodes

  • Workloads will let you explore your deployed resources

Klens Workloads

  • Config will show you data about your configmaps, secrets, resource quotas, limit ranges and more

Klens Config

  • In the Network you will see information about your services, ingresses, and others

Klens Network

And as you can see, there are other options present, so this would be a great time to stay a couple of minutes in the app, and explore all the things that you can do.

As soon as there are changes happening in your cluster, Lens picks them and propagates them immediately through the interface. Pod restarts, scaling operations, and configuration changes appear without manual refresh, providing live insight into cluster operations that static kubectl output cannot simply match.

Example:

I will start with a basic nginx deployment that shows pod lifecycle management:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
        listen 80;
        location / {
            return 200 'Hello from Lens!\n';
            add_header Content-Type text/plain;
        }

Apply this using kubectl.

kubectl apply -f nginx_deployment.yaml

Now that we’ve created a couple of resources, we are ready to explore Lens.

Here are all the pods running:

Klens Pods

By clicking on the 3 dots on the right side, you get a couple of options:

Klens Pod Option

You can easily attach to a pod, open a shell, evict it, view the logs, edit it, and even delete it.

Here is the ConfigMap:

Klens Configmap View

And this is the service:
Klens Service View

Port-Forward to Nginx

Apart from everything that I’ve shown you until now, you also get an easy way to enable port forwarding through Lens.

Just go to your Network tab, select Services, and then choose your service:

Port Forward View

You will see an option to Forward it, so let’s click on it:

Klens Port Forward View 1

You can choose a local port to forward it to, or leave it as Random, have the option to directly open in your browser

Helm Deploy:

Lens provides a built-in Helm client to browse, install, manage, and even roll back Helm charts directly from its graphical user interface (GUI), simplifying deployment and management of Kubernetes applications. You can find available charts from repositories (like Bitnami, enabled by default), customize values.yaml, and install releases with a few clicks, seeing all your Helm deployments in the dedicated Helm tab. 

  1. Access Helm: Click the “Helm” icon in Lens, then select “Charts” to see available options.
  2. Browse & Search: Find charts from repositories (Artifact Hub, Bitnami, etc.) or add custom ones.
  3. Install: Select a chart, choose a version, edit parameters in the values.yaml section, and click “Install”.
  4. Manage Releases: View installed releases, check their details (applied values), and perform actions like rolling back. 

Using built-in metrics and charts

  • Lens integrates cluster metrics (where available) for nodes and workloads.
  • Toggle charts in the details pane to get CPU/memory trends over time.

Klens Dashboard

Tips and best practices

  • Keep kubeconfigs minimal per cluster and use named contexts for clarity.
  • Tag clusters (dev/stage/prod) and use color coding to reduce the risk of accidental changes.
  • Use Lens for exploration and quick fixes; keep complex automation in CI/CD pipelines.
  • For sensitive environments, restrict Lens access and avoid storing long-lived credentials locally.

 

Reference

https://docs.k8slens.dev/

]]>
https://blogs.perficient.com/2026/02/02/kube-lens/feed/ 1 389778
Just what exactly is Visual Builder Studio anyway? https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/ https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/#respond Thu, 29 Jan 2026 15:40:45 +0000 https://blogs.perficient.com/?p=389750

If you’re in the world of Oracle Cloud, you are most likely busy planning your big switch to Redwood. While it’s easy to get excited about a new look and a plethora of AI features, I want to take some time to talk about a tool that’s new (at least to me) that comes along with Redwood. Functional users will come to know VB Studio as the new method for delivering page customizations, but I’ve learned it’s much more.

VB Studio has been around since 2020, but I only started learning about it recently. At its core, VB Studio is Oracle’s extension platform. It provides users with a safe way to customize by building around their systems instead of inside of it. Since changes to the core code are not allowed, upgrades are much less problematic and time consuming.  Let’s look at how users of different expertise might use VB Studio.

Oracle Cloud Application Developers

I wouldn’t call myself a developer, but this is the area I fit into. Moving forward, I will not be using Page Composer or HCM Experience Design Studio…and I’m pretty happy about that. Every client I work with wants customization, so having a one-stop shop with Redwood is a game-changer after years of juggling tools.

Sandboxes are gone. VB Studio uses Git repositories with branches to track and log every change. Branches let multiple people work on different features without conflict, and teams review and merge changes into the main branch in a controlled process.

And what about when these changes are ready for production? By setting up a pipeline from your development environment to your production environment, these changes can be pushed straight into production. This is huge for me! It reduces the time needed to implement new Oracle modules. It also helps with updating or changing existing systems as well. I’ve spent countless hours on video calls instructing system administrators on how to perform requested changes in their production environment because their policy did not allow me to have access. Now, I can make these changes in a development instance and push them to production. The sys admin can then view these changes and approve or reject them for production. Simple!

Maxresdefault

Low-Code Developers

 

Customizations to existing features are great, but what about building entirely new functionality and embedding it right into your system?  VB Studio simplifies building applications, letting low-code developers move quickly without getting bogged down in traditional coding. With VB Studio’s visual designer, developers can drag and drop components, arrange them the way they want, and preview changes instantly. This is exciting for me because I feel like it is accessible for someone who does very little coding. Of course, for those who need more flexibility, you can still add custom logic using familiar web technologies like JavaScript and HTML (also accessible with the help of AI). Once your app is ready, deployment is easy. This approach means quicker turnaround, less complexity, and applications that fit your business needs perfectly.

 

Experienced Programmers

Okay, now we’re getting way out of my league here, so I’ll be brief. If you really want to get your hands dirty by modifying the code of an application created by others, you can do that. If you prefer building a completely custom application using the web programming language of your choice, you can also do that. Oracle offers users a wide range of tools and stays flexible in how they use them. Organizations need tailored systems, and Oracle keeps evolving to make that possible.

 

https://www.oracle.com/application-development/visual-builder-studio/

]]>
https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/feed/ 0 389750
Build a Custom Accordion Component in SPFx Using React – SharePoint https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/ https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/#comments Thu, 22 Jan 2026 07:50:54 +0000 https://blogs.perficient.com/?p=389813

When building modern SharePoint Framework (SPFx) solutions, reusable UI components play a crucial role in keeping your code clean, scalable, and maintainable. In particular, interactive components help improve the user experience without cluttering the interface.

Among these components, the Accordion is a commonly used UI element. It allows users to expand and collapse sections, making it easier to display large amounts of information in a compact and organized layout. In this blog, we’ll walk through how to create a custom accordion component in SPFx using React.


Create the Accordion Wrapper Component

To begin with, we’ll create a wrapper component that acts as a container for multiple accordion items. At a high level, this component’s responsibility is intentionally simple: it renders child accordion items while keeping styling and layout consistent across the entire accordion.This approach allows individual accordion items to remain focused on their own behavior, while the wrapper handles structure and reusability.

Accordion.tsx

import * as React from 'react';
import styles from './Accordion.module.scss';
import classNames from 'classnames';
import { IAccordionItemProps } from './subcomponents/AccordionItem';

import { ReactElement } from 'react';

export interface IAccordionProps {
  children?:
    | ReactElement<IAccordionItemProps>
    | ReactElement<IAccordionItemProps>[];
  className?: string;
}


const Accordion: React.FunctionComponent<
  React.PropsWithChildren<IAccordionProps>
> = (props) => {
  const { children, className } = props;
  return (
    <div className={classNames(styles.accordionSubcomponent, className)}>
      {children}
    </div>
  );
};

export default Accordion;

Styling with SCSS Modules

Next, let’s focus on styling. SPFx supports SCSS modules, which is ideal for avoiding global CSS conflicts and keeping styles scoped to individual components. Let’s see styling for accordion and accordion items.

Accordion.module.scss

.accordionSubcomponent {
    margin-bottom: 12px;
    .accordionTitleRow {
        display: flex;
        flex-direction: row;
        align-items: center;
        padding: 5px;
        font-size: 18px;
        font-weight: 600;
        cursor: pointer;
        -webkit-touch-callout: none;
        -webkit-user-select: none;
        -khtml-user-select: none;
        -moz-user-select: none;
        -ms-user-select: none;
        user-select: none;
        border-bottom: 1px solid;
        border-color: "[theme: neutralQuaternaryAlt]";
        background: "[theme: neutralLighter]";
    }
    .accordionTitleRow:hover {
        opacity: .8;
    }
    .accordionIconCol {
        padding: 0px 5px;
    }
    .accordionHeaderCol {
        display: inline-block;
        width: 100%;
    }
    .iconExpandCollapse {
        margin-top: -4px;
        font-weight: 600;
        vertical-align: middle;
    }
    .accordionContent {
        margin-left: 12px;
        display: grid;
        grid-template-rows: 0fr;
        overflow: hidden;
        transition: grid-template-rows 200ms;
        &.expanded {
          grid-template-rows: 1fr;
        }
        .expandableContent {
          min-height: 0;
        }
    }
}

Styling Highlights

  • Grid‑based animation for expand/collapse
  • SharePoint theme tokens
  • Hover effects for better UX

Creating Accordion Item Component

Each expandable section is managed by AccordionItem.tsx.

import * as React from 'react';
import styles from '../Accordion.module.scss';
import classNames from 'classnames';
import { Icon } from '@fluentui/react';
import { useState } from 'react';


export interface IAccordionItemProps {
  iconCollapsed?: string;
  iconExpanded?: string;
  headerText?: string;
  headerClassName?: string;
  bodyClassName?: string;
  isExpandedByDefault?: boolean;
}
const AccordionItem: React.FunctionComponent<React.PropsWithChildren<IAccordionItemProps>> = (props: React.PropsWithChildren<IAccordionItemProps>) => {
  const {
    iconCollapsed,
    iconExpanded,
    headerText,
    headerClassName,
    bodyClassName,
    isExpandedByDefault,
    children
  } = props;
  const [isExpanded, setIsExpanded] = useState<boolean>(!!isExpandedByDefault);
  const _toggleAccordion = (): void => {
    setIsExpanded((prevIsExpanded) => !prevIsExpanded);
  }
  return (
    <Stack>
    <div className={styles.accordionTitleRow} onClick={_toggleAccordion}>
        <div className={styles.accordionIconCol}>
            <Icon
                iconName={isExpanded ? iconExpanded : iconCollapsed}
                className={styles.iconExpandCollapse}
            />
        </div>
        <div className={classNames(styles.accordionHeaderCol, headerClassName)}>
            {headerText}
        </div>
    </div>
    <div className={classNames(styles.accordionContent, bodyClassName, {[styles.expanded]: isExpanded})}>
      <div className={styles.expandableContent}>
        {children}
      </div>
    </div>
    </Stack>
  )
}
AccordionItem.defaultProps = {
  iconExpanded: 'ChevronDown',
  iconCollapsed: 'ChevronUp'
};
export default AccordionItem;

Example Usage in SPFx Web Part

<Accordion>
  <AccordionItem headerText="What is SPFx?">
    <p>SPFx is a development model for SharePoint customizations.</p>

  </AccordionItem>

  <AccordionItem
    headerText="Why use custom controls?"
    isExpandedByDefault={true}
  >
    <p>Custom controls improve reusability and UI consistency.</p>
  </AccordionItem>
</Accordion>

Accordion

Conclusion

By building a custom accordion component in SPFx using React, you gain:

  • Full control over UI behavior
  • Lightweight and reusable code
  • Native SharePoint theming

This pattern is perfect for:

  • FAQ sections
  • Configuration panels
  • Dashboard summaries
]]>
https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/feed/ 1 389813
Upgrading from Gulp to Heft in SPFx | Sharepoint https://blogs.perficient.com/2026/01/14/upgrading-from-gulp-to-heft-in-spfx-sharepoint/ https://blogs.perficient.com/2026/01/14/upgrading-from-gulp-to-heft-in-spfx-sharepoint/#respond Wed, 14 Jan 2026 09:59:20 +0000 https://blogs.perficient.com/?p=389727

With the release of SPFx v1.22, Microsoft introduced Heft as the new build engine, replacing Gulp. This change brings better performance, modern tooling, and a more standardized approach to building SPFx solutions. In this blog, we’ll explore what this means for developers and how to upgrade.

What is Gulp in SPFx?

In SharePoint Framework (SPFx), Gulp is a JavaScript-based task runner that was traditionally used to automate build and development tasks.

What Gulp Did in SPFx

Historically, the SharePoint Framework (SPFx) relied on Gulp as its primary task runner, responsible for orchestrating the entire build pipeline. Gulp did a series of scripted tasks, defined inside gulpfile.js and in different SPFx build rig packages. These tasks automate important development and packaging workflows.These tasks included:

  • Automates repetitive tasks such as:
    • TypeScript to JavaScript.
    • Bundling multiple files into optimized packages.
    • Minifying code for better performance.
    • Packaging the solution into a “.sppkg” file for deployment.
  • Runs development servers for testing (gulp serve).
  • Watches for changes and rebuilds automatically during development

Because these tasks depended on ad‑hoc JavaScript streams and SPFx‑specific build rig wrappers, the pipeline could become complex and difficult to extend consistently across projects.

The following are the common commands included in gulp:

  • gulp serve – local workbench/dev server
  • gulp build – build the solution
  • gulp bundle – produce deployable bundles
  • gulp package-solution – create the .sppkg for the App Catalog

What is Heft?

In SharePoint Framework (SPFx), Heft is the new build engine introduced by Microsoft, starting with SPFx v1.22. It replaces the older Gulp-based build system.

Heft has replaced Gulp to support modern architecture, improve performance, ensure consistency and standardization, and provide greater extensibility.

Comparison between heft and gulp:

Area Gulp (Legacy) Heft (SPFx v1.22+)
Core model Task runner with custom JS/streams (gulpfile.js) Config‑driven orchestrator with plugins/rigs
Extensibility Write custom tasks per project Use Heft plugins or small “patch” files; standardized rigs
Performance Sequential tasks; no native caching Incremental builds, caching, unified TypeScript pass
Config surface Often scattered across gulpfile.js and build rig packages Centralized JSON/JS configs (heft.json, Webpack patch/customize hooks)
Scale Harder to keep consistent across many repos Designed to scale consistently (Rush Stack)

Installation Steps for Heft

  • To work with the upgraded version, you need to install Node v22.
  • Run the command npm install @rushstack/heft –global

Removing Gulp from an SPFx Project and Adding Heft (Clean Steps)

  • To work with the upgraded version, install Node v22.
  • Remove your current node_modules and package-lock.json, and run npm install again
  • NOTE: deleting node_modules can take a very long time if you don’t skip the recycle bin.
    • Open PowerShell
    • Navigate to your Project folder
    • Run command Remove-Item -Recurse -Force node_modules
    • Run command Remove-Item -Force package-lock.json
  • Open the solution in VS Code
  • In terminal run command npm cache clean –force
  • Then run npm install
  • Run the command npm install @rushstack/heft –global

After that, everything should work, and you will be using the latest version of SPFx with heft. However, going forward, there are some commands to be aware of

Day‑to‑day Commands on Heft

  • heft clean → cleans build artifacts (eq. gulp clean)
  • heft build → compiles & bundles (eq. gulp build/bundle) (Note— prod settings are driven by config rather than –ship flags.)
  • heft start → dev server (eq. gulp serve)
  • heft package-solution → creates.sppkg (dev build)
  • heft package-solution –production → .sppkg for production (eq. gulp package-solution –ship)
  • heft trust-dev-cert → trusts the local dev certificate used by the dev server (handy if debugging fails due to HTTPS cert issues

Conclusion

Upgrading from Gulp to Heft in SPFx projects marks a significant step toward modernizing the build pipeline. Heft uses a standard, configuration-based approach that improves performance, makes things the same across projects, and can be expanded for future needs. By adopting Heft, developers align with Microsoft’s latest architecture, reduce maintenance overhead, and gain a more scalable and reliable development experience.

]]>
https://blogs.perficient.com/2026/01/14/upgrading-from-gulp-to-heft-in-spfx-sharepoint/feed/ 0 389727
From Legacy to Modern: Migrating WCF to Web API with the Help of AI https://blogs.perficient.com/2026/01/13/from-legacy-to-modern-migrating-wcf-to-web-api-with-the-help-of-ai/ https://blogs.perficient.com/2026/01/13/from-legacy-to-modern-migrating-wcf-to-web-api-with-the-help-of-ai/#respond Tue, 13 Jan 2026 17:32:36 +0000 https://blogs.perficient.com/?p=389673

Introduction

The modernization of legacy applications has always been a costly process: understanding old code, uncovering hidden dependencies, translating communication models (for example, from SOAP to REST), and ensuring that nothing breaks in production. This is where artificial intelligence changes the game.

AI does not replace the architect or the developer, but it speeds up the heaviest steps in a migration: it helps read and summarize large codebases, proposes equivalent designs in the new technology, generates drafts of controllers, DTOs, and tests, and even suggests architectural improvements that take advantage of the change. Instead of spending hours on mechanical tasks, the team can focus on what really matters: the business rules and the quality of the new solution.

In this post, we’ll look at that impact applied to a concrete case: migrating a WCF service written in C# to an ASP.NET Core Web API, using a real public repository as a starting point and relying on AI throughout the entire process.

Sample project: a real WCF service to be migrated

For this article, we’ll use the public project jecamayo/t-facturo.net as a real-world example: a .NET application that exposes SOAP services based on WCF to manage advisors and branches, using NHibernate for data access. This kind of solution perfectly represents the scenario of many legacy applications currently running in production, and it will serve as our basis to show how artificial intelligence can speed up and improve their migration to a modern architecture with ASP.NET Core Web API.

Key Steps to Migrate from Legacy WCF to a Modern Web API

Migrating a legacy application is not just about “moving code” from one technology to another: it involves understanding the business context, the existing architecture, and designing a modern solution that will be sustainable over time. To structure that process—and to clearly show where artificial intelligence brings the most value—it’s useful to break the migration down into a few key steps like the ones we’ll look at next.

  1. Define the goals and scope of the migration
    Clarify what you want to achieve with the modernization (for example, moving to .NET 8, exposing REST, improving performance or security) and which parts of the system are in or out of the project, in order to avoid surprises and rework.
  2. Analyze the current architecture and design the target architecture
    Understand how the solution is built today (layers, projects, WCF, NHibernate, database) and, with that snapshot, define the target architecture in ASP.NET Core Web API (layers, patterns, technologies) that will replace the legacy system.
  3. Identify dependencies, models, DTOs, and business rules
    Locate external libraries, frameworks, and critical components; inventory domain entities and DTOs; and extract the business rules present in the code to ensure they are properly preserved in the new implementation.
  4. Design the testing strategy and migration plan
    Decide how you will verify that the new API behaves the same (unit tests, integration tests, comparison of WCF vs Web API responses) and define whether the migration will be gradual or a “big bang”, including phases and milestones.
  5. Implement the new Web API, validate it, and retire the legacy WCF
    Build the Web API following the target architecture, migrate the logic and data access, run the test plan to validate behavior, deploy the new solution and, once its stability has been confirmed, deactivate the inherited WCF service.

How to Use AI Prompts During a Migration

Artificial intelligence becomes truly useful in a migration when we know what to ask of it and how to ask it. It’s not just about “asking for code,” but about leveraging it in different phases: understanding the legacy system, designing the target architecture, generating repetitive parts, proposing tests, and helping document the change. To do this, we can classify prompts into a few simple categories (analysis, design, code generation, testing, and documentation) and use them as a practical guide throughout the entire migration process.

Analysis and Understanding Prompts

These focus on having the AI read the legacy code and help you understand it faster: what a WCF service does, what responsibilities a class has, how projects are related, or which entities and DTOs exist. They are ideal for obtaining “mental maps” of the system without having to review every file by hand.

Usage examples:

  • Summarize what a project or a WCF service does.
  • Explain what responsibilities a class or layer has.
  • Identify domain models, DTOs, or design patterns.

Design and Architecture Prompts

These are used to ask the AI for target architecture proposals in the new technology: how to translate WCF contracts into REST endpoints, what layering structure to follow in ASP.NET Core, or which patterns to apply to better separate domain, application, and infrastructure. They do not replace the architect’s judgment, but they offer good starting points and alternatives.

Usage examples:

  • Propose how to translate a WCF contract into REST endpoints.
  • Suggest a project structure following Clean Architecture.
  • Compare technological alternatives (keeping NHibernate vs migrating to EF Core).

Code Generation and Refactoring Prompts

These are aimed at producing or transforming specific code: generating Web API controllers from WCF interfaces, creating DTOs and mappings, or refactoring large classes into smaller, more testable services. They speed up the creation of boilerplate and make it easier to apply good design practices.

Usage examples:

  • Create a Web API controller from a WCF interface.
  • Generate DTOs and mappings between entities and response models.
  • Refactor a class with too many responsibilities into cleaner services/repositories.

Testing and Validation Prompts

Their goal is to help ensure that the migration does not break existing behavior. They can be used to generate unit and integration tests, define representative test cases, or suggest ways to compare responses between the original WCF service and the new Web API.

Usage examples:

  • Generate unit or integration tests for specific endpoints.
  • Propose test scenarios for a business rule.
  • Suggest strategies to compare responses between WCF and Web API.

Documentation and Communication Prompts

They help explain the before and after of the migration: documenting REST endpoints, generating technical summaries for the team, creating tables that show the equivalence between WCF operations and Web API endpoints, or writing design notes for future evolutions. They simplify communication with developers and non-technical stakeholders.

Usage examples:

  • Write documentation for the new API based on the controllers.
  • Generate technical summaries for the team or stakeholders.
  • Create equivalence tables between WCF operations and REST endpoints.

To avoid making this article too long and to be able to go deeper into each stage of the migration, we’ll leave the definition of specific prompts —with real examples applied to the t-facturo.net project— for an upcoming post. In that next article, we’ll go through, step by step, what to ask the AI in each phase (analysis, design, code generation, testing, and documentation) and how those prompts directly impact the quality, speed, and risk of a WCF-to-Web-API migration.

Conclusions

The experience of migrating a legacy application with the help of AI shows that its main value is not just in “writing code,” but in reducing the intellectual friction of the process: understanding old systems, visualizing possible architectures, and automating repetitive tasks. Instead of spending hours reading WCF contracts, service classes, and DAOs, AI can summarize, classify, and propose migration paths, allowing the architect and the team to focus their time on key design decisions and business rules.

At the same time, AI speeds up the creation of the new solution: it generates skeletons for Web API controllers, DTOs, mappings, and tests, acting as an assistant that produces drafts for the team to iterate on and improve. However, human judgment remains essential to validate each proposal, adapt the architecture to the organization’s real context, and ensure that the new application not only “works,” but is maintainable, secure, and aligned with business goals.

]]>
https://blogs.perficient.com/2026/01/13/from-legacy-to-modern-migrating-wcf-to-web-api-with-the-help-of-ai/feed/ 0 389673
Building a Reliable Client-Side Token Management System in Flutter https://blogs.perficient.com/2026/01/08/building-a-reliable-client-side-token-management-system-in-flutter/ https://blogs.perficient.com/2026/01/08/building-a-reliable-client-side-token-management-system-in-flutter/#respond Fri, 09 Jan 2026 05:15:35 +0000 https://blogs.perficient.com/?p=389472

In one of my recent Flutter projects, I had to implement a session token mechanism that behaved very differently from standard JWT-based authentication systems.

The backend issued a 15-minute session token, but with strict constraints:

  • No expiry timestamp was provided
  • The server extended the session only when the app made an API call
  • Long-running user workflows depended entirely on session continuity

If the session expired unexpectedly, users could lose progress mid-flow, leading to inconsistent states and broken experiences. This meant the entire token lifecycle had to be controlled on the client, in a predictable and self-healing way.

This is the architecture I designed.


  1. The Core Challenge

The server provided the token but not its expiry. The only rule:

“Token is valid for 15 minutes, and any API call extends the session.”

To protect long-running user interactions, the application needed to:

  • Track token lifespan locally
  • Refresh or extend sessions automatically
  • Work uniformly across REST and GraphQL
  • Survive app backgrounding and resuming
  • Preserve in-progress workflows without UI disruption

This required a fully client-driven token lifecycle engine.


  1. Client-Side Countdown Timer

Since expiry data was not available from the server, I implemented a local countdown timer to represent session validity.

How it works:

  • When token is obtained → start a 15-minute timer
  • When any API call happens → reset the timer (because backend extends session)
  • If the timer is about to expire:
    • Active user flow → show a visible countdown
    • Passive or static screens → attempt silent refresh
  • If refresh fails → gracefully log out in case of logged-in users

This timer became the foundation of the entire system.

 

Blank Diagram (3)


  1. Handling App Lifecycle Transitions

Users frequently minimize or switch apps. To maintain session correctness:

  • On background: pause the timer and store timestamp
  • On resume: calculate elapsed background time
    • If still valid → refresh & restart timer
    • If expired → re-authenticate or log out

This prevented accidental session expiry just because the app was minimized.

Blank Diagram (4)

 


  1. REST Auto-Refresh with Dio Interceptors

For REST APIs, Dio interceptors provided a clean, centralized way to manage token refresh.

Interceptor Responsibilities:

  • If timer is null → start timer
  • If timer exists but is inactive,
    • token expired → refresh token
    • perform silent re-login if needed
  • If timer is active → reset the timer
  • Inject updated token into headers

Conceptual Implementation:

class SessionInterceptor extends Interceptor {

  @override

  Future<void> onRequest(

    RequestOptions options,

    RequestInterceptorHandler handler,

  ) async {

    if (sessionTimer == null) {

      startSessionTimer();

    } else if (!sessionTimer.isActive) {

      await refreshSession();

      if (isAuthenticatedUser) {

        await silentReauthentication();

      }

    }

    options.headers[‘Authorization’] = ‘Bearer $currentToken’;

    resetSessionTimer();

    handler.next(options);

  }

}

This made REST calls self-healing, with no manual checks in individual services.


  1. GraphQL Auto-Refresh with Custom AuthLink

GraphQL required custom handling because it doesn’t support interceptors.
I implemented a custom AuthLink where token management happened inside getToken().

AuthLink Responsibilities:

  • Timer null → start
  • Timer inactive,
    • refresh token
    • update storage
    • silently re-login if necessary
  • Timer active → reset timer and continue

GraphQL operations then behaved consistently with REST, including auto-refresh and retry.

Conceptual implementation:

class CustomAuthLink extends AuthLink {

  CustomAuthLink()

      : super(

          getToken: () async {

            if (sessionTimer == null) {

              startSessionTimer();

              return currentToken;

            }

            if (!sessionTimer.isActive) {

              await refreshSession();

              if (isAuthenticatedUser) {

                await silentReauthentication();

              }

              return currentToken;

            }

            resetSessionTimer();

            return currentToken;

          },

        );

}


  1. Silent Session Extension for Authenticated Users

When authenticated users’ sessions extended:

  • token refresh happened in background
  • user data was re-synced silently
  • no screens were reset
  • no interruptions were shown

This was essential for long-running user workflows.


Engineering Lessons Learned

  • When token expiry information is not provided by the backend, session management must be treated as a first-class client responsibility rather than an auxiliary concern. Deferring this logic to individual API calls or UI layers leads to fragmentation and unpredictable behavior.
  • A client-side timer, when treated as the authoritative representation of session validity, significantly simplifies the overall design. By anchoring all refresh, retry, and termination decisions to a single timing mechanism, the system becomes easier to reason about, test, and maintain.
  • Application lifecycle events have a direct and often underestimated impact on session correctness. Explicitly handling backgrounding and resumption prevents sessions from expiring due to inactivity that does not reflect actual user intent or engagement.
  • Centralizing session logic for REST interactions through a global interceptor reduces duplication and eliminates inconsistent implementations across services. This approach ensures that every network call adheres to the same session rules without requiring feature-level awareness.
  • GraphQL requires a different integration point, but achieving behavioral parity with REST is essential. Embedding session handling within a custom authorization link proved to be the most reliable way to enforce consistent session behavior across both communication models.
  • Silent session extension for authenticated users is critical for preserving continuity during long-running interactions. Refreshing sessions transparently avoids unnecessary interruptions and prevents loss of in-progress work.
  • In systems where backend constraints limit visibility into session expiry, a client-driven lifecycle model is not merely a workaround. It is a necessary architectural decision that improves reliability, protects user progress, and provides predictable behavior under real-world usage conditions.
]]>
https://blogs.perficient.com/2026/01/08/building-a-reliable-client-side-token-management-system-in-flutter/feed/ 0 389472
Model Context Protocol (MCP) – Simplified https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/ https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/#comments Thu, 08 Jan 2026 07:50:15 +0000 https://blogs.perficient.com/?p=389415

What is MCP?

Model Context Protocol (MCP) is an open-source standard for integrating AI applications to external systems. With AI use cases getting traction more and more, it becomes evident that AI applications tend to connect to multiple data sources to provide intelligent and relevant responses.

Earlier AI systems interacted with users through Large language Models (LLM) that leveraged pre-trained datasets. Then, in larger organizations, business users work with AI applications/agents expect more relevant responses from enterprise dataset, from where Retrieval Augmented Generation (RAG) came into play.

Now, AI applications/agents are expected to produce more accurate responses leveraging latest data, that requires AI systems to interact with multiple data sources and fetch accurate information. When multi-system interactions are established, it requires the communication protocol to be more standardized and scalable. That is where MCP enables a standardized way to connect AI applications to external systems.

 

Architecture

Mcp Architecture

Using MCP, AI applications can connect to data source (ex; local files, databases), tools and workflows – enabling them to access key information and perform tasks. In enterprises scenario, AI applications/agents can connect to multiple databases across organization, empowering users to analyze data using natural language chat.

Benefits of MCP

MCP serves a wide range of benefits

  • Development: MCP reduces development time and complexity when building, or integrating with AI application/agent. It makes integrating MCP host with multiple MCP servers simple by leveraging built-in capability discovery feature.
  • AI applications or agents: MCP provides access to an ecosystem of data sources, tools and apps which will enhance capabilities and improve the end-user experience.
  • End-users: MCP results in more capable AI applications or agents which can access your data and take actions on user behalf when necessary.

MCP – Concepts

At the top level of MCP concepts, there are three entities,

  • Participants
  • Layers
  • Data Layer Protocol

 

Participants

MCP follows a client-server architecture where an MCP host – an AI application like enterprise chatbot establishes connections to one or more MCP servers. The MCP host accomplishes this by creating a MCP client for each MCP server. Each MCP client maintains a dedicated connection with its MCP server.

The key participants of MCP architecture are:

  • MCP Host: AI application that coordinates and manages one or more MCP clients
  • MCP Client: A component that maintains a dedicated connection to an MCP server and obtains context from an MCP server for MCP host to interact
  • MCP Server: A program that provides context to MCP clients (i.e. generate responses or perform actions on user behalf)

Mcp Client Server

Layers

MCP consists of two layers:

  • Data layer – Defines JSON-RPC based protocol for client-server communication including,
    • lifecycle management – initiate connection, capability discovery & negotiation, connection termination
    • Core primitives – enabling server features like tools for AI actions, resources for context data, prompt templates for client-server interaction and client features like ask client to sample from host LLM, log messages to client
    • Utility features – Additional capabilities like real-time notifications, track progress for long-running operations
  • Transport Layer – Manages communication channels and authentication between clients and servers. It handles connection establishment, message framing and secure communication between MCP participants

Data Layer Protocol

The core part of MCP is defining the schema and semantics between MCP clients and MCP servers. It is the part of MCP that defines the ways developers can share context from MCP servers to MCP clients.

MCP uses JSON-RPC 2.0 as its underlying RPC protocol. Client and servers send requests to each other and respond accordingly. Notifications can be used when no response is required.

Life Cycle Management

MCP is a stateful protocol that requires lifecycle management. The purpose of lifecycle management is to negotiate the capabilities (i.e. functionalities) that both client and server support.

Primitives

Primitives define what clients and servers can offer each other. These primitives specify the types of contextual information that can be shared with AI applications and the range of actions that can be performed. MCP defines three core primitives that servers can expose:

  • Tools: Executable functions that AI applications can invoke to perform actions (e.g., API calls, database queries)
  • Resources: Data sources that provide contextual information to AI applications (e.g., file contents, API responses, database records)
  • Prompts: Reusable templates that help structure interactions with language models (e.g., system prompts, few-shot examples)

 

Notifications

The protocol supports real-time notifications to enable dynamic updates between servers and clients. For example, when a server’s available tools change – such as when new functionalities are added or existing functionality is updated – the server can send tool update notifications to all its connected clients about these changes.

 

Security in Data Accessing

While AI applications communicate with multiple enterprise data sources thgrouch MCP and fetch real-time sensitive data like customer information, financial data to serve the users, data security becomes absolutely critical factor to be addressed.

MCP ensures secure access.

Authentication and Authorization

MCP implements server-side authentication where each MCP server validates who is making the request. The enterprise system controls access through:

  • User-specific credentials – Each user connecting through MCP has their own authentication tokens
  • Role-based access control (RBAC) – Users only access data that the role permits
  • Session management – Time-limited sessions that expire automatically

Data Access Controls

MCP server acts as a security gateway that enforces the same access policies as direct system access:

    • Users can only query data that they are authroized to access
    • The server validates every request against permission rules
    • Sensitive information can be masked or filtered based on user privileges

Secure Communication

      • Encrypted connections – All data transmissions uses TLS/HTTPS encryption
      • No data storage in AI – AI systems do not store the financial data it accesses; it only process it during the conversation session

Audit and Monitoring

MCP implementations in enterprise ecosystem should include:

      • Complete audit logs – Every data access request is logged with user, timestamp and data accessed
      • Anomaly detection – Engage mechanisms that monitor unusual access patterns and trigger alerts
      • Compliance tracking – All interactions meet regulatory requirements like GDPR, PCI-DSS

Architecture Isolation

Enterprises typically deploy MCP using:

      • Private network deployment – MCP servers stay within the enterprise secure firewall boundary
      • API gateway integration – Requests go through existing security infrastructure
      • No direct database access – MCP connects and access data through secure APIs, not direct access to database

The main idea is that MCP does not bypass existing security. It works within the same security as other enterprise applications, just showing a smarter interface.

 

MCP Implementation & Demonstration

In this section, I will demonstrate a simple use case where MCP client (Claude Desktop) interacts with “Finance Manager” MCP server that can fetch financial information from the database.

Financial data is maintained in Postgres database tables. MCP client (Claude Desktop app) will request information about customer account, MCP host will discover appropriate capability based on user prompt and invoke respective MCP tool function that can fetch data from the database table.

To make MCP client-server in action, there are three parts to be configured

      • Backend Database
      • MCP server implementation
      • MCP server registration in MCP Host

Backend Database

Postgres table “accounts” maintains accounts data with below information, “transactions” table maintains the transaction performed on the accounts

Accounts Table

Transactions Table

MCP server implementation

Mcp Server Implementation

FastMCP class implements MCP server components and creating an object of it initialize and enables access to those components to create enterprise MCP server capabilities.

The annotation “@mcp.tool()” defines the capability and the respective function will be recognized as MCP capability. These functions will be exposed to AI applications and will be invoked from MCP Host to perform designated actions.

In order to invoke MCP capabilities from client, MCP server should be up & running. In this example, there are two functions defined as MCP tool capabilities,

      • get_account_details – The function accept account number as input parameter, query “accounts” table and returns account information
      • add_transaction – The function accepts account number and transaction amount as parameters, make entry into “transactions” table

 

MCP Server Registration in MCP Host

For AI applications to invoke MCP server capability, MCP server should be registered in MCP host at client end. For this demonstration, I am using Claude Desktop as MCP client from where I interact with MCP server.

First, MCP server is registered with MCP host in Claude Desktop as below,

Claude Desktop -> Settings -> Developer -> Local MCP Servers -> Click “Edit Config”

Developer Settings

Open “claude_desktop_config” JSON file in Notepad. Add configurations in the JSON as below. The configurations define the path where MCP server implementation is located and instruct command to MCP host to run. Save the file and close.

Register Mcp Server

Restart “Claude Desktop” application, go to Settings -> Developer -> Local MCP servers tab. The newly added MCP server (finance-manager) will be in running state as below,

Mcp Server Running

Go to chat window in Claude Desktop. Issue a prompt to fetch details of an account in “accounts” table and review the response,

 

Claude Mcp Invocation

User Prompt: User issues a prompt to fetch details of an account.

MCP Discovery & Invoke: The client (Claude Desktop) processes the prompt, interacts with MCP host, automatically discover the relevant capability – get_account_details function in this case – without explicitly mention the function name and invoke the function with necessary parameter.

Response: MCP server process the request, fetch account details from the table and respond details to the client. The client formats the response and present it to the user.

Another example to add a transaction in the backend table for an account,

Mcp Server Add Transaction

Here, “add_transaction” capability has been invoked to add a transaction record in “transactions” table. In the chat window, you could notice that what MCP function is being invoked along with request & response body.

The record has been successfully added into the table,

Add Transaction Postgres Table

Impressive, isn’t it..!!

There are a wide range of use cases implementing MCP servers and integrate with enterprise AI systems that bring in intelligent layer to interact with enterprise data sources.

Here, you may also develop a thought that in what ways MCP (Model Context Protocol) is different from RAG (Retrieval Augmented Generation), as I did so. Based on my research, I just curated a comparison matrix of the features that would add more clarity,

 

Aspect RAG (Retrieval Augmented Generation) MCP (Model Context Protocol)
Purpose Retrieve unstructured docs to improve LLM responses AI agents access structured data/tools dynamically
Data Type Unstructured text (PDFs, docs, web pages) Structured data (JSON, APIs, databases)
Workflow Retrieve → Embed → Prompt injection → Generate AI requests context → Protocol delivers → AI reasons
Context Delivery Text chunks stuffed into prompt Structured objects via standardized interface
Token Usage High (full text in context) Low (references/structured data)
Action Capability Read-only (information retrieval) Read + Write (tools, APIs, actions)
Discovery Pre-indexed vector search Runtime tool/capability discovery
Latency Retrieval + embedding time Real-time protocol calls
Use Case Q&A over documents, chatbots AI agents, tool calling, enterprise systems
Maturity Widely adopted, mature ecosystem Emerging standard (2025+)
Complexity Vector DB + embedding pipeline Protocol implementation + AI agent

 

Conclusion

MCP Servers extend the capabilities of AI assistants by allowing them to interact with external services and data sources using natural language commands. Model Context Protocol (MCP) has a wide range of use cases and there are several enterprises already implemented and hosted MCP servers for AI clients to integrate and interact.

Some of the prominent MCP servers include:

GitHub MCP Server: Allows AI to manage repositories, issues, pull requests, and monitor CI/CD workflows directly within the development environment.

Azure DevOps MCP Server: Integrates AI with Azure DevOps services for managing pipelines, work items and repositories, ideal for teams withing the Microsoft ecosystem.

PostgreSQL MCP Server: bridges the gap between AI and databases, allowing natural language queries, schema exploration and data analysis without manual SQL scripting.

Slack MCP Server: Turns Slack into an AI-powered collaboration hub, enabling message posting, channel management

]]>
https://blogs.perficient.com/2026/01/08/model-context-protocol-mcp-simplified/feed/ 1 389415