Innovation + Product Development Articles / Blogs / Perficient https://blogs.perficient.com/category/services/innovation-product-development/ Expert Digital Insights Mon, 09 Mar 2026 06:52:13 +0000 en-US hourly 1 https://blogs.perficient.com/files/favicon-194x194-1-150x150.png Innovation + Product Development Articles / Blogs / Perficient https://blogs.perficient.com/category/services/innovation-product-development/ 32 32 30508587 Optimize Snowflake Compute: Dynamic Table Refreshes https://blogs.perficient.com/2026/03/07/optimize-snowflake-compute-dynamic-table-refreshes/ https://blogs.perficient.com/2026/03/07/optimize-snowflake-compute-dynamic-table-refreshes/#respond Sat, 07 Mar 2026 10:46:14 +0000 https://blogs.perficient.com/?p=390653

In this blog, we will discuss one of the problems: the system refreshes as per the target_lag even if no new data in the downstream tables. Most of the time, nothing has changed, which means we’re wasting compute for no reason.

If your data does not change, your compute should not either. Here is how to optimize your Dynamic table to save resources.

Core concepts use in this blog: –
Snowflake:
Snowflake is a fully managed cloud data warehouse that lets you store data and SQL queries at massive scale—without managing servers.

Compute Resources:
Compute resources in Snowflake are the processing power (virtual warehouses) that Snowflake uses to run your queries, load data, and perform calculations.
In simple way:
Storage = where data lives
Compute = the power used to process the data

Dynamic table:
In Snowflake, a Dynamic Table acts as a self-managing data container that bridges the gap between a query and a physical table. Instead of you manually inserting records, you provide Snowflake with a “blueprint” (a SQL query), and the system ensures the table’s physical content always matches that blueprint.

Stream:
A Stream in Snowflake is a tool that keeps track of all changes made to a table so you can process only the updated data instead of scanning the whole table.

Task:
Tasks can run at specific times you choose, or they can automatically start when something happens — for example, when new data shows up in a stream.
Scenario:

The client has requested that data be inserted every 1 hour, but sometimes there may be no new data coming into the downstream tables.

Steps: –
First, we go through the traditional approach and below are the steps.
1. Create source data:

— Choose a role/warehouse you can use

USE ROLE SYSADMIN;

USE WAREHOUSE SNOWFLAKE_LEARNING_WH;

 

— Create database/schema for the demo

CREATE DATABASE IF NOT EXISTS DEMO_DB;

CREATE SCHEMA IF NOT EXISTS DEMO_DB.DEMO_SCHEMA;

USE SCHEMA DEMO_DB.DEMO_SCHEMA;

— Base table: product_changes

CREATE OR REPLACE TABLE product_changes (

product_code VARCHAR(50),

product_name VARCHAR(200),

price NUMBER(10, 2),

price_start_date TIMESTAMP_NTZ(9),

last_updated    TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
);

— Seed a few rows

INSERT INTO product_changes (product_code, product_name, price, price_start_date,last_updated)

SELECT

‘PC-‘ || LPAD(TO_VARCHAR(MOD(SEQ4(), 10000) + 1), 3, ‘0’) AS product_code,

‘Product ‘ || LPAD(TO_VARCHAR(MOD(SEQ4(), 10000) + 1), 3, ‘0’) AS product_name,

ROUND(10.00 + (MOD(SEQ4(), 10000) * 5) + (SEQ4() * 0.01), 2) AS price,

DATEADD(MINUTE, SEQ4() * 5, ‘2025-01-01 00:00:00’) AS PRICE_START_DATE,

CURRENT_TIMESTAMP() AS last_updated

FROM

TABLE(GENERATOR(ROWCOUNT => 100000000));

— Create dynamic table

CREATE OR REPLACE DYNAMIC TABLE product_current_price_v1

TARGET_LAG = ‘1 hour’

WAREHOUSE = SNOWFLAKE_LEARNING_WH

INITIALIZE = ON_SCHEDULE

REFRESH_MODE = INCREMENTAL

AS

SELECT

h.product_code,

h.product_name,

h.price,

h.price_start_date

FROM product_changes h

INNER JOIN (

SELECT product_code, MAX(price_start_date) max_price_start_date

FROM product_changes

GROUP BY product_code

) m ON h.price_start_date = m.max_price_start_date AND h.product_code = m.product_code;

 

–Manually Refresh

ALTER DYNAMIC TABLE product_current_price_v1 REFRESH;
Always, we need to do manual refresh after an hour to check the new data is in table
Picture3

Because Snowflake uses a pay‑as‑you‑go credit model for compute, keeping a dynamic table refreshed every hour means compute resources are running continuously. Over time, this constant usage can drive up costs, making frequent refresh intervals less cost‑effective for customers.

To tackle this problem in a smarter and more cost‑efficient way, we follow a few simple steps that make the entire process smoother and more optimized:
First, we set the target_lag to 365 days when creating the dynamic table. This ensures Snowflake doesn’t continually consume compute resources for frequent refreshes, helping us optimize costs right from the start.

— Create dynamic table

CREATE OR REPLACE DYNAMIC TABLE product_current_price_v1

TARGET_LAG = ‘365 days’

WAREHOUSE = SNOWFLAKE_LEARNING_WH

INITIALIZE = ON_SCHEDULE

REFRESH_MODE = INCREMENTAL

AS

SELECT

h.product_code,

h.product_name,

h.price,

h.price_start_date

FROM product_changes h

INNER JOIN (

SELECT product_code, MAX(price_start_date) max_price_start_date

FROM product_changes

GROUP BY product_code

) m ON h.price_start_date = m.max_price_start_date AND h.product_code = m.product_code;

— A) Stream to detect changes in data
CREATE OR REPLACE STREAM STR_PRODUCT_CHANGES ON TABLE PRODUCT_CHANGES;

—  Stored procedure: refresh only when stream has data

CREATE OR REPLACE PROCEDURE SP_REFRESH_DT_IF_NEW()
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS OWNER
AS
$$
DECLARE
v_has_data BOOLEAN;
BEGIN
SELECT SYSTEM$STREAM_HAS_DATA(‘STR_PRODUCT_CHANGESS’) INTO :v_has_data;
IF (v_has_data) THEN
ALTER DYNAMIC TABLE DEMO_DB.DEMO_SCHEMA.PRODUCT_CURRENT_PRICE_V1
REFRESH;
RETURN ‘Refreshed dynamic table DT_SALES (new data detected).’;
ELSE
RETURN ‘Skipped refresh (no new data).’;

END IF;

END;
$$;

— Create TASK
Here, we can schedule as per requirement

   EXAMPLE:
CREATE OR REPLACE TASK PUBLIC.T_REFRESH_DT_IF_NEW
WAREHOUSE = SNOWFLAKE_LEARNING_WH
SCHEDULE = ‘5 MINUTE’
AS
CALL PUBLIC.SP_REFRESH_DT_IF_NEW();

ALTER TASK PUBLIC.T_REFRESH_DT_IF_NEW RESUME;
Conclusion:
Optimizing Snowflake compute isn’t just about reducing costs—it’s about making your data pipelines smarter, faster, and more efficient. By carefully managing how and when dynamic tables refresh, teams can significantly cut down on unnecessary compute usage while still maintaining reliable, up‑to‑date data.

Adjusting refresh intervals, thoughtfully using features like target_lag, and designing workflows that trigger updates only when needed can turn an expensive, always‑running process into a cost‑effective, well‑tuned system. With the right strategy, Snowflake’s powerful dynamic tables become not just a convenience, but a competitive advantage in building lean, scalable data platforms.

 

]]>
https://blogs.perficient.com/2026/03/07/optimize-snowflake-compute-dynamic-table-refreshes/feed/ 0 390653
Performance optimization techniques for React https://blogs.perficient.com/2026/03/06/performance-optimization-techniques-for-react/ https://blogs.perficient.com/2026/03/06/performance-optimization-techniques-for-react/#respond Fri, 06 Mar 2026 08:16:03 +0000 https://blogs.perficient.com/?p=390649

Performance plays a vital role in developing modern React applications. React by default is developed by keeping performance in mind, it offers virtual DOM, efficient reconciliation, and a component-based architecture. But as application grows, we start facing performance issues. This blog will help you with some performance optimization tips to consider before release.

1. memo to Prevent Unnecessary Re-renders:

React.memo is a build in component memorization technic. When we wrap a component within React.memo, it shallowly compares previous and current props to avoid unnecessary re-renders. Since this comparison has a cost, it should be applied selectively rather than to every component.

Example:

const getEmployeeId= React.memo(({emp}) => {
  return <div>{emp.empId }</div>;
});

Bonus Tip: Make use of useCallback (memorizes a function reference) and useMemo (memorizes a expensive computed value) hooks to prevent unnecessary re-renderings.

2. lazy to load component on demand:

React.lazy is to load component only on demand instead of including it in the initial JavaScript bundle. This is mainly useful, if it is a large application with many screens.
Example:

const UserProfile= React.lazy(() => import("./userProfile"));

Bonus Tip: Make use of < Suspense> to show fallback loading component, while the actual component loads. This gives better user experience.

<Suspense fallback={<Loader />}>
  < UserProfile /></Suspense>

3. Avoid using array index as element key

Using proper stable and unique identifier is always important. Below are few examples which look harmless but have a serious performance impact.

{users.map((user, index) => (

  <li key={index}> {user.name} </li>

))}

{ users.map(user => (

  <li key={Math.random()} > {user.name} </li>

))}

This causes below performance issues:

  • keys change every render
  • React treats every item as new
  • full list remounts on every update

Correct Usage:

{users.map((user) => (<li key={user.id}>{user.name}</li>
))}

4. Use Debounce & Throttle for expensive operations

When user activity interacts with applications like typing scrolling or dragging, multiple API calls can happen per second unintentionally. Debouncing and throttling are two core techniques used to limit how often these operations should be executed, hence help in improving the performance.

const getData = useCallback(
debounce((value) => {
axios.get(`https://api.sample.in/employee /${empId }`)
.then(response => {
console.log(response.data[0]);
});
}, 2000),
[]
);

In above example, the debounce function from Lodash is used to delay the API call until 2 seconds after every user interaction.

fromEvent(window, 'resize').pipe(
startWith(null),
throttleTime(2000, undefined, { leading: true, trailing: true }),
map(() => ({ w: window.innerWidth, h: window.innerHeight })),
takeUntilDestroyed(this.destroyRef)
).subscribe(({ w, h }) => {
this.width = w;
this.height = h;
});}

In the above example operation runs every 2000 milliseconds, even if event fire continuously (scroll, resize, dragging).

Conclusion:

React comes with devtools support which provides a Profiler tab. This helps us in understanding key area where performance dips. It gives a list of slow components, identifies wasted renders and real bottlenecks in applications. Start by identifying the performance issues, then address them with the appropriate solutions to build a super‑speed application. Happy learning! 🚀

Reference:

React Performance Optimization: 15 Best Practices for 2025 – DEV Community
React Optimization Techniques to Help You Write More Performant Code

]]>
https://blogs.perficient.com/2026/03/06/performance-optimization-techniques-for-react/feed/ 0 390649
From Coding Assistants to Agentic IDEs https://blogs.perficient.com/2026/02/26/from-coding-assistants-to-agentic-ides/ https://blogs.perficient.com/2026/02/26/from-coding-assistants-to-agentic-ides/#respond Fri, 27 Feb 2026 03:38:25 +0000 https://blogs.perficient.com/?p=390580

The difference between a coding assistant and an agentic IDE is not just a matter of capability — it’s architectural. A coding assistant responds to prompts. An agentic system operates in a closed loop: it reads the current state of the codebase, plans a sequence of changes, executes them, and verifies the result before reporting completion. That loop is what makes the tooling genuinely useful for non-trivial work.

Agentic CLIs

Most of the conversation around agentic AI focuses on graphical IDEs, but the CLI tools are worth understanding separately. They integrate more naturally into existing scripts and automation pipelines, and in some cases offer capabilities the GUI tools don’t.

The main options currently available:

Claude Code (Anthropic) works with the Claude Sonnet and Opus model families. It handles multi-file reasoning well and tends to produce more explanation alongside its changes, which is useful when the reasoning behind a decision matters as much as the decision itself.

OpenAI Codex CLI is more predictable for tasks requiring strict adherence to a specification — business logic, security-sensitive code, anything where creative interpretation is a liability rather than an asset.

Gemini CLI is notable mainly for its context window, which reaches 1–2 million tokens depending on the model. Large enough to load a substantial codebase without chunking, which changes what kinds of questions are practical to ask.

OpenCode is open-source and accepts third-party API keys, including mixing providers. Relevant for environments with restrictions on approved vendors.

Configuration and Permission Levels

Configuration is stored in hidden directories under the user home folder — ~/.claude/ for Claude Code, ~/.codex/ for Codex. Claude uses JSON; Codex uses TOML. The parameter that actually matters day-to-day is the permission level.

By default, most tools ask for confirmation before destructive operations: file deletion, script execution, anything irreversible. There’s also typically a mode where the agent executes without asking. It’s faster, and it will occasionally remove something that shouldn’t have been removed. The appropriate context for that mode is throwaway branches and isolated environments where the cost of a mistake is low.


Structuring a Development Session

Jumping straight to code generation tends to produce output that looks correct but requires significant rework. The agent didn’t have enough context to make the right decisions, so it made assumptions — and those assumptions have to be found and corrected manually.

Plan Mode

Before any code is written, the agent should decompose the task and surface ambiguities. This is sometimes called Plan Mode or Chain of Thought mode. The output is a list of verifiable subtasks and a set of clarifying questions, typically around:

  • Tech stack and framework choices
  • Persistence strategy (local storage, SQL, vector database)
  • Scope boundaries — what’s in and what’s explicitly out

It feels like overhead. The time is recovered during implementation because the agent isn’t making assumptions that have to be corrected later.

Repository Setup via GitHub CLI

The GitHub CLI (gh) integrates cleanly with agentic workflows. Repository initialization, .gitignore configuration, and GitHub issue creation with acceptance criteria and implementation checklists can all be handled by the agent. Having the backlog populated automatically keeps work visible without manual overhead.


Context Management

The context window is finite. How it’s used determines whether the agent stays coherent across a long session or starts producing inconsistent output. Three mechanisms matter here: rules, skills, and MCP.

Rule Hierarchy

Rules operate at three levels:

User-level rules are global preferences that apply across all projects — language requirements, style constraints, operator restrictions. Set once.

Project rules (.cursorrules or AGENTS.md) are repository-specific: naming conventions, architectural patterns, which shared components to reuse before creating new ones. In a team context, this file deserves the same review process as any other documentation. It tends to get neglected and then blamed when the agent produces inconsistent output.

Conditional rules activate only for specific file patterns. Testing rules that only load when editing .test.ts files, for example. This keeps the context lean when those rules aren’t relevant to the current task.

Skills

Skills are reusable logic packages that the agent loads on demand. Each skill lives in .cursor/skills/ and consists of a skill.md file with frontmatter metadata, plus any executable scripts it needs (Python, Bash, or JavaScript). The agent discovers them semantically or they can be invoked explicitly.

The practical value is context efficiency — instead of re-explaining a pattern every session, the skill carries it and only loads when the task requires it.

Model Context Protocol (MCP)

MCP is the standard for giving agents access to external systems. An MCP server exposes Tools (functions the agent can call) and Resources (data it can query). Configuration is added to the IDE’s config file, after which the agent can interact with connected systems directly.

Common integrations: Slack for notifications, Sentry for querying recent errors related to code being modified, Chrome DevTools for visual validation. The Figma MCP integration is particularly useful — design context can be pulled directly without manual translation of specs into implementation requirements.


Validation

A task isn’t complete until there’s evidence it works. The validation sequence should cover four things:

Compilation and static analysis. The build runs, linters pass. Errors get fixed before the agent reports done.

Test suite. Unit and integration tests for the affected logic must pass. Existing tests must stay green. This sounds obvious and is frequently skipped.

Runtime verification. The agent launches the application in a background process and monitors console output. Runtime errors that don’t surface in tests are common enough that skipping this step is a real risk.

Visual validation. With a browser MCP server, the agent can take a screenshot and compare it against design requirements. Layout and styling issues won’t be caught by any automated test.


Security Configuration

Two files, different purposes, frequently confused:

.cursorignore is a hard block. The agent cannot read files listed here. Use it for .env files, credentials, secrets — anything that shouldn’t leave the local environment. This is the primary security layer.

.cursorindexingignore excludes files from semantic indexing but still allows the agent to read them if explicitly requested. The appropriate use is performance optimization: node_modules, build outputs, generated files that would pollute the index without adding useful signal.

For corporate environments, Privacy Mode should be explicitly verified as enabled rather than assumed. This prevents source code from being stored by the provider or used for model training. Most enterprise tiers include it; the default state varies by tool and version.


Hooks

Hooks are event-driven triggers that run custom scripts at specific points in the agent’s lifecycle. Not necessary for small projects, but worth the setup as the codebase grows.

beforeSubmitPrompt runs before a prompt is sent. Useful for injecting dynamic context — current branch name, recent error logs — or for auditing what’s about to be sent.

afterFileEdit fires immediately after the agent modifies a file. The natural use is triggering auto-formatting or running the test suite, catching regressions as they’re introduced.

pre-compact fires when the context window is about to be trimmed. Allows prioritization of what information should be retained. Relevant for long sessions where important context has accumulated, and the default trimming behavior would discard it.


Parallel Development with Git Worktrees

Sequential work on a single branch is a bottleneck when multiple tasks are running in parallel. Git worktrees allow different branches to exist as separate working directories simultaneously:

git worktree add ../wt-feature-name -b feature/branch-name

Each worktree should have its own .env with unique local ports (PORT=3001, PORT=3002) to prevent dev server collisions. The agent can handle rebases and straightforward merge conflicts autonomously. Complex conflicts still require human judgment — the agent will flag them rather than guess.


The model itself is less of a determining factor than it might seem. Rule configuration, context management, and validation coverage drive the actual quality of the output. A well-configured environment with a mid-tier model will consistently outperform a poorly configured one with a better model. The engineering work shifts toward writing the constraints and verification steps that govern how code gets produced, which is a different skill than writing the code directly, but the productivity difference once it’s in place is significant.

 

]]>
https://blogs.perficient.com/2026/02/26/from-coding-assistants-to-agentic-ides/feed/ 0 390580
3 Topics We’re Excited About at TRANSACT 2026 https://blogs.perficient.com/2026/02/26/3-topics-were-excited-about-at-transact-2026/ https://blogs.perficient.com/2026/02/26/3-topics-were-excited-about-at-transact-2026/#respond Fri, 27 Feb 2026 01:07:37 +0000 https://blogs.perficient.com/?p=390619

For years, digital wallets in the U.S. have been steady but unspectacular—useful for tap‑to‑pay, not exactly groundbreaking. But the energy in payments today is coming from somewhere unexpected: the crypto wallet world. Stablecoins now exceed $300 billion in circulation, and the infrastructure behind them is delivering the kind of security, interoperability, and user control traditional payments have long needed. 

That shift sets the stage for TRANSACT 2026, where Perficient’s Director of Payments, Amanda Estiverne, will moderate “Keys, Tokens & Trust: How Crypto Wallets Unlock Tomorrow’s Payments,” unpacking how these technologies can finally push digital wallets into their next era. She’ll be joined by three industry leaders for the future-minded discussion:

“Beyond the session I’m moderating on crypto wallets—and how this technology is set to supercharge tokenization, transform digital identity, and reinvent the very idea of a mobile wallet—I’m fired up for several powerhouse conversations.” – Amanda Estiverne 

Here are three topics we’re looking forward to exploring—and why they matter now. 

Security That Actually Builds Trust 

Security remains one of the biggest obstacles to broader U.S. digital wallet adoption—but it’s also the area where crypto wallets offer the clearest blueprint forward. Having spent years securing billions in digital assets in high‑risk environments, crypto wallets have refined capabilities such as multi‑signature authentication, advanced biometrics, tokenization, and decentralized key management. They show how strong security and user‑friendly design can coexist.

As regulators sharpen guidance and consumers demand more control over their data, these crypto‑born approaches are becoming increasingly relevant to mainstream payments. In her session, Amanda will explore how these wallet innovations—originally designed for digital assets—can address the core security concerns holding back U.S. mobile wallets and help transform them from simple tap‑to‑pay tools into trusted financial hubs.

“ETA Transact is the gathering place for the entire payments ecosystem. Banks, networks, fintechs, processors, and regulators all come together under one roof to explore what’s next in payments.” – Amanda Estiverne

Interoperability Across Rails and Borders

One of the most persistent challenges in payments is fragmentation—different rails, incompatible systems, and cross‑border friction that create cost and complexity for businesses and consumers alike. Crypto wallets, by contrast, were designed for interoperability from the start. A single wallet can span multiple networks, assets, and payment types without the user having to think about what’s happening behind the scenes.

It’s a timely shift: real‑time payments are scaling, embedded finance is showing up in more places than ever, and stablecoins have now crossed $300 billion in circulation. With tokenized deposits, stablecoins, and traditional rails now coexisting, payment providers need ways to make these systems work together in a unified experience.

Amanda’s session will break down how the cross‑network, cross‑border capabilities pioneered in crypto wallets can help overcome the interoperability gaps limiting today’s mobile and digital wallets—and why solving this is key to building the next generation of payments.

Identity and Personalization in the AI Era

Digital wallets are quickly becoming more than a place to store cards. With AI, they can deliver smarter, more contextual experiences—from personalized rewards to anticipatory recommendations to voice‑enabled commerce. But to power these experiences responsibly, wallets need identity models that balance personalization with user privacy and control.

Crypto wallets have long used decentralized identity credentials that allow individuals to share only what’s necessary for each interaction. As AI‑driven personalization becomes the norm, that selective‑sharing model becomes even more valuable.

Amanda’s session will explore how decentralized identity frameworks emerging from the crypto space—and now reinforced by tokenization—can give digital wallets the foundation they need to support personalized, AI‑enhanced experiences while still preserving user trust.

“Agentic commerce, stablecoins and digital assets, digital identity, personalized payments, and instant payments are among the key themes shaping the conversation. The financial system is undergoing massive transformation, and these emerging areas will play a defining role in the infrastructure of tomorrow’s payments ecosystem.” – Amanda Estiverne 

Discover the Next Payment Innovation Trends 

Transact 2026 is where theory meets practice. Where banks, networks, fintechs, processors, and regulators pressure-test ideas and forge the partnerships that will define the next era of payments.

Amanda’s session focuses on how crypto‑wallet innovations—biometrics, tokenization, decentralized identity, and cross‑border interoperability—can help U.S. mobile wallets finally graduate from tap‑to‑pay conveniences into trusted, intelligent financial hubs.

“It’s where partnerships are forged, new ideas are pressure-tested, and the future of how money moves begins to take shape.” – Amanda Estiverne 

For payment leaders exploring what comes next, this conversation offers a grounded look at the capabilities most likely to redefine digital wallets across security, identity, interoperability, and user experience.

Attending TRANSACT 2026? Come by the Idea Zone at 1:40pm on Thursday, March 19th to hear the exclusive insights. Not attending? Contact Perficient to explore how we help payment and Fintech firms innovate and boost market position with transformative, AI-first digital experiences and efficient operations.

]]>
https://blogs.perficient.com/2026/02/26/3-topics-were-excited-about-at-transact-2026/feed/ 0 390619
Mind Games – Stretch Your Imagination (30 Examples) https://blogs.perficient.com/2026/02/23/mind-games-stretch-your-imagination-30-examples/ https://blogs.perficient.com/2026/02/23/mind-games-stretch-your-imagination-30-examples/#respond Mon, 23 Feb 2026 12:43:26 +0000 https://blogs.perficient.com/?p=390139

I want to play mind games with you. In my last blog post, I shared how to plan an agenda for your brainstorming session. I mentioned that I’m not a big fan of traditional ice breakers – they work fine, but they feel too much like forced socialization rather than a way to prepare your brain for creativity. In this article, I’m going to show you how I loosen teams up and get them thinking with mind games.

Loosen Up by Stretching

The goal is to stretch your imagination. It’s just like stretching before you go for a run (which is also a great thing to do while preparing for brainstorming). We want to disrupt routine thought patterns, and push past the initial “easy” ideas to look for that unique approach and competitive advantage. These mind exercises help people realize that even things that seem impossible can have solutions (even simple solutions). These are NOT a test, it’s OK to not understand, and people should feel welcome to throw out wild or goofy suggestions.

In the rest of this article, I’m going to share several types of mind games: optical illusions, brain teasers, riddles, jokes, and team activities. I’ll share enough that you can run several brainstorming sessions for the same team without reusing them. So pick the ones you like and get to stretching!

Don’t Spoil It!

When you run these in a live brainstorming session, make sure to tell your attendees not to spoil it if they’ve seen one before. Let people have time to think about it and enjoy them. Consider offering to let people leave the room when you reveal the answers.

NOTE: To allow you to read this article without spoiling any of the brain teasers, I have set it up to click to view hints and answers.

Optical Illusions

Here are six optical illusions that I love. It shows your attendees that things are not always what they first appear, and that our brains can play tricks on us.

Optical Illusion #1 – Peripheral Drift Rotating Snakes

This is a static image, but as you look around the image it appears to move with rotating circles. (Wikimedia Commons)

Optical Illusion - Rotating Snakes

Optical Illusion #2 – Double-Image

There is more than one picture in this image. (Wikimedia Commons)

Optical Illusion - Double Image

Reveal Answer

Can you see the duck? How about the rabbit?

Optical Illusion - Double Image

Optical Illusion #3 – Scintillating Grid

Staring at this will cause the white circles to appear like black dots around the edges of your focus. (Wikimedia Commons)

Optical Illusion - Scintillating Hermann Grid

Optical Illusion #4 – Penrose Triangle

This illusion works because a 2D drawing can appear to be 3D but achieve effects that cannot be done in 3D. This shape cannot exist in 3D space. (Wikimedia Commons)

Optical Illusion - Penrose Triangle

Optical Illusion #5 – Ebbinghause Illusion

Each set of circles has a center circle. Which center circle is the largest? (Wikimedia Commons)

Optical Illusion - Ebbinghause

Reveal Answer

They are exactly the same size. The sizes of the shapes that surround the center circle changes our perception.

Optical Illusion - Ebbinghause

Optical Illusion #6 – Troxler Effect

Stare at the red dot for up to 20 seconds and the blue circle will disappear. (Wikimedia Commons)

Optical Illusion - Troxler Effect

Brain Teasers

Next, try these six brain teasers that will stump and entertain your crew. These help teams realize that problems are difficult, but that doesn’t mean they can’t be solved.

Brain Teaser #1 – Cross the Moat

A treasure sits in the middle of a perfectly square island surrounded by a moat 10 feet wide and too deep and treacherous to cross. You need to get across the moat without jumping, climbing, or swimming. There are two sturdy planks 10 feet in length and 3 feet wide. There is nothing to bind the planks together and nothing to cut them with. How can you use the planks to walk safely over the moat?

Brain Teaser - Moat Crossing

Get a Hint

The planks do not need to be longer. Instead consider ways to overlap the two planks.

Reveal Answer

Create a “T” shape at the corner of the moat, then go retrieve your treasure!

Brain Teaser - Moat Crossing

Brain Teaser #2 – Confusing Math

Can you explain this odd and unexpected problem?

Brain Teaser - Numbers
Get a Hint

This isn’t math. How can you use the number 2 to end up with a fish? Or the number 3 to arrive at an eight?

Reveal Answer

Duplicate the shape of each number, then position, rotate, and/or mirror the shape of the original number to create the word on the right.

Brain Teaser - Numbers

 

.

Brain Teaser #3 – Light Switch Problem

Three light bulbs side-by-side, one is lit. (Light Switch Problem)

You have three incandescent lightbulbs in a small room. Each is controlled by its own light switch outside the room where you cannot see the bulbs or their light. You can flip as many light switches as you want, but you can only check the room once. How do you determine which switch controls each bulb?

Get a Hint

Incandescent lightbulbs have more than one property that may be useful.

Reveal Answer

Flip the first switch on for a few minutes, then flip it off. Flip the second switch and then go check the room. The light that is on is controlled by the second switch. The light that is warm to the touch is controlled by the first switch. The light that is cold is controlled by the third switch.

Brain Teaser #4 – 9-Dot Puzzle

If you had a print-out of this grid of nine dots, using a pen or pencil, connect all the dots by drawing only four or less straight interconnected line segments without picking the pen up from the paper once you begin. (Wikimedia Commons)

Brain Teaser - Nine Dot Board & Unsuccessful Example

Get a Hint

Try venturing outside the grid of dots.

Reveal Answers

The solution requires extending your lines outside the grid of nine. Your line segment corners do not have to land on a dot. There are two possible solutions.

Brain Teaser - Nine Dot Board Solutions

Brain Teaser #5 – Birthday Season

Brain Teaser - Birthday Celebration

Jane was born on Dec. 28th, yet her birthday always falls in the summer. How is this possible?

Get a Hint

Not everyone lives in the same place.

Reveal Answer

Jane lives in the southern hemisphere.

Brain Teaser #6 – Escape Plan

Brain Teaser - Room Escape

You are stuck in a concrete room with no windows or doors. The room has only a mirror and a wooden plank for you to use. How do you get out?

Get a Hint

This is a fantasy play on words, not a physical solution.

Reveal Answer

Look in the mirror to see what you “saw.” Take the saw and cut the plank in half. You now have two halves which make a “whole.” Climb through the hole to escape!

Riddles

Here are six riddles to keep their minds moving. Riddles are great because the answer feels like it is within reach, but it is hard to make the connections to come up with the answer – just like real-world problems!

Riddle #1

What occurs once in a minute, twice in a moment, and never in a thousand years?

Get a Hint

The word “occurs” can be misleading.

Reveal Answer

The letter “M” appears once in “minute”, twice in “moment”, and does not appear in “a thousand years”.

Riddle #2

What has cities but no houses, forests but no trees, and rivers but no water?

Get a Hint

What might depict things, but not in any real detail?

Reveal Answer

A map shows cities, forested areas, and rivers, but it doesn’t show their details or have them physically.

Riddle #3

I am tall when I’m young, and short when I’m old. What am I?

Get a Hint

There are a couple valid answers to this. Consider how things change when used.

Reveal Answer

A candle or a pencil are shortened as they are used.

Riddle #4

What is two words but thousands of letters?

Get a Hint

This is a play on words, and the answer has two words in it.

Reveal Answer

A “post office” has thousands of letters in it.

Riddle #5

What is the longest word in the dictionary?

Get a Hint

Not the longest in number of letters. Also, the answer is not a word that measures a type of distance or time (lightyear or infinity would not be what we’re looking for).

Reveal Answer

”Smiles” – because there’s a MILE between each “s”.

Riddle #6

Forward I am heavy. Backward I am not. What am I?

Get a Hint

Focus on words that are heavy.

Reveal Answer

The word “ton”, when spelled backward is “not”.

Jokes

Everyone loves a good joke. They are good for brainstorming for two reasons. One, they make you think about what the punchline could be. Two, they get people laughing and comfortable. These are perfect even when you’re not the creative type.

Joke #1

The past, the present, and the future walked into a bar.

Reveal Punchline

It was tense.

Joke #2

What’s the difference between a literalist and a kleptomaniac?

Reveal Punchline

A literalist takes things literally, while a kleptomaniac takes things…literally.

Joke #3

Can February march?

Reveal Punchline

No, but April may! (February, March, April, May)

Joke #4

I’d tell you a chemistry joke…

Reveal Punchline

…but I know it wouldn’t get a reaction.

Joke #5

I don’t mind coming to work…

Reveal Punchline

…it’s the eight-hour wait to go home that I can’t stand.

Joke #6

Did you hear about the first restaurant to open on the moon?

Reveal Punchline

It had great food but no atmosphere.

Physical Challenges

Some ice breakers are physical challenges, and these are the ones that are an exception of to my rule (of not liking ice breakers). Get people up and moving, blood flowing, minds engaged, and working together to solve a problem!

Challenge #1 – Marshmallow Tower

Each team or person is asked to build a tower as tall as they can using just 20 sticks of dry spaghetti and 20 mini-marshmallows. How tall of a structure can each team get by sticking dry spaghetti into the mini-marshmallows?

This is a trial-and-error activity, those who are not afraid to fail and retry will do the best – children often outperform adults in this exercise. If you have true engineers in the session, they will likely win.

Challenge #2 – The Human Knot

This is a team exercise, so you’ll need 4+ people per team. Each team should stand in a tight shoulder-to-shoulder circle then each member needs to grab hands with two different people in the group. The team must work together to untangle their circle.

Hands must not let go except for a minor change of holding position for comfort. It is not allowed to let go in order to help untangle or to provide additional room. They can step over, under, and through people’s arms. In larger groups it may be possible to untangle into more than one circle.

Challenge #3 – Blindfold Course

Create an obstacle course using chairs, cones, ropes, office supplies…whatever you come up with. Blindfold one team member and have the others guide them through the course using only verbal commands. No touching. No peeking.

Challenge #4 – Toxic Waste Removal

Fill a small bucket with tennis balls (“toxic waste”) and place in the center of a boundary circle of about 10-20 feet in diameter. No one can directly touch the toxic waste or enter the circle. Provide team members with tools such as rope, string, bungee cords, yard sticks, or similar items. The group must find a way to use the tools to get the toxic waste out of the circle and into another small “containment” bucket outside the circle.

Challenge #5 – The Architect

In small groups, one person will be designated the “Architect”, all other group members will be blindfolded. Provide some sort of building materials such as LEGO® bricks, paper cups, straws, tape, or whatever you like. The Architect must verbally instruct the blind “Builders” on how to build something from the materials. This might be a tower judged on height, or a structure judged on creativity. Only the Builders can touch the building materials. If time allows, you can break halfway through, allow the Builders to remove blindfolds and discuss, and then do one last round blindfolded again and guided by the Architect.

Challenge #6 – Paper Airplane Challenge

Start this activity by asking each participant to build a paper airplane on their own. Throw the planes down a hall or in an open area and see whose flies the furthest. Then have small groups build a paper plane together (now that they’ve seen which one flew the best). See which group can win the second round.

Add a Twist at the End

The facilitator can crumple a sheet of paper into a ball and throw it to see if it flies further than the planes. Whether it does or not, this is a great example of how teams can break convention and bend rules.

Conclusion

I hope you find some of these mind games fun! People who dislike ice breakers will likely find more enjoyment with these mental exercises. But these are more than just fun, these are intentional aids to get people thinking in a new way before you ask them to provide you with industry-changing ideas in a brainstorming session!

……

If you are looking for a partner who will play fun mind games with you, reach out to your Perficient account manager or use our contact form to begin a conversation.

]]>
https://blogs.perficient.com/2026/02/23/mind-games-stretch-your-imagination-30-examples/feed/ 0 390139
Language Mastery as the New Frontier of Software Development https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/ https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/#respond Mon, 16 Feb 2026 17:23:54 +0000 https://blogs.perficient.com/?p=390355
In the current technological landscape, the interaction between human developers and Large Language Models (LLMs) has transitioned from a peripheral experiment into a core technical competency. We are witnessing a fundamental shift in software development: the evolution from traditional code logic to language logic. This discipline, known as Prompt Engineering, is not merely about “chatting” with an AI; it is the structured ability to translate human intent into precise machine action. For the modern software engineer, designing and refining instructions is now as critical as writing clean, executable code.

1. Technical Foundations: From Prediction to Instruction

To master AI-assisted development, one must first understand the nature of the model. An LLM, at its core, is a probabilistic prediction engine. When given a sequence of text, it calculates the most likely next word (or token) based on vast datasets.
Base Models vs. Instruct Models
Technical proficiency requires a distinction between Base Models and Instruct Models. A Base LLM is designed for simple pattern completion or “autocomplete.” If asked to classify a text, a base model might simply provide another example of a text rather than performing the classification. Professional software development relies almost exclusively on Instruct Models. These models have been aligned through Reinforcement Learning from Human Feedback (RLHF) to follow explicit directions rather than just continuing a text pattern.
The fundamental paradigm of this interaction is simple but absolute: the quality of the input (the prompt) directly dictates the quality and accuracy of the output (the response).

2. The Two Pillars of Effective Prompting

Every successful interaction with an LLM rests on two non-negotiable principles. Neglecting either leads to unpredictable, generic, or logically flawed results.
1. Clarity and Specificity

Ambiguity is the primary enemy of quality AI output. Models cannot read a developer’s mind or infer hidden contexts that are omitted from the prompt. When an instruction is vague, the model is forced to “guess,” often resulting in a generic “average response” that fails to meet specific technical requirements. A specific prompt must act as an explicit manual. For instance, rather than asking to “summarize an email,” a professional prompt specifies the role (Executive Assistant), the target audience (a Senior Manager), the focus (required actions and deadlines), and the formatting constraints (three key bullet points).

Vague Prompt (Avoid) Specific Prompt (Corporate Standard)
“Summarize this email.” “Act as an executive assistant. Summarize the following email in 3 key bullet points for my manager. Focus on required actions and deadlines. Omit greetings.”
“Do something about marketing.” “Generate 5 Instagram post ideas for the launch of a new tech product, each including an opening hook and a call-to-action.”

 

 

2. Allowing Time for Reasoning
LLMs are prone to logical errors when forced to provide a final answer immediately—a phenomenon described as “impulsive reasoning.” This is particularly evident in mathematical logic or complex architectural problems. The solution is to explicitly instruct the model to “think step-by-step.” This technique, known as Chain-of-Thought (CoT), forces the model to calculate intermediate steps and verify its own logic before concluding. By breaking a complex task into a sequence of simpler sub-tasks, the reliability of the output increases exponentially.
3. Precision Structuring Tactics
To transform a vague request into a high-precision technical order, developers should utilize five specific tactics.
• Role Assignment (Persona): Assigning a persona—such as “Software Architect” or “Cybersecurity Expert”—activates specific technical vocabularies and restricts the model’s probabilistic space toward expert-level responses. It moves the AI away from general knowledge toward specialized domain expertise.
• Audience and Tone Definition: It is imperative to specify the recipient of the information. Explaining a SQL injection to a non-technical manager requires a completely different lexicon and level of abstraction than explaining it to a peer developer.
• Task Specification: The central instruction must be a clear, measurable action. A well-defined task eliminates ambiguity regarding the expected outcome.
• Contextual Background: Because models lack access to private internal data or specific business logic, developers must provide the necessary background information, project constraints, and specific data within the prompt ecosystem.
• Output Formatting: For software integration, leaving the format to chance is unacceptable. Demanding predictable structures—such as JSON arrays, Markdown tables, or specific code blocks—is critical for programmatic parsing and consistency.
Technical Delimiters Protocol
To prevent “Prompt Injection” and ensure application robustness, instructions must be isolated from data using:
• Triple quotes (“””): For large blocks of text.
• Triple backticks (`): For code snippets or technical data.
• XML tags (<tag>): Recommended standard for organizing hierarchical information.
• Hash symbols (###): Used to separate sections of instructions.
Once the basic structure is mastered, the standard should address highly complex tasks using advanced reasoning.
4. Advanced Reasoning and In-Context Learning
Advanced development requires moving beyond simple “asking” to “training in the moment,” a concept known as In-Context Learning.
Shot Prompting: Zero, One, and Few-Shot
• Zero-Shot: Requesting a task directly without examples. This works best for common, direct tasks the model knows well.
• One-Shot: Including a single example to establish a basic pattern or format.
• Few-Shot: Providing multiple examples (usually 2 to 5). This allows the model to learn complex data classification or extraction patterns by identifying the underlying rule from the history of the conversation.
Task Decomposition
This involves breaking down a massive, complex process into a pipeline of simpler, sequential actions. For example, rather than asking for a full feature implementation in one go, a developer might instruct the model to: 1. Extract the data requirements, 2. Design the data models, 3. Create the repository logic, and 4. Implement the UI. This grants the developer superior control and allows for validation at each intermediate step.
ReAct (Reasoning and Acting)
ReAct is a technique that combines reasoning with external actions. It allows the model to alternate between “thinking” and “acting”—such as calling an API, performing a web search, or using a specific tool—to ground its final response in verifiable, up-to-date data. This drastically reduces hallucinations by ensuring the AI doesn’t rely solely on its static training data.
5. Context Engineering: The Data Ecosystem
Prompting is only one component of a larger system. Context Engineering is the design and control of the entire environment the model “sees” before generating a response, including conversation history, attached documents, and metadata.
Three Strategies for Model Enhancement
1. Prompt Engineering: Designing structured instructions. It is fast and cost-free but limited by the context window’s token limit.
2. RAG (Retrieval-Augmented Generation): This technique retrieves relevant documents from an external database (often a vector database) and injects that information into the prompt. It is the gold standard for handling dynamic, frequently changing, or private company data without the need to retrain the model.
3. Fine-Tuning: Retraining a base model on a specific dataset to specialize it in a particular style, vocabulary, or domain. This is a costly and slow strategy, typically reserved for cases where prompting and RAG are insufficient.
The industry “Golden Rule” is to start with Prompt Engineering, add RAG if external data is required, and use Fine-Tuning only as a last resort for deep specialization.
6. Technical Optimization and the Context Window
The context window is the “working memory” of the model, measured in tokens. A token is roughly equivalent to 0.75 words in English or 0.25 words in Spanish. Managing this window is a technical necessity for four reasons:
• Cost: Billing is usually based on the total tokens processed (input plus output).
• Latency: Larger contexts require longer processing times, which is critical for real-time applications.
• Forgetfulness: Once the window is full, the model begins to lose information from the beginning of the session.
• Lost in the Middle: Models tend to ignore information located in the center of extremely long contexts, focusing their attention only on the beginning and the end.
Optimization Strategies
Effective context management involves progressive summarization of old messages, utilizing “sliding windows” to keep only the most recent interactions, and employing context caching to reuse static information without incurring reprocessing costs.
7. Markdown: The Communication Standard

Markdown has emerged as the de facto standard for communicating with LLMs. It is preferred over HTML or XML because of its token efficiency and clear visual hierarchy. Its predictable syntax makes it easy for models to parse structure automatically. In software documentation, Markdown facilitates the clear separation of instructions, code blocks, and expected results, enhancing the model’s ability to understand technical specifications.

Token Efficiency Analysis

The choice of format directly impacts cost and latency:

  • Markdown (# Title): 3 tokens.
  • HTML (<h1>Title</h1>): 7 tokens.
  • XML (<title>...</title>): 10 tokens.

Corporate Syntax Manual

Element Syntax Impact on LLM
Hierarchy # / ## / ### Defines information architecture.
Emphasis **bold** Highlights critical constraints.
Isolation ``` Separates code and data from instructions.

 

8. Contextualization for AI Coding Agents
AI coding agents like Cursor or GitHub Copilot require specific files that function as “READMEs for machines.” These files provide the necessary context regarding project architecture, coding styles, and workflows to ensure generated code integrates seamlessly into the repository.
• AGENTS.md: A standardized file in the repository root that summarizes technical rules, folder structures, and test commands.
• CLAUDE.md: Specific to Anthropic models, providing persistent memory and project instructions.
• INSTRUCTIONS.md: Used by tools like GitHub Copilot to understand repository-specific validation and testing flows.
By placing these files in nested subdirectories, developers can optimize the context window; the agent will prioritize the local context of the folder it is working in over the general project instructions, reducing noise.
9. Dynamic Context: Anthropic Skills
One of the most powerful innovations in context management is the implementation of “Skills.” Instead of saturating the context window with every possible instruction at the start, Skills allow information to be loaded in stages as needed.
A Skill consists of three levels:
1. Metadata: Discovery information in YAML format, consuming minimal tokens so the model knows the skill exists.
2. Instructions: Procedural knowledge and best practices that only enter the context window when the model triggers the skill based on the prompt.
3. Resources: Executable scripts, templates, or references that are launched automatically on demand.
This dynamic approach allows for a library of thousands of rules—such as a company’s entire design system or testing protocols—to be available without overwhelming the AI’s active memory.
10. Workflow Context Typologies
To structure AI-assisted development effectively, three types of context should be implemented:
1. Project Context (Persistent): Defines the tech stack, architecture, and critical dependencies (e.g., PROJECT_CONTEXT.md).
2. Workflow Context (Persistent): Specifies how the AI should act during repetitive tasks like bug fixing, refactoring, or creating new features (e.g., WORKFLOW_FEATURE.md).
3. Specific Context (Temporary): Information created for a specific session or a single complex task (e.g., an error analysis or a migration plan) and deleted once the task is complete.
A practical example of this is the migration of legacy code. A developer can define a specific migration workflow that includes manual validation steps, turning the AI into a highly efficient and controlled refactoring tool rather than a source of technical debt.
Conclusion: The Role of the Context Architect
In the era of AI-assisted programming, success does not rely solely on the raw power of the models. It depends on the software engineer’s ability to orchestrate dialogue and manage the input data ecosystem. By mastering prompt engineering tactics and the structures of context engineering, developers transform LLMs from simple text assistants into sophisticated development companions. The modern developer is evolving into a “Context Architect,” responsible for directing the generative capacity of the AI toward technical excellence and architectural integrity. Mastery of language logic is no longer optional; it is the definitive tool of the Software Engineer 2.0.
]]>
https://blogs.perficient.com/2026/02/16/language-mastery-as-the-new-frontier-of-software-development/feed/ 0 390355
Enhancing Fluent UI DetailsList with Custom Sorting, Filtering, Lazy Loading and Filter Chips https://blogs.perficient.com/2026/02/04/enhancing-fluent-ui-detailslist-with-custom-sorting-filtering-lazy-loading-and-filter-chips/ https://blogs.perficient.com/2026/02/04/enhancing-fluent-ui-detailslist-with-custom-sorting-filtering-lazy-loading-and-filter-chips/#respond Wed, 04 Feb 2026 07:48:24 +0000 https://blogs.perficient.com/?p=390027

Fluent UI DetailsList custom sorting and filtering can transform how structured data is displayed. While the default DetailsList component is powerful, it doesn’t include built‑in features like advanced sorting, flexible filtering, lazy loading, or selection‑driven filter chips. In this blog, we’ll show you how to extend Fluent UI DetailsList with these enhancements, making it more dynamic, scalable, and user‑friendly.

We’ll also introduce simple, reusable hooks that allow you to implement your own filtering and sorting logic, which will be perfect for scenarios where the default behavior doesn’t quite fit your needs. By the end, you’ll have a flexible, feature-rich Fluent UI DetailsList setup with sorting and filtering that can handle complex data interactions with ease.

Here’s what our wrapper brings to the table:

  • Context‑aware column menus that enable sorting beyond simple A↔Z ordering
  • Filter interfaces designed for each data type (.i.e. freeform text, choice lists, numeric ranges, or time values)
  • Selection chips that display active filters and allow quick deselection with a single click
  • Lazy loading with infinite scroll, seamlessly integrated with your API or pagination pipeline
  • One orchestrating component that ties all these features together, eliminating repetitive boilerplate

Core Architecture

The wrapper includes:

  • Column Definitions: To control how each column sorts/filters
  • State & Refs: To manage final items, full dataset, and UI flags
  • Default Logic By overriding hooks – onSort, onFilter
  • Selection: Powered by Fluent UI Selection API
  • Lazy Loading: Using IntersectionObserver
  • Filter Chips: Reflect selected rows

Following are the steps to achieve these features:

Step 1: Define Column Metadata

Each column in the DetailsList must explicitly describe its data type, sort behavior, and filtering behavior. This metadata helps the wrapper render the correct UI elements such as combo boxes, number inputs, or time pickers.

Each column needs metadata describing:

  • Field type
  • Sort behavior
  • Filter behavior
  • UI options (choice lists, icons, etc.)
export interface IDetailsListColumnDefinition {
  fieldName: string;
  displayName: string;
  columnType?: DetailsListColumnType; // Text, Date, Time, etc.
  sortDetails?: { fieldType: SortFilterType };
  filterDetails?: {
    fieldType: SortFilterType;
    filterOptions?: IComboBoxOption[];
    appliedFilters?: any[];
  };
}

Following is the example:

const columns = [{
  fieldName: 'status',
  displayName: 'Status',
  columnType: DetailsListColumnType.Text,
  sortDetails: {
    fieldType: SortFilterType.Choice
  },
  filterDetails: {
    fieldType: SortFilterType.Choice,
    filterOptions: [{
      key: 'Active',
      text: 'Active'
    },
    {
      key: 'Inactive',
      text: 'Inactive'
    }]
  }
}];

Step 2: Implement Type-Aware Fluent UI DetailsList Custom Sorting

The sorting mechanism dynamically switches based on the column’s data type. Time fields are converted to minutes to ensure consistent sorting, while text and number fields use their native values. It supports following:

  • Supports Text, Number, NumberRange, Date, and Time (custom handling for time via minute conversion).
  • Sort direction is controlled from the column’s context menu.
  • Works with default sorting or lets you inject custom sorting via onSort.
  • Default sorting uses lodash orderBy unless onSort is provided

Sample code for its implementation can be written as follows:

switch (sortColumnType) {
case SortFilterType.Time:
  sortedItems = orderBy(sortedItems, [item = >getTimeForField(item, column.key)], column.isSortedDescending ? ['desc'] : ['asc']);
  break;
default:
  sortedItems = orderBy(sortedItems, column.fieldName, column.isSortedDescending ? 'desc': 'asc');
}

Step 3: Implement Fluent UI DetailsList Custom Filtering (Text/Choice/Range/Time)

Filtering inputs change automatically based on column type. Text and choice filters use combo boxes, while numeric fields use range inputs. Time filters extract and compare HH:mm formatted values.

Text & Choice Filters

Implemented using Fluent UI ComboBox as follows:

<ComboBox

    allowFreeform={!isChoiceField}
    
    multiSelect={true}
    
    options={comboboxOptions}
    
    onChange={(e, option, index, value) =>
    
    _handleFilterDropdownChange(e, column, option, index, value)
    
    }
/>

Number Range Filter

Implemented as two input boxes, min & max for defining number range.

  • Min/Max chips are normalized in order [min, max].
  • Only applied if present; absence of either acts as open‑ended range.

Time Filter

For filtering time, we are ignoring date part and just considering time part.

  • Times are converted to minutes since midnight(HH:mm) to sort reliably regardless of display format.
  • Filtering uses date-fns format() for display and matching.

Step 4: Build the Filtering Pipeline

This step handles the filtering logic as capturing user-selected values, updating filter states, re-filtering all items, and finally applying the active sorting order. If custom filter logic is provided, it overrides the defaults. It will work as follows:

  1. User changes filter
  2. Update column.filterDetails.appliedFilters
  3. Call onFilter (if provided)
  4. Otherwise run default filter pipeline as follows:

allItems → apply filter(s) → apply current sort → update UI

Following are some helper functions that can be created for handing filter/sort logic:

  • _filterItems
  • _applyDefaultFilter
  • _applyDefaultSort

Step 5: Display Filter Chips

When selection is enabled, each selected row appears as a dismissible chip above the grid. Removing the chip automatically deselects the row, ensuring tight synchronization between UI and data.

<FilterChip key={filterValue.key} filterValue={filterValue} onRemove={_handleChipRemove} />

Note: This is a custom subcomponent used to handle filter chips. Internally it display selected values in chip form and we can control its values and functioning using onRemove and filterValue props.

Chip removal:

  • Unselects row programmatically
  • Updates the selection object

Step 6: Implementing Lazy Loading (IntersectionObserver)

The component makes use of IntersectionObserver, to detect if the user reaches the end of the list. Once triggered, it calls the lazy loading callback to fetch the next batch of items from the server or state.

  • An additional row at the bottom triggers onLazyLoadTriggered() as it enters the viewport.
  • Displays a spinner while loading; attaches the observer when more data is available.

A sentinel div at the bottom triggers loading:

observer.current = new IntersectionObserver(async entries => {
  const entry = entries[0];
  if (entry.isIntersecting) {
    observer.current ? .unobserve(lazyLoadRef.current!);
    await lazyLoadDetails.onLazyLoadTriggered();
  }
});

Props controlling behavior:

lazyLoadDetails ? :{
  enableLazyLoad: boolean;
  onLazyLoadTriggered: () => void;
  isLoading: boolean;
  moreItems: boolean;
};

Step 7: Sticky Headers

Sticky headers keep the column titles visible as the user scrolls through large datasets, improving readability and usability. Following is the code where, maxHeight property determines the scrollable container height:

const stickyHeaderStyle = {
  root: {
    maxHeight: stickyHeaderDetails ? .maxHeight ? ?450
  },
  headerWrapper: {
    position: 'sticky',
    top: 0,
    zIndex: 1
  }
};

Step 8: Putting It All Together — Minimal Example for Fluent UI DetailsList custom filtering and sorting

Following is an example where we are calling our customizes details list component:

<CustomDetailsList
  columnDefinitions={columns}
  items={data}
  allItems={data}
  checkboxVisible={CheckboxVisibility.always}
  initialSort={{ fieldName: "name", direction: SortDirection.Asc }}
  filterChipDetails={{
    filterChipKeyColumnName: "key",

    filterChipColumnName: "name",
  }}
  stickyHeaderDetails={{ enableStickyHeader: true, maxHeight: 520 }}
  lazyLoadDetails={{
    enableLazyLoad: true,

    isLoading: false,

    moreItems: true,

    onLazyLoadTriggered: async () => {
      // load more
    },
  }}
/>;

Accessibility & UX Notes

  • Keyboard: Enter key applies text/number inputs instantly; menu remains open so users can stack filters.
  • Clear filter: Context menu shows “Clear filter” action only when a filter exists; there’s also a “Clear Filters (n)” button above the grid that resets all columns at once.
  • Selection cap: To begin, maxSelectionCount helps prevent accidental bulk selections; next, it provides immediate visual feedback so users can clearly see their limits in action.

Performance Guidelines

  • Virtualization: For very large datasets, you can enable virtualization and validate both menu positioning and performance. For current example, onShouldVirtualize={() => false} is used to maintain a predictable menu experience.
  • Server‑side filtering/sorting: If your dataset is huge, pass onSort/onFilter and do the heavy lifting server‑side, then feed the component the updated page through items.
  • Lazy loading: Use moreItems to hide the sentinel when the server reports the last page; set isLoading to true to show the spinner row.

Conclusion

Finally, we have created a fully customized Fluent UI DetailsList with custom filtering and sorting which condenses real‑world list interactions into one drop‑in component. CustomDetailsList provides a production-ready, extensible, developer-friendly data grid wrapper with following enhanced features:

  • Clean context menus for type‑aware sort & filter
  • Offers selection chips for quick, visual interaction and control
  • Supports lazy loading that integrates seamlessly with your API
  • Allows you to keep headers sticky to maintain clarity in long lists
  • Delivers a ready‑to‑use design while allowing full customization when needed

GitHub repository

Please refer to the GitHub repository below for the full code. A sample has been provided within to illustrate its usage:

https://github.com/pk-tech-dev/customdetailslist

 

 

 

]]>
https://blogs.perficient.com/2026/02/04/enhancing-fluent-ui-detailslist-with-custom-sorting-filtering-lazy-loading-and-filter-chips/feed/ 0 390027
Kube Lens: The Visual IDE for Kubernetes https://blogs.perficient.com/2026/02/02/kube-lens/ https://blogs.perficient.com/2026/02/02/kube-lens/#comments Mon, 02 Feb 2026 15:37:47 +0000 https://blogs.perficient.com/?p=389778

Kube Lens — The Visual IDE for Kubernetes

Kube Lens is a desktop Kubernetes IDE that gives you a single, visual control plane for clusters, resources, logs and metrics—so you spend less time wrestling with kubectl output and more time solving real problems. In this post I’ll walk through installing Lens, adding clusters, and the everyday workflows I actually use, the features that speed up debugging, and practical tips to get teams onboarded safely.

Prerequisites

A valid kubeconfig (~/.kube/config) with the cluster contexts you need (or point Lens at alternate kubeconfig files).

What is Lens (Lens IDE / Kube Lens)

Lens is a cross-platform desktop application that connects to one or many Kubernetes clusters and presents a curated, interactive UI for exploring workloads, nodes, pods, services, and configuration. Think of it as your cluster’s cockpit—visual, searchable, and stateful—without losing the ability to run kubectl commands when you need them.

Kube Lens features

Kube Lens shines by packaging common operational tasks into intuitive views:

  • Multi-cluster visibility and quick context switching so you can compare clusters without copying kubeconfigs.
  • Live metrics and health signals (CPU, memory, pod counts, events) visible on a cluster overview for fast triage.
  • Built-in terminal scoped to the selected cluster/context so CLI power is always one click away.
  • Log viewing, searching, tailing, and exporting right next to pod details — no more bouncing between tools.
  • Port-forwarding and local access to cluster services for debugging apps in-situ.
  • Helm integration for discovering, installing, and managing releases from the UI.
  • CRD inspection and custom resource management so operators working with controllers and operators aren’t blind to their resources.
  • Team and governance features (SSO, RBAC-aware views, CVE reporting) for secure enterprise use.

Install Lens (short how-to)

Kube Lens runs on macOS, Windows, and Linux. Download the installer from the Lens site,

 

Lens installer window on desktop

 

After installing it, launch Lens and complete the initial setup, and create/sign in with a Lens ID (for syncing and team features)

Add your cluster(s)

  • Lens automatically scans default kubeconfig locations (~/.kube/config).
  • To add a cluster manually: go to the Catalog or Clusters view → Add Cluster → paste kubeconfig or point to a file.
  • You can rename clusters and tag them (e.g., dev, staging, prod) for easier filtering.

Klens Clusters

Main UI walkthrough

Klens Overview

  • Overview shows your cluster health assessment. This is where you get visibility into node status, resource utilization, and workload distribution

Klens Cluster Overview

  • Nodes show you data about your cluster nodes

Klens Nodes

  • Workloads will let you explore your deployed resources

Klens Workloads

  • Config will show you data about your configmaps, secrets, resource quotas, limit ranges and more

Klens Config

  • In the Network you will see information about your services, ingresses, and others

Klens Network

And as you can see, there are other options present, so this would be a great time to stay a couple of minutes in the app, and explore all the things that you can do.

As soon as there are changes happening in your cluster, Lens picks them and propagates them immediately through the interface. Pod restarts, scaling operations, and configuration changes appear without manual refresh, providing live insight into cluster operations that static kubectl output cannot simply match.

Example:

I will start with a basic nginx deployment that shows pod lifecycle management:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
        listen 80;
        location / {
            return 200 'Hello from Lens!\n';
            add_header Content-Type text/plain;
        }

Apply this using kubectl.

kubectl apply -f nginx_deployment.yaml

Now that we’ve created a couple of resources, we are ready to explore Lens.

Here are all the pods running:

Klens Pods

By clicking on the 3 dots on the right side, you get a couple of options:

Klens Pod Option

You can easily attach to a pod, open a shell, evict it, view the logs, edit it, and even delete it.

Here is the ConfigMap:

Klens Configmap View

And this is the service:
Klens Service View

Port-Forward to Nginx

Apart from everything that I’ve shown you until now, you also get an easy way to enable port forwarding through Lens.

Just go to your Network tab, select Services, and then choose your service:

Port Forward View

You will see an option to Forward it, so let’s click on it:

Klens Port Forward View 1

You can choose a local port to forward it to, or leave it as Random, have the option to directly open in your browser

Helm Deploy:

Lens provides a built-in Helm client to browse, install, manage, and even roll back Helm charts directly from its graphical user interface (GUI), simplifying deployment and management of Kubernetes applications. You can find available charts from repositories (like Bitnami, enabled by default), customize values.yaml, and install releases with a few clicks, seeing all your Helm deployments in the dedicated Helm tab. 

  1. Access Helm: Click the “Helm” icon in Lens, then select “Charts” to see available options.
  2. Browse & Search: Find charts from repositories (Artifact Hub, Bitnami, etc.) or add custom ones.
  3. Install: Select a chart, choose a version, edit parameters in the values.yaml section, and click “Install”.
  4. Manage Releases: View installed releases, check their details (applied values), and perform actions like rolling back. 

Using built-in metrics and charts

  • Lens integrates cluster metrics (where available) for nodes and workloads.
  • Toggle charts in the details pane to get CPU/memory trends over time.

Klens Dashboard

Tips and best practices

  • Keep kubeconfigs minimal per cluster and use named contexts for clarity.
  • Tag clusters (dev/stage/prod) and use color coding to reduce the risk of accidental changes.
  • Use Lens for exploration and quick fixes; keep complex automation in CI/CD pipelines.
  • For sensitive environments, restrict Lens access and avoid storing long-lived credentials locally.

 

Reference

https://docs.k8slens.dev/

]]>
https://blogs.perficient.com/2026/02/02/kube-lens/feed/ 1 389778
Just what exactly is Visual Builder Studio anyway? https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/ https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/#respond Thu, 29 Jan 2026 15:40:45 +0000 https://blogs.perficient.com/?p=389750

If you’re in the world of Oracle Cloud, you are most likely busy planning your big switch to Redwood. While it’s easy to get excited about a new look and a plethora of AI features, I want to take some time to talk about a tool that’s new (at least to me) that comes along with Redwood. Functional users will come to know VB Studio as the new method for delivering page customizations, but I’ve learned it’s much more.

VB Studio has been around since 2020, but I only started learning about it recently. At its core, VB Studio is Oracle’s extension platform. It provides users with a safe way to customize by building around their systems instead of inside of it. Since changes to the core code are not allowed, upgrades are much less problematic and time consuming.  Let’s look at how users of different expertise might use VB Studio.

Oracle Cloud Application Developers

I wouldn’t call myself a developer, but this is the area I fit into. Moving forward, I will not be using Page Composer or HCM Experience Design Studio…and I’m pretty happy about that. Every client I work with wants customization, so having a one-stop shop with Redwood is a game-changer after years of juggling tools.

Sandboxes are gone. VB Studio uses Git repositories with branches to track and log every change. Branches let multiple people work on different features without conflict, and teams review and merge changes into the main branch in a controlled process.

And what about when these changes are ready for production? By setting up a pipeline from your development environment to your production environment, these changes can be pushed straight into production. This is huge for me! It reduces the time needed to implement new Oracle modules. It also helps with updating or changing existing systems as well. I’ve spent countless hours on video calls instructing system administrators on how to perform requested changes in their production environment because their policy did not allow me to have access. Now, I can make these changes in a development instance and push them to production. The sys admin can then view these changes and approve or reject them for production. Simple!

Maxresdefault

Low-Code Developers

 

Customizations to existing features are great, but what about building entirely new functionality and embedding it right into your system?  VB Studio simplifies building applications, letting low-code developers move quickly without getting bogged down in traditional coding. With VB Studio’s visual designer, developers can drag and drop components, arrange them the way they want, and preview changes instantly. This is exciting for me because I feel like it is accessible for someone who does very little coding. Of course, for those who need more flexibility, you can still add custom logic using familiar web technologies like JavaScript and HTML (also accessible with the help of AI). Once your app is ready, deployment is easy. This approach means quicker turnaround, less complexity, and applications that fit your business needs perfectly.

 

Experienced Programmers

Okay, now we’re getting way out of my league here, so I’ll be brief. If you really want to get your hands dirty by modifying the code of an application created by others, you can do that. If you prefer building a completely custom application using the web programming language of your choice, you can also do that. Oracle offers users a wide range of tools and stays flexible in how they use them. Organizations need tailored systems, and Oracle keeps evolving to make that possible.

 

https://www.oracle.com/application-development/visual-builder-studio/

]]>
https://blogs.perficient.com/2026/01/29/just-what-exactly-is-visual-builder-studio-anyway/feed/ 0 389750
Build a Custom Accordion Component in SPFx Using React – SharePoint https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/ https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/#comments Thu, 22 Jan 2026 07:50:54 +0000 https://blogs.perficient.com/?p=389813

When building modern SharePoint Framework (SPFx) solutions, reusable UI components play a crucial role in keeping your code clean, scalable, and maintainable. In particular, interactive components help improve the user experience without cluttering the interface.

Among these components, the Accordion is a commonly used UI element. It allows users to expand and collapse sections, making it easier to display large amounts of information in a compact and organized layout. In this blog, we’ll walk through how to create a custom accordion component in SPFx using React.


Create the Accordion Wrapper Component

To begin with, we’ll create a wrapper component that acts as a container for multiple accordion items. At a high level, this component’s responsibility is intentionally simple: it renders child accordion items while keeping styling and layout consistent across the entire accordion.This approach allows individual accordion items to remain focused on their own behavior, while the wrapper handles structure and reusability.

Accordion.tsx

import * as React from 'react';
import styles from './Accordion.module.scss';
import classNames from 'classnames';
import { IAccordionItemProps } from './subcomponents/AccordionItem';

import { ReactElement } from 'react';

export interface IAccordionProps {
  children?:
    | ReactElement<IAccordionItemProps>
    | ReactElement<IAccordionItemProps>[];
  className?: string;
}


const Accordion: React.FunctionComponent<
  React.PropsWithChildren<IAccordionProps>
> = (props) => {
  const { children, className } = props;
  return (
    <div className={classNames(styles.accordionSubcomponent, className)}>
      {children}
    </div>
  );
};

export default Accordion;

Styling with SCSS Modules

Next, let’s focus on styling. SPFx supports SCSS modules, which is ideal for avoiding global CSS conflicts and keeping styles scoped to individual components. Let’s see styling for accordion and accordion items.

Accordion.module.scss

.accordionSubcomponent {
    margin-bottom: 12px;
    .accordionTitleRow {
        display: flex;
        flex-direction: row;
        align-items: center;
        padding: 5px;
        font-size: 18px;
        font-weight: 600;
        cursor: pointer;
        -webkit-touch-callout: none;
        -webkit-user-select: none;
        -khtml-user-select: none;
        -moz-user-select: none;
        -ms-user-select: none;
        user-select: none;
        border-bottom: 1px solid;
        border-color: "[theme: neutralQuaternaryAlt]";
        background: "[theme: neutralLighter]";
    }
    .accordionTitleRow:hover {
        opacity: .8;
    }
    .accordionIconCol {
        padding: 0px 5px;
    }
    .accordionHeaderCol {
        display: inline-block;
        width: 100%;
    }
    .iconExpandCollapse {
        margin-top: -4px;
        font-weight: 600;
        vertical-align: middle;
    }
    .accordionContent {
        margin-left: 12px;
        display: grid;
        grid-template-rows: 0fr;
        overflow: hidden;
        transition: grid-template-rows 200ms;
        &.expanded {
          grid-template-rows: 1fr;
        }
        .expandableContent {
          min-height: 0;
        }
    }
}

Styling Highlights

  • Grid‑based animation for expand/collapse
  • SharePoint theme tokens
  • Hover effects for better UX

Creating Accordion Item Component

Each expandable section is managed by AccordionItem.tsx.

import * as React from 'react';
import styles from '../Accordion.module.scss';
import classNames from 'classnames';
import { Icon } from '@fluentui/react';
import { useState } from 'react';


export interface IAccordionItemProps {
  iconCollapsed?: string;
  iconExpanded?: string;
  headerText?: string;
  headerClassName?: string;
  bodyClassName?: string;
  isExpandedByDefault?: boolean;
}
const AccordionItem: React.FunctionComponent<React.PropsWithChildren<IAccordionItemProps>> = (props: React.PropsWithChildren<IAccordionItemProps>) => {
  const {
    iconCollapsed,
    iconExpanded,
    headerText,
    headerClassName,
    bodyClassName,
    isExpandedByDefault,
    children
  } = props;
  const [isExpanded, setIsExpanded] = useState<boolean>(!!isExpandedByDefault);
  const _toggleAccordion = (): void => {
    setIsExpanded((prevIsExpanded) => !prevIsExpanded);
  }
  return (
    <Stack>
    <div className={styles.accordionTitleRow} onClick={_toggleAccordion}>
        <div className={styles.accordionIconCol}>
            <Icon
                iconName={isExpanded ? iconExpanded : iconCollapsed}
                className={styles.iconExpandCollapse}
            />
        </div>
        <div className={classNames(styles.accordionHeaderCol, headerClassName)}>
            {headerText}
        </div>
    </div>
    <div className={classNames(styles.accordionContent, bodyClassName, {[styles.expanded]: isExpanded})}>
      <div className={styles.expandableContent}>
        {children}
      </div>
    </div>
    </Stack>
  )
}
AccordionItem.defaultProps = {
  iconExpanded: 'ChevronDown',
  iconCollapsed: 'ChevronUp'
};
export default AccordionItem;

Example Usage in SPFx Web Part

<Accordion>
  <AccordionItem headerText="What is SPFx?">
    <p>SPFx is a development model for SharePoint customizations.</p>

  </AccordionItem>

  <AccordionItem
    headerText="Why use custom controls?"
    isExpandedByDefault={true}
  >
    <p>Custom controls improve reusability and UI consistency.</p>
  </AccordionItem>
</Accordion>

Accordion

Conclusion

By building a custom accordion component in SPFx using React, you gain:

  • Full control over UI behavior
  • Lightweight and reusable code
  • Native SharePoint theming

This pattern is perfect for:

  • FAQ sections
  • Configuration panels
  • Dashboard summaries
]]>
https://blogs.perficient.com/2026/01/22/build-a-custom-accordion-component-in-spfx-using-react-sharepoint/feed/ 1 389813
An Example Brainstorming Session https://blogs.perficient.com/2026/01/20/example-brainstorming-session/ https://blogs.perficient.com/2026/01/20/example-brainstorming-session/#respond Tue, 20 Jan 2026 23:42:15 +0000 https://blogs.perficient.com/?p=389807

In my last blog post I addressed how to prepare your team for a unique experience and have them primed and ready for brainstorming.

Now I want to cover what actually happens INSIDE the brainstorming session itself. What activities should be included? How do you keep the energy up throughout the session?

Here’s a detailed brainstorming framework and agenda you can follow to generate real results. It works whether you have 90 minutes or a full day; whether you are tackling product innovation, process improvement, strategic planning, or problem solving; and whether you have 4 people on the team or 12 (try not to do more than that). Feel free to pick and choose what you like and adjust to fit your team and desired depth.

Pre-Session Checklist

  • Room Setup: Seating arranged to encourage collaboration (avoid traditional conference setups), background music playing softly, be free to move around. Being offsite is best!
  • Materials: Whiteboards, sticky notes, markers, small and large paper pads, dot stickers for voting, projector/screen.
  • Helpers: Enlist volunteers to capture ideas, manage breakout groups, and tally votes. Ensure they know their roles ahead of time.
  • Technology: If you’re using digital tools, screen sharing, or virtual whiteboards, test everything before the team arrives.
  • Breaks: Make sure you plan for breaks. People need mental and physical break periods.
  • Food: Have snacks and beverages ready. If you have a session over 3 hours, plan lunch and/or supper.

1. Welcome the Team (5-20 minutes)

As people arrive, keep things light to set the tone. Try to keep a casual conversation going, laughs are ideal! This isn’t another meeting, it’s a space for creative thinking.

If anyone participated in personal disruptions ahead of the meeting, (with no pressure) see if they’ll share. As the facilitator, have your own ready to share and also explain the room disruptions you’ve set up.

2. Mental Warmups (5-20 minutes)

The personal disruptions mentioned in my other post are meant to break people out of their mental ruts. This period of warm up is meant to achieve the same thing.

Many facilitators do this with ice breakers. I personally don’t like them and have had better luck with other approaches. Consider sharing some optical illusions or brain teasers that stretch their minds rather than putting them on the spot with forced socialization.

That said, ice breakers that get people up and building something together can work too, if you have one you like. Things like small teams building the tallest tower out of toothpicks and mini-marshmallows is a common one that works well.

3. Cover the Brainstorming Ground Rules (2-10 minutes)

  • No Bad Ideas: Save negativity for later. Right now, we’re generating not judging.
  • Quantity Over Quality: More ideas mean more chances for success. Aim for volume.
  • Wild Ideas Welcome: Suspend reality temporarily. One impossible idea can spark a feasible one.
  • No Ownership Battles: Ideas belong to the team. Collaboration beats competition.
  • Build on Others: Use “Yes, and…” thinking. Evolve, merge, and improve ideas together.
  • Stay Present: No emails, no phones. Even during breaks, don’t get distracted.

These rules should be available throughout the session. Consider hanging a poster with them or sharing an attendee packet that includes it. If anyone is attending remotely, share these in the chat area.

As the facilitator, you should be prepared to enforce these rules!

4. Frame the Challenge (5-20 minutes)

Why are we here today? What’s the goal of this brainstorming session? What do we hope to achieve after spending hours together?

This is a critical time to ensure everyone’s head is in the right place before diving into the actual brainstorming. We’re not here just to have fun, we’re here to solve a business problem. Use whatever information you have to enlighten the team on current state, desired state, competition, business data, customer feedback…whatever you have.

Now that we have everyone mentally prepared, consider a short break after this.

5.A. Individual Ideation (5-15 minutes)

This time is well spent whether you had your team generate ideas ahead of time or not. Even if you asked them to, you cannot expect everyone to have devoted time to think about your business objective ahead of time. You will end up with more diverse ideas if you keep this individual time in the agenda.

Here, we want to provide your attendees with paper, pens, and/or sticky notes, and set a timer. Remind them that quantity of ideas is the goal.

Ask the team on their own to come up with 10+ ideas in 5 minutes. They can compete to see who comes up with the most. Keep some soft background music playing (instrumental music). Consider dropping a “crazy bomb of an idea” as an example… something completely unrealistic and surprising, just to jar their minds one last time before they start. Show them that it’s OK to be wild in their suggestions.

When the round is done, optionally, you can take the next 5-10 minutes hearing some of the team’s favorites. Not all, just the favorites. Write them on a board, or post the sticky notes up.

5.B. Second Round of Individual Ideation (10-20 minutes)

If you have time, do a second round of individual idea creation, but this time introduce lateral thinking. Using random entry to show them that ideas can be triggered through associations. Have snippets of paper with random words for each person to draw from a bowl or hat. Give them an additional 5 or 10 minutes to come up with another set of ideas that relates to the word they selected.

For this second round you should be prepared to help anyone who struggles. You can suggest connections to their selected word, or push them to explore synonyms, antonyms, or other associations. For instance, if they draw “tiger”, you can associate animal, cat, jungle, teeth, claws, stripes, fur, orange, black, white, predator, aggression, primal, mascot, camouflage, frosted flakes, breakfast, sports, Detroit, baseball, Cincinnati, football, apparel, clothing, costume, Halloween, and more!

The associations are endless. They draw “tiger”, associate “stripe”, and relate that to the objective in how “striping” could mean updating parts of a system, and not all of it. Or they associate “baseball” and relate that to the objective in how a “bunt” is a strategic move that averts expectations and gets you on base.

6. Idea Sharing (10-60 minutes)

This portion of brainstorming is where ideas start to come together. When people start sharing their initial ideas, others get inspired. Remind everyone that we’re not after ownership, we’re collectively trying to solve the business problem. Your helpers can take notes on who was involved in an idea, so they can later be tagged for additional input or the project team.

This step can be nerve-wracking. Professionals may be uncertain about sharing half-baked ideas, but this is what we need! Don’t pressure anyone, so you, as the facilitator, can offer to share ideas on their behalf if they would like that.

As part of this step, begin identifying patterns and themes. People’s first ideas are generally the easy ones that multiple people will have (including your competitors). There will be similarities. Group those ideas now and try to give the groupings easy to reference names.

The bulk of the ideas are now in everyone’s heads, consider a short break after this.

7. Idea Expansion (20-60 minutes)

As the team comes back from a break, do a round of dot voting. Your ideas are pasted up and grouped, and the team has had some time to let those ideas settle in their minds. Now we’re ready to start driving the focus of the rest of this session.

There should be a set of concepts that are most intriguing to the team. Now, you will encourage pushing some further, spin-off ideas, and cross-pollination. Even flipping ideas to their opposite is still welcome. SCAMPER is an acronym that applies to creative thinking, and you might print it out and display it for your session today.

Like comedy improv, we still do not want to be negative about any idea. Use “yes, and…” to elaborate on someone’s idea. “I really like this idea, now imagine if we spin it as…” Make sure these expansions are being written down and captured.

8. Wild Card Rounds (10-60 minutes)

If you have a larger group, this time is ideal for break-out sessions. If your group is small, it can be another individual ideation round.

Take the top contending themes and divvy them out to groups or individuals. Then you can run 1-3 speed rounds, rotating themes between rounds.

  1. Role Play: Ask them to expand on their theme as if they were Steve Jobs, Jeff Bezzos, Einstein, your competitor, or SpongeBob. This makes them think differently.
  2. Constraints: Consider how they would have to change the idea if they were limited by budget, time, quality, or approach. Poetry is beautiful because of its constraints.
  3. Wishful Thinking: What could you do if all constraints were lifted? If you were writing a fictional book, how would you make this happen?
  4. Exaggeration: Take the idea to the extreme. If the idea as stated is 10%, what does 100% look like? What does 10-times look like?

This level of pushing creativity can be exhausting, consider a break after this.

9. Bring it Together (10-60 minutes)

Update your board with the latest ideas and iterations, if you haven’t already. Give the attendees a few minutes to peruse the posted ideas and reflect. Refresh the favorites list with another round of dot voting.

If time allows, move on from all this divergent thinking, and ask the attendees to list some constraints or areas that need to be investigated for these favorite ideas to work. Keep in mind this is still a “no bad ideas” session, so this effort should be a means to identify next steps for the idea and how to ensure it is successful if it is selected to move forward.

If you still have more time available, start some discussion that could help create a priority matrix after the meeting (like How/Now/Wow). Venture into identifying the following for each of the favorite ideas. We’re just looking for broad strokes and wide ranges today. On a scale of 1-10, where do these fall?

  • Impact: How much would this change the story for the business?
  • Effort: How much effort from business resources might be required?
  • Timeline: What would the timeline look like?
  • Cost: Would there be outside costs?

10. Next Steps (5-10 minutes)

This is the last step of this brainstorming session, but this is not the end. Now we fill the team in on what happens next and give them confidence that today’s effort will be useful. Start by asking the team what excited or surprised them the most today, and what they’d like to do again sometime.

Explain to the team how these ideas will be documented and shared out. The team should already be excited about at least one of today’s ideas, they’ll sleep on these ideas and continue thinking. So, let them know that there will be an opportunity to add additional thoughts to their favorites in the days/weeks to come.

Explain if you have any further plans to get feedback from stakeholders, leaders, or customers. If there are decision makers that are not in this meeting, then help your team understand what you’ll be doing to share these collective ideas with those who will make the final call.

Lastly, thank them for their time today. Express your own satisfaction and excitement for what’s to come. Try to squeeze in a few more laughs and build a feeling of teamwork. Consider remarking on something from this meeting as a “you had to be there” type of joke, even if it is the unrealistic bombshell of an idea that gets a laugh.

Tips for the Facilitator

  • Energy Management: Watch the room’s energy. If it dips, inject movement. Stand up, stretch, take a quick walk, change the pace with a speed round.
  • Protect the Quiet Voices: Don’t let extroverts dominate. Use techniques like written brainstorming and round-robin sharing to ensure everyone contributes.
  • Embrace the Awkward Silence: When you ask a question and get silence, resist the urge to fill it. Give people time to think. Count to ten in your head before jumping in, and don’t make them feel like it was a failure to not say anything.
  • Document Everything: Assign helpers to photograph whiteboards, capture sticky notes, and record key insights. You’ll lose valuable ideas if you rely on memory alone.
  • Keep Your “Crazy Idea Bomb” Ready: If the room gets stuck, be prepared to throw out something intentionally wild to break the pattern. Sometimes the group needs permission to think bigger.
  • Stay Neutral: As facilitator, your job is to guide the process, not advocate for specific ideas. You can participate, if you want to, but save your own advocacy for later. No idea is a bad idea in this session.

Conclusion

I hope you find this example brainstorming session agenda helpful! It’s one of my favorite things to run through. Get your team prepped and ready, then deliver an amazing workshop to drive creativity and innovation!

……

If you are looking for a partner to run brainstorming with, reach out to your Perficient account manager or use our contact form to begin a conversation.

]]>
https://blogs.perficient.com/2026/01/20/example-brainstorming-session/feed/ 0 389807
Upgrading from Gulp to Heft in SPFx | Sharepoint https://blogs.perficient.com/2026/01/14/upgrading-from-gulp-to-heft-in-spfx-sharepoint/ https://blogs.perficient.com/2026/01/14/upgrading-from-gulp-to-heft-in-spfx-sharepoint/#respond Wed, 14 Jan 2026 09:59:20 +0000 https://blogs.perficient.com/?p=389727

With the release of SPFx v1.22, Microsoft introduced Heft as the new build engine, replacing Gulp. This change brings better performance, modern tooling, and a more standardized approach to building SPFx solutions. In this blog, we’ll explore what this means for developers and how to upgrade.

What is Gulp in SPFx?

In SharePoint Framework (SPFx), Gulp is a JavaScript-based task runner that was traditionally used to automate build and development tasks.

What Gulp Did in SPFx

Historically, the SharePoint Framework (SPFx) relied on Gulp as its primary task runner, responsible for orchestrating the entire build pipeline. Gulp did a series of scripted tasks, defined inside gulpfile.js and in different SPFx build rig packages. These tasks automate important development and packaging workflows.These tasks included:

  • Automates repetitive tasks such as:
    • TypeScript to JavaScript.
    • Bundling multiple files into optimized packages.
    • Minifying code for better performance.
    • Packaging the solution into a “.sppkg” file for deployment.
  • Runs development servers for testing (gulp serve).
  • Watches for changes and rebuilds automatically during development

Because these tasks depended on ad‑hoc JavaScript streams and SPFx‑specific build rig wrappers, the pipeline could become complex and difficult to extend consistently across projects.

The following are the common commands included in gulp:

  • gulp serve – local workbench/dev server
  • gulp build – build the solution
  • gulp bundle – produce deployable bundles
  • gulp package-solution – create the .sppkg for the App Catalog

What is Heft?

In SharePoint Framework (SPFx), Heft is the new build engine introduced by Microsoft, starting with SPFx v1.22. It replaces the older Gulp-based build system.

Heft has replaced Gulp to support modern architecture, improve performance, ensure consistency and standardization, and provide greater extensibility.

Comparison between heft and gulp:

Area Gulp (Legacy) Heft (SPFx v1.22+)
Core model Task runner with custom JS/streams (gulpfile.js) Config‑driven orchestrator with plugins/rigs
Extensibility Write custom tasks per project Use Heft plugins or small “patch” files; standardized rigs
Performance Sequential tasks; no native caching Incremental builds, caching, unified TypeScript pass
Config surface Often scattered across gulpfile.js and build rig packages Centralized JSON/JS configs (heft.json, Webpack patch/customize hooks)
Scale Harder to keep consistent across many repos Designed to scale consistently (Rush Stack)

Installation Steps for Heft

  • To work with the upgraded version, you need to install Node v22.
  • Run the command npm install @rushstack/heft –global

Removing Gulp from an SPFx Project and Adding Heft (Clean Steps)

  • To work with the upgraded version, install Node v22.
  • Remove your current node_modules and package-lock.json, and run npm install again
  • NOTE: deleting node_modules can take a very long time if you don’t skip the recycle bin.
    • Open PowerShell
    • Navigate to your Project folder
    • Run command Remove-Item -Recurse -Force node_modules
    • Run command Remove-Item -Force package-lock.json
  • Open the solution in VS Code
  • In terminal run command npm cache clean –force
  • Then run npm install
  • Run the command npm install @rushstack/heft –global

After that, everything should work, and you will be using the latest version of SPFx with heft. However, going forward, there are some commands to be aware of

Day‑to‑day Commands on Heft

  • heft clean → cleans build artifacts (eq. gulp clean)
  • heft build → compiles & bundles (eq. gulp build/bundle) (Note— prod settings are driven by config rather than –ship flags.)
  • heft start → dev server (eq. gulp serve)
  • heft package-solution → creates.sppkg (dev build)
  • heft package-solution –production → .sppkg for production (eq. gulp package-solution –ship)
  • heft trust-dev-cert → trusts the local dev certificate used by the dev server (handy if debugging fails due to HTTPS cert issues

Conclusion

Upgrading from Gulp to Heft in SPFx projects marks a significant step toward modernizing the build pipeline. Heft uses a standard, configuration-based approach that improves performance, makes things the same across projects, and can be expanded for future needs. By adopting Heft, developers align with Microsoft’s latest architecture, reduce maintenance overhead, and gain a more scalable and reliable development experience.

]]>
https://blogs.perficient.com/2026/01/14/upgrading-from-gulp-to-heft-in-spfx-sharepoint/feed/ 0 389727