Services Articles / Blogs / Perficient https://blogs.perficient.com/category/services/ Expert Digital Insights Tue, 04 Nov 2025 15:08:37 +0000

Extending Personalization in Sitecore XM Cloud using Custom Conditions in Sitecore Personalize https://blogs.perficient.com/2025/10/31/extending-personalization-in-sitecore-xm-cloud-using-custom-conditions-in-sitecore-personalize-2/ Fri, 31 Oct 2025 18:25:53 +0000

Over the past few months, I have shared a couple of blogs exploring embedded personalization in Sitecore XM Cloud.

While XM Cloud embedded personalization (found within Pages) offers an excellent, out-of-the-box solution for common personalization needs, it has a key limitation: it restricts you to a predefined set of marketer-friendly conditions. This streamlined, page-variant-based approach is quick to implement for scenarios like localizing content by geography or targeting new vs. returning visitors, making it a great starting point for content authors.

The Need for a Deeper Personalization Engine

But here’s the limitation: XM Cloud is built for speed, scalability, and a smooth content authoring experience. It doesn’t let you create custom conditions inside Pages. In legacy Sitecore XP, developers could extend the rule engine with custom C# code. In XM Cloud, the design is different. For any personalization logic beyond basic, marketer-friendly, out-of-the-box rules (like device type or referrer), Sitecore directs you to its dedicated, cloud-native platform – Sitecore Personalize.

Yet real-world digital experiences often demand logic that goes beyond these basic audience segments. When you need to personalize based on proprietary business rules, custom data streams, or complex, multi-touchpoint journeys, the embedded tools won’t suffice.

This is where Sitecore Personalize becomes essential. Personalize is a standalone, cloud-native personalization and decisioning engine built to handle advanced scenarios.

Sitecore Personalize: The Technical Extension

Personalize acts as the key to unlocking limitless personalization power within your XM Cloud solution.

  • Custom Conditions: The most critical technical feature is the ability to define developer-written custom conditions. These conditions are authored in JavaScript within Personalize, built once by a developer, and then exposed to marketers for endless reuse within the XM Cloud Pages audience builder. This allows you to create highly tailored audience rules based on any data you can pipe into the platform.
  • Advanced Decisioning and Experimentation: Personalize is API-first, supporting complex decision models, A/B testing, and multi-variant experiments that can go far beyond simple page variants. It includes dedicated dashboards and robust analytics for measuring the performance of these sophisticated experiences.
  • Cross-Channel Orchestration: Personalize is designed to be truly cross-channel. Its capabilities extend beyond just the website, enabling you to orchestrate consistent, personalized experiences across email, mobile apps, and other API-driven touchpoints—leveraging a unified profile of the customer.

In short, Personalize extends XM Cloud’s personalization boundaries, making it possible to design highly tailored user journeys without being locked into out-of-the-box conditions.

This blog is the next step in the series: a developer-friendly deep dive into how Personalize works with XM Cloud.

XM Cloud and Personalize Integration

The seamless delivery of personalized content relies on a robust technical connection between your XM Cloud content management environment and the Sitecore Personalize decision engine. This part often feels like “black box magic,” so let’s clarify the two key phases: initial provisioning and run-time execution.

Provisioning & Tenant Setup

When an organization purchases a Sitecore Personalize license, the initial setup is managed by Sitecore Professional Services or your dedicated implementation partner. This process ensures the two separate cloud products are securely linked.

The typical process involves:

  • Tenant Mapping: Establishing a clear connection between your environments (e.g., XM Cloud Dev/Test connecting to a Non-Production Personalize tenant, and XM Cloud Production connecting to a Production Personalize tenant). This separation is crucial for ensuring you don’t mix test audiences and conditions with real production traffic.
  • Portal Access: You’ll receive invites to both the Sitecore Cloud Portal and the Personalize portal.
  • Site Identifier Configuration: A Site Identifier must be created and mapped within XM Cloud. This is a critical technical step that tells Personalize exactly which rendering host (your website) it’s communicating with.

Important: Don’t be surprised if you don’t see your new custom condition in XM Cloud Pages right away—always first check that the site identifier is configured correctly and that the tenant setup is complete. Once this is wired, your XM Cloud Pages environment can successfully surface custom conditions and experiences created in Personalize.

Runtime Architecture: How the Two Talk at Request Time

Once environments are provisioned, your XM Cloud rendering host (typically a Next.js application) needs to fetch the correct personalization decision for every page request.

The Sitecore Cloud SDK is the exclusive and recommended method for integrating personalization and A/B/n testing within JSS SDK Next.js applications hosted on XM Cloud. This is designed to leverage Next.js Middleware and server-side rendering (SSR) for optimal performance. But let us take a look at both the SDKs that can be used to integrate with Personalize:

  • Server-Side (SSR / Edge), using the Cloud SDK: The Next.js rendering host calls Personalize at render time (before the HTML is sent to the browser). Recommended for XM Cloud; ensures the variant is chosen before the page is delivered, eliminating the dreaded "flicker" of default content. Optimal for consistency and SEO.
  • Client-Side (Browser), using the Engage SDK: The page loads first, then Personalize (via an SDK snippet) evaluates and applies the winning variant in the user's browser. Simpler for non-XM Cloud or highly component-specific scenarios; however, this approach can cause a flash of default content before the personalized swap occurs.

Personalization Data Flow Summary (SSR/Edge)

The personalization flow for a dynamic (SSR/Edge) page is a server-side handshake orchestrated by the Next.js Middleware and the Cloud SDK (a simplified sketch follows the list below):

  • Initial Check (JSS/Next.js → Experience Edge): The Next.js Middleware first queries Experience Edge to verify if any personalization variants exist for the page.
  • Decisioning Call (JSS/Next.js → Personalize): If variants are found, the Middleware uses the Cloud SDK to send the visitor’s context to Sitecore Personalize.
  • Variant ID Returned (Personalize → JSS/Next.js): Personalize evaluates all rules and returns the winning Variant ID (or the default ID).
  • Content Fetch (JSS/Next.js → Experience Edge): The rendering host then makes a second request to Experience Edge, fetching the specific content layout matching the winning Variant ID.
  • Final Delivery: The fully personalized HTML is rendered on the server and delivered to the visitor, ensuring a flicker-free experience.
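
To make this handshake more concrete, here is a minimal, illustrative sketch of the decision flow in plain JavaScript. The helper functions (fetchPersonalizeInfo, getPersonalizeDecision, fetchLayoutForVariant) are hypothetical stand-ins for the Experience Edge and Personalize calls, not actual Cloud SDK APIs; in a real XM Cloud app this logic is handled for you by the JSS/Cloud SDK personalize middleware.

// Illustrative sketch only: the helper functions below are hypothetical stand-ins.
async function resolvePersonalizedLayout(path, visitorContext) {
  // 1. Ask Experience Edge whether this page has personalization variants at all.
  const personalizeInfo = await fetchPersonalizeInfo(path);
  if (!personalizeInfo || personalizeInfo.variantIds.length === 0) {
    return fetchLayoutForVariant(path, null); // no variants: serve the default layout
  }

  // 2. Send the visitor's context to Sitecore Personalize for a decision.
  const { variantId } = await getPersonalizeDecision({
    path,
    variantIds: personalizeInfo.variantIds,
    context: visitorContext, // cookies, geo, UTM parameters, etc.
  });

  // 3. Fetch the layout that matches the winning variant (or fall back to the default).
  return fetchLayoutForVariant(path, variantId || null);
}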

Static Site Generation (SSG) Personalization Flow

The personalization process for pages built with SSG differs slightly from SSR to gain performance benefits by leveraging pre-generated HTML:

  • Initial Request: When a visitor requests an SSG page, the Next.js application receives the standard HTTP request details (cookies, headers).
  • Pre-Render Check: The Personalize middleware first checks its cache to see if static HTML variants have already been pre-rendered for this page. If a static variant is found, the middleware skips the initial API query to Experience Edge, speeding up the process. If no static variants are found, the process falls back to the standard SSR flow to dynamically fetch personalized variants from the CMS/Edge.
  • Audience Decision: Assuming variants exist, the middleware sends a request to Personalize (via the Cloud SDK) to identify which audience the current visitor belongs to.
  • Delivery:
    • If the visitor matches an audience and that variant was already pre-generated, the middleware simply returns the cached static HTML for the personalized variant immediately.
    • If the visitor matches an audience but the static variant has not yet been built, the Next.js application generates the page variant HTML on the fly and then caches that output to serve as a static asset for all future visitors belonging to that same audience.
    • If the visitor does not match any defined audience, the generic default static HTML page is returned.

This method allows high-traffic personalized pages to benefit from the speed of static hosting after the first request generates the variant HTML.

Wrapping Up

Bringing Sitecore XM Cloud and Sitecore Personalize together creates a powerful framework for delivering experiences that adapt in real time. While XM Cloud manages structured content and headless delivery, Personalize adds a decisioning layer that evaluates context, behaviors, and data signals to tailor each interaction.

This integration not only extends the personalization capabilities of XM Cloud beyond static rules but also enables continuous testing, optimization, and experimentation. For teams building modern digital experiences, this approach provides the agility to serve relevant, data-driven content at scale – while maintaining the flexibility of a cloud-native, headless architecture.

In my next blog, I’ll walk through creating custom conditions in Sitecore Personalize, so you can define personalization logic that truly aligns with your unique business needs.

 

Node.js vs PHP, Which one is better? https://blogs.perficient.com/2025/10/31/node-js-vs-php-which-one-is-better/ https://blogs.perficient.com/2025/10/31/node-js-vs-php-which-one-is-better/#respond Fri, 31 Oct 2025 10:39:08 +0000 https://blogs.perficient.com/?p=388128

In the world of server-side scripting, two heavyweight contenders keep reappearing in discussions, RFPs, and code reviews: Node.js and PHP. This article dives into a clear, pragmatic comparison for developers and technical leads who need to decide which stack best fits a given project. Think of it as a blunt, slightly witty guide that respects both the history and the present-day realities of server-side development.

Background and History

PHP began as a personal project in the mid-1990s and evolved into a dominant server-side language for the web. Its philosophy centered on simplicity and rapid development for dynamic websites. Node.js, introduced in 2009, brought JavaScript to the server, leveraging the event-driven, non-blocking I/O model that underpins modern asynchronous web applications. The contrast is telling: PHP grew out of the traditional request‑response cycle, while Node.js grew out of the need for scalable, event-oriented servers.

Today, both technologies are mature, with active ecosystems and broad hosting support. The choice often comes down to project requirements, team expertise, and architectural goals.

Performance and Concurrency

Node.js shines in scenarios that require high concurrency with many I/O-bound operations. Its single-threaded event loop can handle numerous connections efficiently, provided you design for non-blocking I/O.
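
The difference is easiest to see in a tiny example. The snippet below uses Node’s standard fs module (the file name is a placeholder) to contrast a blocking read, which stalls the event loop, with a non-blocking read, which lets the loop keep serving other connections:

const fs = require('fs');

// Blocking: the event loop stalls until the entire file is read,
// so no other request is served in the meantime.
const dataSync = fs.readFileSync('report.csv', 'utf8'); // 'report.csv' is a placeholder
console.log('sync read finished, length:', dataSync.length);

// Non-blocking: the read is handed off to the thread pool and the
// event loop keeps handling other connections until the callback fires.
fs.readFile('report.csv', 'utf8', (err, data) => {
  if (err) return console.error('read failed:', err);
  console.log('async read finished, length:', data.length);
});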

PHP’s traditional model is process-per-request (or thread-per-request) in its common web server setups; each request runs in its own isolated process or thread. Modern PHP runtimes and frameworks offer asynchronous capabilities and improved performance, but Node.js tends to be more naturally aligned with non-blocking patterns.

Important takeaway: for CPU-intensive tasks, Node.js can struggle without worker threads (see the sketch below) or offloading to services. PHP can be equally challenged by long-running tasks unless you use appropriate background processing (e.g., queues, workers) or switch to other runtimes.
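
On the Node.js side, the built-in worker_threads module is one way to move CPU-heavy work off the event loop. The sketch below uses the single-file pattern (the same script acts as main thread and worker); the Fibonacci function is just a stand-in for any expensive computation:

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: spawn a worker and stay free to serve requests in the meantime.
  const worker = new Worker(__filename, { workerData: { n: 40 } });
  worker.on('message', (result) => console.log('fib(40) =', result));
  worker.on('error', (err) => console.error('worker failed:', err));
} else {
  // Worker thread: deliberately CPU-heavy recursion as a stand-in for real work.
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData.n));
}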

Brief benchmark explanation: consider latency under high concurrent requests and throughput (requests per second). Node.js often maintains steady latency under many simultaneous I/O operations, while PHP tends to perform robustly for classic request/response workloads. Real-world results depend on code quality, database access patterns, and server configuration.

Ecosystem and Package Managers

Node.js features npm (and yarn/pnpm) with a vast, fast-growing ecosystem. Packages range from web frameworks like Express and Fastify to tooling for testing, deployment, and microservices.

PHP’s ecosystem centers around Composer as its package manager, with Laravel, Symfony, and WordPress shaping modern PHP development. Both ecosystems offer mature libraries, but the Node.js ecosystem tends to emphasize modularity and microservice-ready tooling, while PHP communities often emphasize rapid web application development with integrated frameworks.

Development Experience and Learning Curve

Node.js appeals to front-end developers who already speak JavaScript. A unified language stack can reduce cognitive load and speed up onboarding. Its asynchronous style, however, can introduce complexity for beginners (callbacks, promises, async/await).

PHP, by contrast, has a gentler entry path for many developers. Modern PHP with frameworks emphasizes clear MVC patterns, readable syntax, and synchronous execution that aligns with many developers’ mental models.

Recommendation: if your team is JS-fluent and you’re building highly interactive, I/O-bound services, Node.js is compelling. If you need rapid server-side web development with minimal context switching and a stable, synchronous approach, PHP remains a solid choice.

Tooling and Deployment

Deployment models for Node.js often leverage containerization, orchestration (Kubernetes), and serverless options. The lightweight, event-driven nature of Node.js fits microservices and API gateways well.

PHP deployment typically benefits from proven traditional hosting stacks (LAMP/LEMP) or modern containerized approaches. Frameworks like Laravel add modern tooling—routing, queues, events, and packaging—that pair nicely with robust deployment pipelines.

Security Considerations

Security is not tied to the language alone but to the ecosystem, coding practices, and configuration. Node.js projects must guard against prototype pollution, dependency vulnerabilities, and insecure defaults in npm packages.

PHP projects should be mindful of input validation, dependency integrity, and keeping frameworks up to date. In both ecosystems, employing a secure development lifecycle, dependency auditing, and automated tests is essential.

Scalability and Architecture Patterns

Node.js is often favored for horizontal scaling, stateless services, and API-driven architectures. Microservices, edge functions, and real-time features align well with Node.js’s strengths.

PHP-based architectures commonly leverage stateless app servers behind load balancers, with robust support for queues and background processing via workers. For long-running tasks and heavy CPU work, both stacks perform best when using dedicated services or offloading workloads to separate workers or service layers.

Typical Use Cases

  • Node.js: highly concurrent APIs, real-time applications, microservices, serverless functions, and streaming services.
  • PHP: traditional web applications with rapid development cycles, CMS-backed sites, monolithic apps, and projects with established PHP expertise.

Cost and Hosting Considerations

Both ecosystems offer broad hosting options. Node.js environments may incur slightly higher operational complexity in some managed hosting scenarios, but modern cloud providers offer scalable, cost-effective solutions for containerized or serverless Node.js apps.

PHP hosting is widely supported, often with economical LAMP/LEMP stacks. Total cost of ownership hinges on compute requirements, maintenance overhead, and the sophistication of deployment automation.

Developer Productivity

Productivity benefits come from language familiarity, tooling quality, and ecosystem maturity. Node.js tends to accelerate frontend-backend collaboration due to shared JavaScript fluency and a rich set of development tools.

PHP offers productivity through mature frameworks, extensive documentation, and a strong pool of experienced developers. The right choice depends on your teams’ strengths and project goals.

Community and Long-Term Viability

Both Node.js and PHP have vibrant communities and long-standing track records. Node.js maintains robust corporate backing, broad adoption in modern stacks, and a continuous stream of innovations. PHP remains deeply entrenched in the web with steady updates and widespread usage across many domains. For sustainability, prefer active maintenance, regular security updates, and a healthy ecosystem of plugins and libraries.

Pros and Cons Summary

  • Node.js Pros: excellent for high-concurrency I/O, single language across stack, strong ecosystem for APIs and microservices, good for real-time features.
  • Node.js Cons: can be challenging for CPU-heavy tasks, callback complexity (mitigated by async/await and worker threads).
  • PHP Pros: rapid web development with mature frameworks, straightforward traditional hosting, stable performance for typical web apps.
  • PHP Cons: historically synchronous model may feel limiting for highly concurrent workloads, ecosystem fragmentation in some areas.

Recommendation Guidance Based on Project Type

Choose Node.js when building highly scalable APIs, real-time features, or microservices that demand non-blocking I/O and a unified JavaScript stack.

Choose PHP when you need rapid development of traditional web applications, rely on established CMS ecosystems, or have teams with deep PHP expertise.

Hybrid approaches are also common: use Node.js for specific microservices and PHP for monolithic web interfaces, integrating through well-defined APIs.

Conclusion

Node.js and PHP each have a well-earned place in modern software architecture. The right choice isn’t a dogmatic rule but a thoughtful alignment of project goals, team capabilities, and operational realities. As teams grow and requirements evolve, a pragmatic blend—leveraging Node.js for scalable services and PHP for dependable, rapid web delivery—often yields the best of both worlds. With disciplined development practices and modern tooling, you can build resilient, maintainable systems regardless of the core language you choose.

Code Snippets: Simple HTTP Server

// Node.js: simple HTTP server
const http = require('http');
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello from Node.js server!\n');
});

server.listen(port, () => {
  console.log(`Node.js server running at http://localhost:${port}/`);
});

 

PHP (built-in server):

<?php
// PHP: simple HTTP server (CLI)
// save as server.php and run: php -S localhost:8080
echo "Hello from PHP server!\n";
?>

Note: In production, prefer robust frameworks and production-grade servers (e.g., Nginx + PHP-FPM, or Node.js with a process manager and reverse proxy).

Building for Humans – Even When Using AI https://blogs.perficient.com/2025/10/29/building-for-humans-even-when-using-ai/ https://blogs.perficient.com/2025/10/29/building-for-humans-even-when-using-ai/#comments Thu, 30 Oct 2025 01:03:55 +0000 https://blogs.perficient.com/?p=388108

Artificial Intelligence (AI) is everywhere. Every month brings new features promising “deeper thinking” and “agentic processes.” Tech titans are locked in trillion-dollar battles. Headlines scream about business, economic, and societal concerns. Skim the news and you’re left excited and terrified!

Here’s the thing: we’re still human – virtues, flaws, quirks, and all. We’ve always had our agency, collectively shaping our future. Even now, while embracing AI, we need to keep building for us.

We Fear What We Do Not Know

“AI this… AI that…” Even tech leaders admit they don’t fully understand it. Sci-fi stories warn us with cautionary tales. News cycles fuel anxiety about job loss, disconnected human relationships, and cognitive decline.

Luckily, this round of innovation is surprisingly transparent. You can read the Attention is All You Need paper (2017) that started it all. You can even build your own AI if you want! This isn’t locked behind a walled garden. That’s a good thing.

What the Past Can Tell Us

I like to look at the past to gauge what we can expect from the future. Humans have feared every major invention and technological breakthrough. We expect the worst, but most have proven to improve life.

We’ve always had distractions, from books, movies, and games to TikTok brain-rot. Some get addicted and go too deep, while others thrive. People favor entertainment and leisure activities – this is nothing new – so I don’t feel like cognitive decline is anything to worry about. Humanity has overcome all of it before and will continue to do so.

 


 

Humans are Simple (and Complicated) Creatures

We look for simplicity and speed. Easy to understand, easy to look at, easy to interact with, easy to buy from. We skim read, we skip video segments, we miss that big red CTA button. The TL;DR culture rules. Even so, I don’t think we’re at risk of the future from Idiocracy (2006).

That’s not to say that we don’t overcomplicate things. The Gods Must Be Crazy movie (1980) has a line that resonates, “The more [we] improved [our] surroundings to make life easier, the more complicated [we] made it.” We bury our users (our customers) in detail when they just want to skim, skip, and bounce.

Building for Computers

The computer revolution (1950s-1980s) started with machines serving humans. Then came automation. And eventually, systems talking to systems.

Fast-forward to the 2010s, where marketers gamed the algorithms to win at SEO, SEM, and social networking. Content was created for computers, not humans. Now we have the dead internet theory. We were building without humans in mind.

We will still have to build for systems to talk to systems. That won’t change. APIs are more important than ever, and agentic AI relies on them. Because of this, it is crucial to make sure what you are building “plays well with others”. But AIs and APIs are tools, not the audience.

Building for Humans

Google used to tell us all to build what people want, as opposed to gaming their systems. I love that advice. However, at first it felt unrealistic…gaming the system worked. Then after many updates, for a short bit, it felt like Google was getting there! Then it got worse and feels like pay-to-play recently.

Now AI is reshaping search and everything else. You can notice the gap between search results and AI recommendations. They don’t match. AI assistants aim to please humans, which is great, until it inevitably changes.

Digital teams must build for AI ingestion, but if you neglect the human aspect and the end user experience, then you will only see short-term wins.

Examples of Building for Humans

  • Make it intuitive and easy. Simple for end users means a lot of work for builders, but it is worth it! Reduce their cognitive load.
  • Build with empathy. Appeal to real people, not just personas and bots. Include feedback loops so they can feel heard.
  • Get to the point. Don’t overwhelm users, instead help them take action! Delight your customers by saving them time.
  • Add humor when appropriate. Don’t be afraid to be funny, weird, or real…it connects on a human level.
  • Consider human bias. Unlike bots and crawlers, humans aren’t always logical. Design for human biases.
  • Watch your users. Focus groups or digital tracking tools are great for observing. Learn from real users and iterate.

Conclusion

Building for humans never goes out of style. Whatever comes after AI will still need to serve people. So as tech evolves, let’s keep honing systems that work with and around our human nature.

……

If you are looking for that extra human touch (built with AI), reach out to your Perficient account manager or use our contact form to begin a conversation.

Perficient Honored as Organization of the Year for Cloud Computing https://blogs.perficient.com/2025/10/28/perficient-honored-as-stratus-organization-of-the-year-for-cloud-computing/ https://blogs.perficient.com/2025/10/28/perficient-honored-as-stratus-organization-of-the-year-for-cloud-computing/#respond Tue, 28 Oct 2025 20:43:03 +0000 https://blogs.perficient.com/?p=388091

Perficient has been named Cloud Computing Organization of the Year by the 2025 Stratus Awards, presented by the Business Intelligence Group. This prestigious recognition celebrates our leadership in cloud innovation and the incredible work of our entire Cloud team.

Now in its 12th year, the Stratus Awards honor the companies, products, and individuals that are reshaping the digital frontier. This year’s winners are leading the way in cloud innovation across AI, cybersecurity, sustainability, scalability, and service delivery — and we’re proud to be among them.

“Cloud computing is the foundation of today’s most disruptive technologies,” said Russ Fordyce, Chief Recognition Officer of the Business Intelligence Group. “The 2025 Stratus Award winners exemplify how cloud innovation can drive competitive advantage, customer success and global impact.”

This award is a direct reflection of the passion, expertise, and dedication of our Cloud team — a group of talented professionals who consistently deliver transformative solutions for our clients. From strategy and migration to integration and acceleration, their work is driving real business outcomes and helping organizations thrive in an AI-forward world.

We’re honored to receive this recognition and remain committed to pushing the boundaries of what’s possible in the cloud with AI.

Read more about our Cloud Practice.

Executing a Sitecore Migration: Development, Performance, and Beyond https://blogs.perficient.com/2025/10/28/executing-a-sitecore-migration-development/ https://blogs.perficient.com/2025/10/28/executing-a-sitecore-migration-development/#respond Tue, 28 Oct 2025 12:23:25 +0000 https://blogs.perficient.com/?p=388061

In the previous blog, we explored the strategic and architectural considerations that set the foundation for a successful Sitecore migration. Once the groundwork is ready, it’s time to move from planning to execution, where the real complexity begins. The development phase of a Sitecore migration demands precision, speed, and scalability. From choosing the right development environment and branching strategy to optimizing templates, caching, and performance, every decision directly impacts the stability and maintainability of your new platform.

This blog dives into the practical side of migration, covering setup best practices, developer tooling (IDE and CI/CD), coding standards, content model alignment, and performance tuning techniques to help ensure that your transition to Sitecore’s modern architecture is both seamless and future-ready.

 

1. Component and Code Standards Over Blind Reuse

In any Sitecore migration, one of the biggest mistakes teams make is lifting and shifting old components into the new environment. While this may feel faster in the short term, it creates long-term problems:

  • Missed product offerings: Old components were often built around constraints of an earlier Sitecore version. Reusing them as-is means you can’t take advantage of new product features like improved personalization, headless capabilities, SaaS integrations, and modern analytics.
  • Outdated standards: Legacy code usually does not meet current coding, security, and performance standards. This can introduce vulnerabilities and inefficiencies into your new platform.
  • Accessibility gaps: Many older components don’t align with WCAG and ADA accessibility standards — missing ARIA roles, semantic HTML, or proper alt text. Reusing them will carry accessibility debt into your fresh build.
  • Maintainability issues: Old code often has tight coupling, minimal test coverage, and obsolete dependencies. Keeping it will slow down future upgrades and maintenance.

Best practice: Treat the migration as an opportunity to raise your standards. Audit old components for patterns and ideas, but don’t copy-paste them. Rebuild them using modern frameworks, Sitecore best practices, security guidelines, and accessibility compliance. This ensures the new solution is future-proof and aligned with the latest Sitecore roadmap.

 

2. Template Creation and Best Practices

Templates define the foundation of your content structure, so designing them carefully is critical.
  • Analyze before creating: Study existing data models, pages, and business requirements before building templates.
  • Use base templates: Group common fields (e.g., Meta, SEO, audit info) into base templates and reuse them across multiple content types.
  • Leverage branch templates: Standardize complex structures (like a landing page with modules) by creating branch templates for consistency and speed.
  • Follow naming and hierarchy conventions: Clear naming and logical organization make maintenance much easier.

 

3. Development Practices and Tools

A clean, standards-driven development process ensures the migration is efficient, maintainable, and future-proof. It’s not just about using the right IDEs but also about building code that is consistent, compliant, and friendly for content authors.

  • IDEs & Tools
    • Use Visual Studio or VS Code with Sitecore- and frontend-specific extensions for productivity.
    • Set up linting, code analysis, and formatting tools (ESLint, Prettier in case of JSS code, StyleCop) to enforce consistency.
    • Use AI assistance (GitHub Copilot, Codeium, etc.) to speed up development, but always review outputs for compliance and quality. Many AI tools on the market can even convert designs or prototypes into code in your chosen language.
  • Coding Standards & Governance
    • Follow SOLID principles and keep components modular and reusable.
    • Ensure secure coding standards: sanitize inputs, validate data, avoid secrets in code.
    • Write accessible code: semantic HTML, proper ARIA roles, alt text, and keyboard navigation.
    • Document best practices and enforce them with pull request reviews and automated checks.
  • Package & Dependency Management
    • Select npm/.NET packages carefully: prefer well-maintained, community-backed, and security-reviewed ones.
    • Avoid large, unnecessary dependencies that bloat the project.
    • Run dependency scanning tools to catch vulnerabilities.
    •  Keep lockfiles for environment consistency.
  • Rendering Variants & Parameters
    • Leverage rendering variants (SXA/headless) to give flexibility without requiring code changes.
    • Add parameters so content authors can adjust layouts, backgrounds, or alignment safely.
    • Always provide sensible defaults to protect design consistency.
  • Content Author Experience

Build with the content author in mind:

    • Use clear, meaningful field names and help text.
    • Avoid unnecessary complexity: fewer, well-designed fields are better.
    • Create modular components that authors can configure and reuse.
    • Validate with content author UAT to ensure the system is intuitive for day-to-day content updates.

Strong development practices not only speed up migration but also set the stage for easier maintenance, happier authors, and a longer-lasting Sitecore solution.

 

4. Data Migration & Validation

Migrating data is not just about “moving items.” It’s about translating old content into a new structure that aligns with modern Sitecore best practices.

  • Migration Tools: Sitecore provides migration tools to move data, such as from XM to XM Cloud. Leverage these tools for content that needs to be copied as-is.
  • PowerShell for Migration
    • Use Sitecore PowerShell Extensions (SPE) to script the migration of data from the old system that cannot be moved as-is and instead must be mapped to different locations and fields in the new solution.
    • Automate bulk operations like item creation, field population, media linking, and handling of multiple language versions.
    • PowerShell scripts can be run iteratively, making them ideal as content continues to change during development.
    • Always include logging and reporting so migrated items can be tracked, validated, and corrected if needed.
  • Migration Best Practices
    • Field Mapping First: Analyze old templates and decide what maps directly, what needs transformation, and what should be deprecated.
    • Iterative Migration: Run migration scripts in stages, validate results, and refine before final cutover.
    • Content Cleanup: Remove outdated, duplicate, or unused content instead of carrying it forward.
    • SEO Awareness: Ensure titles, descriptions, alt text, and canonical fields are migrated correctly.
    • Audit & Validation:
      • Use PowerShell reports to check item counts, empty fields, or broken links.
      • Crawl both old and new sites with tools like Screaming Frog to compare URLs, metadata, and page structures.

 

5. SEO Data Handling

SEO is one of the most critical success factors in any migration — if it’s missed, rankings and traffic can drop overnight.

  • Metadata: Preserve titles, descriptions, alt text, and Open Graph tags. Missing these leads to immediate SEO losses.
  • Redirects: Map old URLs with 301 redirects (avoid chains); a sketch follows this list. Broken redirects = lost link equity.
  • Structured Data: Add/update schema (FAQ, Product, Article, VideoObject). This improves visibility in SERPs and AI-generated results.
  • Core Web Vitals: Ensure the new site is fast, stable, and mobile-first. Poor performance = lower rankings.
  • Emerging SEO: Optimize for AI/Answer Engine results, focus on E-E-A-T (author, trust, freshness), and create natural Q&A content for voice/conversational search.
  • Validation: Crawl the site before and after migration with tools like Screaming Frog or Siteimprove to confirm nothing is missed.
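
As an illustration of the redirect mapping above, a Next.js rendering host can declare permanent redirects in next.config.js (Sitecore redirect items or edge-level rules are equally valid options). The source and destination paths below are placeholders, not taken from any real migration:

// next.config.js (sketch): permanent redirects for retired URLs.
// Next.js responds with a 308, which search engines treat like a 301.
module.exports = {
  async redirects() {
    return [
      {
        source: '/old-products/:slug',   // legacy URL pattern (placeholder)
        destination: '/products/:slug',  // new information architecture (placeholder)
        permanent: true,
      },
      {
        source: '/about-us.aspx',
        destination: '/about',
        permanent: true,
      },
    ];
  },
};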

Strong SEO handling ensures the new Sitecore build doesn’t just look modern — it retains rankings, grows traffic, and is ready for AI-powered search.

 

6. Serialization & Item Deployment

Serialization is at the heart of a smooth migration and ongoing Sitecore development. Without the right approach, environments drift, unexpected items get deployed, or critical templates are missed.

  • ✅ Best Practices
    • Choose the Right Tool: Sitecore Content Serialization (SCS), Unicorn, or TDS — select based on your project needs.
    • Scope Carefully: Serialize only what is required (templates, renderings, branches, base content). Avoid unnecessary content items.
    • Organize by Modules: Structure serialization so items are grouped logically (feature, foundation, project layers). This keeps deployments clean and modular.
    • Version Control: Store serialization files in source control (Git/Azure DevOps) to track changes and allow safe rollbacks.
    • Environment Consistency: Automate deployment pipelines so serialized items are promoted consistently from dev → QA → UAT → Prod.
    • Validation: Always test deployments in lower environments first to ensure no accidental overwrites or missing dependencies.

Properly managed serialization ensures clean deployments, consistent environments, and fewer surprises during migration and beyond.

 

7. Forms & Submissions

In Sitecore XM Cloud, forms require careful planning to ensure smooth data capture and migration.

  •  XM Cloud Forms (Webhook-based): Submit form data via webhooks to CRM, backend, or marketing platforms. Configure payloads properly and ensure validation, spam protection, and compliance.
  • Third-Party Forms: HubSpot, Marketo, Salesforce, etc., can be integrated via APIs for advanced workflows, analytics, and CRM connectivity.
  • Create New Forms: Rebuild forms with modern UX, accessibility, and responsive design.
  • Migrate Old Submission Data: Extract and import previous form submissions into the new system or CRM, keeping field mapping and timestamps intact.
  • ✅ Best Practices: Track submissions in analytics, test end-to-end, and make forms configurable for content authors.

This approach ensures new forms work seamlessly while historical data is preserved.
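
For the webhook-based approach described above, here is a minimal sketch of a Next.js API route that validates a submission and forwards it to a downstream system. The environment variable (CRM_WEBHOOK_URL) and field names (email, message) are placeholder assumptions, not part of the XM Cloud Forms contract:

// pages/api/form-submit.js (sketch): receive a form payload and forward it to a CRM.
export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  const { email, message } = req.body || {};
  if (!email || !/^\S+@\S+\.\S+$/.test(email)) {
    return res.status(400).json({ error: 'A valid email is required' }); // basic validation
  }

  // Forward the submission to the downstream system (CRM / marketing platform).
  const crmResponse = await fetch(process.env.CRM_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, message, submittedAt: new Date().toISOString() }),
  });

  if (!crmResponse.ok) {
    return res.status(502).json({ error: 'Downstream system rejected the submission' });
  }
  return res.status(200).json({ ok: true });
}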

 

8. Personalization & Experimentation

Migrating personalization and experimentation requires careful planning to preserve engagement and insights.

  • Export & Rebuild: Export existing rules, personas, and goals. Review them thoroughly and recreate only what aligns with current business requirements.
  • A/B Testing: Identify active experiments, migrate if relevant, and rerun them in the new environment to validate performance.
  • Sitecore Personalize Implementation:
    • Plan data flow into the CDP and configure event tracking.
    • Implement personalization via Sitecore Personalize using the Cloud SDK or the Engage SDK for the XM Cloud implementation, depending on requirements.

✅ Best Practices:

  • Ensure content authors can manage personalization rules and experiments without developer intervention.
  • Test personalized experiences end-to-end and monitor KPIs post-migration.

A structured approach to personalization ensures targeted experiences, actionable insights, and a smooth transition to the new Sitecore environment.

 

9. Accessibility

Ensuring accessibility is essential for compliance, usability, and SEO.

  • Follow WCAG standards: proper color contrast, semantic HTML, ARIA roles, and keyboard navigation.
  • Validate content with accessibility tools and manual checks before migration cutover.
  • Accessible components improve user experience for all audiences and reduce legal risk.

 

10. Performance, Caching & Lazy Loading

Optimizing performance is critical during a migration to ensure fast page loads, better user experience, and improved SEO.

  • Caching Strategies:
    • Use Sitecore output caching and data caching for frequently accessed components.
    • Implement CDN caching for media assets to reduce server load and improve global performance.
    • Apply cache invalidation rules carefully to avoid stale content.
  • Lazy Loading:
    • Load images, videos, and heavy components only when they enter the viewport.
    • Improves perceived page speed and reduces initial payload (a sketch follows this list).
  • Performance Best Practices:
    • Optimize images and media (WebP/AVIF).
    • Minimize JavaScript and CSS bundle size, and use tree-shaking where possible.
    • Monitor Core Web Vitals (LCP, CLS, FID) post-migration.
    • Test performance across devices and regions before go-live.
  • Content Author Considerations:
    • Ensure caching and lazy loading do not break dynamic components or personalization.
    • Provide guidance to authors on content that might impact performance (e.g., large images or embeds).
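
To illustrate the lazy-loading techniques above, for example on a Next.js rendering host, the sketch below combines native image lazy loading with next/dynamic for deferring a heavy component. The component and asset names are placeholders:

// Sketch: defer a heavy, below-the-fold component and lazy-load an image.
import dynamic from 'next/dynamic';

// The component's JavaScript is only loaded when it is actually rendered on the client.
const HeavyCarousel = dynamic(() => import('../components/HeavyCarousel'), {
  ssr: false,
  loading: () => <p>Loading…</p>,
});

export default function LandingSection() {
  return (
    <section>
      {/* Native lazy loading: the browser fetches the image only as it nears the viewport. */}
      <img src="/media/hero-secondary.webp" alt="Product overview" loading="lazy" width="800" height="450" />
      <HeavyCarousel />
    </section>
  );
}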

Proper caching and lazy loading ensure a fast, responsive, and scalable Sitecore experience, preserving SEO and user satisfaction after migration.

 

11. CI/CD, Monitoring & Automated Testing

A well-defined deployment and monitoring strategy ensures reliability, faster releases, and smooth migrations.

  • CI/CD Pipelines:
    • Set up automated builds and deployments according to your hosting platform: Azure, Vercel, Netlify, or on-premise.
    • Ensure deployments promote items consistently across Dev → QA → UAT → Prod.
    • Include code linting, static analysis, and unit/integration tests in the pipeline.
  • Monitoring & Alerts:
    • Track website uptime, server health, and performance metrics.
    • Configure timely alerts for downtime or abnormal behavior to prevent business impact.
  • Automated Testing (a sketch follows this list):
    • Implement end-to-end, regression, and smoke tests for different environments.
    • Include automated validation for content, forms, personalization, and integrations.
    • Integrate testing into CI/CD pipelines to catch issues early.
  • ✅ Best Practices:
    • Ensure environment consistency to prevent drift.
    • Use logs and dashboards for real-time monitoring.
    • Align testing and deployment strategy with business-critical flows.
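
As one possible shape for the automated checks above, the sketch below uses Playwright Test for a post-deployment smoke test. The base URL, expected elements, and legacy path are placeholder assumptions:

// smoke.spec.js (sketch): post-deployment smoke tests with @playwright/test.
const { test, expect } = require('@playwright/test');

const BASE_URL = process.env.BASE_URL || 'https://qa.example.com'; // placeholder

test('home page renders successfully', async ({ page }) => {
  const response = await page.goto(BASE_URL);
  expect(response?.ok()).toBeTruthy();                      // no 4xx/5xx on the entry page
  await expect(page.locator('h1').first()).toBeVisible();   // primary heading is rendered
});

test('legacy URL issues a permanent redirect', async ({ request }) => {
  const response = await request.get(`${BASE_URL}/about-us.aspx`, { maxRedirects: 0 });
  expect([301, 308]).toContain(response.status());          // redirect map still in place
});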

A robust CI/CD, monitoring, and automated testing strategy ensures reliable deployments, reduced downtime, and faster feedback cycles across all environments.

 

12. Governance, Licensing & Cutover

A successful migration is not just technical — it requires planning, training, and governance to ensure smooth adoption and compliance.

  • License Validation: Compare the current Sitecore license with what the new setup requires. Ensure coverage for all modules and environments, and validate that users and roles are granted the correct rights.
  • Content Author & Marketer Readiness:
    • Train teams on the new workflows, tools, and interface.
    • Provide documentation, demos, and sandbox environments to accelerate adoption.
  • Backup & Disaster Recovery:
    • Plan regular backups and ensure recovery procedures are tested.
    • Define RTO (Recovery Time Objective) and RPO (Recovery Point Objective) for critical data.
  • Workflow, Roles & Permissions:
    • Recreate workflows, roles, and permissions in the new environment.
    • Implement custom workflows if required.
    • Governance gaps can lead to compliance and security risks — audit thoroughly.
  • Cutover & Post-Go-Live Support:
    • Plan the migration cutover carefully to minimize downtime.
    • Prepare a support plan for immediate issue resolution after go-live.
    • Monitor KPIs, SEO, forms, personalization, and integrations to ensure smooth operation.

Proper governance, training, and cutover planning ensures the new Sitecore environment is compliant, adopted by users, and fully operational from day one.

 

13. Training & Documentation

Proper training ensures smooth adoption and reduces post-migration support issues.

  • Content Authors & Marketers: Train on new workflows, forms, personalization, and content editing.
  • Developers & IT Teams: Provide guidance on deployment processes, CI/CD, and monitoring.
  • Documentation: Maintain runbooks, SOPs, and troubleshooting guides for ongoing operations.
  • Encourage hands-on sessions and sandbox practice to accelerate adoption.

 

Summary:

Sitecore migrations are complex, and success often depends on the small decisions made throughout development, performance tuning, SEO handling, and governance. This blog brings together practical approaches and lessons learned from real-world implementations — aiming to help teams build scalable, accessible, and future-ready Sitecore solutions.

While every project is different, the hope is that these shared practices offer a useful starting point for others navigating similar journeys. The Sitecore ecosystem continues to evolve, and so do the ways we build within it.

 

Spring Boot + OpenAI : A Developer’s Guide to Generative AI Integration https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/ https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/#respond Mon, 27 Oct 2025 08:02:27 +0000 https://blogs.perficient.com/?p=387157

Introduction

In this blog, we’ll explore how to connect OpenAI’s API with a Spring Boot application, step by step.

We’ll cover the setup process and walk through the implementation with a practical example.

By integrating OpenAI with Spring Boot, you can create solutions that are not only powerful but also scalable and reliable.

Prerequisites

  • Java 17+
  • Maven
  • Spring Boot (3.x recommended)
  • OpenAI API Key (get it from platform.openai.com)
  • Basic knowledge of REST APIs

OpenAI’s platform documentation helps developers understand how to prompt a model to generate meaningful text. It’s basically a cheat sheet for how to communicate with AI so it gives you smart and useful answers.

Implementation in Spring Boot

To integrate OpenAI’s GPT-4o-mini model into a Spring Boot application, we analyzed the structure of a typical curl request and response provided by OpenAI.

API docs reference:

https://platform.openai.com/docs/overview

https://docs.spring.io/spring-boot/index.html

Curl Request

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "assistant", "content": "Hello"},
      {"role": "user", "content": "Hi"}
    ]
  }'

Note:

“role”: “user” – Represents the end user interacting with the assistant.

“role”: “assistant” – Represents the assistant’s response.

The response generated by the model looks like this:

{
  "id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
  "object": "chat.completion",
  "created": 1741569952,
  "model": "gpt-4o-mini-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 10,
    "total_tokens": 29,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default"
}

 

Controller Class:

In the snippet below, we explore a simple Spring Boot controller that interacts with OpenAI’s API. When the end user sends a prompt to the URL (e.g., /bot/chat?prompt=what is spring boot), the controller reads the model name and API URL from the application.properties file. It then creates a request using the provided prompt and sends it to OpenAI via a REST call (RestTemplate). After validating the request, OpenAI sends back a response.

@RestController
@RequestMapping("/bot")
public class GenAiController {

    @Value("${openai.model}")
    private String model;

    @Value("${openai.api.url}")
    private String apiURL;

    @Autowired
    private RestTemplate template;

    @GetMapping("/chat")
    public String chat(@RequestParam("prompt") String prompt) {
        // Build the request body with the configured model and the user's prompt
        GenAiRequest request = new GenAiRequest(model, prompt);
        System.out.println("Request: " + request);
        // Call the OpenAI chat completions endpoint and map the JSON response
        GenAIResponse genAIResponse = template.postForObject(apiURL, request, GenAIResponse.class);
        return genAIResponse.getChoices().get(0).getMessage().getContent();
    }
}

 

Configuration Class:

Annotated with @Configuration, this class defines beans and settings for the application context. It pulls the OpenAI API key from the properties file and creates a customized RestTemplate that adds the Authorization: Bearer <API_KEY> header to every request. This setup ensures that every call to OpenAI’s API is authenticated without manually adding headers to each request.

@Configuration
public class OpenAIAPIConfiguration {

    @Value("${openai.api.key}")
     private String openaiApiKey;

    @Bean
    public RestTemplate template() {
        RestTemplate restTemplate = new RestTemplate();
        // Interceptor adds the Authorization header to every outgoing request
        restTemplate.getInterceptors().add((request, body, execution) -> {
            request.getHeaders().add("Authorization", "Bearer " + openaiApiKey);
            return execution.execute(request, body);
        });
        return restTemplate;
    }
    
}

Required getters and setters for the request and response classes:

Based on the curl structure and response, we generated the corresponding request and response Java classes, with getters and setters for the selected attributes that represent the request and response objects. These classes let us turn JSON data into objects we can use in code, and turn our objects back into JSON when talking to the OpenAI API. We implemented a bot using the gpt-4o-mini model, integrated it with a REST controller, and handled authentication via the API key.

//Request
@Data
public class GenAiRequest {

    private String model;
    private List<GenAIMessage> messages;

    public List<GenAIMessage> getMessages() {
        return messages;
    }

    public GenAiRequest(String model, String prompt) {
        this.model = model;
        this.messages = new ArrayList<>();
        this.messages.add(new GenAIMessage("user",prompt));
    }
}

@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIMessage {

    private String role;
    private String content;   
    
    public String getContent() {
        return content;
    }
    public void setContent(String content) {
        this.content = content;
    }
}

//Response
@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIResponse {

    private List<Choice> choices;

    public List<Choice> getChoices() {
        return choices;
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public static class Choice {

        private int index;
        private GenAIMessage message;
        public GenAIMessage getMessage() {
            return message;
        }
        public void setMessage(GenAIMessage message) {
            this.message = message;
        }

    }

}

 

Essential Configuration for OpenAI Integration in Spring Boot

To connect your Spring Boot application with OpenAI’s API, you need to define a few key properties in your application.properties or application.yml file:

  • server.port: Specifies the port on which your Spring Boot application will run. You can set it to any available port like 8080, 9090, etc. (The default port for a Spring Boot application is 8080)
  • openai.model: Defines the OpenAI model to be used. In this case, gpt-4o-mini is selected for lightweight and efficient responses.
  • openai.api.key: Your secret API key from OpenAI. This is used to authenticate requests. Make sure to keep it secure and never expose it publicly.
  • openai.api.url: The endpoint URL for OpenAI’s chat completion API. (This is where your application sends prompts and receives responses)
server.port=<add server port>
openai.model=gpt-4o-mini
openai.api.key=XXXXXXXXXXXXXXXXXXXXXXXXXXXX
openai.api.url=https://api.openai.com/v1/chat/completions

 

Postman Collection:

GET API: http://localhost:<port>/bot/chat?prompt=What is spring boot used for?

Content-Type: application/json

(Screenshot: sample prompt request and response in Postman)
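For a quick check outside Postman, the same endpoint can also be hit with curl. The port and the model’s reply below are illustrative, not part of the original implementation:

curl "http://localhost:8080/bot/chat?prompt=What%20is%20Spring%20Boot%20used%20for"

Sample plain-text reply returned by the controller:

Spring Boot is used to build stand-alone, production-ready Spring applications with minimal configuration.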

Usage of Spring Boot + OpenAI Integration

  • AI-Powered Chatbots: Build intelligent assistants for customer support, internal helpdesks, or onboarding systems.
  • Content Generation Tools: Automate blog writing, email drafting, product descriptions, or documentation, and generate personalized content based on user input.
  • Code Assistance & Review: Create tools that help developers write, refactor, or review code using AI, and integrate with IDEs or CI/CD pipelines for smart suggestions.
  • Data Analysis & Insights: Use AI to interpret data, generate summaries, and answer questions about datasets; combine with Spring Boot APIs to serve insights to dashboards or reports.
  • Search Enhancement: Implement semantic search or question-answering systems over documents or databases, using embeddings and GPT to improve relevance and accuracy.
  • Learning & Training Platforms: Provide personalized tutoring, quizzes, and explanations using AI, and adapt content based on user performance and feedback.
  • Email & Communication Automation: Draft, summarize, or translate emails and messages, and integrate with enterprise communication tools.
  • Custom usages: In a business-to-business context, usage can be customized according to specific client requirements.
]]>
https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/feed/ 0 387157
Perficient Wins Silver w3 Award for AI Utility Integration https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/ https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/#respond Fri, 24 Oct 2025 15:49:49 +0000 https://blogs.perficient.com/?p=387677

We’re proud to announce that we’ve been honored with a Silver w3 Award in the Emerging Tech Features – AI Utility Integration category for our work with a top 20 U.S. utility provider. This recognition from the Academy of Interactive and Visual Arts (AIVA) celebrates our commitment to delivering cutting-edge, AI-powered solutions that drive real-world impact in the energy and utilities sector.

“Winning this w3 Award speaks to our pragmatism–striking the right balance between automation capabilities and delivering true business outcomes through purposeful AI adoption,” said Mwandama Mutanuka, Managing Director of Perficient’s Intelligent Automation practice. “Our approach focuses on understanding the true cost of ownership, evaluating our clients’ existing automation tech stack, and building solutions with a strong business case to drive impactful transformation.”

Modernizing Operations with AI

The award-winning solution centered on the implementation of a ServiceNow Virtual Agent to streamline internal service desk operations for a major utility provider serving millions of homes and businesses across the United States. Faced with long wait times and a high volume of repetitive service requests, the client sought a solution that would enhance productivity, reduce costs, and improve employee satisfaction.

Our experts delivered a two-phase strategy that began with deploying an out-of-the-box virtual agent capable of handling low-complexity, high-volume requests. We then customized the solution using ServiceNow’s Conversational Interfaces module, tailoring it to the organization’s unique needs through data-driven topic recommendations and user behavior analysis. The result was an intuitive, AI-powered experience that allowed employees and contractors to self-serve common IT requests, freeing up service desk agents to focus on more complex work and significantly improving operational efficiency.

Driving Adoption Through Strategic Change Management

Adoption is the key to unlocking the full value of any technology investment. That’s why our team partnered closely with the client’s corporate communications team to launch a robust change management program. We created a branded identity for the virtual agent, developed engaging training materials, and hosted town halls to build awareness and excitement across the organization. This holistic approach ensured high engagement and a smooth rollout, setting the foundation for long-term success.

Looking Ahead

The w3 Award is a reflection of our continued dedication to innovation, collaboration, and excellence. As we look to the future, we remain committed to helping enterprises across industries harness the full power of AI to transform their operations. Explore the full success story to learn more about how we’re powering productivity with AI, and visit the w3 Awards Winners Gallery to see our recognition among the best in digital innovation.

For more information on how Perficient can help your business with integrated AI services, contact us today.

]]>
https://blogs.perficient.com/2025/10/24/perficient-awarded-w3-award-for-ai-integration/feed/ 0 387677
See Perficient’s Amarender Peddamalku at the Microsoft 365, Power Platform & Copilot Conference https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/ https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/#respond Thu, 23 Oct 2025 17:35:19 +0000 https://blogs.perficient.com/?p=388040

As the year wraps up, so does an incredible run of conferences spotlighting the best in Microsoft 365, Power Platform, and Copilot innovation. We’re thrilled to share that Amarender Peddamalku, Microsoft MVP and Practice Lead for Microsoft Modern Work at Perficient, will be speaking at the Microsoft 365, Power Platform & Copilot Conference in Dallas, November 3–7.

Amarender has been a featured speaker at every TechCon365, DataCon, and PWRCon event this year—and Dallas marks the final stop on this year’s tour. If you’ve missed him before, now’s your chance to catch his insights live!

With over 15 years of experience in Microsoft technologies and a deep focus on Power Platform, SharePoint, and employee experience, Amarender brings practical, hands-on expertise to every session. Here’s where you can find him in Dallas:

Workshops & Sessions

  • Power Automate Bootcamp: From Basics to Brilliance
    Mon, Nov 3 | 9:00 AM – 5:00 PM | Room G6
    A full-day, hands-on workshop for Power Automate beginners.

 

  • Power Automate Multi-Stage Approval Workflows
    Tue, Nov 4 | 9:00 AM – 5:00 PM | Room G2
    Wed, Nov 5 | 3:50 PM – 5:00 PM | Room G6
    Learn how to build dynamic, enterprise-ready approval workflows.

 

  • Ask the Experts
    Wed, Nov 5 | 12:50 PM – 2:00 PM | Expo Hall
    Bring your questions and get real-time answers from Amarender and other experts.

 

  • Build External-Facing Websites Using Power Pages
    Thu, Nov 6 | 1:00 PM – 2:10 PM | Room D
    Discover how to create secure, low-code websites with Power Pages.

 

  • Automate Content Processing Using AI & SharePoint Premium
    Thu, Nov 6 | 4:20 PM – 5:30 PM | Room G6
    Explore how AI and SharePoint Premium (formerly Syntex) can transform content into knowledge.

 

Whether you’re just getting started with Power Platform or looking to scale your automation strategy, Amarender’s sessions will leave you inspired and equipped to take action.

Register now!

]]>
https://blogs.perficient.com/2025/10/23/see-perficients-amarender-peddamalku-at-the-microsoft-365-power-platform-copilot-conference/feed/ 0 388040
Perficient Wins Silver W3 Award for Mobile Innovation in Travel & Tourism https://blogs.perficient.com/2025/10/23/perficient-wins-silver-w3-award-for-mobile-innovation-in-travel-tourism/ https://blogs.perficient.com/2025/10/23/perficient-wins-silver-w3-award-for-mobile-innovation-in-travel-tourism/#respond Thu, 23 Oct 2025 15:35:54 +0000 https://blogs.perficient.com/?p=388024

We’re pleased that Perficient has been honored with a second award for our transformative work with a budget-friendly international airline. The Silver W3 Award in the Mobile Apps & Sites – Travel & Tourism category from the Academy of Interactive and Visual Arts (AIVA) celebrates our commitment to delivering exceptional digital experiences that drive real-world impact.

The W3 Awards, now in their 20th year, spotlight the best in digital creativity across websites, mobile apps, video, social media, and emerging tech. With thousands of global entries, only the top 20% earn Silver distinction—making this achievement especially meaningful.

A Budget-Friendly Airline, Reimagined

Our award-winning submission showcased how Perficient partnered with the international airline to modernize their digital experience and better serve budget-conscious travelers. The project focused on:

  • Enhancing mobile usability for travelers booking international flights
  • Streamlining the user journey from search to checkout
  • Improving accessibility and performance across devices

The result? A mobile experience that’s not only intuitive and visually engaging but also aligned with the organization’s mission to offer affordable travel without compromising quality. You can read the full success story here.

Celebrating Digital Excellence

The W3 Awards are judged by AIVA, a prestigious panel of experts from top-tier organizations including Disney, Netflix, Deloitte Digital, and IBM. Entries are evaluated against a standard of excellence, not each other, ensuring that every winner truly represents the best in their category.

Looking Ahead

This award is a testament to the talent and dedication of our teams who consistently push boundaries to deliver impactful digital solutions. We’re proud to be recognized among the industry’s top innovators and look forward to continuing our work with clients to elevate digital experiences across industries.

]]>
https://blogs.perficient.com/2025/10/23/perficient-wins-silver-w3-award-for-mobile-innovation-in-travel-tourism/feed/ 0 388024
Datadog Synthetic Monitoring Integration with Azure DevOps Pipeline for Sitecore https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/ https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/#respond Thu, 23 Oct 2025 15:35:10 +0000 https://blogs.perficient.com/?p=387828

Datadog Synthetic Monitoring provides automated, simulated user journeys to proactively confirm the health and performance of websites and APIs, helping detect issues before users experience them. Integrating this into our Azure DevOps pipeline ensures that only builds where core site functionality is verified get promoted, reducing the risk of regressions in production. This approach is especially valuable in Sitecore projects, where critical web journeys and API endpoints are essential to user experience.

Why Use This Approach?

  • Immediate feedback: Failing releases are blocked before merging, saving post-release firefighting.
  • Coverage: Synthetic tests simulate real browser actions and API calls over real user flows.
  • Reliability: Automated testing delivers consistent, repeatable validation without manual steps.
  • Visibility: Results are unified within both Datadog and Azure DevOps for full traceability.
  • Scalability: As Sitecore projects grow, synthetic tests can be expanded to cover new endpoints and user scenarios without significant pipeline changes.
  • Environment parity: Tests can be run against staging, UAT, or pre-production environments before the live rollouts for safer releases.

Prerequisites

  • Active Datadog account with Synthetic Monitoring enabled.
  • Datadog API and Application keys created with the appropriate access scope.
  • Azure DevOps project with a working YAML-based CI/CD pipeline.
  • Secure variable storage in Azure DevOps (e.g., Variable Groups, Secret Variables) for credentials.
  • Stable and accessible endpoint URLs for Sitecore environment(s) under test.

High-Level Integration Process

1. Datadog Synthetic Test Creation

  • Create Browser and/or HTTP Synthetic Tests in Datadog tailored for key Sitecore application flows, such as:
    • Homepage load and rendering
    • Login flow and user dashboard navigation
    • Core API calls (search, content retrieval)
    • Critical commerce or form submissions
  • Use relevant tags (e.g., premerge) for search/query filtering by the CI pipeline.
  • Configure assertions to confirm critical elements:
    • Content correctness
    • HTTP status codes
    • Redirect targets
    • Response time SLAs
  • Validate tests in Datadog’s UI with multiple runs before pipeline integration.

(Screenshot: Datadog Synthetics dashboard showing the configured pre-merge tests)

2. Azure DevOps Pipeline Configuration

The Azure DevOps YAML pipeline is set up to invoke Datadog CI, run all tests matching our tag criteria, and fail the pipeline if any test fails.

Key Pipeline Steps

  • Install Datadog CI binary: Downloads and installs the CLI in the build agent.
  • Run Synthetic Tests: Uses the environment variables and search tags to pick synthetic tests (e.g., all with type: browser and tag: premerge) and runs them directly; a YAML sketch follows this list.
  • JUnit Reporting & Artifacts: The CLI output is saved, and a JUnit-formatted result file is generated for Azure DevOps’ Tests UI. All test outputs are attached as build artifacts.
  • Conditional Fast-forward Merge: The pipeline proceeds to a gated merge to release/production only if all synthetics pass.
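A minimal sketch of such a stage is shown below. The variable group name and file paths are assumptions for illustration; the datadog-ci flags used here (--search and --jUnitReport) are its standard options for tag-based test selection and JUnit output, and the keys are passed through environment variables rather than hard-coded:

stages:
  - stage: Premerge_Datadog_Synthetics
    jobs:
      - job: RunSynthetics
        variables:
          - group: datadog-secrets   # holds DATADOG_API_KEY / DATADOG_APP_KEY as secret variables
        steps:
          - script: npm install -g @datadog/datadog-ci
            displayName: Install Datadog CI
          - script: >-
              datadog-ci synthetics run-tests
              --search "tag:premerge"
              --jUnitReport $(Build.ArtifactStagingDirectory)/synthetics-junit.xml
            displayName: Run Datadog synthetic tests
            env:
              DATADOG_API_KEY: $(DATADOG_API_KEY)
              DATADOG_APP_KEY: $(DATADOG_APP_KEY)
              DATADOG_SITE: datadoghq.com
          - task: PublishTestResults@2
            condition: always()
            displayName: Publish synthetic test results
            inputs:
              testResultsFormat: JUnit
              testResultsFiles: '$(Build.ArtifactStagingDirectory)/synthetics-junit.xml'

Because datadog-ci exits with a non-zero code when any test fails, the stage fails automatically and the gated merge described below never starts.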

How Results and Flow Work

When All Tests Pass

  • The pipeline completes the Premerge_Datadog_Synthetics stage successfully.
  • Test summaries (JUnit) and CLI outputs are attached to the pipeline run.
  • Approval-gated merge to the Release branch is unblocked; approvers can verify test results before promotion.

Build artifacts include full logs for further review.

(Screenshot: Azure DevOps pipeline run with all Datadog synthetic tests passing)

When Any Test Fails

  • If any synthetic (browser/API) test fails, the CLI exits with a non-zero exit code.
  • The JUnit summary will contain failure info and a link to the log details.
  • The pipeline stage fails (Premerge_Datadog_Synthetics), halting the fast-forward merge.
  • Approvers can review the failure in test results and attached artifacts within Azure DevOps.

Only successful resolution and green reruns allow code promotion.

(Screenshot: Azure DevOps pipeline run halted by a failed Datadog synthetic test)

Best Practices for Datadog Synthetic Monitoring

  • Run tests in parallel to reduce wait times.
  • Use separate synthetic tests per microservice or major Sitecore area to isolate failures.
  • Monitor test trends in Datadog to detect gradual performance regression over time.
  • Limit sensitive data in synthetic flows by avoiding the storage of actual credentials.
  • Schedule periodic synthetic runs outside CI/CD to catch environment fluctuations unrelated to code changes.

Security Considerations

  • Store Datadog keys as secret variables in Azure DevOps.
  • Restrict permissions for synthetic test management to trusted CI/CD admins.
  • Avoid embedding credentials or sensitive payloads in test scripts.

Conclusion

By integrating Datadog Synthetic Monitoring directly into an Azure DevOps CI/CD pipeline, Sitecore teams gain a safety net that blocks faulty builds before they hit production, while keeping a detailed audit trail. Combined with careful test design, secure key management, and continuous expansion of coverage, this approach becomes a cornerstone of proactive web application quality assurance.

 

]]>
https://blogs.perficient.com/2025/10/23/datadog-synthetic-monitoring-integration-with-azure-devops-pipeline-for-sitecore/feed/ 0 387828
Mastering Modular Front-End Development with Individual AEM ClientLibs https://blogs.perficient.com/2025/10/22/quit-bundling-all-your-code-together/ https://blogs.perficient.com/2025/10/22/quit-bundling-all-your-code-together/#respond Wed, 22 Oct 2025 11:37:35 +0000 https://blogs.perficient.com/?p=387954

Are you still combining everything into a single clientlib-all for your entire AEM project? If that sounds like you, then you are probably dealing with heavy page loads, sluggish deployments, and tangled code that’s hard to manage.

Here is the fix: break up those ClientLibs!

By tapping into modern build tools like Webpack through the ui.frontend module, you can build individual, focused Client Libraries that really boost performance, make things more straightforward, and keep your code much easier to maintain.

Why You Really Need Individual ClientLibs

Ditching that one huge ClientLib is not just about keeping things neat; it gives you some solid technical wins.

1) Better Performance Through Smart Loading

When you use just one ClientLib, every bit of CSS and JavaScript gets loaded on every single page. But when you split things up into libraries that focus on specific needs (like clientlib-form or clientlib-carousel), you are only pulling in the code you need for each template or component. This significantly reduces the initial page load time for your visitors.

2) Adaptive Cache Management

When you tweak the CSS for just one component, only that small, specific ClientLib’s cache gets cleared out. Your large vendor ClientLib, which rarely changes, remains in the user’s browser cache, resulting in better caching for repeat visitors and reduced server workload.

3) Cleaner Code That’s Easier To Work With

When you use separate ClientLibs, you are basically forcing yourself to keep different parts of your code separate, which makes it way easier for new developers to figure out what’s going on:

  • Vendor and Third-Party Code: Gets its own dedicated library
  • Main Project Styles: Goes in another library
  • Component-Specific Features: Each gets its own detailed library

 

The Current Way of Doing Things: Webpack Plus Individual ClientLibs

Today’s AEM projects use the typical AEM Project Archetype setup, which keeps the source code separate from how things get deployed:

Module       | Role           | Key Function
ui.frontend  | Source & Build | Contains all source files (JS/CSS/Less/Sass) and the Webpack configuration to bundle and optimize them.
ui.apps      | Deployment     | Receives the final bundled assets from ui.frontend and deploys them into the JCR as ClientLibs.

Step 1: Organize Your Source Code (in the ui.frontend)

You’ll want to structure your source code in a way that makes sense, keeping it separate from your Webpack setup files.

/ui.frontend
    /src
        /components
            /common
                /card.css
                /card.js
                /index.js       <-- The Webpack Entry Point
            /vendor            
                /select2.css
                /select2.js

 

Why index.js is So Useful: Rather than letting AEM manually piece together files, we use one main index.js file (ui.frontend/src/components/common/index.js) as our single Webpack entry point. This file imports all the component files the bundle needs, and Webpack handles the bundling from there. A minimal sketch of that entry point follows.
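A minimal sketch, assuming the card and select2 files shown in the source structure above:

// ui.frontend/src/components/common/index.js
// Single Webpack entry point: import every asset that belongs in the common bundle
import './card.css';
import './card.js';

// Vendor assets can be pulled in here too, or kept in a separate entry
import '../vendor/select2.css';
import '../vendor/select2.js';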

Step 2: Configure Webpack Output & ClientLib Generation

Your Webpack setup points to this main index.js file. Once it’s done compiling, Webpack creates the final, compressed bundle files (like clientlib-common.css, clientlib-common.js) and puts them in a target folder usually called dist.

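A simplified webpack.config.js illustrating the idea. The loader and plugin choices here are assumptions for the sketch; the config generated by the AEM Project Archetype is more elaborate, but the entry/output shape is the same:

// ui.frontend/webpack.config.js (simplified sketch)
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  entry: {
    // Entry name becomes the bundle name: clientlib-common.js / clientlib-common.css
    'clientlib-common': './src/components/common/index.js'
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'js/[name].js'               // -> dist/js/clientlib-common.js
  },
  module: {
    rules: [
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }
    ]
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: 'css/[name].css' })   // -> dist/css/clientlib-common.css
  ]
};

Each additional targeted ClientLib simply becomes another named entry in the same configuration.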

Step 3: Deploy the Bundle (The ui.apps ClientLib)

The last crucial step involves putting these bundles into the AEM ClientLib structure inside your ui.apps module.

This usually happens automatically during the ui.frontend build — typically via the aem-clientlib-generator, which the Maven build invokes through the frontend-maven-plugin.

Your ClientLib needs a unique category property; that is how you’ll reference it in your components.

Path in JCR (deployed through ui.apps)


/apps/my-project/clientlibs/clientlib-common
    /css
        clientlib-common.css     //The bundled Webpack output
    /js
        clientlib-common.js      //The bundled Webpack output
    /.content.xml           // <jcr:root jcr:primaryType="cq:ClientLibraryFolder" categories="[my-project.common]"/>
    /css.txt                //Lists the files in CSS folder
    /js.txt                 // Lists the files in JS folder
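For reference, those descriptor files are plain text that point the ClientLib at the bundled output. A minimal version matching the structure above would be:

# css.txt
#base=css
clientlib-common.css

# js.txt
#base=js
clientlib-common.js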

Step 4: Bundle Things Together with the Embed Property

While you can load a single clientlib-common, a better practice is to have a master ClientLib that loads everything the site needs. This library utilizes the powerful embed property to incorporate the contents of smaller, targeted libraries.

The Main Aggregator ClientLib (in clientlib-site-all)

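A sketch of what that aggregator’s .content.xml could look like. The my-project.site-all category and the my-project.form / my-project.carousel embeds are assumed names for illustration; only my-project.common matches the ClientLib defined earlier:

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:cq="http://www.day.com/jcr/cq/1.0"
    jcr:primaryType="cq:ClientLibraryFolder"
    categories="[my-project.site-all]"
    embed="[my-project.common,my-project.form,my-project.carousel]"/>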

The embed feature is essential here. It combines all your JS and CSS files into one request when the site runs, but your original ClientLibs stay organized separately in the JCR, which keeps things tidy.

Step 5: Add the Libraries to Your HTL

When it comes to your page component or template, you just need to reference that main, bundled ClientLib category using the regular Granite ClientLib template:

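For example, in a page component’s HTL (the category name follows the assumed aggregator above):

<sly data-sly-use.clientlib="/libs/granite/sightly/templates/clientlib.html"
     data-sly-call="${clientlib.all @ categories='my-project.site-all'}"></sly>

In practice you would call clientlib.css in the head and clientlib.js near the closing body tag, but the category reference works the same way.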

By setting up separate, Webpack-built ClientLibs, you are building a solid, modular, and fast front-end setup. Your ui.frontend takes care of organizing and bundling everything, while your ui.apps module handles getting it all into the AEM JCR.

Do not keep wrestling with those big, unwieldy systems; start using categories and embedding to break up your code correctly.

 

]]>
https://blogs.perficient.com/2025/10/22/quit-bundling-all-your-code-together/feed/ 0 387954