Innovation + Product Development Articles / Blogs / Perficient
https://blogs.perficient.com/category/services/innovation-product-development/

Simplifying Redirect Management in Sitecore XM Cloud with Next.js and Vercel Edge Config
https://blogs.perficient.com/2025/10/31/simplifying-redirects-in-sitecore-xm-cloud-using-vercel-edge-config/ (Fri, 31 Oct 2025)

As organizations continue their journey toward composable and headless architectures, the way we manage even simple things like redirects evolves too. Redirects are essential for SEO and user experience, but managing them within a CMS often introduces unnecessary complexity. In this blog, I will share how we streamlined redirect management for a Sitecore XM Cloud + Next.js implementation using Vercel Edge Config  – a modern, edge-based approach that improves performance, scalability, and ease of maintenance.

Why Move Redirects Out of Sitecore?

Traditionally, redirects were managed within Sitecore through redirect items stored in the Content Tree. While functional, this approach introduced challenges such as scattered redirect items and added routing overhead. With Sitecore XM Cloud and Next.js, we now have the opportunity to offload this logic to the frontend layer – closer to where routing happens. By using Vercel Edge Config, redirects are processed at the edge, improving site performance and allowing instant updates without redeployments.

By leveraging Vercel Edge Config and Next.js Middleware, redirects are evaluated before the request reaches the application’s routing or backend systems. This approach ensures:

  1. Redirects are processed before routing to Sitecore.
  2. Updates are instant and do not require deployments.
  3. Configuration is centralized and easily maintainable.

The New Approach: Redirects at the Edge

In the new setup:

  1. Redirect rules are stored in Vercel Edge Config in JSON format.
  2. Next.js middleware runs at the edge layer before routing.
  3. Middleware fetches redirect rules and checks for matches.
  4. Matching requests are redirected immediately – bypassing Sitecore.
  5. Non-matching requests continue to the standard rendering process.

Technical Details and Implementation

Edge Config Setup in Vercel

Redirect rules are stored in Vercel Edge Config, a globally distributed key-value store that allows real-time configuration access at the edge. In Vercel, each project can be linked to one or more Edge Config stores.

You can create Edge Config stores at the project level as well as at the account level. In this post, we will create the store at the account level so that it can be shared across all projects within the account.

Steps:

  1.  Open the Vercel Dashboard.
  2. Go to Storage -> Edge Config.
  3. Create a new store (for example: redirects-store).
  4. Add a key named redirects with redirect data in JSON format.
    Example JSON structure:

    {
      "redirects": {
        "/old-page": {
          "destination": "/new-page",
          "permanent": true
        },
        "/old-page/item-1": {
          "destination": "/new-page./item-1",
          "permanent": false
        }
      }
    }
  1. To connect your store to a project, navigate to the Projects tab and click the Connect Project button.

  2. Select the project from the dropdown and click Connect.

  3. Vercel automatically generates a unique Edge Config Connection String for your project which is stored as an environment variable in your project. This connection string securely links your Next.js app to the Edge Config store. You can choose to edit the environment variable name and token name from the Advanced Options while connecting a project.

  4. Please note that an EDGE_CONFIG environment variable is added by default (if you do not rename it via the Advanced Options mentioned in the previous step). This environment variable is automatically available inside the Edge Runtime and is used by the Edge Config SDK.

Implementing Redirect Logic in Next.js Middleware

  1. Install the Vercel Edge Config SDK to fetch data from the Edge Config store:
    npm install @vercel/edge-config

    The SDK provides low-latency, read-only access to configuration data replicated across Vercel’s global edge network. Import the SDK and use it within your middleware to fetch redirect data efficiently.

  2. Middleware Configuration: All redirect logic is handled in the middleware.ts file located at the root of the Next.js application. This setup ensures that every incoming request is intercepted, evaluated against the defined redirect rules, and redirected if necessary – before the request proceeds through the rest of the lifecycle.

    Code when using a single store and the default environment variable EDGE_CONFIG:
    import { NextResponse } from 'next/server';
    import type { NextFetchEvent, NextRequest } from 'next/server';
    import { get } from '@vercel/edge-config';
    import middleware from 'lib/middleware';
    
    export default async function (req: NextRequest, ev: NextFetchEvent) {
      try {
        const pathname = req.nextUrl.pathname;
    
        // Normalize the pathname to ensure consistent matching
        const normalizedPathname = pathname.replace(/\/$/, '').toLowerCase();
    
        // Fetch redirects from Vercel Edge Config using the EDGE_CONFIG connection
        const redirects = await get('redirects');
    
        const redirectEntries = typeof redirects === 'string' ? JSON.parse(redirects) : redirects;
    
        // Match redirect rule
        const redirect = redirectEntries[normalizedPathname];
    
        if (redirect) {
          const statusCode = redirect.permanent ? 308 : 307;
          let destinationUrl = redirect.destination;
          //avoid cyclic redirects
          if (normalizedPathname !== destinationUrl) {
            // Handle relative URLs
            if (!/^https?:\/\//.test(redirect.destination)) {
              const baseUrl = `${req.nextUrl.protocol}//${req.nextUrl.host}`;
              destinationUrl = new URL(redirect.destination, baseUrl).toString();
            }
            return NextResponse.redirect(destinationUrl, statusCode);
          }
        }
    
        return middleware(req, ev);
      } catch (error) {
        console.error('Error in middleware:', error);
        return middleware(req, ev);
      }
    }
    
    export const config = {
      /*
       * Match all paths except for:
       * 1. /api routes
       * 2. /_next (Next.js internals)
       * 3. /sitecore/api (Sitecore API routes)
       * 4. /- (Sitecore media)
       * 5. /healthz (Health check)
       * 6. all root files inside /public
       */
      matcher: ['/', '/((?!api/|_next/|healthz|sitecore/api/|-/|favicon.ico|sc_logo.svg|throw/).*)'],
    };

    Code when using multiple stores and custom environment variables. In this example, there are two Edge Config stores, each linked to its own environment variable: EDGE_CONFIG_CONSTANT_REDIRECTS and EDGE_CONFIG_AUTHORABLE_REDIRECTS. The code first checks for a redirect in the first store, and if not found, it checks the second. An Edge Config Client is required to retrieve values from each store.

    import { NextRequest, NextFetchEvent } from 'next/server';
    import { NextResponse } from 'next/server';
    import middleware from 'lib/middleware';
    import { createClient } from '@vercel/edge-config';
    
    export default async function (req: NextRequest, ev: NextFetchEvent) {
      try {
        const pathname = req.nextUrl.pathname;
    
        // Normalize the pathname to ensure consistent matching
        const normalizedPathname = pathname.replace(/\/$/, '').toLowerCase();
    
        // Fetch Redirects from Store1
        const store1RedirectsClient = createClient(process.env.EDGE_CONFIG_CONSTANT_REDIRECTS);
        const store1Redirects = await store1RedirectsClient.get('redirects');
    
        //Fetch Redirects from Store2
        const store2RedirectsClient = createClient(process.env.EDGE_CONFIG_AUTHORABLE_REDIRECTS);
        const store2Redirects = await store2RedirectsClient.get('redirects');
    
        let redirect;
    
        if (store1Redirects) {
          const redirectEntries =
            typeof store1Redirects === 'string'
              ? JSON.parse(store1Redirects)
              : store1Redirects;
    
          redirect = redirectEntries[normalizedPathname];
        }
    
        // If redirect is not present in permanent redirects, lookup in the authorable redirects store.
        if (!redirect) {
          if (store2Redirects) {
            const store2RedirectEntries =
              typeof store2Redirects === 'string'
                ? JSON.parse(store2Redirects)
                : store2Redirects;
    
            redirect = store2RedirectEntries[normalizedPathname];
          }
        }
    
        if (redirect) {
          const statusCode = redirect.permanent ? 308 : 307;
          let destinationUrl = redirect.destination;
    
          if (normalizedPathname !== destinationUrl) {
            // Handle relative URLs
            if (!/^https?:\/\//.test(redirect.destination)) {
              const baseUrl = `${req.nextUrl.protocol}//${req.nextUrl.host}`;
              destinationUrl = new URL(redirect.destination, baseUrl).toString();
            }
            return NextResponse.redirect(destinationUrl, statusCode);
          }
        }
    
        return middleware(req, ev);
      } catch (error) {
        console.error('Error in middleware:', error);
        return middleware(req, ev);
      }
    }
    
    export const config = {
      /*
       * Match all paths except for:
       * 1. /api routes
       * 2. /_next (Next.js internals)
       * 3. /sitecore/api (Sitecore API routes)
       * 4. /- (Sitecore media)
       * 5. /healthz (Health check)
       * 6. all root files inside /public
       */
      matcher: [
        '/',
        '/((?!api/|_next/|healthz|sitecore/api/|-/|favicon.ico|sc_logo.svg|throw/).*)',
      ],
    };

Summary

With this setup:

  • The Edge Config store is linked to your Vercel project via environment variables.
  • Redirect data is fetched instantly at the Edge Runtime through the SDK.
  • Each project can maintain its own independent redirect configuration.
  • All updates reflect immediately – no redeployment required.

Points to Remember:

  • Avoid overlapping or cyclic redirects.
  • Keep all redirects lowercase and consistent.
  • The Edge Config connection string acts as a secure token – it should never be exposed in the client or source control.
  • Always validate JSON structure before saving in Edge Config.
  • A backup is created on every write, maintaining a version history that can be accessed from the Backups tab of the Edge Config store.
  • Sitecore-managed redirects remain supported when necessary for business or content-driven use cases.

Managing redirects at the edge has made our Sitecore XM Cloud implementations cleaner, faster, and easier to maintain. By shifting this responsibility to Next.js Middleware and Vercel Edge Config, we have created a more composable and future-ready approach that aligns perfectly with modern digital architectures.

At Perficient, we continue to adopt and share solutions that simplify development while improving site performance and scalability. If you are working on XM Cloud or planning a headless migration, this edge-based redirect approach is a great way to start modernizing your stack.

Node.js vs PHP: Which One Is Better?
https://blogs.perficient.com/2025/10/31/node-js-vs-php-which-one-is-better/ (Fri, 31 Oct 2025)

In the world of server-side scripting, two heavyweight contenders keep reappearing in discussions, RFPs, and code reviews: Node.js and PHP. This article dives into a clear, pragmatic comparison for developers and technical leads who need to decide which stack best fits a given project. Think of it as a blunt, slightly witty guide that respects both the history and the present-day realities of server-side development.

Background and History

PHP began as a personal project in the mid-1990s and evolved into a dominant server-side language for the web. Its philosophy centered on simplicity and rapid development for dynamic websites. Node.js, introduced in 2009, brought JavaScript to the server, leveraging the event-driven, non-blocking I/O model that underpins modern asynchronous web applications. The contrast is telling: PHP grew out of the traditional request‑response cycle, while Node.js grew out of the need for scalable, event-oriented servers.

Today, both technologies are mature, with active ecosystems and broad hosting support. The choice often comes down to project requirements, team expertise, and architectural goals.

Performance and Concurrency

Node.js shines in scenarios that require high concurrency with many I/O-bound operations. Its single-threaded event loop can handle numerous connections efficiently, provided you design for non-blocking I/O.

PHP’s traditional model in common web server setups is process-per-request (or thread-per-request): each request is handled in isolation. Modern PHP runtimes and frameworks offer asynchronous capabilities and improved performance, but Node.js tends to be more naturally aligned with non-blocking patterns.

Important takeaway: for CPU-intensive tasks, Node.js can struggle without worker threads or offloading to services.
PHP can be equally challenged by long-running tasks unless you use appropriate background processing (e.g., queues, workers) or switch to other runtimes.

Brief benchmark explanation: consider latency under high concurrent requests and throughput (requests per second). Node.js often maintains steady latency under many simultaneous I/O operations, while PHP tends to perform robustly for classic request/response workloads. Real-world results depend on code quality, database access patterns, and server configuration.

Ecosystem and Package Managers

Node.js features npm (and yarn/pnpm) with a vast, fast-growing ecosystem. Packages range from web frameworks like Express and Fastify to tooling for testing, deployment, and microservices.

PHP’s ecosystem centers around Composer as its package manager, with Laravel, Symfony, and WordPress shaping modern PHP development. Both ecosystems offer mature libraries, but the Node.js ecosystem tends to emphasize modularity and microservice-ready tooling, while PHP communities often emphasize rapid web application development with integrated frameworks.

Development Experience and Learning Curve

Node.js appeals to front-end developers who already speak JavaScript. A unified language stack can reduce cognitive load and speed up onboarding. Its asynchronous style, however, can introduce complexity for beginners (callbacks, promises, async/await).

PHP, by contrast, has a gentler entry path for many developers. Modern PHP with frameworks emphasizes clear MVC patterns, readable syntax, and synchronous execution that aligns with many developers’ mental models.

Recommendation: if your team is JS-fluent and you’re building highly interactive, I/O-bound services, Node.js is compelling. If you need rapid server-side web development with minimal context switching and a stable, synchronous approach, PHP remains a solid choice.

Tooling and Deployment

Deployment models for Node.js often leverage containerization, orchestration (Kubernetes), and serverless options. The lightweight, event-driven nature of Node.js fits microservices and API gateways well.

PHP deployment typically benefits from proven traditional hosting stacks (LAMP/LEMP) or modern containerized approaches. Frameworks like Laravel add modern tooling—routing, queues, events, and packaging—that pair nicely with robust deployment pipelines.

Security Considerations

Security is not tied to the language alone but to the ecosystem, coding practices, and configuration. Node.js projects must guard against prototype pollution, dependency vulnerabilities, and insecure defaults in npm packages.

PHP projects should be mindful of input validation, dependency integrity, and keeping frameworks up to date. In both ecosystems, employing a secure development lifecycle, dependency auditing, and automated tests is essential.

Scalability and Architecture Patterns

Node.js is often favored for horizontal scaling, stateless services, and API-driven architectures. Microservices, edge functions, and real-time features align well with Node.js’s strengths.

PHP-based architectures commonly leverage stateless app servers behind load balancers, with robust support for queues and background processing via workers. For long-running tasks and heavy CPU work, both stacks perform best when using dedicated services or offloading workloads to separate workers or service layers.

Typical Use Cases

  • Node.js: highly concurrent APIs, real-time applications, microservices, serverless functions, and streaming services.
  • PHP: traditional web applications with rapid development cycles, CMS-backed sites, monolithic apps, and projects with established PHP expertise.

Cost and Hosting Considerations

Both ecosystems offer broad hosting options. Node.js environments may incur slightly higher operational complexity in some managed hosting scenarios, but modern cloud providers offer scalable, cost-effective solutions for containerized or serverless Node.js apps.

PHP hosting is widely supported, often with economical LAMP/LEMP stacks. Total cost of ownership hinges on compute requirements, maintenance overhead, and the sophistication of deployment automation.

Developer Productivity

Productivity benefits come from language familiarity, tooling quality, and ecosystem maturity. Node.js tends to accelerate frontend-backend collaboration due to shared JavaScript fluency and a rich set of development tools.

PHP offers productivity through mature frameworks, extensive documentation, and a strong pool of experienced developers. The right choice depends on your teams’ strengths and project goals.

Community and Long-Term Viability

Both Node.js and PHP have vibrant communities and long-standing track records. Node.js maintains robust corporate backing, broad adoption in modern stacks, and a continuous stream of innovations. PHP remains deeply entrenched in the web with steady updates and widespread usage across many domains. For sustainability, prefer active maintenance, regular security updates, and a healthy ecosystem of plugins and libraries.

Pros and Cons Summary

  • Node.js Pros: excellent for high-concurrency I/O, single language across stack, strong ecosystem for APIs and microservices, good for real-time features.
  • Node.js Cons: can be challenging for CPU-heavy tasks, callback complexity (mitigated by async/await and worker threads).
  • PHP Pros: rapid web development with mature frameworks, straightforward traditional hosting, stable performance for typical web apps.
  • PHP Cons: historically synchronous model may feel limiting for highly concurrent workloads, ecosystem fragmentation in some areas.

Recommendation Guidance Based on Project Type

Choose Node.js when building highly scalable APIs, real-time features, or microservices that demand non-blocking I/O and a unified JavaScript stack.

Choose PHP when you need rapid development of traditional web applications, rely on established CMS ecosystems, or have teams with deep PHP expertise.

Hybrid approaches are also common: use Node.js for specific microservices and PHP for monolithic web interfaces, integrating through well-defined APIs.

Conclusion

Node.js and PHP each have a well-earned place in modern software architecture. The right choice isn’t a dogmatic rule but a thoughtful alignment of project goals, team capabilities, and operational realities. As teams grow and requirements evolve, a pragmatic blend—leveraging Node.js for scalable services and PHP for dependable, rapid web delivery—often yields the best of both worlds. With disciplined development practices and modern tooling, you can build resilient, maintainable systems regardless of the core language you choose.

Code Snippets: Simple HTTP Server

// Node.js: simple HTTP server
const http = require('http');
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello from Node.js server!\n');
});

server.listen(port, () => {
  console.log(`Node.js server running at http://localhost:${port}/`);
});

 

PHP (built-in server):

<?php
// PHP: simple HTTP server (built-in CLI server)
// save as server.php and run: php -S localhost:8080 server.php
echo "Hello from PHP server!\n";
?>

Note: In production, prefer robust frameworks and production-grade servers (e.g., Nginx + PHP-FPM, or Node.js with a process manager and reverse proxy).

Building for Humans – Even When Using AI
https://blogs.perficient.com/2025/10/29/building-for-humans-even-when-using-ai/ (Thu, 30 Oct 2025)

Artificial Intelligence (AI) is everywhere. Every month brings new features promising “deeper thinking” and “agentic processes.” Tech titans are locked in trillion-dollar battles. Headlines scream about business, economic, and societal concerns. Skim the news and you’re left excited and terrified!

Here’s the thing: we’re still human – virtues, flaws, quirks, and all. We’ve always had our agency, collectively shaping our future. Even now, while embracing AI, we need to keep building for us.

We Fear What We Do Not Know

“AI this… AI that…” Even tech leaders admit they don’t fully understand it. Sci-fi stories warn us with cautionary tales. News cycles fuel anxiety about job loss, disconnected human relationships, and cognitive decline.

Luckily, this round of innovation is surprisingly transparent. You can read the Attention is All You Need paper (2017) that started it all. You can even build your own AI if you want! This isn’t locked behind a walled garden. That’s a good thing.

What the Past Can Tell Us

I like to look at the past to gauge what we can expect from the future. Humans have feared every major invention and technological breakthrough. We expect the worst, but most have proven to improve life.

We’ve always had distractions from books, movies, games, to TikTok brain-rot. Some get addicted and go too deep, while others thrive. People favor entertainment and leisure activities – this is nothing new – so I don’t feel like cognitive decline is anything to worry about. Humanity has overcome all of it before and will continue to do so.

 


 

Humans are Simple (and Complicated) Creatures

We look for simplicity and speed. Easy to understand, easy to look at, easy to interact with, easy to buy from. We skim read, we skip video segments, we miss that big red CTA button. The TL;DR culture rules. Even so, I don’t think we’re at risk of the future from Idiocracy (2006).

That’s not to say that we don’t overcomplicate things. The Gods Must Be Crazy movie (1980) has a line that resonates, “The more [we] improved [our] surroundings to make life easier, the more complicated [we] made it.” We bury our users (our customers) in detail when they just want to skim, skip, and bounce.

Building for Computers

The computer revolution (1950s-1980s) started with machines serving humans. Then came automation. And eventually, systems talking to systems.

Fast-forward to the 2010s, where marketers gamed the algorithms to win at SEO, SEM, and social networking. Content was created for computers, not humans. Now we have the dead internet theory. We were building without humans in mind.

We will still have to build for systems to talk to systems. That won’t change. APIs are more important than ever, and agentic AI relies on them. Because of this, it is crucial to make sure what you are building “plays well with others”. But AIs and APIs are tools, not the audience.

Building for Humans

Google used to tell us all to build what people want, as opposed to gaming their systems. I love that advice. However, at first it felt unrealistic…gaming the system worked. Then after many updates, for a short bit, it felt like Google was getting there! Then it got worse and feels like pay-to-play recently.

Now AI is reshaping search and everything else. You can notice the gap between search results and AI recommendations. They don’t match. AI assistants aim to please humans, which is great, until it inevitably changes.

Digital teams must build for AI ingestion, but if you neglect the human aspect and the end user experience, then you will only see short-term wins.

Examples of Building for Humans

  • Make it intuitive and easy. Simple for end users means a lot of work for builders, but it is worth it! Reduce their cognitive load.
  • Build with empathy. Appeal to real people, not just personas and bots. Include feedback loops so they can feel heard.
  • Get to the point. Don’t overwhelm users, instead help them take action! Delight your customers by saving them time.
  • Add humor when appropriate. Don’t be afraid to be funny, weird, or real…it connects on a human level.
  • Consider human bias. Unlike bots and crawlers, humans aren’t always logical. Design for human biases.
  • Watch your users. Focus groups or digital tracking tools are great for observing. Learn from real users and iterate.

Conclusion

Building for humans never goes out of style. Whatever comes after AI will still need to serve people. So as tech evolves, let’s keep honing systems that work with and around our human nature.

……

If you are looking for that extra human touch (built with AI), reach out to your Perficient account manager or use our contact form to begin a conversation.

Executing a Sitecore Migration: Development, Performance, and Beyond
https://blogs.perficient.com/2025/10/28/executing-a-sitecore-migration-development/ (Tue, 28 Oct 2025)

In the previous blog, we explored the strategic and architectural considerations that set the foundation for a successful Sitecore migration. Once the groundwork is ready, it’s time to move from planning to execution, where the real complexity begins. The development phase of a Sitecore migration demands precision, speed, and scalability. From choosing the right development environment and branching strategy to optimizing templates, caching, and performance, every decision directly impacts the stability and maintainability of your new platform.

This blog dives into the practical side of migration, covering setup best practices, developer tooling (IDE and CI/CD), coding standards, content model alignment, and performance tuning techniques to help ensure that your transition to Sitecore’s modern architecture is both seamless and future-ready.

 

1. Component and Code Standards Over Blind Reuse

In any Sitecore migration, one of the biggest mistakes teams make is lifting and shifting old components into the new environment. While this may feel faster in the short term, it creates long-term problems:

  • Missed product offerings: Old components were often built around constraints of an earlier Sitecore version. Reusing them as-is means you can’t take advantage of new product features like improved personalization, headless capabilities, SaaS integrations, and modern analytics.
  • Outdated standards: Legacy code usually does not meet current coding, security, and performance standards. This can introduce vulnerabilities and inefficiencies into your new platform.
  • Accessibility gaps: Many older components don’t align with WCAG and ADA accessibility standards — missing ARIA roles, semantic HTML, or proper alt text. Reusing them will carry accessibility debt into your fresh build.
  • Maintainability issues: Old code often has tight coupling, minimal test coverage, and obsolete dependencies. Keeping it will slow down future upgrades and maintenance.

Best practice: Treat the migration as an opportunity to raise your standards. Audit old components for patterns and ideas, but don’t copy-paste them. Rebuild them using modern frameworks, Sitecore best practices, security guidelines, and accessibility compliance. This ensures the new solution is future-proof and aligned with the latest Sitecore roadmap.

 

2. Template Creation and Best Practices

  • Templates define the foundation of your content structure, so designing them carefully is critical.
  • Analyze before creating: Study existing data models, pages, and business requirements before building templates.
  • Use base templates: Group common fields (e.g., Meta, SEO, audit info) into base templates and reuse them across multiple content types.
  • Leverage branch templates: Standardize complex structures (like a landing page with modules) by creating branch templates for consistency and speed.
  • Follow naming and hierarchy conventions: Clear naming and logical organization make maintenance much easier.

 

3. Development Practices and Tools

A clean, standards-driven development process ensures the migration is efficient, maintainable, and future-proof. It’s not just about using the right IDEs but also about building code that is consistent, compliant, and friendly for content authors.

  • IDEs & Tools
    • Use Visual Studio or VS Code with Sitecore- and frontend-specific extensions for productivity.
    • Set up linting, code analysis, and formatting tools (ESLint, Prettier in case of JSS code, StyleCop) to enforce consistency.
    • Use AI assistance (GitHub Copilot, Codeium, etc.) to speed up development, but always review outputs for compliance and quality. There are many AI tools on the market that can even turn designs/prototypes into code in a specified language.
  • Coding Standards & Governance
    • Follow SOLID principles and keep components modular and reusable.
    • Ensure secure coding standards: sanitize inputs, validate data, avoid secrets in code.
    • Write accessible code: semantic HTML, proper ARIA roles, alt text, and keyboard navigation.
    • Document best practices and enforce them with pull request reviews and automated checks.
  • Package & Dependency Management
    • Select npm/.NET packages carefully: prefer well-maintained, community-backed, and security-reviewed ones.
    • Avoid large, unnecessary dependencies that bloat the project.
    • Run dependency scanning tools to catch vulnerabilities.
    •  Keep lockfiles for environment consistency.
  • Rendering Variants & Parameters
    • Leverage rendering variants (SXA/headless) to give flexibility without requiring code changes.
    • Add parameters so content authors can adjust layouts, backgrounds, or alignment safely.
    • Always provide sensible defaults to protect design consistency.
  • Content Author Experience

Build with the content author in mind:

    • Use clear, meaningful field names and help text.
    • Avoid unnecessary complexity: fewer, well-designed fields are better.
    • Create modular components that authors can configure and reuse.
    • Validate with content author UAT to ensure the system is intuitive for day-to-day content updates.

Strong development practices not only speed up migration but also set the stage for easier maintenance, happier authors, and a longer-lasting Sitecore solution.

 

4. Data Migration & Validation

Migrating data is not just about “moving items.” It’s about translating old content into a new structure that aligns with modern Sitecore best practices.

  • Migration tools
    Sitecore does provide migration tools to move data, for example from XM to XM Cloud. Leverage these tools for content that can be copied as-is.
  • PowerShell for Migration
    • Use Sitecore PowerShell Extensions (SPE) to script the migration of data that cannot simply be copied as-is, i.e., content that must land in different locations or fields in the new system (see the sketch after this list).
    • Automate bulk operations like item creation, field population, media linking, and handling of multiple language versions.
    • PowerShell scripts can be run iteratively, making them ideal as content continues to change during development.
    • Always include logging and reporting so migrated items can be tracked, validated, and corrected if needed.
  • Migration Best Practices
    • Field Mapping First: Analyze old templates and decide what maps directly, what needs transformation, and what should be deprecated.
    • Iterative Migration: Run migration scripts in stages, validate results, and refine before final cutover.
    • Content Cleanup: Remove outdated, duplicate, or unused content instead of carrying it forward.
    • SEO Awareness: Ensure titles, descriptions, alt text, and canonical fields are migrated correctly.
    • Audit & Validation:
      • Use PowerShell reports to check item counts, empty fields, or broken links.
      • Crawl both old and new sites with tools like Screaming Frog to compare URLs, metadata, and page structures.
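
As referenced above, here is a minimal Sitecore PowerShell Extensions sketch of a scripted migration. It is an illustration only: the source and target paths, template path, and field names are placeholders and will differ for every project.

# Illustrative SPE script: copy legacy items into a new structure with remapped fields
$sourceRoot = "master:/sitecore/content/LegacySite/Articles"    # placeholder path
$targetRoot = "master:/sitecore/content/NewSite/Home/Articles"  # placeholder path
$targetTemplate = "/sitecore/templates/Project/NewSite/Article" # placeholder template

Get-ChildItem -Path $sourceRoot -Recurse | ForEach-Object {
    $source = $_
    # Create the item under the new root using the new template
    $new = New-Item -Path $targetRoot -Name $source.Name -ItemType $targetTemplate

    # Remap old fields onto the new template's fields (placeholder field names)
    $new.Editing.BeginEdit()
    $new["Title"]   = $source["Headline"]
    $new["Summary"] = $source["Teaser Text"]
    $new.Editing.EndEdit() | Out-Null

    # Log each migrated item so the run can be validated and reported on
    Write-Log "Migrated $($source.Paths.FullPath) -> $($new.Paths.FullPath)"
}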

 

5. SEO Data Handling

SEO is one of the most critical success factors in any migration — if it’s missed, rankings and traffic can drop overnight.

  • Metadata: Preserve titles, descriptions, alt text, and Open Graph tags. Missing these leads to immediate SEO losses.
  • Redirects: Map old URLs with 301 redirects (avoid chains). Broken redirects = lost link equity.
  • Structured Data: Add/update schema (FAQ, Product, Article, VideoObject). This improves visibility in SERPs and AI-generated results.
  • Core Web Vitals: Ensure the new site is fast, stable, and mobile-first. Poor performance = lower rankings.
  • Emerging SEO: Optimize for AI/Answer Engine results, focus on E-E-A-T (author, trust, freshness), and create natural Q&A content for voice/conversational search.
  • Validation: Crawl the site before and after migration with tools like Screaming Frog or Siteimprove to confirm nothing is missed.

Strong SEO handling ensures the new Sitecore build doesn’t just look modern — it retains rankings, grows traffic, and is ready for AI-powered search.

 

6. Serialization & Item Deployment

Serialization is at the heart of a smooth migration and ongoing Sitecore development. Without the right approach, environments drift, unexpected items get deployed, or critical templates are missed.

  • ✅ Best Practices
    • Choose the Right Tool: Sitecore Content Serialization (SCS), Unicorn, or TDS — select based on your project needs (a sample SCS module configuration follows this list).
    • Scope Carefully: Serialize only what is required (templates, renderings, branches, base content). Avoid unnecessary content items.
    • Organize by Modules: Structure serialization so items are grouped logically (feature, foundation, project layers). This keeps deployments clean and modular.
    • Version Control: Store serialization files in source control (Git/Azure devops) to track changes and allow safe rollbacks.
    • Environment Consistency: Automate deployment pipelines so serialized items are promoted consistently from dev → QA → UAT → Prod.
    • Validation: Always test deployments in lower environments first to ensure no accidental overwrites or missing dependencies.
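
As referenced above, for teams using Sitecore Content Serialization, here is a minimal *.module.json sketch showing scoped includes per layer. The namespace, names, and paths are assumptions for illustration only.

{
  "namespace": "Project.MySite",
  "items": {
    "includes": [
      {
        "name": "templates",
        "path": "/sitecore/templates/Project/MySite",
        "allowedPushOperations": "createUpdateAndDelete"
      },
      {
        "name": "renderings",
        "path": "/sitecore/layout/Renderings/Project/MySite"
      },
      {
        "name": "base-content",
        "path": "/sitecore/content/MySite/Home",
        "scope": "itemAndDescendants",
        "allowedPushOperations": "createOnly"
      }
    ]
  }
}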

Properly managed serialization ensures clean deployments, consistent environments, and fewer surprises during migration and beyond.

 

7. Forms & Submissions

In Sitecore XM Cloud, forms require careful planning to ensure smooth data capture and migration.

  •  XM Cloud Forms (Webhook-based): Submit form data via webhooks to CRM, backend, or marketing platforms. Configure payloads properly and ensure validation, spam protection, and compliance.
  • Third-Party Forms: HubSpot, Marketo, Salesforce, etc., can be integrated via APIs for advanced workflows, analytics, and CRM connectivity.
  • Create New Forms: Rebuild forms with modern UX, accessibility, and responsive design.
  • Migrate Old Submission Data: Extract and import previous form submissions into the new system or CRM, keeping field mapping and timestamps intact.
  • ✅ Best Practices: Track submissions in analytics, test end-to-end, and make forms configurable for content authors.

This approach ensures new forms work seamlessly while historical data is preserved.

 

8. Personalization & Experimentation

Migrating personalization and experimentation requires careful planning to preserve engagement and insights.

  • Export & Rebuild: Export existing rules, personas, and goals. Review them thoroughly and recreate only what aligns with current business requirements.
  • A/B Testing: Identify active experiments, migrate if relevant, and rerun them in the new environment to validate performance.
  • Sitecore Personalize Implementation:
    • Plan data flow into the CDP and configure event tracking.
    • Implement personalization via Sitecore Personalize (cloud-side) or the Engage SDK for XM Cloud implementations, depending on requirements.

✅ Best Practices:

  • Ensure content authors can manage personalization rules and experiments without developer intervention.
  • Test personalized experiences end-to-end and monitor KPIs post-migration.

A structured approach to personalization ensures targeted experiences, actionable insights, and a smooth transition to the new Sitecore environment.

 

9. Accessibility

Ensuring accessibility is essential for compliance, usability, and SEO.

  • Follow WCAG standards: proper color contrast, semantic HTML, ARIA roles, and keyboard navigation.
  • Validate content with accessibility tools and manual checks before migration cutover.
  • Accessible components improve user experience for all audiences and reduce legal risk.

 

10. Performance, Caching & Lazy Loading

Optimizing performance is critical during a migration to ensure fast page loads, better user experience, and improved SEO.

  • Caching Strategies:
    • Use Sitecore output caching and data caching for frequently accessed components.
    • Implement CDN caching for media assets to reduce server load and improve global performance.
    • Apply cache invalidation rules carefully to avoid stale content.
  • Lazy Loading:
    • Load images, videos, and heavy components only when they enter the viewport.
    • Improves perceived page speed and reduces the initial payload (see the sketch after this list).
  • Performance Best Practices:
    • Optimize images and media (WebP/AVIF).
    • Minimize JavaScript and CSS bundle size, and use tree-shaking where possible.
    • Monitor Core Web Vitals (LCP, CLS, FID) post-migration.
    • Test performance across devices and regions before go-live.
  • Content Author Considerations:
    • Ensure caching and lazy loading do not break dynamic components or personalization.
    • Provide guidance to authors on content that might impact performance (e.g., large images or embeds).
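
As referenced above, here is a small Next.js sketch of lazy loading in a headless Sitecore front end. The component and image paths are placeholders.

import dynamic from 'next/dynamic';
import Image from 'next/image';

// Load a heavy component only on the client, when it is actually rendered
const VideoCarousel = dynamic(() => import('components/VideoCarousel'), {
  ssr: false,
  loading: () => <div>Loading...</div>,
});

export const ArticleHero = (): JSX.Element => (
  <section>
    {/* next/image defers offscreen images by default and serves optimized formats */}
    <Image src="/images/hero.jpg" alt="Hero banner" width={1200} height={600} />
    <VideoCarousel />
  </section>
);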

Proper caching and lazy loading ensure a fast, responsive, and scalable Sitecore experience, preserving SEO and user satisfaction after migration.

 

11. CI/CD, Monitoring & Automated Testing

A well-defined deployment and monitoring strategy ensures reliability, faster releases, and smooth migrations.

  • CI/CD Pipelines:
    • Set up automated builds and deployments according to your hosting platform: Azure, Vercel, Netlify, or on-premise.
    • Ensure deployments promote items consistently across Dev → QA → UAT → Prod.
    • Include code linting, static analysis, and unit/integration tests in the pipeline (a sample pipeline sketch follows this list).
  • Monitoring & Alerts:
    • Track website uptime, server health, and performance metrics.
    • Configure timely alerts for downtime or abnormal behavior to prevent business impact.
  • Automated Testing:
    • Implement end-to-end, regression, and smoke tests for different environments.
    • Include automated validation for content, forms, personalization, and integrations.
    • Integrate testing into CI/CD pipelines to catch issues early.
  • ✅ Best Practices:
    • Ensure environment consistency to prevent drift.
    • Use logs and dashboards for real-time monitoring.
    • Align testing and deployment strategy with business-critical flows.
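
As referenced above, here is a minimal GitHub Actions sketch of the build-and-test stage. The workflow name, Node version, and npm scripts are assumptions; the same stages translate directly to Azure DevOps or other platforms.

# .github/workflows/ci.yml (illustrative)
name: build-and-test
on:
  pull_request:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint   # code linting / static analysis
      - run: npm test       # unit and integration tests
      - run: npm run build  # production build of the Next.js head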

A robust CI/CD, monitoring, and automated testing strategy ensures reliable deployments, reduced downtime, and faster feedback cycles across all environments.

 

12. Governance, Licensing & Cutover

A successful migration is not just technical — it requires planning, training, and governance to ensure smooth adoption and compliance.

  • License Validation: Compare the current Sitecore license with what the new setup requires. Ensure coverage for all modules and environments, and validate that users and roles are granted the correct rights.
  • Content Author & Marketer Readiness:
    • Train teams on the new workflows, tools, and interface.
    • Provide documentation, demos, and sandbox environments to accelerate adoption.
  • Backup & Disaster Recovery:
    • Plan regular backups and ensure recovery procedures are tested.
    • Define RTO (Recovery Time Objective) and RPO (Recovery Point Objective) for critical data.
  • Workflow, Roles & Permissions:
    • Recreate workflows, roles, and permissions in the new environment.
    • Implement custom workflows if required.
    • Governance gaps can lead to compliance and security risks — audit thoroughly.
  • Cutover & Post-Go-Live Support:
    • Plan the migration cutover carefully to minimize downtime.
    • Prepare a support plan for immediate issue resolution after go-live.
    • Monitor KPIs, SEO, forms, personalization, and integrations to ensure smooth operation.

Proper governance, training, and cutover planning ensures the new Sitecore environment is compliant, adopted by users, and fully operational from day one.

 

13. Training & Documentation

Proper training ensures smooth adoption and reduces post-migration support issues.

  • Content Authors & Marketers: Train on new workflows, forms, personalization, and content editing.
  • Developers & IT Teams: Provide guidance on deployment processes, CI/CD, and monitoring.
  • Documentation: Maintain runbooks, SOPs, and troubleshooting guides for ongoing operations.
  • Encourage hands-on sessions and sandbox practice to accelerate adoption.

 

Summary:

Sitecore migrations are complex, and success often depends on the small decisions made throughout development, performance tuning, SEO handling, and governance. This blog brings together practical approaches and lessons learned from real-world implementations — aiming to help teams build scalable, accessible, and future-ready Sitecore solutions.

While every project is different, the hope is that these shared practices offer a useful starting point for others navigating similar journeys. The Sitecore ecosystem continues to evolve, and so do the ways we build within it.

 

Spring Boot + OpenAI: A Developer’s Guide to Generative AI Integration
https://blogs.perficient.com/2025/10/27/spring-boot-openai-a-developers-guide-to-generative-ai-integration/ (Mon, 27 Oct 2025)

Introduction

In this blog, we’ll explore how to connect OpenAI’s API with a Spring Boot application, step by step.

We’ll cover the setup process and walk through the implementation with a practical example.

By integrating OpenAI with Spring Boot, you can create solutions that are not only powerful but also scalable and reliable.

Prerequisites

  • Java 17+
  • Maven
  • Spring Boot (3.x recommended)
  • OpenAI API Key (get it from platform.openai.com)
  • Basic knowledge of REST APIs

OpenAI’s platform documentation helps developers understand how to prompt models to generate meaningful text. It’s basically a cheat sheet for how to communicate with AI so that a well-crafted prompt returns smart and useful answers.

Implementation in Spring Boot

To integrate OpenAI’s GPT-4o-mini model into a Spring Boot application, we analyzed the structure of a typical curl request and response provided by OpenAI.

API docs reference:

https://platform.openai.com/docs/overview

https://docs.spring.io/spring-boot/index.html

Curl Request

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "assistant", "content": "Hello"},
      {"role": "user", "content": "Hi"}
    ]
  }'

Note-

“role”: “user” – Represents the end-user interacting with the assistant

“role”: “assistant” – Represents the assistant’s response.

The response generated by the model looks like this:

{
  "id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
  "object": "chat.completion",
  "created": 1741569952,
  "model": "gpt-4o-mini-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 10,
    "total_tokens": 29,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default"
}

 

Controller Class:

In the snippet below, we explore a simple Spring Boot controller that interacts with OpenAI’s API. When the end user sends a prompt to the URL (e.g., /bot/chat?prompt=what is spring boot), the controller reads the model name and API URL from the application.properties file. It then builds a request from the provided prompt and sends it to OpenAI using a REST call (RestTemplate). After validating the request, OpenAI sends back a response.

@RestController
@RequestMapping("/bot")
public class GenAiController {

    @Value("${openai.model}")
    private String model;

    @Value(("${openai.api.url}"))
    private String apiURL;

    @Autowired
    private RestTemplate template;

    @GetMapping("/chat")
    public String chat(@RequestParam("prompt") String prompt) {
        GenAiRequest request = new GenAiRequest(model, prompt);
        System.out.println("Request: " + request );
        GenAIResponse genAIResponse = template.postForObject(apiURL, request, GenAIResponse.class);
        return genAIResponse.getChoices().get(0).getMessage().getContent();
    }
}
 

Configuration Class:

Annotated with @Configuration, this class defines beans and settings for the application context. It pulls the OpenAI API key from the properties file, and a customized RestTemplate is created and configured to include the Authorization: Bearer <API_KEY> header in all requests. This setup ensures that every call to OpenAI’s API is authenticated without manually adding headers to each request.

@Configuration
public class OpenAIAPIConfiguration {

    @Value("${openai.api.key}")
     private String openaiApiKey;

    @Bean
    public RestTemplate template(){
        RestTemplate restTemplate=new RestTemplate();
        restTemplate.getInterceptors().add((request, body, execution) -> {
            request.getHeaders().add("Authorization", "Bearer " + openaiApiKey);
            return execution.execute(request, body);
        });
        return restTemplate;
    }
    
}

Required getters and setters for the request and response classes:

Based on the curl structure and response, we generated the corresponding request and response Java classes with appropriate getters and setters, keeping only the attributes needed to represent the request and response objects. These classes turn JSON data into objects we can use in code, and turn our code’s data back into JSON when interacting with the OpenAI API. We implemented a bot using the gpt-4o-mini model, integrated it with a REST controller, and handled authentication via the API key.

//Request
@Data
public class GenAiRequest {

    private String model;
    private List<GenAIMessage> messages;

    public List<GenAIMessage> getMessages() {
        return messages;
    }

    public GenAiRequest(String model, String prompt) {
        this.model = model;
        this.messages = new ArrayList<>();
        this.messages.add(new GenAIMessage("user",prompt));
    }
}

@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIMessage {

    private String role;
    private String content;   
    
    public String getContent() {
        return content;
    }
    public void setContent(String content) {
        this.content = content;
    }
}

//Response
@Data
@AllArgsConstructor
@NoArgsConstructor
public class GenAIResponse {

    private List<Choice> choices;

    public List<Choice> getChoices() {
        return choices;
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public static class Choice {

        private int index;
        private GenAIMessage message;
        public GenAIMessage getMessage() {
            return message;
        }
        public void setMessage(GenAIMessage message) {
            this.message = message;
        }

    }

}

 

Essential Configuration for OpenAI Integration in Spring Boot

To connect your Spring Boot application with OpenAI’s API, you need to define a few key properties in your application.properties or application.yml file:

  • server.port: Specifies the port on which your Spring Boot application will run. You can set it to any available port like 8080, 9090, etc. (The default port for a Spring Boot application is 8080)
  • openai.model: Defines the OpenAI model to be used. In this case, gpt-4o-mini is selected for lightweight and efficient responses.
  • openai.api.key: Your secret API key from OpenAI. This is used to authenticate requests. Make sure to keep it secure and never expose it publicly.
  • openai.api.url: The endpoint URL for OpenAI’s chat completion API. (This is where your application sends prompts and receives responses)
server.port=<add server port>
openai.model=gpt-4o-mini
openai.api.key=XXXXXXXXXXXXXXXXXXXXXXXXXXXX
openai.api.url=https://api.openai.com/v1/chat/completions

 

Postman Collection:

GET API: http://localhost:<port>/bot/chat?prompt=What is spring boot used for ?

Content-Type: application/json

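You can also hit the endpoint without Postman, for example with curl (the port below is a placeholder for whatever you configured in server.port):

curl "http://localhost:8080/bot/chat?prompt=What%20is%20Spring%20Boot%20used%20for"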

Usage of Spring Boot + OpenAI Integration

  • AI-Powered Chatbots: Build intelligent assistants for customer support, internal helpdesks, or onboarding systems.
  • Content Generation Tools: Automate blog writing, email drafting, product descriptions, or documentation, generate personalized content based on user input.
  • Code Assistance & Review: Create tools that help developers write, refactor, or review code using AI, Integrate with IDEs or CI/CD pipelines for smart suggestions.
  • Data Analysis & Insights: Use AI to interpret data, generate summaries, answer questions about datasets combine with Spring Boot APIs to serve insights to dashboards or reports.
  • Search Enhancement: Implement semantic search or question-answering systems over documents or databases, use embeddings and GPT to improve relevance and accuracy.
  • Learning & Training Platforms: Provide personalized tutoring, quizzes, and explanations using AI & adapt content based on user performance and feedback.
  • Email & Communication Automation: Draft, summarize, or translate emails and messages, integrate with enterprise communication tools.
  • Custom usages: In a business-to-business context, usage can be customized according to specific client requirements.
Mastering Modular Front-End Development with Individual AEM ClientLibs
https://blogs.perficient.com/2025/10/22/quit-bundling-all-your-code-together/ (Wed, 22 Oct 2025)

Are you still combining everything into a single clientlib-all for your entire AEM project? If that sounds like you, then you are probably dealing with heavy page loads, sluggish deployments, and tangled code that’s hard to manage.

Here is the fix: break up those ClientLibs!

By tapping into modern build tools like Webpack through the ui.frontend module, you can build individual, focused Client Libraries that really boost performance, make things more straightforward, and keep your code much easier to maintain.

Why You Really Need Individual ClientLibs

Ditching that one huge ClientLib is not just about keeping things neat; it gives you some solid technical wins.

1) Better Performance Through Smart Loading

When you use just one ClientLib, every bit of CSS and JavaScript gets loaded on every single page. But when you split things up into libraries that focus on specific needs (like clientlib-form or clientlib-carousel), you are only pulling in the code you need for each template or component. This significantly reduces the initial page load time for your visitors.

2) Adaptive Cache Management

When you tweak the CSS for just one component, only that small, specific ClientLib’s cache gets cleared out. Your large vendor ClientLib, which rarely changes, remains in the user’s browser cache, resulting in better caching for repeat visitors and reduced server workload.

3) Cleaner Code That’s Easier To Work With

When you use separate ClientLibs, you are basically forcing yourself to keep different parts of your code separate, which makes it way easier for new developers to figure out what’s going on:

  • Vendor and Third-Party Code: Gets its own dedicated library
  • Main Project Styles: Go in another library
  • Component-Specific Features: Each gets its own focused library

 

The Current Way of Doing Things: Webpack Plus Individual ClientLibs

Today’s AEM projects use the typical AEM Project Archetype setup, which keeps the source code separate from how things get deployed:

  Module        Role             Key Function
  ui.frontend   Source & Build   Contains all source files (JS/CSS/Less/Sass) and the Webpack configuration to bundle and optimize them.
  ui.apps       Deployment       Receives the final bundled assets from ui.frontend and deploys them into the JCR as ClientLibs.

Step 1: Organize Your Source Code (in the ui.frontend)

You’ll want to structure your source code in a way that makes sense, keeping it separate from your Webpack setup files.

/ui.frontend
    /src
        /components
            /common
                /card.css
                /card.js
                /index.js       <-- The Webpack Entry Point
            /vendor            
                /select2.css
                /select2.js

 

Why index.js is So Useful: Rather than letting AEM manually piece together files, we use one main index.js file as our single Webpack entry point. This file imports all the component files you need, and Webpack handles the bundling from there. A minimal sketch is shown below.

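A minimal sketch of the entry point, assuming the component files shown in Step 1 (actual file names will differ per project):

// ui.frontend/src/components/common/index.js
// Import each component's styles and behavior so Webpack bundles them together
import './card.css';
import './card.js';

// Vendor assets can be pulled in here too, or kept in a separate vendor entry
import '../vendor/select2.css';
import '../vendor/select2.js';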

Step 2: Configure Webpack Output & ClientLib Generation

Your Webpack setup points to this main index.js file. Once it’s done compiling, Webpack creates the final, compressed bundle files (like clientlib-common.css, clientlib-common.js) and puts them in a target folder usually called dist.

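A simplified webpack.config.js sketch for this setup; the entry name, output layout, and dist folder are assumptions, so adapt it to the configuration your archetype already ships with:

// ui.frontend/webpack.config.js (simplified sketch)
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  mode: 'production',
  // One entry per ClientLib bundle
  entry: {
    'clientlib-common': './src/components/common/index.js',
  },
  output: {
    // Emits dist/clientlib-common/clientlib-common.js
    path: path.resolve(__dirname, 'dist'),
    filename: '[name]/[name].js',
  },
  module: {
    rules: [
      {
        test: /\.css$/,
        // Extract CSS into its own file instead of injecting it through JS
        use: [MiniCssExtractPlugin.loader, 'css-loader'],
      },
    ],
  },
  plugins: [
    // Emits dist/clientlib-common/clientlib-common.css
    new MiniCssExtractPlugin({ filename: '[name]/[name].css' }),
  ],
};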

Step 3: Deploy the Bundle (The ui.apps ClientLib)

The last crucial step involves putting these bundles into the AEM ClientLib structure inside your ui.apps module.

This usually happens automatically through a Maven plugin.

Your ClientLib needs to have a unique category property; that is how you’ll reference it in your components.

Path in JCR (deployed through ui.apps)

Aem Module 1

/apps/my-project/clientlibs/clientlib-common
    /css
        clientlib-common.css     //The bundled Webpack output
    /js
        clientlib-common.js      //The bundled Webpack output
    /.content.xml           // <jcr:root jcr:primaryType="cq:ClientLibraryFolder" categories="[my-project.common]"/>
    /css.txt                //Lists the files in CSS folder
    /js.txt                 // Lists the files in JS folder
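In the standard archetype, this copy step is usually driven by the aem-clientlib-generator npm package running as part of the Maven build. A minimal, hypothetical clientlib.config.js for the bundle above might look like this (paths and names are assumptions for this example):

// ui.frontend/clientlib.config.js (illustrative sketch)
const path = require('path');

module.exports = {
  // Where the bundled Webpack output lives
  context: path.join(__dirname, 'dist'),
  // Where the generated ClientLibs land inside ui.apps
  clientLibRoot: path.join(__dirname, '..', 'ui.apps/src/main/content/jcr_root/apps/my-project/clientlibs'),
  libs: [
    {
      name: 'clientlib-common',
      categories: ['my-project.common'],
      allowProxy: true,
      assets: {
        js: ['clientlib-common/js/clientlib-common.js'],
        css: ['clientlib-common/css/clientlib-common.css']
      }
    }
  ]
};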

Step 4: Bundle Things Together with the Embed Property

While you can load a single clientlib-common, a better practice is to have a master ClientLib that loads everything the site needs. This library utilizes the powerful embed property to incorporate the contents of smaller, targeted libraries.

The Main Aggregator ClientLib (clientlib-site-all)

Siteall

The embed feature is essential here. It combines all your JS and CSS files into one request when the site runs, but your original ClientLibs stay organized separately in the JCR, which keeps things tidy.
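For reference, the aggregator’s .content.xml might look like the sketch below; the my-project.form and my-project.carousel categories are invented for this example, while my-project.common matches the earlier bundle:

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="cq:ClientLibraryFolder"
    categories="[my-project.site-all]"
    embed="[my-project.common,my-project.form,my-project.carousel]"
    allowProxy="{Boolean}true"/>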

Step 5: Add the Libraries to Your HTL

When it comes to your page component or template, you just need to reference that main, bundled ClientLib category using the regular Granite ClientLib template:

Htl
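If the screenshot does not come through, the usual pattern with the Granite clientlib helper looks roughly like this (the my-project.site-all category matches the aggregator above; in practice you would typically call clientlib.css in the head and clientlib.js before the closing body tag):

<sly data-sly-use.clientlib="/libs/granite/sightly/templates/clientlib.html"
     data-sly-call="${clientlib.all @ categories='my-project.site-all'}"></sly>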

By setting up separate, Webpack-built ClientLibs, you are building a solid, modular, and fast front-end setup. Your ui.frontend takes care of organizing and bundling everything, while your ui.apps module handles getting it all into the AEM JCR.

Do not keep wrestling with one big, unwieldy ClientLib; start using categories and embedding to break up your code correctly.

 

]]>
https://blogs.perficient.com/2025/10/22/quit-bundling-all-your-code-together/feed/ 0 387954
3 Digital Payment Strategies Shaping the Future of Financial Services https://blogs.perficient.com/2025/10/21/3-digital-payment-strategies-shaping-the-future-of-financial-services/ https://blogs.perficient.com/2025/10/21/3-digital-payment-strategies-shaping-the-future-of-financial-services/#respond Tue, 21 Oct 2025 15:10:58 +0000 https://blogs.perficient.com/?p=386962

Financial services leaders are turning our 2025 top digital payments trends into reality—from leveraging AI for smarter decision-making to embedding finance for seamless customer experiences. Insights from events like Fintech South and Money20/20 reinforce that leading firms are moving fast to turn these priorities into real-world strategies. We sat down with Amanda Estiverne, Director – Head of Payments, to explore how organizations are acting on these trends today, and what it means for the future of payments.

Key Trends Driving 2025 Digital Payment Strategies

AI-Driven Payment Innovations With Purpose

The next wave of artificial intelligence (AI) in payments isn’t just about efficiency. It’s about amplifying human creativity and problem‑solving to deliver hyper‑personalized, conversational payment experiences and smarter decisions while meeting rising expectations for safety and compliance.

“We’re living in a pivotal moment where artificial intelligence is no longer just about automating tasks — it’s about amplifying human creativity and problem-solving.”

At Fintech South, this theme came through loud and clear. Leaders explored how AI can empower us—not replace us—to achieve things never thought possible before. This aligns with our previous trend prediction that AI-driven payment innovations would dominate 2025, provided firms balance innovation with compliance and ethics.

Where firms are focusing:

  • Personalization at scale: Using GenAI to tailor offers, loyalty, and checkout flows without adding friction.
  • Conversational experiences: Voice and chat interfaces that are grounded in robust data privacy and make payments feel frictionless.
  • Responsible adoption: Governance, data minimization, and explainability baked into model life cycles, not bolted on later.

Success in Action: Intelligently Mining Complex Content With an LLM Assistant

Embedded Finance for Social Good and Customer Loyalty

Embedded payments are moving beyond convenience to purpose-driven experiences to meet people where they are and close real gaps. We’re seeing momentum in:

  • Earned wage access (EWA): Helping hourly and frontline workers improve resilience by accessing earned pay when it’s needed.
  • Frictionless giving: Removing steps from donation flows and matching programs so generosity fits naturally into digital journeys.

“What happens when we challenge employers, financial institutions, and fintech innovators — the true system builders — to see themselves not just as service providers, but as architects of opportunity, agents of equity, and accelerators of change?”

Strategic partnerships can move the needle when they pair impact with disciplined risk and compliance. This validates our earlier prediction that embedded finance would expand beyond retail into sectors like healthcare, philanthropy, and payroll.

Where firms are focusing:

  • Designing connected experiences that deepen trust and retention.
  • Partnering for scale and co-creating with employers, financial institutions, and mission-driven orgs to reach underserved populations.
  • Operational rigor to create clear controls for fraud, data sharing, and disclosures as embedded use cases expand.

Explore More: Build a Powerful Connected Products Strategy

Designing Payments for Fairness, Trust, and Compliance

Trust is becoming a design requirement, not a line item. Frameworks like “Fairness by Design” from Consumer Reports underscore principles including transparency, privacy, user-centricity, and financial well-being that are increasingly decisive for adoption and loyalty.

Building for inclusivity and clarity reduces drop-off and disputes while strengthening brand equity. This aligns with our February trend on navigating the regulatory landscape—where compliance and user experience converge.

Where firms are focusing:

  • Designing experiences with authentication, reconciliation, and fraud prevention baked in from the first interaction.
  • Leveraging advanced analytics and AI to strengthen compliance, minimize false positives, and accelerate dispute resolution.

Success In Action: Ensuring Interoperable, Compliant Real-Time Payments

How These Trends Are Shaping the Future of Payments

Payments are becoming faster, smarter, and more embedded—but also more complex. Based on our previous outlook and what we’re hearing now, here’s where firms are concentrating their investments:

  • AI with accountability: Scaling intelligent automation while embedding fairness and explainability.
  • Embedded finance with purpose: Turning support for financial wellbeing and social impact into competitive differentiators.
  • Real-time payments with resilience: Moving beyond speed to orchestration, fraud prevention, and liquidity optimization.

“When civic leaders, corporate visionaries, and mission-driven organizations work together, fintech becomes more than technology — it becomes a powerful force for equity, inclusion, and opportunity.”

As McKinsey notes, global payments are entering a “simpler interface, complex reality” era where user experience feels effortless, but the infrastructure behind it demands precision and trust.

You May Also Enjoy: Efi Pylarinou, Top Global Tech Thought Leader On FinTech

Design Responsibly and Innovate Inclusive Payment Experiences

We help payment and fintech firms innovate and boost market position with transformative digital experiences and efficient operations.

  • Business Transformation: Create a roadmap to innovate products, enhance experiences, and reduce transactional risk.
  • Modernization: Implement technology to improve payment processing, fraud management, and omnichannel experiences.
  • Data + Analytics: Proactively leverage integrated data and AI to optimize transactions, manage fraud, and personalize experiences.
  • Risk + Compliance: Enhance compliance and risk management to safeguard transactions and customer data.
  • Consumer Experience: Deliver convenient, seamless experiences with user-friendly secure payment solutions.

Our approach to designing and implementing AI and machine learning (ML) solutions promotes secure and responsible adoption and ensures demonstrated and sustainable business value.

Discover why we have been trusted by 25+ leading payments and card processing companies. Explore our financial services expertise and contact us to learn more.

]]>
https://blogs.perficient.com/2025/10/21/3-digital-payment-strategies-shaping-the-future-of-financial-services/feed/ 0 386962
Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 2 https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/ https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/#comments Fri, 03 Oct 2025 07:25:24 +0000 https://blogs.perficient.com/?p=387517

Introduction:

Custom code in Talend offers a powerful way to enhance batch processing efficiency by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files tailored to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Talend Components:

The key components for batch processing are listed below:

  • tDBConnection: Establishes and manages a database connection within a job, allowing a single connection to be configured once and reused throughout the Talend job.
  • tFileInputDelimited: For reading data from flat files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Data transformation allows you to map input data with output data and enables you to perform data filtering, complex data manipulation, typecasting, and multiple input source joins.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tPreJob, tPostJob: PreJob start the execution before the job & PostJob at the end of the job.
  • tDBOutput: Supports wide range of databases & used to write data to various databases.
  • tDBCommit: Commits the changes made to the connected database during a Talend job, ensuring the data changes are permanently recorded.
  • tDBClose: Explicitly closes a database connection that was opened by a tDBConnection component.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.
  • tDie: We can stop the job execution explicitly if it fails. In addition, we can create a customized warning message and exit code.

Workflow with example:

To process large volumes of data in Talend, we can implement batch processing to handle flat file data with minimal execution time. We could simply read the flat file data and insert it into a MySQL database table as the target without batch processing, but that data flow takes considerably longer to execute. If we use batch processing with custom code, the entire source file is written to the MySQL database table in batches of records in far less execution time.

Talend Job Design

Solution:

  • Establish the database connection at the start of the execution so that we can reuse.
  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and divide the result by the batch size. Round the result to a whole number; that value indicates the total number of batches (chunks) to process (see the expression sketch after this list).

    Calculate the batch size from total row count

  • Now use tFileInputDelimited component to read the source file content. In the tMap component, utilize the sequence Talend function to generate row numbers for your data mapping and transformation tasks. Then, load all of the data into the tHashOutput component, which stores the data into a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

Filter the dataset using tFilterRow

Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range will be 1 to 100.
    If the third iteration is in progress and the batch size is 100, the rowNo range will be 201 to 300.
    In general, the lower bound is ((currentIteration - 1) * batchSize) + 1 and the upper bound is currentIteration * batchSize. For iteration 3 with a batch size of 100, that gives (3 - 1) * 100 + 1 = 201 and 3 * 100 = 300, so the dataset range for the third iteration is 201 to 300.
  • Finally, extract the dataset range on the rowNo column and write the batch data to the MySQL database table using tDBOutput.
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformation, and offers unique match, first match, and all matches options for looking up data within the tMap component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
  • At the end of the job execution, we commit the database modifications and close the connection to release database resources.
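As a point of reference, the expressions behind these steps might look roughly like the sketch below. The component names (tFileRowCount_1, tLoop_1), the batchSize context variable, and the rowNo column are assumptions for this example; adjust them to match your own job.

// tJava: derive the number of batches from the source file row count (header excluded)
int totalRows = (Integer) globalMap.get("tFileRowCount_1_COUNT");
int batchSize = context.batchSize;                // assumes an Integer context variable (use Integer.parseInt if it is a String)
int batchCount = (int) Math.ceil((totalRows - 1) / (double) batchSize); // round up so a partial final batch is kept
globalMap.put("batchCount", batchCount);          // e.g. referenced as the "To" value of tLoop_1

// tFilterRow (advanced mode): keep only the rows that belong to the current tLoop iteration
input_row.rowNo >= ((Integer) globalMap.get("tLoop_1_CURRENT_ITERATION") - 1) * context.batchSize + 1
    && input_row.rowNo <= (Integer) globalMap.get("tLoop_1_CURRENT_ITERATION") * context.batchSize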

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With the batch processing, it can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 1

]]>
https://blogs.perficient.com/2025/10/03/transform-your-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-2/feed/ 3 387517
Transform Your Data Workflow: Custom Code for Efficient Batch Processing in Talend-Part 1 https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/ https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/#respond Fri, 03 Oct 2025 07:22:35 +0000 https://blogs.perficient.com/?p=387572

Introduction:

Custom code in Talend offers a powerful way to enhance batch processing efficiency by allowing developers to implement specialized logic that is not available through Talend’s standard components. This can involve data transformations, use-case-specific custom code, and integration with flat files tailored to specific project needs. By leveraging custom code, users can optimize performance, improve data quality, and streamline complex batch workflows within their Talend jobs.

Understand Batch Processing:

Batch processing is a method of running high-volume, repetitive data loads within Talend jobs. It allows users to process a large amount of data when computing resources are available, with little or no user interaction.

Through batch processing, users gather and retain data, subsequently processing it during a designated period referred to as a “batch window.” This method enhances efficiency by establishing processing priorities and executing data tasks in a timeframe that is optimal.

Here, the Talend job takes the total row count from the source file, loads the data from the flat file, processes it in batches (with input provided through a context variable), and then writes the data into smaller flat files. This implementation makes it possible to process enormous amounts of data more quickly and reliably than a single-pass implementation.

Batch processing is a method of executing a series of jobs sequentially without user interaction, typically used for handling large volumes of data efficiently. Talend, a prominent and extensively employed ETL (Extract, Transform, Load) tool, utilizes batch processing to facilitate the integration, transformation, and loading of data into data warehouses and various other target systems.

Talend Components:

The key components for batch processing are listed below:

  • tFileInputDelimited, tFileOutputDelimited: For reading & writing data from/to files.
  • tFileRowCount: Reads file row by row to calculate the number of rows.
  • tLoop: Executes a task automatically, based on a loop size.
  • tHashInput, tHashOutput: For high-speed data transfer and processing within a job. tHashOutput writes data to cache memory, while tHashInput reads from that cached data.
  • tFilterRow: For filtering rows from a dataset based on specified conditions.
  • tMap: Used for data transformation; it maps input data to output data and supports data filtering, complex data manipulation, typecasting, and joins across multiple input sources.
  • tJavaRow: It can be used as an intermediate component, and we are able to access the input flow and transform the data using custom Java code.
  • tJava: It has no input or output data flow & can be used independently to Integrate custom Java code.
  • tLogCatcher: It is used in error handling within Talend job for adding runtime logging information. It catches all the exceptions and warnings raised by tWarn and tDie components during Talend job execution.
  • tLogRow: It is employed in error handling to display data or keep track of processed data in the run console.

Workflow with example:

To process large volumes of data in Talend, we can implement batch processing to handle flat file data with minimal execution time. We could simply read the flat file data and write it out to another flat file as the target without batch processing, but that data flow takes considerably longer to execute. If we use batch processing with custom code, the entire source file is written as chunks of files at the target location in far less execution time.

Talend job design

Solution:

  • Read the number of rows in the source flat file using tFileRowCount component.
  • To determine the number of batches, subtract the header count from the total row count and divide the result by the batch size. Round the result to a whole number; that value indicates the total number of batches (chunks) to process.

    Calculate the batch size from total row count

  • Now use tFileInputDelimited component to read the source file content. In the tMap component, utilize the sequence Talend function to generate row numbers for your data mapping and transformation tasks. Then, load all of the data into the tHashOutput component, which stores the data into a cache.
  • Iterate the loop based on the calculated whole number using tLoop
  • Retrieve all the data from tHashInput component.
  • Filter the dataset retrieved from tHashInput component based on the rowNo column in the schema using tFilterRow

    Filter the dataset using tFilterRow

  • If the first iteration is in progress and the batch size is 100, the rowNo range will be 1 to 100.
    If the third iteration is in progress and the batch size is 100, the rowNo range will be 201 to 300.
    In general, the lower bound is ((currentIteration - 1) * batchSize) + 1 and the upper bound is currentIteration * batchSize. For iteration 3 with a batch size of 100, that gives (3 - 1) * 100 + 1 = 201 and 3 * 100 = 300, so the dataset range for the third iteration is 201 to 300.
  • Finally, extract the dataset range on the rowNo column and write it into a chunked output target file using tFileOutputDelimited (see the filename expression sketch after this list).
  • The system uses the tLogCatcher component for error management by capturing runtime logging details, including warning or exception messages, and employs tLogRow to display the information in the execution console.
  • Regarding performance tuning, the tMap component maps source data to output data, allows complex data transformation, and offers unique match, first match, and all matches options for looking up data within the tMap component.
  • The temporary data that the tHashInput & tHashOutput components store in cache memory enhances runtime performance.
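To write each batch to its own file, the tFileOutputDelimited "File Name" field can be driven by the current tLoop iteration. The expression below is only a sketch; the component name tLoop_1 and the outputDir context variable are assumptions for this example.

// tFileOutputDelimited "File Name" expression: one chunk file per tLoop iteration
context.outputDir + "/chunk_" + ((Integer) globalMap.get("tLoop_1_CURRENT_ITERATION")) + ".csv"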

 

Advantages of Batch Processing:

  • Batch processing can efficiently handle large datasets.
  • It takes minimal time to process the data even after data transformation.
  • By grouping records from a large dataset and processing them as a single unit, it can be highly beneficial for improving performance.
  • With the batch processing, it can easily scale to accommodate growing data volumes.
  • It is particularly useful for operations like generating reports, performing data integration, and executing complex transformations on large datasets.

For more details: Get-started-talend-open-studio-data-integration

Note: Efficient Batch Processing in Talend-Part 2

]]>
https://blogs.perficient.com/2025/10/03/transform-data-workflow-custom-code-for-efficient-batch-processing-in-talend-part-1-2/feed/ 0 387572
Why Laravel Nova Is My Go-To Admin Panel https://blogs.perficient.com/2025/09/29/why-laravel-nova-is-my-go-to-admin-panel/ https://blogs.perficient.com/2025/09/29/why-laravel-nova-is-my-go-to-admin-panel/#respond Mon, 29 Sep 2025 16:46:00 +0000 https://blogs.perficient.com/?p=387533

Introduction

When it comes to building web applications, the admin panel is often the unsung hero. It’s the control room where data is managed, users are monitored, and business logic comes to life. Over the years, I have worked with several admin solutions, from open-source dashboards to custom-built interfaces, but none have matched the elegance, power, and developer experience of Laravel Nova.

What Is Laravel Nova?

Laravel Nova is a beautifully crafted admin dashboard built by the creators of Laravel. It’s not just another CRUD generator; it’s a developer-focused tool that integrates seamlessly with Laravel’s ecosystem. Nova allows you to create, read, update, and delete resources, manage relationships, run custom actions, and more, all with minimal effort.

The Problem with Traditional Admin Panels 

Before Nova, managing backend operations felt like a chore. Most admin panels were either too rigid, too bloated, or too disconnected from the Laravel ecosystem. Custom solutions took time to build and maintain, and third-party packages often lacked polish or flexibility. What I wanted was a solution that:

  • Integrated seamlessly with Laravel 
  • Was easy to customize 
  • Looked professional out of the box 
  • Didn’t require reinventing the wheel 

What Makes Nova Stand Out 

  1. Resource-Driven Simplicity

Nova’s core concept is simple: define resources that map to your Eloquent models, and it generates a beautiful UI for managing them. No complex configuration files or clunky drag-and-drop builders; just clean, expressive PHP.

Text::make('Title')->sortable(),

BelongsTo::make('Author'),

With just a few lines, you get a fully functional CRUD interface. 
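For context, a minimal Nova resource might look roughly like the sketch below. The Post and Author names are assumptions for this example, and the base class and request type vary slightly between Nova versions:

<?php

namespace App\Nova;

use Laravel\Nova\Fields\ID;
use Laravel\Nova\Fields\Text;
use Laravel\Nova\Fields\BelongsTo;
use Laravel\Nova\Http\Requests\NovaRequest;

class Post extends Resource
{
    // The Eloquent model this resource maps to (assumes an App\Models\Post model exists).
    public static $model = \App\Models\Post::class;

    // Fields rendered on the index, detail, and form screens.
    public function fields(NovaRequest $request)
    {
        return [
            ID::make()->sortable(),
            Text::make('Title')->sortable(),
            BelongsTo::make('Author'),
        ];
    }
}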

  2. Custom Fields, Cards, and Tools

Nova is built with Vue.js under the hood, which means you can create custom components to extend its functionality. Whether it’s a dashboard card showing real-time metrics or a custom input field for color selection, Nova makes it easy to tailor the experience. 

  3. Filters, Lenses, and Actions

Need to filter data by status or date? Want to create a custom view for high-value customers? Nova’s filters and lenses make it simple. Batch operations like sending emails or updating records are handled through actions, with no extra packages required.

  4. Authorization That Just Works

Nova respects Laravel’s policy system, so you don’t have to duplicate access logic. If a user isn’t authorized to view or edit a resource, Nova handles it gracefully. 

  5. Polished UI and UX

Nova’s interface is clean, responsive, and intuitive. It feels like a premium product, and clients often comment on how professional the admin panel looks—even though it took me very little time to set up. 

Is It Worth the Price? 

Nova is a paid product, and that might be a deal-breaker for some. But for developers, the time saved, the quality of the UI, and the ease of integration make it well worth the investment. It’s not just a tool; it’s a productivity multiplier.

Nova vs. Alternatives 

I have also tried other admin panels like Voyager and Filament. While they have their strengths, Nova’s developer-first approach, extensibility, and tight Laravel integration make it my top choice. It’s not just about the features; it’s about how those features fit into my workflow.

Final Thoughts 

Laravel Nova isn’t just an admin panel; it’s a developer’s dream. It lets us focus on building features, not fiddling with backend interfaces. Whether we are launching a new product or managing an internal dashboard, Nova gives us the tools we need with the polish we want.

If you’re a Laravel developer looking for a powerful, elegant, and customizable admin solution, give Nova a try. It might just become your go-to too. 

]]>
https://blogs.perficient.com/2025/09/29/why-laravel-nova-is-my-go-to-admin-panel/feed/ 0 387533
Top 5 Drupal AI Modules to Transform Your Workflow https://blogs.perficient.com/2025/09/29/top-5-drupal-ai-modules-to-transform-your-workflow/ https://blogs.perficient.com/2025/09/29/top-5-drupal-ai-modules-to-transform-your-workflow/#respond Mon, 29 Sep 2025 14:58:30 +0000 https://blogs.perficient.com/?p=387495

The AI Revolution is in Drupal CMS 

The way we create, optimize, and deliver content has fundamentally changed. Artificial Intelligence is no longer a futuristic concept; it’s a practical, indispensable tool for content teams. For years, Drupal has been the gold standard for structured, enterprise-level content management. Now, with the rapid maturation of the community’s Artificial Intelligence Initiative, Drupal is emerging as the premier platform for an Intelligent CMS. 

This post is for every content editor, site builder, and digital marketer who spends too much time on repetitive tasks like writing alt text, crafting meta descriptions, or translating copy. We’re moving the AI power from external tools directly into your Drupal admin screen. 

We will explore five essential Drupal modules that leverage AI to supercharge your content workflow, making your team faster, your content better, and your website more effective. This is about making Drupal work smarter, not just harder. 

The collective effort to bring this intelligence to Drupal is being driven by the community, and you can see the foundational work, including the overview of many related projects, right here at the Drupal Artificial Intelligence Initiative. 

 

  1. AI CKEditor Integration: The Content Co-Pilot

This functionality is typically provided by a suite of modules, with the core framework being the AI (Artificial Intelligence) module and its submodules like AI CKEditor. It integrates large language models (LLMs) like those from OpenAI or Anthropic directly into your content editor. 

Role in the CMS 

This module places an AI assistant directly inside the CKEditor 5 toolbar, the primary rich-text editor in Drupal. It turns the editor from a passive text field into an active, helpful partner. It knows the context of your page and is ready to assist without ever requiring you to leave the edit screen. 

How It’s Useful 

  • For Content Editors: It eliminates the dreaded “blank page syndrome.” Highlight a bulleted list and ask the AI to “turn this into a formal paragraph” or “expand this summary into a 500-word article.” You can instantly check spelling and grammar, adjust the tone of voice (e.g., from professional to friendly), and summarize long blocks of text for teasers or email excerpts. It means spending less time writing the first draft and more time editing and refining the final, human-approved version. 
  • For Site Builders: It reduces the need for editors to jump between Drupal and external AI tools, streamlining the entire content creation workflow and keeping your team focused within the secure environment of the CMS. 

 

  2. AI Image Alt Text: The SEO Automator

AI Image Alt Text is a specialized module that performs one critical task exceptionally well: using computer vision to describe images for accessibility and SEO. 

Role in the CMS 

This module hooks into the Drupal Media Library workflow. The moment an editor uploads a new image, the module sends that image to a Vision AI service (like Google Vision or an equivalent LLM) for analysis. The AI identifies objects, actions, and scenes, and then generates a descriptive text which is automatically populated into the image’s Alternative Text (Alt Text) field. 

How It’s Useful 

  • For Accessibility: Alt text is crucial for WCAG compliance. Screen readers use this text to describe images to visually impaired users. This module ensures that every image, regardless of how busy the editor is, has a meaningful description, making your site more inclusive right from the start. 
  • For SEO & Editors: Alt text is a ranking signal for search engines. It also saves the editor the most tedious part of their job. Instead of manually typing a description like “Woman sitting at a desk typing on a laptop with a cup of coffee,” the AI provides a high-quality, descriptive draft instantly, which the editor can quickly approve or slightly refine. It’s a huge time-saver and compliance booster. 

 

  3. AI Translation: The Multilingual Enabler

This feature is often a submodule within the main AI (Artificial Intelligence) framework, sometimes leveraging a dedicated integration like the AI Translate submodule, or integrating with the Translation Management Tool (TMGMT). 

Role in the CMS 

Drupal is one of the world’s most powerful platforms for building multilingual websites. This module builds upon that strength by injecting AI as a Translation Provider. Instead of waiting for a human translator for the first pass, this module allows content to be translated into dozens of languages with the click of a button. 

How It’s Useful 

  • For Global Content Teams: Imagine launching a product page simultaneously across five markets. This tool performs the initial, high-quality, machine-generated translation and saves it as a draft in the corresponding language node. The local editor then only needs to perform post-editing (reviewing and culturally adapting the text), which is significantly faster and cheaper than translating from scratch. 
  • For Site Owners: It drastically cuts the time-to-market for multilingual content and ensures translation consistency across technical terms. It leverages the AI’s power for speed while retaining the essential human oversight for cultural accuracy. 

 

  4. AI Automators: The Smart Curator

AI Automators (a powerful submodule of the main AI project) allows you to set up rules that automatically populate or modify fields based on content entered in other fields. 

Role in the CMS 

This is where the magic of “smart” content happens. An Automator is a background worker that monitors when a piece of content is saved. You can configure it to perform chained actions using an LLM. For instance, when an editor publishes a new blog post: 

  1. Read the content of the Body field. 
  2. Use a prompt to generate five relevant keywords/topics. 
  3. Automatically populate the Taxonomy/Tags field with those terms. 
  4. Use another prompt to generate a concise post for X (formerly Twitter). 
  5. Populate a new Social Media Post field with that text. 

How It’s Useful 

  • For Content Strategists: It enforces content standards and completeness. Every piece of content is automatically tagged and optimized, reducing the chance of human error and improving content discoverability through precise categorization. It ensures your SEO and content strategy is executed flawlessly on every save. 
  • For Site Builders: It brings the power of Event-Condition-Action (ECA) workflows into the AI space. It’s a no-code way to build complex, intelligent workflows that ensure data integrity and maximize the usefulness of content metadata. 

 

  5. AI Agents: The Operational Assistant

AI Agents, typically used in conjunction with the main AI framework, is a powerful new tool that uses natural language to execute administrative and site-building tasks. 

Role in the CMS

An AI Agent is like a virtual assistant for your Drupal back-end. Instead of navigating through multiple complex configuration forms to, say, create a new field on a content type, you simply tell the Agent what you want it to do in plain English. The Agent interprets your request, translates it into the necessary Drupal API calls, and executes the changes. The module comes with various built-in agents (like a Field Type Agent or a Content Type Agent). 

How It’s Useful 

  • For Site Builders and Non-Technical Admins: This is a revolutionary step toward conversational configuration. You can issue a command like: “Please create a new Content Type called ‘Product Review’ and add a new text field named ‘Reviewer Name’.” The agent handles the creation process instantly. This dramatically reduces the learning curve and time needed for common site-building tasks. 
  • For Automation: Agents can be chained together or triggered by other systems to perform complex, multi-step actions on the CMS structure itself. Need to update the taxonomy on 50 terms? A dedicated agent can handle the large-scale configuration change based on a high-level instruction, making system maintenance far more efficient. It turns administrative management into a conversation. 

 

Conclusion:

The integration of AI into Drupal is one of the most exciting developments in the platform’s history. It is a powerful affirmation of Drupal’s strength as a structured content hub. These modules—the AI CKEditor, AI Image Alt Text, AI Translation, AI Automators, and now the transformative AI Agents—are not here to replace your team. They are here to empower them.

By automating the mundane, repetitive, and technical aspects of content management and even site configuration, these tools free up your content creators and site builders to focus on what humans do best: strategy, creativity, and high-level decision-making. The future of content management in Drupal is intelligent, efficient, and, most importantly, human-powered. It’s time to equip your team with these new essentials and watch your digital experiences flourish. 

]]>
https://blogs.perficient.com/2025/09/29/top-5-drupal-ai-modules-to-transform-your-workflow/feed/ 0 387495
Terraform Code Generator Using Ollama and CodeGemma https://blogs.perficient.com/2025/09/25/terraform-code-generator-using-ollama-and-codegemma/ https://blogs.perficient.com/2025/09/25/terraform-code-generator-using-ollama-and-codegemma/#comments Thu, 25 Sep 2025 10:34:37 +0000 https://blogs.perficient.com/?p=387185

In modern cloud infrastructure development, writing Terraform code manually can be time-consuming and error-prone—especially for teams that frequently deploy modular and scalable environments. There’s a growing need for tools that:

  • Allow natural language input to describe infrastructure requirements.
  • Automatically generate clean, modular Terraform code.
  • Integrate with cloud authentication mechanisms.
  • Save and organize code into execution-ready files.

This model bridges the gap between human-readable Infrastructure descriptions and machine-executable Terraform scripts, making infrastructure-as-code more accessible and efficient. To build this model, we utilize CodeGemma, a lightweight AI model optimized for coding tasks, which runs locally via Ollama.


In this blog, we explore how to build a Terraform code generator web app using:

  • Flask for the web interface
  • Ollama’s CodeGemma model for AI-powered code generation
  • Azure CLI authentication using service principal credentials
  • Modular Terraform file creation based on user queries

This tool empowers developers to describe infrastructure needs in natural language and receive clean, modular Terraform code ready for deployment.

Technologies Used

CodeGemma

CodeGemma is a family of lightweight, open-source models optimized for coding tasks. It supports code generation from natural language.

Running CodeGemma locally via Ollama means:

  • No cloud dependency: You don’t need to send data to external APIs.
  • Faster response times: Ideal for iterative development.
  • Privacy and control: Your infrastructure queries and generated code stay on your machine.
  • Offline capability: Ideal for use in restricted or secure environments.
  • Zero cost: Since the model runs locally, there’s no usage fee or subscription required—unlike cloud-based AI services.

Flask

We chose Flask as the web framework for this project because of its:

  • Simplicity and flexibility: Flask is a lightweight and easy-to-set-up framework, making it ideal for quick prototyping.

Initial Setup

  • Install Python.
    winget install Python.Python.3
  • Install Ollama, then pull and run the CodeGemma model locally.
    ollama pull codegemma:7b
    ollama run codegemma:7b
  • Install the Ollama Python library to use CodeGemma in your Python projects.
    pip install ollama
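Before wiring everything into Flask, it can help to confirm that the local model responds through the Python library. This is just a quick sanity check using the same generate() call the app below relies on:

# Quick check that the local CodeGemma model answers via the Ollama Python library
from ollama import generate

response = generate(model='codegemma:7b', prompt='Write a minimal Terraform azurerm provider block.')
print(response['response'])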

Folder Structure

Folder Structure

 

Code

from flask import Flask, jsonify, request, render_template_string
from ollama import generate
import subprocess
import re
import os

app = Flask(__name__)
# Azure credentials
CLIENT_ID = "Enter your credentials here."
CLIENT_SECRET = "Enter your credentials here."
TENANT_ID = "Enter your credentials here."

auth_status = {"status": "not_authenticated", "details": ""}
input_fields_html = ""
def authenticate_with_azure():
    try:
        result = subprocess.run(
            ["cmd.exe", "/c", "C:\\Program Files\\Microsoft SDKs\\Azure\\CLI2\\wbin\\az.cmd",
             "login", "--service-principal", "-u", CLIENT_ID, "-p", CLIENT_SECRET, "--tenant", TENANT_ID],
            capture_output=True, text=True, check=True
        )
        auth_status["status"] = "success"
        auth_status["details"] = result.stdout
    except subprocess.CalledProcessError as e:
        auth_status["status"] = "failed"
        auth_status["details"] = e.stderr
    except Exception as ex:
        auth_status["status"] = "terminated"
        auth_status["details"] = str(ex)

@app.route('/', methods=['GET', 'POST'])
def home():
    terraform_code = ""
    user_query = ""
    input_fields_html = ""

    if request.method == 'POST':
        user_query = request.form.get('query', '')

        base_prompt = (
            "Generate modular Terraform code using best practices. "
            "Create separate files for main.tf, vm.tf, vars.tf, terraform.tfvars, subnet.tf, kubernetes_cluster etc. "
            "Ensure the code is clean and execution-ready. "
            "Use markdown headers like ## Main.tf: followed by code blocks."
        )

        full_prompt = base_prompt + "\n" + user_query
        try:
            response_cleaned = generate(model='codegemma:7b', prompt=full_prompt)
            terraform_code = response_cleaned.get('response', '').strip()
        except Exception as e:
            terraform_code = f"# Error generating code: {str(e)}"

        # Prepend the azurerm provider block so the generated code is execution-ready
        provider_block = f"""
          provider "azurerm" {{
          features {{}}
          subscription_id = "Enter your credentials here."
          client_id       = "{CLIENT_ID}"
          client_secret   = "{CLIENT_SECRET}"
          tenant_id       = "{TENANT_ID}"
        }}"""
        terraform_code = provider_block + "\n\n" + terraform_code

        with open('main.tf', 'w', encoding='utf-8') as f:
            f.write(terraform_code)


        # Create output directory
        output_dir = r"C:\Users\riya.achkarpohre\Desktop\AI\test7\terraform_output"
        os.makedirs(output_dir, exist_ok=True)

        # Define output paths
        paths = {
            "main.tf": os.path.join(output_dir, "Main.tf"),
            "vm.tf": os.path.join(output_dir, "VM.tf"),
            "subnet.tf": os.path.join(output_dir, "Subnet.tf"),
            "vpc.tf": os.path.join(output_dir, "VPC.tf"),
            "vars.tf": os.path.join(output_dir, "Vars.tf"),
            "terraform.tfvars": os.path.join(output_dir, "Terraform.tfvars"),
            "kubernetes_cluster.tf": os.path.join(output_dir, "kubernetes_cluster.tf")
        }

        # Split response using markdown headers
        sections = re.split(r'##\s*(.*?)\.tf:\s*\n+```(?:terraform)?\n', terraform_code)

        # sections = ['', 'Main', '<code>', 'VM', '<code>', ...]
        for i in range(1, len(sections), 2):
            filename = sections[i].strip().lower() + '.tf'
            code_block = sections[i + 1].strip()

            # Remove closing backticks if present
            code_block = re.sub(r'```$', '', code_block)

            # Save to file if path is defined
            if filename in paths:
                with open(paths[filename], 'w', encoding='utf-8') as f:
                    f.write(code_block)
                    print(f"\n--- Written: {filename} ---")
                    print(code_block)
            else:
                print(f"\n--- Skipped unknown file: {filename} ---")

        return render_template_string(f"""
        <html>
        <head><title>Terraform Generator</title></head>
        <body>
            <form method="post">
                <center>
                    <label>Enter your query:</label><br>
                    <textarea name="query" rows="6" cols="80" placeholder="Describe your infrastructure requirement here..."></textarea><br><br>
                    <input type="submit" value="Generate Terraform">
                </center>
            </form>
            <hr>
            <h2>Generated Terraform Code:</h2>
            <pre>{terraform_code}</pre>
            <h2>Enter values for the required variables:</h2>
            <h2>Authentication Status:</h2>
            <pre>Status: {auth_status['status']}\n{auth_status['details']}</pre>
        </body>
        </html>
        """)

    # Initial GET request
    return render_template_string('''
    <html>
    <head><title>Terraform Generator</title></head>
    <body>
        <form method="post">
            <center>
                <label>Enter your query:</label><br>
                <textarea name="query" rows="6" cols="80" placeholder="Describe your infrastructure requirement here..."></textarea><br><br>
                <input type="submit" value="Generate Terraform">
            </center>
        </form>
    </body>
    </html>
    ''')

authenticate_with_azure()
@app.route('/authenticate', methods=['POST'])
def authenticate():
    authenticate_with_azure()
    return jsonify(auth_status)

if __name__ == '__main__':
    app.run(debug=True)

Open Visual Studio, create a new file named file.py, and paste the code into it. Then, open the terminal and run the script by typing:

python file.py

Flask Development Server

Out1

Code Structure Explanation

  • Azure Authentication
    • The app uses the Azure CLI (az.cmd) via Python’s subprocess.run() to authenticate with Azure using a service principal. This ensures secure access to Azure resources before generating Terraform code.
  • User Query Handling
    • When a user submits a query through the web form, it is captured using:
user_query = request.form.get('query', '')
  • Prompt Construction
    • The query is appended to a base prompt that instructs CodeGemma to generate modular Terraform code using best practices. This prompt includes instructions to split the code into files, such as main.tf, vm.tf, subnet.tf, etc.
  • Code Generation via CodeGemma
    • The prompt is sent to the CodeGemma:7b model using:
response_cleaned = generate(model='codegemma:7b', prompt=full_prompt)
  • Saving the Full Response
    • The entire generated Terraform code is first saved to a main.tf file as a backup.
  • Output Directory Setup
    • A specific output directory is created using os.makedirs() to store the split .tf files:
output_dir = r"C:\Users\riya.achkarpohre\Desktop\AI\test7\terraform_output"
  • File Path Mapping
    • A dictionary maps expected filenames (such as main.tf and vm.tf) to their respective output paths. This ensures each section of the generated code is saved correctly.
  • Code Splitting Logic
    • The response is split using a regex-based approach, based on markdown headers like ## main.tf: followed by Terraform code blocks. This helps isolate each module.
  • Conditional File Writing
    • For each split section, the code checks if the filename exists in the predefined path dictionary:
      • If defined, the code block is written to the corresponding file.
      • If not defined, the section is skipped and logged as  “unknown file”.
  • Web Output Rendering
    • The generated code and authentication status are displayed on the webpage using render_template_string().

Terminal

Term1

The Power of AI in Infrastructure Automation

This project demonstrates how combining AI models, such as CodeGemma, with simple tools like Flask and Terraform can revolutionize the way we approach cloud infrastructure provisioning. By allowing developers to describe their infrastructure in natural language and instantly receive clean, modular Terraform code, we eliminate the need for repetitive manual scripting and reduce the chances of human error.

Running CodeGemma locally via Ollama ensures:

  • Full control over data
  • Zero cost for code generation
  • Fast and private execution
  • Seamless integration with existing workflows

The use of Azure CLI authentication adds a layer of real-world applicability, making the generated code deployable in enterprise environments.

Whether you’re a cloud engineer, DevOps practitioner, or technical consultant, this tool empowers you to move faster, prototype smarter, and deploy infrastructure with confidence.

As AI continues to evolve, tools like this will become essential in bridging the gap between human intent and machine execution, making infrastructure-as-code not only powerful but also intuitive.

]]>
https://blogs.perficient.com/2025/09/25/terraform-code-generator-using-ollama-and-codegemma/feed/ 3 387185