CMS Articles / Blogs / Perficient
https://blogs.perficient.com/tag/cms/

Deconstructing the Request Lifecycle in Sitecore Headless – Part 2: SSG and ISR Modes in Next.js
https://blogs.perficient.com/2025/08/20/deconstructing-the-request-lifecycle-in-sitecore-headless-part-2-ssg-and-isr-modes-in-next-js/
Wed, 20 Aug 2025 07:43:59 +0000

In my previous post, we explored the request lifecycle in a Sitecore headless application using Next.js, focusing on how Server-Side Rendering (SSR) works in tandem with Sitecore’s Layout Service and the Next.js middleware layer. But that’s only one part of the story.

This follow-up post dives into Static Site Generation (SSG) and Incremental Static Regeneration (ISR) – two powerful rendering modes offered by Next.js that can significantly boost performance and scalability when used appropriately in headless Sitecore applications.

Why SSG and ISR Matter

In Sitecore XM Cloud-based headless implementations, choosing the right rendering strategy is crucial for balancing performance, scalability, and content freshness. Static Site Generation (SSG) pre-renders pages at build time, producing static HTML that can be instantly served via a CDN. This significantly reduces time-to-first-byte (TTFB), minimizes server load, and is ideal for stable content like landing pages, blogs, and listing pages.

Incremental Static Regeneration (ISR) builds on SSG by allowing pages to be regenerated in the background after deployment, based on a configurable revalidation interval. This means you can serve static content with the performance benefits of SSG, while still reflecting updates without triggering a full site rebuild.

These strategies are especially effective in Sitecore environments where:

  • Most pages are relatively static and don’t require real-time personalization.
  • Content updates are frequent but don’t demand immediate global propagation.
  • Selective regeneration is acceptable, enabling efficient publishing workflows.

For Sitecore headless implementations, understanding when and how to use these strategies is key to delivering scalable, performant experiences without compromising on content freshness.

SSG with Sitecore JSS: The Request Lifecycle

In a Static Site Generation (SSG) setup, the request lifecycle transitions from being runtime-driven (like SSR) to being build-time-driven. This fundamentally alters how Sitecore, Next.js, and the JSS application work together to produce HTML. Here’s how the lifecycle unfolds in the context of Sitecore JSS with Next.js:

1. Build-Time Route Generation with getStaticPaths

At build time, Next.js executes the getStaticPaths function to determine which routes (i.e., Sitecore content pages) should be statically pre-rendered. This typically involves calling Sitecore’s Sitemap Service or querying layout paths via GraphQL or REST.
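The shape of that build-time step can be sketched as follows. This is a hedged illustration: fetchSitemapPaths is a hypothetical helper standing in for a call to Sitecore’s Sitemap Service or a GraphQL site-structure query, and the sample routes are placeholder data.

```typescript
// Hedged sketch of build-time route generation for a catch-all JSS route.
// fetchSitemapPaths is hypothetical; a real implementation would query Sitecore.
type StaticPath = { params: { path: string[] } };

async function fetchSitemapPaths(): Promise<string[]> {
  // Placeholder data standing in for a Sitemap Service / GraphQL response.
  return ['/', '/about', '/blog/first-post'];
}

export async function getStaticPaths() {
  const routes = await fetchSitemapPaths();
  const paths: StaticPath[] = routes.map((route) => ({
    // Next.js catch-all routes expect each URL as an array of path segments.
    params: { path: route.split('/').filter(Boolean) },
  }));
  // 'blocking' lets routes not listed at build time render on first request.
  return { paths, fallback: 'blocking' };
}
```

With fallback: 'blocking', only the listed routes are pre-rendered; anything else is generated on demand and then cached.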

2. Layout Data Fetching with getStaticProps

For every path returned by getStaticPaths, Next.js runs getStaticProps to fetch the corresponding layout data from Sitecore, via either the Sitecore Layout Service REST endpoint or the Experience Edge GraphQL endpoint.

At this stage:

  • Sitecore’s middleware is not executed.
  • There is no personalization, since requests are not user-specific.
  • The component factory in the JSS app maps layout JSON to UI components and renders them to static HTML.

3. Static HTML Generation

Next.js compiles the entire page into an HTML file using:

  • The Layout JSON from Sitecore.
  • Mapped UI components from the JSS component factory.
  • Placeholder content populated during build.

This results in fully static HTML output that represents the Sitecore page as it existed at build time.

4. Deployment & Delivery via CDN

Once built, these static HTML files are deployed to a CDN or static hosting platform (e.g., Vercel, Netlify), enabling:

  • Sub-second load times as no runtime rendering is required.
  • Massively scalable delivery.

5. Runtime Request Handling

When a user requests a statically generated page:

  • CDN Cache Hit: The CDN serves the pre-built HTML directly from cache
  • No Server Processing: No server-side computation occurs
  • Client-Side Hydration: React hydrates the static HTML, making it interactive
  • Instant Load: Users experience near-instantaneous page loads

Incremental Static Regeneration (ISR): The Best of Both Worlds

While SSG provides excellent performance, it has a critical limitation: content becomes stale immediately after build. ISR addresses this by enabling selective page regeneration in the background, maintaining static performance while ensuring content freshness.

ISR Request Lifecycle in Sitecore JSS Applications

1. Initial Request (Cached Response)

When a user requests an ISR-enabled page:

import type { GetStaticProps } from 'next';

export const getStaticProps: GetStaticProps = async (context) => {
  // fetchLayoutData wraps the Layout Service / Experience Edge call
  const layoutData = await fetchLayoutData(context.params?.path);

  return {
    props: { layoutData },
    revalidate: 3600, // Regenerate at most once per hour
  };
};
  • Next.js checks if a static version exists and is within the revalidation window
  • If valid, the cached static HTML is served immediately
  • If the cache is stale (beyond revalidation time), Next.js triggers background regeneration
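The cache-validity decision in the bullets above boils down to a time comparison. This is a toy model of that check for illustration, not Next.js internals:

```typescript
// Toy model of the ISR revalidation-window check (illustrative only).
function isStale(generatedAtMs: number, revalidateSeconds: number, nowMs: number): boolean {
  // A page is stale once its age exceeds the configured revalidate interval.
  return nowMs - generatedAtMs > revalidateSeconds * 1000;
}

// Within the hour-long window the cached page is served as-is...
const fresh = isStale(0, 3600, 30 * 60 * 1000); // false
// ...after the window, the next request triggers background regeneration.
const stale = isStale(0, 3600, 2 * 3600 * 1000); // true
```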

2. Background Regeneration Process

When regeneration is triggered:

  1. Next.js makes a fresh API call to Sitecore’s Layout Service or GraphQL endpoint
  2. Sitecore resolves the current item, applies any layout changes, and returns updated JSON
  3. The JSS component factory processes the new layout data
  4. The newly rendered HTML replaces the cached version
  5. Updated content propagates across the CDN network

3. Subsequent Requests

After regeneration completes:

  • New requests serve the updated static content
  • The cycle repeats based on the revalidation interval
  • Users always receive static performance, even during regeneration

Best Practices for Sitecore SSG/ISR Implementation

When implementing SSG and ISR in Sitecore headless applications, align your rendering strategy with content characteristics:

  • Use SSG for truly static pages like landing pages.
  • Use ISR for semi-dynamic content such as blogs and product catalogs, with revalidation intervals matched to update frequency (for example, 5 minutes for news, 30 minutes for blogs, 1 hour for product pages).
  • Continue using SSR for personalized experiences.

Focus on selective pre-rendering: only build high-traffic, SEO-critical, and core user journey pages at build time, and use fallback strategies for less critical content.
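The interval guidance above can be centralized in a small lookup so each route’s getStaticProps returns a consistent revalidate value. The content-type names and the one-hour fallback are assumptions for this sketch:

```typescript
// Illustrative mapping of the revalidation intervals suggested above, in seconds.
// Content-type names are assumptions, not a Sitecore convention.
const revalidateByContentType: Record<string, number> = {
  news: 5 * 60,     // 5 minutes
  blog: 30 * 60,    // 30 minutes
  product: 60 * 60, // 1 hour
};

function revalidateFor(contentType: string, fallbackSeconds = 3600): number {
  // Unknown types fall back to a conservative default interval.
  return revalidateByContentType[contentType] ?? fallbackSeconds;
}
```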

Conclusion: Choosing the Right Strategy

The choice between SSR, SSG, and ISR isn’t binary – modern Sitecore headless applications often employ a hybrid approach:

  • SSG for truly static content that rarely changes
  • ISR for content that updates periodically but doesn’t require real-time freshness
  • SSR for personalized experiences and rapidly changing content

By understanding the request lifecycle for each rendering strategy, you can architect Sitecore headless solutions that deliver exceptional performance while maintaining content flexibility. The key is aligning your technical approach with your content strategy and user experience requirements. So, choose your rendering strategy wisely!

Deconstructing the Request Lifecycle in Sitecore Headless (with a JSS + Next.js Deep Dive)
https://blogs.perficient.com/2025/07/31/deconstructing-the-request-lifecycle-in-sitecore-headless-with-a-jss-next-js-deep-dive/
Thu, 31 Jul 2025 17:48:34 +0000

In the era of traditional Sitecore MVC, the rendering lifecycle was tightly coupled to the Sitecore server. HTML generation, content retrieval, and presentation logic were all orchestrated within a single monolithic application. With the advent of headless architectures built using Sitecore JSS and platforms like XM Cloud, this paradigm has significantly shifted. Rendering responsibilities now move to decoupled frontend applications, enabling greater flexibility, scalability, and performance.

The responsibility for rendering has been decoupled and offloaded to a dedicated front-end application (e.g., React, Next.js, Vue.js), transforming Sitecore into a highly optimized content and layout delivery platform via robust APIs. For developers building Sitecore headless applications, a profound understanding of how a request traverses from the browser, through the front-end rendering host, interacts with Sitecore, and ultimately returns a rendered page, is paramount. This intricate knowledge forms the bedrock for effective debugging and advanced performance optimization.

This blog post will meticulously break down:

  • The generalized request processing flow in Sitecore headless applications.
  • The specific instantiation of this flow within JSS applications built leveraging the Next.js framework.
  • Debugging tips.

Sitecore XM Cloud and other headless Sitecore setups embody the principle of separation of concerns, decoupling content management from presentation logic. Rather than Sitecore generating the final HTML markup, your front-end rendering application (React/Next.js) dynamically fetches content and layout data via API endpoints and orchestrates the rendering process, whether client-side or server-side. Comprehending this architectural decoupling is critical for engineering performant, scalable, flexible, and personalized digital experiences.

The General Request Flow in Sitecore Headless Architectures

Irrespective of the specific front-end rendering host, the foundational request processing flow in Sitecore headless applications remains consistent:

  1. Client Request Initiation: A user initiates a request by navigating to a specific URL (e.g., https://www.example.com/about) in their web browser. This request is directed towards your front-end rendering host.
  2. Front-end Rendering Host Interception: The front-end rendering host (e.g. a Next.js application deployed on Vercel, or Netlify) receives the incoming HTTP request.
  3. Data Fetching from Sitecore: The rendering host, acting as a data orchestrator, makes an API call to Sitecore to retrieve the necessary page layout and content data. This can occur via two primary mechanisms:
    • Sitecore Layout Service : A traditional RESTful endpoint that delivers a comprehensive JSON representation of the page’s layout, components, and associated field values. This service is part of the Sitecore Headless Services module.
    • Sitecore Experience Edge GraphQL API: A more flexible and performant GraphQL endpoint that allows for precise data querying. This is the preferred mechanism for XM Cloud-native applications, providing a single endpoint for diverse data retrieval. Critical parameters passed in this request typically include route (the requested URL path), sc_lang (the desired content language), sc_site (the target Sitecore site definition), and potentially additional context parameters for personalization or A/B testing.
  4. Sitecore Route and Context Resolution: Upon receiving the data request, the following server-side operations are performed:
    • Item Resolution: It resolves the incoming route parameter to a specific Sitecore content item within the content tree, based on defined route configurations (e.g., sitecore/content/MyTenant/MySite/About).
    • Context Establishment: It establishes the current request context, including the site, language, user session, and any personalized page variant.
    • Layout Computation: Based on the resolved item and evaluated personalization, Sitecore computes the final page layout, including the arrangement of renderings within placeholders and the specific data sources for each component.
  5. Sitecore Response Generation: A structured JSON payload is returned to the rendering host. This payload typically includes:
    • Layout Metadata: Information about the overall page structure, placeholder definitions, and associated rendering components.
    • Component Data: For each component on the page, its type (e.g., “Hero”, “RichText”), its associated data source item ID (if applicable), and all serialized field values (e.g., Title, Body, Image).
  6. Front-end Rendering: The rendering host receives the JSON payload and, using its component factory (a mapping between Sitecore component names and UI component implementations), dynamically constructs the HTML for the requested page.
    • Component Mapping: Each JSON-defined component type is mapped to its corresponding React/Next.js UI component.
    • Data Binding: The serialized field values from the JSON are passed as props to the UI components.
    • Placeholder Resolution: The rendering host iterates through the placeholder definitions in the JSON, rendering child components into their designated placeholder regions.
    • Client-side Hydration: For server-rendered applications (SSR/SSG), the initial HTML is sent to the browser, where React then “hydrates” it, attaching event listeners and making the page interactive.
    • Post-render Actions: Any client-side personalization or analytics integration (e.g., Sitecore Personalize Engage SDK) may occur after the initial page render.
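The data-fetching call in step 3 can be sketched as a request URL built from those context parameters. The /sitecore/api/layout/render/jss path and the sc_apikey query parameter are assumptions based on common JSS setups; confirm the exact contract for your Sitecore version.

```typescript
// Builds a Layout Service request URL from the context parameters described above.
// Endpoint path and API-key parameter are assumptions to verify per environment.
function layoutServiceUrl(host: string, route: string, site: string, lang: string, apiKey: string): string {
  const params = new URLSearchParams({
    item: route,       // the requested URL path
    sc_site: site,     // target Sitecore site definition
    sc_lang: lang,     // desired content language
    sc_apikey: apiKey, // SSC API key
  });
  return `${host}/sitecore/api/layout/render/jss?${params.toString()}`;
}

// Example: layoutServiceUrl('https://cm.example.com', '/about', 'MySite', 'en', '{KEY}')
```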

Key takeaway: In a headless setup, Sitecore acts as the intelligent provider of content and layout data via APIs, while the front-end application takes full ownership of rendering the final HTML and handling all user interface interactions.

Deep Dive: Request Lifecycle in JSS + Next.js Applications

The general headless flow finds its specific implementation within a JSS application leveraging the Next.js framework, benefiting from Next.js’s powerful data fetching and rendering capabilities. We’ll focus specifically on Server-Side Rendering (SSR) here, while a separate post will cover Static Site Generation (SSG) and Incremental Static Regeneration (ISR).

1. User Request Initiation

A user navigates to a specific route, such as /products, initiating an HTTP GET request directed to your deployed Next.js application, which acts as the unified rendering endpoint.

2. Next.js Middleware and Sitecore Add-on Integration (Edge-Based Execution)

If implemented, the middleware.ts file in your Next.js application executes at the Edge (close to the user) before the request even reaches your application’s pages. This provides an opportune moment for early request manipulation and context enrichment:

  • Authentication & Authorization: Redirecting unauthorized users or validating session tokens.
  • Request Rewrites & Redirects: URL transformations based on dynamic conditions.
  • Header Manipulation: Injecting custom headers or modifying existing ones.
  • Contextual Data Injection: Reading user-specific cookies, geolocation data, and potentially passing this context to downstream services via HTTP headers.

This middleware layer is where the Sitecore JSS Next.js add-ons for XM Cloud particularly shine, streamlining complex Sitecore-specific functionality.

2.1 Sitecore JSS Next.js Add-ons: Extending Middleware Capabilities

Sitecore provides specialized add-ons for JSS Next.js applications that are designed to integrate seamlessly with Next.js middleware, enhancing data fetching and other critical functionalities at the Edge. These add-ons abstract away much of the boilerplate code, allowing developers to focus on business logic.

Key add-ons relevant to the request lifecycle and that are compatible with XM Cloud are:

  • SXA (nextjs-sxa):
    • Includes example components and the setup for Headless SXA projects.
  • Next.js Multisite Add-on (nextjs-multisite):
    • Enables a single Next.js rendering host to serve multiple Sitecore sites.
    • Leverages middleware to resolve the correct Sitecore site (sc_site parameter) based on the incoming request’s hostname, path, or other routing rules. This ensures the correct site context is passed to the Layout Service or GraphQL calls.
    • Often uses a GraphQLSiteInfoService to fetch site definitions from Sitecore Experience Edge at build time or runtime.
  • Next.js Personalize Add-on (nextjs-personalize):
    • Integrates with Sitecore Personalize (formerly Boxever/CDP) for advanced client-side personalization and experimentation.
    • Its core component, the PersonalizeMiddleware, is designed to run at the Edge.
    • The PersonalizeMiddleware makes a call to Sitecore Experience Edge to fetch personalization information (e.g., page variants).
    • It then interacts with the Sitecore CDP endpoint using the request context to determine the appropriate page variant for the current visitor.
    • Crucially, if a personalized variant is identified, the middleware can perform a rewrite of the request path (e.g., to /_variantId_<variantId>/<original-path>). This personalized rewrite path is then read by the Next.js app to manipulate the layout and feed into client-side Page View events. This allows client-side logic to render the specific personalized content.
    • Also includes a Page Props Factory plugin to simplify data retrieval for personalized content.

These add-ons are included when you create a Next.js project with the JSS initializer script, and you can include multiple add-ons in a single application. In addition to the above, you also get the redirects.ts middleware plugin, which enables redirects defined in Sitecore.
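The variant rewrite performed by the PersonalizeMiddleware can be illustrated as a pure path transformation. The /_variantId_<variantId>/<original-path> format follows the description above; treat this as an illustrative sketch rather than the add-on’s actual implementation:

```typescript
// Sketch of the personalized rewrite path produced at the Edge (illustrative).
function personalizeRewritePath(variantId: string, originalPath: string): string {
  const normalized = originalPath.startsWith('/') ? originalPath : `/${originalPath}`;
  return `/_variantId_${variantId}${normalized}`;
}

// Downstream page logic can recover both parts again to pick the right layout.
function parseRewritePath(path: string): { variantId: string | null; path: string } {
  const match = path.match(/^\/_variantId_([^/]+)(\/.*)$/);
  return match ? { variantId: match[1], path: match[2] } : { variantId: null, path };
}
```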

3. Next.js Server-Side Rendering (SSR) via getServerSideProps

JSS Next.js apps commonly employ a catch-all route (e.g., pages/[[...path]].tsx) to dynamically handle arbitrary Sitecore content paths. The getServerSideProps function, executed on the rendering host server for each request, is the primary mechanism for fetching the Sitecore layout data.

While Sitecore add-ons in middleware can pre-fetch data, getServerSideProps remains a critical point, especially if you’re not fully relying on middleware for all data, or if you need to merge data from multiple sources. The layoutData fetched here will already reflect any server-side personalization applied by Sitecore based on context passed from middleware.
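A pared-down sketch of that catch-all route’s getServerSideProps is below. The Ctx type is a simplified stand-in for Next.js’s GetServerSidePropsContext, and fetchLayoutData is a hypothetical wrapper around the Layout Service call, returning placeholder data shaped loosely like a layout payload:

```typescript
// Simplified sketch of getServerSideProps for a pages/[[...path]].tsx route.
// Types and the fetchLayoutData helper are assumptions for illustration.
type Ctx = { params?: { path?: string[] }; locale?: string };

async function fetchLayoutData(route: string, language: string) {
  // Placeholder response loosely shaped like a Layout Service payload.
  return { sitecore: { context: { language }, route: { name: route } } };
}

export async function getServerSideProps(context: Ctx) {
  // Reassemble the catch-all segments into a Sitecore item path.
  const route = '/' + (context.params?.path ?? []).join('/');
  const layoutData = await fetchLayoutData(route, context.locale ?? 'en');

  return {
    props: { layoutData },
    // A real implementation would return notFound: true when Sitecore
    // resolves no route for the requested path.
  };
}
```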

4. Sitecore Layout Service / Experience Edge GraphQL Processing

Upon receiving the data fetch request from the Next.js application, Sitecore’s backend performs a series of crucial operations as mentioned earlier – resolves the route, evaluates the context (such as language, site, and device), and assembles the appropriate renderings based on the presentation details. It then serializes this information – comprising route-level fields, component content, placeholder hierarchy, and any server-side personalization – into a structured JSON or GraphQL response. This response is returned to the rendering host, enabling the front-end application to construct the final HTML output using the data provided by Sitecore.

5. Rendering with the JSS Component Factory

Upon receiving the layoutData JSON, the JSS Next.js application initiates the client-side (or server-side during SSR) rendering process using its component factory. This factory is a crucial mapping mechanism that links Sitecore component names (as defined in the componentName field in the Layout Service JSON) to their corresponding React UI component implementations.
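The factory itself is essentially a lookup from componentName to a UI implementation. In this minimal sketch the components are stubbed as string-returning functions so the mapping logic stands alone; the component names are illustrative:

```typescript
// Minimal sketch of a component factory: Sitecore componentName -> UI component.
// Components are stubbed as string-returning functions for illustration.
type UIComponent = () => string;

const Hero: UIComponent = () => '<section>Hero</section>';
const RichText: UIComponent = () => '<div>RichText</div>';

const componentFactory = new Map<string, UIComponent>([
  ['Hero', Hero],
  ['RichText', RichText],
]);

function resolveComponent(componentName: string): UIComponent | null {
  // Unmapped names typically surface as "missing component" placeholders in JSS.
  return componentFactory.get(componentName) ?? null;
}
```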

6. HTML Response to Browser

Next.js completes the server-side rendering process, transforming the React component tree into a fully formed HTML string. This HTML, along with any necessary CSS and JavaScript assets, is then sent as the HTTP response to the user’s browser. If personalization rules were applied by Sitecore, the returned HTML will reflect the specific component variants or content delivered for that particular user.

7. Client-Side Hydration

Once the browser receives the HTML, React takes over on the client-side, “hydrating” the static HTML by attaching event listeners and making the page interactive. This ensures a seamless transition from a server-rendered page to a fully client-side interactive single-page application (SPA).

Debugging Tips for Sitecore JSS Applications

When working with Sitecore JSS applications in headless setups, debugging becomes crucial when components fail to render as expected, personalization rules misfire, or data appears incorrect.

1. Enable Debug Logs in the JSS App

JSS uses a logging mechanism based on debug, an npm debugging module. The module provides a debug() function, which works like an enhanced version of console.log(). Unlike console.log, you don’t need to remove or comment out debug() statements in production – you can simply toggle them on or off using environment variables whenever needed.

To activate detailed logs for specific parts of your JSS app, set the DEBUG environment variable.

  1. To output all debug logs available, set the DEBUG environment variable to sitecore-jss:*. The asterisk (*) is used as a wildcard. DEBUG=sitecore-jss:*
  2. To filter and display log messages from specific categories, such as those related to the Layout Service, set the variable like so: DEBUG=sitecore-jss:layout
  3. To exclude logs of specific categories, use the - prefix: DEBUG=sitecore-jss:*,-sitecore-jss:layout
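The wildcard and exclusion rules above can be modeled with a small matcher. This is a toy reimplementation of the debug module’s namespace filtering, written for illustration only, not the module’s actual code:

```typescript
// Toy reimplementation of DEBUG-style namespace matching (wildcards and - exclusions).
function debugEnabled(namespace: string, debugEnv: string): boolean {
  // Each comma-separated pattern becomes a regex, with * as a wildcard.
  const toRegex = (p: string) => new RegExp('^' + p.split('*').join('.*') + '$');
  const patterns = debugEnv.split(',').map((p) => p.trim()).filter(Boolean);
  const excluded = patterns.filter((p) => p.startsWith('-')).map((p) => toRegex(p.slice(1)));
  const included = patterns.filter((p) => !p.startsWith('-')).map(toRegex);
  // Exclusions win over inclusions, matching the -prefix behavior described above.
  if (excluded.some((re) => re.test(namespace))) return false;
  return included.some((re) => re.test(namespace));
}
```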

Please refer to this document to get details of all the available debug logging.

2. Use Browser DevTools to Inspect Logs

If your app runs client-side, and the debug package is configured, JSS logs will appear in the browser console.

To enable this manually in the browser, set this in the browser console:

localStorage.debug = 'sitecore-jss:*';

Then refresh the page. You’ll start seeing logs for:

  • Layout Service requests
  • Component-level rendering
  • Data fetching and personalization events

3. Leveraging Server-Side Logging within Next.js

Next.js’s server-side data fetching functions (getServerSideProps, getStaticProps) provide excellent points for detailed logging. Add console.log statements within these functions to capture details of the request and its data. When deployed to platforms like Vercel or Netlify, these statements will appear in your serverless function logs (e.g., Vercel Function Logs).

Additionally, when deploying your Sitecore headless application on Vercel, you can leverage Vercel’s built-in request logging and observability features. These tools allow you to track incoming requests, inspect headers, view response times, and monitor serverless function executions. This visibility can be especially helpful when debugging issues related to routing, personalization, or data fetching from the Layout Service or other backend APIs.

Wrapping It All Up: Why This Matters

Understanding how requests are processed in Sitecore Headless applications – especially when using JSS with Next.js – gives developers a strong foundation for building high-performing and maintainable solutions. By grasping the complete request lifecycle, from incoming requests to Layout Service responses and component rendering, you gain the clarity needed to architect more efficient and scalable applications. Coupled with effective debugging techniques and observability tools, this knowledge enables you to identify bottlenecks, troubleshoot issues faster, and deliver seamless user experiences. With Sitecore’s architecture already embracing composable and headless paradigms, understanding these fundamentals is essential for developers looking to build modern, future-ready digital experiences.

Perficient Nagpur Celebrates Contentstack Implementation Certification Success!
https://blogs.perficient.com/2025/07/11/perficient-nagpur-celebrates-contentstack-implementation-certification-success/
Fri, 11 Jul 2025 07:26:14 +0000

At Perficient, we believe that continuous learning is essential for excellence. This commitment drives us to evolve and master the latest technologies, ensuring the best possible delivery for our clients. This belief fuels our team’s pursuit of mastering cutting-edge technology.

On that note, we’re incredibly proud to announce a significant achievement for our Nagpur team! Six dedicated team members have successfully completed their Contentstack Implementation training and certification, equipping them with the expertise to deliver advanced headless CMS solutions to our clients.

A huge congratulations to the following achievers for their hard work and dedication in earning this significant certification! Their commitment to continuous learning and mastering new technologies is a reflection of the strong talent pool we have.

  1. Mahima Patel
  2. Sandeep Reddy Duddeda
  3. Aditi Paliwal
  4. Ashish Chinchkhede
  5. Akshay Gawande
  6. Vikrant Punwatkar

Our newly certified Contentstack implementers have undergone extensive training. They gained solid knowledge in content modeling, API integration, workflow automation, and best practices for creating scalable and efficient digital experiences with Contentstack. This expertise allows us to better assist our clients in maximizing their digital presence.

Why is this a big deal for Perficient and our clients?

In today’s fast-moving digital world, businesses really need to be agile, flexible, and able to deliver content smoothly across a growing number of channels. That’s exactly where Contentstack stands out as a top-notch headless CMS, and we’re genuinely thrilled to have our team leading the charge in this game-changing technology.

We’re looking forward to the exciting opportunities this enhanced capability offers our clients, and we are eager to leverage Contentstack to create even more innovative, impactful, and future-ready digital solutions.

 

Common Errors When Using GraphQL with Optimizely
https://blogs.perficient.com/2025/05/05/common-errors-when-using-graphql-with-optimizely/
Mon, 05 May 2025 17:00:55 +0000

What is GraphQL?

GraphQL is a powerful query language for APIs that allows clients to request only the data they need. Optimizely leverages GraphQL to serve content to your platform-agnostic presentation layer. This approach to headless architecture with Optimizely CMS is gaining traction in the space, and developers often encounter new challenges when transitioning from the more common MVC approach.

In this blog post, we will explore some common errors you’ll encounter and how to troubleshoot them effectively.

Common Errors

1. Schema Mismatches

Description

Some of the most frequent issues arise from mismatches between the GraphQL schema and the content models in Optimizely. This can occur from mistyping fields in your queries, not synchronizing content in the CMS, or using the wrong authentication key.

Example Error

{
  "errors": [
    {
      "message": "Field \"Author\" is not defined by type \"BlogPage\".",
      "locations": [
        {
          "line": 2,
          "column": 28
        }
      ]
    }
  ]
}

Solution

  • Double check your query for any mismatches between field and type names
    • Case-sensitivity is enforced on Types/Properties
  • Validate that the API Key in your GraphQL Query matches the API Key in the CMS environment you’ve updated
  • Ensure that your GraphQL schema is up-to-date with the latest data model changes in Optimizely.
    • If you are running the CMS with the same Graph API Keys, check the GraphQL Explorer tab and validate that your type shows in the listing
  • Run the ‘Optimizely Graph content synchronization job’ from the CMS Scheduled Jobs page.
    • After you see the Job Status change from ‘Starting execution of ContentTypeIndexingJob’ to ‘Starting execution of ContentIndexingJob’ you can stop the job and re-run your query.
  • Reset the Account
    • If all else fails you may want to try to reset your GraphQL account to clear the indices. (/EPiServer/ContentGraph/GraphQLAdmin)
    • If you are sharing the key with other developers the schema can become mismatched when making local changes and synchronizing your changes to the same index.

2. Maximum Depth

Description

When querying nested content, you may see an empty Content object in the response rather than your typed content.

Example Error

In this scenario, we are trying to query an accordion set block which has multiple levels of nested content areas.

Query

query MyQuery {
  Accordion {
    items {
      PanelArea {
        ContentLink {
          Expanded {
            ... on AccordionPanel {
              PanelContent {
                ContentLink {
                  Expanded {
                    ... on CardGrid {
                      CardArea {
                        ContentLink {
                          Expanded {
                            ... on Card {
                              CtaArea {
                                ContentLink {
                                  Expanded {
                                    __typename
                                    ... on Button {
                                      __typename
                                    }
                                  }
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
Response

{
  "data": {
    "Accordion": {
      "items": [
        {
          "PanelArea": [
            {
              "ContentLink": {
                "Expanded": {
                  "PanelContent": [
                    {
                      "ContentLink": {
                        "Expanded": {
                          "CardGrid": [
                            {
                              "ContentLink": {
                                "Expanded": {
                                  "CtaArea": [
                                    {
                                      "ContentLink": {
                                        "Expanded": {
                                          "__typename": "Content"
                                        }
       ...
}

Solution

  • Configure GraphQL to use higher maximum depth in appsettings
    • The default level of nesting content is 3, but that can be modified in Startup.cs:
      services.AddContentGraph(options => { options.ExpandLevel.Default = options.ExpandLevel.ContentArea = 5; });
    • Note that increasing this will increase the document size and make the synchronization job much slower depending on the amount of content and level of nesting in your site.
  • Break-up requests into multiple queries.
    • Instead of expanding the inline fragment (… on Block) instead get the GuidValue of the ContentModelReference and use subsequent queries to get deeply nested content.
    • Consider making this extra request asynchronously on the client-side to minimize performance impact.
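The break-up approach can be sketched as two requests: the first query asks only for the nested item’s GuidValue, and a shallow follow-up query fetches that item directly. The query shapes and the postGraphQL helper are assumptions for this sketch, not the exact Optimizely Graph contract:

```typescript
// Hypothetical sketch of splitting a deeply nested query into two requests.
// postGraphQL stands in for your GraphQL client; query shapes are illustrative.
async function postGraphQL(query: string): Promise<unknown> {
  // Placeholder; a real client would POST to the Optimizely Graph endpoint.
  return {};
}

function nestedItemQuery(guid: string): string {
  // Second, shallow query that targets the nested block by its GuidValue
  // instead of expanding it inline in the parent query.
  return `query { Card(where: { ContentLink: { GuidValue: { eq: "${guid}" } } }) { items { __typename } } }`;
}

async function fetchCardByGuid(guid: string) {
  return postGraphQL(nestedItemQuery(guid));
}
```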

3. Authentication Errors

Description

There are a few different scenarios where you can get a 401 Authentication Error response on your GraphQL query.

{
  "code": "AUTHENTICATION_ERROR",
  "status": 401,
  "details": {
    "correlationId": "1234657890"
  }
}

Solution

  • Check your authentication tokens and ensure they are valid.
  • If you are querying draft content, you need to configure and enable preview tokens (see the Optimizely Graph documentation).
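As a sanity check, issuing a minimal raw request with the key attached is the quickest way to isolate token problems from query problems. The endpoint and auth query parameter below reflect common Optimizely Graph usage but are assumptions — confirm them against your environment:

```http
POST https://cg.optimizely.com/content/v2?auth=<your-single-key> HTTP/1.1
Content-Type: application/json

{ "query": "{ Content { total } }" }
```

If this minimal query still returns a 401 with a known-good key, the key itself is usually the issue — wrong environment, revoked, or a draft-content query being made with the public key instead of a preview token.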

4. Unsynchronized Content

Description

When making updates to content in the CMS, you will occasionally run into issues where you don’t see the updated content on the page or in the graph response.

Solution

  • Confirm that Content has been synchronized
    • In the CMS, you can determine whether content has been synchronized by the checkmark icon on the Publish Options ‘Synchronize with Optimizely Graph’ button
      Optimizely Graph Publish Options
    • If the ‘Synchronize with Optimizely Graph’ button is not triggering the content to be synced, check whether either of the Optimizely Graph synchronization jobs is in progress. While they are running, manually synced content is delayed until the job completes.
  • Validate that your CMS Graph API Key matches the API Key in your front-end/graph query
]]>
https://blogs.perficient.com/2025/05/05/common-errors-when-using-graphql-with-optimizely/feed/ 0 380453
Personalized Optimizely CMS Website Search Experiences Azure AI Search & Personalizer https://blogs.perficient.com/2025/04/10/personalized-optimizely-cms-website-search-experiences-azure-ai-search-personalizer/ https://blogs.perficient.com/2025/04/10/personalized-optimizely-cms-website-search-experiences-azure-ai-search-personalizer/#respond Thu, 10 Apr 2025 23:00:48 +0000 https://blogs.perficient.com/?p=379901

In the last blog, we discussed integrating the Optimizely CMS website with Azure AI Search. Now let’s take on a more advanced topic: serving a personalized experience with Azure AI Search and Azure Personalizer. Together, they enable you to serve dynamic, customized content and search results based on user behavior, preferences, and context.

What is Azure Personalizer?

Azure Personalizer is a Cognitive Service for real-time ranking using reinforcement learning. It gives you the ability to serve the content or experiences most relevant to a user — informed by past behavior and current context.

Benefits of Azure Personalizer:

  • It learns and improves as people engage with it.
  • Especially helpful for ranking search results.
  • Can personalize calls to action, featured articles, or products.

How It Works with Azure AI Search and Optimizely

  1. The user performs a search on your Optimizely site.
  2. Azure AI Search returns a list of matching documents.
  3. These documents are sent to Azure Personalizer as “rankable actions.”
  4. Personalizer ranks the results using the context of the user.
  5. Your app serves the personalized results, and the user’s feedback helps Personalizer learn and evolve further.

Set Up Azure Personalizer

  • Navigate to the Azure Portal → create a Personalizer resource.
  • Save your endpoint and API key.
  • Specify the content that you want to be ranked (i.e., search results).

Integration Code

Model for Rankable Action

public class RankableDocument
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Summary { get; set; }
    public string Category { get; set; }
}

Send Info to Personalizer with Context:

private object GetUserContext(HttpRequestBase request)
{
    return new
    {
        timeOfDay = DateTime.Now.Hour,
        device = request.Browser.IsMobileDevice ? "mobile" : "desktop",
        userAgent = request.UserAgent,
        language = request.UserLanguages?.FirstOrDefault() ?? "en"
    };
}
public async Task<List<RankableDocument>> GetPersonalizedResultsAsync(List<RankableDocument> documents, string userId)
{
    var contextFeatures = new[] { GetUserContext(Request) };

    var actions = documents.Select(doc => new
    {
        id = doc.Id,
        features = new[]
        {
            new { category = doc.Category },
            new { title = doc.Title }
        }
    });

    _eventId = Guid.NewGuid().ToString();

    var request = new
    {
        contextFeatures = contextFeatures,
        actions = actions,
        excludedActions = new string[] {},
        eventId = _eventId,
        deferActivation = false
    };

    var client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "--YOUR API KEY ---");
    var response = await client.PostAsync("--Endpoint--/personalizer/v1.0/rank",
        new StringContent(JsonSerializer.Serialize(request), Encoding.UTF8, "application/json"));

    var result = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    var topActionId = result.RootElement.GetProperty("rewardActionId").GetString();

    return documents.OrderByDescending(d => d.Id == topActionId).ToList();
}

Now let’s take the search page controller and view from our previous example and extend them.

Search Controller

public class AzureSearchPageController : PageController<AzureSearchPage>
{
    private static string _eventId;

    public async Task<ActionResult> Index(AzureSearchPage currentPage, string q = "")
    {
        var results = new List<RankableDocument>();

        if (!string.IsNullOrEmpty(q))
        {
            var url = $"https://<search-service>.search.windows.net/indexes/<index-name>/docs?api-version=2021-04-30-Preview&search={q}";
            using var client = new HttpClient();
            client.DefaultRequestHeaders.Add("api-key", "<your-query-key>");
            var response = await client.GetStringAsync(url);

            var doc = JsonDocument.Parse(response);
            results = doc.RootElement.GetProperty("value")
                .EnumerateArray()
                .Select(x => new RankableDocument
                {
                    Id = x.GetProperty("id").GetString(),
                    Title = x.GetProperty("name").GetString(),
                    Category = x.GetProperty("type").GetString(),
                    Summary = x.GetProperty("content").GetString()
                }).ToList();

            results = await GetPersonalizedResultsAsync(results, "user123");
        }

        ViewBag.Results = results;
        ViewBag.Query = q;
        ViewBag.EventId = _eventId;
        return View(currentPage);
    }

    [HttpPost]
    public async Task<ActionResult> Reward(string eventId, double rewardScore)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-api-key>");

        var rewardUrl = $"<your-endpoint>/personalizer/v1.0/events/{eventId}/reward";
        var result = await client.PostAsync(rewardUrl, new StringContent(rewardScore.ToString(), Encoding.UTF8, "application/json"));

        return Json(new { success = result.IsSuccessStatusCode });
    }
}

Search Page View

@model AzureSearchPage
<h1>Personalized Search Results</h1>
<form method="get">
    <input type="text" name="q" value="@ViewBag.Query" placeholder="Search..." />
    <button type="submit">Search</button>
</form>

<ul>
@foreach (var result in ViewBag.Results as List<RankableDocument>)
{
    <li>
        <h4>@result.Title</h4>
        <p>@result.Summary</p>
        <button onclick="sendReward('@ViewBag.EventId', 1.0)">Like</button>
        <button onclick="sendReward('@ViewBag.EventId', 0.0)">Not Relevant</button>
    </li>
}
</ul>
<script>
function sendReward(eventId, score) {
    fetch('/AzureSearchPage/Reward', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ eventId: eventId, rewardScore: score })
    }).then(r => {
        if (r.ok) alert("Thanks! Your feedback was recorded.");
    });
}
</script>

With Azure AI Search delivering relevant results and Azure Personalizer re-ranking them based on real-time context, your Optimizely site becomes an intelligent experience engine.

This blog has also been published here.

]]>
https://blogs.perficient.com/2025/04/10/personalized-optimizely-cms-website-search-experiences-azure-ai-search-personalizer/feed/ 0 379901
Integrating Optimizely CMS with Azure AI Search – A Game-Changer for Site Search https://blogs.perficient.com/2025/04/09/integrating-optimizely-cms-with-azure-ai-search-a-game-changer-for-site-search/ https://blogs.perficient.com/2025/04/09/integrating-optimizely-cms-with-azure-ai-search-a-game-changer-for-site-search/#respond Wed, 09 Apr 2025 22:21:09 +0000 https://blogs.perficient.com/?p=379830

Want to elevate your Optimizely PaaS CMS site’s search capabilities? Azure AI Search could be just the tool you need! In this blog, I’ll discuss how to connect your CMS with Microsoft’s advanced AI-driven search platform to create fast, smart search experiences that surpass regular keyword searches.

Optimizely Azure AI Search

What is Azure AI Search?

Azure AI Search is Microsoft’s cloud-based, AI-powered search service. It enables you to index, search, and analyze large amounts of content using full-text search, faceted navigation, and machine-learning features (such as language understanding and semantic search).

Why it’s great

  • Super fast and scalable search experiences.
  • Built-in AI for enhanced relevance.
  • Smooth integration with other Azure services.

In short: it’s smart search made user-friendly.

Advantages of Integrating with Optimizely CMS

Before we get into the benefits, let’s take a moment to consider how Azure AI Search compares to Optimizely’s native search functionalities. Optimizely Search (which relies on Lucene or Find/Search & Navigation) works well for straightforward keyword searches and basic filters, and it’s closely tied to the CMS. However, it doesn’t offer the advanced AI features, scalability, or flexibility that Azure provides right off the bat. Azure AI Search enriches the search experience with functionalities like semantic search, cognitive enhancements, and external data indexing, making it perfect for enterprise-level sites with intricate search requirements.

Here’s why merging these two solutions is beneficial:

  • Improved search experiences with AI-based relevance.
  • Scalable and dependable – allow Azure to manage the heavy lifting.
  • Customized content indexing from your CMS using APIs or jobs.
  • Advanced options such as filtering, faceting, auto-complete, and more.

Get Started with Azure AI Search

To set up Azure AI Search, just follow these steps:

  1. Log in to the Azure Portal and look for AI Search.
  2. Click ‘Create’ to configure the following:
    • Name
    • Resource Group
    • Pricing Tier (including a free tier!)
    • Region

Once created, make sure to note down the Search Service Name and Admin API Key – you’ll need these to send and retrieve documents.
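If you prefer scripting over the portal, the same resource can be provisioned with the Azure CLI. A sketch using placeholder names (the service name, resource group, and region are assumptions, and an authenticated `az login` session is required):

```shell
# Create the search service on the free tier
az search service create \
  --name my-search-service \
  --resource-group my-resource-group \
  --location eastus \
  --sku free

# Retrieve the admin API key needed for indexing
az search admin-key show \
  --service-name my-search-service \
  --resource-group my-resource-group
```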

Custom Scheduled Job to Sync Updated Content with Azure AI Search Using ServiceAPI

By utilizing the Optimizely ServiceAPI, we can efficiently fetch updated content and synchronize it with Azure AI Search. This process avoids re-indexing the entire site, which helps boost performance.

[ScheduledPlugIn(DisplayName = "Sync Updated Content to Azure Search")]
public class AzureSearchJob : ScheduledJobBase
{
    private readonly HttpClient _httpClient;
    private readonly string _serviceApiBaseUrl = "https://yourwebsite.com/episerverapi/content/";

    public AzureSearchJob()
    {
        _httpClient = new HttpClient();
        IsStoppable = true;
    }

    public override string Execute()
    {
        // Step 1: Get content updated in the last 24 hours
        var yesterday = DateTime.UtcNow.AddDays(-1).ToString("o");
        var contentApiUrl = $"{_serviceApiBaseUrl}?updatedAfter={Uri.EscapeDataString(yesterday)}";

        var response = _httpClient.GetAsync(contentApiUrl).Result;
        if (!response.IsSuccessStatusCode)
            return "Failed to fetch updated content from ServiceAPI.";

        var contentJson = response.Content.ReadAsStringAsync().Result;
        var documents = JsonSerializer.Deserialize<JsonElement>(contentJson).EnumerateArray()
            .Select(content => new Dictionary<string, object>
            {
                ["id"] = content.GetProperty("ContentGuid").ToString(),
                ["name"] = content.GetProperty("Name").GetString(),
                ["content"] = content.GetProperty("ContentLink").GetRawText(),
                ["type"] = content.GetProperty("ContentTypeName").GetString()
            }).ToList();

        // Step 2: Push to Azure AI Search
        var json = JsonSerializer.Serialize(new { value = documents });
        var request = new HttpRequestMessage(HttpMethod.Post, "https://servicename.search.windows.net/indexes/<index-name>/docs/index?api-version=2021-04-30-Preview")
        {
            Content = new StringContent(json, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("api-key", "<your-admin-key>");

        var result = _httpClient.SendAsync(request).Result;
        return result.IsSuccessStatusCode ? "Success" : "Failed to index in Azure Search.";
    }
}

You can filter and transform the ServiceAPI response further to match your index schema.
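For reference, the job above pushes documents with id, name, content, and type fields, so the target index needs a matching definition. A minimal sketch (the attribute choices are assumptions) that could be sent via `PUT https://<search-service>.search.windows.net/indexes/<index-name>?api-version=2021-04-30-Preview`:

```json
{
  "name": "<index-name>",
  "fields": [
    { "name": "id",      "type": "Edm.String", "key": true,  "filterable": true },
    { "name": "name",    "type": "Edm.String", "searchable": true },
    { "name": "content", "type": "Edm.String", "searchable": true },
    { "name": "type",    "type": "Edm.String", "filterable": true, "facetable": true }
  ]
}
```

Because the job reuses the ContentGuid as the key, re-runs are idempotent: updated pages overwrite their existing documents rather than creating duplicates.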

Custom Page Type and Controller/View to Query Azure Search

Create a new page type to serve as a Search Results page.

Search Page Type

[ContentType(DisplayName = "Search Results Page", GUID = "3C918F3E-D82B-480B-9FD8-A3A1DA3ECB1B", Description = "Search using Azure Search")]
public class AzureSearchPage : PageData
{
    [Display(Name = "Search Placeholder")]
    public virtual string PlaceholderText { get; set; }
}

Page Controller

public class AzureSearchPageController : PageController<AzureSearchPage>
{
    public ActionResult Index(AzureSearchPage currentPage, string q = "")
    {
        var results = new List<string>();

        if (!string.IsNullOrEmpty(q))
        {
            var url = $"https://<search-service>.search.windows.net/indexes/<index-name>/docs?api-version=2021-04-30-Preview&search={q}";
            using var client = new HttpClient();
            client.DefaultRequestHeaders.Add("api-key", "<your-query-key>");
            var response = client.GetStringAsync(url).Result;

            var doc = JsonDocument.Parse(response);
            results = doc.RootElement.GetProperty("value")
                .EnumerateArray()
                .Select(x => x.GetProperty("name").GetString())
                .ToList();
        }

        ViewBag.Results = results;
        ViewBag.Query = q;
        return View(currentPage);
    }
}

Search Page View

@model AzureSearchPage
@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<h1>Search Results</h1>
<form method="get">
    <input type="text" name="q" value="@ViewBag.Query" placeholder="@Model.PlaceholderText" />
    <button type="submit">Search</button>
</form>

<ul>
@foreach (var result in ViewBag.Results as List<string>)
{
    <li>@result</li>
}
</ul>

Optimizely CMS / Azure AI Search Advanced Use Cases

  • Semantic Search: Let Azure understand intent, not just keywords.
  • Auto-complete & Suggestions: Hook into search-as-you-type features.
  • Faceted Navigation: Create filters by category, tags, etc.
  • AI Enrichment: Use Azure’s skillsets to extract metadata, analyze images, or OCR PDFs.
  • Multilingual Search: Azure supports search across multiple languages out of the box.
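Several of these options are just extra parameters on the same docs/search endpoint the controller already calls. A hypothetical POST body combining semantic ranking and faceting — the semantic configuration name and api-version are assumptions, and semantic search requires a paid tier with a semantic configuration defined on the index:

```json
{
  "search": "optimizely cms integration",
  "queryType": "semantic",
  "semanticConfiguration": "default",
  "facets": [ "type,count:10" ],
  "top": 10
}
```

The response then includes an @search.facets section you can render as filter links alongside the results.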

Summary

Integrating Azure AI Search with Optimizely CMS can truly take your site search from basic to brilliant. With a bit of setup and some clean code, you’re empowering users with fast, smart, and scalable content discovery.

This blog is also published here.

]]>
https://blogs.perficient.com/2025/04/09/integrating-optimizely-cms-with-azure-ai-search-a-game-changer-for-site-search/feed/ 0 379830
What To Expect When Migrating Your Site To A New Platform https://blogs.perficient.com/2025/02/26/what-to-expect-when-migrating-your-site-to-a-new-platform/ https://blogs.perficient.com/2025/02/26/what-to-expect-when-migrating-your-site-to-a-new-platform/#respond Wed, 26 Feb 2025 15:59:30 +0000 https://blogs.perficient.com/?p=377633

This series of blog posts will cover the main areas of activity for your marketing, product, and UX teams before, during, and after site migration to a new digital experience platform.

Migrating your site to a different platform can be a daunting prospect, especially if the site is sizable in both page count and number of assets, such as documents and images. However, this can also be a perfect opportunity to freshen up your content, perform an asset library audit, and reorganize the site overall.

Once you’ve hired a consultant, like Perficient, to help you implement your new CMS and migrate your content over, you will work with them to identify several action items your team will need to tackle to ensure successful site migration.

Whether you are migrating from or to one of the major enterprise digital experience platforms like Sitecore, Optimizely, or Adobe, or from the likes of SharePoint or WordPress, there are some common steps to take to make sure content migration runs smoothly and is executed in a manner that adds value to your overall web experience.

Part I – “Keep, Kill, Merge”

One of the first questions you will need to answer is “What do we need to carry over?” The instinctive answer would be everything. The rational answer is that we will migrate the site over as is and then worry about optimization later. There are multiple reasons why this is usually not the best option.

  • This is a perfect opportunity to do a high-level overview of the entire sitemap and dive a bit deeper into the content. It will help determine if you still need a long-forgotten page about an event that ended years ago or a product that is no longer being offered in a certain market. Perhaps it hasn’t been purged simply because there is always higher-priority work to be done.
  • It is far more rational to do this type of analysis ahead of the migration rather than after. If nothing else, it is simply for efficiency purposes. By trimming down the number of pages, you ensure that the migration process is shorter and more purposeful. You also save time and resources.

Even though this activity might take time, it is essential to use this opportunity in the best possible manner. A consultant like Perficient can help drive the process. They will pull up an initial list of active pages, set up simple audit steps, and ensure that decisions are recorded clearly and organized.

Step I – Site Scan

The first step is to ensure all current site pages are accounted for. As simple as this may seem, it doesn’t always end up being so, especially on large multi-language sites. You might have pages that are not crawlable, are temporarily unpublished, are still in progress, etc.

Depending on your current system capabilities, putting together a comprehensive list can be relatively easy. Getting a CMS export is the safest way to confirm that you have accounted for everything in the system.

Crawling tools, such as Screaming Frog, are frequently used to generate reports that can be exported for further refinement. Cross-referencing these sources will ensure you get the full picture, including anything that might be housed externally.


Step II – Deep Dive

Once you’ve ensured that all pages made it to a comprehensive list you can easily filter, edit, and share, the fun part begins.

The next step involves reviewing and analyzing the sitemap and each individual page. The goal is to determine which pages will stay versus which are candidates for removal. Various factors can impact this decision, from business goals, priorities, page views, conversion rates, SEO considerations, and marketing campaigns to compliance and regulations. Ultimately, it is important to assess each page’s value to the business and make decisions accordingly.

This audit will likely require input from multiple stakeholders, including subject matter experts, product owners, UX specialists, and others. It is essential to involve all interested parties at an early stage. Securing buy-in from key stakeholders at this point is critical for the following phases of the process. This especially applies to review and sign-off prior to going live.

Depending on your time and resources, the keep-kill-merge can either be done in full or limited to keep-kill. The merge option might require additional analysis, as well as follow-up design and content work. Leaving that effort for after the site migration is completed might just be the rational choice.

Step III – Decisions and Path Forward

Once the audit process has been completed, it is important to record findings and decisions simply and easily consumable for teams that will implement those updates. Proper documentation is essential when dealing with large sets of pages and associated content. This will inform the implementation team’s roadmap and timelines.

At this point, it is crucial to establish regular communication between a contact person (such as a product owner or content lead) and the team in charge of content migration from the consultant side. This partnership will ensure that all subsequent activities are carried out respecting the vision and business needs identified at the onset.

Completing the outlined activities properly will help smooth the transition into the next process phase, thus setting your team up for a successful site migration.

]]>
https://blogs.perficient.com/2025/02/26/what-to-expect-when-migrating-your-site-to-a-new-platform/feed/ 0 377633
How Optimizely Grew from a CMS to a Composable Powerhouse https://blogs.perficient.com/2025/02/17/how-optimizely-grew-from-cms-to-composable-powerhouse/ https://blogs.perficient.com/2025/02/17/how-optimizely-grew-from-cms-to-composable-powerhouse/#respond Mon, 17 Feb 2025 18:58:45 +0000 https://blogs.perficient.com/?p=380736

Before the term Digital Experience Platform (DXP) became a fixture in martech conversations, content management systems were the workhorses of digital strategy. Brands turned to CMS platforms like Episerver (now Optimizely) to manage web content, create publishing workflows, and maintain consistent brand identity online.

But the evolution from CMS to DXP didn’t happen overnight — and Optimizely has been at the heart of that journey.

From CMS to More: Optimizely’s Foundation as a Marketing-Ready Platform

Long before it was officially labeled a DXP, Optimizely’s CMS stood out for its extensibility. It offered robust content modeling, multi-site management, and support for editorial workflows — but where it really gained traction was in its ability to integrate.

Through connectors, add-ons, and APIs, the CMS expanded far beyond traditional publishing. Key integrations included:

  • CRM Platforms: Microsoft Dynamics, Salesforce, and others could be integrated to sync customer data for personalized content delivery.
  • Translation Providers: Tools like Lionbridge, Smartling, and Translations.com were easily added to support global content publishing workflows.
  • Marketing Automation: Platforms like Marketo, HubSpot, and Eloqua could be connected for form tracking, lead capture, and campaign orchestration.
  • Search Providers: Episerver Find (now Optimizely Search & Navigation) was a built-in option, but clients could also use Coveo, Algolia, and Lucidworks.
  • Commerce Engines: Its deep integration with Episerver Commerce (now Optimizely Commerce) made it a favorite among hybrid content-and-commerce teams.

Optimizely’s modular and API-first approach helped clients build tailored digital experiences well before the concept of a “DXP” was formally defined.

Enter the Modern DXP: Composable, Scalable, and Open

Today, Optimizely is more than a CMS — it’s a full-fledged, composable Digital Experience Platform. It combines content, commerce, experimentation, data, and personalization into a unified ecosystem. But what truly sets Optimizely apart in the modern era is its composable architecture.

Rather than forcing brands into a monolithic suite, Optimizely allows them to pick and choose the capabilities they need, and plug in best-of-breed tools for everything else. Whether it’s connecting to a PIM like Salsify, a DAM like Bynder, or a front-end framework hosted on Vercel — the platform is built to support:

  • Composable Integrations: APIs and GraphQL endpoints power seamless data access for headless and hybrid deployments.
  • Open Architecture: Built-in support for third-party search (Coveo, HawkSearch), CDPs, analytics tools, and design systems.
  • Cloud-Native SaaS: As a SaaS-first platform, Optimizely offers automatic updates, built-in scalability, and security out-of-the-box.
  • AI-Driven Insights: Tools like Optimizely Content Recommendations, ODP (Optimizely Data Platform), and Experimentation bring data into the experience layer.

Why It Matters

Organizations today need agility. Customer expectations shift fast, and the technology that powers those experiences must be flexible. Optimizely’s journey from a CMS with strong marketing integrations to a composable DXP allows teams to evolve their digital strategy at their own pace — without starting from scratch.

By supporting both integrated suites and headless-first deployments, Optimizely gives marketers, developers, and digital leaders a platform that meets them where they are — and grows with them into the future.

]]>
https://blogs.perficient.com/2025/02/17/how-optimizely-grew-from-cms-to-composable-powerhouse/feed/ 0 380736
Perficient Honored as a 2024 Acquia Partner Award Winner https://blogs.perficient.com/2025/02/12/perficient-honored-as-a-2024-acquia-partner-award-winner/ https://blogs.perficient.com/2025/02/12/perficient-honored-as-a-2024-acquia-partner-award-winner/#respond Wed, 12 Feb 2025 18:36:01 +0000 https://blogs.perficient.com/?p=377158

Perficient is thrilled to announce its recognition as a winner in the 2024 Acquia Partner Awards for DXP Champion of the Year. This esteemed accolade highlights Perficient’s commitment to delivering superior customer outcomes, driving innovation, and achieving outstanding revenue performance within the Acquia partner ecosystem.

Acquia, a leader in open digital experience software, honored 22 organizations worldwide for their exceptional use of Acquia technologies. These awards celebrate partners who have set new standards for technical excellence by implementing high-quality solutions that help customers improve marketing outcomes and enhance business results.

“We’re honored to be recognized as Acquia’s DXP Champion & Partner of the Year! This award is a testament to the strong partnership we’ve built, working hand in hand to deliver comprehensive, end-to-end digital solutions that drive success for our clients. Together with Acquia, we’re pushing the boundaries of what’s possible in the digital experience space!” said Joshua Hover, DXP Platforms at Perficient. “We are proud to be recognized alongside such an esteemed group of partners and remain committed to advancing the digital experience landscape through our innovative solutions.”

Partner of the Year – Perficient

Perficient is a leader in DXP solutions, helping organizations modernize their platforms and drive long-term success. As one of Acquia’s first Elite Partners and a multi-year Partner of the Year award winner, we have a proven track record of delivering innovative, future-ready digital experiences. Our expertise in strategy, development, and optimization ensures our clients stay ahead in an ever-evolving digital landscape.

“At Perficient, we are dedicated to not only delivering top-tier digital solutions but also forming lasting partnerships that foster our clients’ growth and success,” said Roger Walker, Senior Business Manager of the Perficient Acquia practice. “This recognition from Acquia reinforces our commitment to aligning with our clients’ needs, helping them achieve their digital transformation goals, and driving measurable business impact.”

Acquia empowers ambitious digital innovators to craft the most productive, frictionless digital experiences that make a difference to their customers, employees, and communities. We provide the world’s leading open digital experience platform (DXP), built on open-source Drupal, as part of our commitment to shaping a digital future that is safe, accessible, and available to all. With Acquia Open DXP, you can unlock the potential of your customer data and content, accelerating time to market and increasing engagement, conversion, and revenue.

Learn more at: https://www.acquia.com/partner-of-the-year

]]>
https://blogs.perficient.com/2025/02/12/perficient-honored-as-a-2024-acquia-partner-award-winner/feed/ 0 377158
Sitecore Content Migration Considerations https://blogs.perficient.com/2025/02/06/sitecore-content-migration-considerations/ https://blogs.perficient.com/2025/02/06/sitecore-content-migration-considerations/#comments Thu, 06 Feb 2025 17:41:44 +0000 https://blogs.perficient.com/?p=376976

Migrating to a new platform—whether upgrading from an older version or moving to Sitecore XM Cloud—is a great time to modernize your digital experience. Beneath the surface, however, lies a major challenge that some teams do not consider or put enough thought into: content migration. Many organizations assume that migrating content is as simple as scripting your existing site or copying and pasting, but messy, outdated, or disorganized content can lead to long-term problems and debt. If bad data makes its way into the new CMS, it can create tech debt, slow down performance, and impact the ability to deliver a seamless digital experience. So how do you ensure a migration sets you up for success? Understanding the biggest challenges and how to tackle them is the first step.

Not All Content Should Be Migrated

One of the most overlooked issues in a Sitecore migration is the quality of the content itself. Many organizations take an “everything must go” approach, assuming all existing content should move to the new platform. This often results in duplicate pages, outdated messaging, and unstructured data that doesn’t fit into the new CMS. Without a clear strategy, the antiquated content and media from the old system follow into the new one, making it harder to manage content effectively. Before migration, conducting a content audit can help determine what should be migrated, archived, or rewritten. A thoughtful approach ensures that only clean, relevant, and well-structured content moves into the new CMS, improving efficiency for marketing teams and enhancing the user experience.

Content Structure and Its Significance

Sitecore is a powerful platform, but its effectiveness depends on how well content is structured. If the previous site had inconsistent templates, scattered media assets, or missing metadata, those issues will carry over—leading to a disorganized backend that slows down teams. Without a solid content model, marketers may find themselves constantly working around a flawed system rather than leveraging Sitecore’s capabilities to their full potential. Defining a content structure before migration ensures that pages are organized properly, metadata is applied consistently, and assets are easy to find and manage. Working closely with content strategists and developers to create a structured approach will make content creation and personalization more efficient in the long run.

SEO Challenges Without Planning

SEO can also take a hit during a poorly planned migration. When URLs change, internal links break, or metadata is lost, search rankings can suffer. Many teams assume that simply moving content over will maintain visibility, but a lack of planning often leads to missing redirects, duplicate pages, and unexpected drops in organic traffic. To prevent this, mapping high-value URLs and ensuring proper redirects are in place before migration is critical. Sitecore’s built-in SEO tools and third-party integrations can help manage metadata, maintain ranking authority, and provide a seamless experience for users who arrive from search engines.

Bad Data Creates Long-Term Debt

Beyond content structure and SEO, there’s also the issue of tech debt. When bad data moves into a new CMS without being cleaned up, it creates inefficiencies that affect content teams and developers alike. Pages become slow due to unnecessary assets, content authors struggle to find or reuse existing components, and site performance suffers. Over time, these issues compound, making it harder to scale digital efforts. A Sitecore migration shouldn’t just be about moving content; it should be about improving it. Taking the time to optimize workflows, remove outdated content, and implement governance ensures that the new environment is built for long-term success.

A Migration Empowers Future Growth

A successful migration isn’t just about getting content into the new CMS—it’s about setting the foundation for better user experiences, stronger SEO, and a scalable Sitecore implementation that supports business goals. By treating migration as an opportunity to refine content strategy, organizations can prevent common pitfalls and make the most of their investment. The key takeaway is simple: invest in content hygiene before migration to avoid long-term headaches. A little effort upfront will pay off in a cleaner, more efficient Sitecore environment that drives real results.

]]>
https://blogs.perficient.com/2025/02/06/sitecore-content-migration-considerations/feed/ 1 376976
Limitations of MVC Frameworks and CMS in Recent Days https://blogs.perficient.com/2025/01/30/limitations-of-mvc-frameworks-and-cms-in-recent-days/ https://blogs.perficient.com/2025/01/30/limitations-of-mvc-frameworks-and-cms-in-recent-days/#respond Thu, 30 Jan 2025 16:51:52 +0000 https://blogs.perficient.com/?p=376505

Software development practices have evolved rapidly alongside technology, with Content Management Systems (CMS) and MVC (Model-View-Controller) frameworks playing a key role in building contemporary web applications. These approaches, however, face challenges in today’s fast-paced, high-demand technological environment. Some of the significant limitations of CMS platforms and MVC frameworks in recent years are listed below.

 

1. Scalability Challenges

MVC Frameworks:

MVC frameworks are generally designed for applications of moderate size. A standard MVC architecture can struggle to sustain performance at high scale, such as serving millions of concurrent users or maintaining very large databases. Tightly coupled components become bottlenecks, making scaling an expensive and difficult task.

CMS Platforms:

Many CMS platforms, notably WordPress, Drupal, and Joomla, are built to serve a huge user base. While they offer extensions and plugins intended to improve scalability, these add-ons frequently introduce performance overhead of their own, raising resource consumption and response times.

 

2. Performance Overhead

MVC Frameworks:

Because MVC frameworks are layered, handling a request frequently passes through several levels of abstraction and multiple database queries. This extra complexity can slow responses, particularly for complicated operations or in high-traffic scenarios.

CMS Platforms:

CMS platforms are heavy by nature because of their “one-size-fits-all” design. Features built to serve a broad audience consume resources even when they are not used. Enabling multiple plugins, for example, can cause conflicts, slow page loads, and degrade server performance.

 

3. Security Vulnerabilities

MVC Frameworks:

Security is an important consideration for MVC frameworks. Developers must address problems like SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). Although frameworks offer tools to mitigate these risks, their effectiveness depends on careful implementation, which leaves room for flaws.
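To make the SQL injection risk concrete, here is a small TypeScript sketch contrasting string-concatenated SQL with parameterized SQL. The `findUser*` functions and the `?` placeholder syntax are illustrative stand-ins for whatever database driver a given framework uses; most drivers accept the statement and values separately in this way:

```typescript
// Vulnerable: user input is concatenated straight into the SQL string,
// so a crafted value can terminate the literal and inject new SQL.
function findUserUnsafe(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// Safer: the driver receives the SQL text and the values separately,
// so the input is always treated as data, never as SQL syntax.
function findUserSafe(username: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE name = ?", params: [username] };
}

const attack = "x' OR '1'='1";
console.log(findUserUnsafe(attack));
// The payload becomes part of the SQL itself:
// SELECT * FROM users WHERE name = 'x' OR '1'='1'
console.log(findUserSafe(attack).sql);
// SQL text is unchanged; the payload stays in params.
```

The point is exactly the one made above: the framework can supply the parameterization mechanism, but nothing stops a developer from writing the unsafe version.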

CMS Platforms:

Because CMS platforms are so widely used, they are frequent targets of attack. Systems are commonly put at risk by incorrect configurations, delayed updates, and flaws in third-party themes and plugins. Adding to the threat, automated bots actively scan for common CMS exploits.

 

4. Complexity in Customization

MVC Frameworks:

Customizing applications built on MVC frameworks frequently takes substantial effort, especially when working with intricate business requirements. Changing the way the model, view, and controller interact can become difficult and expensive.

CMS Platforms:

Although CMS platforms are easy to use, achieving highly specific functionality can be difficult. Developers frequently must override default behaviors or write custom plugins, which can lead to maintenance headaches and compatibility problems.

 

5. Dependency on Third-Party Tools

MVC Frameworks:

For additional functionality, a lot of MVC frameworks rely significantly on third-party libraries. Compatibility problems may arise from this reliance, particularly if certain libraries are no longer actively maintained or are deprecated.

CMS Platforms:

For further functionality, CMS platforms rely significantly on plugins and extensions. But not all third-party tools are built with the same quality standards, and relying on them increases the risk of technical debt, security vulnerabilities, and broken functionality during updates.

 

6. Rigid Architecture

MVC Frameworks:

Although advantageous, the rigid separation of responsibilities in MVC frameworks can become a drawback in scenarios that call for unconventional workflows. These constraints often force developers to write code that is harder to maintain and debug.

CMS Platforms:

CMS systems frequently force users to adhere to preset procedures and frameworks, which might inhibit innovation and adaptability. They are less appropriate for projects that call for innovative designs or highly customized user experiences because of their rigidity.

 

7. Lack of Modern Features

MVC Frameworks:

While MVC frameworks are reliable, they often lag in adopting contemporary development trends like real-time processing, serverless architectures, and microservices. Integrating these capabilities usually requires significant effort and expertise.

CMS Platforms:

Many CMS systems were developed with traditional content delivery mechanisms in mind. Extensive customization is frequently necessary to adapt them for contemporary requirements like headless CMS, progressive web apps (PWAs), and personalized user experiences.

 

8. High Maintenance Costs

MVC Frameworks:

To maintain and upgrade applications created using MVC frameworks, qualified developers are frequently needed. It becomes more expensive to maintain clear, effective, and error-free code as projects get bigger.

CMS Platforms:

Updating core software, plugins, and themes on a regular basis can be expensive and time-consuming. Significant work is also frequently needed for big version updates, including data migration and compatibility testing.

 

Conclusion

Although CMS platforms and MVC frameworks remain important in contemporary web development, their drawbacks highlight the need for newer solutions. Developers must weigh these systems’ benefits and limitations against project needs, and should also consider newer options like low-code platforms, headless CMS, and microservices. By addressing the issues above, companies can make better-informed decisions that ensure performance, scalability, and security in their digital solutions.

]]>
https://blogs.perficient.com/2025/01/30/limitations-of-mvc-frameworks-and-cms-in-recent-days/feed/ 0 376505
How to Enable Full-Width Layouts in Optimizely Commerce (Spire) https://blogs.perficient.com/2025/01/06/how-to-enable-full-width-layouts-in-optimizely-commerce-spire/ https://blogs.perficient.com/2025/01/06/how-to-enable-full-width-layouts-in-optimizely-commerce-spire/#respond Tue, 07 Jan 2025 04:31:58 +0000 https://blogs.perficient.com/?p=374906

When building websites in Optimizely Commerce (Spire), you may need to create sections that span the entire page width. Full-width sections are essential for design elements such as banners, hero images, and background sections. Optimizely Commerce (Spire) provides a flexible framework that makes it easy to configure and implement full-width layouts, allowing developers to create visually engaging designs with minimal effort. This guide will walk you through utilizing this feature to seamlessly create full-width sections.

How to Create Full-Width Sections

Step 1: Folder Structure

  • First, ensure you have already created a blueprint under the directory src\FrontEnd\modules\blueprints. For example, you might have a blueprint named CustomBlueprint.
  • Navigate to the CustomBlueprint/src. Ensure that a Start.tsx file exists in this directory. If it does not, create and add this file.

Step 2: Understand the Options in Start.tsx

The Start.tsx file is the entry point for setting up the main themes, custom widgets, and pages. It uses Mobius styling principles to ensure everything looks consistent, flexible, and accessible. These principles provide a unified design, allowing for easy customization of themes, colors, typography, and other UI elements while maintaining a seamless, responsive user experience across devices.

In the Start.tsx file, you will find two options for configuring the full-width layout through the style guide:

  • setPreStyleGuideTheme: If you add your code under this method, you can update the full-width settings directly from the content admin interface.
  • setPostStyleGuideTheme: If you use this function, the full-width settings will be fixed, and you won’t be able to modify them from the content admin interface.

Step 3: Full-Width Configuration Code

Optimizely Commerce (Spire) already provides a built-in solution to configure sections like the header, content, and footer to span the full page width. To enable full-width for these sections, use the following code snippet:

(Screenshot: basic code configuration in Start.tsx)

Explanation

  • header: { isFullWidth: true }: Ensures the header section spans the full width of the page.
  • content: { isFullWidth: true }: The main content area extends from edge to edge, perfect for displaying banners or immersive visuals.
  • footer: { isFullWidth: true }: Sets the footer to full width, ideal for footers with background colors or design elements that must reach the screen’s edges.
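The three settings explained above can be sketched as a plain theme-overrides object. The `SectionLayout` and `FullWidthThemeOverrides` types below are hypothetical stand-ins for Spire's actual Mobius theme types, modeling only the properties discussed here:

```typescript
// Hypothetical minimal shape of the theme overrides relevant to
// full-width layout; the real type comes from Spire's Mobius packages.
interface SectionLayout {
  isFullWidth?: boolean;
}

interface FullWidthThemeOverrides {
  header?: SectionLayout;
  content?: SectionLayout;
  footer?: SectionLayout;
}

// The overrides to pass into setPreStyleGuideTheme (editable from the
// content admin) or setPostStyleGuideTheme (fixed) in Start.tsx.
const fullWidthOverrides: FullWidthThemeOverrides = {
  header: { isFullWidth: true },   // header spans the full page width
  content: { isFullWidth: true },  // main content extends edge to edge
  footer: { isFullWidth: true },   // footer backgrounds reach the screen edges
};

console.log(fullWidthOverrides.content?.isFullWidth); // true
```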

Step 4: Integrate the Code

Add the above code to the Start.tsx file within one of the theme configuration functions (setPreStyleGuideTheme or setPostStyleGuideTheme), depending on whether you want to allow updates to the full-width settings from the content admin interface.

(Screenshot: code integrated into Start.tsx)
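Putting it together, a Start.tsx fragment might look like the following. Note this is a sketch, not a complete entry point, and the import path is an assumption based on typical Spire blueprint setups; verify it against your version of @insite/client-framework:

```typescript
// Start.tsx (blueprint entry point) — sketch only.
// Import path is assumed; check your Spire version.
import { setPreStyleGuideTheme } from "@insite/client-framework/ThemeConfiguration";

// Using setPreStyleGuideTheme keeps the values editable from the
// content admin; use setPostStyleGuideTheme instead to lock them.
setPreStyleGuideTheme({
    header: { isFullWidth: true },
    content: { isFullWidth: true },
    footer: { isFullWidth: true },
});
```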

Step 5: How to Update the Full-Width Configuration from the Content Admin

After configuring the full-width settings in the Start.tsx file, Optimizely Commerce (Spire) provides an easy way to manage and update these configurations directly from the Content Admin interface.

  • Go to the Content Admin and navigate to the Style Guide.

(Screenshot: Style Guide section in the admin)
  • In the Site Configurations section, you will find the Full Width settings.

(Screenshot: Full Width settings under Site Configurations)

  • Click on each option (Header, Content, and Footer) to see a toggle that allows you to make the section full-width.

(Screenshot: Full Width toggle for a section)

Note: You can only update the full-width setting using the setPreStyleGuideTheme option in the Start.tsx file.

  • After updating the value in the Settings modal, ensure you save the changes.

Step 6: How to Use the Full-width Option on Actual Pages

  • To use the full-width option on pages, add a Row widget. Edit the Row widget, and you will see a Full Width Checkbox option (by default, this option will be unchecked).

(Screenshot: Row widget with the Full Width checkbox unchecked)

  • To make the section full width, check the checkbox.

     Note: Once the checkbox is checked, any content in that row will be displayed at full width.

(Screenshot: Row widget with the Full Width checkbox checked)

Conclusion

Optimizely Commerce (Spire) provides a straightforward and flexible solution for creating full-width sections, making it easier to design visually engaging websites. Following the steps outlined in this guide, you can quickly enable full-width layouts for your header, content, and footer. Whether you prefer to manage these configurations through the Content Admin interface or directly in the code, Optimizely Commerce offers the flexibility to create seamless, immersive designs that enhance the overall user experience. With full control over these settings, you can customize your site’s layout to fit your specific design needs while maintaining a consistent and responsive interface across devices.

]]>
https://blogs.perficient.com/2025/01/06/how-to-enable-full-width-layouts-in-optimizely-commerce-spire/feed/ 0 374906