Experience Management Articles / Blogs / Perficient

How Sitecore Drives Digital Strategy Value as a Composable DXP
https://blogs.perficient.com/2025/01/31/how-sitecore-drives-digital-strategy-value-as-a-composable-dxp/ (Sat, 01 Feb 2025)

Have you seen the speed at which the digital landscape is shifting and evolving and thought to yourself: how can I keep up? How can I level up my organization’s digital customer experience and future-proof my website and digital ecosystem to ensure consistent growth for years to come?

The answer might just be a shift to a Composable Digital Experience Platform (DXP) like Sitecore. This is the latest approach to providing digital experiences, offering flexibility, scalability, and faster iteration. Sitecore is a true leader in digital experience management and is fully embracing this composable future while empowering businesses to create personalized experiences for their customers. Let’s take a closer look at what this means for your strategy and how Sitecore can help you navigate this transition.

What are the key benefits of a composable DXP?

We are coming from a place where monolithic DXPs were the norm. While these platforms offered convenience, they could be expensive, required regular upgrades, and were difficult to scale, especially with the introduction of AI technologies.

Some of the benefits that migrating to a composable DXP can offer include, but are certainly not limited to:

  • Greater Flexibility
  • Scalability
  • Faster Innovation

How can Sitecore specifically power your composable digital strategy?

Sitecore has shifted from a one-size-fits-all platform to a modular ecosystem, where companies can seamlessly integrate custom components, APIs, and third-party platforms. Here are some key areas where Sitecore’s composable DXP is driving results for customers across numerous industries.

  1. Sitecore XM Cloud: Sitecore’s cloud-based platform supports headless content delivery. This means businesses can expect faster time to market for strategic content publishing, reduced maintenance costs, and consistency across all digital channels.
  2. Sitecore CDP & Personalize: Sitecore’s Customer Data Platform (CDP) and personalization features help businesses extract real-time customer insights to dynamically display content. This leads to increased conversion and improved customer experience.
  3. Sitecore Content Hub & Sitecore Stream: While Content Hub provides a centralized digital asset management (DAM) system, it also helps automate content creation workflows. Sitecore Stream transforms content lifecycles with AI workflows, generative copilots, and brand-aware AI.

Final Thoughts

As you can see, there are many reasons why a composable DXP makes sense for organizations across all industry verticals, and Sitecore specifically can add a ton of value to Marketing and Technology teams alike in a world of constant change. At Perficient, we have a team of dedicated and experienced folks ready to help you tackle the transformation and transition into the world of Composable DXP. Reach out to us today, and see how we can work with you to drive outstanding digital customer experiences for your customers.

Think Big, Start Epic: Harnessing Agile Epics for Project Success
https://blogs.perficient.com/2024/09/19/think-big-start-epic-harnessing-agile-epics-for-project-success/ (Thu, 19 Sep 2024)

Let’s be honest – projects can get messy fast. It’s all too easy to get tangled up in the details and lose sight of the bigger picture. That’s where Agile epics step in, helping you think big while staying grounded in the steps that lead to success. Epics act as the link between your grand strategy and the day-to-day tasks, giving your team the clarity to drive meaningful progress. Whether you’re steering a massive project or managing smaller innovations, mastering epics is key to unlocking the flexibility and focus that Agile promises. In this post, we’ll show you how epics empower teams to think big, act smart, and deliver results.

What is an Epic?

In a hierarchy of work, epics are formed by breaking down higher-level themes or business goals. They are large initiatives that encompass all the development work needed to implement a larger deliverable. An epic is too large to be completed in a single scrum team’s sprint, but it is smaller than the highest-level goals and initiatives. Epics are intentionally broad, light on details, and flexible.

Here’s what that means: The epic is broken down into smaller pieces of work. Your team may call these smaller pieces product backlog items/tickets, user stories, issues, or something else. As conditions or customer requirements change over time, these smaller pieces can be modified, removed, or added to a team’s product backlog with each sprint. In this way, the epic is flexible, providing direction without requiring heavy investment in its plans and details.

Agile Requirements Image

Why Are Epics Important?

Instead of tackling the whole epic at once with a deadline in a few months, you and your teammates deliver small increments of value to your customers, users, or stakeholders each sprint. When changes are needed, you adapt the plan easily. Had your team taken on the entire epic at once, they might find that changes have rendered the epic obsolete by the end.

How to Identify Epics?

Agile epics should describe major product requirements or areas of functionality that define the user experience. You can think of them as categories or parents for user stories that may not directly relate to each other but fall under the same umbrella of functionality (e.g. UI Improvements). Epics can become unwieldy quickly, so it’s worth examining them along the following lines to determine if the size is appropriate or not. Remember, the goal is for the epic to be fully delivered!

  • Does the epic span products? If so, it may be more appropriate to split the epic along product lines.
  • Do the success criteria support each other entirely? If there is conflict between measurements, splitting the epic would be warranted.
  • Is the epic for multiple customer segments? Targeting different customer groups is likely to lead to contention between measurement and goals.
  • How risky is the epic? An effective mitigation strategy may be to compartmentalize the risk across several epics rather than concentrating it in one.
  • Would working on the epic effectively shut down all other development work? This may be an indication that the epic is too large (even if the business priority is clearly highest) and could introduce an extra level of risk that may not have been considered or can be easily mitigated.

Who Creates and Manages Epics?

In Agile, the creation of epics typically starts with the product manager, who has a deep understanding of the project’s long-term vision and business objectives. The product manager identifies major areas of work, shaping them into epics that guide the team’s efforts. While the product manager leads this process, it often involves input from various stakeholders and team members to ensure that each epic aligns with overall project goals. Once established, the product manager is responsible for managing these epics, breaking them down into smaller tasks, and prioritizing them with the product owner to support effective sprint planning and execution.

How to Craft Effective Epics?

  • Define Clear Goals: Begin by identifying the epic’s objectives. Understand the problem it seeks to address and clarify how it will drive value for the project and stakeholders.
  • Collaborate for Alignment: Involve key stakeholders—such as team members, users, and business leaders—to ensure the epic is well-rounded and matches user needs and business priorities.
  • Maintain Flexibility: Though the epic should offer clear direction, it’s important to leave space for changes as new insights or requirements emerge during development.
  • Prioritize Value: Ensure that every aspect of the epic contributes meaningfully to delivering tangible value to both the customer and the overall project.

Epic Structure: Key Components of a Well-Written Epic

  • Title: The title should succinctly summarize the core of the epic, giving the team and stakeholders a quick understanding of its focus.
  • Overview: Write a concise summary that outlines the epic’s objectives and the value it delivers to both the project and the end-user. Consider the target audience and competitors while framing this.
  • Actionable Features: Break the epic down into smaller, actionable features that are measurable and align with the epic’s primary goals. These features should be traceable to specific user needs or project requirements.
  • Success Criteria: Clearly define how the success of the epic will be measured. This should go beyond basic acceptance criteria and include broader business outcomes that may evolve over time.
  • Dependencies: Identify any interdependencies with other epics, projects, or external factors that could influence the epic’s progress.
  • Timeline: While the exact timeframe might not be locked, establishing a rough schedule helps prioritize the work and manage stakeholder expectations.

Next Steps

In conclusion, epics are fundamental to Agile methodology and critical to the Scrum framework. They help product managers, product owners, and key stakeholders manage and organize the product backlog effectively. Developers can also use epics to plan iterations, breaking them into manageable sprints, and systematically collect customer feedback. As outlined, epics serve as an asset for Agile teams, allowing for the grouping of user stories to aid in prioritization and incremental value delivery.

Effectively creating and managing epics can be challenging without the right approach. If you’re finding it difficult to structure your epics, align them with business goals, or manage their scope within your team, don’t hesitate to reach out to us at Perficient. Our experts can help you refine your process, ensuring that your epics are well-defined, manageable, and strategically aligned with your project’s success.

Contact us today to learn how we can assist your team in mastering Agile epics!

Giving the Power of Speech Real Horsepower with Voice-to-Everything Capabilities
https://blogs.perficient.com/2024/08/28/giving-the-power-of-speech-real-horsepower-with-voice-to-everything-capabilities/ (Wed, 28 Aug 2024)

With the 2024 Paris Summer Olympics now behind us, I pause for a moment to reflect on the last time the summer games were held in Europe. The year was 2012: the Olympics had just wrapped up in London, the Queen had celebrated 60 years upon the throne, and, in true royal fashion, I had just purchased the latest Ford Explorer.

This Ford Explorer came in Triple Black with every feature, including the latest version of SYNC with voice control. I was giddy with excitement and felt like Captain Kirk at the helm of the Starship Enterprise, steering towards new horizons. But… the voice activation was not what I had hoped for.

When attempting to call my mother, I got my friend Monica, and when trying to dial a colleague, I reached a childhood friend. If you know me, then you understand that navigation isn’t my strong suit, and when searching for directions to Birmingham, Michigan, I would be sent to Alabama. You get the picture.

Fast-forward to 2024, and voice-to-everything is transforming the automotive industry. Thankfully, the voice control in my 2023 Ford Edge now works much better — the way it was intended.

Voice-to-Everything Technology Allows for Expanded Vehicle Control  

The automotive industry is undergoing a significant transformation driven by advancements in technology that are reshaping the way we interact with our vehicles. One of the most exciting developments in this space is the rise of voice-to-everything (VTE) technology. This innovation is poised to redefine the driving experience, making it more intuitive, safer, and more connected than ever before.

VTE technology refers to the integration of voice-controlled systems throughout a vehicle, allowing drivers and passengers to interact with the car’s functions using simple voice commands. This technology leverages advancements in artificial intelligence (AI) and natural language processing (NLP) to understand and execute spoken instructions, minimizing the need for physical controls or manual inputs. In essence, VTE transforms your voice into the primary interface for controlling the vehicle, covering everything from adjusting the climate controls to navigating to destinations to managing entertainment options.

Just Like Language Itself, Voice Technology Has Evolved Over Time 

VTE in cars isn’t entirely new, but it has come a long way from the rudimentary systems of the past. Early voice-activated systems often struggled with accuracy, limited vocabulary, and rigid command structures. However, recent advancements in AI and machine learning have dramatically improved these systems, enabling them to understand context, recognize natural speech patterns, and respond accurately even in noisy environments. Modern vehicles are now equipped with sophisticated voice assistants that can manage a wide range of functions. These systems are no longer limited to basic commands; they can engage in complex interactions, understand conversational language, and even learn from user preferences over time.

How Voice-to-Everything is Transforming the Driving Experience

The integration of VTE in vehicles offers several significant benefits, fundamentally changing how drivers and passengers interact with their cars.

To begin, VTE makes the driving experience more convenient and user-friendly. Instead of fumbling with buttons or touchscreens, drivers can simply speak their commands. This ease of use is particularly beneficial in complex, multitasking scenarios, such as driving in heavy traffic or during long trips. Modern VTE systems can learn from the driver’s habits and preferences, offering a personalized experience. For instance, the system can remember your preferred routes, favorite radio stations, or climate settings, automatically adjusting to your preferences as soon as you step into the car. 

Further, as vehicles become more connected, VTE plays a crucial role in integrating the car with other smart devices and services. Drivers can use voice commands to interact with their smartphones, smart homes, and other connected systems, creating a seamless experience that extends beyond the vehicle.

This hands-free approach is not only more convenient but also significantly safer. By enabling drivers to control various functions without taking their hands off the wheel or eyes off the road, VTE reduces distractions: whether it’s making a phone call, changing a song, or setting up navigation, voice commands allow drivers to stay focused on driving. An additional benefit is increased productivity during long commutes, which improves the driver experience.

Finally, VTE is paving the way for the future of autonomous driving. As cars become more autonomous, voice commands will likely become the primary mode of interaction between the driver and the vehicle, allowing for smooth control of the car’s functions even when manual driving is no longer required. 

Let’s Drive Towards a Voice-Powered Future Together 

Voice-to-everything is rapidly becoming a cornerstone of the modern automotive experience. By making driving safer, more convenient, and more connected, this technology is set to revolutionize the way we interact with our vehicles. As it continues to evolve, VTE will play a crucial role in shaping the future of transportation, bringing us closer to a world where the sound of our voice is all that’s needed to command the road. I am not yet ready to include my vehicle in my friend group or add it to my fantasy team, but it’s clear that the voice-driven car is more than just a concept—it’s the future.

As I’ve mentioned in a previous blog, Perficient is in the middle of conducting primary research on connected products. We also have a robust innovations lab that routinely helps OEMs with their customer experiences, data needs, and cloud infrastructure.  Please explore our automotive expertise and schedule a meeting, as we would love to discuss how we can help create a sustainable competitive advantage for you. 

How Data and Personalization are Shaping the Future of Travel
https://blogs.perficient.com/2024/08/26/how-data-and-personalization-are-shaping-the-future-of-travel/ (Mon, 26 Aug 2024)

Generic travel brochures and one-size-fits-all itineraries are becoming less prevalent in today’s travel and tourism industry. Travelers crave truly unique experiences, and the industry is responding with a powerful tool: data. By harnessing the power of data and personalization, travel companies are unlocking a new era of customer engagement, satisfaction, and loyalty.

Bespoke Travel Experiences Powered by Data

Travel recommendations shouldn’t be generic suggestions that could be found with a cursory Google search, but rather curated experiences that anticipate your every desire. With data, that can be the case. By analyzing everything from past booking history to social media preferences, travel companies can build a rich profile for your travel dossier. This data goldmine allows them to personalize itineraries that cater to your specific interests, whether that’s a reservation at a hidden culinary gem for the adventurous gourmet who devoured Anthony Bourdain’s Parts Unknown, or a serene nature escape for those who indicate they want to get away and unplug. Data doesn’t just personalize experiences; it also fuels intelligent recommendations.

Unlocking Customer Loyalty Through Personalization

Personalization is no longer a perk; it’s the expectation. Travelers crave experiences that feel designed just for them, and data empowers travel companies to deliver exactly that. Imagine receiving exclusive deals on flights to destinations you’ve been dreaming of, or automatic upgrades to experiences that resonate with your passions. By leveraging data and personalization, travel companies can build deeper connections with their customers, fostering lifelong loyalty. This translates into repeat business, positive word-of-mouth recommendations, and glowing five-star reviews.

Creating A Hands-Off Travel Experience with Customer Data

Data and personalization extend far beyond basic recommendations. Travel companies can leverage this powerful duo to elevate the entire travel journey. Imagine that when booking a flight, your preferred seat is automatically preselected based on past choices, or that the hotel you book remembers your favorite room temperature and sleep number and adjusts accordingly upon arrival. Data can even personalize in-destination experiences. Instead of calling around for restaurants that can accommodate dietary restrictions, your recommendations will already have taken them into account. Rather than searching through various tour programs, they’ve already been curated to align with your historical interests.

On-the-spot Location Tailored Experiences

Location data adds another exciting dimension to personalization. You could be exploring a new city and receive real-time notifications about off-the-beaten-path cafes or historical landmarks right around the corner. Travel companies can use location data to send personalized offers for nearby attractions or cultural events, ensuring you make the most of every moment. Real-time suggestions informed by your current location can also keep you updated on impending weather conditions, pointing you toward a safe, cozy hideaway if you need to duck in off the road and find shelter.

Personalization with Privacy in Mind

While travelers crave personalization, they also value privacy, and the key lies in striking a balance. Transparency is crucial: travelers should understand how their data is used and have the power to control their privacy settings. Travel companies must also ensure data security and be clear about their policies to build trust with their customers.

AI Enables Travel Companies to Embrace New and Unknown Terrain

Data and personalization, especially enabled by artificial intelligence, will continue to evolve, and the travel landscape will transform with them. We’re entering a future where AI-powered travel companions will use data to anticipate your needs, suggest local experiences, and deftly navigate language barriers. Travel companies that embrace the power of data and personalization will be the ones who unlock the greatest opportunities, fostering strong customer relationships and defining the future of travel.

Forge the future of adventures and accommodations with our travel and hospitality expertise.

 

Creating a Sound A/B Test Hypothesis
https://blogs.perficient.com/2024/08/15/creating-a-sound-a-b-test-hypothesis/ (Thu, 15 Aug 2024)

A hypothesis is important for understanding what you are trying to prove with your A/B test. A well-formed hypothesis acts as a guide for the test.

A hypothesis is going to challenge an assumption you have about your website’s performance and/or visitor behavior. What is the assumption you want to validate as right or wrong?

Ask yourself these questions when coming up with your test hypothesis:

  • What assumption are you addressing? Is there data to support your assumption?
  • What solution are you proposing to address the challenged assumption?
  • What is the anticipated outcome of your challenge? What metrics will be impacted if you make the specific change?

Asking those questions will help ensure the hypothesis is S.O.U.N.D.:

Specific – the hypothesis should clearly define the change that is being tested.
Objective – while the test is proving or disproving an assumption – that assumption should be based upon actual insights – analytics, industry research, or user feedback for example.
User-focused – the hypothesis should address a user pain point. Focusing on user experience will increase test engagement and result in better outcomes.
Needs-based – the hypothesis should address a business need. Spend time on tests that will bring value to the business as well as the user. Keep ROI front of mind.
Data-driven – always make sure the hypothesis has measurable metrics and a clear quantitative goal.

Some examples of a solid hypothesis are:

The current headline on our landing page lacks a clear value proposition, so changing the headline to a more concise and benefit-oriented version will increase conversion rate.

Our promo banners blend in with the page design causing users to scroll by them, so testing a more contrasting color will increase CTA clicks on the banners.

The lead capture form is too long causing users to exit the site, so reducing the number of form fields from 20 to 10 will increase the number of leads.
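
Once results come in, the last example’s quantitative goal can be checked with a significance test. Here is a minimal sketch using a two-proportion z-test; the visitor counts and helper names are made up for illustration and are not from any actual test.

```typescript
// Two-proportion z-test: did reducing form fields change the conversion rate?
function twoProportionZTest(
  conversionsA: number, visitorsA: number, // control: 20-field form
  conversionsB: number, visitorsB: number  // variant: 10-field form
): number {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  // Pooled conversion rate under the null hypothesis (no difference)
  const pPool = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / se; // |z| > 1.96 is significant at the 95% level
}

// Hypothetical results after two weeks of traffic
const z = twoProportionZTest(120, 5000, 165, 5000);
console.log(`z = ${z.toFixed(2)}: ${Math.abs(z) > 1.96 ? "significant" : "not significant"}`);
```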

 

Composable Martech: Orchestration & Federation
https://blogs.perficient.com/2024/05/06/composable-martech-orchestration-federation/ (Mon, 06 May 2024)

Part 3 in our “unpack the stack” series on composable martech is all about the data – specifically, access to the data – the middle layer of the stack. The next set of capabilities we’re exploring is Orchestration and Federation. These two capabilities go well together because they are very similar and have some overlap, so let’s unpack ’em.

Orchestration and Federation in a Composable Architecture

At a high level, the “orchestration and federation” category represents the underlying access and routing to data across a variety of back-end martech products – from PIM, CMS, Order Management, DAM, Marketing Automation, internal and external proprietary databases, etc. While the prior topics of FEaaS and Experience Builders focus on the visual expression of content, data, and layout, orchestration and federation capabilities provide access (and intelligence!) to the actual content and data to hydrate those experiences. Let’s better understand the differences here.

Orchestration vs. Federation

The reality is these terms are often used interchangeably, so the definitions below are my take based on how they are often used in reality and… a bit of the dictionary:

  • Federation means bringing multiple sources of data/content together into a consolidated and predictable “place” – in reality the “place” may be a martech tool that holds a copy of all of the data/content, or simply an API facade that sits on top of the underlying systems’ APIs. More on this in a bit. The key point here is it’s a unification layer in the martech stack, a single entry point to get access to the back-end systems via a single API.
  • Orchestration is the same as Federation; however, it brings a bit more logic to the data party, providing some level of intelligence and control over exactly what data/content is provided for consumption. It’s like air traffic control for the flow of data from back-end systems. (A minimal sketch of such a unification layer follows this list.)
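
To make the federation idea concrete, here is a minimal sketch of a unification layer: a single endpoint that fans out to two back-end APIs (a CMS and a PIM) and returns one merged response. The endpoints, field names, and port are invented for illustration; real products expose this capability through GraphQL schema stitching, remote sources, or similar mechanisms.

```typescript
// A tiny federation facade: one API in front of multiple back-end systems.
// The consumer (e.g. a web app) calls only this endpoint.
import http from "node:http";

// Hypothetical back-end endpoints (placeholders, not real products)
const CMS_API = "https://cms.example.com/api/articles";
const PIM_API = "https://pim.example.com/api/products";

async function getProductPage(productId: string) {
  // Fan out to the underlying systems of record at query time
  const [content, product] = await Promise.all([
    fetch(`${CMS_API}?product=${productId}`).then((r) => r.json()),
    fetch(`${PIM_API}/${productId}`).then((r) => r.json()),
  ]);
  // Unify into a single, predictable shape for all consumers
  return { id: productId, marketingCopy: content, catalogData: product };
}

http
  .createServer(async (req, res) => {
    const id = new URL(req.url ?? "/", "http://localhost").searchParams.get("id");
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(await getProductPage(id ?? "unknown")));
  })
  .listen(3000); // the single entry point for the stack
```

An orchestration layer starts from this same facade shape but adds rules about which calls run, in what order, and with what transformations applied.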

Examples of Content Federation

Content Federation is a unification capability where you can combine multiple back-end sources together in a composable stack. A few examples include:

Hygraph Content Federation

Hygraph Content Federation

Hygraph Remote Sources unify multiple back-end source system APIs directly into the Hygraph API, so the consumer (e.g. a web app) only needs to access the Hygraph API and not all of the underlying systems directly. You can read more about the content federation concept from Hygraph or see it live in a video! One thing to note is that Hygraph does not actually fetch and store external data inside Hygraph; instead, the remote source’s API schema is added to the Hygraph API, so a single call to the Hygraph API will make an “under the hood” call to the external API from Hygraph at query time.

Contentful Content Orchestration’s External References

Contentful External References

Contentful External References (ironically) is a feature of Contentful “content orchestration” (see what I mean by these being used interchangeably?). External References allows system users to register external API sources that get merged into the Contentful GraphQL API, so a consumer only needs to use one API. This is nearly identical in capability to Hygraph; however, one important thing to note is that Contentful allows for bi-directional editing of external data. That means a CMS user can directly edit the external data from the Contentful CMS UI (assuming the API is set up to handle that). One key advantage of bi-directional editing is that a business user does not need to log into the other systems to make edits; they can stay inside the Contentful interface to do all of the editing.

Netlify Connect

Netlify Connect

Netlify Connect is another good example of federation following a similar model to Hygraph and Contentful. In Netlify Connect you can configure multiple “data layers” to back-end systems using pre-built integrations provided by Netlify, or to your own proprietary system using the Netlify SDK. A great use case for this custom approach is if you have a proprietary system that is difficult to get data out of and requires custom code.

The most notable difference with Netlify Connect is that it actually fetches and caches your external data into its own database and exposes snapshots of the historical data. This means you can use historical data revisions to query a specific set of data at a point in time, especially if you need to troubleshoot or rollback the state of an experience.

Optimizely Graph

Optimizely Graph

Unlike the prior examples, Optimizely is a more traditional DXP that is leaning heavily into headless with the likes of Sitecore, Adobe, dotCMS, and others.

Optimizely Graph is the new GraphQL API to serve headless experiences built on Optimizely. One subtle (and maybe overlooked?) feature of Graph is the ability to register external data sources and synchronize them into Graph. Based on the documentation as it stands today, this work appears to be primarily developer-driven, requiring developers to write custom code to fetch, prepare, and submit the data to Graph. That said, the benefits mentioned previously still stand. This allows headless experiences to consume content from a single API while, behind the scenes, the synchronization process fetches and stores the external data into Graph.

Enterspeed

Enterspeed

Enterspeed is a good example of a pureplay product that focuses on unification as the middle layer in a composable architecture. It allows you to ingest external data, transform that data, and deliver that data to various touchpoints, all via a high-speed edge network.

WunderGraph Cosmo

Wundergraph Cosmo

WunderGraph provides GraphQL microservice federation. It’s an open-source and hosted product that helps you manage multiple back-end databases, APIs, authentication providers, etc. Additionally, it’s designed in a way for developers to declare the types of API compositions they want using code, following a Git-based approach, instead of requiring UI-based setup and configuration.

Hasura

Hasura

Hasura provides GraphQL federation similar to WunderGraph. It provides a single GraphQL API to consumers with the ability to connect several underlying systems such as REST APIs and databases (e.g. Postgres, SQL Server, Oracle, etc.).

Examples of Orchestration

Digital Experience Orchestration with Conscia

DXO is an emerging capability pioneered by Conscia.ai to help solve the problem of integrating many back-ends to many front-ends. DXO helps to orchestrate the complexity in the middle via a unified layer that all back-end services and systems of records communicate with, as well as a unified front-end API for experiences to consume:

Conscia

A key tenet of this approach is to continue to leverage real-time APIs from the back-end systems, for example, a pure play headless CMS and a commerce engine. The DXO not only acts as a façade in front of these back-end systems (similar to an API gateway), but it also provides other benefits:

  • Unifies data across back-end systems of record like you see with Federation
  • Provides enhanced business logic and rules by allowing business users to chain APIs together, avoiding static logic written into code by developers
  • Offers performance improvements by caching the real-time calls to the back-end APIs as well as pushing as much computation (e.g. business logic) to the edge closer to the end users

One key value proposition of Conscia’s DXO is a business user-friendly canvas to integrate multiple API call responses and create chains of calls. For example, the response of one API call might become an input to another API, which is often hard-coded logic written by developers:

Conscia Canvas
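
For contrast, here is a minimal sketch of what that hard-coded chaining typically looks like inside a head, with invented endpoints and response shapes; this sequencing and its embedded business rule are exactly what a DXO lifts out of application code into configurable tooling.

```typescript
// Hard-coded API chaining inside a front-end/head (the pattern a DXO replaces).
// Endpoints and response shapes are hypothetical.
async function getEnrichedProduct(sku: string) {
  // Call 1: fetch the product record from a PIM
  const product = await fetch(`https://pim.example.com/products/${sku}`)
    .then((r) => r.json());

  // Call 2: the PIM response drives the next call, to an inventory service
  const stock = await fetch(
    `https://inventory.example.com/warehouses/${product.warehouseId}/stock/${sku}`
  ).then((r) => r.json());

  // Business rule frozen into code: hide items below a stock threshold
  return stock.quantity > 5 ? { ...product, inStock: true } : null;
}
```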

Conscia’s DXO provides two key capabilities:

  • DX Engine as the federation layer to unify multiple back-end sources of data and content, as well as a rules engine and intelligent decisioning to orchestrate the right content and data for the individual experience
  • DX Graph as a centralized hub of all data, especially useful if you have legacy back-end systems or proprietary systems with hard to access data. The DX Graph can connect to modern APIs to visualize (via a graph!) all of your data, but crucially it becomes a centralized hub for proprietary data as well that may require scheduled sync jobs, batch file processing, and similar ways to integrate.

Similar patterns: API Gateways & BFFs

Is this like an API Gateway?
Yes and no. An API gateway provides a façade on top of multiple back-end services and APIs; however, it mostly performs choreography as an event broker between the back-end and front-end (client). An orchestration system puts the brain in the API gateway, acting as a centralized hub, and allows business users to control more of the logic.

Is this similar to the BFF (backend for frontend) design pattern?
Sort of. If the specific federation or orchestration tooling you are using allows you to control the shape of your API responses for specific consumers (e.g. frontend clients in a BFF), then you can accomplish a BFF. This is definitely a good use case for Conscia.
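
As a rough sketch of that shaping idea (the unified record and field names below are invented), a BFF-style layer returns differently shaped responses from the same unified data depending on the consumer:

```typescript
// BFF-style response shaping: same unified data, per-consumer payloads.
interface UnifiedProduct {
  sku: string;
  name: string;
  longDescription: string;
  images: string[];
  price: number;
}

// Web head: rich payload for a product detail page
function shapeForWeb(p: UnifiedProduct) {
  return { sku: p.sku, name: p.name, description: p.longDescription,
           gallery: p.images, price: p.price };
}

// Mobile head: trimmed payload for a list view on a small screen
function shapeForMobile(p: UnifiedProduct) {
  return { sku: p.sku, name: p.name, thumbnail: p.images[0], price: p.price };
}
```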

Why do orchestration and federation matter in composable architecture?

In a truly composable stack, we need to consider the fact that multiple systems in use means multiple sources of truth: CMS, another CMS, maybe another, PIM, DAM, OMS, the list goes on. It is absolutely possible to integrate with all of these systems directly from your head (the experience implementation you are providing, such as a website, mobile app, etc.). However, direct integrations like this tend to break down when you scale to multiple experiences, since all of the back-end data integration logic lives in a specific experience implementation/head (e.g. the website application code).

So, what’s the alternative to putting the integrations directly in your head?

  • Abstract it out and build a DIY integration layer: this sounds like a lot of work, but it certainly is possible. However, it may be hard to scale, add features, and maintain since it will turn into a bespoke product within your architecture.
  • Buy a federation/orchestration tool: why build it when there are products that already handle this? Focus on your specific business instead of building (and maintaining!) a bespoke product (like a CMS, and PIM, and OMS, etc.)

A dedicated federation/orchestration layer offers the following key benefits:

  • A single unified API for consumers (marketing site, web portal, native mobile app, data syndication to other systems/channels, etc.)
  • Promotes the concept that systems of record should truly own their data and avoids needing to write custom middleware to handle the orchestration and logic across many systems (e.g. head-based integration or a custom integration layer)
  • Encourages reuse of data and content: it offers data-as-a-service, so you can focus on how to activate it on your channels.
  • May provide contextual intelligence to control and personalize API responses to individual visitors in a dedicated data layer to power tailored omnichannel experiences. 

We have it all, so what’s next?

Seems like we have everything we need here, what else is there? Let’s (re)package up the first three capabilities into a larger topic – stay tuned for part 4 where we will talk about Digital Experience Composition (DXC).

Composable Martech: Experience Builders
https://blogs.perficient.com/2024/04/18/composable-martech-experience-builders/ (Thu, 18 Apr 2024)

Welcome back for Part 2 in a series on composable martech, where we unpack the stack into the emerging capabilities that make modern architectures. This time we’re talking about Experience Builders, which go hand-in-hand with Part 1’s topic, Front End as a Service.

Experience Builders have been around for a long time. In fact, the concept of drag and drop or point and click page template creation has been around for ages, going back to early content management systems as the visual on-page editor experience, very common with traditional monolithic all-in-one (!) suites. In this context we’re talking about Experience Builders in a composable stack, where the underlying CMS may not be an all-in-one solution or a hybrid headless solution.

As I mentioned, Experience Builders go together with the prior topic, Front End as a Service (FEaaS). While FEaaS focuses on the individual UI component creation, design, and data binding, Experience Builders allow business users to compose multiple components together to form bespoke web pages or reusable page templates. In fact, many FEaaS providers also offer the Experience Builder capability, allowing users to build the repository of components and assemble them together into page layouts.

Traditional Experience Builders

As a baseline of understanding, let’s first look at experience builders that come embedded in traditional CMS/DXP solutions – whether you consider them to be monolithic, all-in-one, unified, etc.

Traditional experience builders are typically offered as the out-of-the-box visual in-line on-page editors provided with a CMS (or DXP). They often support either drag-and-drop or point-and-click page layout creation, with placeholders or content areas to assign UI components. The degree of design freedom is often configurable, from a blue-sky blank slate where you can design the layout of the whole page, to what I like to call “paint by numbers,” where the page layout and components are fixed and only the content needs to be entered.

Below are a few examples of traditional experience builders that you may have seen or worked with over the years.

Sitecore’s Experience Editor (fka Page Editor) is the point-and-click turned drag-and-drop visual editor of the XP product with the ability to in-line edit content and add UI components into page-level placeholders:

Sitecore Editor1

Sitecore Editor2

Optimizely’s On Page Editor provides similar capabilities as Sitecore with on-page in-line editing and placement of content and blocks into content areas:

Optimizely Editor

HubSpot’s SaaS CMS has a visual editor as well, likely what you would expect from a CMS that manages and delivers its own content:

Hubspot Editor

Composable Experience Builders

Now let’s move on to modern composable experience builders since that’s the topic of this series. This is where things get a bit blurry between traditional suite providers going composable and headless-first pure play point solutions. Monolithic-turned-composable DXP players (if you believe that’s even a thing) and pure play headless CMSs are both “meeting in the middle” (a great Deane Barker phrase that I completely agree with) with a lot of emphasis on Visual Editing with headless-based solutions. This is a big topic that is being discussed a lot, recently from industry veteran Preston So with the idea of a Universal CMS. I’ve even written about it before as an emerging trend in the space.

As you may know, headless CMSs were initially celebrated by developers for their ability to use web APIs to fetch content, freeing developers from the underlying CMS platform technology. However, this came at a price for marketers, who often lost the visual editing ability we covered in the traditional experience builders above. Well, it’s 2024 now, and times have changed. Many solutions out there are turning to visual editors that work with headless technology to bring the power back to the marketers while keeping developers happy with modern web APIs. Let’s take a look at some examples and the wide spectrum of solutions.

dotCMS’s Universal Visual Editor

Dotcms Editor

dotCMS’s new Universal Visual Editor is an on-page in-line editing app built to support headless sites, including SPAs, and other external apps that may fetch content from dotCMS.

AEM’s Universal Editor

Aem Editor

AEM’s new Universal Editor supports visual editing for any type of AEM implementation, from server-side, client side, React, Angular, etc.

Contentful Studio

Contentful Studio

Contentful Studio includes a newly released visual builder called Experiences. This is a very interesting example where a traditional pure play headless CMS is clearly pushing upmarket, innovating to capture the needs of marketers with this no-code drag-and-drop tooling.

Optimizely’s Visual Builder

Opti Visual Builder

Optimizely’s yet-to-be-released (likely arriving in H2 of 2024) Visual Builder appears to follow suit with the likes of Adobe, dotCMS, and other traditional suites by offering a SaaS-based editor for headless implementations. Its release is likely timed to follow the launch of the upcoming SaaS CMS offering.

Sitecore XM Cloud Components & Pages

Xmc Pages

Sitecore XM Cloud’s Components and Pages go hand-in-hand: FEaaS and Experience Builder working in harmony. Interestingly, these interfaces not only support Sitecore-based components such as built-in UI components (OOTB), Sitecore SDK-developed components (via developers), or FEaaS components (via authors), they also allow developers to “wrap” external components and allow marketers to use and author with them, bringing a whole new level of flexibility outside the world of Sitecore with bring your own components (BYOC). This brings composability to a whole new level when coupled with using external data sources for component data binding.

Uniform

Uniform Visual Workspace

In many ways Uniform lives in a category of its own. We’ll unpack that (pun intended) a bit more in a future article in this series. That said, one of the most well-known capabilities of Uniform is its Visual Workspace. Uniform empowers marketers though the visual editor by focusing on the expression of the underlying content and data, bringing the actual experience to the forefront. There’s much more to be said about Uniform on this topic, so stay tuned.

Netlify Create & Builder.io

As I mentioned in the prior article on FEaaS, Netlify Create and Builder.io both offer modern composable Experience Builders that are very similar in nature but have some notable nuances:

  • Builder.io goes to market as a Visual Headless CMS, offering the experience builder visual editing with its own CMS under the hood so you are not required to bring in another tool to manage the content. However, Builder.io also supports using an outside CMS such as Contentful, Contentstack, and Kontent.ai
  • Create provides an Experience Builder canvas that works solely with external CMSs supporting Contentful, Sanity, DatoCMS, and other custom options available

Features of a modern Experience Builder

So, what features do modern experience builders provide? (A sketch of how a composed layout might be serialized follows the list.)

  • Design reusable page layout templates through structural components (columns, splitters, etc) on a visual canvas
  • Assemble and arrange page layouts with UI components (often built via FEaaS tools)
  • Configure visual design systems for consistent look and feel when multiple components come together, for example, typefaces, colors, sizing, padding/margin, etc.
  • Ability to communicate with multiple back-ends to compose an experience sourced from multiple systems (e.g. a product detail page sourcing data from a CMS, PIM, and DAM)
    • Note: many pure play providers offer this while some suite providers are “closed” to their own content – but perhaps other features in the stack can solve this. Keep reading this series to learn more 🤔
  • Flexibility to build both reusable page layout templates and one-off bespoke pages (e.g. unique landing pages such as campaigns)
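
To make that composition model concrete, here is a minimal sketch of how a builder might serialize a composed page: a tree of structural components whose leaves bind UI components to content sources. The type names and shape are illustrative, not any specific vendor’s format.

```typescript
// Illustrative shape for a page layout serialized by an experience builder.
type ContentBinding = {
  system: "cms" | "pim" | "dam"; // which back-end hydrates the component
  query: string;                 // e.g. an entry ID or a GraphQL query
};

type LayoutNode =
  | { kind: "section"; columns: LayoutNode[][] }                  // structural
  | { kind: "component"; name: string; binding: ContentBinding }; // UI component

// A one-off landing page sourcing data from two different systems
const landingPage: LayoutNode = {
  kind: "section",
  columns: [
    [{ kind: "component", name: "HeroBanner",
       binding: { system: "cms", query: "entry:hero-spring-campaign" } }],
    [{ kind: "component", name: "ProductCard",
       binding: { system: "pim", query: "sku:12345" } }],
  ],
};
```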

The combination of FEaaS and an Experience Builder is very common in a composable stack made up of many headless products as it empowers business users without needing developers to implement integrations with back-end APIs. I’ve written about this topic before in the CMSWire article “5 Visual Editing Trends in Composable Martech Tools.” At the end of the day, Experience Builders are the “visual editor” that was missing in the early days of pure play headless CMS’s that mostly favored a great developer experience with elegant APIs and SDKs.

What’s next?

Where are we going to get all of the content from? It’s all about the data! The next topic will cover the underlying systems that can serve the right content and data to these experiences. Stay tuned to learn more about orchestration and federation!

Composable Martech: What is Front End as a Service (FEaaS)?
https://blogs.perficient.com/2024/04/10/composable-martech-what-is-front-end-as-a-service-feaas/ (Wed, 10 Apr 2024)

Welcome to Part 1 in a series on composable martech, where we unpack the stack into the emerging capabilities that make modern architectures. This round is focused on Front End as a Service.

Front End as a Service (FEaaS) is a fairly new term for a concept that isn’t necessarily new; however, it is becoming more common as martech tools pivot towards SaaS solutions with API-first headless capabilities.

FEaaS represents the top of a composable technology stack, the front-end of an experience that end users interact with. This can be a traditional website, a web-based logged-in portal/application, a commerce storefront, or even a touch-based kiosk web application.

Benefits of FEaaS

FEaaS has an overall goal of empowering business users to take control of the front-end experience powered by headless products without requiring developers. These tools give business users the ability to design and lay out UI components from atomic elements onto a canvas, and to bind “back end” data from various martech tools; that data binding is sometimes called “glue code” in the headless world and is the biggest criticism of headless products.

FEaaS offers the following benefits in a composable architecture (a sketch of the Web Components approach follows the list):

  • Drag and drop low/no code tooling for business users to lay out and design individual UI components to be used in a digital experience
  • Ability to bind custom components to back-end data via APIs, for example mapping new components on the front-end to a CMS API response to fetch content
  • Some FEaaS providers also offer ways to make components reusable and delivered via a CDN, for example wrapping them in Web Components for generic reusability.
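
Here is a minimal sketch of the delivery pattern in the last point: a UI component wrapped as a Web Component that binds itself to a CMS API at runtime. The endpoint and response shape are hypothetical, and FEaaS tools generate this kind of artifact for you rather than having you hand-write it.

```typescript
// A hand-rolled version of what FEaaS tools generate: a reusable Web Component
// that fetches its own content from a (hypothetical) headless CMS endpoint.
class CmsHeroBanner extends HTMLElement {
  async connectedCallback() {
    // The entry to render is passed as an attribute by the embedding page
    const entryId = this.getAttribute("entry-id");
    const res = await fetch(`https://cms.example.com/api/entries/${entryId}`);
    const { headline, imageUrl } = await res.json(); // assumed response shape

    this.attachShadow({ mode: "open" }).innerHTML = `
      <img src="${imageUrl}" alt="" style="max-width:100%">
      <h1>${headline}</h1>`;
  }
}

// Registered once, then usable anywhere the script is loaded (e.g. via a CDN):
// <cms-hero-banner entry-id="hero-spring-campaign"></cms-hero-banner>
customElements.define("cms-hero-banner", CmsHeroBanner);
```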

Examples of FEaaS

The spectrum of FEaaS providers is quite wide and includes pureplay headless content providers that are pushing upmarket to provide more business user tools beyond their roots as API-first developer-friendly tools. Additionally, traditional enterprise suite providers are pushing down-market to appease business users that want more flexibility and control of the digital experience without requiring “expensive” platform developers to build UI components and integrate them with back-end APIs.

Netlify Create (fka Stackbit) offers a canvas to design and layout components. Like many other providers, this offers both the component creation capability (FEaaS) as well as an Experience Builder (or Page Builder) to compose multiple components together into a page template. We’ll get more into Experience Builders in the next post.

Netlify Create

Sitecore XM Cloud Components is another example of FEaaS, offering the ability to create bespoke components from atomic elements, like sections, boxes, images, text elements (paragraphs, headings, etc). It’s basically browser-based Figma with the output being functional UI components that can accept data from various martech systems (PIM, CMS, DAM, etc.).

XM Cloud Components

To illustrate the key point about accepting data from various systems, Sitecore XM Cloud Components offers a data sources section to register external (and internal) API endpoints to fetch data from other systems via API endpoints and GraphQL:

XM Cloud Components Data Sources

Builder.io offers similar functionality as Netlify Create with a canvas for drag and drop atomic elements. Additionally, it provides a page template builder which we will discuss in the next post, as well as its own built-in headless CMS. 

Builder.io

Finally, Shogun is a FEaaS provider (among other things) for commerce storefronts. Like others, it provides a drag and drop canvas to compose elements together into bespoke reusable components intended for a commerce store with the integration “glue” into Shopify and BigCommerce as the underlying commerce engines.

Shogun

What’s next after FEaaS?

As you can see FEaaS is an incredibly powerful capability in the composable stack, especially when multiple headless products are being integrated. FEaaS brings back the power to the business user to design and build UI components without relying on developers to build them as custom components and handle all of the data binding with back-end systems. That said, FEaaS is a complex capability and should be limited to mature digital teams that can govern component creation and usage.

The next topic we’ll cover in this series will focus on how we put these components together to compose actual page layouts and designs: Experience Builders.

Composable Martech: Unpack the Stack
https://blogs.perficient.com/2024/04/10/composable-martech-unpack-the-stack/ (Wed, 10 Apr 2024)

It’s clear in 2024 that the martech space continues to adopt a composable mindset towards technology architectures. Traditional martech players are offering more flexible product offerings by decoupling architectures, breaking up large suites into independent composable products. Newer cloud-native pureplay and point solutions are offering extension points and turn-key integrations with other composable tools to fit nicely into modern architectures. New and emerging categories of composable products are being born, helping to integrate and orchestrate workflows, content, and data across a variety of loosely connected martech products.

Products across the full spectrum are doubling down on improving marketing and business user tooling, ensuring they provide a delightful experience to business users and developers alike.

The purpose of this series is to unpack many of the new trends, capabilities, and categories of products in modern martech architectures. We’ll cover a variety of topics, from visual authoring experiences, content aggregation, multi-product orchestration, and integration tools. Let’s dive into the series from here – below you will find an up-to-date index of everything in the series.

Part 1: Frontend as a Service (FEaaS)

Part 2: Experience Builders

Part 3: Orchestration & Federation

Overview and Basic Concepts of Adobe Experience Manager (AEM) Components
https://blogs.perficient.com/2024/04/05/overview-and-basic-concepts-of-adobe-experience-manager-aem-components/ (Fri, 05 Apr 2024)

Adobe Experience Manager (AEM) is a sophisticated and versatile content management tool. Components are the elements authors use to structure a page, for example the header, body, and footer. AEM’s core components have always allowed authors to create pages that are both efficient and simple to use, whereas developers can create components that are customizable and expandable. In this blog, we will look at the mysterious world of AEM components and how they impact the platform’s overall functionality and user experience.

Fig. Core Components Library

Classification of AEM Components: 

Definition and Types: AEM components are customizable elements that offer unique features or content. Components are classified into two categories:

  1. Core components, accessible through AEM out of the box.
  2. Custom components, designed to fit specific business needs.

Foundational component types include text, image, and navigation components.

Fig. Types of Components

Adobe Experience Manager lets users create, manage, and publish digital content, and AEM components are crucial to the process of creating webpages. They serve a variety of functions and can be tailored to meet specific requirements. Developing effective and efficient websites requires a thorough understanding of the various AEM component types and their applications.

Applications of AEM Components:

  1. Content Authoring: Components enable efficient content creation and management. Authors can leverage the user-friendly AEM interface to drag and drop components into pages, resulting in dynamic and visually appealing layouts. This visual creation method minimizes the need for developers to make routine content changes.

    Fig. Author and Publisher

  2. Responsive Design: AEM components are especially important for responsive design. As mobile devices become more common, components are designed to work seamlessly across a wide range of platforms, ensuring a positive user experience. This flexibility is critical to reaching a wider demographic.

    Fig. Responsive Viewer

  3. Customization and Extensibility: AEM’s custom components enable businesses to tailor digital experiences to specific demands. Developers can build custom components that integrate with other applications, provide specialized functionality, or comply with particular design specifications, giving teams outstanding flexibility.

Merits of AEM Components:

  1. AEM Core Components offer uniformity, which shortens development processes and saves time. 
  2. Core Components are intended to be robust and adaptable. They provide a stable basis for website development, helping you to easily add unique AEM functionality, connect external apps, and adapt to shifting business requirements. 
  3. This adaptability ensures that websites built with Core Components can evolve and develop alongside your business.
  4. AEM Core Components adhere to best practices for efficient rendering, caching, and structured data, since they emphasize performance and SEO (Search Engine Optimization).

Behind the Scenes: A Glimpse into Component Development 

  1. Component Structure: Components in AEM are developed following a structured approach. This includes creating the component’s Java logic, defining its dialog for content authoring, and designing the component’s front end using HTL (HTML Template Language) or JSP (Java Server Pages). This separation of concerns ensures maintainability and ease of development.
  2. Component Lifecycle: Understanding the lifecycle of an AEM component is crucial for developers. Components go through initialization, rendering, and destruction phases. Developers need to be mindful of these stages to optimize component behavior and performance.

Conclusion: 

Adobe Experience Manager components are integral to creating customized and engaging digital experiences, joining together to form a smooth and cohesive user experience. Their importance in content authoring, responsive design, and extensibility cannot be overstated, as they enable businesses to tailor experiences to their audiences. Even as we navigate the ever-changing terrain of digital experiences, the role of AEM components in shaping the online world remains constant. 

 

]]>
https://blogs.perficient.com/2024/04/05/overview-and-basic-concepts-of-adobe-experience-manager-aem-components/feed/ 1 361159
Adobe AEMaaCS Integration with OpenAI Assistants API Demo https://blogs.perficient.com/2024/02/28/adobe-aemaacs-integration-with-openai-assistants-api/ https://blogs.perficient.com/2024/02/28/adobe-aemaacs-integration-with-openai-assistants-api/#comments Wed, 28 Feb 2024 16:48:29 +0000 https://blogs.perficient.com/?p=355679

About the OpenAI Assistants API

The OpenAI Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API is designed to help developers build powerful AI assistants capable of performing a variety of tasks.

Unlike OpenAI's Chat Completions API, the Assistants API is an agent framework. Your instruction is similar to a system prompt, but it is only one part of the input; the Assistant also manages other messages and internal functions that are outside of your control.

Assistants allow, and even encourage, the AI to make multiple calls persistently: retrieving parts of uploaded documents via internal functions, writing Python code and sending it to a sandbox that can run it, and finally emitting function calls back to you in a fashion similar to chat completions.

The API also keeps a record of user input and AI answers that make up a conversation (a thread). You place a user question on the thread, run the thread, wait for an answer to be created, check the status, and then download the finished answer; alternatively, you may find that the AI is waiting for you to run a tool function on its behalf.

The AI is loaded with the maximum amount of conversation and documents that will fit in the model's context. Note that the API response includes no token usage statistic to show how much you will be billed.

To get more information on the Assistants API, please check the introduction here: https://platform.openai.com/docs/assistants/overview

The API document can be found here: https://platform.openai.com/docs/api-reference/assistants/createAssistant

How to Integrate the OpenAI Assistants API With AEMaaCS

In this blog, you'll learn how we customized AEM's Core Teaser component to help you understand how to integrate the OpenAI Assistants API with AEMaaCS. You can view a video demo or follow the written instructions below.

The AEM implementation includes the following parts:

  • A customized AEM Core Teaser component.
  • An AEM servlet to receive the request from the Teaser component dialog, call the OSGi service, and return the response to the Teaser component dialog UI.
  • An AEM OSGi service to access the OpenAI Assistants API.
  • An OSGi configuration to provide required configuration values during execution.
  • An AEM OSGi service that works as the HTTP client factory. This factory class uses the values from the above OSGi configuration to access the OpenAI Assistants API.

Other related sources can be found in the shared GitHub repository.

1. Customize the AEM Core Teaser Component

First, we want to customize the dialog of the Teaser component.

Under the Text tab of the dialog user interface, if the author de-selects the 'Get description from linked page' checkbox, the description text is authored manually rather than retrieved from the linked page. We add a text area here to let the author input the instruction/prompt. We also add a button under it; when the author clicks the button, a request is sent to the AEM servlet to access the Assistants API. The RTE under the button displays the response from the Assistants API. The author can edit the RTE content and click the 'Done' button to update the Teaser component.

The new Teaser Dialog will look like the image below.


Customized Teaser Dialog UI

Below is the updated source code of the Core Teaser dialog.

<?xml version="1.0" encoding="UTF-8"?>
...
                                            <descriptionFromLinkedPage
                                                    jcr:primaryType="nt:unstructured"
                                                    sling:resourceType="granite/ui/components/coral/foundation/form/checkbox"
                                                    checked="{Boolean}true"
                                                    fieldDescription="When checked, populate the description with the linked page's description."
                                                    name="./descriptionFromPage"
                                                    text="Get description from linked page"
                                                    uncheckedValue="{Boolean}false"
                                                    value="{Boolean}true"/>
                                            <descriptionGroup
                                                jcr:primaryType="nt:unstructured"
                                                sling:resourceType="granite/ui/components/coral/foundation/include"
                                                path="/mnt/overlay/openai-sample/components/commons/editor/dialog/chatgpt-rte-2">
                                            </descriptionGroup>
                                            <id
                                                jcr:primaryType="nt:unstructured"
                                                sling:resourceType="granite/ui/components/coral/foundation/form/textfield"
                                                fieldDescription="HTML ID attribute to apply to the component."
                                                fieldLabel="ID"
                                                name="./id"
                                                validation="html-unique-id-validator"/>
                                        </items>
                                    </column>
                                </items>
                            </columns>
                        </items>
                    </text>
...

In the <descriptionGroup> node above, the dialog reuses a common widget definition under '/mnt/overlay/openai-sample/components/commons/editor/dialog/chatgpt-rte-2'. This is its source code:

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0" xmlns:granite="http://www.adobe.com/jcr/granite/1.0" xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
    jcr:primaryType="nt:unstructured"
    jcr:title="ChatGPT RTE"
    sling:resourceType="granite/ui/components/coral/foundation/well">
    <items jcr:primaryType="nt:unstructured">
        <prompt
                jcr:primaryType="nt:unstructured"
                sling:resourceType="granite/ui/components/coral/foundation/form/textarea"
                fieldDescription="ChatGPT prompt: Tags, keyworks, phrases, etc..."
                emptyTextstring="Enter prompt for ChatGPT here"
                fieldLabel="Promps for ChatGPT"
                name="./txt_gptPrompt"
                rows="{Long}5"/>
        <gptButton jcr:primaryType="nt:unstructured"
                   name="./btnGroup"
                   required="{Boolean}false"
                   selectionMode="single"
                   sling:resourceType="granite/ui/components/coral/foundation/form/buttongroup">

            <items jcr:primaryType="nt:unstructured">
                <default jcr:primaryType="nt:unstructured"
                         name="./callAPI"
                         text="Generate"
                         value="false"
                         checked="{Boolean}false"
                         granite:class="chatGPTButton"
                         cq-msm-lockable="default"/>
            </items>
        </gptButton>
        <description
                jcr:primaryType="nt:unstructured"
                sling:resourceType="cq/gui/components/authoring/dialog/richtext"
                fieldDescription="A description to display as the subheadline for the teaser."
                fieldLabel="Description"
                name="./jcr:description"
                useFixedInlineToolbar="{Boolean}true">
            <rtePlugins jcr:primaryType="nt:unstructured">
                <format
                        jcr:primaryType="nt:unstructured"
                        features="bold,italic"/>
                <justify
                        jcr:primaryType="nt:unstructured"
                        features="-"/>
                <links
                        jcr:primaryType="nt:unstructured"
                        features="modifylink,unlink"/>
                <lists
                        jcr:primaryType="nt:unstructured"
                        features="*"/>
                <misctools jcr:primaryType="nt:unstructured">
                    <specialCharsConfig jcr:primaryType="nt:unstructured">
                        <chars jcr:primaryType="nt:unstructured">
                            <default_copyright
                                    jcr:primaryType="nt:unstructured"
                                    entity="&amp;copy;"
                                    name="copyright"/>
                            <default_euro
                                    jcr:primaryType="nt:unstructured"
                                    entity="&amp;euro;"
                                    name="euro"/>
                            <default_registered
                                    jcr:primaryType="nt:unstructured"
                                    entity="&amp;reg;"
                                    name="registered"/>
                            <default_trademark
                                    jcr:primaryType="nt:unstructured"
                                    entity="&amp;trade;"
                                    name="trademark"/>
                        </chars>
                    </specialCharsConfig>
                </misctools>
                <paraformat
                        jcr:primaryType="nt:unstructured"
                        features="*">
                    <formats jcr:primaryType="nt:unstructured">
                        <default_p
                                jcr:primaryType="nt:unstructured"
                                description="Paragraph"
                                tag="p"/>
                        <default_h1
                                jcr:primaryType="nt:unstructured"
                                description="Heading 1"
                                tag="h1"/>
                        <default_h2
                                jcr:primaryType="nt:unstructured"
                                description="Heading 2"
                                tag="h2"/>
                        <default_h3
                                jcr:primaryType="nt:unstructured"
                                description="Heading 3"
                                tag="h3"/>
                        <default_h4
                                jcr:primaryType="nt:unstructured"
                                description="Heading 4"
                                tag="h4"/>
                        <default_h5
                                jcr:primaryType="nt:unstructured"
                                description="Heading 5"
                                tag="h5"/>
                        <default_h6
                                jcr:primaryType="nt:unstructured"
                                description="Heading 6"
                                tag="h6"/>
                        <default_blockquote
                                jcr:primaryType="nt:unstructured"
                                description="Quote"
                                tag="blockquote"/>
                        <default_pre
                                jcr:primaryType="nt:unstructured"
                                description="Preformatted"
                                tag="pre"/>
                    </formats>
                </paraformat>
                <table
                        jcr:primaryType="nt:unstructured"
                        features="-">
                    <hiddenHeaderConfig
                            jcr:primaryType="nt:unstructured"
                            hiddenHeaderClassName="cq-wcm-foundation-aria-visuallyhidden"
                            hiddenHeaderEditingCSS="cq-RichText-hiddenHeader--editing"/>
                </table>
                <tracklinks
                        jcr:primaryType="nt:unstructured"
                        features="*"/>
            </rtePlugins>
            <uiSettings jcr:primaryType="nt:unstructured">
                <cui jcr:primaryType="nt:unstructured">
                    <inline
                            jcr:primaryType="nt:unstructured"
                            toolbar="[format#bold,format#italic,format#underline,#justify,#lists,links#modifylink,links#unlink,#paraformat]">
                        <popovers jcr:primaryType="nt:unstructured">
                            <justify
                                    jcr:primaryType="nt:unstructured"
                                    items="[justify#justifyleft,justify#justifycenter,justify#justifyright]"
                                    ref="justify"/>
                            <lists
                                    jcr:primaryType="nt:unstructured"
                                    items="[lists#unordered,lists#ordered,lists#outdent,lists#indent]"
                                    ref="lists"/>
                            <paraformat
                                    jcr:primaryType="nt:unstructured"
                                    items="paraformat:getFormats:paraformat-pulldown"
                                    ref="paraformat"/>
                        </popovers>
                    </inline>
                    <dialogFullScreen
                            jcr:primaryType="nt:unstructured"
                            toolbar="[format#bold,format#italic,format#underline,justify#justifyleft,justify#justifycenter,justify#justifyright,lists#unordered,lists#ordered,lists#outdent,lists#indent,links#modifylink,links#unlink,table#createoredit,#paraformat,image#imageProps]">
                        <popovers jcr:primaryType="nt:unstructured">
                            <paraformat
                                    jcr:primaryType="nt:unstructured"
                                    items="paraformat:getFormats:paraformat-pulldown"
                                    ref="paraformat"/>
                        </popovers>
                    </dialogFullScreen>
                    <tableEditOptions
                            jcr:primaryType="nt:unstructured"
                            toolbar="[table#insertcolumn-before,table#insertcolumn-after,table#removecolumn,-,table#insertrow-before,table#insertrow-after,table#removerow,-,table#mergecells-right,table#mergecells-down,table#mergecells,table#splitcell-horizontal,table#splitcell-vertical,-,table#selectrow,table#selectcolumn,-,table#ensureparagraph,-,table#modifytableandcell,table#removetable,-,undo#undo,undo#redo,-,table#exitTableEditing,-]"/>
                </cui>
            </uiSettings>
        </description>
    </items>
</jcr:root>

To control the behavior of the 'Generate' button and the Description RTE, we need to create customized JavaScript in a client library and have the Teaser component dialog use it.

In the new client library definition, we set the categories property to 'core.wcm.components.teaser.v2.gpt.editor3'.
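
For reference, a client library is defined by a cq:ClientLibraryFolder node. A minimal sketch of its .content.xml might look like the following (the exact folder location is an assumption for illustration; a js.txt file alongside it would list the JavaScript file shown further below):

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="cq:ClientLibraryFolder"
    categories="[core.wcm.components.teaser.v2.gpt.editor3]"/>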

Fig. Client library definition in CRXDE Lite

In the dialog properties, we add a new property called 'extraClientlibs'. The value of this property is the category name of the new client library. When the Teaser dialog opens, the JavaScript in the client library is loaded automatically.

 

Fig. Teaser dialog properties in CRXDE Lite

Here is the JavaScript file created in the Client Library:

(function($, Granite) {
    "use strict";

    var dialogContentSelector = ".cmp-teaser__editor";
    var actionsMultifieldSelector = ".cmp-teaser__editor-multifield_actions";
    var titleCheckboxSelector = 'coral-checkbox[name="./titleFromPage"]';
    var titleTextfieldSelector = 'input[name="./jcr:title"]';
    var descriptionCheckboxSelector = 'coral-checkbox[name="./descriptionFromPage"]';
    var descriptionCheckboxChatGPT = 'coral-checkbox[name="./descriptionFromChatGPT"]';
    var descriptionTextfieldSelector = '.cq-RichText-editable[name="./jcr:description"]';
    var titleTypeSelectElementSelector = "coral-select[name='./titleType']";
    var linkURLSelector = '[name="./linkURL"]';
    var chatGptDisplayGroupSelector = ".chatGPTGroup";
    var CheckboxTextfieldTuple = window.CQ.CoreComponents.CheckboxTextfieldTuple.v1;
    var titleTuple;
    var descriptionTuple;
    var linkURL;
    var gptButton = ".chatGPTButton";
    var gptPromptTextSelector = 'textarea[name="./txt_gptPrompt"]';


    $(document).on("dialog-loaded", function(e) {
        var $dialog = e.dialog;
        var $dialogContent = $dialog.find(dialogContentSelector);
        var dialogContent = $dialogContent.length > 0 ? $dialogContent[0] : undefined;

        if (dialogContent) {
            var $descriptionTextfield = $(descriptionTextfieldSelector);
            if ($descriptionTextfield.length) {
                if (!$descriptionTextfield[0].hasAttribute("aria-labelledby")) {
                    associateDescriptionTextFieldWithLabel($descriptionTextfield[0]);
                }
                var rteInstance = $descriptionTextfield.data("rteinstance");
                // wait for the description textfield rich text editor to signal start before initializing.
                // Ensures that any state adjustments made here will not be overridden.
                if (rteInstance && rteInstance.isActive) {
                    init(e, $dialog, $dialogContent, dialogContent);
                } else {
                    $descriptionTextfield.on("editing-start", function() {
                        init(e, $dialog, $dialogContent, dialogContent);
                    });
                }
            } else {
                // init without description field
                init(e, $dialog, $dialogContent, dialogContent);
            }
            manageTitleTypeSelectDropdownFieldVisibility(dialogContent);
        }
    });

    // Initialize all fields once both the dialog and the description textfield RTE have loaded
    function init(e, $dialog, $dialogContent, dialogContent) {
        titleTuple = new CheckboxTextfieldTuple(dialogContent, titleCheckboxSelector, titleTextfieldSelector, false);
        descriptionTuple = new CheckboxTextfieldTuple(dialogContent, descriptionCheckboxSelector, descriptionTextfieldSelector, true);
        retrievePageInfo($dialogContent);

        var $linkURLField = $dialogContent.find(linkURLSelector);
        if ($linkURLField.length) {
            linkURL = $linkURLField.adaptTo("foundation-field").getValue();
            $linkURLField.on("change", function() {
                linkURL = $linkURLField.adaptTo("foundation-field").getValue();
                retrievePageInfo($dialogContent);
            });
        }

        var $actionsMultifield = $dialogContent.find(actionsMultifieldSelector);
        $actionsMultifield.on("change", function(event) {
            var $target = $(event.target);
            if ($target.is("foundation-autocomplete")) {
                updateText($target);
            } else if ($target.is("coral-multifield")) {
                var $first = $(event.target.items.first());
                if (event.target.items.length === 1 && $first.is("coral-multifield-item")) {
                    var $input = $first.find(".cmp-teaser__editor-actionField-linkUrl");
                    if ($input.is("foundation-autocomplete")) {
                        var value = $linkURLField.adaptTo("foundation-field").getValue();
                        if (!$input.val() && value) {
                            $input.val(value);
                            updateText($input);
                        }
                    }
                }
            }
            retrievePageInfo($dialogContent);
        });

        //If get description from linked page: Unselect chatGPT checkbox and disable it,
        var $chatGPTChkBox = $(descriptionCheckboxChatGPT);
        $chatGPTChkBox.change(function() {
            if(this.checked) {
                $(chatGptDisplayGroupSelector).toggleClass('hide', false);
            }else{
                $(chatGptDisplayGroupSelector).toggleClass('hide', true);
            }
        });

        //Show hide ChatGPTGroup components when click checkbox
        var $fromLinkChkBox = $(descriptionCheckboxSelector);
        $fromLinkChkBox.change(function() {
            if(this.checked) {
                $chatGPTChkBox.attr("disabled", true);
                $chatGPTChkBox.prop('checked', false);
                $(chatGptDisplayGroupSelector).toggleClass('hide', true);
            }else{
                $chatGPTChkBox.removeAttr("disabled");
            }
        });

        //Call ChatGPT API when click button
        $(gptButton).on("click", function(event) {
            console.info("Calling ChatGPT api v3...");
            updateDescriptionWithChatGPT($dialogContent);
        });

    }


    function retrievePageInfo(dialogContent) {
        var url;
        if (linkURL === undefined || linkURL === "") {
            url = dialogContent.find('.cmp-teaser__editor-multifield_actions [data-cmp-teaser-v2-dialog-edit-hook="actionLink"]').val();
        } else {
            url = linkURL;
        }
        // get the info from the current page in case no link is provided.
        if ((url === undefined || url === "") && (Granite.author && Granite.author.page)) {
            url = Granite.author.page.path;
        }

        if (url && url.startsWith("/")) {
            return $.ajax({
                url: url + "/_jcr_content.json"
            }).done(function(data) {
                if (data) {
                    titleTuple.seedTextValue(data["jcr:title"]);
                    titleTuple.update();
                    descriptionTuple.seedTextValue(data["jcr:description"]);
                    descriptionTuple.update();
                }
            });
        } else {
            titleTuple.update();
            descriptionTuple.update();
        }
    }

    function updateText(target) {
        var url = target.val();
        if (url && url.startsWith("/")) {
            var textField = target.parents("coral-multifield-item").find('[data-cmp-teaser-v2-dialog-edit-hook="actionTitle"]');
            if (textField && !textField.val()) {
                $.ajax({
                    url: url + "/_jcr_content.json"
                }).done(function(data) {
                    if (data) {
                        textField.val(data["jcr:title"]);
                    }
                });
            }
        }
    }

    function associateDescriptionTextFieldWithLabel(descriptionTextfieldElement) {
        var richTextContainer = document.querySelector(".cq-RichText.richtext-container");
        if (richTextContainer) {
            var richTextContainerParent = richTextContainer.parentNode;
            var descriptionLabel = richTextContainerParent.querySelector("label.coral-Form-fieldlabel");
            if (descriptionLabel) {
                descriptionTextfieldElement.setAttribute("aria-labelledby", descriptionLabel.id);
            }
        }
    }

    /**
     * Hides the title type select dropdown field if there's only one allowed heading element defined in a policy
     *
     * @param {HTMLElement} dialogContent The dialog content
     */
    function manageTitleTypeSelectDropdownFieldVisibility(dialogContent) {
        var titleTypeElement = dialogContent.querySelector(titleTypeSelectElementSelector);
        if (titleTypeElement) {
            Coral.commons.ready(titleTypeElement, function(element) {
                var titleTypeElementToggleable = $(element.parentNode).adaptTo("foundation-toggleable");
                var itemCount = element.items.getAll().length;
                if (itemCount < 2) {
                    titleTypeElementToggleable.hide();
                }
            });
        }
    }


    function updateDescriptionWithChatGPT(dialogContent){

        var prompt = getPromptPhrase();
        // Encode the prompt so special characters survive the query string
        var urlChatGPTCall = "/bin/assistantServlet" + "?content=" + encodeURIComponent(prompt) + "&role=user";
        console.log("urlChatGPTCall = " + urlChatGPTCall);
        //For description
        return $.ajax({
            url: urlChatGPTCall
        }).done(function(data) {
            if (data) {
                data = JSON.parse(data);
                console.info("------ data = " + data);
                var chatGPTResponse = data.answer;
                console.info("------ chatGPTResponse = " + chatGPTResponse);

                var $descriptionTextfield = $(descriptionTextfieldSelector);
                if ($descriptionTextfield.length) {
                    // Update both the stored previous value and the visible RTE content
                    $descriptionTextfield.attr("data-previous-value", "<p>" + chatGPTResponse + "</p>");
                    $descriptionTextfield.html("<p>" + chatGPTResponse + "</p>");
                }
            }
        });
    }

    function getPromptPhrase() {
        return $(gptPromptTextSelector).val();
    }


})(jQuery, Granite);

When the author clicks the 'Generate' button in the dialog UI, the updateDescriptionWithChatGPT() function in the JavaScript will be called.  This function calls the AEM servlet at the endpoint '/bin/assistantServlet'.  Two parameters are sent with the request, as illustrated after the list below:

  • content: the prompt text, i.e., the content of the text area in the dialog
  • role: the role value required by the Assistants API. The default value is 'user'
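
For illustration only, a request and a successful response might look like the following. The prompt and answer text are made up; the servlet (shown in the next section) returns a JSON object whose answer field carries the generated text.

GET /bin/assistantServlet?content=Write%20a%20short%20teaser%20description&role=user

{"answer":"Discover rugged comfort for every trail..."}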

2. An AEM Servlet to Receive the Request From the Teaser Component Dialog

package com.perficient.aem.sample.openai.core.servlets;

import com.perficient.aem.sample.openai.core.services.ChatGPTAPIService;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.apache.sling.auth.core.AuthConstants;
import org.jetbrains.annotations.NotNull;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import javax.servlet.Servlet;
import javax.servlet.ServletException;
import javax.servlet.http.HttpSession;
import java.io.IOException;

@Slf4j
@Component(service = { Servlet.class }, property = {
        "sling.servlet.paths=" + ChatGPTAssistantServlet.RESOURCE_PATH,
        "sling.servlet.methods=GET",
        AuthConstants.AUTH_REQUIREMENTS + "=-"+ ChatGPTAssistantServlet.RESOURCE_PATH})
public class ChatGPTAssistantServlet extends SlingSafeMethodsServlet {

    private static final long serialVersionUID = 1L;
    static final String RESOURCE_PATH = "/bin/assistantServlet";
    static final String CONTENT = "content";
    static final String ROLE = "role";

    @Reference
    private ChatGPTAPIService apiService;


    @Override
    protected void doGet(@NotNull SlingHttpServletRequest request, @NotNull SlingHttpServletResponse response) throws ServletException, IOException {
        String content = request.getParameter(CONTENT);
        String role = request.getParameter(ROLE);
        JSONObject jsonObject = new JSONObject();

        try {
            // Servlets are singletons, so keep the session in a local variable
            // rather than an instance field (a shared field is not thread-safe).
            HttpSession session = request.getSession(true);
            session.setMaxInactiveInterval(30 * 60);
            String sessionThreadId = (String) session.getAttribute("thread_id");
            if (sessionThreadId == null || StringUtils.isEmpty(sessionThreadId)) {
                jsonObject = createNewThreadRun(session, role, content);
            }
            else{
                //A threadId exist in session: retrieve the thread
                String respThreadId = apiService.retrieveThread(sessionThreadId);
                if(respThreadId != null && respThreadId.equalsIgnoreCase(sessionThreadId)){
                    //This is a valid thread.
                    jsonObject = addNewMessageOnThreadThenRun(respThreadId, role, content);
                }
                else{
                    //Thread not exist anymore, need work as a new request:
                    jsonObject = createNewThreadRun(session, role, content);
                }


            }
        } catch (JSONException e) {
            log.error(e.getMessage());
        }
        response.setContentType("text/html; charset=UTF-8");
        response.getWriter().print(jsonObject);

    }

    private JSONObject createNewThreadRun(HttpSession session, String role, String content){
        JSONObject jsonObject = new JSONObject();
        String result = apiService.creaetThreadAndRun(role, content);
        JSONObject resultJson = new JSONObject(result);
        String run_Id = resultJson.getString("id");
        String thread_id = resultJson.getString("thread_id");

        //Add thread to session
        session.setAttribute("thread_id", thread_id);
        jsonObject = retrieveRunAndGetAnswer(thread_id, run_Id);

        return jsonObject;
    }

    private JSONObject addNewMessageOnThreadThenRun(String threadId, String role, String content){
        JSONObject jsonObject = new JSONObject();
        String messageId =  apiService.createMessage(threadId,role,content);
        if(messageId != null){
            //Message is created. create a run
            String runId = apiService.createRun(threadId);
            //retrieveRun and get answers
            if(runId != null){
                jsonObject = retrieveRunAndGetAnswer(threadId, runId);
            }
        }
        return jsonObject;

    }


    private JSONObject retrieveRunAndGetAnswer(String thread_id, String run_Id){

        JSONObject jsonObject = new JSONObject();
        String allAnswsers = "";
        //Retrieve Run to check status
        String status = apiService.retrieveRun(thread_id, run_Id);
        if(status.equalsIgnoreCase("completed")){
            //Run completed, need list messages
            String listMessageResponse = apiService.listMessages(thread_id);
            JSONObject listMessageResponseJson = new JSONObject(listMessageResponse);
            JSONArray messages =  listMessageResponseJson.getJSONArray("data");
            for(int i=0; i<messages.length(); i++){
                JSONObject aMessage = (JSONObject)messages.get(i);
                if(aMessage.getString("role").equalsIgnoreCase("assistant")){
                    //This is the answer?
                    JSONArray contentArray = aMessage.getJSONArray("content");
                    JSONObject aContent = (JSONObject)contentArray.get(0);
                    allAnswsers += aContent.getJSONObject("text").getString("value") + "\n";
                }
            }

            jsonObject.put("answer", allAnswsers);
        }

        return jsonObject;
    }
}

Look at the session handling in doGet(). The Java code uses the HttpSession to save the Assistants thread ID. If the HttpSession does not exist (or has expired), or the thread ID is no longer valid in OpenAI Assistants, the createNewThreadRun() function is called; otherwise we reuse the thread and add a new message to it via addNewMessageOnThreadThenRun().

3. An AEM OSGi Service to Access the OpenAI Assistants API

package com.perficient.aem.sample.openai.core.services.impl;

import com.perficient.aem.sample.openai.core.bean.CreateThreadRun;
import com.perficient.aem.sample.openai.core.bean.Message;
import com.perficient.aem.sample.openai.core.bean.SummaryBean;
import com.perficient.aem.sample.openai.core.services.ChatGPTAPIService;
import com.perficient.aem.sample.openai.core.services.ChatGptHttpClientFactory;
import com.perficient.aem.sample.openai.core.services.JSONConverter;
import com.perficient.aem.sample.openai.core.services.config.ChatGptHttpClientFactoryConfig;
import com.perficient.aem.sample.openai.core.utils.StringObjectResponseHandler;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.http.entity.ContentType;
import org.json.JSONObject;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import java.io.IOException;
import java.util.concurrent.*;

@Slf4j
@Component(service = ChatGPTAPIService.class)
public class ChatGptAPIServiceImpl implements ChatGPTAPIService {

    private static final StringObjectResponseHandler HANDLER = new StringObjectResponseHandler();

    @Reference
    private ChatGptHttpClientFactory httpClientFactory;

    @Reference
    private JSONConverter jsonConverter;

    //1. Create Thread and Run (When session not exist)
    //POST   https://api.openai.com/v1/threads/runs
    @Override
    public String creaetThreadAndRun(String role, String content) {
        String responseString = StringUtils.EMPTY;
        try {
            ChatGptHttpClientFactoryConfig config =  httpClientFactory.getConfig();
            String assistantId= config.assistantId();
            String bodyString = generateMessage_createThreadRun(assistantId, content, role);

            responseString = httpClientFactory.getExecutor()
                    .execute(httpClientFactory.postCreateThreadRun().bodyString(bodyString, ContentType.APPLICATION_JSON))
                    .handleResponse(HANDLER);
        } catch (IOException e) {
            log.error("Error occured while create thread and run {}", e.getMessage());
        }
        log.debug("creaetThreadAndRun: {}", responseString);
        return responseString;
    }


    //2. list Message
    //GET  https://api.openai.com/v1/threads/{thread_id}/messages
    @Override
    public String listMessages(String threadId) {
        String responseString = StringUtils.EMPTY;
        try {

            responseString = httpClientFactory.getExecutor()
                    .execute(httpClientFactory.listMessages(threadId))
                    .handleResponse(HANDLER);
        } catch (IOException e) {
            log.error("Error occured while list messages {}", e.getMessage());
        }
        log.debug("listMessages: {}", responseString);
        return responseString;
    }

    //3. Create a message
    //POST   https://api.openai.com/v1/threads/{thread_id}/messages
    @Override
    public String createMessage(String threadId,String role, String content) {
        String responseString = StringUtils.EMPTY;
        try {

            String bodyString = getCreateMessageBody(role, content);

            responseString = httpClientFactory.getExecutor()
                    .execute(httpClientFactory.postCreateMessage(threadId).bodyString(bodyString, ContentType.APPLICATION_JSON))
                    .handleResponse(HANDLER);
        } catch (IOException e) {
            log.error("Error occured while create Message {}", e.getMessage());
        }
        log.debug("creaetThreadAndRun: {}", responseString);
        JSONObject jsonObject = new JSONObject(responseString);
        String messageId = jsonObject.getString("id"); // id of the newly created message
        return messageId;
    }

    //4. Create a Run
    //POST   https://api.openai.com/v1/threads/{thread_id}/runs
    @Override
    public String createRun(String threadId) {
        String responseString = StringUtils.EMPTY;
        try {
            ChatGptHttpClientFactoryConfig config =  httpClientFactory.getConfig();
            String assistantId= config.assistantId();
            String bodyString = getCreateRunBody(assistantId);

            responseString = httpClientFactory.getExecutor()
                    .execute(httpClientFactory.postCreateRun(threadId).bodyString(bodyString, ContentType.APPLICATION_JSON))
                    .handleResponse(HANDLER);
        } catch (IOException e) {
            log.error("Error occured while run Assistant {}", e.getMessage());
        }
        log.debug("creaetThreadAndRun: {}", responseString);
        JSONObject jsonObject = new JSONObject(responseString);
        String respRunId = jsonObject.getString("id"); // id of the new run
        return respRunId;
    }

    //5. Check Run
    //GET  https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}
    @Override
    public String retrieveRun(String threadId, String runId) {

        int maxCount = 10;
        String status = "";
        ScheduledExecutorService scheduledExecutorService = new ScheduledThreadPoolExecutor(1);
        try {
            ScheduleGetRun scheduleGetRun = new ScheduleGetRun(httpClientFactory, threadId, runId);
            // Poll the run status every 2 seconds, up to 10 times (about 20 seconds)
            for (int i = 0; i < maxCount; i++) {
                ScheduledFuture<String> future = scheduledExecutorService.schedule(scheduleGetRun, 2, TimeUnit.SECONDS);
                status = future.get();
                if (status.equalsIgnoreCase("completed")) {
                    break;
                }
            }
        } catch (Exception e) {
            log.error("Error occurred while retrieving run {}", e.getMessage());
        } finally {
            // Shut the executor down; otherwise each call leaks a polling thread
            scheduledExecutorService.shutdown();
        }
        return status;
    }

    private class ScheduleGetRun implements Callable<String> {
        ChatGptHttpClientFactory httpClientFactory;
        String runId;
        String threadId;

        public ScheduleGetRun(ChatGptHttpClientFactory httpClientFactory, String threadId, String runId) {
            this.httpClientFactory = httpClientFactory;
            this.runId = runId;
            this.threadId = threadId;
        }

        @Override
        public String call() throws Exception {
            String responseString = StringUtils.EMPTY;
            try {
                responseString = httpClientFactory.getExecutor()
                        .execute(httpClientFactory.retrieveRun(threadId, runId))
                        .handleResponse(HANDLER);
            } catch (IOException e) {
                log.error("Error occured while list messages {}", e.getMessage());
            }
            JSONObject jsonObject = new JSONObject(responseString);
            String status = jsonObject.getString("status"); //?completed

            return status;
        }
    }

    //6. Retrieve thread
    //GET  https://api.openai.com/v1/threads/{thread_id}
    @Override
    public String retrieveThread(String threadId) {
        String responseString = StringUtils.EMPTY;
        try {
            responseString = httpClientFactory.getExecutor()
                    .execute(httpClientFactory.retrieveThread(threadId))
                    .handleResponse(HANDLER);
        } catch (IOException e) {
            log.error("Error occured while list messages {}", e.getMessage());
        }
        log.debug("retrieveThread: {}", responseString);
        JSONObject jsonObject = new JSONObject(responseString);
        String respThreadId = jsonObject.getString("id"); //thread id
        return respThreadId;
    }

    //-----------------------------------
    // Functions to generate request body
    //-----------------------------------


    //Generate Prompt for Complete API
    private String generatePrompt(String bodyText, int maxTokens) {
        SummaryBean bodyBean = new SummaryBean();
        if(maxTokens != 0) {
            bodyBean.setMaxTokens(maxTokens);
        }
        bodyBean.setPrompt(bodyText);
        return jsonConverter.convertToJsonString(bodyBean);
    }

    //Generate body for Assistant API - Generate thread and run
    private String generateMessage_createThreadRun(String assistantId, String content, String role) {
        CreateThreadRun body = new CreateThreadRun(assistantId,content,role);
        return jsonConverter.convertToJsonString(body);
    }

    private String getCreateMessageBody(String role, String content){
        Message message = new Message();
        message.setContent(content);
        message.setRole(role);
        return jsonConverter.convertToJsonString(message);
    }

    private String getCreateRunBody(String assistantId){
        return "{\"assistant_id\":\"" + assistantId + "\"}";
    }
}

The above service class follows the logic of Assistants API concepts:

  1. Create an Assistant in the API by defining its custom instructions and picking a model. In our demo, we assume the Assistant was already created, and the Assistant ID was provided to developers.
  2. Create a Thread on the assistant when a user starts a conversation.
  3. Add Messages to the Thread as the user asks questions.
  4. Run the Assistant on the Thread to trigger responses. This automatically calls the relevant tools.

Pay attention to the retrieveRun() method above. Since the Assistants API may take time to generate an answer, we check the processing status every 2 seconds using a scheduled executor. The maximum is 10 attempts, which means we expect to get an answer within 20 seconds; otherwise the request effectively times out and no answer is returned.

4. An OSGi Configuration to Provide Required Configuration Values During Execution

In this demo, we are using AEM OSGi configuration to provide data for the program. Below is the source code of the Java class for the configuration definition.

package com.perficient.aem.sample.openai.core.services.config;

import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.AttributeType;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;

@ObjectClassDefinition(name = "ChatGPT API Client Configuration", description = "ChatGPT Client Configuration")
public @interface ChatGptHttpClientFactoryConfig {

    @AttributeDefinition(name = "API Host Name", description = "API host name, e.g. https://example.com", type = AttributeType.STRING)
    String apiHostName() default "https://api.openai.com";

    @AttributeDefinition(name = "'Completion' API URI Type Path", description = "API URI type path, e.g. /v1/engines/davinci/completions", type = AttributeType.STRING)
    String uriType() default "/v1/engines/davinci/completions";

    @AttributeDefinition(name = "API Key", description = "Chat GPT API Key", type = AttributeType.STRING)
    String apiKey() default "";

    @AttributeDefinition(name = "Assistant ID", description = "Assistant ID", type = AttributeType.STRING)
    String assistantId() default "";

    @AttributeDefinition(name = "Relaxed SSL", description = "Defines if self-certified certificates should be allowed to SSL transport", type = AttributeType.BOOLEAN)
    boolean relaxedSSL() default true;

    @AttributeDefinition(name = "Maximum number of total open connections", description = "Set maximum number of total open connections, default 5", type = AttributeType.INTEGER)
    int maxTotalOpenConnections() default 4;

    @AttributeDefinition(name = "Maximum number of concurrent connections per route", description = "Set the maximum number of concurrent connections per route, default 5", type = AttributeType.INTEGER)
    int maxConcurrentConnectionPerRoute() default 2;

    @AttributeDefinition(name = "Default Keep alive connection in seconds", description = "Default Keep alive connection in seconds, default value is 1", type = AttributeType.LONG)
    int defaultKeepAliveconnection() default 15;

    @AttributeDefinition(name = "Default connection timeout in seconds", description = "Default connection timout in seconds, default value is 30", type = AttributeType.LONG)
    long defaultConnectionTimeout() default 30;

    @AttributeDefinition(name = "Default socket timeout in seconds", description = "Default socket timeout in seconds, default value is 30", type = AttributeType.LONG)
    long defaultSocketTimeout() default 30;

    @AttributeDefinition(name = "Default connection request timeout in seconds", description = "Default connection request timeout in seconds, default value is 30", type = AttributeType.LONG)
    long defaultConnectionRequestTimeout() default 30;

}

The com.perficient.aem.sample.openai.core.services.impl.ChatGptHttpClientFactoryImpl~openai-sample.cfg.json configuration file provides the values for each configuration variable in the author runmode.

{
  "apiHostName": "https://api.openai.com",
  "uriType": "/v1/engines/davinci/completions",
  "apiKey": "XXXXXXXXXXXXXXX",
  "assistantId": "asst_XXXXXXXXXXXX",
  "relaxedSSL": true,
  "maxTotalOpenConnections": 4,
  "maxConcurrentConnectionPerRoute": 2,
  "defaultKeepAliveconnection": 15,
  "defaultConnectionTimeout": 30,
  "defaultSocketTimeout": 30,
  "defaultConnectionRequestTimeout": 30
}

You can get the apiKey value (https://platform.openai.com/api-keys) and assistantId (https://platform.openai.com/assistants) value from your own OpenAI account settings.

You can check the configuration values from the AEM configuration manager http://localhost:4502/system/console/configMgr.

Fig. ChatGPT API Client Configuration in the AEM Web Console

5. An AEM OSGi Service That Works as the HTTP Client Factory

In this demo, we are using Apache HTTP Client in our HTTP Client factory to send requests to OpenAI Assistants API. Here is the source code.

package com.perficient.aem.sample.openai.core.services.impl;

import com.perficient.aem.sample.openai.core.services.ChatGptHttpClientFactory;
import com.perficient.aem.sample.openai.core.services.config.ChatGptHttpClientFactoryConfig;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.http.Header;
import org.apache.http.HttpResponse;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.fluent.Executor;
import org.apache.http.client.fluent.Request;
import org.apache.http.config.Registry;
import org.apache.http.config.RegistryBuilder;
import org.apache.http.conn.ConnectionKeepAliveStrategy;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.conn.socket.PlainConnectionSocketFactory;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.conn.ssl.TrustAllStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.message.BasicHeader;
import org.apache.http.osgi.services.HttpClientBuilderFactory;
import org.apache.http.protocol.HttpContext;
import org.apache.http.ssl.SSLContextBuilder;
import org.osgi.service.component.annotations.*;
import org.osgi.service.metatype.annotations.Designate;

import java.io.IOException;
import java.security.KeyManagementException;
import java.security.KeyStoreException;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

/**
 * Implementation of @{@link ChatGptHttpClientFactory}.
 * <p>
 * HttpClientFactory provides service to handle API connection and executor.
 */
@Slf4j
@Component(service = ChatGptHttpClientFactory.class)
@Designate(ocd = ChatGptHttpClientFactoryConfig.class, factory = true)
public class ChatGptHttpClientFactoryImpl implements ChatGptHttpClientFactory {

    private Executor executor;
    private String baseUrl;
    private CloseableHttpClient httpClient;
    private ChatGptHttpClientFactoryConfig config;

    @Reference
    private HttpClientBuilderFactory httpClientBuilderFactory;

    @Activate
    @Modified
    protected void activate(ChatGptHttpClientFactoryConfig config) throws KeyManagementException, NoSuchAlgorithmException, KeyStoreException {
        log.info("########### OSGi Configs Start ###############");
        log.info("API Host Name : {}", config.apiHostName());
        log.info("URI Type: {}", config.uriType());
        log.info("########### OSGi Configs End ###############");
        closeHttpConnection();
        this.config = config;
        if (this.config.apiHostName() == null) {
            log.debug("Configuration is not valid. The API host name is mandatory.");
            throw new IllegalArgumentException("Configuration is not valid. The API host name is mandatory.");
        }
        this.baseUrl = StringUtils.join(this.config.apiHostName(), this.config.uriType());
        initExecutor();
    }

    private void initExecutor() throws KeyManagementException, NoSuchAlgorithmException, KeyStoreException {
        PoolingHttpClientConnectionManager connMgr = null;
        RequestConfig requestConfig = initRequestConfig();
        HttpClientBuilder builder = httpClientBuilderFactory.newBuilder();
        builder.setDefaultRequestConfig(requestConfig);
        if (config.relaxedSSL()) {
            connMgr = initPoolingConnectionManagerWithRelaxedSSL();
        } else {
            connMgr = new PoolingHttpClientConnectionManager();
        }
        connMgr.closeExpiredConnections();
        connMgr.setMaxTotal(config.maxTotalOpenConnections());
        connMgr.setDefaultMaxPerRoute(config.maxConcurrentConnectionPerRoute());
        builder.setConnectionManager(connMgr);
        List<Header> headers = new ArrayList<>();
        headers.add(new BasicHeader("Content-Type", "application/json"));
        headers.add(new BasicHeader("Authorization", "Bearer " + config.apiKey()));
        headers.add(new BasicHeader("OpenAI-Beta", "assistants=v1"));
        builder.setDefaultHeaders(headers);
        builder.setKeepAliveStrategy(keepAliveStrategy);
        httpClient = builder.build();
        executor = Executor.newInstance(httpClient);
    }

    private PoolingHttpClientConnectionManager initPoolingConnectionManagerWithRelaxedSSL()
            throws NoSuchAlgorithmException, KeyStoreException, KeyManagementException {
        PoolingHttpClientConnectionManager connMgr;
        SSLContextBuilder sslbuilder = new SSLContextBuilder();
        sslbuilder.loadTrustMaterial(new TrustAllStrategy());
        SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(sslbuilder.build(),
                NoopHostnameVerifier.INSTANCE);
        Registry<ConnectionSocketFactory> socketFactoryRegistry = RegistryBuilder.<ConnectionSocketFactory>create()
                .register("http", PlainConnectionSocketFactory.getSocketFactory()).register("https", sslsf).build();
        connMgr = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
        return connMgr;
    }

    private RequestConfig initRequestConfig() {
        return RequestConfig.custom()
                .setConnectTimeout(Math.toIntExact(TimeUnit.SECONDS.toMillis(config.defaultConnectionTimeout())))
                .setSocketTimeout(Math.toIntExact(TimeUnit.SECONDS.toMillis(config.defaultSocketTimeout())))
                .setConnectionRequestTimeout(
                        Math.toIntExact(TimeUnit.SECONDS.toMillis(config.defaultConnectionRequestTimeout())))
                .build();
    }

    @Deactivate
    protected void deactivate() {
        closeHttpConnection();
    }

    private void closeHttpConnection() {
        if (null != httpClient) {
            try {
                httpClient.close();
            } catch (final IOException exception) {
                log.debug("IOException while clossing API, {}", exception.getMessage());
            }
        }
    }

    @Override
    public Executor getExecutor() {
        return executor;
    }

    @Override
    public ChatGptHttpClientFactoryConfig getConfig(){
        return this.config;
    }


    @Override
    public Request post() {
        return Request.Post(baseUrl);
    }

    @Override
    public Request postCreateThreadRun() {
        String url = config.apiHostName();
        url += "/v1/threads/runs";
        return Request.Post(url);
    }

    @Override
    public Request listMessages(String threadId) {
        String url = config.apiHostName();
        url += "/v1/threads/"+threadId+"/messages";
        return Request.Get(url);
    }

    @Override
    public Request postCreateMessage(String threadId) {
        String url = config.apiHostName();
        url += "/v1/threads/"+threadId+"/messages";
        return Request.Post(url);
    }

    @Override
    public Request postCreateRun(String threadId) {
        String url = config.apiHostName();
        url += "/v1/threads/"+threadId+"/runs";
        return Request.Post(url);
    }

    @Override
    public Request retrieveRun(String threadId, String runId) {
        String url = config.apiHostName();
        url += "/v1/threads/"+threadId+"/runs/"+runId;
        return Request.Get(url);
    }

    @Override
    public Request retrieveThread(String threadId) {
        String url = config.apiHostName();
        url += "/v1/threads/"+threadId;
        return Request.Get(url);
    }

    ConnectionKeepAliveStrategy keepAliveStrategy = new ConnectionKeepAliveStrategy() {

        @Override
        public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
            // Ignore any Keep-Alive response header and always use the
            // configured keep-alive duration.
            return TimeUnit.SECONDS.toMillis(config.defaultKeepAliveconnection());
        }
    };
}

 

Above are all the key points for integrating the OpenAI Assistants API with Adobe AEM as a Cloud Service (AEMaaCS). You can get the source code of the demo here:

https://github.com/perficient1977/Blog-OpenAI-Demo

Note:

In the source code, there are three versions of customization on the Core Teaser component. The v3 version is for the OpenAI Assistants API integration with AEMaaCS.

V1 and V2 are based on OpenAI's Chat Completions API integration. You can refer to them if you are interested.

]]>
https://blogs.perficient.com/2024/02/28/adobe-aemaacs-integration-with-openai-assistants-api/feed/ 1 355679
Differentiating Between Dialog and Design Dialog in AEM https://blogs.perficient.com/2024/02/28/differentiating-between-dialog-and-design-dialog-in-aem/ https://blogs.perficient.com/2024/02/28/differentiating-between-dialog-and-design-dialog-in-aem/#respond Wed, 28 Feb 2024 07:07:15 +0000 https://blogs.perficient.com/?p=353867

Dialogs are among the most important parts of AEM components: without a dialog, authors cannot configure a component, and components cannot be reused effectively.

What is a Design Dialog?

A design dialog is a special kind of dialog that is available only in Design mode in AEM. It allows authors to customize the design of a component by adjusting properties (Ex. color, images, and layout).

A design dialog stores content and configuration that can be shared across different pages. It is used when we require the component configuration to be the same across the application, for all pages created from the same template.

Ex. We can create a logo component and use a design dialog for it, because the logo will be the same throughout the site; a short sketch of how a component reads such design values follows below.
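
To illustrate how a component consumes design-dialog values, here is a hedged Java (Sling Model) sketch using the Style API; the class and the logoPath property name are assumptions for illustration:

package com.example.core.models; // hypothetical package

import com.day.cq.wcm.api.designer.Style;

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.ScriptVariable;

/**
 * Sketch of a logo component model that reads shared, template-level
 * values saved by a design dialog, rather than per-page content.
 */
@Model(adaptables = SlingHttpServletRequest.class,
        defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
public class LogoModel {

    // currentStyle exposes the design (cq:design_dialog) properties
    @ScriptVariable
    private Style currentStyle;

    public String getLogoPath() {
        // "logoPath" is an assumed property name written by the design dialog
        return currentStyle != null
                ? currentStyle.get("logoPath", String.class)
                : null;
    }
}

In HTL, the same value is available directly as ${currentStyle.logoPath}.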

What Does the Design Dialog Look Like?

  1. Open the author page and change the editing mode to Design mode.


    Fig: Author Page

  2. Below are the design dialog and the normal dialog for the image component.


    Fig: Design Dialog & Normal Dialog

Differences Between Dialog and Design Dialog

In short: a normal dialog stores content per component instance on each page and is edited in Edit mode, while a design dialog stores shared configuration at the template (design) level and is edited in Design mode.

Fig: Difference between Dialog & Design Dialog

How to Create a Design Dialog

The tab component before creating the design dialog:


Fig: Tabs component Before Design Dialog

Step 1: Log in to CRXDE Lite (http://localhost:4502/crx/de/index.jsp#/apps).

Step 2: Navigate to the component folder structure (/weretail/components/content/tabs).

Step 3: Select the component and create a node called cq:design_dialog.


Fig: Create Node

Step 4: Create different nodes with component names, add attributes, and save; a sketch of the resulting structure follows the figure below.

Fig: Node structure
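
As a rough sketch, the resulting structure (expressed as a .content.xml) might look like the following; the dialog title and the backgroundColor field are illustrative assumptions:

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0" xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
    jcr:primaryType="cq:Component"
    jcr:title="Tabs">
    <cq:design_dialog
        jcr:primaryType="nt:unstructured"
        jcr:title="Tabs Design"
        sling:resourceType="cq/gui/components/authoring/dialog">
        <content
            jcr:primaryType="nt:unstructured"
            sling:resourceType="granite/ui/components/coral/foundation/container">
            <items jcr:primaryType="nt:unstructured">
                <backgroundColor
                    jcr:primaryType="nt:unstructured"
                    sling:resourceType="granite/ui/components/coral/foundation/form/textfield"
                    fieldLabel="Background color"
                    name="./backgroundColor"/>
            </items>
        </content>
    </cq:design_dialog>
</jcr:root>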

Step 5: Open the author URL, navigate to the content page template, and verify the tab component.


Fig: Tab component after Design dialog

Ex. Design dialog of the text component


Fig: Design dialog of Text component

A design dialog isolates design concerns: it is used to change a component's layout, design, and style in one place, which makes components easier to maintain and update over time.

]]>
https://blogs.perficient.com/2024/02/28/differentiating-between-dialog-and-design-dialog-in-aem/feed/ 0 353867