AEM Universal Editor: A Simpler, Smarter Way to Create Content
Adobe Experience Manager (AEM) has come a long way in empowering content authors. We started with static templates — where even small changes required a developer — and evolved to editable templates and Layout Mode, which finally gave authors visual control over their pages.
Now, Adobe is flipping the script again with the Universal Editor. And at first glance, it might feel like a step backward.
For authors used to dragging, dropping, and designing layouts, this can feel like a loss of creative control.
So… what’s really going on?
The Shift from Layout to Experience
I've heard the reactions firsthand:
“Where’s Layout Mode?” “Why can’t I just place things wherever I want?”
It’s a valid response. But after spending time with the Universal Editor, I’ve realized this change isn’t about taking power away — it’s about refocusing.
It’s about removing layout distractions and putting the spotlight back on what matters most: creating meaningful, consistent content experiences.
Why Layout Mode Wasn’t the Answer
Layout Mode felt like freedom at first. You could finally design your own pages — no developers needed.
But with that freedom came complexity.
To use it well, authors had to learn the responsive grid, breakpoints, and container nesting: concerns closer to front-end engineering than to writing content.
Sure, Layout Mode was powerful — but it made content creation more complex than it needed to be.
What Makes the Universal Editor Different?
The Universal Editor brings a fresh approach — one that separates content creation from layout engineering.
Here's what it introduces:
- A clean page structure built from Section Blocks and Content Blocks instead of Layout Mode
- Visual, in-context authoring for document-based and fragment-based content
- A JSON-driven UI with content source mapping
- A unified Properties Rail for editing
- Channel previews for any experience
Let's break these down.
Goodbye Layout Mode, Hello Clean Structure
Layout Mode may be gone, but layout control isn’t.
Instead of manually managing layout containers, authors now build pages using Section Blocks and Content Blocks — all styled using design tokens or CSS classes provided via Edge Delivery Services (EDS).
This shift prevents layout spaghetti and bloated code. The result is cleaner markup and more consistent pages.
Want columns? You still can — but the approach is smarter and cleaner.
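To make this concrete, here's a rough, hypothetical sketch of the markup an EDS "columns" block decorates to. The block name becomes a CSS class, so column layout lives in project CSS rather than in authored layout containers:

<!-- Hypothetical output of an EDS "columns" block -->
<div class="section">
  <div class="columns block">
    <div>
      <div><p>Left column content</p></div>
      <div><p>Right column content</p></div>
    </div>
  </div>
</div>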
Visual Authoring: Document-Based, Fragment-Based
The Universal Editor supports both document-based and fragment-based visual authoring.
Whether you’re editing a marketing page, a landing page, or a transactional email, you stay in the context of the real experience.
JSON-Driven UI & Content Source Mapping
Under the hood, the Universal Editor operates on a JSON-driven UI model.
Using content source mapping, you can bind content blocks to headless CMS data, JSON APIs, or structured content fragments. This makes the Universal Editor incredibly flexible — and future-proof.
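As an illustration, content source mapping shows up in the rendered page as instrumentation attributes. The sketch below is based on Adobe's documented data-aue-* attributes; the connection name, host, and paths are placeholders:

<meta name="urn:adobe:aue:system:aemconnection" content="aem:https://author-env.adobeaemcloud.com">

<h1 data-aue-resource="urn:aemconnection:/content/site/page/jcr:content/root/title"
    data-aue-prop="jcr:title"
    data-aue-type="text"
    data-aue-label="Page title">Hello, Universal Editor</h1>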
The Properties Rail: Simple, Unified Editing
Editing in classic AEM was… chaotic. Hidden dialogs, floating pop-ups, custom UIs for each component.
The Properties Rail fixes that. It’s a clean side panel where you can edit any block’s content — all in one place.
Why it rocks: every block is edited in the same consistent panel, so you're no longer hunting through hidden dialogs and floating pop-ups.
It might feel unfamiliar at first, but once you get the hang of it, there’s no going back.
Section Blocks: Structure with Purpose
In the old days, pages could become layout jungles.
Section Blocks help authors avoid that. They provide clear boundaries and promote structured, semantic content.
Why they matter: they give pages clear boundaries, keep content semantic, and make the structure predictable across channels.
Channel Previews: One Editor, Any Experience
The Universal Editor isn't just for web pages. You can preview and author across web pages, landing pages, emails, and other channel experiences.
And it all works from the same authoring UI — which makes the experience channel-agnostic.
Why This Change Matters
This isn't just a tool upgrade — it's a shift in mindset.
The Universal Editor doesn’t remove flexibility — it refines it. It’s not the freedom we thought we wanted, but it’s the one we actually needed.
Universal Editor Extras
A few more highlights:
Visual Reference
Universal Editor UI Interface (Forms)
The Universal Editor interface is divided into four logical parts:
1. A: Experience Cloud Header: The Experience Cloud Header appears at the top of the console and provides navigation context within the broader Adobe Experience Cloud ecosystem. It shows your current location and allows quick access to other Experience Cloud applications.
2. B: Universal Editor Toolbar: The toolbar provides essential navigation and editing tools. With it, you can move between forms, publish or unpublish forms, edit form properties, and access the rule editor for adding dynamic behaviors.
3. C: Properties Panel: The Properties Panel appears on the right side of the interface and displays contextual information based on what you've selected in the form. When no component is selected, it shows the overall form structure.
4. D: Editor: The Editor is the central workspace where you create and modify your form. It displays the form specified in the location bar and provides a WYSIWYG experience that shows exactly how your form will appear to users. In preview mode, you can interact with the form just as your users would, testing navigation through buttons and links.
For more information, refer to Adobe's documentation: Getting Started with the Universal Editor in AEM | Adobe Experience Manager.
Adobe launched GenStudio for Performance Marketing and has made many improvements and updates leading up to Adobe Summit 2025. We’ve had an opportunity to use it here at Perficient, and have discovered a number of exciting features (along with nuances) of the product.
We see an evolving future of its rollout, especially as more and more marketing teams adopt the capabilities it has into their own digital marketing ecosystems.
GenStudio may very well be a marketer’s dream. We do see it as a game-changer for how marketing content is created, activated, and measured. That’s because it greatly reduces the amount of time that is typically required to request, build, assemble, review, and publish content for marketing campaigns.
These various flows in creating content can now be handled by the AI capabilities of GenStudio. Not only that, but the content generated can follow brand standards and guidelines that are established in GenStudio.
Some of the main features to highlight:
We’d like to note that although there are many Generative AI capabilities within creating content, human review is always a part of the approval and publication process.
There are a few use cases that have been described by Adobe that can be addressed with GenStudio.
Our experience so far has been focused on the content creation process, and seeing how our content looks and behaves in some of our channels. We look forward to creating personalized experiences, along with seeing how the content performs based on things like Content Analytics, recently announced at Adobe Summit.
After onboarding, defining users and groups, and establishing some processes for adopting GenStudio, the first step is to establish the Brand Guidelines.
New Brands can be created (along with Storage Permissions) within the interface, either using a guidelines document or manually.
Expert Tip: Use a PDF document that has all your brand guidelines defined to upload, and GenStudio will create the various guidelines based on the document(s).
Once a brand is uploaded, review the guidelines, add new ones, and make necessary adjustments.
The following example illustrates the WKND brand:
Note that the permissions to edit and publish a brand should be kept to brand owners. Changes to the brand which are then published may also impact other systems that use these brand guidelines, such as Adobe Experience Manager, or Orchestration Agents.
Once the brand has been published, it can then be used to generate emails, meta ads, banner ads, and other display ads.
Content creation is based on templates. These templates can greatly reduce the time it takes to build out content compared with existing tools. What we would like to see eventually from Adobe in this area is the ability to create and design layouts within the tool, as opposed to having to upload HTML files that need to adhere to certain frameworks. Another approach may be a process that references existing layouts, such as emails from Marketo or Experience Fragments in AEM, and brings them into GenStudio.
Assets can also be brought into GenStudio and then used in generating content. Assets that are managed in AEM as a Cloud Service can also be used.
Note: The Assets that are part of AEMaaCS need to be marked as “Approved” before being made available in GenStudio. Assets can also be sourced from ContentHub.
Expert tip: Because there are several ways of sourcing Assets that are brought into GenStudio, we suggest working with a partner such as Perficient to guide these processes.
Example content generation for an event at Adobe Summit:
After the content creation process, content can then be sent for approval. For example, in the above display ad, a content reviewer may ask for re-phrasing to help improve the brand score, if appropriate. Once approved, the content is then published as part of a campaign and can be downloaded in the form of HTML, images, or CSV files for publication.
Activating content can also be done on various channels such as Meta, Google Campaign Manager 360, and others. (Note that as of this writing, 3/19/25, the only channel available for activation is Meta.) Once these additional channels are rolled out, we look forward to exploring those capabilities and insights based on those channels, which is another feature available as part of GenStudio.
We’re excited about the features that Adobe GenStudio for Performance Marketing provides now, and what will be rolled out over time as features become available. Working with the tool itself feels slick, and having the Generative AI features built on top of it makes us feel like we’re really using some cutting-edge technologies.
As an AEM author, you'll find that updating existing page content is a routine task. However, manual updates, like rolling out a new template, can become tedious and costly when dealing with thousands of pages.
Fortunately, automation scripts can save the day. Using Groovy scripts within AEM can streamline the content update process, reducing time and costs. In this blog, we’ll outline the key steps and best practices for using Groovy scripts to automate content updates.
Groovy is a powerful scripting language that integrates seamlessly with AEM. It allows developers to perform complex operations with minimal code, making it an excellent tool for tasks such as bulk content updates, JCR queries and audits, and one-off data fixes.
The Groovy Console for AEM provides an intuitive interface for running scripts, enabling rapid development and testing without redeploying code.
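For instance, a couple of lines in the console can report which templates are currently in use, which is a handy sanity check before any migration. This sketch assumes the console's built-in getPage binding and page recursion support, with the content path as a placeholder:

// List the template of every page under a content tree
getPage('/content/my-site').recurse { page ->
    println "${page.path} -> ${page.properties['cq:template']}"
}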
To illustrate how to use Groovy, let’s learn how to update templates for existing web pages authored inside AEM.
Our first step is to identify the source and destination template mappings, the component mappings, and the paths of the pages to be updated.
As a pre-requisite for this solution, you will need to have JDK 11, Groovy 3.0.9, and Maven 3.6.3.
1. Create a CSV File
The CSV file should contain two columns: the source template path and the target template path.
Save this file as template-map.csv.
Source,Target
"/apps/legacy/templates/page-old","/apps/new/templates/page-new"
"/apps/legacy/templates/article-old","/apps/new/templates/article-new"
2. Load the Mapping File in migrate.groovy
In your migrate.groovy script, insert the following code to load the mapping file:
def templateMapFile = new File("work${File.separator}config${File.separator}template-map.csv")
assert templateMapFile.exists() : "Template Mapping File not found!"
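The mapping function in the next step relies on parseCsv plus ENCODING and SEPARATOR constants that aren't shown in this post. A reasonable setup, assuming the groovycsv library bundled with the Groovy Console, would be:

import static com.xlson.groovycsv.CsvParser.parseCsv

final String ENCODING = 'UTF-8' // charset used to read the mapping file
final char SEPARATOR = ','      // CSV delimiter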
3. Implement the Template Mapping Logic
Next, we create a function to map source templates to target templates by utilizing the CSV file.
String mapTemplate(sourceTemplateName, templateMapFile) {
    /* This function uses the sourceTemplateName to look up the template we will use to create new XML */
    def template = ''
    assert templateMapFile : "Template Mapping File not found!"
    for (templateMap in parseCsv(templateMapFile.getText(ENCODING), separator: SEPARATOR)) {
        def sourceTemplate = templateMap['Source']
        def targetTemplate = templateMap['Target']
        if (sourceTemplateName.equals(sourceTemplate)) {
            template = targetTemplate
        }
    }
    assert template : "Template ${sourceTemplateName} not found!"
    return template
}
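The post jumps from the mapping function to packaging, so here is a rough sketch of the update step itself. It is hypothetical: it assumes the Groovy Console's getNode and session bindings, and a page-paths.csv file (one page path per line) that you would supply alongside the template map:

// For each page, look up its current template and rewrite cq:template via the mapping
def pagePaths = new File("work${File.separator}config${File.separator}page-paths.csv").readLines()
pagePaths.each { path ->
    def content = getNode("${path}/jcr:content")
    def current = content.getProperty('cq:template').getString()
    def target = mapTemplate(current, templateMapFile)
    content.setProperty('cq:template', target)
    println "Updated ${path}: ${current} -> ${target}"
}
session.save()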
After creating a package using Groovy script on your local machine, you can directly install it through the Package Manager. This package can be installed on both AEM as a Cloud Service (AEMaaCS) and on-premises AEM.
Execute the script in a non-production environment, verify that templates are correctly updated, and review logs for errors or skipped nodes. After running the script, check content pages to ensure they render as expected, validate that new templates are functioning correctly, and test associated components for compatibility.
Leveraging automation through scripting languages like Groovy can significantly simplify and accelerate AEM migrations. By following a structured approach, you can minimize manual effort, reduce errors, and ensure a smooth transition to the new platform, ultimately improving overall maintainability.
Don’t miss out on more AEM insights and follow our Adobe blog!
This series of blog posts will cover the main areas of activity for your marketing, product, and UX teams before, during, and after site migration to a new digital experience platform.
Migrating your site to a different platform can be a daunting prospect, especially if the site is sizable in both page count and number of assets, such as documents and images. However, this can also be a perfect opportunity to freshen up your content, perform an asset library audit, and reorganize the site overall.
Once you’ve hired a consultant, like Perficient, to help you implement your new CMS and migrate your content over, you will work with them to identify several action items your team will need to tackle to ensure successful site migration.
Whether you are migrating from or to one of the major enterprise digital experience platforms like Sitecore, Optimizely, or Adobe, or from the likes of SharePoint or WordPress, there are some common steps to take to make sure content migration runs smoothly and is executed in a manner that adds value to your overall web experience.
One of the first questions you will need to answer is, “What do we need to carry over?” The instinctive answer would be everything. The rational answer is that we will migrate the site over as is and then worry about optimization later. There are multiple reasons why this is usually not the best option.
Even though this activity might take time, it is essential to use this opportunity in the best possible manner. A consultant like Perficient can help drive the process: they will pull up an initial list of active pages, set up simple audit steps, and ensure that decisions are recorded clearly and kept organized.
The first step is to ensure all current site pages are accounted for. As simple as this may seem, it doesn’t always end up being so, especially on large multi-language sites. You might have pages that are not crawlable, are temporarily unpublished, are still in progress, etc.
Depending on your current system capabilities, putting together a comprehensive list can be relatively easy. Getting a CMS export is the safest way to confirm that you have accounted for everything in the system.
Crawling tools, such as Screaming Frog, are frequently used to generate reports that can be exported for further refinement. Cross-referencing these sources will ensure you get the full picture, including anything that might be housed externally.
Once you’ve ensured that all pages made it to a comprehensive list you can easily filter, edit, and share, the fun part begins.
The next step involves reviewing and analyzing the sitemap and each page. The goal is to determine which pages will stay and which are candidates for removal. Many factors can impact this decision, from business goals, priorities, page views, conversion rates, SEO considerations, and marketing campaigns to compliance and regulations. Ultimately, it is important to assess each page's value to the business and make decisions accordingly.
This audit will likely require input from multiple stakeholders, including subject matter experts, product owners, UX specialists, and others. It is essential to involve all interested parties at an early stage. Securing buy-in from key stakeholders at this point is critical for the following phases of the process. This especially applies to review and sign-off prior to going live.
Depending on your time and resources, the keep-kill-merge can either be done in full or limited to keep-kill. The merge option might require additional analysis, as well as follow-up design and content work. Leaving that effort for after the site migration is completed might just be the rational choice.
Once the audit process has been completed, it is important to record findings and decisions in a simple, easily consumable format for the teams that will implement those updates. Proper documentation is essential when dealing with large sets of pages and associated content. This will inform the implementation team's roadmap and timelines.
At this point, it is crucial to establish regular communication between a contact person (such as a product owner or content lead) and the team in charge of content migration from the consultant side. This partnership will ensure that all subsequent activities are carried out respecting the vision and business needs identified at the onset.
Completing the outlined activities properly will help smooth the transition into the next process phase, thus setting your team up for a successful site migration.
Managing configurations in Adobe Experience Manager (AEM) can be challenging, especially when sharing configs across different websites, regions, or components. The Context-Aware Configuration (CAC) framework in AEM simplifies configuration management by allowing developers to define and resolve configurations based on the context, such as the content hierarchy. However, as projects scale, configuration needs can become more intricate, involving nested configurations and varying scenarios.
In this blog, we will explore Nested Context-Aware Configurations and how they provide a scalable solution to handle multi-layered and complex configurations in AEM. We’ll cover use cases, the technical implementation, and best practices for making the most of CAC.
AEM’s Context-Aware Configuration allows you to create and resolve configurations dynamically, based on the content structure, so that the same configuration can apply differently depending on where in the content tree it is resolved. However, some projects require deeper levels of configurations — not just based on content structure but also different categories within a configuration itself. This is where nested configurations come into play.
Nested Context-Aware Configuration involves having one or more configurations embedded within another configuration. This setup is especially useful when dealing with hierarchical or multi-dimensional configurations, such as settings that depend on both global and local contexts or component-specific configurations within a broader page configuration.
You can learn more about basic configuration concepts on Adobe Experience League.
Nested configurations are particularly useful for categorizing configurations based on broad categories like branding, analytics, or permissions, and then nesting more specific configurations within those categories.
For instance, at the parent level, you could define global categories for analytics tracking, branding, or user permissions. Under each category, you can then have nested configurations for region-specific overrides, such as a different analytics ID or logo variant per locale.
This structure keeps related settings grouped under one category while still allowing local overrides where they are needed.
To implement the nested configurations, we need to define configurations for the individual modules first. In the example below, we are going to create a SiteConfig that has some properties of its own along with two nested configs, each of which has its own attributes.
Let's define the individual configs first. They will look like this:
@Configuration(label = "Global Site Config", description = "Global Site Context Config.")
public @interface SiteConfigurations {

    @Property(label = "Parent Config - Property 1", description = "Description for Parent Config Property 1", order = 1)
    String parentConfigOne();

    @Property(label = "Parent Config - Property 2", description = "Description for Parent Config Property 2", order = 2)
    String parentConfigTwo();

    @Property(label = "Nested Config - One", description = "Description for Nested Config", order = 3)
    NestedConfigOne NestedConfigOne();

    @Property(label = "Nested Config - Two", description = "Description for Nested Config", order = 4)
    NestedConfigTwo[] NestedConfigTwo();
}
Following this, NestedConfigOne and NestedConfigTwo will look like this:
public @interface NestedConfigOne {

    @Property(label = "Nested Config - Property 1", description = "Description for Nested Config Property 1", order = 1)
    String nestedConfigOne();

    @Property(label = "Nested Config - Property 2", description = "Description for Nested Config Property 2", order = 2)
    String nestedConfigTwo();
}
And…
public @interface NestedConfigTwo {

    @Property(label = "Nested Config - Boolean Property 1", description = "Description for Nested Config Boolean Property 1", order = 1)
    String nestedBooleanProperty();

    @Property(label = "Nested Config - Multi Property 1", description = "Description for Nested Config Multi Property 1", order = 2)
    String[] nestedMultiProperty();
}
Note that we didn't annotate the nested configs with @Configuration, as they are not standalone main configs.
Let's create a service to read this config. It will look like this:
public interface NestedConfigService {
    SiteConfigurationModel getAutoRentalConfig(Resource resource);
}
The implementation of the service will look like this:
@Component(service = NestedConfigService.class, immediate = true)
@ServiceDescription("Implementation For NestedConfigService")
public class NestedConfigServiceImpl implements NestedConfigService {

    @Override
    public SiteConfigurationModel getAutoRentalConfig(Resource resource) {
        final SiteConfigurations configs = getConfigs(resource);
        return new SiteConfigurationModel(configs);
    }

    private SiteConfigurations getConfigs(Resource resource) {
        return resource.adaptTo(ConfigurationBuilder.class)
                .name(SiteConfigurations.class.getName())
                .as(SiteConfigurations.class);
    }
}
SiteConfigurationModel will hold the final config, including all the nested configs. We can modify the getters based on need; for now, I am just adding a dummy implementation.
public class SiteConfigurationModel {

    public SiteConfigurationModel(SiteConfigurations configs) {
        String parentConfigOne = configs.parentConfigOne();
        NestedConfigOne nestedConfigOne = configs.NestedConfigOne();
        NestedConfigTwo[] nestedConfigTwos = configs.NestedConfigTwo();
        // Construct SiteConfigurationModel as per need
    }
}
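Before deploying, it helps to see the consuming side. Here is a minimal sketch of a Sling model reading the config through the service; the model name and adaptable are assumptions, not part of the original post:

@Model(adaptables = Resource.class)
public class SiteConfigConsumerModel {

    @Self
    private Resource resource;

    @OSGiService
    private NestedConfigService nestedConfigService;

    private SiteConfigurationModel siteConfig;

    @PostConstruct
    protected void init() {
        // Resolves the context-aware config (including nested configs) for the current resource
        siteConfig = nestedConfigService.getAutoRentalConfig(resource);
    }

    public SiteConfigurationModel getSiteConfig() {
        return siteConfig;
    }
}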
Once you deploy the code, the site config menu in the context editor should look like this:
We can see that property 1 and property 2 can be configured directly, but each nested config gets an additional Edit button that takes us to a screen for configuring the nested values, which looks like this:
Since nested config two is a multifield, it also gives the ability to add additional entries.
Nested Context-Aware Configuration in AEM offers a powerful solution for managing complex configurations across global, regional, and component levels. By leveraging nested contexts, you can easily categorize configurations, enforce fallback mechanisms, and scale your configuration management as your project evolves.
Whether working on a multi-region site, handling diverse user segments, or managing complex components, nested configurations can help you simplify and streamline your configuration structure while maintaining flexibility and scalability.
Make sure to follow our Adobe blog for more Adobe platform insights!
Earlier this year, Adobe introduced new generative AI capabilities in Adobe Experience Manager (AEM). As a Platinum partner of Adobe, Perficient has early adopter features provisioned for our environments. One of the more exciting and relevant features is the ability to use GenAI to generate variations within Content Fragments in AEM.
In this blog, we’ll talk about a sample use-case scenario, the steps involved with this new feature, and show how it can empower marketers and content authors who spend a lot of time in AEM and make their lives easier.
In this sample use case, we have contributors who write content for a site called WKND Adventures. We’d like to create a contributor biography to enable an engaging experience for the end user. A biography will further enhance the user experience and increase the chance of content leading to a conversion, such as booking a vacation.
After logging into AEM as a Cloud Service authoring environment, head over to a Content Fragment and open it up for editing.
Note: If you don’t see the new editor, try selecting the “Try New Editor” button to bring up the latest interface.
As you can see, we still have the standard editing features such as associating images, making rich text edits, and publishing capabilities.
Select the “Generate Variations” button on the top toolbar, and then a new window opens with the Generative Variations interface as seen in the image below.
What’s important to note here is that we are tied to the authoring environment in this interface. So, any variations that are generated will be brought back into our content fragment interface. Although a new prompt can be generated, we’ll start with the Cards option.
Note: There will be more prompt templates created after the writing of this blog.
The Cards option is pre-filled with some default helper text to provide guidance on a potential prompt and help fine-tune what's being generated. Providing a relevant, concise explanation of the intended user interaction will also improve the generated results. The generations can be further enhanced by supplying audience data from Adobe Target or a CSV file, and providing a tone of voice further defines the variations.
One of our favorite features is the ability to provide a URL for domain knowledge. In this case, we’re going to select a site from Rick Steves on winter escapes as seen in the image below.
After selecting the appropriate user interaction, tone, temperature, intent, and number of variations, we select the “Generate” button.
Once the variations are created, we can review the results and then choose one to bring back into our Content Fragment editor.
After selecting a variation and giving it a name, we can then export that variation. This will create a new variation of that content fragment in AEM.
Although this is a simple example, many other prompt templates can be used to generate variations in AEM, such as FAQs, headlines, hero banners, tiles, and more. Additional technical details can be found on Adobe's GenAI page.
Having a direct integration to generate variations from an authoring environment will certainly speed up content creation and allow authors to create relevant and engaging content with the help of GenAI. We look forward to more features and improvements from Adobe in this exciting space, and helping customers adopt the technologies to effectively and safely create content to build exciting experiences.
About eight years ago, I was introduced to Docker during a meetup at a restaurant with a coworker. He was so engrossed in discussing the Docker engine and containers that he barely touched the hors d’oeuvres. I was skeptical.
I was familiar with Virtual Machines (VMs) and appreciated the convenience of setting up application servers without worrying about hardware. I wanted to know what advantages Docker could offer that VMs couldn’t. He explained that instead of virtualizing the entire computer, Docker only virtualizes the OS, making containers much slimmer than their VM counterparts. Each container shares the host OS kernel and often binaries and libraries.
Curious, I wondered how AEM would perform inside Docker—a Java application running within the Java Virtual Machine, inside a Docker container, all on top of a desktop PC. I expected the performance to be terrible. Surprisingly, the performance was comparable to running AEM directly on my desktop PC. In hindsight, this should not have been surprising. The Docker container shared my desktop PC's kernel, RAM, CPUs, storage, and network, allowing the container to behave like a native application.
I’ve been using Docker for my local AEM development ever since. I love how I can quickly spin up a new author, publish, or dispatch environment whenever I need it and just as easily tear it down. Switching to a new laptop or PC is a breeze — I don’t have to worry about installing the correct version of Java or other dependencies to get AEM up and running.
In this blog, we’ll discuss running AEM author, publisher, and dispatcher within Docker and the setup process.
The AEM SDK, which includes the Quickstart JAR and Dispatcher tools, is necessary for this setup. Additionally, Apache Maven must be installed. For the Graphical User Interface, we will use Rancher Desktop by SUSE, which operates on top of Docker’s command-line tools. While the Docker engine itself is open source, Docker Desktop, the GUI distributed by Docker, is not.
Download and install Rancher Desktop by SUSE. Installing Rancher Desktop will provide the Docker CLI (command line interface). If you wish to install the Docker CLI without Rancher Desktop, use the platform-specific commands below.
On Windows, first install WinGet via the Microsoft Store, then run:
winget install --id=Docker.DockerCLI -e
On macOS, install Homebrew, then use it to install the Docker CLI:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew install docker
Create a folder named “aem-in-docker”. Unzip the contents of the AEM SDK into this folder. Copy your AEM “license.properties” file to this directory.
Make three subfolders within your “aem-in-docker” folder named “base”, “author”, and “publish”.
Your “aem-in-docker” folder should look something like this:
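aem-in-docker
├── license.properties
├── aem-sdk-quickstart-2024.8.17465.20240813T175259Z-240800.jar
├── base
├── author
└── publish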
Create a file named “Dockerfile” within the “base” subdirectory.
Ensure the file does not have an extension. Set the contents of the file to the following:
FROM ubuntu

# Set the working directory
WORKDIR /opt/aem

# Copy the license file
COPY license.properties .

# Copy the Quickstart jar file
COPY aem-sdk-quickstart-2024.8.17465.20240813T175259Z-240800.jar cq-quickstart.jar

# Install Java, Vim, and Wget. Install Dynamic Media dependencies.
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:openjdk-r/ppa && \
    apt-get update && \
    apt-get install -y openjdk-11-jdk vim ca-certificates gnupg wget imagemagick ffmpeg fontconfig expat freetype2-demos

# Unpack the jar file
RUN java -jar cq-quickstart.jar -unpack

# Set the LD_LIBRARY_PATH environment variable
ENV LD_LIBRARY_PATH=/usr/local/lib
This file directs Docker to build a new image using the official Ubuntu image as a base. It specifies the working directory, copies the license file and the quickstart file into the image (note that your quickstart file might have a different name), installs additional packages (like Java, Vim, Wget, and some Dynamic Media dependencies), unpacks the quickstart file, and sets some environment variables.
Run the following command from within the “aem-in-docker” folder.
docker build -f base/Dockerfile -t aem-base .
It should take a few minutes to run. After the command has been completed run:
docker image ls
You should see your newly created “aem-base” image.
Create a file named “Dockerfile” within the “author” subdirectory.
Set the contents of the file to the following:
# Use the previously created aem-base
FROM aem-base

# Expose AEM author on port 4502 and debug on port 5005
EXPOSE 4502
EXPOSE 5005

VOLUME ["/opt/aem/crx-quickstart/logs"]

# Make the container always start in Author mode on port 4502.
# Additional switches support Java 11: https://experienceleague.adobe.com/en/docs/experience-manager-65/content/implementing/deploying/deploying/custom-standalone-install
# Also adds the Dynamic Media run mode.
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005", "-XX:+UseParallelGC", "--add-opens=java.desktop/com.sun.imageio.plugins.jpeg=ALL-UNNAMED", "--add-opens=java.base/sun.net.www.protocol.jrt=ALL-UNNAMED", "--add-opens=java.naming/javax.naming.spi=ALL-UNNAMED", "--add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED", "--add-opens=java.base/java.lang=ALL-UNNAMED", "--add-opens=java.base/jdk.internal.loader=ALL-UNNAMED", "--add-opens=java.base/java.net=ALL-UNNAMED", "-Dnashorn.args=--no-deprecation-warning", "-jar", "cq-quickstart.jar", "-Dsling.run.modes=author,dynamicmedia_scene7", "-p", "4502", "-nointeractive"]
This file instructs Docker to create a new image based on the “aem-base” image. It makes ports 4502 and 5005 available (5005 for debugging purposes), sets up a mount point at “/opt/aem/crx-quickstart/logs”, and specifies the command to run when the image is executed.
Run the following command from within the “aem-in-docker” folder.
docker build -f author/Dockerfile -t aem-author .
After the command has been completed run:
docker image ls
You should see your newly created “aem-author” image.
Create a file named “Dockerfile” within the “publish” subdirectory.
Set the contents of the file to the following:
# Use the previously created aem-base
FROM aem-base

# Expose AEM publish on port 4503 and debug on port 5006
EXPOSE 4503
EXPOSE 5006

VOLUME ["/opt/aem/crx-quickstart/logs"]

# Make the container always start in Publish mode on port 4503.
# Additional switches support Java 11: https://experienceleague.adobe.com/en/docs/experience-manager-65/content/implementing/deploying/deploying/custom-standalone-install
# Also adds the Dynamic Media run mode.
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5006", "-XX:+UseParallelGC", "--add-opens=java.desktop/com.sun.imageio.plugins.jpeg=ALL-UNNAMED", "--add-opens=java.base/sun.net.www.protocol.jrt=ALL-UNNAMED", "--add-opens=java.naming/javax.naming.spi=ALL-UNNAMED", "--add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED", "--add-opens=java.base/java.lang=ALL-UNNAMED", "--add-opens=java.base/jdk.internal.loader=ALL-UNNAMED", "--add-opens=java.base/java.net=ALL-UNNAMED", "-Dnashorn.args=--no-deprecation-warning", "-jar", "cq-quickstart.jar", "-Dsling.run.modes=publish,dynamicmedia_scene7", "-p", "4503", "-nointeractive"]
Run the following command from within the “aem-in-docker” folder.
docker build -f publish/Dockerfile -t aem-publish .
After the command has been completed run:
docker image ls
You should see your newly created “aem-publish” image.
Let’s set up a network to connect Docker containers and facilitate data sharing between them.
docker network create adobe
It’s time to run our Author Docker Image. First, create a local directory for the logs volume specified in the Dockerfile. Within the author subdirectory, create a directory named “logs.” Run the following command within the new logs folder:
On Windows (PowerShell):

docker run -d --name author -p 4502:4502 -p 5005:5005 --network adobe -v ${PWD}:/opt/aem/crx-quickstart/logs aem-author

On macOS/Linux:

docker run -d --name author -p 4502:4502 -p 5005:5005 --network adobe -v `pwd`:/opt/aem/crx-quickstart/logs aem-author
The command will return the ID of the new Docker container. It may take some time for the new AEM instance to start. To check its status, you can monitor the “error.log” file in the logs directory.

On Windows (PowerShell):

Get-Content -Path .\error.log -Wait

On macOS/Linux:

tail -f error.log
After AEM has finished starting up, check that everything is loading correctly by visiting: http://localhost:4502/aem/start.html.
Let’s stop the AEM container for the time being:
docker stop author
It’s time to run our Publisher Docker Image. First, create a local directory for the logs volume specified in the Dockerfile. Within the publish subdirectory, create a directory named “logs.” Run the following command within the new logs folder:
On Windows (PowerShell):

docker run -d --name publish -p 4503:4503 -p 5006:5006 --network adobe -v ${PWD}:/opt/aem/crx-quickstart/logs aem-publish

On macOS/Linux:

docker run -d --name publish -p 4503:4503 -p 5006:5006 --network adobe -v `pwd`:/opt/aem/crx-quickstart/logs aem-publish
The command will return the ID of the new Docker container. It may take some time for the new AEM instance to start. To check its status, you can monitor the “error.log” file in the logs directory.

On Windows (PowerShell):

Get-Content -Path .\error.log -Wait

On macOS/Linux:

tail -f error.log
After AEM has finished starting up, check that everything is loading correctly by visiting: http://localhost:4503/content.html. You will see a “Not Found” page. That is fine for now.
Let’s stop the AEM container for the time being:
docker stop publish
Open Rancher Desktop and go to the Containers tab in the left navigation pane. To start individual containers, check the box in the State column for each container you want to start, then click the Start button. To start all containers at once, check the box in the header row of the State column, and then click the Start button. Let’s go ahead and start all containers.
If you prefer using the command line, you can run:
docker start author
docker start publish
Since Docker’s mascot is a whale, I thought it would be fun to name our new AEM project after a famous fictional whale: Monstro from Pinocchio.
Run the following command from a command line (Note: you may have to run this command with elevated privileges):
mvn -B archetype:generate -D archetypeGroupId=com.adobe.aem -D archetypeArtifactId=aem-project-archetype -D archetypeVersion=50 -D aemVersion=cloud -D appTitle="Monstro" -D appId="monstro" -D groupId="com.monstro" -D frontendModule=general -D includeExamples=n
Once this project has been created, let us build and deploy it to our Author instance.
Run the following command from within the “Monstro” project:
mvn clean install -PautoInstallSinglePackage
Check that the project is installed by visiting the following URL to view the results: http://localhost:4502/editor.html/content/monstro/us/en.html. You should see the following:
Now, let us build and deploy the project to our Publish instance.
Run the following command from within the “Monstro” project:
mvn clean install -PautoInstallSinglePackagePublish
Verify that the project is installed by visiting this URL: http://localhost:4503/content/monstro/us/en.html. Installation may take up to five minutes. After this period, you should see the following:
It’s time to configure the publish agent on our author instance. Go to this URL: http://localhost:4502/etc/replication/agents.author/publish.html.
Click the “Edit” button (next to settings).
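The key change for our containerized setup is the Transport URI: because the author and publish containers share the “adobe” Docker network, the publisher is reachable by its container name instead of localhost. Assuming the default replication credentials, the Transport tab would look something like this:

Transport URI: http://publish:4503/bin/receive?sling:authRequestLogin=1
User: admin
Password: admin

Also confirm the agent is enabled on the Settings tab, then save.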
Go back to http://localhost:4502/editor.html/content/monstro/us/en.html. Edit the “Hello, World” component by changing the text from “lalala :)” to “Monstro is the enormous, fearsome whale from Disney’s 1940 animated film Pinocchio.” Verify the update and publish the page. Then, check http://localhost:4503/content/monstro/us/en.html to see your changes on the Publisher as well.
Make sure the publisher instance is running before proceeding. Extract the AEM SDK Dispatcher tools.
On Windows (PowerShell):

Expand-Archive .\aem-sdk-dispatcher-tools-2.0.222-windows.zip
Rename-Item -Path .\aem-sdk-dispatcher-tools-2.0.222-windows -NewName dispatcher-sdk-2.0.222
On macOS/Linux:

chmod +x ./aem-sdk-dispatcher-tools-2.0.222-unix.sh
./aem-sdk-dispatcher-tools-2.0.222-unix.sh
Since we’ve set up a custom network for our AEM containers, the docker run script won’t function correctly because it doesn’t recognize this network. Let’s modify the docker run script.
Open “dispatcher-sdk-2.0.222\bin\docker_run.cmd” in your favorite editor.
Add the "--network adobe" argument to the docker command inside the "else" statement.
Open “dispatcher-sdk-2.0.222/bin/docker_run.sh” in your favorite editor.
Add the "--network adobe" argument to the docker command inside the "else" statement.
Execute the docker run script with the following parameters. Be sure to replace the dispatcher source path with the path to your “monstro” source.
On Windows:

.\dispatcher-sdk-2.0.222\bin\docker_run.cmd C:\Users\shann\Sites\monstro\dispatcher\src publish:4503 8080
On macOS/Linux:

./dispatcher-sdk-2.0.222/bin/docker_run.sh ~/Sites/monstro/dispatcher/src publish:4503 8080
Once the text stream in your terminal has stopped, go to http://localhost:8080/. You should see the following:
Open Rancher Desktop and navigate to the Containers tab. Locate the container with an unusual name. If you stop this container, it won’t be possible to start it again. Please go ahead and stop this container. The dispatcher code running in your terminal will also terminate. We want this container to be more permanent, so let’s make some additional changes to the docker run script.
Open “dispatcher-sdk-2.0.222\bin\docker_run.cmd” in your favorite editor.
Open “dispatcher-sdk-2.0.222/bin/docker_run.sh” in your favorite editor.
Add the "--name dispatcher" argument to the "docker" command within the "else" statement. Also, remove the "--rm" switch. According to Docker documentation, the "--rm" switch automatically removes the container and its associated anonymous volumes when it exits, which is not what we want.
Run the docker run command in your terminal again:
On Windows:

.\dispatcher-sdk-2.0.222\bin\docker_run.cmd C:\Users\shann\Sites\monstro\dispatcher\src publish:4503 8080
On macOS/Linux:

./dispatcher-sdk-2.0.222/bin/docker_run.sh ~/Sites/monstro/dispatcher/src publish:4503 8080
Open Rancher Desktop and go to the Containers tab. You should see a container named “dispatcher.” Stop this container. The dispatcher code running in your terminal will terminate, but the container will remain in Rancher Desktop. You can now stop and restart this container as many times as you’d like. You can also start and stop the dispatcher via the command line:
docker start dispatcher
docker stop dispatcher
We have an author and publisher AEM instance running inside a Docker container. Additionally, we have a dispatcher container created using the source from the Monstro project. Although this dispatcher container isn’t very useful, the advantage of Docker is that you can easily delete and create new containers as needed.
I hope you found this blog helpful. I’ve been using Docker on my local machine for the past eight years and value the flexibility it provides. I can’t imagine going back to managing a local AEM instance or dealing with Apache configurations to get the dispatcher working. Those days are behind me.
Design patterns are pivotal in crafting application solutions that are both maintainable and scalable. The Strategy Design Pattern is ideal for scenarios that require runtime selection among various available algorithms. In this blog, we’ll cover how to implement the Strategy Design Pattern in an AEM OSGi service, boosting your code’s flexibility and manageability.
The Strategy Design Pattern is a behavioral design pattern that defines a family of algorithms, encapsulates each one, and makes them interchangeable. This pattern allows the algorithm to vary independently from the clients that use it. It's particularly useful where multiple methods can be applied to achieve a specific goal and the method to be used can be selected at runtime.
You can read more about Strategy Pattern in general here.
Here’s a UML diagram that illustrates the Strategy Design Pattern:
So, is there any scenario where we can use it in AEM? Yes! There are many AEM use cases where multiple strategies exist and the proper one must be chosen at runtime; we generally handle such scenarios with conditional and switch statements. A few examples where the Strategy Pattern can be applied include selecting a rendering strategy per channel or brand, choosing an integration endpoint per region, or picking a validation routine per content type.
Implementing the Strategy Design Pattern in AEM OSGi services offers numerous benefits, aligning well with SOLID principles and other best practices in software development.
In AEM, we can use regular classes for the strategy implementations, but OSGi gives us a better way to handle them. A typical implementation of the Strategy Design Pattern in AEM consists of a common strategy interface, multiple concrete strategy implementations registered as OSGi components, and a context service that dynamically binds them and selects the applicable one at runtime.
First, create a Java interface that defines the common behavior for all strategies.
public interface StrategyService {

    /**
     * Name of the strategy.
     * @return Strategy name.
     */
    String getName();

    /**
     * Executes the strategy based on criteria.
     * @param strategyBean Strategy bean object consisting of the fields required for strategy execution.
     * @return Strategy result bean. You can replace it with a result object according to your need.
     */
    StrategyResult execute(StrategyBean strategyBean);

    /**
     * Executes the isFit check: whether the given strategy fits the scenario.
     * @param strategyBean Strategy object consisting of the params needed for the isFit operation.
     * @return true if the given strategy fits the criteria.
     */
    boolean isFit(StrategyBean strategyBean);
}
Comments are self-explanatory here but let’s still deep dive into what each method intends to do.
String getName()
Name of the strategy. The name should be self-explanatory, so that anyone reading the code immediately knows which scenario a strategy targets.
StrategyResult execute(StrategyBean strategyBean);
This method executes the operation's logic at runtime. The StrategyResult bean shown here is just for reference; you can replace it with a result object that suits your operation.
StrategyBean holds the params needed for executing the operation; again, this is just for reference, and you can substitute your own object.
boolean isFit(StrategyBean strategyBean);
This method decides whether the given strategy fits the scenario. StrategyBean provides the parameters needed to make that decision.
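The interface references two beans the post never defines. A hypothetical minimal shape for both could be the following; the field names are placeholders to adapt to your use case:

import java.util.HashMap;
import java.util.Map;

// Hypothetical input bean: carries whatever the strategies need to decide and execute
class StrategyBean {
    private String channel;
    private Map<String, Object> params = new HashMap<>();
    // Getters and setters omitted for brevity
}

// Hypothetical result bean: carries the outcome of a strategy execution
class StrategyResult {
    private boolean success;
    private String message;
    // Getters and setters omitted for brevity
}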
Create multiple implementations of the StrategyService interface. Each class will implement the methods differently.
Here, for reference, let's create two concrete implementations of StrategyService, namely FirstStrategyServiceImpl and SecondStrategyServiceImpl.
@Component(service = StrategyService.class, immediate = true, property = { "strategy.type=FirstStrategy" })
@Slf4j
public class FirstStrategyServiceImpl implements StrategyService {

    @Override
    public String getName() {
        return "FirstStrategy";
    }

    @Override
    public StrategyResult execute(StrategyBean strategyBean) {
        log.info("Executing First Strategy Service Implementation....");
        // Implement logic
        return new StrategyResult();
    }

    @Override
    public boolean isFit(StrategyBean strategyBean) {
        // Implement logic
        return new Random().nextBoolean();
    }
}
And our Second Service implementation here will look like this:
@Component(service = StrategyService.class, immediate = true, property = { "strategy.type=SecondStrategy" })
@Slf4j
public class SecondStrategyServiceImpl implements StrategyService {

    @Override
    public String getName() {
        return "SecondStrategy";
    }

    @Override
    public StrategyResult execute(StrategyBean strategyBean) {
        log.info("Executing Second Strategy Service Implementation....");
        // Implement logic
        return new StrategyResult();
    }

    @Override
    public boolean isFit(StrategyBean strategyBean) {
        // Implement logic
        return new Random().nextBoolean();
    }
}
Here, we don't include any real logic; these are just dummy implementations. You can add your operational logic in the execute and isFit methods.
Our context service implementation will hold all the strategies with dynamic binding and will look like this.
@Component(service = StrategyContextService.class, immediate = true)
@Slf4j
public class StrategyContextServiceImpl implements StrategyContextService {

    private final Map<String, StrategyService> strategies = new HashMap<>();

    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    protected void bindStrategy(final StrategyService strategy) {
        strategies.put(strategy.getName(), strategy);
    }

    protected void unbindStrategy(final StrategyService strategy) {
        strategies.remove(strategy.getName());
    }

    public Map<String, StrategyService> getStrategies() {
        return strategies;
    }

    @Override
    public StrategyService getApplicableStrategy(final StrategyBean strategyBean) {
        return strategies
                .values()
                .stream()
                .filter(strategyService -> strategyService.isFit(strategyBean))
                .findFirst()
                .orElse(null);
    }

    @Override
    public StrategyResult executeApplicableStrategy(final StrategyBean strategyBean) {
        final var strategyService = getApplicableStrategy(strategyBean);
        if (strategyService != null) {
            return strategyService.execute(strategyBean);
        } else {
            // Run the default strategy, or update the logic accordingly
            return strategies.get("default").execute(strategyBean);
        }
    }

    @Override
    public Collection<StrategyService> getAvailableStrategies() {
        return strategies.values();
    }

    @Override
    public Map<String, StrategyService> getAvailableStrategiesMap() {
        return strategies;
    }
}
Here you can see we are dynamically binding instances of the StrategyService implementations, so this context class never needs to change when a new strategy is added; the context dynamically binds it and updates the available strategies.
We are providing methods here to get the strategy, execute the strategies, and get all the available strategies.
The advantage is that adding a strategy requires nothing more than its implementation class. As long as its isFit logic doesn't conflict with the other strategies, existing services are unaffected. This isolation gives us confidence in complex solutions, because newly added functionality can't ripple into the rest of the system.
Since this is an OSGi implementation, you can use it in any OSGi service, servlet, workflow, scheduler, Sling job, Sling model, etc.
You can either use this directly or you can create a wrapper service around this if any further post-processing of strategy results is needed.
Our example consumer service and sling model will look like this:
@Component(service = StrategyConsumer.class, immediate = true)
public class StrategyConsumer {

    @Reference
    private StrategyContextService strategyContext;

    public StrategyResult performTask() {
        final StrategyBean strategyBean = new StrategyBean();
        // Set strategy object based on data
        StrategyResult strategyResult = strategyContext.executeApplicableStrategy(strategyBean);
        // Do post-processing
        return strategyResult;
    }

    public StrategyResult performTask(final StrategyBean strategyBean) {
        StrategyResult strategyResult = strategyContext.executeApplicableStrategy(strategyBean);
        // Do post-processing
        return strategyResult;
    }
}
The method can be extended to populate the strategy bean with additional data if needed.
A sample Sling model that uses the consumer service (or connects directly to the context service) will look like this:
@Model(adaptables = SlingHttpServletRequest.class)
public class StrategyConsumerModel {

    @OSGiService
    private StrategyConsumer strategyConsumer;

    @PostConstruct
    public void init() {
        StrategyResult strategyResult = strategyConsumer.performTask();
        // Do processing if needed
        // OR
        StrategyBean strategyBean = new StrategyBean();
        strategyResult = strategyConsumer.performTask(strategyBean);
    }
}
And you are all set now.
This was just a dummy implementation, but you can twist it according to your business needs.
Beyond the use cases mentioned in the introduction, this approach suits any task where the implementation must be selected dynamically at runtime based on the scenario.
With the above implementation, we can see that the Strategy Design Pattern is well suited for managing and extending complex algorithms in a clean, modular way that adheres to best practices, including SOLID principles. It promotes flexibility, scalability, maintainability, and ease of testing. As you develop AEM solutions, consider incorporating it; it will significantly improve code quality and reusability while ensuring that evolving business requirements can be accommodated easily.
Learn more about AEM developer tips and tricks by reading our Adobe blog!
Do you remember the days of static templates? We had a plethora of templates, each with its own page components and CQ dialogs. It was a maintenance nightmare!
But then came editable templates, and everything changed. With this new approach, we can define a single page component and create multiple templates from it. Sounds like a dream come true, right?
But there’s a catch. What if we need different dialogs for different templates? Do we really need to create separate template types for each one? That would mean maintaining multiple template types and trying to keep track of which template uses which type. Not exactly the most efficient use of our time.
In this post, we’ll explore the challenges of template management and how we can overcome them using Granite render conditions and context-aware configurations.
When managing page properties, we’re often faced with a dilemma. While context-aware configurations are ideal for setting up configurations at the domain or language level, they fall short when it comes to managing individual pages.
The usual go-to solution is to update the Page Properties dialog, but this approach has its own set of limitations. So, what’s a developer to do?
Fortunately, there’s a solution that combines the power of Granite render conditions with the flexibility of context-aware configurations.
What is Granite Render Condition?
A render condition is just conditional logic that determines whether a specific section of the authoring UI is rendered. If you want a more detailed description, you can read Adobe's official documentation.
Say we want to show or hide a page properties tab based on the template name, with the mapping configured through context-aware configuration rather than hardcoded.
First, we need to build the CAC, which will contain fields for the template names and the tab paths to show.
We will create a service for context-aware configuration which will read config and provide the mapping.
public interface PageTabsMappingService {
    List<PageTabsMappingConfig> getPageTabsMappingConfigList(Resource resource);
}
Here PageTabsMappingConfig is just a POJO bean class that consists of a page tab path and template path.
@Data
@AllArgsConstructor
public class PageTabsMappingConfig {
    private String templatePath;
    private String tabPath;
}
Now let's create the context-aware configuration definition, which exposes a template path and a tab path.
We want this to be more author-friendly, so we will be using a custom data source. This data source can be found in this blog post.
For this example, we need two data sources, one for template path and one for tab paths.
So finally, our configuration will look like this:
@Configuration(label = "Page Tabs Mapping Configuration", description = "Page Tabs Mapping Config", property = {EditorProperties.PROPERTY_CATEGORY + "=TemplateAndTabs"}, collection = true) public @interface PageTabsMappingConfiguration { @Property(label = "Select Template To Be Mapped", description = "Select Template Name To Be Mapped", property = { "widgetType=dropdown", "dropdownOptionsProvider=templateDataSource" },order = 1) String getTemplatePath(); @Property(label = "Select Tab to be mapped", description = "Select Tab to be mapped", property = { "widgetType=dropdown", "dropdownOptionsProvider=tabDataSource" },order = 2) String getTabPath(); }
Now let’s implement Service to read this config.
@Component(service = PageTabsMappingService.class, immediate = true)
@ServiceDescription("Implementation For PageTabsMappingService")
@Slf4j
public class PageTabsMappingServiceImpl implements PageTabsMappingService {

    @Override
    public List<PageTabsMappingConfig> getPageTabsMappingConfigList(final Resource resource) {
        final ConfigurationBuilder configurationBuilder = Optional.ofNullable(resource)
                .map(resource1 -> resource1.adaptTo(ConfigurationBuilder.class))
                .orElse(null);
        return new ArrayList<>(Optional
                .ofNullable(configurationBuilder)
                .map(builder -> builder
                        .name(PageTabsMappingConfiguration.class.getName())
                        .asCollection(PageTabsMappingConfiguration.class))
                .orElse(new ArrayList<>()))
                .stream()
                .map(pageTabsMappingConfiguration -> new PageTabsMappingConfig(
                        pageTabsMappingConfiguration.getTemplatePath(),
                        pageTabsMappingConfiguration.getTabPath()))
                .collect(Collectors.toList());
    }
}
In the above code, we are reading context-aware configuration and providing the list for further use.
Now let us create the render condition that shows and hides tabs in page properties, utilizing the CAC mapping configuration.
We will use a Sling model for this. It will be invoked whenever the page properties tabs are opened: in page editor mode, in the creation wizard, or from the Sites console.
@Model(adaptables = SlingHttpServletRequest.class)
public class TabsRenderConditionModel {

    @Self
    private SlingHttpServletRequest request;

    @OSGiService
    private PageTabsMappingService pageTabsMappingService;

    /**
     * Sets the render condition for page properties tabs.
     */
    @PostConstruct
    public void init() {
        // We use the root-level site config since this mapping is global.
        // For a multitenant environment, add an OSGi config and resolve the tenant path instead.
        final var resource = request.getResource()
                .getResourceResolver().getResource("/content");
        final List<PageTabsMappingConfig> tabRenderConfig =
                pageTabsMappingService.getPageTabsMappingConfigList(resource);

        // The render condition node sits below the tab node, so the parent's name
        // is the tab's node name. The configured tab path value is expected to match it.
        final var tabName = Optional.ofNullable(request.getResource().getParent())
                .map(Resource::getName).orElse(StringUtils.EMPTY);

        final var props = (ValueMap) request.getAttribute("granite.ui.form.values");
        final var template = Optional.ofNullable(props)
                .map(valueMap -> valueMap.get("cq:template", String.class))
                .orElse(StringUtils.EMPTY);

        final var renderFlag = tabRenderConfig.stream()
                .anyMatch(tabConfig -> StringUtils.equals(tabName, tabConfig.getTabPath())
                        && StringUtils.equals(template, tabConfig.getTemplatePath()));

        request.setAttribute(RenderCondition.class.getName(), new SimpleRenderCondition(renderFlag));
    }
}
After reading the template, we check whether a mapping exists for the given tab name and template. Based on that, we set the flag on a SimpleRenderCondition, which shows or hides the tab.
Now it's time to use this Sling model in an actual render condition script. In our project, let's assume the render condition component lives at /apps/my-project/render-conditions/tabs-renderconditions.
Create tabs-renderconditions.html inside it with the following content:
<sly data-sly-use.tab="com.mybrand.demo.models.TabsRenderConditionModel" />
Build custom tabs under the base page component folder as follows:

/apps/my-project/components/structure/page/base-page/tabs
    landing-page-tab
    home-page-tab
    country-page-tab
    state-page-tab
    hero-page-tab
Our cq:dialog will then reference these tabs like this:
<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:cq="http://www.day.com/jcr/cq/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
          xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
          jcr:primaryType="nt:unstructured">
    <content jcr:primaryType="nt:unstructured">
        <items jcr:primaryType="nt:unstructured">
            <tabs jcr:primaryType="nt:unstructured">
                <items jcr:primaryType="nt:unstructured">
                    <additionalHeroPage
                            jcr:primaryType="nt:unstructured"
                            sling:resourceType="granite/ui/components/foundation/include"
                            path="/mnt/override/apps/my-project/components/structure/page/tabs/additional-hero-page"/>
                    <additionalStatePage
                            jcr:primaryType="nt:unstructured"
                            sling:resourceType="granite/ui/components/foundation/include"
                            path="/mnt/override/apps/my-project/components/structure/page/tabs/additionalstatepage"/>
                </items>
            </tabs>
        </items>
    </content>
</jcr:root>
Our sample tab with the render condition applied looks like this:
<additionalHeroPage
        cq:showOnCreate="{Boolean}true"
        jcr:primaryType="nt:unstructured"
        jcr:title="Additional Hero Page Setting"
        sling:resourceType="granite/ui/components/coral/foundation/fixedcolumns">
    <items jcr:primaryType="nt:unstructured">
        <column
                jcr:primaryType="nt:unstructured"
                sling:resourceType="granite/ui/components/coral/foundation/container">
            <items jcr:primaryType="nt:unstructured">
                <section1
                        jcr:primaryType="nt:unstructured"
                        jcr:title="Settings"
                        sling:resourceType="granite/ui/components/coral/foundation/form/fieldset">
                    <items jcr:primaryType="nt:unstructured">
                        <testProperty
                                cq:showOnCreate="{Boolean}true"
                                jcr:primaryType="nt:unstructured"
                                sling:resourceType="granite/ui/components/coral/foundation/form/textfield"
                                fieldDescription="Test Property"
                                fieldLabel="Test Property"
                                name="./testProp"
                                required="{Boolean}true">
                            <granite:data
                                    jcr:primaryType="nt:unstructured"
                                    cq-msm-lockable="./testProp"/>
                        </testProperty>
                    </items>
                </section1>
            </items>
        </column>
    </items>
    <granite:rendercondition
            jcr:primaryType="nt:unstructured"
            sling:resourceType="my-project/render-conditions/tabs-renderconditions"/>
</additionalHeroPage>
With the Template and Tabs CAC configuration in place, the "Additional Hero Page Setting" tab is displayed in page properties whenever an author opens a page created from the hero-page template.
Finally, when you open any page built from a configured template, such as a hero page, you will see the tabs configured for it.
I hope you have a better understanding of how to overcome some of the challenges of managing templates in AEM.
For more AEM tips and tricks, keep up with us on our Adobe blog!
When troubleshooting issues in Adobe Experience Manager (AEM), the first step is often to identify which code version is deployed for the affected projects. However, OSGi bundle versions only provide a partial picture, lacking crucial details like the exact branch used. This becomes especially problematic when managing multiple tenants in the same environment or comparing code across different environments, which requires tedious navigation through CI/CD tool histories like Jenkins.
To streamline development workflows and issue resolution, it’s essential to implement a solution that clearly displays vital Git information, including branch names and commit IDs. By accurately tracking and recording this data, teams like Dev/QA can verify code integrity before starting their tasks, ensuring a smoother and more efficient experience.
The purpose of the git-commit-id plugin is to generate a property file containing Git build information every time a Maven build is executed. This file can then be utilized in our backend systems and exposed for tracking purposes.
Now, let’s delve into the steps for implementing and exposing this build information.
We begin by adding and configuring the git-commit-id plugin in our project’s POM.xml file. This plugin will be responsible for generating the necessary property file containing Git build details during the Maven build process.
Next, we create a custom OSGi service within our AEM project. This service reads the generated property file and extracts the relevant build information, making the data readily available within our AEM environment.
To expose the build information retrieved by the OSGi service, we will develop a Sling servlet. This servlet leverages the OSGi service to access the Git build details and makes them available through a designated endpoint, where other components of our AEM project or external systems can consume them as needed.
Let's start with the POM changes for the plugin.
In the core bundle's POM, add an entry for the git-commit-id plugin to the plugins section of the build:
<plugin>
    <groupId>pl.project13.maven</groupId>
    <artifactId>git-commit-id-plugin</artifactId>
    <version>2.2.4</version>
    <executions>
        <execution>
            <id>get-the-git-infos</id>
            <goals>
                <goal>revision</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <dotGitDirectory>${project.basedir}/.git</dotGitDirectory>
        <prefix>git</prefix>
        <verbose>false</verbose>
        <generateGitPropertiesFile>true</generateGitPropertiesFile>
        <!-- The .json extension matters: the OSGi service below reads this exact file name. -->
        <generateGitPropertiesFilename>src/main/resources/my-project-git-response.json</generateGitPropertiesFilename>
        <format>json</format>
        <dateFormat>yyyy-MM-dd-HH-mm</dateFormat>
        <gitDescribe>
            <skip>false</skip>
            <always>false</always>
            <dirty>-dirty</dirty>
        </gitDescribe>
        <includeOnlyProperties>
            <includeOnlyProperty>git.branch</includeOnlyProperty>
            <includeOnlyProperty>git.build.time</includeOnlyProperty>
            <includeOnlyProperty>git.build.version</includeOnlyProperty>
            <includeOnlyProperty>git.commit.id</includeOnlyProperty>
            <includeOnlyProperty>git.commit.time</includeOnlyProperty>
        </includeOnlyProperties>
    </configuration>
</plugin>
The plugin offers many configuration options, including which properties to emit. Here we've limited the output to the Git branch name, the build timestamp, the build version, the commit ID, and the timestamp of that commit.
Since the file will be created under src/main/resources, make sure the following resources entry exists in the build section:
<resources>
    <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
    </resource>
</resources>
Now let’s create a bean class representing our Build Info.
@Data
public class BuildInformation {

    @SerializedName("git.branch")
    private String branch;

    @SerializedName("git.build.time")
    private String buildTime;

    @SerializedName("git.build.version")
    private String buildVersion;

    @SerializedName("git.commit.id")
    private String commitId;

    @SerializedName("git.commit.id.abbrev")
    private String commitIdAbbrev;

    @SerializedName("git.commit.id.describe")
    private String commitIdDescribe;

    @SerializedName("git.commit.id.describe-short")
    private String commitIdDescribeShort;

    @SerializedName("git.commit.time")
    private String commitTime;

    private String aemVersion;

    private List<String> runModes;
}
Our Service interface will look like this:
public interface BuildInformationService {
    BuildInformation getBuildInformation();
}
Finally, our Implementation class will look like this:
@Component(service = BuildInformationService.class)
@ServiceDescription("BuildInformationService Implementation.")
@Slf4j
public class BuildInformationServiceImpl implements BuildInformationService {

    public static final String GIT_INFO_FILE_PATH = "my-project-git-response.json";

    @Reference
    private ProductInfoProvider productInfoProvider;

    @Reference
    private SlingSettingsService slingSettingsService;

    private BuildInformation buildInformation;

    @Activate
    public void activate() {
        buildInformation = new BuildInformation();
        try (var inputStream = this.getClass().getClassLoader().getResourceAsStream(GIT_INFO_FILE_PATH)) {
            final var jsonString = IOUtils.toString(inputStream, StandardCharsets.UTF_8);
            buildInformation = new Gson().fromJson(jsonString, BuildInformation.class);
            // Set additional fields that are not part of the generated JSON file.
            buildInformation.setAemVersion(productInfoProvider.getProductInfo().getVersion().toString());
            // getRunModes() returns a Set, while the bean holds a List.
            buildInformation.setRunModes(new ArrayList<>(slingSettingsService.getRunModes()));
        } catch (Exception e) {
            log.error("Error while generating build information", e);
        }
    }

    @Override
    public BuildInformation getBuildInformation() {
        return buildInformation;
    }
}
We simply read the JSON file created by the plugin and populate the object.
The ProductInfoProvider API supplies the AEM version, while the SlingSettingsService provides the run modes.
Now let's create a sample servlet that uses this service to print the information.
import com.google.gson.Gson;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.apache.sling.servlets.annotations.SlingServletPaths;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.propertytypes.ServiceDescription;

import javax.servlet.Servlet;
import javax.servlet.ServletException;
import java.io.IOException;

@Component(service = Servlet.class)
@ServiceDescription("This servlet will provide Build Information")
@SlingServletPaths(BuildInformationServlet.BUILD_INFO_PATH)
public class BuildInformationServlet extends SlingSafeMethodsServlet {

    public static final String BUILD_INFO_PATH = "/bin/my-project/buildinfo";

    @Reference
    private BuildInformationService buildInformationService;

    @Override
    protected void doGet(final SlingHttpServletRequest request, final SlingHttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");
        final var writer = response.getWriter();
        final var gson = new Gson();
        final var jsonString = gson.toJson(buildInformationService.getBuildInformation());
        writer.print(jsonString);
        writer.flush();
    }
}
After deploying the code and opening the servlet endpoint, you will get a response like this:
{
  "git.branch": "feature/PROJECT-1234",
  "git.build.time": "2024-04-03-09-37",
  "git.build.version": "2.22.0-65-SNAPSHOT",
  "git.commit.id": "1fa29afb73e5b0cbfc90a2bb33db28741d98eec0",
  "git.commit.id.abbrev": "1fa29af",
  "git.commit.id.describe": "1fa29af-dirty",
  "git.commit.id.describe-short": "1fa29af-dirty",
  "git.commit.time": "2024-04-01-18-04",
  "aemVersion": "6.5.17.0",
  "runModes": [
    "s7connect",
    "crx3",
    "author",
    "samplecontent",
    "crx3tar"
  ]
}
Now that our Service is up and running, we can leverage it anywhere in our application. Let’s create a reusable component for displaying build information, powered by our trusty Sling model. This component can tap into the Service to fetch the necessary build details.
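A minimal sketch of such a Sling model (the class name and HTL usage are hypothetical) could look like this:

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.OSGiService;

// Hypothetical model handing the build information to an HTL script.
@Model(adaptables = SlingHttpServletRequest.class)
public class BuildInformationModel {

    @OSGiService
    private BuildInformationService buildInformationService;

    public BuildInformation getBuildInformation() {
        return buildInformationService.getBuildInformation();
    }
}

An HTL script using this model could then render, for example, ${model.buildInformation.branch} and ${model.buildInformation.commitId}.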
Want to include additional build details, such as Jenkins job names and build numbers? Easy! We can pass these attributes from Jenkins via command-line arguments. Then, we store these values in properties files, making them easily consumable by our Sling model, just like our Git information file.
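For example, a Jenkins job could pass its metadata into the Maven build via command-line properties (the property names here are illustrative, not a convention):

mvn clean install -PautoInstallSinglePackage \
    -Djenkins.job.name=${JOB_NAME} \
    -Djenkins.build.number=${BUILD_NUMBER}

With resource filtering enabled (see the resources entry added earlier), a file such as src/main/resources/jenkins-build.properties containing job.name=${jenkins.job.name} would have its placeholder replaced at build time, ready to be read the same way as the Git information file.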
The benefit of this solution is that it’s flexible, scalable, and reusable across multiple projects in a multitenant development environment.
Make sure to follow our Adobe blog for more Adobe solution tips and tricks!
The official Adobe tutorial for setting up a local AEM development environment requests the reader to install Java JDK 11 for AEM 6.5 and above. It does not provide a download link for the Java JDK 11. If you were to do a quick Google search for “JDK 11 download,” you would be presented with a search results page containing links to Oracle.
Oracle Corporation acquired Sun Microsystems (the creators of the Java Programming Language) in 2010. In 2019, Oracle significantly changed its Java licensing model, impacting how businesses and developers could use Java. Oracle now requires payment for commercial use of Oracle JDK for updates and support.
Slightly lower on the Google search results page, you will see links to OpenLogic. OpenLogic offers free builds of JDK 11. OpenJDK is available free of charge and on an “as is” basis.
The simplest method I’ve found to install OpenJDK 11 is from this site: https://www.openlogic.com/openjdk-downloads.
From here, you are presented with a form where you select your Java version (11), operating system, architecture, and Java package (JDK). Select your preferred option, and the page will display a list of available Java versions. You can then choose to download either the installer for a quick and easy setup or a zip archive for manual installation. I recommend downloading and running the installer.
Another option is package managers. Package managers simplify OpenJDK installation across platforms. They’re especially efficient on Linux. macOS users can utilize Homebrew for easy installation and updates. Windows users now have Winget from Microsoft for managing applications like OpenJDK.
Example commands for installing OpenJDK via package managers follow.
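To the best of my knowledge, these are the relevant package identifiers; verify them against your package manager before running:

# Homebrew (macOS)
brew install openjdk@11

# Winget (Windows): Microsoft Build of OpenJDK
winget install Microsoft.OpenJDK.11

# APT (Debian/Ubuntu)
sudo apt install openjdk-11-jdk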
Installing Maven 3.9 requires a few additional steps.
On macOS, the Homebrew package manager is the best option. Using the --ignore-dependencies flag is crucial to prevent Homebrew from installing a potentially conflicting version of OpenJDK.
brew install --ignore-dependencies maven
Once Maven has been installed, edit the Z Shell configuration file (.zshrc) to include the following directives (create the file if it doesn’t exist):
export JDK_HOME=$(/usr/libexec/java_home)
export JAVA_HOME=$(/usr/libexec/java_home)
export PATH=$PATH:${JAVA_HOME}/bin:/usr/local/bin
Open a new terminal window and verify Java and Maven are installed correctly:
java --version
mvn --version
If the output shows the location (path) and version information for both Java and Maven, congratulations! You’ve successfully installed them on your macOS system.
On Linux, download the Maven Binary Archive here: https://maven.apache.org/download.cgi.
Unpack the archive and move it to the /opt directory:
tar -zxvf apache-maven-3.9.8-bin.tar.gz
sudo mv apache-maven-3.9.8 /opt/apache-maven
Edit your shell configuration file and add the following directives:
export PATH=$PATH:/opt/apache-maven/bin
Open a new terminal window and verify Maven is installed correctly:
mvn --version
If the output shows the location (path) and version information for Maven, congratulations! You’ve successfully installed Maven on your Linux system.
On Windows, download the Maven Binary Archive here: https://maven.apache.org/download.cgi.
Run PowerShell as an administrator.
Unzip the Maven Binary Archive:
Expand-Archive .\apache-maven-3.9.8-bin.zip
Create an “Apache Maven” folder within Program Files:
New-Item 'C:\Program Files\Apache Maven' -ItemType Directory -ea 0
Move the extracted directory to the “Apache Maven” folder:
Move-Item -Path .\apache-maven-3.9.8-bin\apache-maven-3.9.8 -Destination 'C:\Program Files\Apache Maven\'
Add the Maven bin directory (C:\Program Files\Apache Maven\apache-maven-3.9.8\bin) to the Path environment variable via System Properties > Environment Variables.
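If you prefer to script this step rather than click through the System Properties dialog, a PowerShell equivalent (assuming the install location used above) looks like this:

# Append the Maven bin directory to the current user's Path
$mavenBin = 'C:\Program Files\Apache Maven\apache-maven-3.9.8\bin'
$userPath = [Environment]::GetEnvironmentVariable('Path', 'User')
[Environment]::SetEnvironmentVariable('Path', "$userPath;$mavenBin", 'User')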
After updating the Path, open a new PowerShell prompt to verify Maven is installed correctly:
mvn --version
If the output shows the location (path) and version information for Maven, congratulations! You’ve successfully installed Maven on Windows.
Maven 3.9 will be the last version compatible with Adobe AEM 6.5. Future versions of Maven require JDK 17, which Adobe AEM does not yet support.
When using Java 11, Adobe recommends adding additional switches to your command line when starting AEM. See: https://experienceleague.adobe.com/en/docs/experience-manager-65/content/implementing/deploying/deploying/custom-standalone-install#java-considerations
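At the time of writing, the switches documented on that page look roughly like the following; always confirm against the linked documentation, as the list can change between service packs:

java -XX:+UseParallelGC \
    --add-opens=java.desktop/com.sun.imageio.plugins.jpeg=ALL-UNNAMED \
    --add-opens=java.base/sun.net.www.protocol.jrt=ALL-UNNAMED \
    --add-opens=java.naming/javax.naming.spi=ALL-UNNAMED \
    --add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED \
    --add-opens=java.base/java.lang=ALL-UNNAMED \
    --add-opens=java.base/jdk.internal.loader=ALL-UNNAMED \
    --add-opens=java.base/java.net=ALL-UNNAMED \
    -Dnashorn.args=--no-deprecation-warning \
    -jar aem-quickstart.jar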
Make sure to follow our Adobe blog for more Adobe solution tips and tricks!
Webpack is an amazing bundler for JavaScript and, with the correct loader, it can also transform CSS, HTML, and other assets. When a new AEM project is created via the AEM Project Archetype and the front-end module is set to general, Adobe provides a Webpack configuration to generate the project’s client libraries.
Vite is a new build tool that has recently come onto the scene. You can check the NPM trends here.
If you have any experience with Webpack, you know the challenges of configuring different loaders to preprocess your files. Many of these configurations are unnecessary with Vite. Vite supports TypeScript out of the box. Vite provides built-in support for .scss, .sass, .less, .styl, and .stylus files. There is no need to install Vite-specific plugins for them. If the project contains a valid PostCSS configuration, it will automatically apply to all imported CSS. It is truly a game-changer.
“Vite” comes from the French word for “fast”. In music, the term “Vite” refers to playing at a quickened pace. For the following tutorial, I have chosen the music term “Jete” for the name of our project. “Jete” refers to a bowing technique in which the player is instructed to let the bow bounce or jump off the strings. Let us take a cue from this musical term and “bounce” into our tutorial.
Create an AEM Project via the AEM Project Archetype:
mvn -B archetype:generate -D archetypeGroupId=com.adobe.aem -D archetypeArtifactId=aem-project-archetype -D archetypeVersion=49 -D aemVersion=cloud -D appTitle="Jete" -D appId="jete" -D groupId="com.jete" -D frontendModule=general -D includeExamples=n
Once your project has been created, install your project within your AEM instance:
mvn clean install -PautoInstallSinglePackage
After verifying the Jete site in AEM, we can start migrating our frontend project to Vite.
Back up the existing ui.frontend directory:

cd jete/
mv ui.frontend ../JeteFrontend

From within “jete” run:

npm create vite@latest
Use “aem-maven-archetype” for the project name, select Vanilla for the framework, and “TypeScript” for the variant.
Rename the directory “aem-maven-archetype” to “ui.frontend”. We chose that project name to match the name generated by the AEM Archetype.
mv aem-maven-archetype ui.frontend
Let’s put the pom.xml file back into the frontend directory:
mv ../JeteFrontend/pom.xml ui.frontend
Since we are updating the POM files, let’s update the Node and NPM versions in the parent pom.xml file:

<configuration>
    <nodeVersion>v20.14.0</nodeVersion>
    <npmVersion>10.7.0</npmVersion>
</configuration>
We will be using various Node utilities within our TypeScript files. Let us install the Node Types package.
npm install @types/node --save-dev

Add the following compiler options to our tsconfig.json file:

"outDir": "dist",
"baseUrl": ".",
"paths": {
  "@/*": ["src/*"]
},
"types": ["node"]
These options set the output directory to “dist”, set the base URL to the current directory (“ui.frontend”), create an alias of “@” for the src directory, and add the Node types to the global scope.
Let’s move our “public” directory and the index.html file into the “src” directory.
Create a file named “vite.config.ts” within the “ui.frontend” project.
Add the following vite configurations:
import path from 'path';
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    emptyOutDir: true,
    outDir: 'dist',
  },
  root: path.join(__dirname, 'src'),
  plugins: [],
  server: {
    port: 3000,
  },
});
Update the index.html file within the “src” directory. Change the reference of the main.ts file from “/src/main.ts” to “./main.ts”.
<script type="module" src="./main.ts"></script>
Run the Vite dev server with the following command:
npm run dev
You should see the default Vite starter page.
We are making progress!
Let us make some AEM-specific changes to our Vite configuration.
Change “outDir” to:
path.join(__dirname, 'dist/clientlib-site')
Add the following within the build section:
lib: {
  entry: path.resolve(__dirname, 'src/main.ts'),
  formats: ['iife'],
  name: 'site.bundle',
},
rollupOptions: {
  output: {
    assetFileNames: (file) => {
      if (file.name?.endsWith('.css')) {
        return 'site.bundle.[ext]';
      }
      return `resources/[name].[ext]`;
    },
    entryFileNames: `site.bundle.js`,
  },
},
These configurations set the entry file, wrap the output within an immediately invoked function expression (to protect against polluting the global namespace), set the JavaScript and CSS bundle names to site.bundle.js and site.bundle.css, and set the output path for assets to a directory named “resources”. Using the “iife” format requires setting the “process.env.NODE_ENV” variable.
Add a “define” section at the same level as “build” with the following option:
define: { 'process.env.NODE_ENV': '"production"', }, Add a “resolve” section at the same level as “define” and “build” to use our “@” alias: resolve: { alias: { '@': path.resolve(__dirname, './src'), }, }, Add the following “proxy” section within the “server” section: proxy: { '^/etc.clientlibs/.*': { changeOrigin: true, target: 'http://localhost:4502', }, },
These options inform the dev server to proxy all requests starting with /etc.clientlibs to localhost:4502.
It is time to remove the generated code. Remove “index.html”, “counter.ts”, “style.css”, “typescript.svg”, and “public/vite.svg” from within the “src” directory. Remove everything from “main.ts”.
Copy the backed-up index.html file to the src directory:
cp ../JeteFrontend/src/main/webpack/static/index.html ui.frontend/src/
Edit the index.html file. Replace the script including the “clientlib-site.js” with the following:
<script type="module" src="./main.ts"></script>
Save a favicon.ico image to “src/public/resources/images/”.
Add the following element within the head section of the index.html file:
<link rel="icon" href="./resources/images/favicon.ico" type="image/x-icon" />
While we are updating favicons, edit the ui.apps/src/main/content/jcr_root/apps/jete/components/page/customheaderlibs.html file.
Add the following to the end of the file:
<link rel="icon" href="/etc.clientlibs/jete/clientlibs/clientlib-site/resources/images/favicon.ico" type="image/x-icon" />
Run the Vite dev server once more …
npm run dev
You should see the page render without any styling.
It is not very attractive. Let us add some styling. Run the following command to install “sass”.
npm i -D sass
Create a “main.scss” file under the “src” directory.
touch src/main.scss
Edit the main.ts file and add the following line to the top of the file:
import '@/main.scss'
Copy the variables stylesheet from the frontend backup to the “src” directory:
cp ../JeteFrontend/src/main/webpack/site/_variables.scss ./ui.frontend/src/
Edit the _variables.scss file and add the following:
$color-foreground-rgb: rgb(32 32 32);
Copy the base stylesheet from the frontend backup to the “src” directory:
cp ../JeteFrontend/src/main/webpack/site/_base.scss ./ui.frontend/src/
Include references to these files within main.scss:
@import 'variables';
@import 'base';
Run the Vite dev server once more …
npm run dev
You should see the base styles applied.
Things are getting better, but there is still more work to do!
Copy the component and site stylesheets from the frontend backup to the “src” directory:
cp -R ../JeteFrontend/src/main/webpack/components ./ui.frontend/src/ cp -R ../JeteFrontend/src/main/webpack/site/styles ./ui.frontend/src/
Add the following to the main.scss file:
@import './components/**/*.scss';
@import './styles/**/*.scss';
Run the Vite dev server …
npm run dev
No luck this time! You will probably see an error complaining about the glob-style imports.
Vite doesn’t understand “splat imports”, “wildcard imports”, or “glob imports”. We can fix this by installing a package and updating the Vite configuration file.
Install the following package:
npm i -D vite-plugin-sass-glob-import
Update the vite.config.ts file. Add the following to the import statements:
import sassGlobImports from 'vite-plugin-sass-glob-import';
Add “sassGlobImports” to the plugins section:
plugins: [sassGlobImports()],
Now, let’s run the Vite dev server again.
npm run dev
You should see the fully styled page.
Much better. The front end is looking great! Time to work on the JavaScript imports!
TypeScript has been working well for us so far, so there’s no need to switch back to JavaScript.
Remove the “helloworld” JavaScript file:
rm -rf src/components/_helloworld.js
Grab the TypeScript from this URL and save it as src/components/_helloworld.ts: https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/src/components/_helloworld.ts
To see the results of this script within our browser, we have to include this file within main.ts. Importing splats won’t work on a TypeScript file. So we can’t write: “import ‘@/components/**/*.ts’”. Instead, we will write:
import.meta.glob('@/components/**/*.ts', { eager: true });
Now, let’s run the Vite dev server.
npm run dev
You should see the Hello World component’s console output in Chrome DevTools.
Very good! The JavaScript is working as well!
The following section is optional, but it is good practice to add some linting rules.
Install the following:
npm i -D @typescript-eslint/eslint-plugin @typescript-eslint/parser autoprefixer eslint eslint-config-airbnb-base eslint-config-airbnb-typescript eslint-config-prettier eslint-import-resolver-typescript eslint-plugin-import eslint-plugin-prettier eslint-plugin-sort-keys eslint-plugin-typescript-sort-keys postcss postcss-dir-pseudo-class postcss-html postcss-logical prettier stylelint stylelint-config-recommended stylelint-config-standard stylelint-config-standard-scss stylelint-order stylelint-use-logical tsx
Save the files at the following URLs into ui.frontend:
https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.eslintrc.json
https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.postcssrc.json
https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.prettierrc.json
https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.stylelintrc.json
Add the following to the “scripts” section of package.json:
"lint": "stylelint src/**/*.scss --fix && eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0"
Let’s try out our new script by running:
npm run lint
You should see a fair number of Sass linting errors. You can fix the errors manually or overwrite your local versions with the ones from the git repo: https://github.com/PRFTAdobe/jete/tree/main/ui.frontend/src
We are ready to move on from linting. Let’s work on the AEM build.
Install the following:
npm i -D aem-clientlib-generator aemsync
Save the files at the following URLs into ui.frontend:
https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/aem-sync-push.ts
https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/clientlib.config.ts
https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/aem-clientlib-generator.d.ts
https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/aemsync.d.ts
The files with the “.d.ts” extension provide TypeScript type information about the referenced packages; a minimal example follows.
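For orientation, such a declaration can be as small as this hypothetical minimal typing (the linked files are the authoritative versions):

// aem-clientlib-generator.d.ts: just enough typing for our build scripts to compile.
declare module 'aem-clientlib-generator' {
  type ClientlibConfig = Record<string, unknown>;
  function clientlib(
    config: ClientlibConfig | ClientlibConfig[],
    options: Record<string, unknown>,
    done: () => void,
  ): void;
  export = clientlib;
}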
The “clientlib.config.ts” script creates a client library from the JS and CSS artifacts produced during the build and copies them to the “clientlib” directory within “ui.apps”; a condensed sketch follows.
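As a condensed sketch of what a script like this passes to aem-clientlib-generator (the category name and paths below are assumptions; the linked file is the authoritative version):

import clientlib from 'aem-clientlib-generator';

clientlib(
  {
    allowProxy: true,
    assets: {
      css: ['dist/clientlib-site/site.bundle.css'],
      js: ['dist/clientlib-site/site.bundle.js'],
      resources: ['dist/clientlib-site/resources/**/*.*'],
    },
    categories: ['jete.site'],
    name: 'clientlib-site',
  },
  { clientLibRoot: '../ui.apps/src/main/content/jcr_root/apps/jete/clientlibs' },
  () => {
    // Invoked once the client library has been written to ui.apps.
    console.log('clientlib-site generated');
  },
);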
The “aem-sync-push.ts” script takes the clientlib created above and pushes it to a running AEM instance.
It is time to update the “scripts” section of package.json.
Remove the existing “build” and “preview” commands. Add the following commands:
"build": "tsc && npm run lint && vite build && tsx ./clientlib.config.ts && tsx ./aem-sync-push.ts", "prod": "tsc && npm run lint && vite build && tsx ./clientlib.config.ts",
Let’s try out the build command first:
npm run build
If the command completes successfully, you will see messages indicating that the “generator has finished” and the “aem sync has finished”. You will also notice a new “dist” directory under “ui.frontend”.
Our last step is to copy over the “assembly.xml” file from the backup we made earlier.
cp ../JeteFrontend/assembly.xml ui.frontend/
With that file in place, we are ready to rerun the AEM build:
mvn clean install -PautoInstallSinglePackage
The build should complete without errors. You have successfully migrated from Webpack to Vite!
Make sure to follow our Adobe blog for more Adobe solution tips and tricks!