PWC-IDMC Migration Gaps – https://blogs.perficient.com/2025/06/05/pwc-idmc-migration-gaps/ (Thu, 05 Jun 2025)

With technological advancements happening almost every minute, upgrading a business is essential to stay competitive, deliver a customer experience beyond expectations, and derive more value from every process while deploying fewer resources.

Platform upgrades, software upgrades, security upgrades, architectural enhancements, and so on are required to ensure stability, agility, and efficiency.

Customers prefer to move from legacy systems to the cloud because of what it offers. Across cost, monitoring, maintenance, operations, ease of use, and landscape, the cloud has transformed data and analytics (D&A) businesses significantly over the last decade.

Moving from Informatica PowerCenter to IDMC is widely perceived as the need of the hour because of the substantial advantages it offers. Developers must understand both flavors to perform this code transition effectively.

This post explains the gaps between PowerCenter (PWC) and IDMC Cloud Data Integration (CDI) from three perspectives.

  • Development
  • Data
  • Operations

Development

  • Differences in native datatypes can be observed in IDMC when importing Source, Target, or Lookup objects. Workaround:
    • If any inconsistency is observed in IDMC mappings with native datatype/precision/scale, edit the metadata to keep the DDL and the CDI mappings in sync.
  • In CDI, taskflow workflow parameter values experience read and consumption issues. Workaround:
    • Create a Dummy Mapping task in which the list of Parameters/Variables is defined for further consumption by tasks within the taskflows (e.g., Command task, Email task).
    • Make sure to limit the number of Dummy Mapping tasks during this process.
    • Best practice is to create one Dummy Mapping task per folder to capture all the Parameters/Variables required for that entire folder.
    • For Variables whose value needs to persist for the next taskflow run, make sure the Variable value is mapped to the Dummy Mapping task via an Assignment task. This Dummy Mapping task would be used at the start and end of the taskflow to ensure that the overall taskflow processing is enabled for incremental data processing.
  • All mapping tasks/sessions in IDMC are reusable and can be used in any taskflow. If some Audit sessions are expected to run concurrently within other taskflows, ensure that the property “Allow the mapping task to be executed simultaneously” is enabled.
  • Sequence generator: data overlap issues occur in CDI. Workaround:
    • If a sequence generator is likely to be used in multiple sessions/workflows, it’s better to make it a reusable/SHARED Sequence.
  • VSAM Sources/Normalizer is not available in CDI. Workaround:
    • Use the Sequential File connector type for mappings using Mainframe VSAM Sources/Normalizer.
  • Sessions configured with STOP ON ERRORS > 0. Workaround:
    • Set the LINK condition for the next task to “PreviousTask.TaskStatus – STARTS WITH ANY OF 1, 2” within CDI taskflows.
  • Partitions are not supported with Sources in Query mode. Workaround:
    • Create multiple sessions and run them in parallel.
  • Currently, parameterization of Schema/Table is not possible for Mainframe DB2. Workaround:
    • Use an ODBC-type connection to access DB2 with Schema/Table parameterization.
  • A mapping with a LOOKUP transformation used across two sessions cannot be overridden at the session or mapping task level to enable or disable caching. Workaround:
    • Use two different mappings with LOOKUP transformations if one mapping/session has to have caching enabled and the other has to have caching disabled.

Data

  • IDMC output data contains additional double quotes. Workaround:
    • Session level – use this property – __PMOV_FFW_ESCAPE_QUOTE=No
    • Administrator settings level – use this property – UseCustomSessionConfig = Yes
  • IDMC output data contains additional scale values with the Decimal datatype (e.g., 11.00). Workaround:
    • Use an IF-THEN-ELSE expression to remove the unwanted trailing zeros (output: 11.00 -> 11).

Operations

  • CDI doesn’t store logs beyond 1000 mapping task runs in 3 days on Cloud (it does store logs in the Secure Agent). Workaround:
    • To retain Cloud job run stats, create Audit tables and use the Data Marketplace utility to get the Audit info (volumes processed, start/end time, etc.) loaded to the Audit tables by scheduling this job at regular intervals (hourly or daily).
  • Generic restartability issues occur during IDMC operations. Workaround:
    • Introduce a Dummy Assignment task whenever the code contains a custom error-handling flow.
  • SKIP FAILED TASK and RESUME FROM NEXT TASK operations have issues in IDMC. Workaround:
    • Ensure every LINK condition has an additional condition appended, “Mapping task. Fault.Detail.ErrorOutputDetail.TaskStatus=1”
  • In PWC, any task can be run from anywhere within a workflow; this is not possible in IDMC. Workaround:
    • A feature request is being worked on by GCS to update the software.
  • Mapping task log file names cannot be suffixed with the concurrent-run workflow instance name at the mapping task config level because of parameter concatenation issues. Workaround:
    • Use a separate parameter within the parameter file to have the mapping task log file names suffixed with the concurrent-run workflow instance name.
  • IDMC doesn’t honour the “Save Session log for these runs” property set at the mapping task level when the session log file name is parameterized. Workaround:
    • Copy the mapping task log files to the Secure Agent server after the job run.
  • If the Session Log File Directory contains a / (slash) when used along with parameters (e.g., $PMSessionLogDir/ABC) under Session Log Directory Path, every run log is appended to the same log file. Workaround:
    • Use a separate parameter within the parameter file for $PMSessionLogDir.
  • In IDMC, the @numAppliedRows and @numAffectedRows features are not available to get the source and target success rows to load them into the audit table. Workaround:
    • Use @numAppliedRows instead of @numAffectedRows.
  • Concurrent runs cannot be performed on taskflows from the CDI Data Integration UI. Workaround:
    • Use the Paramset utility to upload concurrent paramsets and the runAJobCli utility to run taskflows with multiple concurrent run instances from the command prompt.

Conclusion

While performing PWC-to-IDMC conversions, the Development, Data, and Operations workarounds above will help avoid rework and save effort, thereby improving customer satisfaction in delivery.

IDMC – CDI Best Practices – https://blogs.perficient.com/2025/06/05/idmc-cdi-best-practices/ (Thu, 05 Jun 2025)

Every end product must meet and exceed customer expectations. For a successful delivery, it is not just about doing what matters, but also about how it is done by following and implementing the desired standards.

This post outlines the best practices to consider with IDMC CDI ETL during the following phases.

  • Development
  • Operations 

Development Best Practices

  • Check native datatypes between database table DDLs and IDMC CDI mapping Source, Target, and Lookup objects.
    • If any inconsistency is observed in IDMC mappings with native datatype/precision/scale, edit the metadata to keep the DDL and the CDI mappings in sync.
  • In CDI, for workflow parameter values to be consumed by taskflows, a Dummy Mapping task has to be created in which the list of Parameters/Variables is defined for further consumption by tasks within the taskflows (e.g., Command task, Email task).
    • Make sure to limit the number of Dummy Mapping tasks during this process.
    • Best practice is to create one Dummy Mapping task per folder to capture all the Parameters/Variables required for that entire folder.
    • For Variables whose value needs to persist for the next taskflow run, make sure the Variable value is mapped to the Dummy Mapping task via an Assignment task. This Dummy Mapping task would be used at the start and end of the taskflow to ensure that the overall taskflow processing is enabled for incremental data processing.
  • If some Audit sessions are expected to run concurrently within other taskflows, ensure that the property “Allow the mapping task to be executed simultaneously” is enabled.
  • Avoid using the SUSPEND TASKFLOW option, as it requires manual intervention during job restarts. Additionally, this property may cause issues during job restarts.
  • Ensure correct parameter representation using Single Dollar/Double Dollar. Incorrect representation will cause the parameters not to be read by CDI during Job runs.
  • While working with Flatfiles in CDI mappings, always enable the property “Retain existing fields at runtime”.
  • If a sequence generator is likely to be used in multiple sessions/workflows, it’s better to make it a reusable/SHARED Sequence.
  • Use the Sequential File connector type for mappings using Mainframe VSAM Sources/Normalizer.
  • If a session is configured with STOP ON ERRORS > 0, set the LINK condition for the next task to “PreviousTask.TaskStatus – STARTS WITH ANY OF 1, 2” within CDI taskflows.
  • For mapping task failure flows, set the LINK conditions for the next task to be “PreviousTask.Fault.Detail.ErrorOutputDetail.TaskStatus – STARTS WITH ANY OF 1, 2” within CDI taskflows.
  • Partitions are not supported with Sources under Query mode. Ensure multiple sessions are created and run in parallel as a workaround.
  • Currently, parameterization of Schema/Table is not possible for Mainframe DB2. Use an ODBC-type connection to access DB2 with Schema/Table parameterization.

Operations Best Practices

  • Use Verbose data Session log config only if absolutely required, and then only in the lower environment.
  • Ensure the Sessions pick the parameter values properly during job execution
    • This can be verified by changing the parameter names and values to incorrect values and determining if the job fails during execution. If the job fails, it means that the parameters are READ correctly by the CDI sessions.
  • Ensure the Taskflow name and API name always match. If different, the job will face issues during execution via the runAJobCli utility from the command prompt.
  • CDI doesn’t store logs beyond 1000 mapping tasks run in 3 days on Cloud (it does store logs in Secure Agent). To retain Cloud job run stats, create Audit tables and use the Data Marketplace utility to get the Audit info (Volume processes, Start/End time, etc) loaded to the Audit tables by scheduling this job at regular intervals (Hourly or Daily).
  • In order to ensure no issues with Generic Restartability during Operations, ensure a Dummy assignment task is introduced whenever the code contains Custom error handling flow.
  • In order to facilitate SKIP FAILED TASK and RESUME FROM NEXT TASK operations, ensure every LINK condition has an additional condition appended, “Mapping task. Fault.Detail.ErrorOutputDetail.TaskStatus=1”
  • If mapping task log file names are to be suffixed with the concurrent-run workflow instance name, ensure it is done within the parameter file. The mapping task config level cannot do this because of parameter concatenation issues.
  • Copy mapping task log files to the Secure Agent server after the job run, since IDMC doesn’t honour the “Save Session log for these runs” property set at the mapping task level when the session log file name is parameterized.
  • Ensure Session Log File Directory doesn’t contain / (Slash) when used along with parameters (ex., $PMSessionLogDir/ABC) under Session Log Directory Path. When used, this would append every run log to the same log file.
  • Concurrent runs cannot be performed on taskflows from the  CDI Data Integration UI. Use the Paramset utility to upload concurrent paramsets and use the runAJobCli utility to run taskflows with multiple concurrent run instances from the command prompt.

Conclusion

In addition to coding best practices, following these Development and Operations best practices will help avoid rework and save effort, thereby achieving customer satisfaction with the delivery.

How Agile Helps You Improve Your Agility – https://blogs.perficient.com/2025/05/12/how-agile-helps-you-improve-your-agility/ (Mon, 12 May 2025)

The objective of this topic is to explore how the Agile methodology enhances an individual’s agility. This blog highlights how Agile fosters adaptability, responsiveness, and continuous improvement through understanding and implementing Agile principles, practices, and frameworks.

The goal is to demonstrate how adopting Agile practices enables teams and individuals to:

  • Effectively manage change
  • Increase collaboration
  • Streamline decision-making
  • Improve overall performance and flexibility in dynamic environments

This study showcases the transformative power of Agile in driving greater efficiency and faster response times in both project management and personal development.

Let’s Get Started

In both professional and personal development, asking structured “WH” questions helps in gaining clarity and understanding. Let’s apply that approach to explore the connection between Agile and agility.

What is Agile?

Agile is a mindset and a way of thinking, based on its core principles and manifesto. It emphasizes:

  • Flexibility
  • Collaboration
  • Customer feedback
  • Valuing these over rigid planning and control

Initially popularized in project management and software development, Agile supports iterative progress and continuous value delivery.

What is Agility?

Agility in individuals refers to the ability to adapt and respond to change effectively and efficiently. It means adjusting quickly to:

  • Market conditions
  • Customer needs
  • Emerging technologies

Agility involves:

  • Flexible processes
  • Quick decision-making
  • Embracing change and innovation

Key Principles of Agile

  • Iterative Process – Work delivered in small, manageable cycles
  • Collaboration – Strong communication across teams
  • Flexibility & Adaptability – Open to change
  • Customer Feedback – Frequent input from stakeholders
  • Continuous Improvement – Learn and evolve continuously

Why Agile?

Every project brings daily challenges: scope changes, last-minute deliveries, unexpected blockers. Agile helps in mitigating these through:

  • Faster Delivery – Short iterations mean quicker output and release cycles
  • Improved Quality – Continuous testing, feedback, and refinements
  • Customer-Centric Approach – Ongoing engagement ensures relevance
  • Greater Flexibility – Agile teams quickly adapt to shifting priorities

When & Where to Apply Agile?

The answer is simple — Now and Everywhere.
Agile isn’t limited to a specific moment or industry. Whenever you experience challenges in:

  • Project delivery
  • Communication gaps
  • Changing requirements

You can incorporate the Agile principles. Agile is valuable in both reactive and proactive problem-solving.

How to Implement Agile?

Applying Agile principles can be a game-changer for both individuals and teams. Here are practical steps that have shown proven results:

  • Divide and Do – Break down large features into smaller, manageable tasks. Each task should result in a complete, functional piece of work.
  • Deliver Incrementally – Ensure that you deliver a working product or feature by the end of each iteration.
  • Foster Communication – Encourage frequent collaboration within the team. Regular interactions build trust and increase transparency.
  • Embrace Change – Be open to changing requirements. Agile values responsiveness to feedback, enabling better decision-making.
  • Engage with Customers – Establish feedback loops with stakeholders to stay aligned with customer needs.

Agile Beyond Software

While Agile originated in software development, its principles can be applied across a range of industries:

  • Marketing – Running campaigns with short feedback cycles
  • Human Resources – Managing performance and recruitment adaptively
  • Operations – Streamlining processes and boosting team responsiveness

Agile is more than a methodology; it’s a culture of continuous improvement that extends across all areas of work and life.

Conclusion

Adopting Agile is not just about following a process but embracing a mindset. When effectively implemented, Agile can significantly elevate an individual’s and team’s ability to:

  • Respond to change
  • Improve performance
  • Enhance collaboration

Whether in software, marketing, HR, or personal development, Agile has the power to transform how we work and grow.

Optimizing Core Web Vitals for Modern React Applications – https://blogs.perficient.com/2024/12/31/optimizing-core-web-vitals-for-modern-react-applications/ (Tue, 31 Dec 2024)

Introduction

In today’s dynamic web development landscape, ensuring an exceptional user experience is more critical than ever. Core Web Vitals, introduced by Google, are key performance metrics that help evaluate the overall quality of a website’s interaction. React applications often involve complex UI and dynamic content, and optimizing Core Web Vitals in these scenarios delivers not only better user experiences but also improved performance and SEO rankings. This guide outlines actionable strategies to fine-tune Core Web Vitals in modern React applications.

 

What Are Core Web Vitals?

Core Web Vitals are performance indicators focusing on three essential user experience elements:

  • Largest Contentful Paint (LCP): Gauges loading performance, with an ideal score under 2.5 seconds.
  • Interaction to Next Paint (INP): Measures interactivity, targeting scores below 200 milliseconds for optimal responsiveness.

  • Cumulative Layout Shift (CLS): Evaluates visual stability, aiming for a score under 0.1.
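To see how your own application scores on these metrics for real users, they can be measured in the field with the open-source web-vitals JavaScript library. The snippet below is a minimal sketch, assuming a recent version of the web-vitals package is installed and that sendToAnalytics is a placeholder for whatever reporting mechanism you use:

import { onCLS, onINP, onLCP } from 'web-vitals';

// Placeholder reporter: swap in your own analytics endpoint or logging.
function sendToAnalytics(metric) {
  // metric.name is 'CLS', 'INP', or 'LCP'; metric.value is the measured score.
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

// Register the callbacks once, for example in the app's entry point (index.js).
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);

Each callback fires when its metric is ready to report, so the same handler can feed a dashboard that tracks all three thresholds described above.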

 

Strategies to Optimize Core Web Vitals

 

  1. Enhance Largest Contentful Paint (LCP)

      Recommended Techniques:

  • Lazy Loading: Defer loading images and videos not immediately visible on the screen.
import React, { Suspense } from 'react';

const LazyImage = React.lazy(() => import('./ImageComponent'));

const App = () => (
  <Suspense fallback={<div>Loading...</div>}>
    <LazyImage />
  </Suspense>
);

export default App;
  • Critical CSS: Use tools like Critical to inline essential CSS for above-the-fold content.
  • Optimized Media: Serve properly compressed images using formats like WebP to reduce load times.
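For the optimized-media point, one common pattern is to let the browser pick the lightest format it supports via a picture element. This is a minimal sketch with placeholder file names and dimensions:

const HeroImage = () => (
  <picture>
    {/* Served when the browser supports WebP */}
    <source srcSet="/images/hero.webp" type="image/webp" />
    {/* Fallback for browsers without WebP support */}
    <img src="/images/hero.jpg" alt="Hero banner" width="1200" height="600" />
  </picture>
);

export default HeroImage;

Declaring the width and height here also reserves space for the image, which helps with the CLS guidance later in this post.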

 

For a deeper understanding and best practices for implementing lazy loading, refer to the official Lazy Loading Documentation.

  2. Improve Interaction to Next Paint (INP)

     Recommended Techniques:

  • Code Splitting: Break your code into smaller chunks using tools like Webpack or React’s lazy and Suspense.
const LazyComponent = React.lazy(() => import('./HeavyComponent'));

const App = () => (
  <Suspense fallback={<div>Loading Component...</div>}>
    <LazyComponent />
  </Suspense>
);

 

  • Avoid Long Tasks: Keep the main thread responsive by breaking down lengthy JavaScript operations. Use requestIdleCallback for low-priority tasks.

 

requestIdleCallback(() => {
  performNonUrgentTask();
});

 

  3. Minimize Cumulative Layout Shift (CLS)

     Recommended Techniques:

  • Define Dimensions: Specify width and height for all media elements to prevent layout shifts.
<img src="image.jpg" width="600" height="400" alt="Example" />

 

  • Font Loading Optimization: Use font-display: swap to ensure text is readable while fonts are loading.
@font-face {
  font-family: 'CustomFont';
  src: url('custom-font.woff2') format('woff2');
  font-display: swap;
}

 

  • Preserve Space: Reserve space for dynamic content to avoid pushing elements around unexpectedly.
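As a small sketch of space reservation (the dimensions here are made up), give the container of late-arriving content, such as an ad slot or a lazy-loaded widget, an explicit minimum size so surrounding elements do not move when it finally renders:

const AdSlot = ({ children }) => (
  // Reserve a 300x250 box up front so the layout doesn't shift when the content loads.
  <div style={{ minWidth: 300, minHeight: 250 }}>
    {children}
  </div>
);

export default AdSlot;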

 

Tools for Monitoring Core Web Vitals

 

  • Performance tab in dev tools.

Use the Performance tab in Chrome DevTools to analyze and optimize Core Web Vitals, helping you track key metrics like LCP, INP, and CLS, and improve your site’s loading speed and interactivity.

Chrome devtools on performance tab's local metrics

  • Lighthouse: Perform in-depth audits directly in Chrome DevTools.

Lighthouse, a powerful tool built into Chrome DevTools, provides comprehensive audits of your website’s performance, including detailed insights into Core Web Vitals like LCP, FID, and CLS, along with actionable recommendations for optimization.

Refer to the official Lighthouse documentation for deeper insights into the tool.

  • Web Vitals Extension: Monitor Core Web Vitals in real time with this browser extension.

The Web Vitals Extension is ideal for ongoing, real-time monitoring of Core Web Vitals as you browse, giving quick feedback on page performance and helping you address issues instantly without needing to run full audits.

  • PageSpeed Insights: Access tailored recommendations for enhancing performance metrics.

 

For more information on each of these metrics and their importance, check out the official core web vitals documentation.

 

Conclusion

Optimizing Core Web Vitals is a critical step in creating a seamless and engaging user experience. Techniques like lazy loading, breaking down JavaScript tasks, and ensuring visual stability can dramatically improve your React application’s performance. Start implementing these strategies today to boost user satisfaction and climb search engine rankings.

Happy coding!

 

How Nested Context-Aware Configuration Makes Complex Configuration Easy in AEM – https://blogs.perficient.com/2024/12/20/how-nested-context-aware-configuration-makes-complex-configuration-easy-in-aem/ (Fri, 20 Dec 2024)

Managing configurations in Adobe Experience Manager (AEM) can be challenging, especially when sharing configs across different websites, regions, or components.  The Context-Aware Configuration (CAC) framework in AEM simplifies configuration management by allowing developers to define and resolve configurations based on the context, such as the content hierarchy. However, as projects scale, configuration needs can become more intricate, involving nested configurations and varying scenarios. 

In this blog, we will explore Nested Context-Aware Configurations and how they provide a scalable solution to handle multi-layered and complex configurations in AEM. We’ll cover use cases, the technical implementation, and best practices for making the most of CAC. 

Understanding Nested Context-Aware Configuration

AEM’s Context-Aware Configuration allows you to create and resolve configurations dynamically, based on the content structure, so that the same configuration can apply differently depending on where in the content tree it is resolved. However, some projects require deeper levels of configurations — not just based on content structure but also different categories within a configuration itself. This is where nested configurations come into play. 

Nested Context-Aware Configuration involves having one or more configurations embedded within another configuration. This setup is especially useful when dealing with hierarchical or multi-dimensional configurations, such as settings that depend on both global and local contexts or component-specific configurations within a broader page configuration. 

You can learn more about basic configuration concepts on Adobe Experience League.

Categorizing Configurations with Nested Contexts

Nested configurations are particularly useful for categorizing configurations based on broad categories like branding, analytics, or permissions, and then nesting more specific configurations within those categories. 

For instance, at the parent level, you could define global categories for analytics tracking, branding, or user permissions. Under each category, you can then have nested configurations for region-specific overrides, such as: 

  • Global Analytics Config: Shared tracking ID for the entire site. 
  • Regional Analytics Config: Override global analytics tracking for specific regions. 
  • Component Analytics Config: Different tracking configurations for components that report analytics separately. 

This structure: 

  • Simplifies management: Reduces redundancy by categorizing configurations and using fallback mechanisms. 
  • Improves organization: Each configuration is neatly categorized and can be inherited from parent configurations when needed. 
  • Enhances scalability: Allows for easy extension and addition of new nested configurations without affecting the entire configuration structure. 

Benefits of Nested Context-Aware Configuration

  1. Scalability: Nested configurations allow you to scale your configuration structure as your project grows, without creating redundant or overlapping settings. 
  2. Granularity: Provides fine-grained control over configurations, enabling you to apply specific settings at various levels (global, regional, component). 
  3. Fallback Mechanism: If a configuration isn’t found at a specific level, AEM automatically falls back to a parent configuration, ensuring that the system has a reliable set of defaults to work with. 
  4. Maintainability: By organizing configurations hierarchically, you simplify maintenance. Changes at the global level automatically apply to lower levels unless explicitly overridden.

Advanced Use Cases

  1. Feature Flag Management: Nested CAC allows you to manage feature flags across different contexts. For example, global feature flags can be overridden by region or component-specific feature flags. 
  2. Personalization: Use nested configurations to manage personalized experiences based on user segments, with global rules falling back to more specific personalization at the regional or page level. 
  3. Localization: Nested CAC can handle localization configurations, enabling you to define language-specific content settings under broader regional or global configurations. 

Implementation

To implement the nested configurations, we need to define configurations for individual modules first. In the example below, we are going to create SiteConfig which will have some configs along with two Nested configs and then Nested config will have its own attributes. 

Let’s define the individual configs first. They will look like this:

@Configuration(label = "Global Site Config", description = "Global Site Context Config.") 
public @interface SiteConfigurations { 
 
    @Property(label = "Parent Config - Property 1", 
            description = "Description for Parent Config Property 1", order = 1) 
    String parentConfigOne(); 
 
    @Property(label = "Parent Config - Property 2", 
            description = "Description for Parent Config Property 2", order = 2) 
    String parentConfigTwo(); 
 
    @Property(label = "Nested Config - One", 
            description = "Description for Nested Config", order = 3) 
    NestedConfigOne NestedConfigOne(); 
 
    @Property(label = "Nested Config - Two", 
            description = "Description for Nested Config", order = 4) 
    NestedConfigTwo[] NestedConfigTwo(); 
 
}

Following this, NestedConfigOne and NestedConfigTwo will look like this:

public @interface NestedConfigOne { 
 
    @Property(label = "Nested Config - Property 1", 
            description = "Description for Nested Config Property 1", order = 1) 
    String nestedConfigOne(); 
 
 
    @Property(label = "Nested Config - Property 2", 
            description = "Description for Nested Config Property 2", order = 2) 
    String nestedConfigTwo(); 
 
 
}

And…

public @interface NestedConfigTwo { 
 
    @Property(label = "Nested Config - Boolean Property 1", 
            description = "Description for Nested Config Boolean Property 1", order = 1) 
    String nestedBooleanProperty(); 
 
    @Property(label = "Nested Config - Multi Property 1", 
            description = "Description for Nested Config Multi Property 1", order = 1) 
    String[] nestedMultiProperty(); 
 
}

Note that we didn’t annotate the nested configs with @Configuration, as they are not the main config.

Let’s create a service to read this config; it will look like this:

public interface NestedConfigService { 
    SiteConfigurationModel getAutoRentalConfig(Resource resource); 
}

The implementation of the service will look like this:

@Component(service = NestedConfigService.class, 
        immediate = true) 
@ServiceDescription("Implementation For NestedConfigService") 
public class NestedConfigServiceImpl implements NestedConfigService { 
 
    @Override 
    public SiteConfigurationModel getAutoRentalConfig(Resource resource) { 
        final SiteConfigurations configs = getConfigs(resource); 
        return new SiteConfigurationModel(configs); 
    } 
 
    private SiteConfigurations getConfigs(Resource resource) { 
        return resource.adaptTo(ConfigurationBuilder.class) 
                .name(SiteConfigurations.class.getName()) 
                .as(SiteConfigurations.class); 
    } 
 
}

SiteConfigurationModel will hold the final config including all the configs. We can modify getters based on need. So currently, I am just adding its dummy implementation. 

public class SiteConfigurationModel { 
    public SiteConfigurationModel(SiteConfigurations configs) { 
 
        String parentConfigOne = configs.parentConfigOne(); 
        NestedConfigOne nestedConfigOne = configs.NestedConfigOne(); 
        NestedConfigTwo[] nestedConfigTwos = configs.NestedConfigTwo(); 
        //Construct SiteConfigurationModel As per Need 
 
    } 
}

Once you deploy the code, the site config menu in the context editor should look like this:

AEM Global Site Config

We can see that it gives us the ability to configure Property 1 and Property 2 directly, but for the nested configs it provides an additional Edit button, which takes us to the nested config screens shown below:

AEM Global Site Config Nested Config One

AEM Global Site Config Nested Config Two

Since nested config two is a multifield, it gives the ability to add additional entries.

A Powerful Solution to Simplify and Streamline

Nested Context-Aware Configuration in AEM offers a powerful solution for managing complex configurations across global, regional, and component levels. By leveraging nested contexts, you can easily categorize configurations, enforce fallback mechanisms, and scale your configuration management as your project evolves. 

Whether working on a multi-region site, handling diverse user segments, or managing complex components, nested configurations can help you simplify and streamline your configuration structure while maintaining flexibility and scalability. 

Learn More

Make sure to follow our Adobe blog for more Adobe platform insights! 

Transforming Friction into Innovation: The QA and Software Development Relationship – https://blogs.perficient.com/2024/11/06/transforming-friction-into-innovation-the-qa-and-software-development-relationship/ (Wed, 06 Nov 2024)

The relationship between Quality Assurance (QA) and Software Development teams is often marked by tension and conflicting priorities. But what if this friction could be the spark that ignites innovation and leads to unbreakable products? 

The Power of Productive Tension 

It’s no secret that QA and Development teams sometimes clash. QA and testing professionals are tasked with finding flaws and ensuring stability, while developers are focused on building features, prioritizing speed and innovation. This natural tension, however, can be a powerful force when channeled correctly.

 One of the key challenges in harnessing this synergy is breaking down the traditional silos between QA and Development and aligning teams early in the development process. 

  1. Shared Goals: Align both teams around common objectives that prioritize both quality and innovation.
  2. Cross-Functional Teams: Encourage collaboration by integrating QA professionals into development sprints from the start.
  3. Continuous Feedback: Implement systems that allow for rapid, ongoing communication between teams.

 Leveraging Automation and AI 

Automation and artificial intelligence are playing an increasingly crucial role in bridging the gap between QA and Software Development Teams: 

  1. Automated Testing: Frees up QA teams to focus on more complex, exploratory testing scenarios.
  2. AI-Powered Analysis: Helps identify patterns and potential issues that human testers might miss.
  3. Predictive Quality Assurance: Uses machine learning to anticipate potential bugs before they even occur.

 Best Practices  

Achieving true synergy between QA and Development isn’t always easy, but it’s well worth the effort. Here are some best practices to keep in mind: 

  1. Encourage Open Communication: Create an environment where team members feel comfortable sharing ideas and concerns early and often.
  2. Celebrate Collaborative Wins: Recognize and reward instances where QA-Dev cooperation leads to significant improvements.
  3. Continuous Learning: Invest in training programs that help both teams understand each other’s perspectives and challenges.
  4. Embrace Failure as a Learning Opportunity: Use setbacks as a chance to improve processes and strengthen the relationship between teams.

  

As business leaders are tasked with doing more with less, the relationship between QA and Development will only become more crucial. By embracing the productive tension between these teams and implementing strategies to foster collaboration, organizations can unlock new levels of innovation and product quality. 

Are you ready to turn your development and testing friction into a strategic advantage?

Running AEM Author, Publisher, and Dispatcher Within Docker – https://blogs.perficient.com/2024/09/18/running-aem-author-publisher-and-dispatcher-within-docker/ (Wed, 18 Sep 2024)

About eight years ago, I was introduced to Docker during a meetup at a restaurant with a coworker. He was so engrossed in discussing the Docker engine and containers that he barely touched the hors d’oeuvres. I was skeptical. 

I was familiar with Virtual Machines (VMs) and appreciated the convenience of setting up application servers without worrying about hardware. I wanted to know what advantages Docker could offer that VMs couldn’t. He explained that instead of virtualizing the entire computer, Docker only virtualizes the OS, making containers much slimmer than their VM counterparts. Each container shares the host OS kernel and often binaries and libraries. 

Curious, I wondered how AEM would perform inside Docker—a Java application running within the Java Virtual Machine, inside a Docker container, all on top of a desktop PC. I expected the performance to be terrible. Surprisingly, the performance was comparable to running AEM directly on my desktop PC. In hindsight, this should not have been surprising. The Docker container shared my desktop PC’s kernel, RAM, CPUs, storage, and network allowing the container to behave like a native application. 

I’ve been using Docker for my local AEM development ever since. I love how I can quickly spin up a new author, publish, or dispatch environment whenever I need it and just as easily tear it down. Switching to a new laptop or PC is a breeze — I don’t have to worry about installing the correct version of Java or other dependencies to get AEM up and running. 

In this blog, we’ll discuss running AEM author, publisher, and dispatcher within Docker and the setup process.

Setup Requirements

The AEM SDK, which includes the Quickstart JAR and Dispatcher tools, is necessary for this setup.  Additionally, Apache Maven must be installed. For the Graphical User Interface, we will use Rancher Desktop by SUSE, which operates on top of Docker’s command-line tools.  While the Docker engine itself is open source, Docker Desktop, the GUI distributed by Docker, is not. 

Step One: Installing Rancher Desktop

Download and install Rancher Desktop by SUSE. Installing Rancher Desktop will provide the Docker CLI (command line interface). If you wish to install the Docker CLI without Rancher Desktop, run the following command:

Windows

Install WinGet via the Microsoft store.

winget install --id=Docker.DockerCLI -e

Mac

Install Homebrew: 

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

brew install --cask docker

Step Two: Creating the “AEM in Docker” folder

Create a folder named “aem-in-docker”.  Unzip the contents of the AEM SDK into this folder.  Copy your AEM “license.properties” file to this directory. 

Step Three: Creating the subfolders to contain the Docker Image instructions

Make three subfolders within your “aem-in-docker” folder named “base”, “author”, and “publish”. 

Your “aem-in-docker” folder should look something like this: 

AEM-In-Docker Folder

Step Four: Creating the Base Docker Image instruction (Dockerfile)

Create a file named “Dockerfile” within the “base” subdirectory.

Ensure the file does not have an extension.  Set the contents of the file to the following:

FROM ubuntu
# Setting the working directory
WORKDIR /opt/aem
# Copy the license file
COPY license.properties .
# Copy Quickstart jar file
COPY aem-sdk-quickstart-2024.8.17465.20240813T175259Z-240800.jar cq-quickstart.jar
# Install Java, Vim, and Wget.  Install Dynamic Media dependencies.
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:openjdk-r/ppa && \
    apt-get update && \
    apt-get install -y openjdk-11-jdk vim ca-certificates gnupg wget imagemagick ffmpeg fontconfig expat freetype2-demos
# Unpack the Jar file
RUN java -jar cq-quickstart.jar -unpack
# Set the LD_LIBRARY_PATH environmental variable
ENV LD_LIBRARY_PATH=/usr/local/lib

This file directs Docker to build a new image using the official Ubuntu image as a base. It specifies the working directory, copies the license file and the quickstart file into the image (note that your quickstart file might have a different name), installs additional packages (like Java, Vim, Wget, and some Dynamic Media dependencies), unpacks the quickstart file, and sets some environment variables.

Step Five: Create the Base Docker Image

Run the following command from within the “aem-in-docker” folder.

docker build -f base/Dockerfile -t aem-base .

It should take a few minutes to run. After the command has been completed run:

docker image ls

You should see your newly created “aem-base” image.

AEM Base Image

Step Six: Creating the Author Docker Image instruction (Dockerfile)

Create a file named “Dockerfile” within the “author” subdirectory.

Set the contents of the file to the following:

# Use the previously created aem-base
FROM aem-base

# Expose AEM author in port 4502 and debug on port 5005
EXPOSE 4502
EXPOSE 5005
VOLUME ["/opt/aem/crx-quickstart/logs"]
# Make the container always start in Author mode with Port 4502.  Add additional switches to support JAVA 11: https://experienceleague.adobe.com/en/docs/experience-manager-65/content/implementing/deploying/deploying/custom-standalone-install.  Add the Dynamic Media runmode.
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005", "-XX:+UseParallelGC", "--add-opens=java.desktop/com.sun.imageio.plugins.jpeg=ALL-UNNAMED", "--add-opens=java.base/sun.net.www.protocol.jrt=ALL-UNNAMED", "--add-opens=java.naming/javax.naming.spi=ALL-UNNAMED", "--add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED", "--add-opens=java.base/java.lang=ALL-UNNAMED", "--add-opens=java.base/jdk.internal.loader=ALL-UNNAMED", "--add-opens=java.base/java.net=ALL-UNNAMED", "-Dnashorn.args=--no-deprecation-warning", "-jar", "cq-quickstart.jar", "-Dsling.run.modes=author,dynamicmedia_scene7", "-p", "4502", "-nointeractive"]

This file instructs Docker to create a new image based on the “aem-base” image. It makes ports 4502 and 5005 available (5005 for debugging purposes), sets up a mount point at “/opt/aem/crx-quickstart/logs”, and specifies the command to run when the image is executed.

Step Seven: Create the Author Docker Image

Run the following command from within the “aem-in-docker” folder.

docker build -f author/Dockerfile -t aem-author .

After the command has been completed run:

docker image ls

You should see your newly created “aem-author” image.

AEM Author Image

Step Eight: Creating the Publisher Docker Image instruction (Dockerfile)

Create a file named “Dockerfile” within the “publish” subdirectory.

Set the contents of the file to the following:

# Use the previously created aem-base
FROM aem-base
# Expose AEM publish in port 4503 and debug on port 5006
EXPOSE 4503
EXPOSE 5006
VOLUME ["/opt/aem/crx-quickstart/logs"]
# Make the container always start in Publish mode with Port 4503.  Add additional switches to support JAVA 11: https://experienceleague.adobe.com/en/docs/experience-manager-65/content/implementing/deploying/deploying/custom-standalone-install.  Add the Dynamic Media runmode.
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5006", "-XX:+UseParallelGC", "--add-opens=java.desktop/com.sun.imageio.plugins.jpeg=ALL-UNNAMED", "--add-opens=java.base/sun.net.www.protocol.jrt=ALL-UNNAMED", "--add-opens=java.naming/javax.naming.spi=ALL-UNNAMED", "--add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED", "--add-opens=java.base/java.lang=ALL-UNNAMED", "--add-opens=java.base/jdk.internal.loader=ALL-UNNAMED", "--add-opens=java.base/java.net=ALL-UNNAMED", "-Dnashorn.args=--no-deprecation-warning", "-jar", "cq-quickstart.jar", "-Dsling.run.modes=publish,dynamicmedia_scene7", "-p", "4503", "-nointeractive"]

Step Nine: Create the Publisher Docker Image

Run the following command from within the “aem-in-docker” folder.

docker build -f publish/Dockerfile -t aem-publish .

After the command has been completed run:

docker image ls

You should see your newly created “aem-publish” image.

AEM Publish Image

Step Ten: Create the Adobe Network

Let’s set up a network to connect Docker containers and facilitate data sharing between them.

docker network create adobe

Step Eleven: Run the Author Docker Image

It’s time to run our Author Docker Image. First, create a local directory for the logs volume specified in the Dockerfile. Within the author subdirectory, create a directory named “logs.” Run the following command within the new logs folder:

Windows

docker run -d --name author -p 4502:4502 -p 5005:5005 --network adobe -v ${PWD}:/opt/aem/crx-quickstart/logs aem-author

macOS/Linux

docker run -d --name author -p 4502:4502 -p 5005:5005 --network adobe -v `pwd`:/opt/aem/crx-quickstart/logs aem-author

The command will return the ID of the new Docker container. It may take some time for the new AEM instance to start. You can monitor the “error.log” file in the logs directory to check its status.

Windows

Get-Content -Path .\error.log -Wait

macOS/Linux

tail -f error.log

After AEM has finished starting up, check that everything is loading correctly by visiting: http://localhost:4502/aem/start.html.

Let’s stop the AEM container for the time being:

docker stop author

Step Twelve: Run the Publisher Docker Image

It’s time to run our Publisher Docker Image.  First, create a local directory for the logs volume specified in the Dockerfile. Within the publish subdirectory, create a directory named “logs.”  Run the following command within the new logs folder:

Windows

docker run -d --name publish -p 4503:4503 -p 5006:5006 --network adobe -v ${PWD}:/opt/aem/crx-quickstart/logs aem-publish

macOS/Linux

docker run -d --name publish -p 4503:4503 -p 5006:5006 --network adobe -v `pwd`:/opt/aem/crx-quickstart/logs aem-publish

The command will return the ID of the new Docker container. It may take some time for the new AEM instance to start. You can monitor the “error.log” file in the logs directory to check its status.

Windows

Get-Content -Path .\error.log -Wait

macOS/Linux

tail -f error.log

After AEM has finished starting up, check that everything is loading correctly by visiting: http://localhost:4503/content.html.  You will see a “Not Found” page.  That is fine for now.

Let’s stop the AEM container for the time being:

docker stop publish

Step Thirteen: Start the Containers via Rancher Desktop

Open Rancher Desktop and go to the Containers tab in the left navigation pane. To start individual containers, check the box in the State column for each container you want to start, then click the Start button. To start all containers at once, check the box in the header row of the State column, and then click the Start button. Let’s go ahead and start all containers.

If you prefer using the command line, you can run:

docker start author
docker start publish

Containers Via Rancher Desktop

Step Fourteen: Create an AEM Project and install it on the Author and Publish instance

Since Docker’s mascot is a whale, I thought it would be fun to name our new AEM project after a famous fictional whale: Monstro from Pinocchio.

Run the following command from a command line (Note: you may have to run this command with elevated privileges):

mvn -B archetype:generate -D archetypeGroupId=com.adobe.aem -D archetypeArtifactId=aem-project-archetype -D archetypeVersion=50 -D aemVersion=cloud -D appTitle="Monstro" -D appId="monstro" -D groupId="com.monstro" -D frontendModule=general -D includeExamples=n

Once this project has been created, let us build and deploy it to our Author instance.

Run the following command from within the “Monstro” project:

mvn clean install -PautoInstallSinglePackage

Check that the project is installed by visiting the following URL to view the results: http://localhost:4502/editor.html/content/monstro/us/en.html.  You should see the following:

Project Monstro

Now, let us build and deploy the project to our Publish instance.

Run the following command from within the “Monstro” project:

mvn clean install -PautoInstallSinglePackagePublish

Verify that the project is installed by visiting this URL: http://localhost:4503/content/monstro/us/en.html.  Installation may take up to five minutes. After this period, you should see the following:

Post Installation Project Monstro

Step Fifteen: Set up the Publish Agent on Author

It’s time to configure the publish agent on our author instance. Go to this URL: http://localhost:4502/etc/replication/agents.author/publish.html.

Click the “Edit” button (next to settings).

Publish Agent On Author Setup

  • Click the checkbox next to “Enabled”
  • Enter “admin” in the “Agent User Id” field
  • Navigate to the Transport tab and enter the following in the URI field:  http://publish:4503/bin/receive?sling:authRequestLogin=1
  • Instead of using “localhost,” the hostname for our publish instance is our container’s name, “publish”
  • In the “username” field, enter “admin,” and in the “password” field, enter the admin’s password
  • Click the “OK” button to save the Agent settings
  • Click the “Test Connection” link, and the replication test should be successful

Step Sixteen: Publish content from the Author

Go back to http://localhost:4502/editor.html/content/monstro/us/en.html. Edit the “Hello, World” component by changing the text from “lalala :)” to “Monstro is the enormous, fearsome whale from Disney’s 1940 animated film Pinocchio.” Verify the update and publish the page. Then, check http://localhost:4503/content/monstro/us/en.html to see your changes on the Publisher as well.

Step Seventeen: Create the Dispatcher Container

Make sure the publisher instance is running before proceeding. Extract the AEM SDK Dispatcher tools.

Windows

Expand-Archive .\aem-sdk-dispatcher-tools-2.0.222-windows.zip
Rename-Item -Path .\aem-sdk-dispatcher-tools-2.0.222-windows -NewName dispatcher-sdk-2.0.222

macOS/Linux

chmod +x ./aem-sdk-dispatcher-tools-2.0.222-unix.sh
./aem-sdk-dispatcher-tools-2.0.222-unix.sh

Since we’ve set up a custom network for our AEM containers, the docker run script won’t function correctly because it doesn’t recognize this network. Let’s modify the docker run script.

Windows

Open “dispatcher-sdk-2.0.222\bin\docker_run.cmd” in your favorite editor.

Add the “--network adobe” argument to the docker command inside the “else” statement.

Modify Docker Run Script For Windows

macOS/Linux

Open “dispatcher-sdk-2.0.222/bin/docker_run.sh” in your favorite editor.

Add the “--network adobe” argument to the docker command inside the “else” statement.

Modify Docker Run Script For macOS/Linux

Execute the docker run script with the following parameters. Be sure to replace the dispatcher source path with the path to your “monstro” source.

Windows

.\dispatcher-sdk-2.0.222\bin\docker_run.cmd C:\Users\shann\Sites\monstro\dispatcher\src publish:4503 8080

macOS/Linux

./dispatcher-sdk-2.0.222/bin/docker_run.sh ~/Sites/monstro/dispatcher/src publish:4503 8080

Once the text stream in your terminal has stopped, go to http://localhost:8080/.  You should see the following:

Dispatch Container For Project Monstro

Open Rancher Desktop and navigate to the Containers tab. Locate the container with an unusual name. If you stop this container, it won’t be possible to start it again. Please go ahead and stop this container. The dispatcher code running in your terminal will also terminate. We want this container to be more permanent, so let’s make some additional changes to the docker run script.

Creating A Permanent Container For Project Monstro

Windows

Open “dispatcher-sdk-2.0.222\bin\docker_run.cmd” in your favorite editor.

macOS/Linux

Open “dispatcher-sdk-2.0.222/bin/docker_run.sh” in your favorite editor.

Add the “--name dispatcher” argument to the “docker” command within the “else” statement. Also, remove the “--rm” switch. According to Docker documentation, the “--rm” switch automatically removes the container and its associated anonymous volumes when it exits, which is not what we want.

Windows

Modify Docker Run Script For Windows 2

macOS/Linux

Modify Docker Run Script For Macos Linux 2

Run the docker run command in your terminal again:

Windows

.\dispatcher-sdk-2.0.222\bin\docker_run.cmd C:\Users\shann\Sites\monstro\dispatcher\src publish:4503 8080

macOS/Linux

./dispatcher-sdk-2.0.222/bin/docker_run.sh ~/Sites/monstro/dispatcher/src publish:4503 8080

Open Rancher Desktop and go to the Containers tab. You should see a container named “dispatcher.” Stop this container. The dispatcher code running in your terminal will terminate, but the container will remain in Rancher Desktop. You can now stop and restart this container as many times as you’d like. You can also start and stop the dispatcher via the command line:

docker start dispatcher
docker stop dispatcher

Docker Provides Value and Flexibility

We have an author and publisher AEM instance running inside a Docker container. Additionally, we have a dispatcher container created using the source from the Monstro project. Although this dispatcher container isn’t very useful, the advantage of Docker is that you can easily delete and create new containers as needed.

I hope you found this blog helpful. I’ve been using Docker on my local machine for the past eight years and value the flexibility it provides. I can’t imagine going back to managing a local AEM instance or dealing with Apache configurations to get the dispatcher working. Those days are behind me.

Computational Complexity Theory – https://blogs.perficient.com/2024/09/10/computational-complexity-theory/ (Tue, 10 Sep 2024)

Computational complexity studies the efficiency of algorithms. It helps classify an algorithm in terms of time and space to identify the amount of computing resources needed to solve a problem. The Big O, Big Ω, and Big Θ notations are used to describe the asymptotic behavior of an algorithm as a function of the input size. In computer science, computational complexity theory is fundamental to understanding the limits of how efficiently an algorithm can be computed.

This paper seeks to determine when an algorithm provides solutions in a short computational time and to identify those that generate solutions with long computational times, which can be categorized as intractable or unsolvable, using polynomial functions as a classical representation of computational complexity. It explains some mathematical notations used to represent computational complexity, its mathematical definition from the perspective of function theory and predicate calculus, as well as complexity classes and their main characteristics for finding polynomial functions. Mathematical expressions can describe the time behavior of a function and show the computational complexity. In a nutshell, we can compare the behavior of an algorithm over time with a mathematical function such as f(n), f(n²), etc.

In logic and algorithms, there has always been a search for how to measure execution time, calculate the computational time to store data, determine whether an algorithm generates a cost or a benefit in solving a problem, or design algorithms that generate a viable solution.

Asymptotic notations

What is it?

Asymptotic notation describes how an algorithm behaves over time, when its arguments tend to a specific limit, usually when they grow very large (tend to infinity). It is mainly used in the analysis of algorithms to show their efficiency and performance, especially in terms of execution time or memory usage as the size of the input data increases.

Asymptotic notation represents the behavior of an algorithm over time by comparing it with mathematical functions. If an algorithm has a loop that repeats actions until a condition is fulfilled, its behavior is similar to a linear function; if it has another loop nested within the first, it can be compared to a quadratic function.

How is an asymptotic notation represented?

Asymptotic notations can be expressed in 3 ways:

  • O(n): The term ‘Big O’ or BigO refers to an upper limit on the execution time of an algorithm. It is used to describe the worst-case scenario. For example, if an algorithm is O(n²), in the worst case its execution time will increase proportionally to n², where n is the input size.
  • Ω(n): ‘Big Ω’ or BigΩ describes a lower limit on the execution time of an algorithm and is used to describe the best-case scenario. If an algorithm has the behavior Ω(n), it means that in the best case its execution time will grow at least proportionally to n.
  • Θ(n): ‘Big Θ’ or BigΘ refers to both an upper and a lower bound on the time behavior of an algorithm. It is used to state that, regardless of the case, the execution time of the algorithm increases proportionally to the specified value. For example, if an algorithm is Θ(n log n), its execution time will increase proportionally to n log n in both the best and worst cases.

In a nutshell, asymptotic notation is a mathematical representation of computational complexity. If we express an asymptotic notation in polynomial terms, it allows us to see how the computational cost increases as a reference variable increases. For example, evaluating the polynomial function f(n) = n + 7, we can conclude that this function has linear growth. Compare this linear function with a second one given by g(n) = n³ − 2: the function g(n) will have cubic growth as n gets larger.
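To make the comparison concrete, here is a small, illustrative C# sketch (not taken from the original post; the class and variable names are mine) that evaluates both functions for increasing values of n:

using System;

// Evaluate f(n) = n + 7 (linear growth) and g(n) = n^3 - 2 (cubic growth)
// for a few increasing values of n to see how quickly g overtakes f.
public static class GrowthComparison
{
    public static void Main()
    {
        foreach (long n in new long[] { 1, 10, 100, 1000 })
        {
            long f = n + 7;          // linear
            long g = n * n * n - 2;  // cubic
            Console.WriteLine($"n = {n,5}:  f(n) = {f,6}  g(n) = {g,12}");
        }
    }
}

For n = 1000, f(n) is only 1007 while g(n) is already close to a billion, which is the gap Figure 1 illustrates.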

Figure 1: f(n) = n + 7 vs g(n) = n³ − 2

From a mathematical point of view, it can be stated that:

The function f(n) is O(n) and the function g(n) is O(n³).

 

Computational complexity types

Finding an algorithm that solves a problem efficiently is crucial in the analysis of algorithms. To achieve this, we must be able to express the algorithm’s behavior as a function; for example, if we can express the algorithm as a polynomial function f(n), a polynomial running time can be established to determine the algorithm’s efficiency. In general, a good algorithm design depends on whether the algorithm runs in polynomial time or less.

Frequency counter and arithmetic sum and bounding rules

To express an algorithm as a mathematical function and know its execution time, it is necessary to find an algebraic expression that represents the number of executions or instructions of the algorithm. The frequency counter is the polynomial representation used throughout this discussion of computational complexity, and below are some simple examples in C# showing how to calculate the computational complexity of some algorithms. We use Big O because it expresses computational complexity in the worst-case scenario.

Constant computational complexity

Analyze the function that adds 2 numbers and returns the result of the sum:

Computational Complexity 2
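The original post shows this function as a screenshot. A minimal C# sketch along the same lines (a reconstruction, not necessarily the exact code from the screenshot) could look like this:

public static int Add(int a, int b)
{
    int result = a + b;  // executed once -> O(1)
    return result;       // executed once -> O(1)
}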

With the Big O notation for each of the instructions in the above algorithm, the number of times each line of code is executed can be determined. In this case, each line is executed only once. Now, to determine the computational complexity or the Big O of this algorithm, the complexity for each of the instructions must be summed up:

O(1) + O(1) = O(2)

Since the result is the constant 2, the running time of the algorithm is constant, i.e., O(1).

Polynomial Computational Complexity

Now let’s look at another example with a slightly more complex algorithm. We need to traverse an array containing the numbers from 1 to 100 and compute the total sum of the whole array:

Computational Complexity 3
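Again, the post presents the code as a screenshot, and the line numbers mentioned in the analysis below refer to that screenshot. A C# sketch of an equivalent routine (a reconstruction, with comments mapping roughly to those line numbers) might be:

public static int SumArray(int[] numbers)   // numbers holds the values 1 to 100
{
    int total = 0;                  // "line 2": executed once -> O(1)
    foreach (int value in numbers)  // "line 3": executed n times -> O(n)
    {
        total += value;             // "line 4": executed n times -> O(n)
    }
    return total;                   // "line 6": executed once -> O(1)
}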

In the sequence of the algorithm, lines 2 and 6 are executed only once, while lines 3 and 4 are repeated n times, until reaching 100 iterations (n = 100, the size of the array). To calculate the computational cost of this algorithm, the following is done:

O(1) + O(n) + O(n) + O(1) = O(2n + 2)

From this result, it can be stated that the algorithm runs in linear time, given that O(2n + 2) ≈ O(n). Let’s analyze another, similar algorithm, but with two loops one after the other. These algorithms are those whose execution time depends linearly on two variables, n and m. This indicates that the running time of the algorithm is proportional to the sum of the sizes of two independent inputs. The computational complexity for this type of algorithm is O(n + m).

Computational Complexity 4
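A C# sketch of such an algorithm with two consecutive, independent loops (reconstructed from the description, so the exact constants may differ from the author’s count) could be:

public static int SumTwoArrays(int[] first, int[] second)
{
    int total = 0;                   // O(1)
    int i = 0;                       // O(1)
    int j = 0;                       // O(1)
    while (i < first.Length)         // condition evaluated n + 1 times
    {
        total += first[i];           // n times
        i++;                         // n times
    }
    while (j < second.Length)        // condition evaluated m + 1 times
    {
        total += second[j];          // m times
        j++;                         // m times
    }
    return total;                    // O(1)
}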

In this algorithm, the two loops are independent: the first while executes n + 1 times, while the second executes m + 1 times, with n ≠ m. Therefore, the computational cost is given by:

O(7) + O(2n) + O(2m) ≈ O(n + m)

Quadratic computational complexity

For the third example, we analyze the computational cost of an algorithm containing nested loops:

Computational Complexity 5
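A C# sketch of an algorithm with nested loops (again a reconstruction; the constants in the author’s count come from the screenshot) could be:

public static void PrintAllPairs(int n)
{
    int i = 0;                                 // O(1)
    while (i < n)                              // outer condition: n + 1 evaluations
    {
        int j = 0;                             // n times
        while (j < n)                          // inner condition: n * (n + 1) evaluations in total
        {
            Console.WriteLine($"({i}, {j})");  // n * n times
            j++;                               // n * n times
        }
        i++;                                   // n times
    }
}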

The condition of a while or do-while loop is evaluated n + 1 times, unlike a foreach loop; these loops perform one additional step: validating the condition that ends the loop. Line 7, repeating n times and performing its corresponding validation each time, contributes a computational complexity of n(n + 1) at this point. In the end, the computational complexity of this algorithm works out as follows:

O(6) + O(4n) + O(2n²) = O(2n² + 4n + 6) ≈ O(n²)

Logarithmic computational complexity

  • Logarithmic complexity in base 2 (log₂(n)): Algorithms with logarithmic complexity O(log n) grow very slowly compared to other complexity types such as O(n) or O(n²). Even for large inputs, the number of operations barely increases. Let us analyze the following algorithm:

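The algorithm appears as a screenshot in the original post; a C# sketch of a loop with this behavior (a reconstruction based on the analysis that follows, with names of my own choosing) is:

public static int CountHalvings(int n)   // e.g. n = 64
{
    int k = 0;                // O(1)
    while (n > 1)             // condition evaluated log2(n) + 1 times
    {
        n = n / 2;            // log2(n) times
        k++;                  // log2(n) times
    }
    return k;                 // O(1); k == 6 when n starts at 64
}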

Using a table, let us analyze the step-by-step execution of the algorithm proposed above:

 

Table 1: Logarithmic loop algorithm execution

If you examine the sequence in Table 1, you can see that its behavior follows a logarithmic pattern. A logarithm is the exponent to which a base must be raised to obtain another number. For example, log₁₀(100) = 2 because 10² = 100. Therefore, it is clear that base 2 must be used for the proposed algorithm:

64/2 = 32

32/2 = 16

16/2 = 8

8/2 = 4

4/2 = 2

2/2 = 1

It can be calculated that log₂(64) = 6, which means the loop has been executed six (6) times (i.e., when k takes the values {0, 1, 2, 3, 4, 5}). This conclusion confirms that the while loop of this algorithm is log₂(n), and the computational cost is shown as:

 

O(1) + O(1) + O(log₂(n) + 1) + O(log₂(n)) + O(log₂(n)) + O(1)

= O(4) + O(3 log₂(n))

O(4) + O(3 log₂(n)) ≈ O(log₂(n))

  • Linearithmic complexity (n log(n)): Algorithms that are O(n log n) have an execution time that grows in proportion to the product of the input size n and the logarithm of n. If the input size doubles, the execution time grows slightly more than double because of the logarithmic factor. This type of complexity is less efficient than O(n) but more efficient than O(n²).

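The original post shows this algorithm as a screenshot. A C# sketch consistent with the description below, i.e. a merge-sort-style routine that splits the input and counts the divisions while doing a linear amount of work per level (a reconstruction under that assumption, not the author’s exact code), could be:

// Counts how many times the array can be divided into halves. Copying the two
// halves costs O(n) at each recursion level, which is what yields O(n log n).
public static int CountDivisions(int[] items)
{
    if (items.Length <= 1)
    {
        return 0;                      // base case: nothing left to divide
    }
    int middle = items.Length / 2;
    int[] left = items[..middle];      // O(n) copy at this level
    int[] right = items[middle..];     // O(n) copy at this level
    return 1 + CountDivisions(left) + CountDivisions(right);
}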

 

T(n) = 2T(n/2) + O(n) ≈ O(n log(n))

Analyzing the algorithm proposed above and recalling the merge sort algorithm, the algorithm performs a similar division, but instead of sorting elements, it counts the possible divisions into subgroups. The complexity of this algorithm is O(n log n) because of the recursion, with n operations performed at each recursion level until the base case is reached.

Finally, in the summary graph (Figure 2), you can see the behavior of the number of operations performed by the functions based on their computational complexity.

Example

An integration service is periodically executed to retrieve customer IDs associated with four or more companies registered with a parent company. The process performs individual queries for each company, accessing various databases that use different persistence technologies. As a result, an array of data containing the customer IDs is generated without checking or removing possible duplicates.

In this case, the initial approach would involve comparing each customer ID with all other elements in the array, resulting in a quadratic number of comparisons, i.e., O(n²):

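The screenshot in the original post shows this initial approach; a C# sketch of it (reconstructed, with hypothetical method and parameter names) could be:

using System.Collections.Generic;

public static List<int> GetUniqueCustomerIds(int[] customerIds)
{
    var unique = new List<int>();
    for (int i = 0; i < customerIds.Length; i++)
    {
        bool isDuplicate = false;
        for (int j = 0; j < i; j++)              // nested loop -> O(n^2) comparisons overall
        {
            if (customerIds[i] == customerIds[j])
            {
                isDuplicate = true;
                break;
            }
        }
        if (!isDuplicate)
        {
            unique.Add(customerIds[i]);          // keep only the first occurrence
        }
    }
    return unique;
}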

In a code review, the author of this algorithm would be advised to optimize the current approach due to its inefficiency. To solve the problem caused by the nested loops, a more efficient approach is to use a HashSet. Here is how to use this object to improve performance, reducing complexity from O(n²) to O(n):

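A C# sketch of the HashSet-based version (again a reconstruction; the method name is kept the same as above for comparison) could be:

using System.Collections.Generic;

public static List<int> GetUniqueCustomerIds(int[] customerIds)
{
    var seen = new HashSet<int>();   // (amortized) O(1) lookups and insertions
    var unique = new List<int>();
    foreach (int id in customerIds)  // single pass -> O(n)
    {
        if (seen.Add(id))            // Add returns false if the id was already present
        {
            unique.Add(id);
        }
    }
    return unique;
}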

Currently, in C# you can use LINQ’s Distinct method on an IEnumerable, which allows you to perform the same task in a single line of code. However, a few clarifications must be made about this approach:

  • Previously, it was noted that a single line of code can be interpreted as having O(1) complexity. In this case, it is different because the Distinct function traverses the original collection and returns a new sequence containing only the unique elements, removing any duplicates using a HashSet, which, as mentioned earlier, results in O(n) complexity.
  • The HashSet also has a drawback: in the worst case, when collisions are frequent, the complexity can degrade to O(n²). However, this is extremely rare and typically depends on the quality of the hash function and the characteristics of the data in the collection.

The correct approach should be:

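A sketch of that single-statement version using LINQ’s Distinct (reconstructed, same hypothetical method name) could be:

using System.Linq;

// Distinct removes duplicates using a set internally, so the typical cost stays O(n).
public static int[] GetUniqueCustomerIds(int[] customerIds) =>
    customerIds.Distinct().ToArray();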

Conclusions

In general, we can reach three important conclusions about computational complexity.

  • To evaluate and compare the efficiency of various algorithms, computational complexity is essential. It helps us understand how the execution time or resource usage (such as memory) of an algorithm increases with input size. This analysis is crucial for choosing the most appropriate algorithm for a particular problem, especially when working with significant amounts of data.
  • Algorithms with lower computational complexity can improve system performance significantly. For example, choosing an O(n log n) algorithm instead of an O(n²) one can have a significant impact on the time required to process large amounts of data. Efficient algorithms are essential to ensure that the system is fast and scalable in real-world applications such as search engines, image processing, and big data analytics.

Figure 2: Operations vs Elements

 

  • Understanding computational complexity helps developers and data scientists to design and optimize algorithms. It allows for finding bottlenecks and performance improvements. By adapting the algorithm design to the specific needs of the problem and the constraints of the execution environment, computational complexity analysis allows informed trade-offs between execution time and the use of other resources, such as memory.

References

  • Roberto Flórez, Algoritmia Básica, Second Edition, Universidad de Antioquia, 2011.
  • Thomas Mailund. Introduction to Computational Thinking: Problem Solving, Algorithms, Data Structures, and More, Apress, 2021.
]]>
https://blogs.perficient.com/2024/09/10/computational-complexity-theory/feed/ 1 368922
AEM Local Development With OpenJDK 11 and Maven 3.9.x https://blogs.perficient.com/2024/07/23/aem-local-development-with-openjdk-11-and-maven-3-9-x/ https://blogs.perficient.com/2024/07/23/aem-local-development-with-openjdk-11-and-maven-3-9-x/#respond Tue, 23 Jul 2024 11:00:00 +0000 https://blogs.perficient.com/?p=366017

The official Adobe tutorial for setting up a local AEM development environment instructs the reader to install Java JDK 11 for AEM 6.5 and above.  It does not provide a download link for JDK 11.  If you were to do a quick Google search for “JDK 11 download,” you would be presented with a search results page containing links to Oracle. 

Oracle Corporation acquired Sun Microsystems (the creators of the Java Programming Language) in 2010.  In 2019, Oracle significantly changed its Java licensing model, impacting how businesses and developers could use Java.  Oracle now requires payment for commercial use of Oracle JDK for updates and support. 

Slightly lower on the Google search results page, you will see links to OpenLogic.  OpenLogic offers free builds of JDK 11. OpenJDK is available free of charge and on an “as is” basis. 

Installing OpenJDK

The simplest method I’ve found to install OpenJDK 11 is from this site: https://www.openlogic.com/openjdk-downloads

From here, you are presented with a form where you select your Java version (11), operating system, architecture, and Java package (JDK).  Select your preferred option, and the page will display a list of available Java versions. You can then choose to download either the installer for a quick and easy setup or a zip archive for manual installation.  I recommend downloading and running the installer. 

Another option is package managers.  Package managers simplify OpenJDK installation across platforms. They’re especially efficient on Linux. macOS users can utilize Homebrew for easy installation and updates. Windows users now have Winget from Microsoft for managing applications like OpenJDK. 

Links for installing OpenJDK via package managers: 

Installing Maven 

Installing Maven 3.9 requires a few additional steps. 

Installing on MacOS

The Homebrew package manager is the best option for macOS users.  Using the --ignore-dependencies flag is crucial to prevent it from installing a potentially conflicting version of OpenJDK. 

brew install --ignore-dependencies maven

Once Maven has been installed, edit the Z Shell configuration file (.zshrc) to include the following directives (create the file if it doesn’t exist): 

export JDK_HOME=$(/usr/libexec/java_home) 

export JAVA_HOME=$(/usr/libexec/java_home) 

export PATH=$PATH:${JAVA_HOME}/bin:/usr/local/bin

Open a new terminal window and verify Java and Maven are installed correctly: 

java --version 

mvn --version

If the output shows the location (path) and version information for both Java and Maven, congratulations! You’ve successfully installed them on your macOS system. 

Installing on Linux

Download the Maven Binary Archive here: https://maven.apache.org/download.cgi

Unpack the archive and move it to the /opt directory: 

tar -zxvf apache-maven-3.9.8-bin.tar.gz 

sudo mv apache-maven-3.9.8 /opt/apache-maven

Edit your shell configuration file and add the following directives:

export PATH=$PATH:/opt/apache-maven/bin

Open a new terminal window and verify Maven is installed correctly: 

mvn --version

If the output shows the location (path) and version information for Maven, congratulations! You’ve successfully installed Maven on your Linux system. 

Installing on Windows

Download the Maven Binary Archive here: https://maven.apache.org/download.cgi. 

Run PowerShell as an administrator. 

Unzip the Maven Binary Archive: 

Expand-Archive .\apache-maven-3.9.8-bin.zip

Create an “Apache Maven” folder within Program Files: 

New-Item 'C:\Program Files\Apache Maven' -ItemType Directory -ea 0

Move the extracted directory to the “Apache Maven” folder: 

Move-Item -Path .\apache-maven-3.9.8-bin\apache-maven-3.9.8 -Destination 'C:\Program Files\Apache Maven\'

Add the Maven directory to the Path Environment Variables: 

Maven Directory To The Path Environment Variables Example

Maven Directory To The Path Environment Variables Example 2

Maven Directory Edit Environment Variable

Click the “OK” button and open a new PowerShell Prompt to verify Maven is installed correctly: 

mvn --version

If the output shows the location (path) and version information for Maven, congratulations! You’ve successfully installed Maven on Windows. 

Additional Notes

Maven 3.9 will be the last version compatible with Adobe AEM 6.5. Future versions of Maven require JDK 17, which Adobe AEM does not yet support. 

When using Java 11, Adobe recommends adding additional switches to your command line when starting AEM.  See: https://experienceleague.adobe.com/en/docs/experience-manager-65/content/implementing/deploying/deploying/custom-standalone-install#java-considerations

Make sure to follow our Adobe blog for more Adobe solution tips and tricks!  

]]>
https://blogs.perficient.com/2024/07/23/aem-local-development-with-openjdk-11-and-maven-3-9-x/feed/ 0 366017
Adobe Sites: Migrating from Webpack to Vite https://blogs.perficient.com/2024/07/16/adobe-sites-migrating-from-webpack-to-vite/ https://blogs.perficient.com/2024/07/16/adobe-sites-migrating-from-webpack-to-vite/#comments Tue, 16 Jul 2024 20:50:31 +0000 https://blogs.perficient.com/?p=365994

Webpack is an amazing bundler for JavaScript and, with the correct loader, it can also transform CSS, HTML, and other assets.  When a new AEM project is created via the AEM Project Archetype and the front-end module is set to general, Adobe provides a Webpack configuration to generate the project’s client libraries.

Introducing Vite

Vite is a new build tool that has recently come onto the scene.  You can check the NPM trends here.

Compared to Webpack,

  • Vite provides significantly faster build times and hot reloading during development.
  • Vite utilizes Rollup.  Rollup generates small bundles by utilizing optimizations like tree shaking, ES6 modules, scope hoisting, minification, code splitting, and a plugin ecosystem.

Avoid Configuration Challenges With Vite

If you have any experience with Webpack, you know the challenges of configuring different loaders to preprocess your files.  Many of these configurations are unnecessary with Vite.  Vite supports TypeScript out of the box.  Vite provides built-in support for .scss, .sass, .less, .styl, and .stylus files.  There is no need to install Vite-specific plugins for them.  If the project contains a valid PostCSS configuration, it will automatically apply to all imported CSS.  It is truly a game-changer. 

Project “Jete”

“Vite” comes from the French word for “fast”.  In music, the term “Vite” refers to playing at a quickened pace.  For the following tutorial, I have chosen the music term “Jete” for the name of our project.  “Jete” refers to a bowing technique in which the player is instructed to let the bow bounce or jump off the strings.  Let us take a cue from this musical term and “bounce” into our tutorial. 

Migrating From Webpack to Vite Tutorial

Create an AEM Project via the AEM Project Archetype: 

mvn -B archetype:generate -D archetypeGroupId=com.adobe.aem -D archetypeArtifactId=aem-project-archetype -D archetypeVersion=49 -D aemVersion=cloud -D appTitle="Jete" -D appId="jete" -D groupId="com.jete" -D frontendModule=general -D includeExamples=n

Once your project has been created, install your project within your AEM instance:

mvn clean install -PautoInstallSinglePackage

After verifying the Jete site in AEM, we can start migrating our frontend project to Vite. 

Backup the existing ui.frontend directory: 

cd jete/ 

mv ui.frontend ../JeteFrontend 

From within “jete” run: 

npm create vite@latest

Use “aem-maven-archetype” for the project name, select Vanilla for the framework, and “TypeScript” for the variant. 

Rename the directory “aem-maven-archetype” to “ui.frontend”.  We chose that project name to match the name generated by the AEM Archetype. 

mv aem-maven-archetype ui.frontend

Let’s put the pom.xml file back into the frontend directory: 

mv ../JeteFrontend/pom.xml ui.frontend

Since we are updating the POM files, let’s update the Node and NPM versions in the parent pom.xml file.

<configuration>  

  <nodeVersion>v20.14.0</nodeVersion>  

  <npmVersion>10.7.0</npmVersion>  

</configuration>

We will be using various Node utilities within our TypeScript files. Let us install the Node Types package. 

npm install @types/node --save-dev 

Add the following compiler options to our tsconfig.json file: 

"outDir": "dist", 

"baseUrl": ".", 

"paths": { 

  "@/*": [ 

    "src/*" 

  ] 

}, 

"types": [ 

  "node" 

]

These options set the output directory to “dist”, the base url to the current directory: “ui.frontend”, create an alias of “@” to the src directory, and add the Node types to the global scope. 

Let’s move our “public” directory and the index.html file into the “src” directory. 

Create a file named “vite.config.ts” within “ui.frontend” project. 

Add the following vite configurations: 

import path from 'path'; 

import { defineConfig } from 'vite'; 

export default defineConfig({ 

  build: { 

    emptyOutDir: true, 

    outDir: 'dist', 

  }, 

  root: path.join(__dirname, 'src'), 

  plugins: [], 

  server: { 

    port: 3000, 

  }, 

});

Update the index.html file within the “src” directory. Change the reference to the main.ts file from “/src/main.ts” to “./main.ts”. 

<script type="module" src="./main.ts"></script>

Run the Vite dev server with the following command: 

npm run dev

You should see the following page: 

AEM Vite + Typescript

We are making progress! 

Let us make some AEM-specific changes to our Vite configuration. 

Change outDir to: 

path.join(__dirname, 'dist/clientlib-site')

Add the following within the build section: 

lib: { 

  entry: path.resolve(__dirname, 'src/main.ts'), 

  formats: ['iife'], 

  name: 'site.bundle', 

}, 

rollupOptions: { 

  output: { 

    assetFileNames: (file) => { 

      if (file.name?.endsWith('.css')) { 

        return 'site.bundle.[ext]'; 

      } 

      return `resources/[name].[ext]`; 

    }, 

    entryFileNames: `site.bundle.js`, 

  }, 

},

These configurations set the entry file, wrap the output within an immediately invoked function expression (to protect against polluting the global namespace), set the JavaScript and CSS bundle names to site.bundle.js and site.bundle.css, and set the output path for assets to a directory named “resources”.  Using the “iife” format requires setting the “process.env.NODE_ENV” variable. 

Add a “define” section at the same level as “build” with the following option: 

define: { 

  'process.env.NODE_ENV': '"production"', 

}, 

Add a “resolve” section at the same level as “define” and “build” to use our “@” alias: 

resolve: { 

  alias: { 

    '@': path.resolve(__dirname, './src'), 

  }, 

}, 

Add the following “proxy” section within the “server” section: 

proxy: { 

  '^/etc.clientlibs/.*': { 

      changeOrigin: true, 

      target: 'http://localhost:4502', 

  }, 

},

These options inform the dev server to proxy all requests starting with /etc.clientlibs to localhost:4502. 

It is time to remove the generated code.  Remove “index.html”, “counter.ts”, “style.css”, “typescript.svg”, and “public/vite.svg” from within the “src” directory.  Remove everything from “main.ts”. 

Move the backup of index.html file to the src directory: 

cp ../JeteFrontend/src/main/webpack/static/index.html ui.frontend/src/

Edit the index.html file.  Replace the script including the “clientlib-site.js” with the following: 

<script type="module" src="./main.ts"></script>

Save the following image to “src/public/resources/images/”: 

https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/src/public/resources/images/favicon.ico 

Add the following element within the head section of the index.html file: 

<link rel="icon" href="./resources/images/favicon.ico" type="image/x-icon" />

While we are updating favicons, edit the ui.apps/src/main/content/jcr_root/apps/jete/components/page/customheaderlibs.html file.

Add the following to the end of the file: 

<link rel="icon" href="/etc.clientlibs/jete/clientlibs/clientlib-site/resources/images/favicon.ico" type="image/x-icon" />

Run the Vite dev server once more … 

npm run dev

You should see the following: 

Project Jete With AEM Vite

It is not very attractive. Let us add some styling. Run the following command to install “sass”. 

npm i -D sass

Create a “main.scss” file under the “src” directory. 

touch main.scss

Edit the main.ts file and add the following line to the top of the file: 

import '@/main.scss'

Copy the variables stylesheet from the frontend backup to the “src” directory: 

cp ../JeteFrontend/src/main/webpack/site/_variables.scss ./ui.frontend/src/

Edit the _variables.scss file and add the following: 

$color-foreground-rgb: rgb(32 32 32);

Copy the base stylesheet from the frontend backup to the “src” directory: 

cp ../JeteFrontend/src/main/webpack/site/_base.scss ./ui.frontend/src/

Include references to these files within main.scss: 

@import 'variables'; 

@import 'base';

Run the Vite dev server once more … 

npm run dev

You should see the following: 

Project Jete With AEM Vite Version 2

Things are getting better, but there is still more work to do! 

Copy the component and site stylesheets from the frontend backup to the “src” directory: 

cp -R ../JeteFrontend/src/main/webpack/components ./ui.frontend/src/ 

 

cp -R ../JeteFrontend/src/main/webpack/site/styles ./ui.frontend/src/

Add the following to the main.scss file: 

@import './components/**/*.scss'; 

@import './styles/**/*.scss';

Run the Vite dev server … 

npm run dev

No luck this time. You will probably see this error: 

Project Jete With AEM Vite Error

Vite doesn’t understand “splat imports”, “wildcard imports”, or “glob imports”.  We can fix this by installing a package and updating the Vite configuration file. 

Install the following package: 

npm i -D vite-plugin-sass-glob-import

Update the vite.config.ts file. Add the following to the import statements: 

import sassGlobImports from 'vite-plugin-sass-glob-import';

Add “sassGlobImports” to the plugins section: 

plugins: [sassGlobImports()],

Now, let’s run the Vite dev server again. 

npm run dev

You should see the following: 

Project Jete With Aem Vite Version 3

Much better.  The front end is looking great!  Time to work on the JavaScript imports! 

TypeScript has been working well for us so far, so there’s no need to switch back to JavaScript. 

Remove the “helloworld” JavaScript file: 

rm -rf src/components/_helloworld.js

Grab the TypeScript from this URL and save it as src/components/_helloworld.ts: https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/src/components/_helloworld.ts 

To see the results of this script within our browser, we have to include this file within main.ts.  Importing splats won’t work on a TypeScript file.  So we can’t write: “import ‘@/components/**/*.ts’”.  Instead, we will write:

import.meta.glob('@/components/**/*.ts', { eager: true });

Now, let’s run the Vite dev server. 

npm run dev

You should see the following in Chrome DevTools: 

Aem Vite Javascript Example

Very good!  The JavaScript is working as well! 

The following section is optional, but it is good practice to add some linting rules. 

Install the following: 

npm i -D @typescript-eslint/eslint-plugin @typescript-eslint/parser autoprefixer eslint eslint-config-airbnb-base eslint-config-airbnb-typescript eslint-config-prettier eslint-import-resolver-typescript eslint-plugin-import eslint-plugin-prettier eslint-plugin-sort-keys eslint-plugin-typescript-sort-keys postcss postcss-dir-pseudo-class postcss-html postcss-logical prettier stylelint stylelint-config-recommended stylelint-config-standard stylelint-config-standard-scss stylelint-order stylelint-use-logical tsx

Save the following URLs to ui.frontend:

https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.eslintrc.json

https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.postcssrc.json 

https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.prettierrc.json 

https://raw.githubusercontent.com/PRFTAdobe/jete/main/ui.frontend/.stylelintrc.json 

Add the following to the “scripts” section of package.json: 

"lint": "stylelint src/**/*.scss --fix && eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0"

Let’s try out our new script by running: 

npm run lint

You should see a fair amount of sass linting errors.  You can fix the errors manually or overwrite your local versions with the ones from the git repo: https://github.com/PRFTAdobe/jete/tree/main/ui.frontend/src 

We are ready to move on from linting.  Let’s work on the AEM build. 

Install the following: 

npm i -D aem-clientlib-generator aemsync

Save the following URLs to ui.frontend: 

https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/aem-sync-push.ts 

https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/clientlib.config.ts 

https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/aem-clientlib-generator.d.ts 

https://github.com/PRFTAdobe/jete/blob/main/ui.frontend/aemsync.d.ts 

The files with the “d.ts” extensions are used to provide typescript type information about the referenced packages. 

The “clientlib.config.ts” script creates a client library based on the JS and CSS artifacts created during the build process.  It also copies the artifacts to the “clientlib” directory within “ui.apps”. 

The “aem-sync-push.ts” script takes the clientlib created above and pushes it to a running AEM instance. 

It is time to update the “scripts” section of package.json. 

Remove the existing “build” and “preview” commands.  Add the following commands: 

"build": "tsc && npm run lint && vite build && tsx ./clientlib.config.ts && tsx ./aem-sync-push.ts", 

"prod": "tsc && npm run lint && vite build && tsx ./clientlib.config.ts",

Let’s try out the build command first: 

npm run build

If the command has been completed successfully, you will see messages indicating that the “generator has finished” and the “aem sync has finished”.  You will also notice the creation of a “dist” directory under “ui.frontend”. 

Our last step is to copy over the “assembly.xml” file from the backup we made earlier. 

cp ../JeteFrontend/assembly.xml ui.frontend/

With that file in place, we are ready to rerun the AEM build: 

mvn clean install -PautoInstallSinglePackage

Congratulations!

The build should be complete without errors.  You have successfully migrated from Webpack to Vite! 

Make sure to follow our Adobe blog for more Adobe solution tips and tricks!  

]]>
https://blogs.perficient.com/2024/07/16/adobe-sites-migrating-from-webpack-to-vite/feed/ 1 365994
How to Use the Personalization Connector for Salesforce Sales and Service Clouds https://blogs.perficient.com/2024/03/08/how-to-use-the-personalization-connector-for-salesforce-sales-and-service-clouds/ https://blogs.perficient.com/2024/03/08/how-to-use-the-personalization-connector-for-salesforce-sales-and-service-clouds/#respond Fri, 08 Mar 2024 17:06:23 +0000 https://blogs.perficient.com/?p=358209

Have you ever wondered what happens if you connected Personalization with Sales and Service Clouds? The Personalization connector for Sales and Service Clouds exposes user data and server-side campaigns in Salesforce CRM for your contacts and leads.

✋ The Key for Success

To use Personalization for Sales and Service Clouds, you’ll need to have:

  1. User identity systems fully working. The user needs to exist inside the CRM, and you will need to make a match using the same attributes you use in Personalization. For instance, if you’re matching with email addresses in Personalization, ensure that your Contact or Lead has the corresponding email.
  2. The Personalization dataset needs to be fully working with catalog items or promotions. In addition, you’ll be sending Einstein Recipes, so it’s crucial to ensure this functionality as well.

Ensure both elements are functioning seamlessly for immediate results.

👉 Connect Both Tools

With Personalization now operational, you can integrate it with Sales Cloud. You need to install the “Interaction Studio Connector for Sales and Service Cloud” from AppExchange in Sales Cloud. This managed package includes Apex classes and four key Lightning components that are used to display Personalization data.

To make it work, you need to:

  1. Create an API token in Personalization to send events and access the API. This token exposes all datasets in Personalization and their information.
  2. Create a Named Credential in CRM using the API token from Personalization.

Named Credential

 

👉 Access the Data

After you’ve set up the connection, it’s time to add the components to the Contact or Lead page layout. For these three components, I’ll show how they interact with your data.

  1. Event Stream
  2. Affinity Graph
  3. Next Best Recommendation

Each component must align a CRM attribute with a Personalization Identity Type, and it’s crucial to specify the dataset, ensuring that known users in CRM (contacts or leads) are associated with the chosen dataset.

✔ Event Stream Component

Event Stream

This component gives you information about the interactions your contact or lead is having with the site. You’ll be able to see what the contact or lead is doing on the site, where they are going, and the date and time they performed a specific action.

Take into account that it won’t refresh every 15 seconds like it does in Personalization, and it won’t give all the specific details you can see inside Personalization.

✔ Affinity Graph Component

Affinities Blog by Time / Affinities Blog by View

The affinity component gives you a better perspective on how your contact or lead interacts with a client’s product. This component works with the collected information about the different catalog objects you have set in Personalization.

Affinities Filter

You can filter by view time, purchases, views, and revenue, and you can add one for each catalog object you have.

✔ Next Best Recommendation Component

This component requires additional refinement as it necessitates exposing a Server-Side campaign in Personalization to send the outcomes of a chosen Einstein Recipe.

Next Best Recommendation

This campaign will return recommendations based on how the recipe is set, from trending products to co-browsed items or any eligible item.

Each time the contact record is opened, you can see the triggered interaction for the campaign in the Event Stream in Personalization and in CRM.

Event Stream Recommendation

Event Stream in Personalization

 

Perficient + Salesforce 

With its ability to seamlessly expose Personalization information in CRM, this connector opens new possibilities for enhancing your team’s performance. At Perficient, we are ready to ensure the success of your endeavors.

As a leading Salesforce consulting partner, we are on a mission to harness the power of Salesforce to solve complex business problems. With specialized knowledge in Data Cloud, Einstein AI, Marketing Cloud, and Experience Cloud, our team is dedicated to crafting innovative digital experiences that drive client success.

We aren’t just experts; we’re storytellers who understand the unique needs and challenges in the manufacturing, automotive, healthcare and life sciences, and financial services industries. We team up with our industry and solution experts to build complex enterprise ecosystems for our clients, and through our commitment to building authentic relationships with clients and partners, we foster collaboration and trust that lead to sustainable growth.

At the heart of our mission is our belief in lifting up people and communities. By leveraging our global team’s skills and resources, we strive to solve complex business problems and leave a meaningful impact on society.

Join us in our journey to revolutionize the way businesses connect, engage, and thrive in the digital era.

]]>
https://blogs.perficient.com/2024/03/08/how-to-use-the-personalization-connector-for-salesforce-sales-and-service-clouds/feed/ 0 358209
The Power of User Testing for Web Accessibility: Digital Accessibility Testing Fundamentals 3 of 4 https://blogs.perficient.com/2023/05/14/the-power-of-user-testing-for-web-accessibility-digital-accessibility-testing-fundamentals-3-of-4/ https://blogs.perficient.com/2023/05/14/the-power-of-user-testing-for-web-accessibility-digital-accessibility-testing-fundamentals-3-of-4/#respond Mon, 15 May 2023 02:34:37 +0000 https://blogs.perficient.com/?p=335484

Welcome back to our series on Digital Accessibility Testing Fundamentals! In this third installment, we’ll discuss some of the user testing techniques and tools for digital accessibility.

User Testing

In our increasingly digital world, ensuring accessibility for all users is paramount. User testing has emerged as a powerful tool for gathering valuable feedback on digital products and services, enabling organizations to create more inclusive experiences. In particular, user testing with individuals who have disabilities is crucial for identifying accessibility barriers and optimizing the user experience. In this blog, we will explore the significance of user testing in improving accessibility features and address the importance of accommodating diverse user needs.

 

Understanding User Testing

User testing is a research method that involves observing and gathering feedback from users as they interact with a product or service. It aims to uncover insights into how users navigate, understand, and utilize a digital platform. While user testing is commonly associated with evaluating usability and user experience, it also plays a pivotal role in assessing the accessibility of digital products.


Testing with People Who Have Disabilities

Including individuals with disabilities in user testing is essential for creating more accessible and inclusive experiences. By engaging with users who face specific challenges, organizations can identify barriers and gain a deeper understanding of the needs and perspectives of this user group. Also, involving individuals with disabilities early in the design and development process can prevent costly fixes and ensure that accessibility features are implemented effectively from the start.

Benefits of User Testing for Accessibility

  1. Identifying Barriers: User testing with individuals who have disabilities helps to uncover accessibility barriers that may be overlooked during the development phase. It provides insights into the challenges users face when interacting with a digital product, allowing organizations to address these issues promptly and effectively.
  2. Gathering Feedback: By directly involving users with disabilities, organizations can gather firsthand feedback on the accessibility features of their products. This feedback is invaluable for refining existing features and incorporating new ones that meet the needs of a diverse user base.
  3. Improving User Experience: User testing highlights pain points and usability issues, enabling organizations to improve the overall user experience. By addressing accessibility barriers, companies can enhance navigation, readability, interaction, and other aspects that contribute to a positive user experience.
  4. Enhancing Inclusivity: Designing with inclusivity in mind benefits not only individuals with disabilities but also a wider range of users. User testing helps organizations understand the unique needs of various user groups and create products that cater to a diverse audience.

Best Practices for User Testing Accessibility

  1. Recruiting Participants: Ensure a diverse group of participants with a range of disabilities, including but not limited to visual, auditory, motor, cognitive, and neurological impairments.
  2. Preparing Test Environment: Create an accessible testing environment that accommodates specific needs. Provide appropriate assistive technologies, accessibility features, and any necessary adaptations to facilitate the testing process.
  3. Structuring Test Scenarios: Develop test scenarios that focus on specific accessibility features and tasks. This allows participants to provide feedback on the effectiveness of these features and helps identify areas for improvement.
  4. Encouraging Open Communication: Create a comfortable and inclusive atmosphere during testing sessions, encouraging participants to share their experiences and provide honest feedback. Active listening and empathy are essential in understanding the challenges users face.
  5. Documenting Feedback: Record and document the feedback received during user testing. This information serves as a valuable reference for improving accessibility features and implementing necessary changes.

User testing is a powerful tool for ensuring accessibility and enhancing the user experience of digital products. By including individuals with disabilities in the testing process, organizations gain invaluable insights into the challenges faced by these users. User testing allows for the identification of accessibility barriers, refinement of features, and the creation of more inclusive experiences for all users. By adopting best practices and actively involving diverse user groups, organizations can work towards a more accessible and inclusive digital landscape.

 

For more information on why accessibility is important in general, you can check out my previous blog post here.

For further information on how to make your product accessible to your audience, contact our experienced design experts, check out our Accessibility IQ for your website, download our guide Digitally Accessible Experiences: Why It Matters and How to Create Them, and read more from our UX for Accessible Design series.

So, What Comes Next?

In the next post, we’ll continue our exploration of Code Review techniques.

Stay in touch and follow my next post.

]]>
https://blogs.perficient.com/2023/05/14/the-power-of-user-testing-for-web-accessibility-digital-accessibility-testing-fundamentals-3-of-4/feed/ 0 335484