Paul Goodrich, Author at Perficient Blogs

Generative AI: The 3 Things No One is Telling You About (Thu, 12 Sep 2024)
https://blogs.perficient.com/2024/09/12/generative-ai-the-3-things-no-one-is-telling-you-about/

Generative AI, or as I prefer to call it, generative machine learning (ML), has taken the business world by storm. By now you’ve likely encountered an email or some form of marketing generated by one of these models, perhaps without even realizing it. They’re powerful tools, and I look forward to the improvements we can bring to people’s lives with them. But I also think it’s essential to understand their limitations, lest you end up with some embarrassing results.

So let’s review how some popular un-customized models handle text and image prompts and the 3 things no one is talking about when it comes to generative AI. 

1. Artificial Intelligence is a Misnomer

Though artificial intelligence is useful as a marketing term, intelligence is more the end goal for many researchers than an accurate assessment of where the technology is right now. Intelligence implies many attributes, including reflection, judgment, initiative, and wonder. That field of study is referred to as Artificial General Intelligence (AGI) and remains outside the scope of the most popular models.  This distinction only becomes more evident as you feed complex scenarios and requirements into the currently available models.  Remember that most of the time these tools are backed by a predictive model built from training data rather than logically assessing a request against a series of rules the way traditional algorithms do.

If you give a model prompts or situations it does not have training for, it may have trouble producing a realistic or accurate result.  For example, I gave a popular generative model the following prompt: ‘Owl breathing in space.’

[Image: Owl breathing in space]

It’s a nice picture, but I’m afraid the owl won’t be breathing any time soon…

We as humans easily recognize a series of conceptual rules about a subject (e.g., owls need air to breathe), but this model has no such teaching yet.  That’s not to say these models won’t someday reach a basic level of these attributes of intelligence, but be careful to understand this limitation in the meantime.

Fortunately, we can get to our desired result by specifying some additional instructions: “Owl breathing in space, wearing an astronaut suit.”

[Image: Owl breathing in space, wearing an astronaut suit]

2. Deep Learning Models Aren’t Always Trustworthy

Deep learning models typically present information with the same projected degree of confidence, even if it is completely incorrect or contradicts something the model said previously. Remember, it’s a predictive language model, not a conscious actor or a trustworthy source.

[Image: Deep learning models are not always a trustworthy source]

It’s a very important distinction!  The model outputs the information exactly in reverse of what it should have stated.

To help mitigate this issue, we can instruct the model to pull answers only from an approved set, as shown in the image below.

[Image: Asking the model to pull answers from an approved set]

However, this approach often isn’t sufficient. We can easily end up with the model filling in some gaps using language that is outside of the strictly approved set as seen in the example below.

[Image: A generative AI model breaking the rules]

While the model tried to answer all the questions, it broke the rule we provided at the start of the session.

That’s where Perficient’s policies and governance frameworks come in. We can implement a procedural algorithm that sits between the raw output of the AI model and any user requests.  This management layer ensures strict output compliance and rejects any attempt by the model to provide an answer outside the exact language in the approved set. It’s much more in line with what we’re looking for!
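
To make the idea concrete, here is a minimal, hypothetical Java sketch of such a compliance layer. The class and method names are mine, not part of any Perficient framework, and the matching rule is deliberately strict: anything the model produces that is not an exact approved answer gets replaced with a safe fallback response.

import java.util.Set;

// Hypothetical guardrail: only answers that appear verbatim in the approved set are returned.
public class ApprovedAnswerGuard {

    private final Set<String> approvedAnswers;
    private final String fallbackAnswer;

    public ApprovedAnswerGuard(Set<String> approvedAnswers, String fallbackAnswer) {
        this.approvedAnswers = approvedAnswers;
        this.fallbackAnswer = fallbackAnswer;
    }

    // Returns the model's output only if it exactly matches an approved answer.
    public String filter(String rawModelOutput) {
        String normalized = rawModelOutput == null ? "" : rawModelOutput.trim();
        return approvedAnswers.contains(normalized) ? normalized : fallbackAnswer;
    }
}

A real governance framework layers on much more (auditing, escalation, partial matching policies), but even a check this simple keeps the raw model from improvising outside its allowed language.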

3. Be Wary of Partner-Owned Models

While partner-owned models may have a high ROI if you do not have an existing ML practice, you are limiting your business's potential by relying on a partner's closed-source models. For instance, closed-source models are often built for broad use cases rather than fine-tuned to your products or services.

[Image: Generative AI answers how to program AEM to automatically refresh the page]

While options #1 and #2 in the image above are technically feasible, the much simpler afterinsert EditConfig solution exists and aligns better with Adobe’s best practices. Notably, option #2 is a very complex solution I would never recommend anyone implement, and #3 doesn’t answer the question at all. I would not trust this model to answer questions about AEM consistently and correctly without a lot more training. There’s no easy way around this yet, so make sure you always verify what the model is telling you against a trustworthy source.

So if you’re going to have to eventually spend a lot of time training a third-party model, why not invest in your own model? You don’t have to do it alone, as Perficient has a comprehensive artificial intelligence practice that can help guide you from the ground up.

Thinking Critically About Generative AI Safeguards

I hope these considerations were helpful and have led you to think about how to put safeguards in place when using generative AI. Until next time!

For more information on how Perficient can implement your dream digital experiences, contact us! We’d love to hear from you.

Contact Perficient to start your journey.

Headful or Headless AEM? Why not both with Hybrid? – Part 2 (Thu, 09 Feb 2023)
https://blogs.perficient.com/2023/02/09/headful-or-headless-aem-why-not-both-with-hybrid-part-2-combine-with-spa/

Previously I presented a common situation where an engineering team might push for Headless AEM and covered why, in my opinion, a Hybrid solution is a better approach.  I discussed how Content Fragments, Experience Fragments, and Sling Model Exporters are used in combination to deliver the Headless side of the AEM experience.  In case you missed Part 1, you can read it here: https://blogs.perficient.com/2023/02/03/headful-or-headless-aem-why-not-both-with-hybrid/

So how do you best combine using Content Fragments, Experience Fragments, and Sling Model Exporters with SPAs? Here are several approaches.

SPA Widgets / Single Pages

Sometimes only portions of a page, or specific pages, require the high interactivity or state management that comes with SPA frameworks.  You can build AEM components that utilize any JS library or framework, load them only on pages where they are needed, and even use different frameworks per page.  This approach requires only a light amount of custom development to integrate your SPA components with AEM.

For your engineering team, typically this is done through a small AEM wrapper component.  It utilizes an HTL template and data attributes for content sourced from the Sling Model and AEM dialog authoring.  As covered previously, you can then expose the Sling Model as JSON to support multichannel content.

This solution works best when you do not require SPA-controlled routing for your pages.  If you still need URL-based, stateful entry points, remember that query parameters and suffixes are always available as an option.

SPA Editor

The SPA Editor is an AEM-provided solution for React and Angular applications to integrate directly with AEM with in-context editing.  It works best when you

  • Plan to write your entire frontend for a site in React or Angular
  • Want to minimize the amount of AEM your frontend developers need to know
  • Require SPA-controlled routing and want to serve SPA web pages from AEM

The content authoring for the SPA components is still stored in the JCR, but it is provided as a JSON representation via component mapping instead of through AEM HTL templates.  You can then expose this JSON to other channels in the same way as before.  One nice thing about this solution is that Adobe has taken the time to re-write many of the Core Components in React and Angular.  This allows for direct parent/child SPA component relationships and data flows that will be very familiar to SPA developers.

Now this solution does come with some AEM limitations because the rendering is controlled via the SPA rather than HTML templates.  For a full, updated list of limitations, please read the Adobe documentation.  The SPA Editor is still actively supported and developed by Adobe, so hopefully they will remove these limitations over time.

Remote SPA

Remote SPA is an AEM-provided solution for externally hosted React applications to become editable within AEM.  Functionally, it operates in much the same way as SPA Editor, but the SPA server delivers the pages instead of AEM.  This allows the engineering team to build the bulk of the site components outside of AEM and to scale the page traffic separately from AEM.  Downsides of this approach include:

  • The need for separate server infrastructure
  • Two separate deployment cycles
  • The same limitations as SPA Editor
  • And it currently only supports React

Still, this may be the best solution for adding authoring to an existing SPA application or meeting timeline requirements for a team unfamiliar with AEM.

My colleague Jeff Molsen did a recent “Pop-up Perspective with Perficient” video on this topic if you’d like to learn more in-depth or see Remote SPA in action.

Explore What AEM Has to Offer

If you are using a different frontend, I still recommend taking the time to learn and develop within AEM with Sling Exporters. There are several great tutorials on how to develop editable components within AEM with or without SPA editor.  When faced with a new problem, explore what AEM has to offer first. Your marketing team will thank you for it.

For more information on how Perficient can implement your dream digital experiences, we’d love to hear from you. We’re certified by Adobe for our proven capabilities, and we hold an Adobe Experience Manager specialization (among others).  Contact Perficient to start your journey.

Headful or Headless AEM? Why Not Both with Hybrid? (Fri, 03 Feb 2023)
https://blogs.perficient.com/2023/02/03/headful-or-headless-aem-why-not-both-with-hybrid/

It’s not uncommon, when facing a new problem, to fall back on a tried-and-true solution, then suddenly remember why the team moved off of that solution in the first place.  Recently, I’ve seen this trend with engineering teams and a desire for multichannel content.

A Common Case for Headless Content on AEM

Let’s set the stage with an example.  A digital marketing team has licensed Adobe Experience Manager 6.5 with the hope of using its WYSIWYG content editing to quickly produce and release content decoupled from code deployments.  They also see that AEM has the capacity to produce reusable multichannel content via Content Fragments.  Marketers plan on using those fragments within a marketing website, a companion mobile app, and voice assistant devices.  Finally, it would be great if the site had the option for highly interactive pages that didn’t require a refresh.  They ask the engineering team to implement the solution.

The engineering team is full of talented backend developers who have years of experience serving up web content.  The engineers dig into the platform, and while some are able to learn AEM component development, it’s slow going.  They soon have a few concerns:

  • They want to use a frontend framework to deliver the highly interactive parts of the website.  Expanding the team is difficult because only a few developers in the market have both AEM component and frontend framework experience.
  • There is already a disconnect between authoring Content fragments and the expectation of authoring content in-place on the page.
  • There are a myriad of options for implementing Single Page Applications (SPAs) in AEM.  Which one best fits the use case?
  • Progress to get even one demo-able page together is slow.  If this were just in [insert technology] instead of multiple HTL templates, Sling Models, and component dialogs, it would be faster.

Frustrated, the engineering team spins up a proof of concept with their previous stack.  They go back to the marketing team and show how much faster they created a new page and components.  Engineering pushes for a compromise of headless content on AEM and a separate SPA site with the stack they are familiar with.  In my opinion, this largely defeats the purpose of licensing AEM.  With this approach, the ability to use WYSIWYG editing and decoupled code/content releases is gone.

Why You Should Consider Fragments and Sling Model Exporters

Instead, consider using Content Fragments, Experience Fragments, and Sling Model Exporters combined with SPAs for benefits of both approaches.

Content Fragments are code-free, structured text and image content you can author and expose to any channel via JSON or GraphQL. 

Experience Fragments are also code-free, but present experiences with a partial or complete layout in HTML.  Optionally, they include design and functionality via CSS and JavaScript.  The Experience Fragments can utilize any AEM component and are intended for reusable “ready/nearly ready” experiences.  With some light custom development, you can even leverage Content Fragments within AEM components and thus any Experience Fragment. 

Sling Model Exporters can serialize most AEM component authoring into JSON to expose to any channel.  This is the preferred option for retaining in-context editing, as most other solutions require navigating to another page to edit the experience.  All Adobe Core Components have this enabled by default.  Custom components require a small amount of development to configure, but this is a powerful and quick way for existing AEM implementations to enable multichannel content delivery.
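
To illustrate what that small amount of development looks like, here is a minimal sketch of a custom component's Sling Model with JSON export enabled through the Jackson exporter. The resource type and property names are hypothetical; the annotations and the .model.json request convention follow the standard Sling Model Exporter framework used by the Core Components.

package com.example.core.models;

import javax.annotation.PostConstruct;

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Exporter;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.ValueMapValue;

// Hypothetical component model: resource type and property names are illustrative.
@Model(
    adaptables = SlingHttpServletRequest.class,
    resourceType = "myproject/components/teaser",
    defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
@Exporter(name = "jackson", extensions = "json")
public class TeaserModel {

    @ValueMapValue
    private String title;

    @ValueMapValue
    private String description;

    private String formattedTitle;

    @PostConstruct
    protected void init() {
        // Derived values computed here are included in the JSON output via the getters below.
        formattedTitle = title == null ? "" : title.trim();
    }

    public String getTitle() {
        return formattedTitle;
    }

    public String getDescription() {
        return description;
    }
}

With a model like this deployed, requesting the component resource with the model selector and json extension returns the getters serialized as JSON for any channel to consume.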

Utilizing these solutions together over headless alone has several benefits.  Keeping these fragments, together with at least one of the experiences, within the AEM author environment means you can natively preview changes in a channel before they go live.  You can also build out much of the base site structure with Core Components and fragment-based authoring without any development.  Then, once your team is more comfortable with AEM development, utilize Sling Model Exporter for multichannel delivery of custom experiences.  Most critically, you retain WYSIWYG editing and decoupled code/content releases.

But What About SPA for AEM?

You might notice there’s a key element left unanswered here – how best to do that in combination with SPA?  Thankfully there are several options, which I cover in Part 2.

For more information on how Perficient can implement your dream digital experiences, we’d love to hear from you. We’re certified by Adobe for our proven capabilities, and we hold an Adobe Experience Manager specialization (among others).  Contact Perficient to start your journey.

How Good is your AEM Security? – AEMaaCS and 3rd Party Dependencies (Fri, 04 Nov 2022)
https://blogs.perficient.com/2022/11/04/how-good-is-your-aem-security-aemaacs-and-3rd-party-dependencies/

Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever.  Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/).  There are already many resources on generic mitigation for these vulnerabilities.  So instead, in this series, I cover security issues and mitigations specific to AEM.  In this final post, I will cover the Content Disposition Filter, changes in AEM as a Cloud Service, and 3rd party dependencies.

Previous posts in series:

  • How good is your AEM security? – XSS (https://blogs.perficient.com/2022/10/04/how-good-is-your-aem-security-xss/)
  • How good is your AEM security? – Sling Resolution (https://blogs.perficient.com/2022/10/11/how-good-is-your-aem-security-sling-resolution/)
  • How good is your AEM Security? – Denial of Service (https://blogs.perficient.com/2022/10/19/how-good-is-your-aem-security-denial-of-service/)
  • How good is your AEM Security? – Mitigation Tools (https://blogs.perficient.com/2022/10/26/aem-security-mitigation-tools/)

Content Disposition Filter

If you allow users to upload SVG images to your site, you might be creating a vulnerability.  Viewing a user-uploaded SVG image directly via URL is subject to an XSS attack, since these images can also contain malicious JavaScript.  Typically this JavaScript is constructed by a bad actor to either send user cookies and storage data to a remote server or create a legitimate-looking page for phishing purposes.  Malicious users could then send unsuspecting victims links to this file stored on your server, and those links would come from your domain.  From a security perspective, it’s preferable to disallow SVG uploads entirely, or at least to programmatically remove all JS from the SVG files before saving them server-side.  If this is not possible within your business requirements, or is too large of a scope, there’s another solution that will help.

The Content Disposition Filter in AEM (https://experienceleague.adobe.com/docs/experience-manager-64/administering/security/content-disposition-filter.html?lang=en) is generally used to control whether assets accessed directly in AEM are displayed in the browser or downloaded.  In this case, you could configure it to always ask users to download SVG images when linked on your site, instead of rendering them in the browser.

This will reduce the effectiveness of the malicious JavaScript in two ways.  One is that hopefully users will be more wary of downloading files than of viewing content rendered directly in the browser.  The second is that if you have properly configured CORS to not allow localhost, the downloaded malicious script will not be able to load several site resources, as it will no longer be requesting them from your domain.  This hinders attempts to dress up the SVG resource as a legitimate site page.  It’s important to note that this doesn’t cover all scenarios, but it’s an easy and quick configuration change within AEM.

AEM as a Cloud Service

In AEM as a Cloud Service (AEMaaCS), several of the potential vulnerabilities discussed in this series are no longer possible thanks to some key architecture changes.

  • AEMaaCS receives regular, automatic patches from Adobe rather than requiring a manual update process by the implementation team.
  • The /apps and /libs paths are now immutable. You must redeploy the entire instance through Cloud Manager.
    • This makes some persistent XSS techniques (e.g. writing JSP pages to the JCR) and JCR-based remote code execution no longer possible.
    • You will still need to update dispatcher rulesets within your application codebase if there are any further Sling resolution vulnerabilities found.
  • The previous OSGI web console is now inaccessible.
  • User-generated content is generally not handled through direct writes to the JCR.
    • This can lower the chance of persistent XSS because the resources are no longer directly accessible via Sling.
    • You will still need to incorporate write type, permissions, and rate limits within your services that write to a database.
  • Your application code must pass Adobe’s SonarQube ruleset before it can be deployed, which catches some vulnerabilities before they make it to QA and Prod.

So while these changes help significantly, there’s still a lot of work to do to make sure your AEM instance is secure.

3rd Party Dependencies

After 5 weeks of security posts, there’s still a key security topic I haven’t discussed, and it’s one of the easiest to miss.  It’s also arguably the most likely for malicious actors to target: 3rd party application and platform dependencies.  Malicious actors tend to focus on these libraries and frameworks for a few reasons.  One is that the code is typically open source, making it easier to search for vulnerabilities than in your black-box application.  Another is that the vulnerabilities are exploitable in multiple applications instead of just one, leading to higher payouts.

Let’s examine a well-known hack as an example of how bad this can get.  From May to July 2017, Equifax was hacked, resulting in one of the biggest data breaches in history.  Attackers stole names, addresses, Social Security numbers, and birth dates for around 143 million people.  The vulnerability?  It wasn’t in the application codebase, but in an insecure version of the Apache Struts dependency.

Code reviews, automated code scanning, and application best practices were not sufficient to catch this type of issue.  If, in addition, the Equifax team had kept up to date with the Struts security patch released months earlier on March 7th, the breach could have been avoided or greatly limited.

Thus my recommendations are as follows:

  • Choose 3rd party dependencies that have security reviews, widespread adoption, and an active development community.
  • Have a scheduled intake process in place for critical AEM and 3rd party security patches.  Make it a priority to test and install them.
  • Utilize an automated application dependency scanner like FOSSA. It’s a lot easier to run these scans once or twice a week versus manually checking all dependencies.
  • Write automated security tests (unit and functional) along with your regular test cases.

That’s all for now, thank you for joining me on this security series.  If you enjoyed this type of content and would like to see more on AEM security, I’d love to hear from you.  And for more information on how Perficient can help you achieve your AEM security goals and implement your dream digital experiences, reach out.

Contact Perficient to start your journey.

How good is your AEM Security? – Mitigation Tools (Wed, 26 Oct 2022)
https://blogs.perficient.com/2022/10/26/aem-security-mitigation-tools/

Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever.  Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/).  There are already many resources on generic mitigation for these vulnerabilities.  So instead, in this series, I will cover security issues and mitigations specific to AEM.  Here I will cover two of AEM’s exploit mitigation tools, CryptoSupport and Service Users.

Previous posts in series:

  • How good is your AEM security? – XSS (https://blogs.perficient.com/2022/10/04/how-good-is-your-aem-security-xss/)
  • How good is your AEM security? – Sling Resolution (https://blogs.perficient.com/2022/10/11/how-good-is-your-aem-security-sling-resolution/)
  • How good is your AEM Security? – Denial of Service (https://blogs.perficient.com/2022/10/19/how-good-is-your-aem-security-denial-of-service/)

AEM Mitigation Tools

Encrypted Credentials

It is recommended that all passwords, security tokens, and other service credentials be encrypted within the JCR.  That way, even if an attacker gains access to read the node where they are stored, the credentials are useless without the decryption key.  If you are storing these credentials via OSGI properties, you can use AEM’s Crypto Support page to manually encrypt any property based on a system master key.  Once encrypted, update the matching run mode OSGI config in /apps with the encrypted value.

Implementing for your QA environment would look like this:

  1. Configure all QA publishers to share the same system master key.  I recommend using a different system master key per environment type (Dev, QA, Prod).
  2. Navigate to the QA publisher OSGI console, Config Mgr, Main tab, and the Crypto Support link.  Generate the encrypted property (password/secret) via the plain text field.
    • If you do not have access to the OSGI console (AMS), you will need to create a support ticket with Adobe to give you access to the OSGI console in Prod.
    • If this is not possible, you can have the Adobe CSE generate the encrypted property for you or ask them to help copy the CryptoKey down to a lower environment.
      • However, there are some flaws with these two approaches.  Having the rep generate the encrypted property can be a security concern and may be disallowed by the client, as the implementation team is sending credentials outside of company-owned communications.  Copying the CryptoKey down can also be a concern, as sharing keys between local/dev environments and prod introduces risk because these environments are typically less secure overall.
    • On AEM as a Cloud Service, you can utilize Cloud Manager Secret Variables instead.

  3. Update the appropriate QA run mode OSGI configuration in the codebase with the encrypted value.

Once this process is complete, AEM’s OSGI Configuration Plugin will automatically decrypt any encrypted OSGI properties used within an OSGI service.  You should not have to use any manual decrypt library call within your services.  For more details on this, see the below Adobe documentation.

https://experienceleague.adobe.com/docs/experience-manager-64/administering/security/encryption-support-for-configuration-properties.html?lang=en
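
As a small illustration of the consuming side, here is a hypothetical OSGi component whose configuration would hold one of these protected values. The service name, property name, and behavior are illustrative; the run mode config stores the encrypted value, and, per the behavior described above, the component simply reads the property and never calls a decryption API itself.

package com.example.core.services;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.Designate;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;

// Hypothetical service; all names are illustrative only.
@Component(service = PaymentGatewayClient.class)
@Designate(ocd = PaymentGatewayClient.Config.class)
public class PaymentGatewayClient {

    @ObjectClassDefinition(name = "Example Payment Gateway Client")
    public @interface Config {
        @AttributeDefinition(name = "API secret",
                description = "Store the Crypto Support encrypted value in the run mode config.")
        String api_secret() default "";
    }

    private String apiSecret;

    @Activate
    protected void activate(Config config) {
        // By the time this runs, the encrypted OSGi property has already been decrypted
        // transparently, so no manual decrypt call is needed in the service code.
        this.apiSecret = config.api_secret();
    }

    public boolean isConfigured() {
        return apiSecret != null && !apiSecret.isEmpty();
    }
}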

Service User Permissions

In older AEM versions, it was common for developers to utilize administrative sessions to access data within the JCR.  Today, most AEM developers have updated these sessions to utilize service users but not all have implemented best practices around the service user’s permissions.  Many service users are assigned overly broad permissions instead of specific paths.  For instance, the platform doesn’t prevent you from implementing a service user that has the same permissions as an administrator.  If this is the case, one insecure endpoint could lead to server-side XSS, Denial of Service attacks, or a data breach via the JCR writes I discussed in previous posts.

So be very thoughtful in how you limit permissions.  Some key considerations (with a short code sketch after the list) are:

  • Restrict the types of nodes and properties a service user writes.
  • If possible, consolidate programmatically read and written content to a set number of paths. Allow the service user read access only to those paths instead of all of /apps or /content.
  • In a multi-tenant architecture, utilize multiple service users to limit attack surface vectors.
  • If the content is accessible to specific authenticated users, prefer to use their request’s resource resolver and ACLs instead of a service user.
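
Here is a minimal sketch of obtaining a narrowly scoped service resource resolver. The subservice name, path, and property are hypothetical, and the mapping of the subservice to a system user (plus that user's ACLs) is configured separately through a Service User Mapper entry.

package com.example.core.services;

import java.util.Collections;
import java.util.Map;

import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = ProductDataReader.class)
public class ProductDataReader {

    // Hypothetical subservice name; map it to a system user with read access
    // to only the paths it needs, not all of /content.
    private static final String SUBSERVICE = "product-data-reader";

    @Reference
    private ResourceResolverFactory resolverFactory;

    public String readProductTitle(String productPath) throws LoginException {
        Map<String, Object> authInfo =
                Collections.<String, Object>singletonMap(ResourceResolverFactory.SUBSERVICE, SUBSERVICE);
        // try-with-resources closes the resolver and releases the underlying session.
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(authInfo)) {
            Resource product = resolver.getResource(productPath);
            return product != null ? product.getValueMap().get("jcr:title", "") : "";
        }
    }
}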

Why Mitigate?

Ideal service, access, and storage designs incorporate a lot of considerations and stakeholders can wonder if it’s worth the effort.  After all, if every service is designed well with no vulnerabilities, why spend time hardening other parts of the system?  I find it useful to explain it with the analogy of home security.  Most people do not rely on just locked doors to protect their homes from theft.  Some utilize deadbolts, motion lights, neighborhood watch, safes, and external locations (banks) to secure their valuables.  Why?  Attacks grow more sophisticated or change approach.  Once an attacker breaks through the first layer of defense, you want several more to meet them.

Next week, I will cover the Content Disposition Filter, and mitigation tools specific to AEM as a Cloud Service.

For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.

Contact Perficient to start your journey.

How good is your AEM Security? – Denial of Service (Wed, 19 Oct 2022)
https://blogs.perficient.com/2022/10/19/how-good-is-your-aem-security-denial-of-service/

Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever.  Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/) and there are already many resources on generic mitigation for these vulnerabilities.  Instead in this series, I cover security issues and mitigations specific to AEM.  Today’s topic is Denial of Service vulnerabilities.

Previous posts in series:

  • How good is your AEM security? – XSS (https://blogs.perficient.com/2022/10/04/how-good-is-your-aem-security-xss/)
  • How good is your AEM security? – Sling Resolution (https://blogs.perficient.com/2022/10/11/how-good-is-your-aem-security-sling-resolution/)

Denial of Service Vulnerabilities

Ever visit a popular new website only to find that it has crashed or is throwing errors?  The site visitors may have unintentionally caused a denial of service issue by overloading the server processor or memory.  Malicious actors can intentionally cause the same issue by sending a high volume of requests to a server.

User-triggered creation of JCR nodes

DoS attack prevention starts at the business requirements and is fortified through technical design.  Be very careful in designing any services, even authenticated ones, that create new JCR nodes as part of their behavior.  Malicious users may be able to exploit this behavior by flooding the JCR with node writes.  Worse still, they may be able to achieve Remote Code Execution by uploading new JSP servlet files, OSGI configs, or OSGI bundles, or by planting persistent XSS.  Therefore, apps should never programmatically create new JCR nodes under the /apps path.  This is good practice for later working in AEM as a Cloud Service, where /apps is completely immutable after deployment.

If all this isn’t considered from the design stage, you may have some massive refactoring and data access rewrites later on.  Consider this example.  You have a public API within your application that writes to the JCR every time it receives a request.  A malicious user could quickly hit that endpoint 1000s of times, creating a massive write queue in your system.  So, how to fix it?  Do you remove all public access while you try to sort out the vulnerability?  That may create a frustrating experience for legitimate users.  Instead, let’s examine some practical prevention measures.

  1. Double-check your ACLs for anonymous user write access on /content/usergenerated or other paths.  Consider removing unauthenticated access to generate content.
  2. Utilize rate limiting by IP, user, and/or API key.  IP rate limits can be somewhat circumvented by malicious users behind a botnet or VPN, whereas user and API key rate limits do not cover public APIs or anonymous page loads.  So it’s ideal to implement a combination of them for full coverage (a minimal sketch of an IP-based filter follows this list).
  3. Consider implementing audit logging and rate alerts for any authenticated actions.  This will allow you to quickly identify and revoke access from any users who have turned malicious.
  4. Avoid JCR writes and large in-memory processing.  Set restrictions on the amount and type of content a user can upload.
  5. Utilize vulnerability scanning tools like SonarQube and Checkmarx.  They will typically catch vulnerabilities where the user is able to input arbitrarily sized or un-validated parameters to your application.
  6. Perform load testing for new services – a high number of calls should not significantly degrade the rest of the application.
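
To illustrate point 2 above, here is a deliberately naive sketch of an IP-based rate limit implemented as a Sling request filter. All names and thresholds are hypothetical, the counters are in-memory only, and a production setup would more likely enforce limits at the CDN, WAF, or dispatcher tier and account for forwarded client IPs.

package com.example.core.filters;

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;

// Naive per-IP rate limiter registered as a Sling request filter. Illustrative only.
@Component(service = Filter.class, property = {"sling.filter.scope=request"})
public class SimpleRateLimitFilter implements Filter {

    private static final int MAX_REQUESTS_PER_WINDOW = 100;
    private static final long WINDOW_MILLIS = 60_000L;

    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        long now = System.currentTimeMillis();
        if (now - windowStart > WINDOW_MILLIS) {
            // Reset all counters at the start of each window.
            counters.clear();
            windowStart = now;
        }
        // Behind a dispatcher or CDN you would typically read a forwarded header instead.
        String clientIp = request.getRemoteAddr();
        int count = counters.computeIfAbsent(clientIp, ip -> new AtomicInteger()).incrementAndGet();
        if (count > MAX_REQUESTS_PER_WINDOW) {
            ((HttpServletResponse) response).sendError(429, "Too many requests");
            return;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) {
        // no-op
    }

    @Override
    public void destroy() {
        // no-op
    }
}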

Heavy processing services / jobs

Many use cases exist for jobs to run on AEM environments.  These are typically started via an admin content page, API endpoint, Sling scheduler, Sling scheduled jobs, or a combination of the four.  These ideally kick off asynchronously processed tasks via Sling Jobs, an AEM workflow, or Adobe IO rather than using immediate processing power.  If using Sling Jobs and AEM Workflows, you can limit the instance to use only up to a certain number of processor cores, lessening the total potential load on the server.  The default configuration is set at half the number of available total cores, so it may be useful to tweak this number down in applications with constant asynchronous processing. Also, it’s important to note that Adobe recommends never increasing the configuration above half the number of total cores.
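
As a hedged sketch of that asynchronous pattern, the example below queues work through the Sling JobManager and processes it in a JobConsumer, so the job queue configuration, not the caller, bounds how much processing happens at once. The topic, class names, and payload are hypothetical.

// ReportJobService.java: queues the work instead of running it on the request thread
package com.example.core.jobs;

import java.util.Collections;

import org.apache.sling.event.jobs.JobManager;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = ReportJobService.class)
public class ReportJobService {

    // Hypothetical job topic for a long-running report generation task.
    public static final String TOPIC = "com/example/jobs/report";

    @Reference
    private JobManager jobManager;

    public void queueReport(String reportId) {
        // Returns immediately; the job queue handles execution and retries.
        jobManager.addJob(TOPIC, Collections.<String, Object>singletonMap("reportId", reportId));
    }
}

// ReportJobConsumer.java: processed asynchronously, bounded by the configured queue thread pool
package com.example.core.jobs;

import org.apache.sling.event.jobs.Job;
import org.apache.sling.event.jobs.consumer.JobConsumer;
import org.osgi.service.component.annotations.Component;

@Component(service = JobConsumer.class,
           property = {JobConsumer.PROPERTY_TOPICS + "=" + ReportJobService.TOPIC})
public class ReportJobConsumer implements JobConsumer {

    @Override
    public JobResult process(Job job) {
        String reportId = job.getProperty("reportId", String.class);
        // ... heavy lifting happens here, off the request thread ...
        return reportId != null ? JobResult.OK : JobResult.CANCEL;
    }
}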

In any case, I highly recommend that heavy processing be protected by authentication + ACLs and only be accessible from author environments.  The data can then flow out to a common data store or, in non-cloud environments, replicate to publishers as needed.  These days, though, the replication strategy is less common, as large I/O writes cause processor churn and a later debt in JCR compaction.

Restricting JCR writes or who has access to a service is typically a conversation that should start in early solutioning and business requirements.  Don’t wait until technical implementation to have this discussion!  Stakeholders may need to budget additional funding for a shared data store, or entirely rethink how to provide public services, leading to project delays or failures.  Build key stakeholder relationships and design against attacks early.  Remember, perfect implementation on its own cannot fix a flawed design.

For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.

Contact Perficient to start your journey.

How good is your AEM security? – Sling Resolution (Tue, 11 Oct 2022)
https://blogs.perficient.com/2022/10/11/how-good-is-your-aem-security-sling-resolution/

Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever.  Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/) and there are already many resources on generic mitigation for these vulnerabilities.  Instead in this series, I cover security issues and mitigations specific to AEM.  Today’s topic is Sling Resolution.

Previous posts in series:

  • How good is your AEM security? – XSS (https://blogs.perficient.com/2022/10/04/how-good-is-your-aem-security-xss/)

Sling Resolution Vulnerabilities

Dispatcher Allow rulesets are one of the largest risk vectors within AEM because of default Sling Resolution behavior.  Spend a significant amount of your security testing time here, because this is the most common way for malicious actors to find other vulnerabilities, compromise data within the system, and achieve remote code execution.

Remote code execution (RCE) is the highest impact concern, as an attacker may gain a broad range of further exploit options.  This can include breaking the system, compromising data, stealing user credentials, and creating phishing pages on legitimate domains.

In creating defensive measures, it’s useful to know how RCE can occur in the first place.  RCE is possible if an attacker can gain access to the OSGI console, write to /apps or (in insecure systems) /content, or reach Querybuilder, the Groovy console, ACS AEM Tools, or WebDAV.  It is best to completely disallow access to these features on Production publishers for any user.  From there, utilize Adobe’s best practices of Deny all paths and then Allow necessary paths.  Most of these potential vulnerabilities should be mitigated by the latest version of the dispatcher rules present in Adobe’s AEM Project Archetype, but it’s worth confirming in your own dispatcher as well.

The following is not a comprehensive list of vulnerabilities but it will give you an idea of what to start looking for.

  • Sometimes developers write deny rules that are too rigid, or allow rules that are not rigid enough. For example, a request to /bin/querybuilder.json/test.css may still return the AEM querybuilder page.  Instead of using:

{ /type "deny" /path "/bin/querybuilder" }

utilize wildcard (*) matching for child and extension paths, e.g.:

{ /type "deny" /path "/bin/querybuilder*" }
  • Adding selectors, URL Params, multiple slashes to the URL may change how Sling and the Dispatcher interpret the request. Many vulnerabilities rely on exposing the JCR content structure through .json rendering requests.
  • If a malicious user gains access to the content structure, they can often mine usernames, credentials stored in plain text, PII, exposed admin pages, or clues about other vulnerabilities within the system. All it may take is an attacker finding one username with a brute-forceable password to turn this into a much bigger data breach.
  • By default, numbered (-1,1,2,3…), “children”, and “infinity” selectors in a request are interpreted by Sling to display the current content path and Sling resources under it as a JSON response.  This functionality should not be available over the dispatcher.
  • If you have enabled Sling Model Exporters for your Sling Model objects, ensure no sensitive data or properties are exposed by accident in the response.

Putting into Practice

As AEM increases in market share, so too does the incentive for hackers to find exploits, so it’s very important to test and stay up to date on the latest recommended allow rules.  Test any rules that deviate from the standard ruleset very thoroughly.

Need some practical examples?  Below I’ve provided some basic URLs to test on your dispatcher.  Over the public internet these should return a 404 or redirect appropriately.  If not, that represents a vulnerability within your application.

  • /content.infinity.json
  • /.1.json
  • /content.5.json
  • /content.ext.json
  • /content.children.json/test.css
  • /etc.-1.json
  • /content.html/test.png/test.1.json
  • /bin/querybuilder.json/test.css
  • /bin/querybuilder.json?a.html
  • /bin/querybuilder.json.css
  • /bin/querybuilder.json.servlet.css
  • //bin//querybuilder.json
  • /system/console/bundles?.css

For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.

Contact Perficient to start your journey.

How good is your AEM security? – XSS (Tue, 04 Oct 2022)
https://blogs.perficient.com/2022/10/04/how-good-is-your-aem-security-xss/

Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever.  Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/) and there are already many resources on generic mitigation for these vulnerabilities.  Instead in this series, I cover security issues and mitigations specific to AEM.

XSS and AntiSamy

As a quick review, Cross Site Scripting (XSS) can occur if user-entered inputs are not filtered before being displayed back to the user.  Remember that user-entered content can include sources users do not usually edit directly, like URL parameters, local storage, and cookies.  Many applications will also need to filter user content in HTML and JS emails before sending them.

When using HTL tags in AEM, the default rendering context and all contexts other than 'unsafe' utilize a set of OWASP rules called AntiSamy.  Sometimes this ruleset filters more than expected, and some developers resort to the context='unsafe' option, which renders the raw text from the Sling Model property.

A common use case that the default AntiSamy library interferes with is telephone (tel:) hrefs in links.  For instance:

<a href="${model.url @ context='html'}">
  ${model.urlText @ context='html'}
</a>

This will not allow model.url = tel://555-555-555 to be output correctly to the page.  A developer might just resort to:

<a href="${model.url @ context='unsafe'}">
  ${model.urlText @ context='html'}
</a>

or worse:

${model.rteText @ context='unsafe'}

Solution Design

Thankfully, we have a few options other than using context='unsafe'.

  1. Design your components to use feature flags so hardcoded attributes or special characters are output separately from the user or author input.
  2. You can overlay the default ruleset located at /libs/cq/xssprotection/config.xml. Be careful not to allow any more attributes or special characters than you need, or ones commonly used by exploits to run JavaScript.
  3. Utilize a custom backend service or static utility that rewrites the entered characters to a desired substitute rather than stripping them to an empty string.

My preference is #1 so I’ll provide an example.  With these considerations, your implementation may look more like the following:

<a data-sly-test="${!model.isPhoneLink}" href="${model.url @ context='html'}">
  ${model.urlText @ context='html'}
</a>
<a data-sly-test="${model.isPhoneLink && model.isValidPhone}"
    href="tel://${model.url @ context='html'}${model.phoneExt ? ',' : ''}${properties.phoneExt @ context='html'}"
    x-cq-linkchecker="skip">
  ${model.url @ context='html'}${model.extensionText || '' @ context='html'}${properties.phoneExt @ context='html'}
</a>
<a data-sly-test="${model.isPhoneLink && !model.isValidPhone}">
  ${model.invalidPhoneMsg}
</a>
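
For completeness, here is a hypothetical sketch of a Sling Model that could back the template above. The getter names mirror the HTL (isPhoneLink, isValidPhone, and so on), only a subset of the referenced properties is shown, and the validation pattern is illustrative rather than taken from a real implementation.

package com.example.core.models;

import java.util.regex.Pattern;

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.ValueMapValue;

@Model(adaptables = SlingHttpServletRequest.class,
       defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
public class LinkModel {

    // Loose illustrative pattern: digits and dashes only.
    private static final Pattern PHONE_PATTERN = Pattern.compile("[0-9-]{7,15}");

    @ValueMapValue
    private String url;

    @ValueMapValue
    private String urlText;

    // Authorable feature toggle: the author marks the link as a phone link in the dialog.
    @ValueMapValue
    private boolean phoneLink;

    public String getUrl() {
        return url;
    }

    public String getUrlText() {
        return urlText;
    }

    public boolean isPhoneLink() {
        return phoneLink;
    }

    public boolean isValidPhone() {
        return url != null && PHONE_PATTERN.matcher(url).matches();
    }

    public String getInvalidPhoneMsg() {
        return "Please contact us for a current phone number.";
    }
}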

Do you have any context='unsafe' usages in your codebase?  It's worth checking to see whether they're exploitable.  But that's only the start of the XSS vulnerability review process.

To see the implementation in action, watch this in video format.

For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.

Contact Perficient to start your journey.

How to Make Agile Iteration Possible Within Waterfall Budgeting – Part 2 (Thu, 29 Jul 2021)
https://blogs.perficient.com/2021/07/29/how-to-make-agile-iteration-possible-within-waterfall-budgeting-part-2/

In Part 1 of How to Make Agile Iteration Possible Within Waterfall Budgeting, I covered the business desire to achieve iterative development and quick time-to-market despite the constraints of rigid waterfall budgeting and planning.  I also covered pre-project and project start steps, which I believe are critically important to achieving the desired result during implementation.  If you haven’t read Part 1, start there.  You can do everything right on the delivery side but still be very agile in the wrong direction.

Now on to delivery steps.  As a reminder, we are communicating these steps to the product owner, stakeholders, and management in the pre-project stage.  Having buy-in and time scheduled early will save a lot of heartache later on.

Delivery

  1. Development team delivers feature to QA team to test locally or on Develop server.
    • Goal: Functional and regression testing of delivery.
    • Tip: Doing QA verification immediately after development, instead of waiting, ensures the developer’s mind is fresh in familiarity with the implementation and test cases.
  2. QA verifies feature as complete and ready for QA server deployment.
    • Goal: Identify defects early.
    • Tips: This minimizes the time to correct defects, regressions on QA, and thus follow-up QA deployments. You can deploy continually or in a release at the end of a time period.
  3. Project Manager works with Business Analyst to review features completed and update overall project progress.
    • Goal: Compare progress completed vs fixed timeline and budget.
    • Tip: Project Manager should regularly review this with the Product Owner and Management.
  4. 2nd Stage Demo(s) of feature on QA server.
    • Goal: Celebrate what the team has accomplished! Demonstrate to stakeholders and Product Owner the value added by the team, update project completion percentage.
    • Tips: You may find more success (and more focused attention) by scheduling with relevant individual stakeholders rather than one comprehensive delivery demo. These can be incremental changes from the first demo.
  5. Feature stakeholders perform UAT.
    • Goal: Achieve final sign-off for this iteration.
    • Tip: There should be no scope changes during this step, as those were covered in the first demo. Assess further changes identified during UAT to see if they can fit in a future time period, or wait until a separately budgeted, phase 2 project.
  6. Deploy the feature to Prod with a Feature Toggle system.  The Product Owner can then release it whenever they want.
    • Goal: Lower risk, enable multiple iterations before release.
    • Tip: Product Owner should maintain a schedule for feature releases.
  7. Regularly scheduled delivery team retro.
    • Goal: Celebrate successes. Identify steps to improve velocity and quality.
    • Tip: Focus on what is within the control of your team and follow up with action items.

Final Recommendations

A few more thoughts for you to consider.  These recommendations do not fall in a specific timeframe, but are key to help the project moving forward.

  1. I recommend obtaining a commitment from the Product Owner and Stakeholders to check and provide updates directly in the ticket tracking system.
    • Goal: Writing up an ask once is better than someone else’s interpretation of it. It also doesn’t have to be copied over to another location.
    • Tip: This step is optional but highly recommended! It’s easy to lose track of a task as simple as copying from an email to the associated ticket.
  2. Make it a priority to minimize meetings and attendees. Many do not think about the hidden costs of meetings.
    • Could it be an email or comment on a ticket first?
    • How much development and testing halt in the time before a scheduled meeting? What about after?  How long does it take to regain focus?
    • How many people actually need to attend? If you invite someone, they will often feel obligated to attend.
  3. Consider what else you spend a lot of time doing outside of feature development. Could it be shorter and still effective?
    • I once worked with a client to reduce their production deployment time from 5-12 hours end-to-end to 4 minutes.

Now you might be thinking, “I thought this was about agile Iteration?  Why are specific recommendations on ticket tracking and meetings included?”

Becoming Agile

Companies that “do” agile implement highly structured business processes that can add robustness and flexibility but also add overhead.  Companies that are agile use transformative thinking to optimize their time to delivery.  These are suggestions to get you thinking in the right mindset and improving through practice.  For more information on how Perficient can help you achieve your agile goals and implement your dream digital experiences, we’d love to hear from you.

Contact Perficient to start your journey.

How to Make Agile Iteration Possible Within Waterfall Budgeting (Wed, 21 Jul 2021)
https://blogs.perficient.com/2021/07/21/how-to-make-fantastic-agile-iteration-possible-within-waterfall-budgeting/

Product owners love the flexibility and short lead time of being Agile. At the same time, it can be difficult for management to adopt. Without a definite understanding of the final product, it’s a struggle to estimate the total cost and rein in scope.  You could mark work as done within a sprint, but the feature itself isn’t complete until the total experience is accepted by the business.  Then iterations can turn into scope creep framed as a “feature” rather than progress, and balloon the project cost.

So, I have seen some projects turn back to Waterfall pricing as a defensive play.  The challenge is, the drawbacks of Waterfall haven’t gone away.  What do you do if you start with only a few requirements, or requirements that are still developing?  Or with nearly inevitable scope creep?  One solution to mitigate these common pitfalls is utilizing Agile practices in a hybrid approach.

Agile Approach

For shorter projects with tighter timelines, I typically use a Kanban approach.  I particularly recommend Kanban if you don’t have enough requirements solidified at the start to fill out an entire sprint.  Then once the team begins solidifying requirements for multiple features, consider whether or not it makes sense to switch to Scrum.  In either case, work with stakeholders to categorize the long waterfall project planning into key milestones or sprints of well-understood deliverables.  The work should be spaced fairly evenly throughout the project, making adjustments for team capacity in advance.  While the efficiency of the team will increase over each time period, you’ll want to leave room in the project timeline for additional feature iterations.  Product owners and key stakeholders will need to be regularly engaged for the team to progress.

Pre-Project

Whether you will succeed or fail in this endeavor often comes down to management and project stakeholder buy-in to the process during the pre-project planning phase.  Start by presenting the process detailed below end-to-end and explain why it is important. The end goal should be generating a working agreement. Also, consider including appropriate directors and other leadership as attendees to increase the likelihood of buy-in across stakeholders.

  1. Identify the features in the project plan that have the largest impact and priority, then sort by ascending effort.
    • Goal: Show value ASAP and develop a working agreement.
  2. Consolidate individual features across the scope. Obtain product owner and key stakeholder’s approval.
    • Goal: Efficiency and potential reduction in scope. Standardize look, feel, and behavior.
  3. Development team estimates the capacity for each deliverable period.
    • Goal: Ensure available working team aligns with delivery period goals.
    • Tips: Account for time off and other commitments. Plan for backfilling people.
  4. Business Analyst and Technical Lead estimate small time periods by key feature milestones to cover the entire scope of the project.
    • Goal: Create checkpoints for ensuring the project is on track and will deliver the final result as expected.
    • Tips: Remember that the efficiency of the delivery team should increase as the project continues. If the workload in your next time period is looking light, pull work forward in the timeline.

Project Start

When the project starts, your steps will continue in a seamless flow as below:

  1. Laser focus on solidifying requirements for features.
    • Goal: Prevent rework and ensure the team has a common understanding during working sessions.
    • Tips: Keep the meetings small in attendance – product owner, key stakeholders, business analyst, technical lead, and (optionally) the developer who will be working on the feature.  You will typically receive more engagement the fewer people you have involved per meeting.  Work ahead to future time periods.  An example rule of thumb for what counts as solidified: final requirement variation from now until release is <10%.
  2. For sprint 0 / 1, assign one or more features to a smaller than usual period of time.
    • Goal: Keep the business engaged, show immediate results, and show the benefit of working in an Agile fashion.
    • Tips: The technical lead can do this while the rest of the team is on-boarded and works on project setup. Use these quick feature wins as examples for how the process could operate going forward and to build trust.  This will help you in obtaining a commitment from the product owner and stakeholders to consistently attend requirements working sessions and demos.
  3. Business analyst copies feature requirements to the ticket tracking system.
    • Goal: Requirements are in a single location and are up-to-date.
    • Tips: Include use cases, error states, validation, messaging, and other common considerations. Decide on who is responsible for copying the meeting or email action items back to the ticket.  For clarifications or other ticket changes, you should include in the comments who directed the change and the date.
  4. Developers work with business analysts to split into sub-tasks and work the feature.
    • Goal: Allow developers to work in parallel with individually deliverable pieces.
    • Tips: Decouple delivery from release by building in componentized work and feature toggles. Highlight all work done by the delivery team in the demo, even if it is not visually demoable.
  5. Schedule “early-look” feature demos with key stakeholders, even before QA.
    • Goal: Ensure the team is moving in the right direction and finalize requirements before QA tests.
    • Tips: Keep these short and focused. Quite a few people are visual thinkers and have to see how a feature will work in practice before finalizing requirements.  Often you will discover additional use cases and error states during this step!  Stress to the stakeholders that this should be the final lock-in of requirements for this iteration.  Another benefit is developers will work hard to make sure their work is ready for demo, even if the Business Analyst or Technical Lead is the one giving the demo.

Fitting It Together

The individual steps have their own benefits, but it helps to look at the overall schedule.  Here is an example timeline of how this planning might look mid-project.

[Image: Agile iterations in a waterfall timeline]

As you can see, the time period capacity and subsequent velocity generally move upward as the project progresses, accounting for time off and holidays.  As the team works to minimize defects and changes in requirements, slots open up later in the project to pull work forward or to work on another iteration.  This is beyond the originally planned work, but it all still fits within the same timeline and budget!  In part two, I will cover the delivery side of the business process and some final thoughts.  Stay tuned!
