Generative AI, or as I prefer to call it, generative machine learning (ML), has taken the business world by storm. By now you’ve likely encountered an email or some form of marketing generated by one of these models, perhaps without even realizing it. They’re powerful tools, and I look forward to the improvements they can bring to people’s lives. But I also think it’s essential to understand their limitations, lest you end up with some embarrassing results.
So let’s review how some popular un-customized models handle text and image prompts and the 3 things no one is talking about when it comes to generative AI.
Though artificial intelligence is useful as a marketing term, intelligence is more the end goal for many researchers than an accurate assessment of where the technology is right now. Intelligence implies many attributes, including reflection, judgment, initiative, and wonder. That field of study is referred to as Artificial General Intelligence (AGI) and remains outside the scope of the most popular models. This distinction only becomes more evident as you feed complex scenarios and requirements into the currently available models. Remember that most of the time these tools are backed by a predictive model built from training data, rather than logically assessing a request against a series of rules the way traditional algorithms do.
If you give a model prompts or situations it has no training for, it may have trouble producing a realistic or accurate result. For example, I gave a popular generative model the following prompt: ‘Owl breathing in space.’
It’s a nice picture, but I’m afraid the owl won’t be breathing any time soon…
We as humans easily recognize a series of conceptual rules about a subject (e.g., owls need air to breathe), but this model has no such teaching yet. That’s not to say these models won’t reach a basic level of these attributes of intelligence someday, but be careful to understand this limitation in the meantime.
Fortunately, we can get to our desired result by specifying some additional instructions: “Owl breathing in space, wearing an astronaut suit.”
Deep learning models typically present information with the same projected degree of confidence, even if it is completely incorrect or contradicts something the model said previously. Remember, it’s a predictive language model, not a conscious actor or a trustworthy source.
It’s a very important distinction! The model outputs the information exactly in reverse of what it should have stated.
To help mitigate this issue, we can instruct the model to pull answers only from an approved set of answers like in the image below.
However, this approach often isn’t sufficient. We can easily end up with the model filling in some gaps using language that is outside of the strictly approved set as seen in the example below.
While the model tried to answer all the questions, it broke the rule we provided at the start of the session.
That’s where Perficient’s policies and governance frameworks come in. We can implement a procedural algorithm that sits between the raw output of the AI model and any user requests. This management layer ensures strict output compliance and rejects any attempt by the model to provide an answer outside the exact language in the approved set. It’s much more in line with what we’re looking for!
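The enforcement idea itself is simple to sketch. Below is a minimal, hypothetical version (not Perficient’s actual tooling, and all names are illustrative): a gate between the model and the user that maps the model’s raw output back onto the exact approved wording, and substitutes a safe fallback for anything outside the set.

```python
def build_answer_guard(approved_answers, fallback):
    """Return a gate function that only ever emits approved wording.

    The guard normalizes the model's raw output and maps it back to the
    exact approved text; anything outside the approved set is replaced
    with the fallback response instead of reaching the user.
    """
    # Map a normalized form of each approved answer back to its exact wording.
    canonical = {answer.strip().lower(): answer for answer in approved_answers}

    def guard(model_output: str) -> str:
        return canonical.get(model_output.strip().lower(), fallback)

    return guard
```

A production layer would also log rejected outputs for review, and might attempt fuzzy or semantic matching against the approved set before falling back.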
While partner-owned models may have a high ROI if you do not have an existing ML practice, you are limiting your business’ potential by relying on partner closed-source models. For instance, closed-source models are often built for broad use cases rather than fine-tuned to your products or services.
While options #1 and #2 in the image above are technically feasible, the much simpler afterinsert EditConfig solution exists and is in better alignment with Adobe’s best practices. Notably, option #2 is a very complex solution I would never recommend anyone implement and #3 doesn’t answer the question at all. I would not trust this model to answer questions about AEM consistently and correctly without a lot more training. There’s no easy way around this yet, so make sure you always verify what the model is telling you against a trustworthy source.
So if you’re going to have to eventually spend a lot of time training a third-party model, why not invest in your own model? You don’t have to do it alone, as Perficient has a comprehensive artificial intelligence practice that can help guide you from the ground up.
I hope these considerations were helpful and have led you to think about how to put safeguards in place when using generative AI. Until next time!
For more information on how Perficient can implement your dream digital experiences, contact us! We’d love to hear from you.
Previously I presented a common situation where an engineering team might push for Headless AEM and covered why, in my opinion, a Hybrid solution is a better approach. I discussed how Content Fragments, Experience Fragments, and Sling Model Exporters are used in combination to deliver the Headless side of the AEM experience. In case you missed part 1, you can read it here.
So how do you best combine using Content Fragments, Experience Fragments, and Sling Model Exporters with SPAs? Here are several approaches.
Sometimes only portions of the page, or specific pages, require the high interactivity or state management that comes with SPA frameworks. You can build AEM components that utilize any JS library or framework, load them only on pages where they are needed, and even use different frameworks per page. This approach requires only a light amount of custom development to integrate your SPA components with AEM.
For your engineering team, typically this is done through a small AEM wrapper component. It utilizes an HTL template and data attributes for content sourced from the Sling Model and AEM dialog authoring. As covered previously, you can then expose the Sling Model as JSON to support multichannel content.
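As a minimal sketch, with hypothetical component and model names, such a wrapper might look like the following HTL, where the SPA component mounts onto the rendered element and reads its configuration from data attributes:

```html
<!-- Hypothetical wrapper: the Sling Model supplies authored values as data attributes -->
<div class="cmp-product-finder"
     data-sly-use.model="com.example.core.models.ProductFinderModel"
     data-endpoint="${model.endpointUrl}"
     data-heading="${model.heading @ context='attribute'}">
  <!-- The client-side framework component hydrates this element on page load. -->
</div>
```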
This solution works best when you do not require SPA-controlled routing for your pages. If you still need URL-based, stateful entry points, remember that query parameters and suffixes are always available as an option.
The SPA Editor is an AEM-provided solution for React and Angular applications to integrate directly with AEM with in-context editing. It works best when you:
The content authoring for the SPA components is still stored in the JCR, but provided as a JSON representation via component mapping instead of writing AEM HTL templates. You can then expose this JSON to other channels in the same way as before. One nice thing about this solution is that Adobe has taken the time to rewrite many of the Core Components in React and Angular. This allows for direct parent/child SPA component relationships and data flows that will be very familiar to SPA developers.
Now this solution does come with some AEM limitations because the rendering is controlled via the SPA rather than HTML templates. For a full, updated list of limitations, please read the Adobe documentation. The SPA Editor is still actively supported and developed by Adobe, so hopefully they will remove these limitations over time.
Remote SPA is an AEM-provided solution for externally hosted React applications to become editable within AEM. Functionally, it operates in much the same way as SPA Editor, but the SPA server delivers the pages instead of AEM. This allows the engineering team to build the bulk of the site components outside of AEM and to scale the page traffic separately from AEM. Downsides of this approach include:
Still, this may be the best solution for adding authoring to an existing SPA application or meeting timeline requirements for a team unfamiliar with AEM.
My colleague Jeff Molsen did a recent “Pop-up Perspective with Perficient” video on this topic if you’d like to learn more in-depth or see Remote SPA in action.
If you are using a different frontend, I still recommend taking the time to learn and develop within AEM with Sling Exporters. There are several great tutorials on how to develop editable components within AEM with or without SPA editor. When faced with a new problem, explore what AEM has to offer first. Your marketing team will thank you for it.
For more information on how Perficient can implement your dream digital experiences, we’d love to hear from you. We’re certified by Adobe for our proven capabilities, and we hold an Adobe Experience Manager specialization (among others). Contact Perficient to start your journey.
It’s not uncommon, when facing a new problem, to fall back on a tried-and-true solution, only to suddenly remember why the team moved off of that solution in the first place. Recently, I’ve seen this trend with engineering teams and a desire for multichannel content.
Let’s set the stage with an example. A digital marketing team has licensed Adobe Experience Manager 6.5 with the hope of using WYSIWYG content editing to quickly produce and release content decoupled from code deployments. They also see that AEM can produce reusable multichannel content via Content Fragments. Marketers plan on using those fragments within a marketing website, a companion mobile app, and voice assistant devices. Finally, it would be great if the site had the option for highly interactive pages that don’t require a refresh. They ask the engineering team to implement the solution.
The engineering team is full of talented backend developers who have years of experience serving up web content. The engineers dig into the platform, and while some are able to learn AEM component development, it’s slow going. They soon have a few concerns:
Frustrated, the engineering team spins up a proof of concept with their previous stack. They go back to the marketing team and show how much faster they created a new page and components. Engineering pushes for a compromise of headless content on AEM and a separate SPA site with the stack they are familiar with. In my opinion, this largely defeats the purpose of licensing AEM. With this approach, the ability to use WYSIWYG editing and decoupled code/content releases is gone.
Instead, consider using Content Fragments, Experience Fragments, and Sling Model Exporters combined with SPAs for benefits of both approaches.
Content Fragments are code-free, structured text and image content you can author and expose to any channel via JSON or GraphQL.
Experience Fragments are also code-free, but present experiences with a partial or complete layout in HTML. Optionally, they include design and functionality via CSS and JavaScript. The Experience Fragments can utilize any AEM component and are intended for reusable “ready/nearly ready” experiences. With some light custom development, you can even leverage Content Fragments within AEM components and thus any Experience Fragment.
Sling Model Exporters can serialize most AEM component authoring into JSON to expose to any channel. This is the preferred option for retaining in-context editing, as most other solutions require navigation to another page to edit the experience. All Adobe Core Components have this enabled by default. This requires a small amount of development to configure for custom components but is quite powerful and quick for existing AEM implementations to enable multichannel content delivery.
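For example, once an exporter-enabled component is in place, appending the model selector and JSON extension to its path returns the authored content as JSON that any channel can consume. The path and fields below are illustrative:

```
GET /content/my-site/en/home/jcr:content/root/container/title.model.json

{
  ":type": "my-site/components/title",
  "text": "Welcome to Our Site",
  "type": "h2"
}
```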
Utilizing these solutions together over headless alone has several benefits. Having any one of these fragments together with at least one of the experiences within the AEM author environment means you can natively preview changes in a channel before they go live. You can also build out much of the base site structure with Core Components and fragment-based authoring without any development. Then once your team is more comfortable with AEM development, utilize Sling Model Exporter for multichannel delivery of custom experiences. Most critically, you retain WYSIWYG editing and decoupled code/content releases.
You might notice there’s a key element left unanswered here – how best to do that in combination with SPA? Thankfully there are several options, which I cover in Part 2.
For more information on how Perficient can implement your dream digital experiences, we’d love to hear from you. We’re certified by Adobe for our proven capabilities, and we hold an Adobe Experience Manager specialization (among others). Contact Perficient to start your journey.
Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever. Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/). There are already many resources on generic mitigation for these vulnerabilities. So instead, in this series, I cover security issues and mitigations specific to AEM. In this final post, I will cover the Content Disposition Filter, changes in AEM as a Cloud Service, and 3rd party dependencies.
Previous posts in series:
If you allow users to upload SVG images to your site, you might be creating a vulnerability. Viewing a user-uploaded SVG image directly via URL is subject to an XSS attack, since these images can also contain malicious JavaScript. Typically this JavaScript is constructed by a bad actor either to send user cookies and storage data to a remote server or to create a legitimate-looking page for phishing purposes. Malicious users could then link this file, stored on and served from your domain, to unsuspecting victims. From a security perspective, it’s preferable to disallow SVG uploads entirely, or at least programmatically remove all JavaScript from SVG files before saving them server-side. If that is not possible within your business requirements, or is too large a scope, there’s another solution that will help.
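If server-side cleanup is on the table, here is a minimal sketch of what stripping the executable parts of an SVG might look like, using Python’s standard XML parser for illustration. Treat it as a starting point only: a production system should use a maintained, allowlist-based sanitizer, since SVG has more script-bearing vectors (foreignObject, CSS, external references) than this covers.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"


def sanitize_svg(svg_text: str) -> str:
    """Strip <script> elements, on* event attributes, and javascript: hrefs."""
    ET.register_namespace("", SVG_NS)  # keep the default namespace on output
    root = ET.fromstring(svg_text)

    def clean(elem):
        for child in list(elem):
            tag = child.tag.rsplit("}", 1)[-1]  # drop the namespace prefix
            if tag.lower() == "script":
                elem.remove(child)  # drop embedded scripts entirely
            else:
                clean(child)
        for attr in list(elem.attrib):
            name = attr.rsplit("}", 1)[-1].lower()
            value = elem.attrib[attr].strip().lower()
            if name.startswith("on") or value.startswith("javascript:"):
                del elem.attrib[attr]  # drop event handlers and js: links

    clean(root)
    return ET.tostring(root, encoding="unicode")
```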
The Content Disposition Filter in AEM https://experienceleague.adobe.com/docs/experience-manager-64/administering/security/content-disposition-filter.html?lang=en is generally used to control whether assets accessed directly in AEM are displayed in the browser or downloaded. In this case, you could configure it to always ask users to download SVG images when linked on your site, instead of rendering in the browser.
This will reduce the effectiveness of the malicious JavaScript in two ways. One is that hopefully users will be more wary of downloading files than just viewing content rendered directly in the browser. The second is that if you have properly configured CORS to not allow localhost, the downloaded malicious script will not be able to load several site resources as it will no longer be requesting them from your domain. This hinders the malicious attempt of dressing up the SVG resource as a legitimate site page. It’s important to note, this doesn’t cover all scenarios, but it’s an easy and quick configuration change within AEM.
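As an illustrative sketch, the OSGi configuration for this lives under the Apache Sling ContentDispositionFilter PID (for example in a file like `org.apache.sling.security.impl.ContentDispositionFilter.cfg.json`). The entry below assumes the documented `path:mime-type` format and would force SVG assets under the DAM to be downloaded rather than rendered; verify the property names and format against your AEM version before relying on it:

```json
{
  "sling.content.disposition.paths": ["/content/dam:image/svg+xml"]
}
```

Each entry pairs a content path with the mime types it applies to, so only SVGs under /content/dam are affected here.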
In AEM as a Cloud Service (AEMaaCS), several of the potential vulnerabilities discussed in this series are no longer possible thanks to some key architecture changes.
So while these changes help significantly, there’s still a lot of work to do to make sure your AEM instance is secure.
After 5 weeks of security posts, there’s still a key security topic I haven’t discussed, and it’s one of the easiest to miss. It’s also arguably the most likely for malicious actors to target. That is 3rd party application and platform dependencies. Malicious actors tend to focus on these libraries and frameworks for a few reasons. One is that the code is typically open source, which is easier to search for vulnerabilities than your black-box application. Another is the vulnerabilities are exploitable in multiple applications instead of just one, leading to higher payouts.
Let’s examine a well-known hack as an example of how bad this can get. In May-July 2017, Equifax was hacked, resulting in one of the biggest data breaches in history. Attackers stole names, addresses, Social Security numbers, and birth dates for around 143 million people. The vulnerability? It wasn’t in the application codebase, but in an insecure version of the Apache Struts dependency.
Code reviews, automated code scanning, and application best practices were not sufficient to catch this type of issue. If, in addition, the Equifax team had kept up to date with the Struts security patch released months earlier on March 7th, the breach could have been avoided or greatly limited.
Thus my recommendations are as follows:
That’s all for now, thank you for joining me on this security series. If you enjoyed this type of content and would like to see more on AEM security, I’d love to hear from you. For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences,
Contact Perficient to start your journey.
Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever. Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/). There are already many resources on generic mitigation for these vulnerabilities. So instead, in this series, I will cover security issues and mitigations specific to AEM. Here I will cover two of AEM’s exploit mitigation tools, CryptoSupport and Service Users.
Previous posts in series:
It is recommended that all passwords, security tokens, and other service credentials be encrypted within the JCR. That way, even if an attacker gains read access to the node where they are stored, the credentials are useless without the decryption key. If you are storing these credentials via OSGI properties, you can use AEM’s Crypto Support page to manually encrypt any property based on a system master key. Once it is encrypted, update the matching run mode OSGI config in /apps with the encrypted value.
Implementing for your QA environment would look like this:
1. Open AEM’s Crypto Support console on the QA environment (the master key differs per instance by default, so encrypt on the environment that will decrypt the value).
2. Enter the plaintext credential and copy the encrypted output.
3. Update the appropriate QA runmode OSGI configuration in the codebase with the encrypted value.
Once this process is complete, AEM’s OSGI Configuration Plugin will automatically decrypt any encrypted OSGI properties used within an OSGI service. You should not have to use any manual decrypt library call within your services. For more details on this, see the below Adobe documentation.
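For illustration, with a hypothetical service config, the updated QA runmode file might look like the following (the property name and ciphertext are placeholders). Keep the curly braces from the Crypto Support output; they are how AEM recognizes the value as encrypted:

```json
{
  "api.key": "{5bx...encrypted-value...}"
}
```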
In older AEM versions, it was common for developers to utilize administrative sessions to access data within the JCR. Today, most AEM developers have updated these sessions to use service users, but not all have implemented best practices around the service user’s permissions. Many service users are assigned overly broad permissions instead of specific paths. For instance, the platform doesn’t prevent you from implementing a service user that has the same permissions as an administrator. If this is the case, one insecure endpoint could lead to server-side XSS, Denial of Service attacks, or a data breach via the JCR writes I discussed in previous posts.
So be very thoughtful in how you are limiting permissions. Some key considerations are:
Ideal service, access, and storage designs incorporate a lot of considerations, and stakeholders may wonder if it’s worth the effort. After all, if every service is designed well with no vulnerabilities, why spend time hardening other parts of the system? I find it useful to explain it with the analogy of home security. Most people do not rely on just locked doors to protect their homes from theft. Some utilize deadbolts, motion lights, neighborhood watch, safes, and external locations (banks) to secure their valuables. Why? Attacks grow more sophisticated or change approach. Once an attacker breaks through the first layer of defense, you want several more to meet them.
Next week, I will cover the Content Disposition Filter, and mitigation tools specific to AEM as a Cloud Service.
For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.
Contact Perficient to start your journey.
Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever. Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/) and there are already many resources on generic mitigation for these vulnerabilities. Instead, in this series, I cover security issues and mitigations specific to AEM. Today’s topic is Denial of Service vulnerabilities.
Previous posts in series:
Ever visit a popular new website only to find that it has crashed or is throwing errors? The site visitors may have unintentionally caused a denial of service issue by overloading the server processor or memory. Malicious actors can intentionally cause the same issue by sending a high volume of requests to a server.
DoS attack prevention starts at the business requirements and is fortified through technical design. Be very careful in designing any services, even authenticated ones, that create new JCR nodes as part of their behavior. Malicious users may be able to exploit this behavior by flooding the JCR with node writes. Worse still, they may be able to achieve Remote Code Execution by uploading new JSP servlet files, OSGI configs, or OSGI bundles, or via persistent XSS. Therefore, apps should never programmatically create new JCR nodes under the /apps path. This is also good practice for later working in AEM as a Cloud Service, where /apps is completely immutable after deployment.
If all this isn’t considered from the design stage, you may have some massive refactoring and data access rewrites later on. Consider this example. You have a public API within your application that writes to the JCR every time it receives a request. A malicious user could quickly hit that endpoint 1000s of times, creating a massive write queue in your system. So, how to fix it? Do you remove all public access while you try to sort out the vulnerability? That may create a frustrating experience for legitimate users. Instead, let’s examine some practical prevention measures.
Many use cases exist for jobs to run on AEM environments. These are typically started via an admin content page, API endpoint, Sling scheduler, Sling scheduled jobs, or a combination of these. Ideally, these kick off asynchronously processed tasks via Sling Jobs, an AEM workflow, or Adobe I/O rather than consuming immediate processing power. If using Sling Jobs and AEM Workflows, you can limit the instance to use only up to a certain number of processor cores, lessening the total potential load on the server. The default configuration is set at half the number of available cores, so it may be useful to tweak this number down in applications with constant asynchronous processing. Also, note that Adobe recommends never increasing the configuration above half the total cores.
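As a sketch, a cap like this can be applied per job queue through a QueueConfiguration factory config (the PID and property names come from Apache Sling’s job handling module; verify them, and the exact interpretation of `queue.maxparallel` on your Sling version, against the Sling documentation). A hypothetical config for a custom import queue might look like:

```json
{
  "queue.name": "example-import-queue",
  "queue.topics": ["com/example/jobs/import"],
  "queue.type": "UNORDERED",
  "queue.maxparallel": 2
}
```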
In any case, I highly recommend that heavy processing be protected by authentication and ACLs, and only be accessible from author environments. The data can then flow out to a common data store or, in non-cloud environments, replicate to publishers as needed. These days, though, the replication strategy is less common, as large I/O writes cause processor churn and a later debt in JCR compaction.
Restricting JCR writes or who has access to a service is typically a conversation that should start in early solutioning and business requirements. Don’t wait until technical implementation to have this discussion! Stakeholders may need to budget additional funding for a shared data store, or entirely rethink how to provide public services, leading to project delays or failures. Build key stakeholder relationships and design against attacks early. Remember, perfect implementation on its own cannot fix a flawed design.
For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.
Contact Perficient to start your journey.
Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever. Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/) and there are already many resources on generic mitigation for these vulnerabilities. Instead, in this series, I cover security issues and mitigations specific to AEM. Today’s topic is Sling Resolution.
Previous posts in series:
Dispatcher Allow rulesets are one of the largest risk vectors within AEM because of default Sling Resolution behavior. Spend a significant amount of your security testing time here, because this is the most common way for malicious actors to find other vulnerabilities, compromise data within the system, and achieve remote code execution.
Remote code execution (RCE) is the highest impact concern, as an attacker may gain a broad range of further exploit options. This can include breaking the system, compromising data, stealing user credentials, and creating phishing pages on legitimate domains.
In creating defensive measures, it’s useful to know how RCE can occur in the first place. RCE is possible if an attacker can gain access to the OSGI console, write to /apps or (in insecure systems) /content, or reach Querybuilder, the Groovy console, ACS AEM Tools, or WebDAV. It is best to completely disallow access to these features on production publishers for any user. From there, utilize Adobe’s best practice of denying all paths and then allowing only the necessary ones. Most of these potential vulnerabilities should be mitigated by the latest version of the dispatcher rules present in Adobe’s AEM Project Archetype, but it’s worth confirming in your own dispatcher as well.
The following is not a comprehensive list of vulnerabilities, but it will give you an idea of what to start looking for.
{ /type "deny" /path "/bin/querybuilder" }
Utilize wildcard (*) matching for child and extension paths. E.g.
{ /type "deny" /path "/bin/querybuilder*" }
As AEM increases in market share, so too does the incentive for hackers to find exploits, so it’s very important to test and stay up to date on the latest recommended allow rules. Test any rules that deviate from the standard ruleset very thoroughly.
Need some practical examples? Below I’ve provided some basic URLs to test on your dispatcher. Over the public internet these should return a 404 or redirect appropriately. If not, that represents a vulnerability within your application.
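As a starting point, here is a small, hypothetical audit script that probes a handful of commonly checked sensitive paths and flags any that the dispatcher actually serves. The path list is illustrative only, not an exhaustive ruleset:

```python
import urllib.error
import urllib.request

# Illustrative sample of sensitive AEM paths to verify against your dispatcher.
SENSITIVE_PATHS = [
    "/system/console",
    "/crx/de",
    "/bin/querybuilder.json",
    "/etc.json",
]


def is_exposed(status_code: int) -> bool:
    """A path is exposed if the dispatcher serves it (2xx) instead of
    blocking it (e.g. 404) or redirecting it away (3xx)."""
    return 200 <= status_code < 300


def audit(base_url: str) -> None:
    """Request each sensitive path and report whether it appears exposed."""
    for path in SENSITIVE_PATHS:
        try:
            with urllib.request.urlopen(base_url + path) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code  # 4xx/5xx still tells us the path is blocked
        print(f"{path}: {'EXPOSED' if is_exposed(status) else 'ok'} ({status})")
```

Run `audit("https://www.your-site.example")` from outside your network; anything reported as EXPOSED deserves a dispatcher rule review.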
For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.
Contact Perficient to start your journey.
Large scale data breaches and critical security vulnerabilities have companies thinking about security more than ever. Many developers are familiar with the OWASP top 10 (https://owasp.org/www-project-top-ten/) and there are already many resources on generic mitigation for these vulnerabilities. Instead, in this series, I cover security issues and mitigations specific to AEM.
As a quick review, Cross Site Scripting (XSS) can occur if user-entered inputs are not filtered before being displayed back to the user. Remember that user-entered content can include sources users do not usually edit directly, like URL parameters, local storage, and cookies. Many applications will also need to filter user content out of HTML and JS emails before sending.
When using HTL expressions in AEM, the default rendering context, and all contexts other than 'unsafe', apply a set of OWASP rules called AntiSamy. Sometimes this ruleset may filter more than expected, and some developers resort to using the context='unsafe' attribute, which renders the raw text from the Sling Model property.
A common use case that the default AntiSamy library interferes with is telephone (tel:) hrefs in links. For instance:
<a href="${model.url @ context='html'}">${model.urlText @ context='html'}</a>
Will not allow model.url = tel://555-555-555 to be output correctly to the page. A developer might just resort to:
<a href="${model.url @ context='unsafe'}">${model.urlText @ context='html'}</a>
or worse:
${model.rteText @ context='unsafe'}
Thankfully, we have a few options other than using context='unsafe'.
My preference is #1 so I’ll provide an example. With these considerations, your implementation may look more like the following:
<a data-sly-test="${!model.isPhoneLink}" href="${model.url @ context='html'}">
  ${model.urlText @ context='html'}
</a>
<a data-sly-test="${model.isPhoneLink && model.isValidPhone}"
   href="tel://${model.url @ context='html'}${model.phoneExt ? ',' : ''}${properties.phoneExt @ context='html'}"
   x-cq-linkchecker="skip">
  ${model.url @ context='html'}${model.extentionText || '' @ context='html'}${properties.phoneExt @ context='html'}
</a>
<a data-sly-test="${model.isPhoneLink && !model.isValidPhone}">
  ${model.invalidPhoneMsg}
</a>
Do you have any context='unsafe' usages in your codebase? It’s worth checking whether they’re exploitable. But that’s only the start of the XSS vulnerability review process.
To see the implementation in action, watch this in video format.
For more information on how Perficient can help you achieve your AEM Security goals and implement your dream digital experiences, we’d love to hear from you.
Contact Perficient to start your journey.
In Part 1 of How to Make Agile Iteration Possible Within Waterfall Budgeting, I covered the business desire to achieve iterative development and quick time-to-market, with the reservations of rigid waterfall budgeting and planning. I also covered pre-project and project start steps, which I believe are critically important to achieving the desired result during implementation. If you haven’t read Part 1, start there. You can do everything right on the delivery side but still be very agile in the wrong direction.
Now on to delivery steps. As a reminder, we are communicating these steps to the product owner, stakeholders, and management in the pre-project stage. Having buy-in and time scheduled early will save a lot of heartache later on.
A few more thoughts for you to consider. These recommendations do not fall in a specific timeframe, but are key to keeping the project moving forward.
Now you might be thinking, “I thought this was about agile iteration? Why are specific recommendations on ticket tracking and meetings included?”
Companies that “do” agile implement highly structured business processes that can add robustness and flexibility, but also overhead. Companies that are agile use transformative thinking to optimize their time to delivery. These are suggestions to get you thinking in the right mindset and improving through practice. For more information on how Perficient can help you achieve your agile goals and implement your dream digital experiences, we’d love to hear from you.
Contact Perficient to start your journey.
Product owners love the flexibility and short lead time of being Agile. At the same time, it can be difficult for management to adopt. Without a definite understanding of the final product, there’s a struggle to estimate the total cost and rein in scope. You could mark work as done within a sprint, but the feature itself isn’t complete until the total experience is accepted by the business. Iterations can then turn into scope creep disguised as a “feature” rather than progress, and balloon project cost.
So, I have seen some projects turn back to Waterfall pricing as a defensive play. The challenge is, the drawbacks of Waterfall haven’t gone away. What do you do if you start with only a few, or still-developing, requirements? Or with nearly inevitable scope creep? One solution that mitigates these common pitfalls is utilizing Agile practices in a hybrid approach.
For shorter projects with tighter timelines, I typically use a Kanban approach. I particularly recommend Kanban if you don’t have enough requirements solidified at the start to fill out an entire sprint. Then once the team begins solidifying requirements for multiple features, consider whether or not it makes sense to switch to Scrum. In either case, work with stakeholders to categorize the long waterfall project planning into key milestones or sprints of well-understood deliverables. The work should be spaced fairly evenly throughout the project, making adjustments for team capacity in advance. While the efficiency of the team will increase over each time period, you’ll want to leave room in the project timeline for additional feature iterations. Product owners and key stakeholders will need to be regularly engaged for the team to progress.
Whether you will succeed or fail in this endeavor often comes down to management and project stakeholder buy-in to the process during the pre-project planning phase. Start by presenting the process detailed below end-to-end and explain why it is important. The end goal should be generating a working agreement. Also, consider including appropriate directors and other leadership as attendees to increase the likelihood of buy-in across stakeholders.
When the project starts, your steps will continue in a seamless flow as below:
The individual steps have their own benefits, but it helps to look at the overall schedule. Here is an example timeline of how this planning might look mid-project.
As you can see, the time period capacity and subsequent velocity generally move upward as the project progresses, accounting for time off and holidays. As the team works to minimize defects and changes in requirements, slots open up later in the project to pull work forward or to work on another iteration. This is beyond the originally planned work, yet it all still fits within the same timeline and budget! In part two, I will cover the delivery side of the business process and some final thoughts. Stay tuned!