A Day in the Life of a Teams Administrator: Part 2

Welcome back! Last time we took a look at settings from within the Teams Admin Center. This included many of the Teams policies and settings that you may be asked to configure as a Teams administrator. In this blog, we'll be returning from our lunch break (read the last article, it'll make more sense) and starting to look into streamlining our administration capabilities by utilizing Microsoft Graph.

Streamlining Administration in Teams

You're back from lunch, and you're ready to take on whatever the rest of the day throws at you! Since you are constantly having to create teams for your organization, you're looking for a quick, consistent way to standardize on a team design. Luckily for you, we're able to streamline this team creation process by using Microsoft Graph. Let's go ahead and navigate to the Microsoft Graph Explorer. Once there, we'll find the authentication section on the left-hand side (sign in if you aren't already signed in), then find the cogwheel (Settings) icon and select "Select Permissions".

[Screenshot: Graph Explorer]

The permissions pane should open on the right-hand side of your screen. To ensure we have the proper permissions we’ll need the following:

  • Group.Read.All
  • Group.ReadWrite.All
  • User.Read.All
  • User.ReadWrite.All

Once you've found and selected each of these permissions, you will be prompted to consent; just select Accept.

[Screenshot: permission consent prompt]

To confirm that consent was applied successfully, search for these permissions again and you should see their status shown as "Consented".

[Screenshot: permissions showing "Consented" status]

Great, now that we have the proper permissions in place, we can proceed with creating a team using a built-in Teams template! What are Teams templates, you may ask? Teams templates are pre-built definitions of a team’s structure designed around a business need or project. You can use these templates to quickly create a team, channels, and even preinstall apps to pull in content. These templates will give you an easily defined consistent structure across your entire organization. In this scenario, our manager has requested that we create a team for our Contoso Management Team, so let’s get right to it!

[Screenshot: Graph Explorer request configured as a POST]

  • In the request method dropdown, select POST.
  • In the Query box, clear the existing URL and then enter https://graph.microsoft.com/beta/teams.
  • Under the Query box, select Request Headers.
  • Verify that the Key box contains Content-type and the Value box contains application/json. If they are not present, add them.
  • Under the Query box, select Request Body.
  • In the Request Body box, enter the following:

{
  "template@odata.bind": "https://graph.microsoft.com/beta/teamsTemplates('retailManagerCollaboration')",
  "displayName": "Contoso Management Team",
  "description": "This private team will be used by the managers of Contoso"
}

  • Select Run Query.
    • Wait for the query to complete and then verify the banner displays Success. If the query fails, run the query again; if it fails again, verify that the Request Body content is correct.
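If you'd rather script this step than click through Graph Explorer, the same request can be sent from a short Python sketch like the one below. This is a minimal illustration rather than the only way to do it: it assumes an Azure AD app registration of your own (the tenant and client IDs shown are placeholders) that has been granted the Group.ReadWrite.All delegated permission, and it uses the msal and requests libraries.

# Minimal sketch: create a team from a built-in template via Microsoft Graph.
# TENANT_ID and CLIENT_ID are placeholders for your own app registration.
import msal
import requests

TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-app-client-id>"

app = msal.PublicClientApplication(
    CLIENT_ID, authority=f"https://login.microsoftonline.com/{TENANT_ID}"
)
# Interactive sign-in as an admin who holds (or can consent to) the Graph scope.
token = app.acquire_token_interactive(scopes=["Group.ReadWrite.All"])

payload = {
    "template@odata.bind": "https://graph.microsoft.com/beta/teamsTemplates('retailManagerCollaboration')",
    "displayName": "Contoso Management Team",
    "description": "This private team will be used by the managers of Contoso",
}

response = requests.post(
    "https://graph.microsoft.com/beta/teams",
    headers={
        "Authorization": f"Bearer {token['access_token']}",
        "Content-Type": "application/json",
    },
    json=payload,
)
# Team creation is asynchronous: Graph returns 202 Accepted plus a Location
# header that can be polled to check provisioning status.
print(response.status_code, response.headers.get("Location"))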

Now let’s navigate to our Teams client (teams.microsoft.com) and check it out there.

[Screenshot: the new Contoso Management Team and its channels in the Teams client]

Success! We see our newly created “Contoso Management Team” team and channels! However, this is just a small taste of what is possible for streamlining team creation in Microsoft Graph. With Microsoft Graph we can use any template and then customize the base template by adding in array-valued items to shape the team accordingly. You can get a more in-depth look at the team creation process using Microsoft Graph here.
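As a rough illustration of that customization (not an exhaustive list of supported properties), the payload below layers a couple of array-valued channel definitions and a member-setting override on top of the same base template. The channel names are invented for this example, and because the teams endpoint is still in beta, the current Microsoft Graph documentation should be checked before relying on specific fields.

# Hypothetical customization of the base template (channel names are made up).
payload = {
    "template@odata.bind": "https://graph.microsoft.com/beta/teamsTemplates('retailManagerCollaboration')",
    "displayName": "Contoso Management Team",
    "description": "This private team will be used by the managers of Contoso",
    # Array-valued items layered on top of what the template already provides:
    "channels": [
        {"displayName": "Store Openings", "isFavoriteByDefault": True},
        {"displayName": "Inventory Planning"},
    ],
    "memberSettings": {
        "allowCreateUpdateChannels": False,
    },
}

Posting this body to https://graph.microsoft.com/beta/teams works the same way as the simpler request shown above.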

We're getting close to wrapping up our day, but we have just a few more tasks that our manager wanted us to handle. We'll wrap things up in the next blog by covering the licensing aspect of Microsoft Teams. After all, everything we've covered so far won't be of much use if the users aren't even licensed for Teams! I hope you've found this blog helpful, and I hope you'll join us for part 3 of our "Day in the life of a Teams administrator"!

Resolving Sitecore SXA 9.3 Core Library JavaScript Security Vulnerabilities

Site themes for a Sitecore SXA site determine the look, feel, and interactivity of the user interface. Base themes, included by default in the Media Library, are intended to be leveraged as dependencies for one or more site themes.

Base Themes are built on top of a set of core, third-party CSS and JavaScript libraries such as jQuery, jQuery UI, lo-dash, mediaelement, modernizr, etc. For more on SXA site and base themes, see the official SXA documentation.

Keeping Third-Party Dependencies Up to Date

It is important to keep third-party libraries up to date when new versions are released. Doing so allows you to leverage new features and performance improvements and avoid bugs from previously released versions.

Even if your site is working just fine, it is important to keep these third-party dependencies up to date to avoid any security vulnerabilities. Unethical hackers are increasingly targeting these vulnerabilities to gain access to sensitive data.

At the time of authoring this article, there are two security vulnerability issues related to outdated JavaScript libraries in Core Libraries: the bundled jQuery (3.3.1) and Lodash (4.17.11) versions both have known vulnerabilities.

These security vulnerabilities were discovered when auditing best practices with Google Lighthouse. Fortunately, resolving these issues is fairly simple and straightforward. Let’s take a look at resolving each issue.

[Screenshot: Google Lighthouse dependency security vulnerability warnings]

How to Update Core Library Script Versions

Step 1. Updating jQuery

Navigate to /sitecore/media library/Base Themes/Core Libraries/scripts/xaquery, then scroll down to the Media section and download the file.

Open the file in a text editor and you will see a minified version of jQuery@3.3.1 on lines 3 – 4. You will also see that this file contains a few XA-related variables and a minified version of jQuery UI (currently the latest version).

[Screenshot: contents of the xaquery file]

Head over to jQuery’s downloads page, and get the latest, compressed version of jQuery (version 3.5.1 at the time of writing).

If you view it in a browser and copy the entire file (two lines), you can paste it replacing lines 3 – 4.

Save the file and re-attach it to the item in Sitecore: /sitecore/media library/Base Themes/Core Libraries/scripts/xaquery.

You can now move on to step 2.

Step 2. Updating Lo-Dash

This process will be very similar to step 1. Navigate to /sitecore/media library/Base Themes/Core Libraries/scripts/lo-dash, then scroll down to the Media section and download the file. Open it in a text editor and you will see a minified version of Lodash@4.17.11 (you will have to Ctrl + F and search for VERSION to confirm this).

[Screenshot: Lodash VERSION 4.17.11 in the file]

Head over to Lodash, and click the link for the full build gzipped (version 4.17.15 at the time of writing).

Copy and paste, replacing the older version. Be careful not to delete the additional variable added around line 138:

[Screenshot: the additional variables that must not be deleted]

Save the file and re-attach it to the item in Sitecore: /sitecore/media library/Base Themes/Core Libraries/scripts/lo-dash.

With both xaquery.js and lodash.js updated, simply publish the changes and you are done.
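If you want a quick sanity check that the edited files embed the expected library versions before re-attaching and publishing them, something like the short Python sketch below can be run against the two downloaded files. The local file names are assumptions based on the steps above, and the regular expressions simply look for the version banner that minified jQuery ships with and the public VERSION string that Lodash exposes.

import re

def embedded_versions(xaquery_path, lodash_path):
    """Report the jQuery and Lodash versions embedded in the downloaded files."""
    with open(xaquery_path, encoding="utf-8") as f:
        xaquery = f.read()
    with open(lodash_path, encoding="utf-8") as f:
        lodash = f.read()

    # Minified jQuery begins with a banner comment such as "/*! jQuery v3.5.1 ...".
    jquery_match = re.search(r"jQuery v([\d.]+)", xaquery)
    # Lodash assigns its version to a public VERSION property, e.g. VERSION="4.17.15".
    lodash_match = re.search(r'VERSION\s*=\s*"([\d.]+)"', lodash)

    return (
        jquery_match.group(1) if jquery_match else "not found",
        lodash_match.group(1) if lodash_match else "not found",
    )

# Example usage with assumed local file names:
print(embedded_versions("xaquery.js", "lo-dash.js"))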

Conclusion

Working with third-party libraries helps to springboard development. The one trade-off to using them is making sure to keep them up to date.

[Webinar Recording] Preparing for Your Oracle, Medidata, and Veeva CTMS Migration Project

I recently delivered a webinar in which I discussed the CTMS migration approaches taken across several case studies. You'll come away with an understanding of:
  • Pros and cons of each CTMS migration method
  • Types of migration tools, including APIs, ETL tools, and adapters
  • Approximate timelines and costs associated with each migration method

The topics discussed can be applied to any CTMS migration project, whether you're moving to or from Oracle's Siebel CTMS, Medidata's Rave CTMS, or Veeva's Vault CTMS.

If your organization is considering migrating to a new CTMS or has any other needs related to CTMS, feel free to reach out to me.

Video Transcript

Welcome everyone to today's webinar on Preparing for your Oracle, Medidata, and Veeva CTMS Migration project. I know from working with and speaking to a lot of you in the past several weeks that everyone's schedules these days are even more packed than usual, and I truly thank you for taking part of your valuable day to join us for today's webinar; I hope you find it beneficial. In today's title, we mentioned a few specific names of vendors, specifically those that are leading the industry with CTMS solutions, and today's webinar will be focused on various approaches and considerations of data migration that apply to each of them, as well as considerations that would apply for any CTMS solution, off the shelf or custom home-grown as well. So, hopefully this information can be universally valuable.

Let me start out by introducing myself. My name is Param Singh and I am the Director of the Clinical Operations Solutions practice in the Perficient Life Science Business Unit. I have been working in the Life Sciences industry for over 20 years, and have almost exclusively been working in the areas of Clinical Operations to implement systems, solutions, and industry best practices to help our clients achieve their vision for Clinical Operations, all while staying in line with industry and regulatory standards and guidelines. My team has led and been a part of dozens of different implementations of Clinical Trial Management Systems. These vary from implementations for pharma companies, CROs, and medical device companies, and range anywhere from 30 users to global implementations of over 4,500 users. And each type and size of organization has its own requirements, approaches, and challenges when it comes to data migration of their operational data, and I am happy to be able to discuss some of these considerations with you on today's webinar.

As I mentioned, our team has experience with leading and working on dozens of CTMS projects. We have a long-standing relationship and partnership with Oracle and have been a key partner in delivering system implementations of Oracle CTMS to our clients. We have an established partnership with Medidata and have implemented various projects including a number of integrations with Medidata for our customers and are currently engaged in doing another CTMS data migration. Similarly, we have a partnership with Veeva as well and have implemented various projects including a number of integrations with Veeva for our life sciences clients. Being a third-party consultancy, we have a broad range of skills and experience that span across these specific leading CTMS vendors as well as other vendors focused on the life science industry, as well as strategic partnerships with technology and platform vendors such as AWS, Microsoft, Adobe, and others. Perficient has developed specific expertise in each of these areas to help our clients achieve success effectively and efficiently with the right resources on the projects.

Now that I’ve introduced myself, I’d like to take just a minute to talk about my team here at Perficient and to explain a little more about what we do.

We provide a variety of services and products related to clinical operations: we lead and manage validated implementations of CTMS solutions, whether those are implementations of off the shelf applications from leading third party vendors, or custom built digital solutions related to clinical operations. Also if you have seen some of our previous webinars, you know that we do extensive work with integrations between CTMS solutions and other clinical and non-clinical related systems. Our approach to system implementations is very process focused and holistic in nature, and we also provide our process consulting services to help organizations define and harmonize their SOPs and business process across their organization with respect to clinical trial management. And we offer comprehensive training services and products related to CTMS and process training course and materials as well.

That is just a glimpse of some of the services and products we offer specific to clinical operations, and for more information on what we offer with respect to clinical safety and pharmacovigilance, clinical data management, clinical data warehouse and data solutions, or life sciences products and services in general, please feel free to contact me for some focused discussions.

Here is today’s agenda.

Following the introductions, we’re going to get right down to business and discuss strategies for determining the answer to each of these questions. Should we migrate? We will cover some basic considerations on this question and what the purpose of data migration could and should be. What are we looking to achieve and what benefits are we looking to take advantage of when migrating and are these relevant for our specific project. If so and we do look to migrate data, what exactly should we migrate? What data is being collected today in the current system and which subset do we decide to migrate to the new system? What considerations do we need to take into account in determining what the scope of our migration should be? Next, how should we do so? What tools are available to us for migration, what limitations do we have with the inherent tools and capabilities of each system? And when do we move forward with each set of data for migration? There are some timing considerations that need to be weighed and a lot of these are not specific to a technical limitation necessarily, but rather other process and resource related constraints.

By the end of today’s webinar, you will have a framework for performing this sort of analysis for your own organization.

Let’s get started.

So, the first question we need to address and consider is should we even migrate? We need to look at the legacy data that we have and determine what are the benefits of bringing all of this data into our new system? This may include current active studies as well as historical data in our legacy systems.

For historical data, one benefit is that by having this data in one central system, we can enable comprehensive reporting across all of this data. If all the data is in one place, we can generate reports, view metrics, and make timely decisions based on our analysis of that data. We would be able to do this via a data warehouse also, if all of our legacy systems feed into a data warehouse solution, so this benefit alone may not be enough to make the case for migration for your organization. The second one listed here is also related, where we can have a complete side by side LIVE picture of each study in one system to be able to perform the same type of analysis and understand how we are trending as an organization with regard to our process for clinical trial management.

For current and active studies, the benefits include the ones we just mentioned, as well as benefits that affect current operations. Migrating active studies enables your workforce to work within one technical system and one set of business processes. When we implement a new system, we usually implement some modified processes as well, and with migration of all current studies, we can ensure everyone is working within the same process; there is less confusion and complexity than having to keep resources trained in two or more different systems.

From an IT and support perspective, migration and decommissioning of legacy systems reduces the overall cost of support and maintenance of these systems. If we do not migrate, we will have our IT staff supporting multiple systems, which includes maintaining the hardware as well as the support tickets from the users, etc. Long term, maintaining both systems may be a costly approach, and migration can help decommission the legacy applications quicker and reduce the overall cost.

So, we know the potential benefits of data migration, and for each organization the value of these benefits will need to be determined and weighed against the risks. Let's look at some potential risks of data migrations.

There may be some loss of functionality in the new system. For example, if you are implementing a new CTMS in a phased approach, you may not have integrations or specific enhancements in place to capture and track certain data that you were able to track in your legacy system.

And if we can’t find a place to migrate data to, there could potentially be some loss of data. This could be loss of data in that scenario where there is no target to the source attributes, or the data is represented in a slightly different way, so we need to cleanse or translate the data during the migration process.

With any data migration, timing is key. Go-live of the new system, when the data is migrated, and when the data is available to the users in the new system are all things to consider in the rollout, and in certain situations there may be a lag time. For example, the system may be live and we have just migrated the study, but the study team has yet to be trained, so they don't have access to the new system, which will create some lag time. During that time, they aren't able to manage their study in the old system either; since the study has already been migrated, any changes they make in the legacy system won't get reflected in the new system. This is something that the deployment team needs to consider to ensure minimal lag time for migrated data. Time overlap is related to that example, if the company allows users to work in the legacy system or both systems post-migration. So, there has to be a clear directive on how to use migrated data to minimize and mitigate these risks.

So, this slide just shows examples of different types of organizations and the different factors they need to consider when taking on a data migration initiative. There is no secret formula for when you should migrate and when you shouldn't, but certain factors will weigh in more than others at various organizations. So, with this example, we have a growing CRO for whom large-scale studies are planned or already started. They have limited resources to manage studies, and limited IT resources to manage systems and tools. So, I don't know how many systems they have currently, but if the studies with this new client haven't started yet, there may not be a need to migrate anything; they can just focus on launching the CTMS in time to get the planned studies on the new system. For the oncology company example, they already have some long-term trials that they are managing, there is a considerable amount of data in the legacy studies, and those studies are long in duration, which may make the case to migrate. But if they are thinking about doing this for reporting and metric purposes only, they have also recently implemented a data warehouse, so they could get their consolidated data across studies in both systems via the warehouse. So, the decision is not always black or white; there are several factors that will go into it, and some factors may be more important to one organization than another.

Let’s look at the next question, which is what should we migrate? The scoping should take place at two levels. One at the study level, so which studies should be migrated. And from those selected studies, which data types or records should we migrate.

So first we need to look at good candidates for migration from a study perspective. You will need to consider the benefits you specifically are looking for as well from the previous question. Initially, you must look at historical closed studies first, and whether you want that data in the new system for reporting purposes or whether the legacy data is already in or planned to be in a data warehouse. Next, you will have to consider your CTMS go live date, and end dates for current and planned studies. If you have made the decision that historical data can stay in legacy system, then short term studies will typically fall in that bucket as well. Since there will be little overlap if the studies end shortly after the new CTMS system go live, the easiest thing to do is let those studies end in the legacy system. That leaves long duration studies, which in many cases would be a good candidate for migration.

The other thing to consider, other than the effort to do the migration itself, is the inevitable data cleansing effort that needs to occur BEFORE the migration. So, for data that doesn't map directly from the legacy system to the new CTMS, your business team will have to find a way to retain that data or migrate it somehow. Also, for data that doesn't meet your newly defined data standards for the new system, there will be a cleansing effort to translate legacy data into the standards that the new system is governed by. A simple example is addresses, where the new standard is the street type spelled out instead of abbreviated as Rd., St., etc.

For current studies, consider the size and the amount of data already collected – the more data already collected, the more data will need to be cleaned and migrated.

This slide illustrates the examples I mentioned on the previous slide. So, taking a study by study approach, we look at studies that are ending soon, and realize it would be more effort to migrate them, rather than let them run out in the legacy system. On the other side, there are studies that are starting soon, and by the time we go live with the CTMS, these studies will have very little data captured, so it’s easier to just enter that data in the new system when we go live, so we won’t need to migrate those either. For the long duration studies, there is already considerable data, and these studies will run for several months or up to a year post go live of the new system, so we don’t want to be in the old system for that long, so we will migrate these studies to keep our desired legacy system cutoff date.

Ok, so now we have narrowed the scope from a study perspective as to which studies to migrate, now we must define the scope of the data types or records within those studies that hold information that we want to migrate to the new system.

For this, we need to determine what do you have available to track in the new system. There may be cases where you are tracking something in your legacy system, which does not have a clear target in the new system. So, we need to determine what data has a placeholder in the new system.

I won’t drain the slide here, but these are some examples of data elements. The point is to consider everything that could potentially be migrated into the new system.

Once we have examined the target, we need to look at the source or sources, as to what you are tracking in your legacy systems or other databases or trackers, or documents.

As you review this, you may need to consider some Reasons to not migrate some data:

  • You are not currently tracking the data type, so there is nothing to migrate.
  • The selected target CTMS does not offer functionality to track the data type.
  • You are happy with an existing tool or tracker and are not going to use that functionality in the CTMS.

All these decisions would need to be made across all of the data to determine overall data scope.

Now that you have made a clear mapping of data types and have identified what actually can be migrated, now we must determine the business need for that data in the new system.

When looking at the business needs, consider data needed for Historical studies as well as Current/Active studies.

Some examples are listed here: there is no business need to migrate historical correspondence records, there are no reporting needs for that data going forward.

Or another example is that adverse events were tracked as part of monitoring in the legacy CTMS but the safety system is the ultimate source of that data, so we don’t have a need to migrate those records to CTMS.

Some acceptable workarounds or business process decisions could also be determined such as trip report. While trip reports/site visit records are typically migrated from CTMS to CTMS, approved trip reports can be printed and archived or filed in eTMF instead of migrating to a new CTMS.

So, we have covered whether we should migrate and what specifically is beneficial to migrate, and now it's time to ask how we go about doing this. To answer this question, we must first see how many sources of legacy data we have. The number of unique sources can dramatically increase the effort for a migration. We can be dealing with multiple CTMS systems on an actual relational database, or we can be dealing with Excel spreadsheets, custom trackers, Word documents, document management systems, etc. Sometimes, data migration is a two-step process where data from tools such as Excel spreadsheets and other trackers is entered into a true legacy CTMS system, from which we then migrate ALL data into the new system. Migration from each of these individual spreadsheets, databases, and even multiple custom CTMS systems in various formats can be very costly, so along with the data cleansing process, we can employ a data consolidation effort, which will combine the data into one legacy format from which to build our automated migration routines. That way we validate one specific approach and source for the migration.

Additional consideration on how we should migrate include determining the volume of data for each record type that you are looking to migrate. If the volume across the system is quite low, you may consider manual data entry into the new system rather than configuring a migration routine or program for that entity. Remember, most migrations are one time, so the one-time cost for building a migration program needs to be understood.  When doing large projects like this, Validation efforts are a huge part of the effort and cost, so looking at manual front-end data entry or manual migration options where validation isn’t necessarily required should be considered.  

An additional consideration that needs to be understood is the level of transformation of the data during migration. What are the relationships of the entities in both systems? Do they match, or are you going to have to transform the data into the target relationships of entities, which can be complex? You must also review attributes and whether you have a good mapping between the two systems. And also, are the data standards in line between the two systems, or do we need to cleanse the data prior to migration? A good example of this, which we mentioned before, is how addresses are tracked in both systems; another example is the lists of values in each system.

The next area to consider are the tools and methods that are available for data migrations. We always have the option for manual migration, where we recognize the need for the data from our legacy system, but we do not code any automated routines to migrate the data, but rather hire data entry folks to key in the data before go live. This may end up being a less costly method to migrate, depending on the volume of data. We actually had an organization hire temp data entry personnel to key in over 10,000 contacts in the system, instead of building a series of migration routines that would pull their contacts from various sources, and they actually ended up saving money going with that approach. Obviously, there are some risks associated with manual entry of this data as well, but they chose to go that route because it was cost effective in their specific case.

When looking at automated options, you have to consider embedded tools in your systems that enable migration, such as EIM (Enterprise Integration Manager) for Oracle Siebel applications, or CSV or XML imports for other cloud-based CTMS systems such as the Veeva Vault and Medidata solutions. You may also consider the existing tools that your organization already owns licenses to, such as Informatica and other ETL tools. Or you may need to build out custom migration routines to transform the data appropriately. Of course, when selecting a tool, you will need to consider various constraints, such as budget, time, and complexity.

On the next few slides, we have put together some visuals on the technical approaches that are typically used for end-to-end CTMS data migrations. The first one is the one that historically we saw most often, which is the in-house CTMS to a new in-house CTMS. With legacy CTMS solutions installed and managed in house and replaced with either custom solutions or off-the-shelf solutions that were also installed in house, this is the typical flow of data and the general technical approach.

The next approach is going from in-house to a standard cloud CTMS. This is probably the most common approach right now. With more standard cloud-based CTMS solutions available, such as Veeva Vault CTMS and Medidata CTMS, more companies are finding this option to be advantageous for their business. The key to data migration to cloud solutions is that you have to format the data to the required format accepted by the target solution's import tools. Typically, with these types of CTMS solutions, the format for the imports is standard and can't be customized easily, and you need to adhere to these formats. For example, for Veeva Vault, what we have are published CSV formats that are defined for the Veeva data import process. And for Medidata CTMS, what we have are published XML formats defined for the data import process. So, typically we use ETL tools such as Informatica and others to extract data from the legacy database and transform the data into the prescribed formatted files, which are transferred to the cloud CTMS vendor's infrastructure to then run the import process and complete the migration.

The last technical approach that we have illustrated here is the in-house CTMS to a customized cloud CTMS solution. This is slightly different from the last approach in that the cloud CTMS solution is customized for the business. It has been customized and configured specifically for the organization, and the automated migration solution would be built and validated as well to conform to this customized solution. This is meant to be both usable for a one-time migration and fully reusable for ongoing data transfer to the target solution from other sources of data as well, such as CRO feeds and additional sources from company acquisitions. We have actually deployed this type of solution for large global pharma companies that are using this setup, and it has saved lots of time and effort to reuse this approach for ongoing data transfers.

Next is timing! Timing is a very important aspect of migrations. There are of course multiple options on when you can migrate your studies from the legacy system to the new system, but these need to be in line with other key decisions such as the training approach and legacy decommissioning strategy. Big Bang migrations are when you decide to move all of the studies that have been selected for migration at the same time and push a button and migrate them over together. This would make sense if the users of the system are also getting trained in a big bang deployment. If you are doing a pilot rollout of users, then you would want to only migrate studies based on the study team and users that have been trained on the new system. There is no sense in migrating if those specific users are not trained and therefore not going to be in the system to manage their study data.

Where we have seen great success especially in larger organizations, is a phased study by study joint training and migration roll out. This allows the org to work out any minor issues in process and increases user adoption as well overall. This in some cases is easier to manage as well, since you are training smaller groups at a time.

However, if IT or the support organization cannot temporarily support legacy systems and CTMS together, Big Bang may make sense

So, in summary, we have discussed the 4 aspects of CTMS Data Migration Analysis. We really have covered all the steps each organization needs to take before undertaking a migration initiative. There needs to be thorough planning and analysis to any data migration before we can understand the full scope of the initiative.

Purpose – Define the business benefits early on as this is the driver to define the rest of the migration initiative. (KEY TAKEAWAY)

Scope – Give strong consideration to the data that you need to migrate and how that aligns with the overall business benefit that you have defined.

Method – Take stock of the tools and resources that are available to you and which methods are going to be required based on the target system. Do not rule out a manual migration, since the cost for automated migration is a one-time cost, especially if there is no opportunity to reuse what would be built.

Timing – Consider the rollout strategy and training approach when determining the migration strategy, as all of these need to align with each other; you can't decide one without impacting the others.

Give Your CTMS Migration a Head Start (Free Jump Start)

As we close out today's webinar, we wanted to share some service offerings our team has created for our client partners. Our team has developed two jump starts to help your organization determine how to approach CTMS migrations, regardless of what the source and target systems are. With our level of experience and expertise, in this first jump start our team can help develop a CTMS Migration Implementation Scope Analysis, which includes detailed timelines, milestones, and process and data flow diagrams, as well as the overall cost/effort for the data migration initiative. This specific jump start is free of cost and takes around two weeks of effort to develop an overall scope analysis.

Give Your CTMS Migration a Head Start (50k Jump Start)

The next jump start is more detailed and provides your organization with additional tangible deliverables that can help you develop the overall scope analysis as well as the strategy and detailed plan for the data migration. This specific jump start includes the scope analysis, but also includes identification of the CTMS data entities for migration, a draft migration validation plan and protocol, and a draft migration requirements specification document. These are documents that you will need as part of your data migration implementation project, so this jump start, which is a short paid engagement, provides these initial deliverables for your organization to get a head start on the implementation. If you would like to discuss these options in detail for your organization, please feel free to reach out to me directly so we can schedule a detailed discussion to determine which option may be right for your organization.

Thank you all for sharing the past hour or so with me. I hope you found the information useful.

Thanks again and enjoy the rest of your day!

Considering Accessibility in Web Design: What You Can Do To Make Your Design Accessible

Since 1991, when the first website was introduced, websites have evolved from being purely content driven to becoming increasingly focused on their visual experience. While there is no argument that having a clean, cool, professional-looking website is important, it isn't everything, especially when it comes to Accessibility.

There is a popular misconception that implementing accessible design into your site means having to sacrifice its aesthetics. This couldn't be further from the truth. In fact, many of the design applications related to Accessibility are not visible on screen. In a sense, one could argue that incorporating accessible design practices could improve your existing design, not just for users with disabilities, but for users overall. In spite of this, it is apparent when looking at many websites today that Accessibility was not included as part of the initial design process.

With so much information available on the subject, it is understandable that incorporating accessible elements into design can feel overwhelming. Nevertheless, there are a few easy places to start. Below are some of the most common design issues that affect a website's Accessibility:

  1. Color contrast is insufficient

Not only are some color combinations just downright hideous, they are not good for Accessibility either. For website content to be visible to all users, it must have a color contrast ratio between the foreground and background of at least 4.5:1 according to WCAG 2.0 standards. Without this, users with vision or cognitive impairments may not be able to perceive content correctly. Improving this can also help develop the understanding that some color combinations, no matter how good they visually appear, should just never be used together. (A short sketch of the WCAG contrast calculation follows this list.)

  2. Color alone is used to convey information to the user

While using color to convey information to users is practical, it should not be used as the sole method to indicate an action or change in content. Whether it is required text shown in red, correct form entries shown in green, or hyperlinks indicated in blue, using color to provide functional context needs to be paired with additional visual cues for users with visual or cognitive impairments. These visual cues could come in the form of symbols or additional text, such as 'Required field' text with asterisks, check mark symbols for correctly submitted information, or underlines below hyperlinks.

  3. Heading levels are not used for heading text

Applying heading levels to your website content helps communicate the organization of your page structure. Assistive technology users, such as those using a screen reader, will use heading levels to quickly navigate the page in search of relevant topics. If heading levels are not applied, or are applied incorrectly, this could affect a user's ability to find the information they are looking for, causing confusion and frustration and even leading a user to leave the page. This issue not only affects non-sighted users, but also those who use assistive technologies as a supplemental way of understanding a site's content.

  4. Unclear or inconsistent navigation options

Let's say you are driving and suddenly the road comes to an end with no indication. You decide to turn back, even though you know this route is taking you in the opposite direction. On your way back, you look for an alternate route that could lead you to your destination, but you are unable to find one. This experience is very similar to websites that do not have clear and consistent navigation options. Users with disabilities often rely on consistent patterns within a website's content to predict its behavior. This applies to elements such as links, buttons, or forms, to name a few. Consistency not only matters for Accessibility reasons, it also helps ensure your website design is harmonious across all pages.

  5. Alternative text for images and media is not provided

Providing alternative text for images and media has several benefits. Alternative text can be added to elements such as images, graphics, and icons to offer additional context to assistive technology users. This means screen readers can relay this information to help users further understand a website's content and functionality. Alongside this, media alternatives can be added to elements such as videos to visually substitute for the need for sound. Having transcripts of audio, or providing visible links to audio-described versions of videos, not only benefits those with auditory impairments but can also improve a website's overall usability. This design feature makes videos easier to watch discreetly, say for instance on a crowded subway.
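To make the 4.5:1 threshold from the first item concrete, here is a minimal Python sketch of the WCAG 2.0 contrast-ratio calculation. It assumes colors are given as six-digit hex strings and is only meant for quick spot checks, not as a substitute for a full accessibility audit tool.

def _linearize(channel):
    """Linearize one sRGB channel (0-255) per the WCAG 2.0 definition."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    """Relative luminance of a color given as a six-digit hex string."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(foreground, background):
    """WCAG contrast ratio; normal-size body text should reach at least 4.5:1."""
    l1, l2 = relative_luminance(foreground), relative_luminance(background)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio of 21:1.
print(round(contrast_ratio("#000000", "#FFFFFF"), 2))
# A light gray on white comes in well under the 4.5:1 threshold for body text.
print(round(contrast_ratio("#999999", "#FFFFFF"), 2))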

As the old saying goes, implementing accessible design into your site is better done late than never. By keeping Accessibility in mind when designing, you will ensure your website is inclusive and available to all users while improving your overall design. For more information on Accessibility, and additional design criteria, please visit https://www.w3.org/WAI/design-develop/.   

Starting Your Career Remotely: Thoughts and Tips from Perficient's Newest Colleagues

This series was written with joy by Grayson Harden, Rebekah Williamson, and Mary Claire Freese. Don’t miss out on the first article, New Hires Gain Real-World Experience in the Corporate Onboarding Boot Camp.

Creating Career Connections at Perficient

When accepting internship and full-time offers with Perficient, Atlanta's newest members were eager to get into the office and kick off the 10-week Boot Camp training program. However, no one could have anticipated having to start a career working from home. Working remotely can bring plenty of challenges, but Perficient's remote accommodations have helped create a fun, productive workspace for the new hires.

BONUS: See what lessons one Perficient director shares after 20 years on the job


Though day-to-day communication looks different in a remote setting, Perficient’s newest colleagues feel it has been easier than anticipated to form relationships and collaborate on projects. On a day-to-day basis, there are scheduled meetings and seminars via Microsoft Teams to help everyone stay in touch and on the same page. This creates consistency as well as provides much-needed interaction in between solo project work.

“It feels like we’re all in the same space even though we’re meeting through a screen.” — Eric Cho, associate technical consultant

It also helps to know we’re not alone. Across the company, other new colleagues are onboarding remotely, too.


Throughout a normal Boot Camp day, both the technical and business consultants participate in intensive modules, learning the vital programs and technologies that Perficient relies on. While working on the modules, each intern/associate meets with their assigned career counselor and great start guide who offer professional guidance and answer technical questions when issues arise. David Davenport, intern technical consultant, enjoys his weekly syncs with his mentors.

“It’s great to have multiple mentors because they provide insight on different roles within Perficient and unique perspectives on the consulting industry as a whole.” — David Davenport, intern technical consultant


Because we’re all experiencing the uniqueness of starting remote, the Boot Camp group is growing in connection each week.

“Having a tight-knit group makes it easier for people to reach out to each other and form relationships.” — Alex Karadsheh, intern technical consultant

The Perficient team has put in extra effort to replicate the social aspect of the office, scheduling weekly happy hours and virtual break rooms. In order to meet each other face-to-face and interact outside of work, the Boot Camp group also coordinated an outdoor Athens event which helped solidify friendships and create a bond.

BONUS: Check out some of the typical office activities we're excited to get back to

Being remote certainly has its drawbacks, but it has taught the Boot Camp team a lot about maintaining a solid work ethic and staying productive. Kam Ndirangu, intern technical consultant, feels like he is definitely growing personal skills that he wouldn’t have otherwise if he was in an office.

“When facing the distractions of being at home, it’s important to treat it like the real world. Can I still hold myself accountable when no one is watching over my shoulder?” — Kam Ndirangu, intern technical consultant


Tips for Starting Your Career Remotely

Whether you're starting your career remotely or looking for ways to improve your productivity at home, here are some tips from the Boot Camp team that have made remote working a little easier:

  • Set up a dedicated workspace free from distractions in a separate part of the house
  • Take small breaks throughout the day to refresh your mind: stretch, get a snack/water
  • Change up your lunch break: cook a nice meal, take your dog for a walk, call a friend
  • Switch locations: take advantage of sitting outside if possible
  • Stay in touch: reach out to coworkers throughout the day
  • Ensure that you’re putting in the work when no one is watching!

DID YOU KNOW: Perficient was already ranked a Top Workplace in 2020 in Atlanta and Minneapolis! And we’re not stopping there…


READY TO GROW YOUR CAREER?

At Perficient, we continually look for ways to champion and challenge our workforce, encourage personal and professional growth, and celebrate our people-oriented culture.

Learn more about what it’s like to work at Perficient at our Careers page. See open jobs or join our community for career tips, job openings, company updates, and more!

Go inside Life at Perficient and connect with us on LinkedIn and Twitter.

Innovation in B2B Healthcare: the Now, the New, & the Next

It goes without saying that the start of this new decade is unprecedented, unplanned, and flat out unwanted. Yet here we are, and we must find ways to think creatively and collaboratively moving forward.

B2B healthcare organizations are at a very interesting crossroads. With all that has happened around the COVID-19 pandemic, leveraging technology and commerce solutions in the healthcare industry has never been more important.

With many businesses fighting to stay above water, healthcare organizations are struggling to keep up with the increasing demand for their products and services. This crisis has exposed many vulnerabilities that may have been hibernating for some time in areas such as ecommerce, site experience, product information management (PIM), and other key spaces.

Addressing these problems is why we created the latest version of our Now/New/Next (N3) guide, which focuses exclusively on B2B healthcare distribution and manufacturing organizations. The guide focuses on:

  • Now: The elements considered table stakes for most organizations and how you may look to integrate them within your business
  • New: Components that a few companies are executing on today, and why they will likely be able to differentiate themselves from the competition
  • Next: What’s around the corner that no organization is working towards today, but would be at the forefront of innovation if they were

We’ll offer perspective and solutions to help B2B healthcare distribution and manufacturing organizations get past this current speedbump and jumpstart their roadmap to think differently about their business, products, and solutions. To learn more, download our free guide, How to Innovate and Evolve in the B2B Healthcare Industry.

Why Cloning Pages in Sitecore May Not Be The Shortcut You're Looking For

In my work with clients, we are often asked about shortcuts or ways to quickly and easily re-create content pages from their website that have complex layouts or are very labor-intensive. We discussed in a previous blog post how copying is not always the best answer. In this blog post, we'll review why cloning might not be the kind of shortcut you are looking for either.

When cloning is good

When you clone a page, Sitecore makes a copy of the original page and uses the data items from the original page as content. Any changes you make to the original will ripple outward and be applied to the clone. This works great for pages with a simple layout that need to be replicated on another brand, microsite or version of your site, like a blog post, article or even a product page. Cloning gives you the ability to maintain several copies of a site, where you’d only need to update the content on the original and all the clones will get the same exact changes applied.

When cloning is troublesome

The key to cloning being useful is that the cloned pages are simple. For pages with multiple components or other moving parts, cloning starts to get troublesome. Content authors I've worked with who use clones cite the importance of maintaining page and field inheritance (i.e., the link between the original item and the clone) but run into difficulties when they come across pages or content that "just need this one small thing to be different from the other page". Versions can also be prickly to work with. Sitecore's warning and error message notifications when authors try to update a page can be confusing, and less experienced authors may accidentally break the link between the original and the clone. While it is easy to restore and resync an individual field between an original and a clone by resetting the field value, an un-cloned page can never go back to its cloned state. You need to delete the un-cloned page and create a new clone from the original.

Workflow states can also create issues. If you clone an original in the middle of its workflow, the clone will inherit the same workflow state, but will not maintain a connection. So, as your original finishes out its workflow and gets published, the workflow of the clone will remain in the same state it was created and would also need to be moved through to publish.

So, where does that leave us?

Outside of specific use cases, cloning is not going to be the shortcut you’re looking for if you’re trying to quickly create complex pages. A better way would involve working with your development team to create a branch template that includes a base set of components already placed into the layout for your page. Authors will still need to move in your content and might need to add some additional components to a page, but it goes a lot faster than starting from a completely blank layout.

Use FDM to FDMEE Migration Utility

There are basically two options when upgrading classic FDM to FDMEE: rebuild all artifacts, or use the FDM to FDMEE migration utility to migrate some and then build the rest. The migration utility is a helpful tool, but it has its limitations; for example, it cannot translate scripts from VBScript to Jython.

Please refer to the utility documentation for the details of what the utility can and cannot do. The artifact types marked as not applicable are those that exist in FDM but no longer exist in FDMEE.

When running the migration process, analysis will be needed whenever it errors out. Therefore, you need to balance the overhead against the help the utility can give you; setting up the utility and troubleshooting it both cost time.

Perficient Included in Forrester's Now Tech For Oracle Apps Services Providers

Forrester has included Perficient in its Now Tech: Oracle Apps Implementation Services Providers, Q2 2020 report. The newly launched report lists implementation service providers and helps global clients identify potential partners to support their shift to Oracle Cloud, providing guidance through each stage of an enterprise cloud transformation.

Within the report, Forrester examined implementation service providers for their work with Oracle Cloud apps including ERP Cloud, EPM Cloud, and HCM Cloud as well as legacy-oriented apps like PeopleSoft and E-Business Suite. It segments each vendor by market presence and capabilities. The report names Perficient as a small (<$200 million in Oracle apps services revenue) consultancy with vertical market focus in energy, healthcare, and manufacturing. The report lists American Honda Finance Corporation, Merck, and Midcoast Energy, LLC as our sample customers and we believe our work with them showcases our expertise in Oracle.

Inclusion in this report is an honor and, in our opinion, highlights our vast expertise with the Oracle applications and our success in delivering strong results for our clients.

Benefits of a Partnering with a Proven Partner

In the report, Forrester notes that, “While Oracle continues to breathe new life into existing products, such as Oracle E-Business Suite and JD Edwards, future opportunities for Oracle customers are in Oracle Cloud Applications, also known as Fusion.” According to the report, providers with a balanced portfolio between legacy and cloud “have strong existing on-premises capabilities while also moving customers to the cloud. These providers have capabilities straddling the old and the new in the Oracle apps portfolio.”

In order to successfully transform core business operations, application development and delivery (AD&D) professionals are encouraged to look for Oracle applications services providers that can:

  • Accelerate the shift to the cloud and to modernizing applications
  • Foster innovation
  • Manage business disruption

With the right partner, you can accelerate your enterprise cloud transformation to achieve desired business goals.

Why Perficient?

Perficient is a National System Integrator with a dedicated Oracle practice for ERP, EPM, and Analytics/BI that has been serving clients for nearly two decades, delivering 3,000 cloud and on-premises implementations.  In addition, our life sciences practice specializes in the implementation, management, and support of Oracle Health Sciences applications. Unlike boutique firms that specialize in one or two offerings, our investment in and commitment to our Oracle partnership is extensive, with 15 Oracle specializations, which means Perficient is investing heavily in training and resources to stay ahead.

We have delivered strategy, full-lifecycle implementation projects, and upgrades and enhancements for on-premises, cloud, and hybrid solutions to meet the unique needs of our clients. Our cloud delivery accelerators improve operational efficiency while promoting cost efficiencies and reducing risk, helping you maximize your investment.

We also offer a post-implementation managed service offering called SupportNet.

]]>
https://blogs.perficient.com/2020/07/01/perficient-included-in-forresters-now-tech-for-oracle-apps-services-providers/feed/ 0 276016
How to Create a Unified Process to Manage Product Content for B2B Healthcare Ecommerce https://blogs.perficient.com/2020/07/01/how-to-create-a-unified-process-to-manage-product-content-for-b2b-healthcare-ecommerce/ https://blogs.perficient.com/2020/07/01/how-to-create-a-unified-process-to-manage-product-content-for-b2b-healthcare-ecommerce/#comments Wed, 01 Jul 2020 14:00:30 +0000 https://blogs.perficient.com/?p=276632 Healthcare ecommerce has seen consistent growth over the past decade, and during the last couple of years, the Amazon effect within the healthcare space has caught everyone’s attention. Even though B2B healthcare companies have generally been quick to adapt and elevate their digital and technological maturity, the involvement of marketplaces in the healthcare sector has exposed the need for transparency in product information such as pricing and product specs.

The need for a seamless customer experience, speed to market, product and pricing transparency, and valuable informational content in the B2B space has grown exponentially, meaning manufacturers have been creating more content and product information for their respective digital networks, including one of their key channels: their distributors.

Distributors already have their work cut out for them when it comes to keeping up with digital maturity and automation in relation to the sourcing and distribution of product information. To add another level of complexity to an already busy sector, the Covid-19 pandemic has prompted leaders to re-evaluate their business models and sales strategies. It isn’t enough to just automate the technical touchpoints anymore. Now more than ever, it’s important that product content is accurate, consistent across all channels, and able to be made available quickly.

But with speed comes challenges, and as companies evaluate the trials that come with a mitigation plan, it is key to prepare the business for a world post-pandemic rather than just organizing for the current global climate in the midst of Covid-19. To get started with the evaluation, companies can consider the following aspects of their business process and look for opportunities to build further efficiency:

Sourcing Product Information from Various Vendors/Manufacturers

Vendors/Manufacturers control their respective branding and packaging guidelines. This information doesn’t always flow consistently to distributors or sometimes gets skipped altogether. Hence, distributors must invest a lot of manual effort to consolidate information from various manufacturers and make the product information consistent in their own system.

In a scenario like this, companies should consider provisioning a supplier onboarding portal that would allow manufacturers to be self-sufficient when uploading their product information. A platform like this would also enable distributors to review and approve all content before importing it into their own product information management (PIM) system to maintain a consistent single source of truth.

The Product Syndication Process

Manufacturers need to syndicate their product data to multiple channels, distributors being one of them. The challenge occurs when the same product serves multiple industries and hence, multiple categories. For example, vinyl gloves are a category of product that can span across many classifications such as cleaning supplies, medical provisions, safety gear, etc. and the manufacturer may need this product to be syndicated to various channels with slightly different product information for each. In turn, distributors face a similar challenge as they must also provide product information to various customers.

Being able to change product information based on channel or customer type is a key area to consider when making improvements. Companies should consider developing standard mappings and configuring automated syndications so that product data from a PIM system can easily be structured into the categories each channel expects and sent to those channels on a regular basis.
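
As a rough illustration only, the Python sketch below shows what such a per-channel mapping might look like; the product fields, channel names, and category paths are hypothetical and not tied to any particular PIM or syndication platform.

# Hypothetical sketch: shape one PIM product record into channel-specific payloads.
# Field names, channel names, and category paths are illustrative only.

PRODUCT = {
    "sku": "GLV-100",
    "name": "Vinyl Exam Gloves, Medium",
    "description": "Powder-free vinyl gloves, box of 100.",
    "attributes": {"material": "vinyl", "size": "M", "powder_free": True},
}

# Each outbound channel gets its own category path and attribute subset.
CHANNEL_MAPPINGS = {
    "medical_distributor": {
        "category": "Medical Supplies > Exam Gloves",
        "attributes": ["material", "size", "powder_free"],
    },
    "janitorial_marketplace": {
        "category": "Cleaning Supplies > Disposable Gloves",
        "attributes": ["material", "size"],
    },
}

def build_channel_payload(product: dict, channel: str) -> dict:
    """Return the product shaped for one outbound channel."""
    mapping = CHANNEL_MAPPINGS[channel]
    return {
        "sku": product["sku"],
        "title": product["name"],
        "description": product["description"],
        "category": mapping["category"],
        # Only forward the attributes this channel has agreed to receive.
        "attributes": {key: product["attributes"][key] for key in mapping["attributes"]},
    }

for channel in CHANNEL_MAPPINGS:
    print(channel, build_channel_payload(PRODUCT, channel))

A scheduled job could run this kind of mapping on a regular cadence so each channel receives a consistently structured feed without manual rework.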

Persona-Based Content

There are often various personas that research and purchase products in the healthcare field. You have users with clinical backgrounds as well as users with business backgrounds. It can be a struggle to manage product data and information to cater to each group independently. For example, people who don’t fall into a clinical persona won’t need or want to see pictures of patients’ injuries when purchasing a wound care product.

Being able to manage all types of product data and content in a centralized location is key to supporting each of these use cases. Even though buyers do not interact directly with your PIM system, this kind of personalized content delivery can be provisioned through outbound channels or image metadata, with rules built into the outbound configuration that a commerce platform can interpret to cater to the various user personas.
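
To make that idea a little more concrete, here is a minimal sketch, assuming image assets carry metadata tags in the PIM and that each persona maps to a simple allow-list of tags; the tag values and persona names are invented for illustration and do not reference any specific commerce platform.

# Hypothetical sketch: pick product images per persona using asset metadata tags.
# Tag values and persona names are illustrative only.

PRODUCT_IMAGES = [
    {"url": "https://example.com/wound-dressing-packshot.jpg", "tags": ["packshot"]},
    {"url": "https://example.com/wound-dressing-in-use.jpg", "tags": ["clinical", "in_use"]},
]

# Outbound rule set a commerce platform could evaluate at render time.
PERSONA_RULES = {
    "clinical_buyer": {"allowed_tags": {"packshot", "clinical", "in_use"}},
    "business_buyer": {"allowed_tags": {"packshot"}},  # hide graphic clinical imagery
}

def images_for_persona(images: list, persona: str) -> list:
    """Keep only images whose tags are all allowed for the given persona."""
    allowed = PERSONA_RULES[persona]["allowed_tags"]
    return [image for image in images if set(image["tags"]) <= allowed]

print(images_for_persona(PRODUCT_IMAGES, "business_buyer"))  # packshot only
print(images_for_persona(PRODUCT_IMAGES, "clinical_buyer"))  # both images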

These are some of the use cases we have actively seen in our conversations with manufacturers and distributors when assisting companies that need to revisit and build efficiency in their PIM processes and systems. If you have similar challenges in your organization or other scenarios that you’re evaluating related to the management of your product information, reach out to find out how our team can help.

To learn more about the various innovations and growth we’re seeing in the B2B healthcare space today, check out our free guide, How to Innovate and Evolve in the B2B Healthcare Industry.

]]>
https://blogs.perficient.com/2020/07/01/how-to-create-a-unified-process-to-manage-product-content-for-b2b-healthcare-ecommerce/feed/ 1 276632
Argus Safety, Oracle Clinical & RDC Release Notes [July 2020] https://blogs.perficient.com/2020/07/01/argus-safety-oracle-clinical-rdc-release-notes-july-2020/ https://blogs.perficient.com/2020/07/01/argus-safety-oracle-clinical-rdc-release-notes-july-2020/#respond Wed, 01 Jul 2020 13:05:34 +0000 https://blogs.perficient.com/?p=276495 Perficient’s Life Sciences practice regularly monitors the software release notes for several Oracle Health Sciences applications, including:

  • Argus Safety
  • Oracle Clinical/Remote Data Capture (OC/RDC)
  • Thesaurus Management System (TMS)

Generally speaking, we review release notes at the beginning of each month for the previous month. On occasion, there are no new releases and, therefore, nothing to review; however, we post a fresh version monthly to eliminate confusion.

For our latest review click here.

Oracle Health Sciences Logo

]]>
https://blogs.perficient.com/2020/07/01/argus-safety-oracle-clinical-rdc-release-notes-july-2020/feed/ 0 276495
DevSecOps – Canary Deployment Pattern https://blogs.perficient.com/2020/07/01/devsecops-canary-deployment-pattern/ https://blogs.perficient.com/2020/07/01/devsecops-canary-deployment-pattern/#respond Wed, 01 Jul 2020 13:00:07 +0000 https://blogs.perficient.com/?p=275892 The Canary Deployment Pattern, or canary release, is a DevSecOps deployment strategy that minimizes risk by targeting a limited audience. As with all deployment patterns, the goal is to introduce the newly deployed system to users with as little risk and in as secure a manner as possible. As noted below, the motivation of this particular approach is to identify a small segment of the user community that can act as an initial response group. Typically, this means that the selected user segment is “friendly” to the idea of trying out a new and possibly unfamiliar set of system features. By limiting the general release in this manner, feedback can be gathered on the impact and acceptance of the new features. Once the “canary” group has had sufficient time to validate the deployment, the full user base can be updated.

Canary Deployment

  • Pattern Name: Canary Release

  • Intent: A canary release is a way to identify potential system problems early without exposing all users. The intent is to deploy the application to a limited user audience and gain feedback on any issues that may arise.
  • Also Known As: Limited Rollout, Feature Trial, Beta Release, Soak Deployment
  • Motivation (Forces): Reduce the impact of a new and potentially disruptive change to the user community. For example, a significant change to the user experience, such as the release of a new user interface.
  • Applicability: Any system where users can be impacted by a significant functional change; often applied to system releases that have readily identified user sub-populations (i.e. ‘friendly’ or ‘pre-release’)
  • Structure: 
Canary Deployment Pattern

Figure 1. Structure of rollout in the canary release pattern

  • Participants: 
Participant | Role | Description
Production Release | Current production release | The current production release system; it will not be affected by the experimental release.
Primary User Group | Represents the core user group for the system | The core system users who remain on the current release as the “control” group.
Load Balancer | Channels traffic from specific user groups or regions to the target server environment | Depending on how the user population is segregated (e.g. organization-internal users vs. external users), the load balancer directs user traffic to a well-defined release endpoint.
‘Canary’ User Group | Represents a ‘friendly’ user group for experimental, potentially disruptive releases | A select group of system users who have agreed to try out the new functionality/capabilities and provide feedback as needed.
‘Canary’ Release | Proposed experimental release | The experimental or potentially disruptive release; it will be kept separate from the current production release.

 

  • Collaboration: This pattern depends on the ability of the load balancer to properly shift user traffic so that the “canary” group is the only set of users who see the experimental release.  Typically, this group will include a set of pre-selected ‘friendlies’ who understand that they will be using a new, possibly unstable system.  It is expected that the ‘canary’ users will be contacted to review the results of the system use.
  • Consequences: Given that the ‘canary’ release is potentially disruptive or unstable, it should be closely monitored during the ‘canary’ testing period. Following up with the targeted user group is also highly recommended to gain feedback on usability and stability. If necessary, the ‘canary’ release can be rolled back to the previous release version while discovered issues are remediated.
  • Implementation: This pattern requires that the production environment be capable of segregation into two groups. It is necessary that the production release servers be capable of handling the full user load in the case where the ‘canary’ release must be rolled back. Moreover, the user base must have some differentiating factor that can be used at the load balancer level to route traffic to the appropriate endpoint (e.g. IP address range, internal/external user group, VPN/IPSEC tunnel, etc.); a minimal routing sketch follows this list.
Canary Deployment Pattern

Figure 2. Canary deployment traffic routing

  • Trade-Offs: This approach requires that a portion of the production environment be deployed with a different release version. There is additional monitoring overhead, and potentially a data migration will be required once the ‘canary’ period expires. While the ‘canary’ version is in production, there may also be difficulty with additional release development and support, as the development team may be required to review issues and observations.
  • Known Uses: This pattern is often found during ‘beta releases’ of new systems where stability and/or capabilities are rapidly changing. It is also found where development teams are rolling out significant modifications to core system functionality and are concerned with user acceptance or system stability under production loads.
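
To make the routing decision concrete, here is a minimal sketch in Python, assuming a gateway or load-balancer layer that can run custom logic and a pre-selected ‘canary’ group identified by user ID; the endpoints, group membership, and flag are all hypothetical and stand in for whatever differentiating factor (IP range, user group, tunnel) your environment actually exposes.

# Minimal canary-routing sketch; endpoints, users, and the flag are hypothetical.
# In practice this decision usually lives in the load balancer or API gateway.

PRODUCTION_ENDPOINT = "https://app.example.com"      # current production release
CANARY_ENDPOINT = "https://canary.app.example.com"   # experimental release

# Pre-selected 'friendly' users who agreed to exercise the experimental release.
CANARY_USER_GROUP = {"alice@example.com", "bob@example.com"}

CANARY_ENABLED = True  # flip to False to roll every user back to production

def route_request(user_id: str) -> str:
    """Return the endpoint that should serve this user's traffic."""
    if CANARY_ENABLED and user_id in CANARY_USER_GROUP:
        return CANARY_ENDPOINT
    return PRODUCTION_ENDPOINT

print(route_request("alice@example.com"))  # routed to the canary release
print(route_request("carol@example.com"))  # routed to the production release

Rolling back then amounts to disabling the flag (or removing the routing rule), which is why the pattern insists that the production servers remain able to absorb the full user load at any time.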

Conclusion

The canary release pattern offers product development teams an opportunity to validate system changes on a small subset of the user population, thereby avoiding widespread disruption if a set of new features fails to meet expectations. Moreover, for very large user groups (e.g. national or international in scope), this approach provides a mechanism to target specific user communities in isolation. The ‘canary’ group can therefore provide valuable feedback that can be incorporated into the overall system prior to exposing the full user base.

]]>
https://blogs.perficient.com/2020/07/01/devsecops-canary-deployment-pattern/feed/ 0 275892