Life Sciences

[Webinar Recording] Preparing for Your Oracle, Medidata, and Veeva CTMS Migration Project

I recently delivered a webinar in which I discussed the CTMS migration approaches taken across several case studies. You’ll come away with an understanding of:
  • Pros and cons of each CTMS migration method
  • Types of migration tools, including APIs, ETL tools, and adapters
  • Approximate timelines and costs associated with each migration method

The topics discussed can be applied to any CTMS migration project, whether you’re moving to or from Oracle’s Siebel CTMS, Medidata’s Rave CTMS, or Veeva’s Vault CTMS.

If your organization is considering migrating to a new CTMS or has any other needs related to CTMS, feel free to reach out to me.

Video Transcript

Welcome everyone to today’s webinar on Preparing for Your Oracle, Medidata, and Veeva CTMS Migration project. I know from working with and speaking to a lot of you in the past several weeks that everyone’s schedules these days are even more packed than usual, and I truly thank you for taking part of your valuable day to join us; I hope you find it beneficial. In today’s title, we mentioned a few specific vendor names, specifically those that are leading the industry with CTMS solutions. Today’s webinar will be focused on various approaches and considerations of data migration that apply to each of them, as well as considerations that would apply for any CTMS solution, off the shelf or custom home-grown. So, hopefully this information can be universally valuable.

Let me start out by introducing myself. My name is Param Singh and I am the Director of the Clinical Operations Solutions practice in the Perficient Life Sciences Business Unit. I have been working in the life sciences industry for over 20 years, and have almost exclusively worked in clinical operations, implementing systems, solutions, and industry best practices to help our clients achieve their vision for clinical operations, all while staying in line with industry and regulatory standards and guidelines. My team has led and been a part of dozens of different implementations of clinical trial management systems. These span pharma companies, CROs, and medical device companies, and range anywhere from 30-user deployments to global implementations of over 4,500 users. Each type and size of organization has its own requirements, approaches, and challenges when it comes to migrating its operational data, and I am happy to discuss some of these considerations with you on today’s webinar.

As I mentioned, our team has experience leading and working on dozens of CTMS projects. We have a long-standing relationship and partnership with Oracle and have been a key partner in delivering implementations of Oracle CTMS to our clients. We have an established partnership with Medidata, have delivered various projects including a number of integrations with Medidata for our customers, and are currently engaged in another CTMS data migration. Similarly, we have a partnership with Veeva and have delivered various projects, including a number of integrations with Veeva for our life sciences clients. Being a third-party consultancy, we have a broad range of skills and experience spanning these leading CTMS vendors, other vendors focused on the life sciences industry, and strategic technology and platform partners such as AWS, Microsoft, and Adobe. Perficient has developed specific expertise in each of these areas to help our clients achieve success effectively and efficiently with the right resources on their projects.

Now that I’ve introduced myself, I’d like to take just a minute to talk about my team here at Perficient and to explain a little more about what we do.

We provide a variety of services and products related to clinical operations: we lead and manage validated implementations of CTMS solutions, whether those are implementations of off-the-shelf applications from leading third-party vendors or custom-built digital solutions related to clinical operations. Also, if you have seen some of our previous webinars, you know that we do extensive work with integrations between CTMS solutions and other clinical and non-clinical systems. Our approach to system implementations is very process-focused and holistic in nature, and we also provide process consulting services to help organizations define and harmonize their SOPs and business processes across their organization with respect to clinical trial management. And we offer comprehensive training services and products related to CTMS, including process training courses and materials as well.

That is just a glimpse of some of the services and products we offer specific to clinical operations. For more information on what we offer with respect to clinical safety and pharmacovigilance, clinical data management, clinical data warehouse and data solutions, or life sciences products and services in general, please feel free to contact me for some focused discussions.

Here is today’s agenda.

Following the introductions, we’re going to get right down to business and discuss strategies for determining the answer to each of these questions. Should we migrate? We will cover some basic considerations on this question and what the purpose of data migration could and should be. What are we looking to achieve and what benefits are we looking to take advantage of when migrating and are these relevant for our specific project. If so and we do look to migrate data, what exactly should we migrate? What data is being collected today in the current system and which subset do we decide to migrate to the new system? What considerations do we need to take into account in determining what the scope of our migration should be? Next, how should we do so? What tools are available to us for migration, what limitations do we have with the inherent tools and capabilities of each system? And when do we move forward with each set of data for migration? There are some timing considerations that need to be weighed and a lot of these are not specific to a technical limitation necessarily, but rather other process and resource related constraints.

By the end of today’s webinar, you will have a framework for performing this sort of analysis for your own organization.

Let’s get started.

So, the first question we need to address and consider is should we even migrate? We need to look at the legacy data that we have and determine what are the benefits of bringing all of this data into our new system? This may include current active studies as well as historical data in our legacy systems.

For historical data, one benefit is that by having this data in one central system, we can enable comprehensive reporting across all of this data. If all the data is in one place, we can generate reports, view metrics, and make timely decisions based on our analysis of that data. We would be able to do this via a data warehouse also, if all of our legacy systems feed into a data warehouse solution, so this benefit alone may not be enough to make the case for migration for your organization. The second one listed here is also related, where we can have a complete side by side LIVE picture of each study in one system to be able to perform the same type of analysis and understand how we are trending as an organization with regard to our process for clinical trial management.

For current and active studies, the benefits include the ones we just mentioned, as well as benefits that affect current operations. Migrating active studies enables your workforce to work within one technical system and one set of business processes. When we implement a new system, we usually implement some modified processes as well, and by migrating all current studies, we can ensure everyone is working within the same process, with less confusion and complexity than having resources trained in two or more different systems.

From an IT and support perspective, migration and decommissioning of legacy systems reduces the overall cost of support and maintenance of these systems. If we do not migrate, we will have our IT staff supporting multiple systems, which includes maintaining the hardware as well as the support tickets from the users, etc. Long term, maintaining both systems may be a costly approach, and migration can help decommission the legacy applications quicker and reduce the overall cost.

So, we know the potential benefits of data migration, and for each organization, the value of these benefits will need to be determined and weighed against the risks. Let’s look at some potential risks of data migrations.

There may be some loss of functionality in the new system. For example, if you are implementing a new CTMS in a phased approach, you may not yet have integrations or specific enhancements in place to capture and track certain data that you were able to in your legacy system.

And if we can’t find a place to migrate data to, there could potentially be some loss of data. This could happen where there is no target for the source attributes, or where the data is represented in a slightly different way, so we need to cleanse or translate the data during the migration process.

With any data migration, timing is key. The go-live of the new system, when the data is migrated, and when the data is available to users in the new system are all things to consider in the rollout, and in certain situations there may be a lag time. For example, the system may be live and we have just migrated a study, but the study team has yet to be trained, so they don’t have access to the new system. That creates some lag time, and during that time they aren’t able to manage their study in the old system either: since the study has already been migrated, any changes they make in the legacy system won’t be reflected in the new one. The deployment team needs to plan to ensure minimal lag time for migrated data. Time overlap is related to that example, where a company allows users to work in the legacy system or both systems post-migration. So, there has to be a clear directive on how to use migrated data to minimize and mitigate these risks.

So, this slide shows examples of different types of organizations and the different factors they need to consider when taking on a data migration initiative. There is no secret formula for when you should migrate and when you shouldn’t, but certain factors will weigh more heavily than others at various organizations. In this example, we have a growing CRO, for whom large-scale studies are planned or already started. They have limited resources to manage studies and limited IT resources to manage systems and tools. We don’t know how many systems they have currently, but if the studies with this new client haven’t started yet, there may not be a need to migrate anything; they can just focus on launching the CTMS in time to get the planned studies on the new system. For the oncology company example, they already have some long-term trials that they are managing, there is a considerable amount of data in the legacy studies, and those studies are long in duration, which may make the case to migrate. But if they are thinking about doing this for reporting and metric purposes only, they have also recently implemented a data warehouse, so they could get consolidated data across studies in both systems via the warehouse. So, the decision is not always black or white; several factors go into it, and some factors may be more important to one organization than another.

Let’s look at the next question, which is what should we migrate? The scoping should take place at two levels. First at the study level: which studies should be migrated? Then, from those selected studies, which data types or records should we migrate?


So first we need to look at good candidates for migration from a study perspective. You will also need to consider the specific benefits you are looking for from the previous question. First, look at historical closed studies, and whether you want that data in the new system for reporting purposes or whether the legacy data is already in, or planned to be in, a data warehouse. Next, consider your CTMS go-live date and the end dates for current and planned studies. If you have decided that historical data can stay in the legacy system, then short-term studies will typically fall in that bucket as well: since there will be little overlap if a study ends shortly after the new CTMS goes live, the easiest thing to do is let those studies end in the legacy system. That leaves long-duration studies, which in many cases are good candidates for migration.

The other thing to consider, beyond the effort of the migration itself, is the inevitable data cleansing effort that needs to occur BEFORE the migration. For data that doesn’t map directly from the legacy system to the new CTMS, your business team will have to find a way to retain that data or migrate it somehow. Also, for data that doesn’t meet your newly defined data standards for the new system, there will be a cleansing effort to translate legacy data into the standards that the new system is governed by. A simple example is addresses, where the new standard requires “Street” spelled out instead of abbreviations like “Rd.” or “St.”
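To make the address example concrete, here is a minimal sketch of what one such cleansing rule might look like. The abbreviation map and function name are hypothetical illustrations, not part of any vendor’s tooling:

```python
# Hypothetical address-cleansing step: expand common street-type
# abbreviations to match a new CTMS data standard requiring full words.
ABBREVIATIONS = {
    "Rd.": "Road",
    "St.": "Street",
    "Ave.": "Avenue",
    "Blvd.": "Boulevard",
}

def cleanse_address(address: str) -> str:
    """Replace a trailing abbreviated street type with its spelled-out form."""
    for abbrev, full in ABBREVIATIONS.items():
        if address.endswith(abbrev):
            return address[: -len(abbrev)] + full
    return address  # already conforms to the standard

print(cleanse_address("123 Main St."))  # prints "123 Main Street"
```

Matching only at the end of the string avoids mangling legitimate uses of the same abbreviation elsewhere (for example, “St. Louis Ave.”); a real cleansing effort would need many more rules and a manual review queue for anything the rules can’t resolve.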

For current studies, consider the size and the amount of data already collected – the more data already collected, the more data will need to be cleaned and migrated.

This slide illustrates the examples I mentioned on the previous slide. Taking a study-by-study approach, we look at studies that are ending soon and realize it would be more effort to migrate them than to let them run out in the legacy system. On the other side, there are studies that are starting soon; by the time we go live with the CTMS, these studies will have very little data captured, so it’s easier to just enter that data in the new system at go-live, and we won’t need to migrate those either. For the long-duration studies, there is already considerable data, and these studies will run for several months or up to a year past the go-live of the new system. We don’t want to be in the old system for that long, so we will migrate these studies to keep our desired legacy system cutoff date.

Ok, so now we have narrowed the scope from a study perspective as to which studies to migrate. Next, we must define the scope of the data types or records within those studies that hold information we want to migrate to the new system.

For this, we need to determine what you have available to track in the new system. There may be cases where you are tracking something in your legacy system that does not have a clear target in the new system. So, we need to determine what data has a placeholder in the new system.

I won’t drain the slide here, but these are some examples of data elements. The point is to consider everything that could potentially be migrated into the new system.

Once we have examined the target, we need to look at the source or sources, as to what you are tracking in your legacy systems or other databases or trackers, or documents.

As you review this, you may find some reasons not to migrate certain data:

  • You are not currently tracking that data type, so there is nothing to migrate.
  • The selected target CTMS does not offer functionality to track that data type.
  • You are happy with an existing tool or tracker and are not going to use that functionality in the CTMS.

All these decisions would need to be made across all of the data to determine overall data scope.

Now that you have made a clear mapping of data types and have identified what actually can be migrated, we must determine the business need for that data in the new system.

When looking at the business needs, consider data needed for Historical studies as well as Current/Active studies.

Some examples are listed here: there is no business need to migrate historical correspondence records, since there are no reporting needs for that data going forward.

Or another example: adverse events were tracked as part of monitoring in the legacy CTMS, but the safety system is the ultimate source of that data, so we don’t need to migrate those records to the CTMS.

Some acceptable workarounds or business process decisions could also be made, such as for trip reports. While trip reports/site visit records are typically migrated from CTMS to CTMS, approved trip reports can instead be printed and archived or filed in the eTMF rather than migrated to the new CTMS.

So, we have covered whether we should migrate and what specifically is beneficial to migrate, and now it’s time to ask how we go about doing this. To answer this question, we must first see how many sources of legacy data we have; the number of unique sources can dramatically increase the effort of a migration. We may be dealing with multiple CTMS systems on an actual relational database, or with Excel spreadsheets, custom trackers, Word documents, document management systems, etc. Sometimes, data migration is a two-step process where data from tools such as Excel spreadsheets and other trackers is first entered into a true legacy CTMS, from which we then migrate ALL data into the new system. Migrating from each individual spreadsheet, Access database, and even multiple custom CTMS systems in various formats can be very costly, so along with the data cleansing process, we can employ a data consolidation effort, which combines the data into one legacy format from which to build our automated migration routines. That way we validate one specific approach and source for the migration.

Additional considerations on how we should migrate include determining the volume of data for each record type that you are looking to migrate. If the volume across the system is quite low, you may consider manual data entry into the new system rather than configuring a migration routine or program for that entity. Remember, most migrations are one-time events, so the one-time cost of building a migration program needs to be understood. On large projects like this, validation efforts are a huge part of the effort and cost, so manual front-end data entry or manual migration options, where validation isn’t necessarily required, should be considered.

Additional considerations that need to be understood are the level of transformation of the data during migration. What are the relationships of the entities in both systems? Do they match, or are you going to have to transform the data into the target’s entity relationships, which can be complex? You must also review attributes and whether you have a good mapping between the two systems. And are the data standards in line between the two systems, or do we need to cleanse the data prior to migration? A good example we mentioned before is how addresses are tracked in both systems; another example is the lists of values in both systems.
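The list-of-values case can be sketched as a simple crosswalk between the legacy vocabulary and the target vocabulary. The status values and crosswalk below are entirely hypothetical; the point is that unmapped values should be flagged for the cleansing team, not silently dropped:

```python
# Hypothetical example: translating a legacy list of values (LOV) for site
# status into the target CTMS's vocabulary during migration.
STATUS_CROSSWALK = {
    "Init": "Initiated",
    "Enrolling": "Active - Enrolling",
    "Closed": "Closed Out",
}

def map_status(legacy_value, unmapped):
    """Translate one legacy LOV value; queue unknowns for manual cleansing."""
    target = STATUS_CROSSWALK.get(legacy_value)
    if target is None:
        unmapped.append(legacy_value)  # flag for the data cleansing team
    return target

unmapped = []
records = ["Init", "On Hold", "Closed"]
migrated = [map_status(v, unmapped) for v in records]
# "On Hold" has no target value, so it ends up in the unmapped queue
```

Every LOV in scope would need a crosswalk like this agreed between the business and migration teams before the automated routines are built.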

The next area to consider is the tools and methods that are available for data migrations. We always have the option of a manual migration, where we recognize the need for the data from our legacy system but do not code any automated routines to migrate it, and instead hire data entry folks to key in the data before go-live. This may end up being a less costly method to migrate, depending on the volume of data. We actually had an organization hire temporary data entry personnel to key over 10,000 contacts into the system instead of building a series of migration routines that would pull their contacts from various sources, and they ended up saving money with that approach. Obviously, there are some risks associated with manual entry of this data as well, but they chose to go that route because it was cost effective in their specific case.

When looking at automated options, you have to consider embedded tools in your systems that enable migration, such as EIM (Enterprise Integration Manager) for Oracle Siebel applications, or CSV or XML imports for cloud-based CTMS systems such as Veeva Vault and Medidata solutions. You may also consider existing tools that your organization already licenses, such as Informatica and other ETL tools. Or you may need to build out custom migration routines to transform the data appropriately. Of course, when selecting a tool, you will need to consider various constraints, such as budget, time, and complexity.

In the next few slides, we have put together some visuals on the technical approaches that are typically used for end-to-end CTMS data migrations. The first one, which historically we saw most often, is the in-house CTMS to a new in-house CTMS. With legacy CTMS solutions installed and managed in house, and replaced in house with either custom solutions or off-the-shelf solutions also installed in house, this is the typical flow of data and the general technical approach.

The next approach is going from in-house to a standard cloud CTMS. This is probably the most common approach right now. With more standard cloud-based CTMS solutions available, such as Veeva Vault CTMS and Medidata CTMS, more companies are finding this option advantageous for their business. The key to data migration to cloud solutions is that you have to format the data to the format required by the target solution’s import tools. Typically, with these types of CTMS solutions, the import format is standard and can’t be customized easily, and you need to adhere to it. For example, for Veeva Vault there are published CSV formats defined for the Veeva data import process, and for Medidata CTMS there are published XML formats defined for the data import process. So, typically we use ETL tools such as Informatica and others to extract data from the legacy database and transform it into the prescribed formatted files, which are then transferred to the cloud CTMS vendor’s infrastructure to run the import process and complete the migration.
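The extract-and-transform step above can be sketched in a few lines. This is purely illustrative: the table, column names, and CSV headers are hypothetical, an in-memory SQLite database stands in for the legacy CTMS, and a real migration must match the target vendor’s published import format exactly:

```python
# Illustrative extract-and-transform: pull site records from a legacy
# relational CTMS and write them to a CSV file shaped for a cloud CTMS's
# import tool. All names here are hypothetical stand-ins.
import csv
import sqlite3  # stand-in for the legacy CTMS database

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sites (site_no TEXT, pi_name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO sites VALUES (?, ?, ?)",
    [("1001", "Dr. Smith", "Active"), ("1002", "Dr. Jones", "Closed")],
)

# Transform legacy rows into the target's prescribed CSV layout.
with open("sites_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["site_number", "principal_investigator", "site_status"])
    for row in conn.execute("SELECT site_no, pi_name, status FROM sites"):
        writer.writerow(row)
```

In practice this layer is where an ETL tool such as Informatica earns its keep: it handles the extraction, the LOV and attribute mapping, and the batching of files for transfer to the vendor’s infrastructure.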

The last technical approach we have illustrated here is the in-house CTMS to a customized cloud CTMS solution. This is slightly different than the last approach in that the cloud CTMS solution is customized for the business. It has been customized and configured specifically for the organization, and the automated migration solution would be built and validated to conform to this customized solution as well. This is meant to be both usable for a one-time migration and fully reusable for ongoing data transfers to the target solution from other sources of data, such as CRO feeds and additional sources from company acquisitions. We have actually deployed this type of solution for large global pharma companies, and reusing this approach for ongoing data transfers has saved them a lot of time and effort.

Next is timing! Timing is a very important aspect of migrations. There are of course multiple options on when you can migrate your studies from the legacy system to the new system, but these need to be in line with other key decisions such as the training approach and legacy decommissioning strategy. Big Bang migrations are when you decide to move all of the studies that have been selected for migration at the same time and push a button and migrate them over together. This would make sense if the users of the system are also getting trained in a big bang deployment. If you are doing a pilot rollout of users, then you would want to only migrate studies based on the study team and users that have been trained on the new system. There is no sense in migrating if those specific users are not trained and therefore not going to be in the system to manage their study data.

Where we have seen great success, especially in larger organizations, is a phased, study-by-study joint training and migration rollout. This allows the organization to work out any minor issues in process and increases user adoption overall. In some cases this is easier to manage as well, since you are training smaller groups at a time.

However, if IT or the support organization cannot temporarily support the legacy systems and the new CTMS together, Big Bang may make sense.

So, in summary, we have discussed the four aspects of CTMS data migration analysis. We have really covered all the steps each organization needs to take before undertaking a migration initiative. Any data migration requires thorough planning and analysis before we can understand the full scope of the initiative.

Purpose – Define the business benefits early on as this is the driver to define the rest of the migration initiative. (KEY TAKEAWAY)

Scope – Take strong consideration on the data that you need to migrate and how that aligns to the overall business benefit that you have defined.

Method – Take stock of the tools and resources available to you and which methods will be required based on the target system. Do not rule out a manual migration, since the cost of building an automated migration is a one-time cost, especially if there is no opportunity to reuse what is built.

Timing – Consider the rollout strategy and training approach when determining the migration strategy, as all of these need to align with each other. You can’t decide one without impacting the others.

Give Your CTMS Migration a Head Start (Free Jump Start)

As we close out today’s webinar, we wanted to share some service offerings our team has created for our client partners. Our team has developed two jump starts to help your organization determine how to approach CTMS migrations, regardless of what the source and target systems are. With our level of experience and expertise, in this first jump start our team can help develop a CTMS Migration Implementation Scope Analysis, which includes detailed timelines, milestones, process and data flow diagrams, as well as the overall cost/effort for the data migration initiative. This jump start is free of cost and takes around two weeks of effort to develop an overall scope analysis.

Give Your CTMS Migration a Head Start (50k Jump Start)

The next jump start is more detailed and provides your organization with additional tangible deliverables that can help you develop the overall scope analysis as well as a strategy and detailed plan for the data migration. This jump start includes the scope analysis, but also includes identification of the CTMS data entities for migration, a draft migration validation plan and protocol, and a draft migration requirements specification document. These are documents you will need as part of your data migration implementation project, so this jump start, a short paid engagement, provides these initial deliverables to give your organization a head start on the implementation. If you would like to discuss these options in detail, please feel free to reach out to me directly so we can schedule a detailed discussion to determine which option may be right for your organization.

Thank you all for sharing the past hour or so with me. I hope you found the information useful.

Thanks again and enjoy the rest of your day!

About the Author

Param Singh has been working in the life sciences industry his entire career. As the director of clinical trial management solutions at Perficient, he developed the clinical trial management team to become one of the best in the industry. Param leads a highly skilled team of implementation specialists and continues to build lasting relationships with clients. He has a knack for resource and project management, which allows clients to achieve success. Param has been with Perficient, via the acquisition of BioPharm Systems, since 2008. Prior to joining the company, he guided the clinical trial management group at Accenture.
