
The Elephant In The (Server) Room



Server Virtualization: An amazingly helpful technological advancement that has left computer system validation (CSV) teams everywhere scratching their heads. Why? CSV calls for proof that the servers used for regulated computer systems meet or exceed the minimum hardware requirements for those systems. This is known as “hardware qualification” or “HQ.” But when the hardware is virtualized, HQ gets complicated. 

Before the advent of virtualized hardware, the validation lead for a project would simply compare the hardware requirements provided by the software vendor against the specifications of the planned physical server(s) and document the results. Pretty straightforward.
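That traditional comparison is simple enough to sketch in a few lines of code. The following is a hypothetical illustration only, with made-up minimums and field names, not an excerpt from any real qualification protocol:

```python
# Hypothetical vendor minimums for a regulated software system.
# Field names and numbers are illustrative.
VENDOR_MINIMUMS = {"cpu_cores": 4, "ram_gb": 16, "disk_gb": 500}

def hq_check(server_specs, minimums=VENDOR_MINIMUMS):
    """Compare a server's specs against vendor minimums.

    Returns (passed, findings), where findings lists each spec
    that falls short of the documented requirement.
    """
    findings = []
    for spec, required in minimums.items():
        actual = server_specs.get(spec, 0)
        if actual < required:
            findings.append(f"{spec}: {actual} < required {required}")
    return (len(findings) == 0, findings)

# A planned physical server that meets or exceeds every minimum:
passed, findings = hq_check({"cpu_cores": 8, "ram_gb": 32, "disk_gb": 1000})
# → passed is True, findings is []
```

Each pass/fail result, along with the documented specs, would then be recorded in the HQ deliverable.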

However, with virtualization, a single physical server can host many separate virtual servers. While the virtual servers inherit some specifications from the physical host, each can also be configured with its own unique parameters.

Sooooo…if a regulated system will be installed on a virtual server residing on a physical host server, what exactly do you qualify in your HQ for that system? Should a virtual server count as “hardware” in this context? Should you qualify the physical host server in its entirety, even though only a portion of it will be used for the regulated system? How do you account for the potential performance impact of all of the virtual servers on the physical machine?

Traditionally, our take on all of this here in Perficient’s life sciences practice has been to qualify the virtual servers. However, in collaboration with our IT department, we recently adopted a new philosophy and made a change to our process: We’ve stopped directly qualifying virtual servers. Now, we are qualifying the physical host server and then applying the HQ to the virtual servers.

Our rationale is this: The physical host server is the true hardware upon which the regulated system resides, so its capabilities and limitations matter the most. If the physical server can’t support the software system (i.e., fails the HQ), the specifications of the virtual server mean nothing. However, once the physical host server passes the HQ, the specs of the virtual server come into play and they also need to be qualified.

In our minds, this approach provides a greater degree of confidence that the hardware (both physical and virtual) can properly support the regulated system in question. And, of course, we make sure there is full traceability between virtual and host servers in our validation documentation.

However, we’re still mulling over the question of the potential performance impact of all of the virtual servers on a single host. We currently rely on server monitoring to alert us when we have a performance issue, and then IT allocates additional resources to address the issue. But is it possible to proactively identify potential performance impacts during the HQ process?

Our new HQ approach is more comprehensive than the old one, but we’re still only looking at the virtual server(s) for a specific system in isolation – not at all of the virtual servers on a single host in concert. For that, we rely on our (extremely capable!) IT colleagues to effectively use the monitoring tools built into the virtualization platform (VMware vSphere) to choose the right physical host for the job.

But, is that enough, or should we as validation professionals be doing some sort of due diligence before we deem the hardware setup to be sufficient? What if the specs of the physical host server pass HQ and the specs of the virtual server being added to the host pass HQ, but adding the new virtual server to the host ends up being too much for the physical host? Where do we draw the line between IT’s responsibility and CSV’s? Is a reactive approach sufficient, as long as the server monitoring system provides enough advance notice? If we’re relying on the server monitoring system for this, should it be validated?? (I think I just heard our IT Director’s head explode…)
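One proactive check we could imagine, sketched below purely as a thought experiment, would sum the resources already allocated to a host's existing virtual servers, add the candidate virtual server, and compare the total against host capacity with a headroom margin. The resource names, numbers, and 80% headroom factor are all assumptions for illustration, not a validated method:

```python
def fits_on_host(host_capacity, existing_vms, new_vm, headroom=0.8):
    """Return True if the new VM's allocation, plus everything already
    on the host, stays within `headroom` of the host's capacity.

    A crude static check: it ignores actual utilization and any
    hypervisor-level overcommit features, which a real assessment
    would have to consider.
    """
    for resource, capacity in host_capacity.items():
        allocated = sum(vm.get(resource, 0) for vm in existing_vms)
        if allocated + new_vm.get(resource, 0) > capacity * headroom:
            return False
    return True

host = {"cpu_cores": 32, "ram_gb": 256}
vms = [{"cpu_cores": 8, "ram_gb": 64}, {"cpu_cores": 8, "ram_gb": 64}]
new = {"cpu_cores": 8, "ram_gb": 64}
# fits_on_host(host, vms, new) → True
# (24 of 25.6 allowable cores; 192 of 204.8 allowable GB)
```

Even a rough check like this would at least document, at HQ time, that the planned placement was considered, rather than leaving the question entirely to post-deployment monitoring.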

These are all questions that we continue to ponder as we seek the right balance of compliance and practicality. If you have ideas or experiences to share on the topic, please post a comment or drop us a line. We’d love to hear your thoughts.


Marin Richeson

Marin joined the life sciences industry in 2001. Over the course of her tenure, she has held roles in clinical finance, IT, quality assurance, and validation. The diversity of her experience provides her with a unique perspective on the interconnectedness of this complex, multi-faceted industry. Marin Richeson is a lead business consultant in Perficient's life sciences practice.
