
5 Commonly Asked Questions About Intrinsic Bias in AI/ML Models in Healthcare

Healthcare organizations play a key role in offering access to care, motivating skilled workers, and acting as social safety nets in their communities. They, along with life sciences organizations, serve on the front lines of addressing health equity.

With a decade of experience in data content and knowledge, specializing in document processing, AI solutions, and natural language solutions, I strive to apply my technical and industry expertise to the top-of-mind issue of diversity, equity, and inclusion in healthcare.

Here are five questions that I hear commonly in my line of work:

1. What is the digital divide, and how does it impact healthcare consumers?

There are still too many people in the U.S. who don’t have reliable access to computing devices and the internet in their homes. If we think back to the beginning of the pandemic, we can see this in sharp relief: the number one impediment to the shift to virtual school was that kids didn’t have devices or reliable internet at home.

We also saw quite clearly that the divide disproportionately impacts low-income people in disadvantaged neighborhoods.

The problem is both affordability and access.

The result, through a healthcare lens, is that people without reliable access to the internet have less access to information they can use to manage their health.

They are less able to find a doctor who’s a good fit for them. Their access to information about their insurance policy and what is covered is more restricted. They are less able to access telehealth services and see a provider from home.

All of this compounds as we use digital, internet-connected tools to improve healthcare and patient outcomes. Ultimately, the digital divide means we’re achieving marginal gains for the populations that already have the best outcomes while failing to deliver significant gains for the populations that need support the most.

2. How can organizations maintain an ethical stance while using AI/ML in healthcare?

Focus on intrinsic bias, the subconscious stereotypes that affect the way individuals make decisions. People have intrinsic biases picked up from their environment that require conscious acknowledgement and attention. Machine learning models also pick up these biases. This happens because models are trained on data about historical human decisions, so the human biases come through (and can even be amplified). It’s critical to understand where a model comes from, how it was trained, and why it was created before using it.

Ethical use of AI/ML in healthcare requires careful attention to detail and, often, human review of machine decisions in order to build trust.

3. How can HCOs manage inherent bias in data? Is it possible to eliminate it?

At this point, we’re working to manage bias, not eliminate it. This is most critical when training machine learning models and correctly interpreting their results. We generally recommend using appropriate tools to detect bias in model predictions and using those findings to drive retraining and re-scoring.

Here are some of the simplest tools in our arsenal:

  • Flip the offending parameter and score the case again.
  • Determine whether the model would have made a different prediction if the person were white and male.
  • Use that additional data point to advise the human making the decision.
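
In practice, that “flip test” can be as simple as re-scoring a single case with the protected attribute changed and handing the difference to a reviewer. The following is a minimal sketch, not a production method: the model object, the patient_record data frame, and the "sex" column are hypothetical placeholders, and it assumes a scikit-learn-style classifier that exposes predict_proba.

```python
import pandas as pd

def flip_test(model, record: pd.DataFrame, column: str, flipped_value) -> dict:
    """Re-score one case with a single protected attribute flipped.

    Assumes `model` follows the scikit-learn convention of predict_proba
    returning class probabilities and `record` is a one-row feature frame.
    """
    original_score = model.predict_proba(record)[0, 1]

    counterfactual = record.copy()
    counterfactual[column] = flipped_value
    flipped_score = model.predict_proba(counterfactual)[0, 1]

    return {
        "original_score": original_score,
        "flipped_score": flipped_score,
        # The delta is the "additional data point" surfaced to the human reviewer.
        "delta": flipped_score - original_score,
    }

# Hypothetical usage: would the risk score change if the patient were recorded as male?
# result = flip_test(model, patient_record, column="sex", flipped_value="male")
```

The delta is advisory only; as discussed below, in healthcare the decision should stay with a human who can judge whether the difference reflects bias or real physiology.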

For healthcare in particular, the human in the loop is critically important. There are cases where membership in a protected class changes a prediction because it acts as a proxy for underlying genetic or physiological factors (man or woman, white or Black). A model reviewing a loan application can simply correct for that bias. When evaluating heart attack risk, however, there are specific health factors that can be predicted by race or gender.

4. Why is it important to educate data scientists in this area?

Data scientists need to be aware of potential issues and omit protected class information from model training sets whenever possible. This is very difficult to do in healthcare, because that information can be used to predict outcomes.

The data scientist needs to understand the likelihood that there will be a problem and be trained to recognize problematic patterns. This is also why it’s very important for data scientists to have some understanding of the medical or scientific domain about which they’re building a model.

They need to understand the context of the data they’re using and the predictions they’re making in order to judge whether a protected class driving an outcome is expected or unexpected.
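
One hedged way to build that awareness is to screen the remaining features for proxies of a protected class before training. The sketch below assumes a pandas DataFrame named df with a "race" column and an "outcome" label; those names, and the use of mutual information as the screening statistic, are illustrative choices rather than a prescribed method.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Candidate features with the protected attribute and the label removed.
X = pd.get_dummies(df.drop(columns=["race", "outcome"]))
# Encode the protected attribute as integer codes for the estimator.
y = df["race"].astype("category").cat.codes

# Mutual information between each feature and the protected class; high scores
# flag likely proxies (ZIP code is a common example) that deserve review by
# someone with domain knowledge rather than blind removal.
proxy_scores = pd.Series(mutual_info_classif(X, y), index=X.columns)
print(proxy_scores.sort_values(ascending=False).head(10))
```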

5. What tools are available to identify bias in AI/ML models, and how can an organization choose the right tool?

Tools like IBM Watson OpenScale, Amazon SageMaker Clarify, Google’s What-If Tool, and Microsoft’s Fairlearn are a great starting point for detecting bias in models during training, and some can also do so at runtime (including the ability to make corrections or identify changes in model behavior over time). Tools that combine bias detection with model explainability and observability are critical to bringing AI/ML into live clinical and non-clinical healthcare settings.
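
As a small, hedged illustration of what these libraries provide, the open-source Fairlearn package can break model metrics down by a sensitive attribute in a few lines. The y_true labels, y_pred predictions, and the df["race"] sensitive feature below are placeholders for an organization’s own data; the threshold at which a gap becomes actionable is a governance decision, not something the tool decides.

```python
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Accuracy and selection rate computed overall and per group of the sensitive feature.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=df["race"],
)
print(frame.by_group)      # per-group metrics
print(frame.difference())  # largest between-group gap for each metric

# A single summary number that monitoring pipelines often alert on.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=df["race"]))
```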

EXPLORE NOW: Diversity, Equity & Inclusion (DE&I) in Healthcare

Healthcare Leaders Turn to Us

Perficient is dedicated to enabling organizations to elevate diversity, equity, and inclusion within their companies. Our healthcare practice comprises experts who understand the unique challenges facing the industry. The 10 largest health systems and the 10 largest health insurers in the U.S. have counted on us to support their end-to-end digital success. Modern Healthcare has also recognized us as the fourth-largest healthcare IT consulting firm.

We bring pragmatic, strategically-grounded know-how to our clients’ initiatives. And our work gets attention – not only by industry groups that recognize and award our work but also by top technology partners that know our teams will reliably deliver complex, game-changing implementations. Most importantly, our clients demonstrate their trust in us by partnering with us again and again. We are incredibly proud of our 90% repeat business rate because it represents the trust and collaborative culture that we work so hard to build every day within our teams and with every client.

With more than 20 years of experience in the healthcare industry, Perficient is a trusted, end-to-end, global digital consultancy. Contact us to learn how we can help you plan and implement a successful DE&I initiative for your organization.

