PhD student Solvejg Wastvedt developed a framework and new metrics to measure the fairness of risk prediction models with support from Innovative Methods & Data Science (IMDS) co-Director Julian Wolfson, PhD, and IMDS member Jared Huling, PhD.

The motivation for this study was the need for tools to assess the fairness of risk prediction models used in healthcare. Without such assessment, biased risk prediction models intended to personalize treatment can worsen health inequities.

While methods for measuring algorithmic fairness already exist, they are difficult to apply in healthcare settings.

These methods often focus on a single way of grouping people, such as by race or gender, which fails to capture how these groupings intersect and shape the discrimination a person experiences both within and outside the healthcare delivery system.

Risk prediction models are also used to guide clinical treatment, which introduces statistical complications that existing fairness measures do not address.

This project is the first known work to develop fairness metrics that address both of these issues. The research team proposed three metrics for measuring fairness, along with a framework of estimation and inference tools for applying them.

Their main approach is counterfactual fairness, which draws on techniques from causal inference.

Instead of basing their fairness metrics on observed data, in which some patients are treated and some are not, the researchers used causal inference techniques to estimate hypothetical outcomes under no treatment. Fairness can then be measured with respect to the same no-treatment baseline for all patients, ensuring the algorithm accurately predicts patients' needs regardless of current treatment assignment patterns.
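To make the general idea concrete, here is a minimal sketch of one way a no-treatment baseline can be constructed and compared across groups. It is not the study's specific metrics or estimators: the synthetic data, the plug-in outcome-regression step (fitting an outcome model on untreated patients and predicting for everyone), and the per-group error comparison are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Toy data: covariates x1, x2; a group label; treatment; observed outcome.
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "group": rng.choice(["A", "B"], size=n),
})
# Treatment assignment and outcomes depend on covariates (illustrative only).
p_treat = 1 / (1 + np.exp(-(0.5 * df.x1 - 0.3 * df.x2)))
df["treated"] = rng.binomial(1, p_treat)
p_outcome = 1 / (1 + np.exp(-(df.x1 + 0.5 * df.x2 - 1.0 * df.treated)))
df["outcome"] = rng.binomial(1, p_outcome)

# Step 1: fit an outcome model on UNTREATED patients only, then predict for
# everyone -- a plug-in estimate of each patient's counterfactual risk
# under no treatment (valid only under standard causal assumptions).
untreated = df[df.treated == 0]
outcome_model = LogisticRegression().fit(
    untreated[["x1", "x2"]], untreated.outcome
)
df["risk_no_treatment"] = outcome_model.predict_proba(df[["x1", "x2"]])[:, 1]

# Step 2: scores from some existing risk prediction model being audited
# (a deliberately simplified stand-in here).
df["model_score"] = 1 / (1 + np.exp(-(0.8 * df.x1)))

# Step 3: compare the model's scores to the shared no-treatment baseline
# within each group; a large gap in one group flags potential unfairness.
gap = (
    df.assign(err=df.model_score - df.risk_no_treatment)
      .groupby("group")["err"]
      .mean()
)
print(gap)
```

The key point the sketch illustrates is that every patient is judged against the same hypothetical no-treatment baseline, so differences in who currently receives treatment do not distort the fairness comparison.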

Read more about the tools Wastvedt developed here.

The IMDS program provides support to researchers like Wastvedt to develop new data science methods and work with new forms of multimodal health data.