Fair Conformal Prediction

From ISLAB/CAISR
Title Fair Conformal Prediction
Summary Our goal is to design algorithms within the conformal prediction framework that make fair predictions across groups defined by, e.g., age, sex, or income.
Keywords fair machine learning, bias, fairness
TimeFrame Fall 2022
References https://ojs.aaai.org/index.php/AAAI/article/view/21459
Prerequisites
Author
Supervisor Ece Calikus
Level Master
Status Open


The goal of fairness in machine learning is to design algorithms that make fair predictions across various demographic groups. Conformal prediction [1] is a technique for quantifying the uncertainty of predictions produced by a machine learning model. In particular, given an input, conformal prediction produces a prediction interval in regression problems and a set of candidate classes in classification problems. Both the prediction intervals and the prediction sets are guaranteed to cover the true value with high probability.
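To make the idea concrete, below is a minimal sketch of split conformal regression in Python. The model choice, the synthetic data, and the miscoverage level `alpha` are illustrative assumptions, not part of the project specification.

```python
# Minimal sketch of split conformal regression (assumed setup: synthetic data,
# a generic scikit-learn regressor, target miscoverage alpha = 0.1).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + 0.5 * rng.normal(size=2000)

# Split into a proper training set and a calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

alpha = 0.1  # 90% target coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new input x: [f(x) - q, f(x) + q].
x_new = rng.normal(size=(1, 5))
pred = model.predict(x_new)[0]
print(f"interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

The resulting intervals cover the true value with probability at least 1 - alpha on average over the whole population, which is exactly the guarantee this project aims to refine per subgroup.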

Our goal is to use this approach to assess differences in prediction uncertainty (i.e., bias) among subpopulations defined by sensitive attributes (e.g., race, sex, income, age) and to extend it so that it provides equal coverage across different subgroups; a sketch of one such extension follows. Potential applications for this project include healthcare, education, and the environment.
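One way to obtain equal coverage across subgroups is group-conditional (Mondrian) conformal prediction, where a separate calibration quantile is computed per group. The sketch below is an assumed illustration with a synthetic binary sensitive attribute, not the project's prescribed method.

```python
# Sketch of group-conditional (Mondrian) conformal regression: each group gets
# its own calibration quantile, so coverage holds within every group, not just
# on average. Data, model, and the binary group attribute are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, size=n)       # sensitive attribute with two subgroups
X = rng.normal(size=(n, 3))
noise = np.where(group == 1, 2.0, 0.5)   # one group is harder to predict
y = X[:, 0] + noise * rng.normal(size=n)

X_tr, X_cal, y_tr, y_cal, g_tr, g_cal = train_test_split(
    X, y, group, test_size=0.5, random_state=1)

model = LinearRegression().fit(X_tr, y_tr)
scores = np.abs(y_cal - model.predict(X_cal))  # nonconformity scores

alpha = 0.1
q_per_group = {}
for g in np.unique(g_cal):
    s = scores[g_cal == g]
    m = len(s)
    level = min(np.ceil((m + 1) * (1 - alpha)) / m, 1.0)
    q_per_group[g] = np.quantile(s, level, method="higher")

# Each group gets its own interval width, equalizing coverage across groups.
x_new, g_new = rng.normal(size=(1, 3)), 1
pred = model.predict(x_new)[0]
q = q_per_group[g_new]
print(f"group {g_new} interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

Calibrating per group typically widens intervals for harder-to-predict subpopulations, which is the mechanism by which equal coverage is restored.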

[1] Shafer, G. and Vovk, V., 2008. A Tutorial on Conformal Prediction. Journal of Machine Learning Research, 9(3).