A. García-Galindo, M. López-De-Castro, R. Armañanzas
Today, the rapidly evolving landscape of ML often translates into the development of complex models that are deployed faster than they can be carefully designed to comply with ethical requirements such as non-discrimination. In the algorithmic fairness literature, the techniques that modify the predictions of a (biased) model so that it satisfies a given fairness criterion are commonly known as post-processing bias mitigation mechanisms. Nevertheless, most of these techniques do not come with theoretical guarantees. In this work, we draw inspiration from recent advances in uncertainty quantification to present a post-hoc procedure that endows a predictive model with reliable algorithmic fairness guarantees. Specifically, we propose to calibrate a bias mitigation procedure that operates on top of any black-box algorithm, regardless of its internal form or the underlying data-generating process.
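The abstract does not spell out the calibration procedure, but the "risk control" keyword suggests a distribution-free scheme in the spirit of Learn Then Test: on a held-out calibration set, select a post-processing parameter whose fairness risk is statistically certified to stay below a tolerance. The sketch below is a minimal, hypothetical illustration of that idea, assuming a two-group setting, the demographic-parity gap as the fairness risk, and a group-dependent threshold shift as the mitigation family; `postprocess`, the Hoeffding-based p-value, and the Bonferroni correction are illustrative choices, not the authors' actual method.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def hoeffding_pvalue(risk_hat, n, alpha):
    """Hoeffding-style p-value for H0: true risk > alpha, risk bounded in [0, 1].
    Simplification: treats the gap as if it were a mean of n bounded losses;
    a rigorous version would concentrate each group's rate separately."""
    if risk_hat >= alpha:
        return 1.0
    return float(np.exp(-2.0 * n * (alpha - risk_hat) ** 2))

def postprocess(scores, groups, lam):
    """Hypothetical mitigation family: shift the two groups' decision
    thresholds in opposite directions by lam (group 1 is assumed to
    receive higher scores on average)."""
    thresholds = np.where(groups == 0, 0.5 - lam, 0.5 + lam)
    return (scores >= thresholds).astype(int)

def calibrate(scores, groups, alpha=0.05, delta=0.1, grid=None):
    """Bonferroni-corrected search over a grid of mitigation parameters:
    certify every lam whose fairness risk is below alpha with probability
    at least 1 - delta, then return the least invasive certified lam."""
    grid = np.linspace(0.0, 0.5, 51) if grid is None else grid
    n = len(scores)
    certified = [
        lam for lam in grid
        if hoeffding_pvalue(
            demographic_parity_gap(postprocess(scores, groups, lam), groups),
            n, alpha) <= delta / len(grid)
    ]
    return min(certified) if certified else None  # None: nothing certifiable
```

Under these assumptions, any returned lam keeps the certified demographic-parity gap below alpha simultaneously over the whole grid; when the risk is monotone in lam, fixed-sequence testing is a tighter alternative to the Bonferroni correction.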
Keywords: algorithmic fairness, bias mitigation, theoretical guarantees, risk control
Scheduled
Big Data Processing and Analysis (TABiDa1)
June 10, 2025, 15:30
Room 3. Maria Rúbies Garrofé