A. García-Galindo, M. López-De-Castro, R. Armañanzas

Today, the rapidly evolving landscape of machine learning often produces complex models that are deployed faster than they can be carefully designed to comply with ethical requirements such as non-discrimination. In the algorithmic fairness literature, the techniques that modify the predictions of a (biased) model to satisfy a given fairness criterion are commonly known as post-processing bias mitigation mechanisms. Nevertheless, most of these techniques come with no theoretical guarantees. In this work, we draw inspiration from recent uncertainty quantification approaches to present a post-hoc procedure that endows a predictive model with reliable algorithmic fairness guarantees. Specifically, we propose to calibrate a bias mitigation procedure that operates on top of any black-box algorithm, regardless of its internal form or the underlying data-generating process.
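The abstract does not specify the calibration procedure itself, but the general idea of risk-controlled post-processing can be illustrated with a minimal sketch. The code below is our own illustrative assumption, not the authors' method: on a held-out calibration set, it sweeps a group-dependent threshold shift `lam` for a black-box score and picks the first value whose empirical demographic-parity gap, plus a Hoeffding-style deviation term, falls below a tolerance `alpha`. All names, the binary-group setup, and the concentration bound are assumptions made for the sketch.

```python
import numpy as np

def calibrate_fairness_threshold(scores, groups, alpha=0.05, delta=0.1,
                                 lambdas=None):
    """Illustrative risk-control calibration for a post-processing step.

    scores  : calibration-set scores of a black-box classifier, in [0, 1]
    groups  : binary group labels (0 / 1) -- a simplifying assumption
    alpha   : tolerated demographic-parity gap
    delta   : failure probability of the guarantee
    Returns the smallest threshold shift lam on the grid whose upper
    confidence bound on the gap is below alpha, or None if none works.
    """
    if lambdas is None:
        lambdas = np.linspace(0.0, 0.5, 51)
    scores = np.asarray(scores)
    groups = np.asarray(groups)
    n0 = np.sum(groups == 0)
    n1 = np.sum(groups == 1)
    # Hoeffding-style slack so the empirical gap bounds the true gap
    # with probability at least 1 - delta (assumed, for illustration).
    slack = np.sqrt(np.log(2.0 / delta) / (2.0 * min(n0, n1)))
    # Positive-prediction rate of the untouched group at threshold 0.5.
    rate0 = np.mean(scores[groups == 0] >= 0.5)
    for lam in lambdas:
        # Lower the decision threshold for group 1 by lam.
        rate1 = np.mean(scores[groups == 1] >= 0.5 - lam)
        gap = abs(rate0 - rate1)
        if gap + slack <= alpha:
            return float(lam)
    return None
```

In a real procedure the grid search would need a multiple-testing correction (as in learning-then-testing frameworks); the early stop here is only a sketch of the selection step.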

Keywords: algorithmic fairness, bias mitigation, theoretical guarantees, risk control

Scheduled session: Big Data processing and analysis (TABiDa1)
June 10, 2025, 3:30 PM
Sala 3. Maria Rúbies Garrofé
