R. Naveiro, M. Carreau, W. N. Caballero

Adversarial machine learning (AML) has shown that statistical models are vulnerable to data manipulation, yet most studies focus on classical methods. We extend white-box poisoning attacks to Bayesian inference, demonstrating its susceptibility to strategic data manipulation. Our attacks, based on selective deletion and replication of observations, can steer the Bayesian posterior toward a desired distribution, even when the posterior has no analytical form.

We establish their theoretical properties and empirically validate them in synthetic and real-world scenarios. Interestingly, in some cases, modifying a small fraction of carefully chosen data points leads to drastic shifts in inference.
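As an illustration of the deletion-based mechanism described above, the following is a minimal sketch, not the authors' algorithm, of a greedy deletion attack on a conjugate normal-normal model: the attacker repeatedly removes the observation whose deletion pulls the posterior mean closest to a target value. All function names, priors, and parameters here are hypothetical choices for the example.

import numpy as np

def posterior_mean(x, mu0=0.0, tau0=1.0, sigma=1.0):
    # Posterior mean under a N(mu, sigma^2) likelihood with a N(mu0, tau0^2) prior.
    n = len(x)
    precision = 1.0 / tau0**2 + n / sigma**2
    return (mu0 / tau0**2 + np.sum(x) / sigma**2) / precision

def greedy_deletion_attack(x, target, budget):
    # Remove up to `budget` observations; at each step drop the point whose
    # removal moves the posterior mean closest to the attacker's target.
    x = list(x)
    for _ in range(budget):
        gaps = [abs(posterior_mean(x[:i] + x[i+1:]) - target) for i in range(len(x))]
        best = int(np.argmin(gaps))
        if gaps[best] >= abs(posterior_mean(x) - target):
            break  # no single deletion improves the attack objective
        x.pop(best)
    return np.array(x)

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)
poisoned = greedy_deletion_attack(data, target=0.5, budget=10)
print(posterior_mean(data), posterior_mean(poisoned))

A replication-based attack would follow the same greedy pattern, duplicating rather than deleting the most influential observations; in non-conjugate settings the posterior summaries would have to be approximated, e.g. via MCMC, at each step.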

Keywords: Bayesian inference, MCMC, adversarial machine learning

Scheduled

Interdisciplinary applications of Bayesian methods
June 10, 2025  3:30 PM
Sala de prensa (MR 13)

