Sensitivity of Bayesian Inference to Data Deletion and Replication
R. Naveiro, M. Carreau, W. N. Caballero
Adversarial machine learning (AML) has shown that statistical models are vulnerable to data manipulation, yet most studies focus on classical methods. We extend white-box poisoning attacks to Bayesian inference, demonstrating its susceptibility to strategic data manipulations. Our attacks, based on selective deletion and replication of observations, can steer the Bayesian posterior toward a desired distribution—even without an analytical posterior form.
We establish their theoretical properties and empirically validate them in synthetic and real-world scenarios. Interestingly, in some cases, modifying a small fraction of carefully chosen data points leads to drastic shifts in inference.
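The abstract's core idea — steering a posterior by deleting a few well-chosen observations — can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a greedy deletion heuristic on a conjugate Normal–Normal model (where the posterior mean is available in closed form), with an assumed adversarial target and deletion budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean(data, prior_mean=0.0, prior_var=100.0, noise_var=1.0):
    """Closed-form posterior mean for x_i ~ N(mu, noise_var), mu ~ N(prior_mean, prior_var)."""
    n = len(data)
    post_precision = 1.0 / prior_var + n / noise_var
    return (prior_mean / prior_var + sum(data) / noise_var) / post_precision

clean = rng.normal(0.0, 1.0, size=100).tolist()  # data generated under mu = 0
target = 1.0          # value the adversary wants the posterior mean pushed toward
before = posterior_mean(clean)

# Greedy deletion: at each step, remove the single observation whose
# removal moves the posterior mean closest to the target.
data = list(clean)
for _ in range(10):   # deletion budget: 10% of the sample
    best = min(range(len(data)),
               key=lambda i: abs(posterior_mean(data[:i] + data[i + 1:]) - target))
    del data[best]
after = posterior_mean(data)

print(f"posterior mean: {before:.3f} -> {after:.3f} (target {target})")
```

Deleting only the 10 observations that most oppose the target already shifts the posterior mean noticeably toward it, illustrating the paper's point that small, carefully chosen edits can produce large inferential shifts. In non-conjugate models the posterior mean would have to be estimated (e.g. via MCMC) rather than computed in closed form.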
Keywords: Bayesian inference, MCMC, adversarial machine learning
Scheduled
Interdisciplinary applications of Bayesian methods
June 10, 2025, 15:30
Press room (MR 13)
Other talks in the same session
J. M. Camacho Rodríguez, R. Naveiro, D. Rios Insua
D. Corrales Alonso, D. Ríos Insua
P. García Arce, R. Naveiro Flores, D. Ríos Insua