P. Morala Miguélez, J. A. Cifuentes Quintero, R. E. Lillo Rodríguez, I. Úcar Marqués
Machine learning models are increasingly used in critical applications, making interpretability essential. However, most explainability methods only quantify individual variable importances and contributions, often neglecting possible interaction effects between variables. In this study, we compare extensions of SHAP-based methods (such as the Shapley Taylor Interaction Index, FaithSHAP and n-Shapley values) with an alternative approach that employs the NN2Poly method as a local surrogate. To measure interaction detection, we propose a polynomial data simulation benchmark with varying settings of correlation, noise and number of variables, together with ranking-based interaction importance metrics. This evaluation procedure is a step toward building more reliable benchmarks for interpretability, an aspect often lacking in this area of research.
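As a rough illustration of the kind of benchmark described above, the sketch below (hypothetical names, settings and scoring rule, not the authors' code) simulates correlated features with a known polynomial ground truth containing pairwise interactions, and compares an interaction ranking against that ground truth with Kendall's tau.

```python
import numpy as np
from scipy.stats import kendalltau

# Illustrative sketch only: simulate correlated features and a polynomial
# response with known interaction coefficients, then score an interaction
# ranking against the ground truth. All names and settings are assumptions.
rng = np.random.default_rng(0)
p, n, rho, noise_sd = 4, 1000, 0.5, 0.1

# Correlated Gaussian features with common pairwise correlation rho
cov = np.full((p, p), rho) + (1 - rho) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

# Ground-truth polynomial: main effects plus two pairwise interactions
true_interactions = {(0, 1): 2.0, (2, 3): 0.5}
y = X @ np.array([1.0, -1.0, 0.5, 0.0])
for (i, j), beta in true_interactions.items():
    y += beta * X[:, i] * X[:, j]
y += rng.normal(0, noise_sd, size=n)

# Naive stand-in for a method's interaction scores: |corr(x_i * x_j, y)|
pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
true_strength = [abs(true_interactions.get(pr, 0.0)) for pr in pairs]
est_strength = [abs(np.corrcoef(X[:, i] * X[:, j], y)[0, 1]) for i, j in pairs]

# Rank agreement between true and estimated interaction importance orderings
tau, _ = kendalltau(true_strength, est_strength)
print(f"Kendall tau between true and estimated interaction rankings: {tau:.2f}")
```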
Keywords: Interpretability, ML, AI
Scheduled
Big Data Processing and Analysis (TABiDa1)
June 10, 2025, 15:30
Room 3. Maria Rúbies Garrofé