P. Morala Miguélez, J. A. Cifuentes Quintero, R. E. Lillo Rodríguez, I. Úcar Marqués

Machine learning models are increasingly used in critical applications, making interpretability essential. However, most explainability methods only explain variable importance and contributions, often neglecting possible interaction effects between variables. In this study, we compare extensions of SHAP-based methods (such as the Shapley Taylor Interaction Index, FaithSHAP and n-Shapley values) with an alternative approach that employs the NN2Poly method as a local surrogate. To measure interaction detection, a polynomial data simulation benchmark is proposed with varying settings of correlation, noise and number of variables, together with ranking-based interaction importance metrics. This evaluation procedure is a step toward building more reliable interpretability benchmarks, which are often lacking in this area of research.
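
As an illustration of the evaluation procedure outlined above, the minimal sketch below (Python) simulates correlated Gaussian features, builds a polynomial response with a known interaction structure, and ranks pairwise interactions by their mean absolute SHAP interaction values, comparing the estimated ranking with the ground truth via a rank correlation. All names, coefficients and settings are illustrative assumptions and not the actual benchmark of the study; the SHAP interaction values here merely stand in for any of the interaction indices being compared.

import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
import shap

rng = np.random.default_rng(0)

def simulate_polynomial_data(n=2000, p=5, rho=0.3, noise_sd=0.1):
    """Correlated Gaussian features and a polynomial response with a
    known pairwise interaction structure (illustrative settings only)."""
    # Equicorrelated covariance: a single knob (rho) controls correlation strength.
    cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    # Ground-truth polynomial: main effects plus two interaction terms of
    # different strength, so the true interaction ranking is known.
    true_interactions = {(0, 1): 2.0, (2, 3): 0.5}
    y = X[:, 0] + 0.5 * X[:, 4] ** 2
    for (i, j), coef in true_interactions.items():
        y += coef * X[:, i] * X[:, j]
    y += rng.normal(scale=noise_sd, size=n)
    return X, y, true_interactions

X, y, true_interactions = simulate_polynomial_data()

# Fit a tree ensemble so TreeExplainer can return pairwise SHAP interaction values.
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X, y)
inter = shap.TreeExplainer(model).shap_interaction_values(X[:500])  # shape (n, p, p)

# Estimated interaction importance: mean |interaction value| per variable pair.
p = X.shape[1]
est = {(i, j): np.abs(inter[:, i, j]).mean() for i in range(p) for j in range(i + 1, p)}

# Rank correlation between estimated importances and true coefficients (zero elsewhere).
pairs = sorted(est)
truth = [true_interactions.get(pair, 0.0) for pair in pairs]
print("rank correlation:", spearmanr([est[pair] for pair in pairs], truth)[0])
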

Keywords: Interpretability, ML, AI

Scheduled

Big Data processing and analysis (TABiDa1)
June 10, 2025, 3:30 PM
Sala 3. Maria Rúbies Garrofé

