Preprints, Working Papers, ... Year: 2023

Explainability is NOT a Game

Abstract

Explainable artificial intelligence (XAI) aims to help human decision-makers understand complex machine learning (ML) models. One of the hallmarks of XAI is the use of measures of relative feature importance, which are theoretically justified through Shapley values. This paper builds on recent work and offers a simple argument for why Shapley values can provide misleading measures of relative feature importance: they can assign more importance to features that are irrelevant for a prediction, and less importance to features that are relevant for it. The significance of these results is that they effectively challenge the many proposed uses of measures of relative feature importance in a fast-growing range of high-stakes application domains.

CCS Concepts

• Computing methodologies → Artificial intelligence; Machine learning algorithms; Machine learning; • Theory of computation → Automated reasoning.
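The Shapley-value attributions the abstract refers to can be illustrated with a small brute-force sketch. This is not the paper's construction: the model, the chosen instance, and the assumption of independent, uniformly distributed Boolean features are all illustrative, and the characteristic function used here is the standard conditional-expectation one.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, point, n):
    """Exact Shapley values for model f at `point`, assuming n independent
    Boolean features, each uniform over {0, 1} (brute force; small n only)."""
    feats = list(range(n))

    def v(S):
        # Characteristic function: expected value of f when the features in S
        # are fixed to their values at `point` and the rest vary uniformly.
        free = [i for i in feats if i not in S]
        total = 0
        for bits in range(2 ** len(free)):
            x = list(point)
            for k, i in enumerate(free):
                x[i] = (bits >> k) & 1
            total += f(x)
        return total / (2 ** len(free))

    phi = [0.0] * n
    for i in feats:
        for r in range(n):
            for S in combinations([j for j in feats if j != i], r):
                # Shapley weight |S|! (n - |S| - 1)! / n! for coalition S.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Example: f(x) = x0 AND (x1 OR x2) at the instance (1, 1, 0).
phi = shapley_values(lambda x: x[0] and (x[1] or x[2]), [1, 1, 0], 3)
```

By the efficiency property, the attributions sum to f(point) minus the expected value of f. Note that the paper's argument concerns a logic-based notion of feature (ir)relevance, grounded in abductive explanations; this sketch only computes the game-theoretic scores that the paper scrutinizes, not that notion of relevance.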
Main file
msh-submission-jun23.pdf (172.82 KB)
Origin: Files produced by the author(s)
Licence: CC BY - Attribution

Dates and versions

hal-04154767, version 1 (06-07-2023)
hal-04154767, version 2 (07-07-2023)

Identifiers

  • HAL Id: hal-04154767, version 1

Cite

Joao Marques-Silva, Xuanxiang Huang. Explainability is NOT a Game. 2023. ⟨hal-04154767v1⟩