
User-centric explainable AI: design and evaluation of an approach to generate coherent counterfactual explanations for structured data

Förster, Maximilian 1; Hühn, Philipp; Klier, Mathias; Kluge, Kilian
1 Institut für Wirtschaftsinformatik (WIN), Karlsruher Institut für Technologie (KIT)

Abstract:

Many Artificial Intelligence (AI) systems are black boxes, which hinders their deployment. Explainable AI (XAI) approaches that automatically generate counterfactual explanations aim to assist users in scrutinising AI decisions. One property of explanations crucial for their acceptance by users is coherence: users perceive counterfactual explanations as coherent if they present a realistic, typical counterfactual scenario that is suitable to explain the factual situation. We design an optimisation-based approach to generate coherent counterfactual explanations applicable to structured data. We demonstrate its applicability and rigorously evaluate its efficacy through functionally grounded and human-grounded evaluation. Results suggest that our approach indeed produces counterfactual explanations that users perceive as coherent. More specifically, they are perceived as more realistic, typical, and feasible than state-of-the-art explanations.
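To illustrate the general idea of optimisation-based counterfactual generation for structured data (this is a generic sketch in the spirit of Wachter et al.'s gradient-based formulation, not the authors' specific method; the toy credit model, feature values, and all parameter names are hypothetical), one can search for a nearby input whose predicted class flips, trading off prediction loss against an L1 proximity penalty that keeps the counterfactual close to the factual instance:

```python
import math

# Hypothetical sketch (not the paper's algorithm): gradient descent on
#   loss(x') = (f(x') - target)^2 + lam * ||x' - x||_1
# for a logistic model f, yielding a counterfactual x' near the factual x.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Logistic model: probability of the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.5, steps=500):
    """Perturb x until the model's output approaches `target`."""
    xp = list(x)
    for _ in range(steps):
        p = predict(w, b, xp)
        g = 2.0 * (p - target) * p * (1.0 - p)  # chain rule through sigmoid
        for i in range(len(xp)):
            # prediction-loss gradient plus L1 proximity penalty
            xp[i] -= lr * (g * w[i] + lam * sign(xp[i] - x[i]))
    return xp

# Toy credit model (illustrative): approve when income outweighs debt.
w, b = [1.0, -1.0], 0.0
x = [1.0, 2.0]                 # rejected applicant: predict(w, b, x) < 0.5
xcf = counterfactual(x, w, b)  # counterfactual crosses the 0.5 boundary
```

The L1 penalty encourages sparse changes (few features altered), one common proxy for plausibility; the paper's contribution concerns the stronger notion of coherence, which such a distance term alone does not guarantee.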


Original publication
DOI: 10.1080/12460125.2022.2119707
Affiliated KIT institute(s): Institut für Wirtschaftsinformatik (WIN)
Publication type: Journal article
Publication date: 25.10.2023
Language: English
Identifier: ISSN 1246-0125, 2116-7052
KITopen-ID: 1000192131
Published in: Journal of Decision Systems
Publisher: Lavoisier
Volume: 32
Issue: 4
Pages: 700–731
Published online ahead of print: 06.09.2022
Keywords: Explainable Artificial Intelligence; counterfactual explanations; coherent explanations; user study
Indexed in: Scopus, OpenAlex