
Incorporating Domain Knowledge for Learning Interpretable Features

Melchior, Martin

Abstract (English):

Deep Learning has seen enormous success in recent years. In several application domains, prediction models with remarkable accuracy could be trained, sometimes using large datasets. Often, these models have huge numbers of parameters and are regarded by domain experts as hard-to-understand black boxes, and hence as less valuable or not trustworthy. As a result, we observe a demand for better interpretability in several application domains. This demand can also be seen as arising from the fact that the formulation of the underlying problem is incomplete and that certain important aspects are disregarded. Interpretability is required particularly in domains with high demands on safety or fairness, or, for example, in the natural sciences, where these techniques are applied with the aim of knowledge discovery. Alternatively, the gap in the problem formulation can be compensated for by incorporating a priori domain knowledge into the model. In this article, we highlight the importance of further advancing the techniques that support interpretability or the mechanisms for incorporating domain knowledge into machine learning approaches. …


Publisher's version
DOI: 10.5445/IR/1000150142
Published on 09.09.2022
Associated KIT institution(s): Institut für Wirtschaftsinformatik und Marketing (IISM)
Publication type: Journal article
Publication year: 2022
Language: English
Identifier: ISSN 2363-9881
KITopen ID: 1000150142
Published in: Archives of Data Science, Series A (Online First)
Publisher: KIT Scientific Publishing
Volume: 8
Issue: 2
Pages: 14 pp. (online)