Enhancing Interpretability of Machine Learning Models over Knowledge Graphs
- Authored by
- Yashrajsinh Chudasama, Disha Purohit, Philipp D. Rohde, Maria Esther Vidal
- Abstract
Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration showcases the potential of Semantic Web technologies to enhance the interpretability of AI. Incorporating an interpretability layer makes ML models more reliable, providing decision-makers with deeper insight into a model's decision-making process. InterpretME documents the execution of an ML pipeline as factual statements in the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are available in both human- and machine-readable formats, facilitating symbolic reasoning over a model's outcomes. Following the Linked Data principles, InterpretME links entities in the InterpretME KG to their counterparts in existing KGs, thus enriching the contextual information of the InterpretME KG entities.
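To make the idea of "documenting an ML run as factual statements" concrete, below is a minimal sketch in Python using rdflib. It is not the authors' actual implementation: the namespace http://example.org/interpretme/, the class itme:MLRun, the properties itme:model, itme:hyperparameter_maxDepth, itme:interpretsEntity, itme:topFeature, and the owl:sameAs target IRI are all hypothetical placeholders standing in for the real InterpretME ontology.

```python
# A minimal sketch (hypothetical vocabulary, not the actual InterpretME
# ontology) of recording one ML pipeline execution as RDF statements and
# linking a local entity to a counterpart in an existing KG via owl:sameAs.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, XSD

ITME = Namespace("http://example.org/interpretme/")  # placeholder namespace

g = Graph()
g.bind("itme", ITME)
g.bind("owl", OWL)

# Factual statements about one pipeline execution.
run = ITME["run/42"]
g.add((run, RDF.type, ITME.MLRun))
g.add((run, ITME.model, Literal("DecisionTreeClassifier")))
g.add((run, ITME.hyperparameter_maxDepth, Literal(5, datatype=XSD.integer)))

# A local interpretation for one target entity of the run.
entity = ITME["entity/patient-001"]
g.add((run, ITME.interpretsEntity, entity))
g.add((entity, ITME.topFeature, Literal("age")))

# Linked Data principle: connect the local entity to its counterpart in an
# existing KG (the target IRI here is purely illustrative).
g.add((entity, OWL.sameAs, URIRef("http://example.org/hospitalKG/patient/001")))

# Machine-readable access: query the traced hyperparameters with SPARQL.
q = """
SELECT ?p ?v WHERE {
  <http://example.org/interpretme/run/42> ?p ?v .
  FILTER(STRSTARTS(STR(?p), "http://example.org/interpretme/hyperparameter_"))
}"""
for p, v in g.query(q):
    print(p, v)
```

Once a run is described this way, standard SPARQL engines and reasoners can be applied to the traced metadata, which is what makes the model's decision process queryable alongside existing domain knowledge.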
- Organisation(s)
- L3S Research Centre
- Institute of Data Science
- External Organisation(s)
- German National Library of Science and Technology (TIB)
- Type
- Conference contribution
- No. of pages
- 5
- Publication date
- 2023
- Publication status
- Published
- Peer reviewed
- Yes
- ASJC Scopus subject areas
- Computer Science (all)
- Sustainable Development Goals
- SDG 3 - Good Health and Well-being
- Electronic version(s)
- https://ceur-ws.org/Vol-3526/paper-05.pdf (Access: Open)