Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis
- Authors
- Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, Björn W. Schuller
- Abstract
Respiratory sound classification is an important tool for remote screening of respiratory-related diseases such as pneumonia, asthma, and COVID-19. To facilitate the interpretability of classification results, especially those based on deep learning, many explanation methods using prototypes have been proposed. However, existing explanation techniques often assume that the data are unbiased and that the prediction results can be explained by a set of prototypical examples. In this work, we develop a unified example-based explanation method for selecting both representative data (prototypes) and outliers (criticisms). In particular, we propose a novel application of adversarial attacks to generate an explanation spectrum of data instances via an iterative fast gradient sign method. Such a unified explanation can avoid over-generalisation and bias by allowing human experts to assess the model's mistakes case by case. We performed a wide range of quantitative and qualitative evaluations to show that our approach generates effective and understandable explanations and is robust across many deep learning models.
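The iterative fast gradient sign method (I-FGSM) named in the abstract is a standard adversarial attack. The sketch below is a generic, illustrative PyTorch implementation and is not taken from the paper; the function name and the hyperparameters (epsilon, alpha, num_steps) are assumptions for demonstration only.

```python
# Minimal sketch of the iterative fast gradient sign method (I-FGSM),
# assuming a PyTorch classifier; names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def ifgsm_attack(model, x, y, epsilon=0.03, alpha=0.005, num_steps=10):
    """Generate an adversarial example for input x with true label y.

    epsilon  : maximum L-infinity perturbation budget
    alpha    : step size per iteration
    num_steps: number of signed-gradient steps
    """
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step in the direction of the gradient sign, then project back
        # into the epsilon-ball around the original input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon)
    return x_adv.detach()
```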
- Organisational unit(s)
-
Forschungszentrum L3S
- External organisation(s)
-
Imperial College London
Griffith University Queensland
Universität Augsburg
- Type
- Conference paper
- Pages
- 4003-4007
- Number of pages
- 5
- Publication date
- 2022
- Publication status
- Published
- Peer-reviewed
- Yes
- ASJC Scopus subject areas
- Language and Linguistics, Human-Computer Interaction, Signal Processing, Software, Modelling and Simulation
- Sustainable Development Goals
- SDG 3 – Good Health and Well-being
- Electronic version(s)
-
https://doi.org/10.48550/arXiv.2203.16141 (Access: Open)
https://doi.org/10.21437/Interspeech.2022-11355 (Access: Open)