Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis

authored by
Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, Björn W. Schuller
Abstract

Respiratory sound classification is an important tool for remote screening of respiratory-related diseases such as pneumonia, asthma, and COVID-19. To facilitate the interpretability of classification results, especially those produced by deep learning, many explanation methods based on prototypes have been proposed. However, existing explanation techniques often assume that the data are unbiased and that the prediction results can be explained by a set of prototypical examples. In this work, we develop a unified example-based explanation method that selects both representative data (prototypes) and outliers (criticisms). In particular, we propose a novel application of adversarial attacks to generate an explanation spectrum of data instances via an iterative fast gradient sign method. Such a unified explanation can avoid over-generalisation and bias by allowing human experts to assess the model's mistakes case by case. We performed a wide range of quantitative and qualitative evaluations to show that our approach generates effective and understandable explanations and is robust across many deep learning models.
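The abstract refers to the iterative fast gradient sign method (I-FGSM) used to generate the explanation spectrum. Below is a minimal sketch of the standard I-FGSM update only, not the authors' implementation: the toy linear model, the loss, and all parameter values (`eps`, `alpha`, `steps`) are illustrative assumptions.

```python
import numpy as np

def ifgsm(x, y, grad_fn, eps=0.05, alpha=0.02, steps=5):
    """Standard iterative FGSM sketch: repeatedly step the input in the
    sign of the loss gradient, projecting the perturbation back into an
    L-infinity ball of radius eps around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)                      # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)         # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
    return x_adv

# Toy stand-in for a model: linear scorer with squared-error loss,
# so the input gradient has the closed form (w.x - y) * w.
w = np.array([1.0, -2.0, 0.5])

def grad_fn(x, y):
    return (w @ x - y) * w

x = np.array([0.2, 0.1, -0.3])
x_adv = ifgsm(x, y=0.0, grad_fn=grad_fn)
```

In the paper's setting the loss gradient would come from the trained respiratory-sound classifier (e.g. via backpropagation to the input spectrogram) rather than from this closed-form toy model.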

Organisation(s)
L3S Research Centre
External Organisation(s)
Imperial College London
Griffith University Queensland
University of Augsburg
Type
Conference contribution
Pages
4003-4007
No. of pages
5
Publication date
2022
Publication status
Published
Peer reviewed
Yes
ASJC Scopus subject areas
Language and Linguistics, Human-Computer Interaction, Signal Processing, Software, Modelling and Simulation
Sustainable Development Goals
SDG 3 - Good Health and Well-being
Electronic version(s)
https://doi.org/10.48550/arXiv.2203.16141 (Access: Open)
https://doi.org/10.21437/Interspeech.2022-11355 (Access: Open)