FaMeSumm: Investigating and Improving Faithfulness of Medical Summarization
- Authors
- Nan Zhang, Yusen Zhang, Wu Guo, Prasenjit Mitra, Rui Zhang
- Abstract
Summaries of medical text must be faithful, i.e., consistent and factual with respect to source inputs, an important but understudied requirement for safety and efficiency in healthcare. In this paper, we investigate and improve faithfulness across a broad range of medical summarization tasks. Our investigation reveals that current summarization models often produce unfaithful outputs for medical input text. We then introduce FaMeSumm, a framework that improves faithfulness by fine-tuning pre-trained language models with medical knowledge. FaMeSumm performs contrastive learning on designed sets of faithful and unfaithful summaries, and it incorporates medical terms and their contexts to encourage faithful generation of medical terms. We conduct comprehensive experiments on three datasets in two languages: health question and radiology report summarization datasets in English, and a patient-doctor dialogue dataset in Chinese. Results demonstrate that FaMeSumm is flexible and effective, delivering consistent improvements over mainstream language models such as BART, T5, mT5, and PEGASUS, and yielding state-of-the-art performance on both faithfulness and general-quality metrics. Human evaluation by doctors also shows that FaMeSumm generates more faithful outputs. Our code is available at github.com/psunlpgroup/FaMeSumm.
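The abstract's contrastive-learning component can be illustrated with a minimal sketch. This is not the authors' implementation: the model choice (facebook/bart-base), the margin value, and the helper names (mean_log_likelihood, training_loss) are illustrative assumptions, and the construction of unfaithful negatives is left abstract. It assumes a Hugging Face seq2seq model and pairs each source with one faithful and one unfaithful summary.

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def mean_log_likelihood(source: str, summary: str) -> torch.Tensor:
    # Mean per-token log-likelihood of `summary` given `source`.
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(summary, return_tensors="pt", truncation=True).input_ids
    out = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                labels=labels)
    return -out.loss  # out.loss is the mean token-level cross-entropy

def training_loss(source: str, faithful: str, unfaithful: str,
                  margin: float = 1.0) -> torch.Tensor:
    # Standard fine-tuning loss on the faithful (reference) summary, plus a
    # hinge term pushing its likelihood above the unfaithful variant.
    ll_pos = mean_log_likelihood(source, faithful)
    ll_neg = mean_log_likelihood(source, unfaithful)
    contrastive = torch.clamp(margin - (ll_pos - ll_neg), min=0.0)
    return -ll_pos + contrastive

The abstract's second component, incorporating medical terms and their contexts, is not modeled in this sketch; it would add a term-aware objective on top of this loss.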
- Organizational unit(s)
- Forschungszentrum L3S
- External organization(s)
- Pennsylvania State University
- Zhengzhou University
- Type
- Conference paper
- Pages
- 10915-10931
- Number of pages
- 17
- Publication date
- 2023
- Publication status
- Published
- Peer-reviewed
- Yes
- ASJC Scopus subject areas
- Computational Theory and Mathematics, Computer Science Applications, Information Systems, Linguistics and Language
- Sustainable Development Goals
- SDG 3 – Good Health and Well-being
- Electronic version(s)
- https://doi.org/10.18653/v1/2023.emnlp-main.673 (Access: Open)