Domain Adaptation for Speech Enhancement in a Large Domain Gap

Lior Frenkel, Jacob Goldberger, Shlomo E. Chazan

Research output: Contribution to journal › Conference article › Peer-reviewed

1 Scopus citation


Speech enhancement approaches based on neural networks aim to learn a noisy-to-clean transformation using a supervised learning paradigm. However, networks trained in this way may not be effective at handling languages and types of noise that were not present in the training data. To address this issue, this study focuses on unsupervised domain adaptation, specifically for large-domain-gap cases. In this setup, we have noisy speech data from the new domain, but the corresponding clean speech data are not available. We propose an adaptation method based on domain-adversarial training followed by iterative self-training, where the quality of the estimated speech used as pseudo labels is monitored via the performance of the adapted network on labeled data from the source domain. Experimental results show that our method effectively mitigates the domain mismatch between training and test sets and surpasses the current baseline.
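The monitored self-training loop described in the abstract can be illustrated schematically: the current model produces pseudo-clean targets for the unlabeled target-domain data, the model is retrained on source plus pseudo-labeled data, and an update is accepted only while performance on the labeled source set does not degrade. The sketch below is a deliberately minimal toy (a single-gain "denoiser" fit by least squares), not the paper's actual network or training procedure; all names and the acceptance threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "denoiser": a single gain w applied to the noisy input, y_hat = w * x.
def fit_gain(x, y):
    # Least-squares gain minimizing ||w*x - y||^2.
    return float(np.dot(x, y) / np.dot(x, x))

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

# Labeled source data: clean speech stands in as y_src = 0.5 * x_src.
x_src = rng.normal(size=200)
y_src = 0.5 * x_src
# Unlabeled target-domain noisy data (small simulated domain shift).
x_tgt = rng.normal(size=200) + 0.1

w = fit_gain(x_src, y_src)            # supervised training on the source domain
best_w, best_score = w, mse(w, x_src, y_src)

for _ in range(5):                    # iterative self-training rounds
    pseudo = w * x_tgt                # pseudo-clean labels from the current model
    x_all = np.concatenate([x_src, x_tgt])
    y_all = np.concatenate([y_src, pseudo])
    w_new = fit_gain(x_all, y_all)    # retrain on source + pseudo-labeled target
    score = mse(w_new, x_src, y_src)  # monitor quality on labeled source data
    if score <= best_score + 1e-6:    # accept only if source performance holds
        best_w, best_score = w_new, score
        w = w_new
    else:
        break                         # stop once pseudo labels start to hurt

print(round(best_w, 3))
```

The key design point mirrored here is the stopping criterion: because target-domain clean speech is unavailable, the labeled source set serves as the proxy for deciding when further self-training rounds would degrade the model.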

Original language: English
Pages (from-to): 2458-2462
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2023
Event: 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023

Bibliographical note

Publisher Copyright:
© 2023 International Speech Communication Association. All rights reserved.


Keywords

  • domain adaptation
  • domain shift
  • self-training
  • speech enhancement


