Abstract
In this paper we present a unified time-frequency method for speaker extraction in clean and noisy conditions. Given a mixed signal and a reference signal, common approaches for extracting the desired speaker operate either in the time domain or in the frequency domain. In our approach, we propose a Siamese-Unet architecture that uses both representations. The Siamese encoders are applied in the frequency domain to infer the embeddings of the noisy and reference spectra, respectively. The concatenated representations are then fed into the decoder to estimate the real and imaginary components of the desired speaker, which are then inverse-transformed to the time domain. The model is trained with the Scale-Invariant Signal-to-Distortion Ratio (SI-SDR) loss to exploit the time-domain information. The time-domain loss is also regularized with a frequency-domain loss to preserve the speech patterns. Experimental results demonstrate that the unified approach is not only easy to train, but also provides superior results compared with Blind Source Separation (BSS) methods, as well as commonly used speaker extraction approaches.
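The abstract trains on the negative SI-SDR between the resynthesized time-domain estimate and the clean target. As an illustration of that objective (not the authors' code), the standard SI-SDR definition can be sketched as follows: signals are zero-meaned, the target is rescaled by its optimal scalar projection onto the estimate, and whatever remains is treated as distortion.

```python
import numpy as np

def si_sdr_loss(estimate, target, eps=1e-8):
    """Negative Scale-Invariant SDR (in dB) between time-domain signals.

    A minimal sketch of the common SI-SDR formulation; the paper's exact
    loss (and its frequency-domain regularizer) may differ in detail.
    """
    # Remove DC offset so the measure is offset-invariant
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Optimal scaling of the target toward the estimate (scale invariance)
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    target_scaled = alpha * target
    # Everything not explained by the scaled target counts as distortion
    noise = estimate - target_scaled
    si_sdr = 10.0 * np.log10(
        (np.dot(target_scaled, target_scaled) + eps)
        / (np.dot(noise, noise) + eps)
    )
    return -si_sdr  # minimizing the negative maximizes SI-SDR
```

Because of the projection step, a perfectly scaled copy of the target scores near-perfectly, which is the "scale-invariant" property that makes the loss robust to arbitrary gain in the resynthesized waveform.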
Original language | English |
---|---|
Title of host publication | 30th European Signal Processing Conference, EUSIPCO 2022 - Proceedings |
Publisher | European Signal Processing Conference, EUSIPCO |
Pages | 762-766 |
Number of pages | 5 |
ISBN (Electronic) | 9789082797091 |
State | Published - 2022 |
Event | 30th European Signal Processing Conference, EUSIPCO 2022 - Belgrade, Serbia Duration: 29 Aug 2022 → 2 Sep 2022 |
Publication series
Name | European Signal Processing Conference |
---|---|
Volume | 2022-August |
ISSN (Print) | 2219-5491 |
Conference
Conference | 30th European Signal Processing Conference, EUSIPCO 2022 |
---|---|
Country/Territory | Serbia |
City | Belgrade |
Period | 29/08/22 → 2/09/22 |
Bibliographical note
Publisher Copyright: © 2022 European Signal Processing Conference, EUSIPCO. All rights reserved.
Funding
This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 871245.
Funders | Funder number |
---|---|
Horizon 2020 Framework Programme | 871245 |