Evaluation of deep-learning-based voice activity detectors and room impulse response models in reverberant environments

Abstract
State-of-the-art deep-learning-based voice activity detectors (VADs) are often trained on anechoic data. However, real acoustic environments are generally reverberant, which causes performance to deteriorate significantly. To mitigate this mismatch between training data and real data, we simulate an augmented training set that contains nearly five million utterances. This set comprises anechoic utterances and their reverberant modifications, generated by convolving the anechoic utterances with a variety of room impulse responses (RIRs). We consider five different models for generating RIRs and five different VADs trained on the augmented training set, and we test all trained systems in three different real reverberant environments. Experimental results show an average increase of 20% in accuracy, precision, and recall for all detectors and RIR models, compared with anechoic training. Furthermore, one of the RIR models consistently yields better performance than the other models for all tested VADs, and one of the VADs consistently outperforms the other VADs in all experiments.
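The augmentation described in the abstract — convolving an anechoic utterance with an RIR to obtain its reverberant modification — can be sketched as follows. This is a minimal illustration, not the authors' pipeline; the function name, truncation to the original length, and peak-level matching are assumptions made here for clarity.

```python
import numpy as np

def augment_with_rir(utterance: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Generate a reverberant copy of an anechoic utterance by
    convolution with a room impulse response (RIR).

    Illustrative sketch: truncates the convolution tail to the
    original length and rescales to the anechoic peak amplitude
    (both choices are assumptions, not from the paper).
    """
    # Full linear convolution, then truncate to the utterance length
    reverberant = np.convolve(utterance, rir)[: len(utterance)]
    # Match the peak level of the anechoic original
    peak = np.max(np.abs(reverberant))
    if peak > 0:
        reverberant = reverberant * (np.max(np.abs(utterance)) / peak)
    return reverberant
```

In an augmented training set of this kind, each anechoic utterance would be paired with one or more such reverberant copies, with RIRs drawn from the chosen RIR generation model.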
| Original language | English |
|---|---|
| Title of host publication | 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 406-410 |
| Number of pages | 5 |
| ISBN (Electronic) | 9781509066315 |
| DOIs | |
| State | Published - May 2020 |
| Externally published | Yes |
| Event | 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain Duration: 4 May 2020 → 8 May 2020 |
Publication series
| Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
|---|---|
| Volume | 2020-May |
| ISSN (Print) | 1520-6149 |
Conference
| Conference | 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 |
|---|---|
| Country/Territory | Spain |
| City | Barcelona |
| Period | 4/05/20 → 8/05/20 |
Bibliographical note
Publisher Copyright: © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Funding
This work was supported by the Israel Science Foundation (grant no. 576/16) and the ISF-NSFC joint research program (grant no. 2514/17).
| Funders | Funder number |
|---|---|
| ISF-NSFC | 2514/17 |
| Israel Science Foundation | 576/16 |
Keywords
- Deep learning
- Reverberation
- Room impulse response
- Voice activity detection