Abstract
We address voice activity detection in acoustic environments containing transients and stationary noises, which often occur in real-life scenarios. We exploit unique spatial patterns of speech and non-speech audio frames by independently learning their underlying geometric structure. This is done through a deep encoder-decoder neural network architecture, in which an encoder maps spectral features with temporal information to their low-dimensional representations, generated by applying the diffusion maps method. The encoder feeds a decoder that maps the embedded data back into the high-dimensional space. A deep neural network, trained to separate speech from non-speech frames, is obtained by concatenating the decoder to the encoder, resembling the known diffusion nets architecture. Experimental results show enhanced performance compared to competing voice activity detection methods. The improvement is achieved in accuracy, robustness, and generalization ability. Our model operates in real time and can be integrated into audio-based communication systems. We also present a batch algorithm that achieves even higher accuracy for offline applications.
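To make the described architecture concrete, below is a minimal sketch of a diffusion-nets-style encoder-decoder, assuming PyTorch, a precomputed diffusion-maps embedding as the encoder target, and MSE losses; all layer sizes, names, and the per-class decision rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): an encoder-decoder whose encoder
# is fit to diffusion-maps coordinates and whose decoder reconstructs the
# original spectral features, in the spirit of diffusion nets.
import torch
import torch.nn as nn


class DiffusionNetVAD(nn.Module):
    def __init__(self, feat_dim=63, embed_dim=3):
        super().__init__()
        # Encoder: spectral features with temporal context -> low-dimensional embedding
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )
        # Decoder: embedding -> reconstruction in the original feature space
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        z = self.encoder(x)       # low-dimensional representation
        x_hat = self.decoder(z)   # reconstruction of the input features
        return z, x_hat


def training_losses(model, x, z_dm):
    """Encoder loss: match the precomputed diffusion-maps coordinates z_dm.
    Decoder loss: reconstruct the input frames x from the embedding."""
    z, x_hat = model(x)
    return nn.functional.mse_loss(z, z_dm), nn.functional.mse_loss(x_hat, x)
```

Following the abstract, one such network could be trained independently per class (speech and non-speech) to learn each geometric structure; a test frame would then be assigned to the class whose network represents it better, e.g., by comparing reconstruction errors. This decision rule is an assumption for illustration only.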
Original language | English
---|---
Article number | 8681421
Pages (from-to) | 254-264
Number of pages | 11
Journal | IEEE Journal on Selected Topics in Signal Processing
Volume | 13
Issue number | 2
DOIs | 10.1109/JSTSP.2019.2909472
State | Published - May 2019
Externally published | Yes
Bibliographical note
Publisher Copyright: © 2007-2012 IEEE.
Funding
Manuscript received September 9, 2018; revised February 4, 2019 and March 26, 2019; accepted April 2, 2019. Date of publication April 4, 2019; date of current version May 16, 2019. This work was supported in part by the Israel Science Foundation under Grant 576/16 and in part by the ISF-NSFC Joint Research Program under Grant 2514/17. The guest editor coordinating the review of this paper and approving it for publication was Dr. Bo Li. (Corresponding author: Amir Ivry.) The authors are with the Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel (e-mail: [email protected]; [email protected]; icohen@ee.technion.ac.il). Digital Object Identifier 10.1109/JSTSP.2019.2909472
Funders | Funder number
---|---
ISF-NSFC Joint Research Program | 2514/17
Israel Science Foundation | 576/16
Keywords
- Deep learning
- diffusion maps
- voice activity detection