TY - JOUR
T1 - Looking to listen at the cocktail party
T2 - A speaker-independent audio-visual model for speech separation
AU - Ephrat, Ariel
AU - Mosseri, Inbar
AU - Lang, Oran
AU - Dekel, Tali
AU - Wilson, Kevin
AU - Hassidim, Avinatan
AU - Freeman, William T.
AU - Rubinstein, Michael
N1 - Publisher Copyright:
© 2018 Copyright held by the owner/author(s).
PY - 2018
Y1 - 2018
AB - We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to "focus" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest).
KW - Audio-Visual
KW - BLSTM
KW - CNN
KW - Deep Learning
KW - Source Separation
KW - Speech Enhancement
UR - http://www.scopus.com/inward/record.url?scp=85056780840&partnerID=8YFLogxK
U2 - 10.1145/3197517.3201357
DO - 10.1145/3197517.3201357
M3 - Article
SN - 0730-0301
VL - 37
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 4
M1 - A73
ER -