Generative Spoken Dialogue Language Modeling

Tu Anh Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Elkahky, Paden Tomasello, Robin Algayres, Benoît Sagot, Abdelrahman Mohamed, Emmanuel Dupoux

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

We introduce dGSLM, the first "textless" model able to generate audio samples of naturalistic spoken dialogues. It uses recent work on unsupervised spoken unit discovery, coupled with a dual-tower transformer architecture with cross-attention, trained on 2,000 hours of two-channel raw conversational audio (Fisher dataset) without any text or labels. We show that our model is able to generate speech, laughter, and other paralinguistic signals in the two channels simultaneously, and reproduces more naturalistic and fluid turn-taking compared to a text-based cascaded model.
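The dual-tower design mentioned in the abstract can be pictured as two transformer decoders, one per speaker channel, each predicting its own stream of discrete speech units while cross-attending to the other speaker's stream. The PyTorch sketch below illustrates one such cross-attending layer under stated assumptions: the module and parameter names are illustrative, the unit vocabulary size (e.g., ~500 k-means clusters of HuBERT features) is indicative, and this is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionLayer(nn.Module):
    """One tower's layer: causal self-attention over its own channel,
    then cross-attention over the other speaker's channel.
    Hypothetical sketch, not the dGSLM reference implementation."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, own, other, causal_mask):
        # Causal self-attention within this speaker's unit stream.
        h, _ = self.self_attn(own, own, own, attn_mask=causal_mask)
        own = self.norm1(own + h)
        # Cross-attention: queries from this channel, keys/values from the
        # other speaker's time-aligned stream (also causally masked).
        h, _ = self.cross_attn(own, other, other, attn_mask=causal_mask)
        own = self.norm2(own + h)
        return self.norm3(own + self.ffn(own))

# Two weight-shared towers predict the next discrete unit per channel.
vocab, d_model, T = 500, 512, 32   # assumed unit inventory and toy lengths
embed = nn.Embedding(vocab, d_model)
layer = CrossAttentionLayer(d_model)
head = nn.Linear(d_model, vocab)

units_a = torch.randint(vocab, (1, T))  # speaker A's discrete unit stream
units_b = torch.randint(vocab, (1, T))  # speaker B's discrete unit stream
mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)

xa, xb = embed(units_a), embed(units_b)
logits_a = head(layer(xa, xb, mask))  # next-unit logits for channel A
logits_b = head(layer(xb, xa, mask))  # next-unit logits for channel B
```

Because both channels are modeled jointly over time-aligned unit streams, overlapping events such as backchannels and laughter can be generated in one channel while the other is still speaking, which is what enables the fluid turn-taking the abstract describes.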

Original language: English
Pages (from-to): 250-266
Number of pages: 17
Journal: Transactions of the Association for Computational Linguistics
Volume: 11
DOIs
State: Published - 14 Mar 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2023 Association for Computational Linguistics.

Funding

In this work, E.D. in his academic role (EHESS, ENS-PSL, CNRS) was supported by the Agence Nationale pour la Recherche (ANR-17-EURE-0017 Frontcog, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute) and a grant from CIFAR (Learning in Machines and Brains). B.S. was also supported by the Agence Nationale pour la Recherche (ANR-19-P3IA-0001 PRAIRIE 3IA Institute).

Funders
Agence Nationale de la Recherche: ANR-10-IDEX-0001-02 PSL, ANR-17-EURE-0017, ANR-19-P3IA-0001
Canadian Institute for Advanced Research
