TY - JOUR
T1 - Discrete Audio Tokens
T2 - More Than a Survey!
AU - Mousavi, Pooneh
AU - Maimon, Gallil
AU - Moumen, Adel
AU - Petermann, Darius
AU - Shi, Jiatong
AU - Wu, Haibin
AU - Yang, Haici
AU - Kuznetsova, Anastasia
AU - Ploujnikov, Artem
AU - Marxer, Ricard
AU - Ramabhadran, Bhuvana
AU - Elizalde, Benjamin
AU - Lugosch, Loren
AU - Li, Jinyu
AU - Subakan, Cem
AU - Woodland, Phil
AU - Kim, Minje
AU - Lee, Hung-yi
AU - Watanabe, Shinji
AU - Adi, Yossi
AU - Ravanelli, Mirco
N1 - Publisher Copyright:
© 2025, Transactions on Machine Learning Research. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Discrete audio tokens are compact representations that aim to preserve perceptual quality, phonetic content, and speaker characteristics while enabling efficient storage and inference, as well as competitive performance across diverse downstream tasks. They provide a practical alternative to continuous features, enabling the integration of speech and audio into modern large language models (LLMs). As interest in token-based audio processing grows, various tokenization methods have emerged, and several surveys have reviewed the latest progress in the field. However, existing studies often focus on specific domains or tasks and lack a unified comparison across various benchmarks. This paper presents a systematic review and benchmark of discrete audio tokenizers, covering three domains: speech, music, and general audio. We propose a taxonomy of tokenization approaches based on encoder-decoder, quantization techniques, training paradigm, streamability, and application domains. We evaluate tokenizers on multiple benchmarks for reconstruction, downstream performance, and acoustic language modeling, and analyze trade-offs through controlled ablation studies. Our findings highlight key limitations, practical considerations, and open challenges, providing insight and guidance for future research in this rapidly evolving area. For more information, including our main results and tokenizer database, please refer to our website: https://poonehmousavi.github.io/dates-website/.
UR - https://www.scopus.com/pages/publications/105016605430
M3 - Article
AN - SCOPUS:105016605430
SN - 2835-8856
VL - 2025
JO - Transactions on Machine Learning Research
JF - Transactions on Machine Learning Research
ER -