DDKtor: Automatic Diadochokinetic Speech Analysis

Yael Segal, Kasia Hitczenko, Matthew Goldrick, Adam Buchwald, Angela Roberts, Joseph Keshet

Research output: Contribution to journal › Conference article › peer-review


Abstract

Diadochokinetic speech tasks (DDK), in which participants repeatedly produce syllables, are commonly used as part of the assessment of speech motor impairments. These assessments rely on manual analyses that are time-intensive, subjective, and provide only a coarse-grained picture of speech. This paper presents two deep neural network models that automatically segment consonants and vowels from unannotated, untranscribed speech. Both models operate on the raw waveform and use convolutional layers for feature extraction. The first model is based on an LSTM classifier followed by fully connected layers, while the second adds more convolutional layers followed by fully connected layers. The segmentations predicted by the models are used to obtain measures of speech rate and sound duration. Results on a dataset of young, healthy individuals show that our LSTM model outperforms the current state-of-the-art systems and performs comparably to trained human annotators. Moreover, the LSTM model achieves results comparable to trained human annotators when evaluated on an unseen dataset of older individuals with Parkinson's Disease.
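The abstract describes the architecture only at a high level. Below is a minimal sketch, in PyTorch, of what such a raw-waveform segmenter and its downstream DDK measures could look like. It is not the authors' released implementation: the layer sizes, strides, the three-class label set (silence/consonant/vowel), and the helper ddk_measures are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of an LSTM-based DDK segmenter:
# strided 1-D convolutions extract frame-level features from the raw
# waveform, a bidirectional LSTM plus fully connected layers classify each
# frame, and the resulting segmentation yields rate and duration measures.

import torch
import torch.nn as nn

NUM_CLASSES = 3  # assumed labels: 0 = silence, 1 = consonant, 2 = vowel


class DDKSegmenter(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Convolutional feature extractor over the raw waveform
        # (16 kHz sampling rate assumed).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=80, stride=16), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_CLASSES),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) -> per-frame class logits (batch, frames, 3)
        feats = self.encoder(wav.unsqueeze(1))     # (batch, 128, frames)
        seq, _ = self.lstm(feats.transpose(1, 2))  # (batch, frames, 2*hidden)
        return self.classifier(seq)


def ddk_measures(frame_labels, frame_sec):
    """Turn a per-frame label sequence into duration and rate measures."""
    segments = []  # run-length encode: (label, duration in seconds)
    start = 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((frame_labels[start], (i - start) * frame_sec))
            start = i
    vowels = [dur for lab, dur in segments if lab == 2]
    total = len(frame_labels) * frame_sec
    return {
        # One vowel segment per syllable is used as a rate proxy here.
        "syllables_per_sec": len(vowels) / total if total else 0.0,
        "mean_vowel_dur_sec": sum(vowels) / len(vowels) if vowels else 0.0,
    }


if __name__ == "__main__":
    model = DDKSegmenter()
    logits = model(torch.randn(1, 16000))          # one second of audio
    labels = logits.argmax(dim=-1)[0].tolist()
    print(ddk_measures(labels, frame_sec=0.004))
```

With the strides assumed above (16, 2, 2), each output frame covers 64 samples, i.e. 4 ms at 16 kHz, which is where frame_sec=0.004 in the usage example comes from.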

Original language: English
Pages (from-to): 4611-4615
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
DOIs
State: Published - 2022
Externally published: Yes
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
Duration: 18 Sep 2022 – 22 Sep 2022

Bibliographical note

Publisher Copyright:
Copyright © 2022 ISCA.

Funding

This work is supported by the Ministry of Science & Technology, Israel (Y. Segal); U.S. National Institutes of Health (NIH; grants R21MH119677, K01DC014298, R01DC018589); and the Ontario Brain Institute with matching funds provided by participating hospitals, the Windsor/Essex County ALS Association, and the Temerty Family Foundation. The opinions, results, and conclusions are those of the authors, and no endorsement by the Ontario Brain Institute or NIH is intended or should be inferred. Thanks to Hung-Shao Cheng, Rosemary Dong, Katerina Alexopoulos, Camila Hirani, and Jasmine Tran for help in data collection and processing.

Funders (with funder numbers where available)

• Windsor/Essex County ALS Association
• National Institutes of Health: R21MH119677, K01DC014298, R01DC018589
• Ontario Brain Institute
• Ministry of Science and Technology, Israel
• Temerty Family Foundation

Keywords

• DDK
• Deep neural networks
• Diadochokinetic speech
• Parkinson's Disease
• Voice onset time
• Vowel duration
