DDKtor: Automatic Diadochokinetic Speech Analysis

Yael Segal, Kasia Hitczenko, Matthew Goldrick, Adam Buchwald, Angela Roberts, Joseph Keshet

Research output: Contribution to journal › Conference article › peer-review

2 Scopus citations

Abstract

Diadochokinetic speech tasks (DDK), in which participants repeatedly produce syllables, are commonly used as part of the assessment of speech motor impairments. These assessments rely on manual analyses that are time-intensive, subjective, and provide only a coarse-grained picture of speech. This paper presents two deep neural network models that automatically segment consonants and vowels from unannotated, untranscribed speech. Both models operate on the raw waveform and use convolutional layers for feature extraction. The first model feeds these features to an LSTM classifier followed by fully connected layers, while the second adds further convolutional layers followed by fully connected layers. The segmentations predicted by the models are used to obtain measures of speech rate and sound duration. Results on a dataset of young, healthy individuals show that our LSTM model outperforms current state-of-the-art systems and performs comparably to trained human annotators. Moreover, the LSTM model also remains comparable to trained human annotators when evaluated on an unseen dataset of older individuals with Parkinson's Disease.
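The conv-plus-LSTM architecture described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation: the layer sizes, the three-way consonant/vowel/silence labeling, and the `DDKSegmenter` name are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DDKSegmenter(nn.Module):
    """Illustrative frame-level segmenter: convolutional feature extraction
    over the raw waveform, followed by a BiLSTM and a fully connected
    classifier. All hyperparameters are assumptions, not the paper's."""
    def __init__(self, n_classes: int = 3):  # e.g. consonant / vowel / silence
        super().__init__()
        self.features = nn.Sequential(
            # kernel 160, stride 80 gives ~10 ms frames at 16 kHz
            nn.Conv1d(1, 64, kernel_size=160, stride=80),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, 128, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) raw waveform, no transcription needed
        x = self.features(wav.unsqueeze(1))   # (batch, 128, frames)
        x, _ = self.lstm(x.transpose(1, 2))   # (batch, frames, 256)
        return self.classifier(x)             # per-frame class logits

model = DDKSegmenter()
logits = model(torch.randn(2, 16000))  # two examples of 1 s audio at 16 kHz
```

The per-frame class predictions can then be collapsed into consonant and vowel segments, from which speech-rate and duration measures follow directly.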

Original language: English
Pages (from-to): 4611-4615
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
DOIs
State: Published - 2022
Externally published: Yes
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
Duration: 18 Sep 2022 – 22 Sep 2022

Bibliographical note

Publisher Copyright:
Copyright © 2022 ISCA.

Keywords

  • DDK
  • Deep neural networks
  • Diadochokinetic speech
  • Parkinson's Disease
  • Voice onset time
  • Vowel duration
