Direct error rate minimization of hidden Markov models

Joseph Keshet, Chih Chieh Cheng, Mark Stoehr, David McAllester

Research output: Contribution to journal › Conference article › peer-review


Abstract

We explore discriminative training of HMM parameters that directly minimizes the expected error rate. In discriminative training one is interested in training a system to minimize a desired error function, such as word error rate, phone error rate, or frame error rate. We review a recent method (McAllester, Hazan and Keshet, 2010), which introduces an analytic expression for the gradient of the expected error rate. The analytic expression leads to a perceptron-like update rule, which is adapted here for online training of HMMs. While the proposed method can work with any error function used in speech recognition, we evaluate it on the TIMIT phoneme recognition task, with frame error rate as the error function used during training. Except for the case of GMMs with a single mixture component per state, the proposed update rule provides lower error rates, both in terms of frame error rate and phone error rate, than competing approaches, including MCE and large-margin training.
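The perceptron-like update can be sketched concretely. In the direct loss minimization framework of McAllester, Hazan and Keshet (2010), the gradient of the expected error is approximated by the feature difference between the model's standard prediction and a loss-augmented prediction, scaled by 1/ε. The following is a minimal sketch of the away-from-worse variant of that update; the frame-wise linear classifier, synthetic data, and all function names are illustrative assumptions, not the paper's HMM implementation:

```python
# Hypothetical sketch of the perceptron-like "direct error" update
# (away-from-worse variant). A frame-wise linear classifier with frame
# error (Hamming) loss stands in for the paper's HMM setting.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_feats = 3, 5
W = np.zeros((n_classes, n_feats))           # one weight vector per phone class

def decode(W, X, eps=0.0, gold=None):
    """Frame-wise argmax decoding; with eps > 0 the scores are augmented
    by the frame error (Hamming) loss against the gold labels."""
    scores = X @ W.T                          # (frames, classes)
    if eps > 0.0:
        scores += eps                         # +eps for every wrong label...
        scores[np.arange(len(gold)), gold] -= eps   # ...but not for the gold one
    return scores.argmax(axis=1)

def direct_error_update(W, X, gold, eta=0.1, eps=0.5):
    """One online update: move toward the standard decode and away from
    the loss-augmented ("worse") decode, scaled by eta / eps."""
    y_hat = decode(W, X)                      # standard prediction
    y_dir = decode(W, X, eps=eps, gold=gold)  # loss-augmented prediction
    for t in range(len(gold)):
        if y_dir[t] != y_hat[t]:              # feature difference is zero otherwise
            W[y_hat[t]] += (eta / eps) * X[t]
            W[y_dir[t]] -= (eta / eps) * X[t]
    return W

def sample_utterance(n_frames):
    """Synthetic 'utterance': frame features whose mean depends on the class."""
    gold = rng.integers(0, n_classes, size=n_frames)
    X = rng.normal(size=(n_frames, n_feats)) + np.eye(n_classes, n_feats)[gold]
    return X, gold

for _ in range(200):                          # online pass over toy utterances
    X, gold = sample_utterance(20)
    W = direct_error_update(W, X, gold)

X, gold = sample_utterance(1000)
print("frame error rate:", np.mean(decode(W, X) != gold))
```

In the full HMM setting the two decodes would be Viterbi passes, with the loss-augmented pass folding the per-frame loss into the emission scores; as ε → 0 the scaled feature difference recovers the exact gradient of the expected error rate.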

Original language: English
Pages (from-to): 449-452
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2011
Externally published: Yes
Event: 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011 - Florence, Italy
Duration: 27 Aug 2011 - 31 Aug 2011

Keywords

  • Automatic speech recognition
  • Direct error minimization
  • Discriminative training
  • Hidden Markov models
  • Minimum frame error
  • Minimum phone error
  • Online learning
