Driving Synthetic Mouth Gestures: Phonetic Recognition for FaceMe!

William Goldenthal, Keith Waters, Jean Manuel Van Thong, Oren Glickman

Research output: Contribution to conference › Paper › peer-review


Abstract

The goal of this work is to use phonetic recognition to drive a synthetic image with speech. Phonetic units are identified by the phonetic recognition engine and mapped to mouth gestures, known as visemes, the visual counterpart of phonemes. The acoustic waveform and visemes are then sent to a synthetic image player, called FaceMe!, where they are rendered synchronously. This paper provides background on the core technologies involved in this process and describes asynchronous and synchronous prototypes of a combined phonetic recognition/FaceMe! system, which we use to render mouth gestures on an animated face.
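The pipeline sketched in the abstract — recognize timed phonetic units, map each to a viseme, and emit a synchronized event stream for the player — can be illustrated with a small sketch. The mapping table, event format, and function names below are hypothetical, chosen only for illustration; the paper's actual phoneme set, viseme inventory, and FaceMe! interface are not specified here.

```python
# Illustrative sketch of the phoneme-to-viseme mapping step.
# The table and event format are assumptions, not the FaceMe! API.

# A coarse many-to-one phoneme -> viseme table (hypothetical grouping).
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "iy": "spread", "ih": "spread",
    "aa": "open", "ao": "open",
    "uw": "rounded", "ow": "rounded",
    "sil": "closed",
}

def phonemes_to_visemes(segments):
    """Map timed phonetic segments (phoneme, start_s, end_s) to viseme
    events that a renderer could play back in sync with the audio."""
    events = []
    for phoneme, start, end in segments:
        viseme = PHONEME_TO_VISEME.get(phoneme, "closed")  # default mouth shape
        # Merge adjacent segments that share the same viseme.
        if events and events[-1][0] == viseme and events[-1][2] == start:
            events[-1] = (viseme, events[-1][1], end)
        else:
            events.append((viseme, start, end))
    return events

# Example: hypothetical recognizer output for a short utterance ("mama").
segments = [("sil", 0.0, 0.1), ("m", 0.1, 0.2), ("aa", 0.2, 0.35),
            ("m", 0.35, 0.45), ("aa", 0.45, 0.6)]
print(phonemes_to_visemes(segments))
```

Because several phonemes share one mouth shape, the mapping is many-to-one, and merging adjacent identical visemes keeps the event stream compact for synchronous playback.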

Original language: English
Pages: 1995-1998
Number of pages: 4
State: Published - 1997
Externally published: Yes
Event: 5th European Conference on Speech Communication and Technology, EUROSPEECH 1997 - Rhodes, Greece
Duration: 22 Sep 1997 - 25 Sep 1997


Bibliographical note

Publisher Copyright:
© 1997 5th European Conference on Speech Communication and Technology, EUROSPEECH 1997. All rights reserved.
