Inproceedings

Text-to-visual speech synthesis based on parameter generation from HMM

T. Masuko, T. Kobayashi, M. Tamura, J. Masubuchi, and K. Tokuda.
Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 6, pages 3745–3748. Seattle, WA, USA, May 1998.
DOI: 10.1109/ICASSP.1998.679698

Abstract

This paper presents a new technique for synthesizing visual speech from arbitrarily given text. The technique is based on an algorithm for parameter generation from HMM with dynamic features, which has been successfully applied to text-to-speech synthesis. In the training phase, syllable HMMs are trained on visual speech parameter sequences that represent lip movements. In the synthesis phase, a sentence HMM is constructed by concatenating the syllable HMMs corresponding to the phonetic transcription of the input text. An optimum visual speech parameter sequence is then generated from the sentence HMM in the maximum-likelihood (ML) sense. The proposed technique generates lip movements synchronized with speech in a unified framework, and coarticulation is implicitly incorporated into the generated mouth shapes. As a result, the synthetic lip motion is smooth and realistic.
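For readers unfamiliar with the underlying algorithm, below is a minimal NumPy sketch of ML parameter generation from an HMM with dynamic (delta) features, assuming a fixed state sequence, diagonal covariances, and a simple first-order delta window. All names and the example dimensions are illustrative, not taken from the paper.

import numpy as np

def generate_trajectory(mu, var, delta_win=(-0.5, 0.0, 0.5)):
    # mu, var: (T, 2*d) per-frame Gaussian means/variances over
    # stacked [static; delta] features, read off the chosen HMM
    # state sequence. Solves (W' U^-1 W) c = W' U^-1 mu for the
    # static trajectory c, so that both static and delta
    # statistics are matched in the ML sense.
    T, two_d = mu.shape
    d = two_d // 2
    W = np.zeros((2 * T * d, T * d))  # maps c to [static; delta]
    for t in range(T):
        for i in range(d):
            W[t * 2 * d + i, t * d + i] = 1.0        # static row
            for k, w in zip((-1, 0, 1), delta_win):  # delta row
                tau = min(max(t + k, 0), T - 1)      # clamp at edges
                W[t * 2 * d + d + i, tau * d + i] += w
    u_inv = 1.0 / var.reshape(-1)   # diagonal precisions
    m = mu.reshape(-1)
    A = W.T @ (u_inv[:, None] * W)  # W' U^-1 W
    b = W.T @ (u_inv * m)           # W' U^-1 mu
    return np.linalg.solve(A, b).reshape(T, d)  # smooth static track

# Example: 40 frames of a 2-dimensional lip parameter (e.g. width/height).
T, d = 40, 2
rng = np.random.default_rng(0)
mu = rng.normal(size=(T, 2 * d))
var = np.full((T, 2 * d), 0.1)
c = generate_trajectory(mu, var)
print(c.shape)  # (40, 2)

Because the delta rows of W couple neighboring frames, the solved static trajectory is smoothed across frame boundaries, which is the mechanism by which the paper's generated mouth shapes implicitly capture coarticulation.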
