Abstract
The speech sounds of spoken language are produced by varying
the configuration of the articulators surrounding the vocal
tract. They carry abundant information that can be used to
better understand the underlying mechanisms of human speech
production. We propose a novel deep neural network-based
learning framework that interprets the acoustic information
in a variable-length sequence of vocal tract shaping during
speech production, captured by real-time magnetic resonance
imaging (rtMRI), and translates it into text. The proposed
framework comprises spatiotemporal convolutions, a recurrent
network, and the connectionist temporal classification (CTC)
loss, trained entirely end-to-end (see the sketch following
the abstract).
On the USC-TIMIT corpus, the model achieved a 40.6\% phoneme
error rate (PER) at the sentence level, a substantial
improvement over existing models.
To the best of our knowledge, this is the first study to
demonstrate the recognition of entire spoken sentences from
an individual's articulatory motions captured in rtMRI
video. We also analyzed variations in articulatory geometry
across sub-regions of the vocal tract (i.e., the pharyngeal,
velar and dorsal, hard palate, and labial constriction
regions) with respect to different emotions and genders. The
results suggest that the distortion of each sub-region is
affected by both emotion and gender.
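
For intuition, the following is a minimal, illustrative PyTorch sketch of the kind of pipeline the abstract describes: 3D (spatiotemporal) convolutions over an rtMRI clip, a recurrent network over the resulting per-frame features, and CTC training. All layer sizes, class counts, and names here are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch: 3D conv front-end + bidirectional GRU + CTC loss,
# trained end-to-end on rtMRI video. Shapes and hyperparameters are
# illustrative assumptions, not the paper's reported architecture.
import torch
import torch.nn as nn

class RtMRIToText(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        # Spatiotemporal feature extractor over (time, height, width).
        self.conv = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool space, keep time
            nn.Conv3d(32, 64, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),   # keep the time dimension
        )
        # Recurrent network over the sequence of per-frame feature vectors.
        self.rnn = nn.GRU(64 * 4 * 4, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)  # incl. CTC blank

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 1, frames, height, width)
        f = self.conv(video)                     # (B, C, T, 4, 4)
        f = f.permute(0, 2, 1, 3, 4).flatten(2)  # (B, T, C*16)
        h, _ = self.rnn(f)                       # (B, T, 2*hidden)
        return self.fc(h).log_softmax(-1)        # CTC expects log-probs

# Toy end-to-end training step with the CTC loss on fake data.
model = RtMRIToText(num_classes=41)          # e.g. 40 phonemes + blank
video = torch.randn(2, 1, 75, 68, 68)        # two fake rtMRI clips
log_probs = model(video).permute(1, 0, 2)    # CTCLoss wants (T, B, C)
targets = torch.randint(1, 41, (2, 20))      # unsegmented phoneme labels
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((2,), log_probs.size(0), dtype=torch.long),
    target_lengths=torch.full((2,), 20, dtype=torch.long),
)
loss.backward()
```

The CTC loss is what makes the fully end-to-end setup possible: it aligns the per-frame predictions with an unsegmented target phoneme sequence, so no frame-level labels of the rtMRI video are required.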