Abstract
This paper presents work on multimodal communication with an anthropomorphic agent. It focuses on the processing of multimodal input and output employing natural language and gestures in virtual environments. On the input side, we describe our approach to recognizing and interpreting co-verbal gestures used for pointing, object manipulation, and object description. On the output side, we present the agent's utterance generation module, which produces coordinated speech and gestures.