Abstract
In this paper, we review the major approaches to multimodal human-computer interaction (MMHCI), giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling as well as multimodal fusion, highlighting challenges, open issues, and emerging applications for MMHCI research.