Abstract
This article presents a modular approach to incorporating multimodal, gesture- and speech-driven interaction into virtual reality systems. Based on existing techniques for modelling VR applications, the overall task is separated into distinct problem categories: from sensor synchronisation to a high-level description of crossmodal temporal and semantic coherences, a set of solution concepts is presented that fits seamlessly into both the static (scene-graph-based) representation and the dynamic (render loop and immersion) aspects of a real-time application. The developed framework establishes a connecting layer between raw sensor data and a general functional description of multimodal, scene-context-related evaluation procedures for VR setups.
As an example of these concepts, their implementation in a system for virtual construction is described.
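To make the idea of a "connecting layer" between raw sensor data and frame-synchronous multimodal evaluation more concrete, the following C++ sketch shows one possible shape such a layer could take: timestamped gesture and speech events are buffered asynchronously and fused once per render-loop frame when they fall within a temporal coherence window. All type and member names (`SensorEvent`, `MultimodalIntegrator`, the 500 ms window) are hypothetical illustrations, not the paper's actual framework or API.

```cpp
// Hypothetical sketch of a connecting layer between sensor input and the
// render loop; names and structure are assumptions, not the paper's API.
#include <cmath>
#include <cstddef>
#include <deque>
#include <iostream>
#include <string>

struct SensorEvent {
    std::string modality;   // e.g. "gesture" or "speech"
    std::string token;      // recognised symbol, e.g. "pointing"
    double      timestamp;  // seconds on a synchronised sensor clock
};

class MultimodalIntegrator {
public:
    explicit MultimodalIntegrator(double windowSec) : window_(windowSec) {}

    // Called by the gesture/speech recogniser front ends as data arrives.
    void push(const SensorEvent& e) { buffer_.push_back(e); }

    // Called once per frame from the render loop: pair events from different
    // modalities that are temporally coherent and hand them on to the
    // scene-context-related evaluation (here simply printed).
    void evaluate(double frameTime) {
        for (std::size_t i = 0; i < buffer_.size(); ++i)
            for (std::size_t j = i + 1; j < buffer_.size(); ++j)
                if (buffer_[i].modality != buffer_[j].modality &&
                    std::abs(buffer_[i].timestamp - buffer_[j].timestamp) <= window_)
                    std::cout << "fuse: " << buffer_[i].token << " + "
                              << buffer_[j].token << "\n";
        // Discard events too old to be fused with anything still to come.
        while (!buffer_.empty() && frameTime - buffer_.front().timestamp > window_)
            buffer_.pop_front();
    }

private:
    double window_;
    std::deque<SensorEvent> buffer_;
};

int main() {
    MultimodalIntegrator integrator(0.5);  // assumed 500 ms coherence window
    integrator.push({"speech",  "take that screw", 1.00});
    integrator.push({"gesture", "pointing",        1.20});
    integrator.evaluate(1.30);  // one render-loop tick
    return 0;
}
```

The per-frame `evaluate` call reflects the abstract's point that multimodal processing must fit the dynamic render-loop side of a real-time application, while the buffered events stand in for the raw, synchronised sensor data on the static, scene-graph side.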