A General Framework for Multimodal Interaction in Virtual Reality Systems: PrOSA
M. Latoschik. The Future of VR and AR Interfaces - Multimodal, Humanoid, Adaptive and Intelligent. Proceedings of the Workshop at IEEE Virtual Reality 2001, 138, pages 21-25. (2001)
Abstract
This article presents a modular approach to incorporating multimodal gesture- and speech-driven interaction into virtual reality systems. Based on existing techniques for modelling VR applications, the overall task is separated into distinct problem categories: from sensor synchronisation to a high-level description of crossmodal temporal and semantic coherences, a set of solution concepts is presented that fits seamlessly into both the static (scenegraph-based) representation and the dynamic (renderloop and immersion) aspects of a real-time application. The developed framework establishes a connecting layer between raw sensor data and a general functional description of multimodal and scene-context-related evaluation procedures for VR setups.
As an example of the concepts, their implementation in a system for virtual construction is described.
%0 Conference Paper
%1 lat:PrOSA-framework
%A Latoschik, Marc Erich
%B The Future of VR and AR Interfaces - Multimodal, Humanoid, Adaptive and Intelligent. Proceedings of the Workshop at IEEE Virtual Reality 2001
%D 2001
%K myown
%N 138
%P 21-25
%T A General Framework for Multimodal Interaction in Virtual Reality Systems: PrOSA
%U https://downloads.hci.informatik.uni-wuerzburg.de/A_General_Framework_for_Multimodal.pdf
%X This article presents a modular approach to incorporating multimodal gesture- and speech-driven interaction into virtual reality systems. Based on existing techniques for modelling VR applications, the overall task is separated into distinct problem categories: from sensor synchronisation to a high-level description of crossmodal temporal and semantic coherences, a set of solution concepts is presented that fits seamlessly into both the static (scenegraph-based) representation and the dynamic (renderloop and immersion) aspects of a real-time application. The developed framework establishes a connecting layer between raw sensor data and a general functional description of multimodal and scene-context-related evaluation procedures for VR setups.
As an example of the concepts, their implementation in a system for virtual construction is described.
@inproceedings{lat:PrOSA-framework,
abstract = {This article presents a modular approach to incorporating multimodal gesture- and speech-driven interaction into virtual reality systems. Based on existing techniques for modelling VR applications, the overall task is separated into distinct problem categories: from sensor synchronisation to a high-level description of crossmodal temporal and semantic coherences, a set of solution concepts is presented that fits seamlessly into both the static (scenegraph-based) representation and the dynamic (renderloop and immersion) aspects of a real-time application. The developed framework establishes a connecting layer between raw sensor data and a general functional description of multimodal and scene-context-related evaluation procedures for VR setups.
As an example of the concepts, their implementation in a system for virtual construction is described.},
author = {Latoschik, Marc Erich},
biburl = {https://www.bibsonomy.org/bibtex/29c7f8ae6c6307fdf67b25ba13af62961/hci-uwb},
booktitle = {The Future of VR and AR Interfaces - Multimodal, Humanoid, Adaptive and Intelligent. Proceedings of the Workshop at IEEE Virtual Reality 2001},
keywords = {myown},
number = 138,
pages = {21-25},
title = {A General Framework for Multimodal Interaction in Virtual Reality Systems: PrOSA},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/A_General_Framework_for_Multimodal.pdf},
year = 2001
}