A 3D Gesture Recognition System for Multimodal Dialog Systems
R. Neßelrath, and J. Alexandersson. Proceedings of the 6th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems, Pasadena, CA, USA, pp. 46-51. (2009)
Abstract
We present a framework for integrating dynamic gestures as a new input modality into arbitrary applications. The framework allows training new gestures and recognizing them as user input with the help of machine learning algorithms. The precision of the gesture recognition is evaluated with special attention to the elderly. We show how this functionality is implemented into our dialogue system and present an example application which allows the system to learn and recognize gestures in a speech-based dialogue.
%0 Conference Paper
%1 NesselrathAlexandersson09KRPDS
%A Neßelrath, Robert
%A Alexandersson, Jan
%B Proceedings of the 6th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems, Pasadena, CA, USA
%D 2009
%K v1205 paper ai dfki user interface multimodal recognition interaction processing dialog learn
%P 46-51
%T A 3D Gesture Recognition System for Multimodal Dialog Systems
%X We present a framework for integrating dynamic gestures as a new input modality into arbitrary applications. The framework allows training new gestures and recognizing them as user input with the help of machine learning algorithms. The precision of the gesture recognition is evaluated with special attention to the elderly. We show how this functionality is implemented into our dialogue system and present an example application which allows the system to learn and recognize gestures in a speech-based dialogue.
@inproceedings{NesselrathAlexandersson09KRPDS,
abstract = {We present a framework for integrating dynamic gestures as a new input modality into arbitrary applications. The framework allows training new gestures and recognizing them as user input with the help of machine learning algorithms. The precision of the gesture recognition is evaluated with special attention to the elderly. We show how this functionality is implemented into our dialogue system and present an example application which allows the system to learn and recognize gestures in a speech-based dialogue.},
added-at = {2012-05-30T10:51:24.000+0200},
author = {Ne{\ss}elrath, Robert and Alexandersson, Jan},
biburl = {https://www.bibsonomy.org/bibtex/26d081ba137f3dcb2cd85221c1bcf77df/flint63},
booktitle = {Proceedings of the 6th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems, Pasadena, CA, USA},
file = {Preprint:2009/NesselrathAlexandersson09KRPDS.pdf:PDF},
groups = {public},
interhash = {043db956f4ba8f1a0375dce26c050b4b},
intrahash = {6d081ba137f3dcb2cd85221c1bcf77df},
keywords = {v1205 paper ai dfki user interface multimodal recognition interaction processing dialog learn},
pages = {46--51},
timestamp = {2018-04-16T11:44:09.000+0200},
title = {A {3D} Gesture Recognition System for Multimodal Dialog Systems},
username = {flint63},
year = 2009
}