Abstract
Automatic classification of users’ internal affective and emotional states is relevant to many applications, ranging from organisational tasks to health care. Developing such automatic systems requires suitable training material and an appropriate adaptation towards users. In this work, we present preliminary but promising results of our research on emotion classification from visual and audio signals. This relates to a semi-automatic, cross-modal labelling of data sets, which will help to establish a kind of ground truth for labels in the classifier adaptation process. Our experiments show that prosodic features, especially higher-order ones such as the bandwidths of the first three formants, are related to visual/mimic expressions.