Abstract
If no specific precautions are taken, people talking to a computer can---just as when talking to another human---speak aside, either to themselves or to another person. On the one hand, the computer should notice and process such utterances in a special way; on the other hand, such utterances provide us with unique data to contrast these two registers: talking vs. not talking to a computer. In this paper, we present two different databases, SmartKom and SmartWeb, and classify and analyse On-Talk (addressing the computer) vs. Off-Talk (addressing someone else)---and thereby the user's focus of attention---found in these two databases, employing uni-modal (prosodic and linguistic) features as well as multimodal information (additional face detection).