Automatically Detecting Action Items in Audio Meeting Recordings

Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, The Stanford Natural Language Processing Group (2006)

Abstract

Identification of action items in meeting recordings can provide immediate access to salient information in a medium notoriously difficult to search and summarize. To this end, we use a maximum entropy model to automatically detect action item-related utterances from multi-party audio meeting recordings. We compare the effect of lexical, temporal, syntactic, semantic, and prosodic features on system performance. We show that on a corpus of action item annotations on the ICSI meeting recordings, characterized by high imbalance and low inter-annotator agreement, the system performs at an F measure of 31.92%. While this is low compared to better-studied tasks on more mature corpora, the relative usefulness of the features towards this task is indicative of their usefulness on more consistent annotations, as well as to related tasks.
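The core setup the abstract describes, classifying each utterance as action-item-related or not with a maximum entropy model, can be sketched as multinomial logistic regression over lexical features. The sketch below is a minimal illustration assuming scikit-learn; the toy utterances and labels are invented for demonstration and are not the ICSI corpus or the paper's feature set.

```python
# Minimal sketch: a maximum entropy (logistic regression) classifier over
# lexical unigram/bigram features, assuming scikit-learn is available.
# The utterances and labels below are illustrative toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "so John will send the report by Friday",
    "you should update the slides before Monday",
    "let's make sure we email the client tomorrow",
    "I think the weather was nice last week",
    "we talked about the budget earlier",
    "that was a funny story about the demo",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = action-item-related utterance

# Maximum entropy classification is equivalent to logistic regression
# over the extracted feature counts.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, labels)

pred = model.predict(["please send the updated report tomorrow"])[0]
print(pred)
```

In the paper itself, the feature set also includes temporal, syntactic, semantic, and prosodic features, which would be concatenated with the lexical features before training.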
