Automatically Detecting Action Items in Audio Meeting Recordings
W. Morgan, P. Chang, S. Gupta, and J. Brenier. Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, The Stanford Natural Language Processing Group, 2006.
Abstract
Identification of action items in meeting recordings can provide immediate access to salient information in a medium notoriously difficult to search and summarize. To this end, we use a maximum entropy model to automatically detect action item-related utterances from multi-party audio meeting recordings. We compare the effect of lexical, temporal, syntactic, semantic, and prosodic features on system performance. We show that on a corpus of action item annotations on the ICSI meeting recordings, characterized by high imbalance and low inter-annotator agreement, the system performs at an F measure of 31.92%. While this is low compared to better-studied tasks on more mature corpora, the relative usefulness of the features towards this task is indicative of their usefulness on more consistent annotations, as well as to related tasks.
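A maximum entropy classifier over per-utterance features, as the abstract describes, is equivalent to regularized logistic regression. The following is a minimal sketch only, not the authors' implementation: the example utterances, labels, and bag-of-words features below are hypothetical stand-ins, and the paper additionally uses temporal, syntactic, semantic, and prosodic features beyond the lexical ones shown here.

# Sketch of maximum-entropy (logistic regression) detection of
# action-item-related utterances. Illustrative only: utterances, labels,
# and the lexical-only feature set are hypothetical, not the paper's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

utterances = [
    "let's make sure someone sends out the report by friday",  # action item
    "i think the results look reasonable",                     # not
    "can you follow up with them about the recordings",        # action item
    "yeah that was a long meeting",                            # not
]
labels = [1, 0, 1, 0]  # 1 = action-item-related utterance

# Unigram/bigram counts stand in for the paper's lexical features; the
# other feature classes the paper compares would be appended to the vector.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(utterances, labels)

predictions = model.predict(utterances)
# The paper reports F measure (F1 = 2PR / (P + R)); given the heavy class
# imbalance it describes, F1 on the positive class is the informative score.
print(f1_score(labels, predictions))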
%0 Conference Paper
%1 morgan06actionitems
%A Morgan, William
%A Chang, Pi-Chuan
%A Gupta, Surabhi
%A Brenier, Jason M.
%B Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue
%D 2006
%K 2006 stanford NT2OD nlp
%T Automatically Detecting Action Items in Audio Meeting Recordings
%U http://nlp.stanford.edu/pubs/sigdial06.pdf
%X Identification of action items in meeting recordings can provide immediate access to salient information in a medium notoriously difficult to search and summarize. To this end, we use a maximum entropy model to automatically detect action item-related utterances from multi-party audio meeting recordings. We compare the effect of lexical, temporal, syntactic, semantic, and prosodic features on system performance. We show that on a corpus of action item annotations on the ICSI meeting recordings, characterized by high imbalance and low inter-annotator agreement, the system performs at an F measure of 31.92%. While this is low compared to better-studied tasks on more mature corpora, the relative usefulness of the features towards this task is indicative of their usefulness on more consistent annotations, as well as to related tasks.
@inproceedings{morgan06actionitems,
abstract = {Identification of action items in meeting recordings can provide immediate access to salient information in a medium notoriously difficult to search and summarize. To this end, we use a maximum entropy model to automatically detect action item-related utterances from multi-party audio meeting recordings. We compare the effect of lexical, temporal, syntactic, semantic, and prosodic features on system performance. We show that on a corpus of action item annotations on the ICSI meeting recordings, characterized by high imbalance and low inter-annotator agreement, the system performs at an F measure of 31.92%. While this is low compared to better-studied tasks on more mature corpora, the relative usefulness of the features towards this task is indicative of their usefulness on more consistent annotations, as well as to related tasks.},
added-at = {2007-02-25T19:32:31.000+0100},
author = {Morgan, William and Chang, Pi-Chuan and Gupta, Surabhi and Brenier, Jason M.},
biburl = {https://www.bibsonomy.org/bibtex/21773aec4e1f6f0e370da45deeef04436/butonic},
booktitle = {Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue},
interhash = {6e3447d490aa445ad0a046cd1485eca4},
intrahash = {1773aec4e1f6f0e370da45deeef04436},
keywords = {2006 stanford NT2OD nlp},
organization = {The Stanford Natural Language Processing Group},
school = {Stanford University},
timestamp = {2007-02-25T19:32:31.000+0100},
title = {{A}utomatically {D}etecting {A}ction {I}tems in {A}udio {M}eeting {R}ecordings},
url = {http://nlp.stanford.edu/pubs/sigdial06.pdf},
year = 2006
}