A Comparison of Alternative Parse Tree Paths for Labeling Semantic Roles
R. Swanson and A. Gordon. Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics, pages 17-21. Sydney, Australia, July 2006
Abstract
The integration of sophisticated inference-based techniques into natural language processing applications first requires a reliable method of encoding the predicate-argument structure of the propositional content of text. Recent statistical approaches to automated predicate-argument annotation have utilized parse tree paths as predictive features, which encode the path between a verb predicate and a node in the parse tree that governs its argument. In this paper, we explore a number of alternatives for how these parse tree paths are encoded, focusing on the difference between automatically generated constituency parses and dependency parses. After describing five alternatives for encoding parse tree paths, we investigate how well each can be aligned with the argument substrings in annotated text corpora, their relative precision and recall performance, and their comparative learning curves. Results indicate that constituency parsers produce parse tree paths that can more easily be aligned to argument substrings, perform better in precision and recall, and have more favorable learning curves than those produced by a dependency parser.
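To make the parse tree path feature concrete, below is a minimal sketch of how such a path can be computed over a constituency parse, in the style of classic statistical semantic role labeling features. It assumes an NLTK-style Tree; the function name, the "^"/"_" separator notation, and the example sentence are illustrative only and are not taken from the paper.

```python
# Illustrative sketch: compute a parse tree path feature between a verb
# predicate and the constituent governing one of its arguments, using an
# NLTK constituency tree. Separator notation ("^" up, "_" down) is arbitrary.
from nltk import Tree

def parse_tree_path(tree, pred_leaf_index, arg_node_position):
    """Path of node labels from the predicate's POS node up to the lowest
    common ancestor and back down to the argument constituent, e.g. 'VB^VP^S_NP'."""
    # Tree position of the POS node directly above the predicate leaf.
    pred_pos = tree.leaf_treeposition(pred_leaf_index)[:-1]
    arg_pos = arg_node_position

    # Longest common prefix of the two positions = lowest common ancestor.
    common = 0
    while (common < min(len(pred_pos), len(arg_pos))
           and pred_pos[common] == arg_pos[common]):
        common += 1

    # Upward steps from the predicate's POS node to the common ancestor ...
    up = [tree[pred_pos[:i]].label() for i in range(len(pred_pos), common - 1, -1)]
    # ... then downward steps from the common ancestor to the argument node.
    down = [tree[arg_pos[:i]].label() for i in range(common + 1, len(arg_pos) + 1)]
    return "^".join(up) + ("_" + "_".join(down) if down else "")

# Example: "The cat ate the mouse"; predicate "ate", argument constituent "The cat".
t = Tree.fromstring(
    "(S (NP (DT The) (NN cat)) (VP (VB ate) (NP (DT the) (NN mouse))))")
print(parse_tree_path(t, pred_leaf_index=2, arg_node_position=(0,)))  # -> VB^VP^S_NP
```

A dependency-based variant would instead record the sequence of dependency relations between the predicate token and the argument's head token, which is the contrast the paper evaluates.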
%0 Conference Paper
%1 swanson06comparison
%A Swanson, Reid
%A Gordon, Andrew S.
%B Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics
%C Sydney, Australia
%D 2006
%K learning roles nlp
%P 17-21
%T A Comparison of Alternative Parse Tree Paths for Labeling Semantic Roles
%U http://people.ict.usc.edu/~gordon/ACL06.PDF
%X The integration of sophisticated inference-based techniques into natural language processing applications first requires a reliable method of encoding the predicate-argument structure of the propositional content of text. Recent statistical approaches to automated predicate-argument annotation have utilized parse tree paths as predictive features, which encode the path between a verb predicate and a node in the parse tree that governs its argument. In this paper, we explore a number of alternatives for how these parse tree paths are encoded, focusing on the difference between automatically generated constituency parses and dependency parses. After describing five alternatives for encoding parse tree paths, we investigate how well each can be aligned with the argument substrings in annotated text corpora, their relative precision and recall performance, and their comparative learning curves. Results indicate that constituency parsers produce parse tree paths that can more easily be aligned to argument substrings, perform better in precision and recall, and have more favorable learning curves than those produced by a dependency parser.
@inproceedings{swanson06comparison,
abstract = {The integration of sophisticated inference-based techniques into natural language processing applications first requires a reliable method of encoding the predicate-argument structure of the propositional content of text. Recent statistical approaches to automated predicate-argument annotation have utilized parse tree paths as predictive features, which encode the path between a verb predicate and a node in the parse tree that governs its argument. In this paper, we explore a number of alternatives for how these parse tree paths are encoded, focusing on the difference between automatically generated constituency parses and dependency parses. After describing five alternatives for encoding parse tree paths, we investigate how well each can be aligned with the argument substrings in annotated text corpora, their relative precision and recall performance, and their comparative learning curves. Results indicate that constituency parsers produce parse tree paths that can more easily be aligned to argument substrings, perform better in precision and recall, and have more favorable learning curves than those produced by a dependency parser.},
added-at = {2007-02-27T15:35:02.000+0100},
address = {Sydney, Australia},
author = {Swanson, Reid and Gordon, Andrew S.},
biburl = {https://www.bibsonomy.org/bibtex/2d27a87c006730e31c1e0a41f23836bda/stefano},
booktitle = {Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics},
interhash = {fa6f311110cf1023fc3f750636599746},
intrahash = {d27a87c006730e31c1e0a41f23836bda},
keywords = {learning roles nlp},
month = jul,
pages = {17--21},
school = {University of Southern California},
timestamp = {2007-02-27T15:35:02.000+0100},
title = {A Comparison of Alternative Parse Tree Paths for Labeling Semantic Roles},
url = {http://people.ict.usc.edu/~gordon/ACL06.PDF},
year = 2006
}