On Generalization in Coreference Resolution

Shubham Toshniwal, Patrick Xia, Sam Wiseman, Karen Livescu, and Kevin Gimpel.
Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference, pp. 111--120. Punta Cana, Dominican Republic, Association for Computational Linguistics, November 2021.
DOI: 10.18653/v1/2021.crac-1.12

Abstract

While coreference resolution is defined independently of dataset domain, most models for performing coreference resolution do not transfer well to unseen domains. We consolidate a set of 8 coreference resolution datasets targeting different domains to evaluate the off-the-shelf performance of models. We then mix three datasets for training; even though their domain, annotation guidelines, and metadata differ, we propose a method for jointly training a single model on this heterogeneous data mixture by using data augmentation to account for annotation differences and sampling to balance the data quantities. We find that in a zero-shot setting, models trained on a single dataset transfer poorly while joint training yields improved overall performance, leading to better generalization in coreference resolution models. This work contributes a new benchmark for robust coreference resolution and multiple new state-of-the-art results.
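The abstract mentions sampling to balance data quantities when jointly training on datasets of very different sizes. The paper's exact scheme is not given here; the sketch below shows one common approach, temperature-scaled sampling, where the dataset names and document counts are illustrative assumptions rather than the authors' figures.

    import random

    # Hypothetical document counts for three heterogeneous corpora
    # (illustrative only; not the sizes used in the paper).
    dataset_sizes = {"ontonotes": 2802, "litbank": 80, "preco": 36120}

    def sampling_weights(sizes, temperature=0.5):
        """Weight each dataset proportionally to size**temperature.

        temperature=1.0 reproduces size-proportional sampling (large
        corpora dominate); temperature=0.0 samples datasets uniformly.
        Intermediate values up-weight small corpora without ignoring
        the large ones.
        """
        scaled = {name: count ** temperature for name, count in sizes.items()}
        total = sum(scaled.values())
        return {name: w / total for name, w in scaled.items()}

    def sample_dataset(weights, rng=random):
        """Pick the dataset to draw the next training document from."""
        names = list(weights)
        probs = [weights[n] for n in names]
        return rng.choices(names, weights=probs, k=1)[0]

    weights = sampling_weights(dataset_sizes)
    # Each training step draws a document from the sampled dataset,
    # so small corpora are seen more often than their raw share.
    print(weights, sample_dataset(weights))

With temperature=0.5, LitBank's share rises from about 0.2% of documents to roughly 4% of sampling mass, which is the balancing effect the abstract alludes to.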
