BLEU: a Method for Automatic Evaluation of Machine Translation
K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311--318. Association for Computational Linguistics, 2002.
Abstract
Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
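The metric the abstract describes combines modified n-gram precisions p_n (candidate n-gram counts clipped by their maximum count in any reference) through a geometric mean, then multiplies by a brevity penalty BP, where BP = 1 if the candidate length c exceeds the closest reference length r, and e^(1 - r/c) otherwise: BLEU = BP * exp(sum_n w_n log p_n), with uniform weights w_n = 1/N and N = 4 in the paper. The Python below is a minimal illustrative sketch of that sentence-level computation, not the authors' implementation; the function names, the tie-breaking when two references are equally close in length, and the early return when any precision is zero are our own assumptions.

import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, references, n):
    # Clip each candidate n-gram count by its maximum count in any reference.
    cand_counts = Counter(ngrams(candidate, n))
    if not cand_counts:
        return 0.0
    max_ref_counts = Counter()
    for ref in references:
        for gram, count in Counter(ngrams(ref, n)).items():
            max_ref_counts[gram] = max(max_ref_counts[gram], count)
    clipped = sum(min(count, max_ref_counts[gram]) for gram, count in cand_counts.items())
    return clipped / sum(cand_counts.values())

def bleu(candidate, references, max_n=4):
    # Geometric mean of modified precisions with uniform weights 1/N.
    precisions = [modified_precision(candidate, references, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0  # log(0) is undefined; smoothing variants are omitted in this sketch
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalize candidates shorter than the closest reference length.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(log_avg)

# Toy usage: the candidate matches the first reference exactly.
cand = "the cat is on the mat".split()
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(bleu(cand, refs))  # 1.0

On this toy pair every modified precision is 1.0 and the brevity penalty is 1.0, so the score is 1.0; a shorter candidate such as "the cat" would be penalized by BP even though its clipped precisions are perfect.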
@inproceedings{papineni2002bleu,
  author    = {Papineni, Kishore and Roukos, Salim and Ward, Todd and Zhu, Wei-Jing},
  title     = {{BLEU}: a Method for Automatic Evaluation of Machine Translation},
  booktitle = {Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics},
  publisher = {Association for Computational Linguistics},
  pages     = {311--318},
  year      = {2002}
}