Abstract
Are assessment tools designed for machine-generated translations applicable to human translations? To address this question, the present study compares two assessment methods used in translation tests: the error-analysis-based method applied by most schools and institutions, and a scale-based method proposed by Liu, Chang et al. (2005), who adapted Carroll's scales originally developed for the quality assessment of machine-generated translations. In the present study, twelve graders were invited to re-grade the test papers from Liu, Chang et al.'s (2005) experiment using the two methods. Based on the results and the graders' feedback, a number of modifications to the grading procedure and to the scales are proposed. The study shows that the scale method, used mostly to assess machine-generated translations, is also a reliable and valid tool for assessing human translations. The measure was adopted by the Ministry of Education in Taiwan and applied in the 2007 public translation proficiency test.