Machine Translation of Fictional and Non-fictional Texts: An examination of Google Translate's accuracy in the translation of fictional versus non-fictional texts.

University essay from Stockholms universitet/Engelska institutionen

Abstract: This study tries to identify areas where machine translation can be useful by examining translated fictional and non-fictional texts, and the extent to which these text types are better or worse suited for machine translation. It additionally evaluates the performance of the free online translation tool Google Translate (GT). The BLEU automatic evaluation metric for machine translation was used for this study, yielding a score of 27.75 for the fictional texts and 32.16 for the non-fictional texts. The non-fictional texts are samples of law documents, (commercial) company reports, social science texts (religion, welfare, astronomy) and medicine. These texts were selected because of their degree of difficulty. The non-fictional sentences are longer than those of the fictional texts, a feature that MT systems have traditionally struggled with. In spite of their longer sentences, the non-fictional texts received a higher BLEU score than the fictional ones. One speculated reason for the higher score of the non-fictional texts is that they use more specific terminology, leaving less room for subjective interpretation than the fictional texts. The fictional texts also carry other levels of meaning that the human translator needs to capture.
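For readers who want to reproduce this kind of evaluation, below is a minimal sketch of how corpus-level BLEU scores can be computed with NLTK's corpus_bleu. The essay does not specify which tooling it used, and the sample sentences here are invented placeholders rather than data from the study.

    # Minimal BLEU-scoring sketch using NLTK (an assumption: the essay's
    # actual tooling is not specified). Sentences are invented placeholders.
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    # Each machine-translated hypothesis is paired with a list of one or
    # more human reference translations; all sentences are pre-tokenised.
    references = [
        [["the", "court", "approved", "the", "annual", "report"]],
        [["he", "walked", "slowly", "through", "the", "empty", "house"]],
    ]
    hypotheses = [
        ["the", "court", "approved", "the", "yearly", "report"],
        ["he", "went", "slow", "through", "the", "empty", "house"],
    ]

    # Smoothing avoids zero scores when higher-order n-grams never match,
    # which is common on tiny samples like this one.
    smooth = SmoothingFunction().method1
    score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
    print(f"BLEU: {score * 100:.2f}")  # on the 0-100 scale used in the essay

Note that corpus_bleu aggregates n-gram statistics over the whole test set before computing a single score, which is how corpus-level figures such as 27.75 and 32.16 are normally obtained; averaging per-sentence scores would give a different number.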
