Improving BERTScore for Machine Translation Evaluation Through Contrastive Learning

University essay from Uppsala universitet/Institutionen för lingvistik och filologi

Abstract: Since the advent of automatic evaluation, tasks within Natural Language Processing (NLP), including Machine Translation, have been able to make better use of both time and labor resources. More recently, multilingual pre-trained models (MLMs) have expanded many languages' capacity to participate in NLP research. Contextualized representations generated by these MLMs both benefit several downstream tasks and have inspired practitioners to make better sense of them. We propose the adoption of BERTScore, coupled with contrastive learning, for machine translation evaluation in lieu of BLEU, the industry-leading metric. BERTScore computes a similarity score for each token in the candidate and reference sentences, but rather than relying on exact matches, it measures token similarity using contextual embeddings. We improve BERTScore by fine-tuning MLMs with contrastive learning across different language pairs in both high- and low-resource settings (English-Hausa, English-Chinese), across three models (XLM-R, mBERT, and LaBSE), and across three domains (news, religious, combined). We also investigated the effects of pairing relatively linguistically similar low-resource languages (Somali-Hausa) and of data size on BERTScore and its Pearson correlation with human judgments. We found that reducing the distance between cross-lingual embeddings via contrastive learning leads to BERTScore having a substantially greater correlation with system-level human evaluation than BLEU, for mBERT and LaBSE, in all language pairs across multiple domains.
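To make the scoring mechanism concrete, the following is a minimal sketch of BERTScore-style greedy matching with contextual embeddings, not the thesis's exact implementation. It assumes the HuggingFace `transformers` and `torch` packages and uses the mBERT checkpoint `bert-base-multilingual-cased`; IDF weighting and baseline rescaling from the original BERTScore are omitted for brevity.

```python
# Minimal sketch: BERTScore-style greedy token matching via contextual embeddings.
# Assumptions: HuggingFace `transformers` + `torch`; mBERT checkpoint; no IDF
# weighting or baseline rescaling (these are simplifications, not the thesis setup).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Return L2-normalized contextual embeddings for each token (special tokens removed)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state.squeeze(0)  # (seq_len, dim)
    hidden = hidden[1:-1]                                    # drop [CLS] and [SEP]
    return torch.nn.functional.normalize(hidden, dim=-1)

def bertscore_f1(candidate: str, reference: str) -> float:
    """Greedily match tokens by cosine similarity, then combine precision and recall."""
    cand, ref = embed(candidate), embed(reference)
    sim = cand @ ref.T                         # pairwise cosine similarities
    precision = sim.max(dim=1).values.mean()   # best reference match per candidate token
    recall = sim.max(dim=0).values.mean()      # best candidate match per reference token
    return (2 * precision * recall / (precision + recall)).item()

print(bertscore_f1("The cat sat on the mat.", "A cat was sitting on the mat."))
```

Contrastive fine-tuning of the underlying encoder, as described in the abstract, pulls cross-lingual embeddings of matching sentences closer together, which directly changes the cosine similarities that this greedy matching aggregates.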
