Distilling Multilingual Transformer Models for Efficient Document Retrieval: Distilling multi-Transformer models with distillation losses involving multi-Transformer interactions

University essay from KTH, School of Electrical Engineering and Computer Science (EECS)

Abstract: Open Domain Question Answering (OpenQA) is the task of automatically finding answers to a query in a given set of documents. Language-agnostic OpenQA, where the answers can be in a different language from the question, is an increasingly important research area in a globalised world. An OpenQA system generally consists of a document retriever that retrieves relevant passages and a reader that extracts answers from those passages. Large Transformers, such as Dense Passage Retrieval (DPR) models, have achieved state-of-the-art performance in document retrieval, but they are computationally expensive in production. Knowledge Distillation (KD) is an effective way to reduce the size and increase the speed of Transformers while retaining their performance. However, most existing research focuses on distilling single-Transformer models rather than multi-Transformer models such as DPR. This thesis project uses the MiniLM and DistilBERT distillation methods, two of the most successful methods for distilling the BERT model, to individually distil the passage and query models of a fine-tuned DPR model comprising two pretrained MPNet models. In addition, the project proposes and tests Embedding Similarity Loss (ESL), a distillation loss designed for the interaction between the passage and query models in the DPR architecture. The results show that using ESL produces better students than using the MiniLM or DistilBERT loss alone, and that combining ESL with either of the other two losses improves the student models' performance in most cases, especially when training on Information-Seeking Question Answering in Typologically Diverse Languages (TyDi QA) instead of the Stanford Question Answering Dataset 1.1 (SQuAD 1.1). The best resulting 6-layer student DPR model retained more than 90% of the recall and Mean Average Precision (MAP) in Cross-Lingual Transfer (XLT) tasks while reducing the inference time to 63.2% of the teacher's. In Generalised Cross-Lingual Transfer (G-XLT) tasks, it retained only around 42% of the recall and MAP while using 53.8% of the inference time.
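The abstract does not spell out the exact form of ESL, but an interaction-level distillation loss of this kind can be sketched in a few lines. The following is a minimal PyTorch sketch, assuming ESL penalises the gap between the teacher's and the student's in-batch query-passage similarity matrices; the function name esl_loss and the choice of mean-squared error are illustrative assumptions, not the thesis's definition.

    import torch
    import torch.nn.functional as F

    def esl_loss(student_q, student_p, teacher_q, teacher_p):
        # In-batch dot-product similarities, as in DPR retrieval scoring:
        # entry (i, j) scores query i against passage j.
        s_sim = student_q @ student_p.T   # (batch, batch)
        t_sim = teacher_q @ teacher_p.T   # (batch, batch)
        # Match the student's similarity matrix to the teacher's, so the
        # query and passage encoders are distilled through their
        # interaction rather than each in isolation.
        return F.mse_loss(s_sim, t_sim)

    # Stand-in embeddings: 8 query-passage pairs, 768-dimensional vectors.
    q_s, p_s = torch.randn(8, 768), torch.randn(8, 768)
    q_t, p_t = torch.randn(8, 768), torch.randn(8, 768)
    loss = esl_loss(q_s, p_s, q_t, p_t)

A loss of this shape could be added to the MiniLM or DistilBERT objective as a weighted term, which matches the abstract's finding that combining ESL with either single-model loss improves the resulting students.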
