Punctuation Restoration as Post-processing Step for Swedish Language Automatic Speech Recognition

University essay from Luleå tekniska universitet/Institutionen för system- och rymdteknik

Abstract: This thesis focuses on the Swedish language, where punctuation restoration, especially as a post-processing step for the output of Automatic Speech Recognition (ASR) applications, needs further research. I have collaborated with NewsMachine AB, a company that provides large-scale media monitoring services for its clients, for which it employs ASR technology to convert spoken content into text.

This thesis follows an approach initially designed for high-resource languages such as English. The method is based on KB-BERT, a pre-trained Swedish neural network language model developed by the National Library of Sweden. The project uses KB-BERT with a Bidirectional Long Short-Term Memory (BiLSTM) layer on top for the task of punctuation restoration. The model is fine-tuned on the Swedish portion of the TED Talk 2020 dataset, acquired from OPUS (an open-source parallel corpus). The punctuation marks comma, period, question mark, and colon are considered for this project. A comparative analysis is conducted between two KB-BERT models: bert-base-swedish-cased and albert-base-swedish-cased-alpha. The fine-tuned Swedish BERT-BiLSTM model, trained on 5 classes, achieved an overall F1-score of 81.6%, surpassing the ALBERT-BiLSTM model, which was also trained on 5 classes and obtained an overall F1-score of 66.6%. Additionally, the BERT-BiLSTM model trained on 4 classes (excluding colon) outperformed prestoBERT, an existing model designed for the same task in Swedish, with an overall F1-score of 82.8%; prestoBERT achieved an overall F1-score of 78.9%.

As a further evaluation of the model's performance on ASR-transcribed text, noise was injected into a copy of the test data at four probabilities (0.05, 0.1, 0.15, 0.2) in the form of three word-level errors (deletion, substitution, and insertion). The performance of the BERT-BiLSTM model decreased substantially for all error types as the probability of injected noise increased. The model nevertheless handled deletion errors comparatively better than substitution and insertion errors. Lastly, the data resources received from NewsMachine AB were used to perform a qualitative assessment of how the model punctuates real transcribed data compared to human judgment.
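The noise-injection evaluation described above can be sketched as follows. This is a minimal illustration of word-level corruption at a given probability, not the thesis's actual implementation; the function name and the placeholder substitution vocabulary are assumptions for the example.

```python
import random


def inject_noise(words, p, vocab, rng=None):
    """Corrupt a token sequence with ASR-style word-level errors.

    Each word is independently subject to deletion, substitution with a
    random vocabulary word, or a spurious insertion after it, each with
    probability p (e.g. 0.05, 0.1, 0.15, or 0.2).
    """
    rng = rng or random.Random()
    out = []
    for w in words:
        if rng.random() < p:
            # deletion: drop the word entirely
            continue
        if rng.random() < p:
            # substitution: replace with a random vocabulary word
            w = rng.choice(vocab)
        out.append(w)
        if rng.random() < p:
            # insertion: append a spurious word after this one
            out.append(rng.choice(vocab))
    return out


# Example: corrupt a short Swedish sentence at p = 0.2
words = "det här är ett litet test".split()
noisy = inject_noise(words, 0.2, vocab=["ord", "exempel"],
                     rng=random.Random(42))
```

Seeding the generator (as above) makes the corrupted copy of the test set reproducible, so the same noisy data can be reused across error-type analyses.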
