A Study on Neural Network Modeling Techniques for Automatic Document Summarization
Abstract: With the Internet becoming widespread, countless articles and multimedia content have flooded our daily lives, and effectively acquiring the knowledge we seek has become an unavoidable issue. To help people grasp the main theme of a document faster, many studies are dedicated to automatic document summarization, which aims to condense one or more documents into a short text while preserving the essential content as much as possible. Automatic document summarization can be categorized as extractive or abstractive. Extractive summarization selects the set of sentences most relevant to the document, up to a target length ratio, and assembles them into a concise summary. Abstractive summarization, on the other hand, produces an abstract after understanding the key concepts of a document. The recent past has seen a surge of interest in developing supervised methods based on deep neural networks for both types of summarization. This thesis continues this line of research and explores two frameworks that integrate convolutional neural networks (CNN), long short-term memory (LSTM), and multilayer perceptrons (MLP) for extractive speech summarization. The empirical results demonstrate the effectiveness of the neural summarizers compared with other conventional supervised methods. Finally, to further explore the ability of neural networks, we experiment with sequence-to-sequence neural networks for abstractive summarization and analyze the results.
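As a hedged illustration of the extractive setting described above, the sketch below scores each sentence and greedily selects the highest-scoring ones until a target length ratio of the original document is reached. The word-frequency scorer is a hypothetical stand-in of our own choosing; the thesis itself uses CNN, LSTM, and MLP models to produce the sentence scores.

```python
# Sketch of extractive summarization under a target length ratio.
# The frequency-based score() is a placeholder for a learned neural scorer.
from collections import Counter

def extractive_summary(sentences, target_ratio=0.3):
    # Corpus-level word frequencies serve as a crude importance signal.
    words = [w.lower() for s in sentences for w in s.split()]
    freq = Counter(words)

    def score(s):
        # Average word frequency of the sentence (placeholder scorer).
        toks = s.split()
        return sum(freq[w.lower()] for w in toks) / max(len(toks), 1)

    # Rank sentence indices from most to least relevant.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)

    # Greedily keep sentences while staying within the word budget.
    budget = target_ratio * len(words)
    chosen, used = [], 0
    for i in ranked:
        n = len(sentences[i].split())
        if used + n <= budget:
            chosen.append(i)
            used += n

    # Reassemble the selected sentences in original document order.
    return [sentences[i] for i in sorted(chosen)]
```

A learned summarizer would replace `score()` with a model that maps each sentence (and its document context) to a relevance probability, while the ratio-constrained selection step stays the same.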