Making Chatbots More Conversational: Using Follow-Up Questions for Maximizing the Informational Value in Evaluation Responses

University essay from Uppsala universitet/Institutionen för informationsteknologi

Author: Pontus Hilding; [2019]


Abstract: This thesis contains a detailed outline of a system that analyzes textual, conversation-based evaluation responses in an attempt to maximize the extraction of informational value. This is achieved by asking intelligent follow-up questions where the system deems them necessary. The system is realized as a multi-stage pipeline. The first stage uses a neural network, trained on manually extracted linguistic features of an evaluation answer, to assess whether a follow-up question is required. Next, a separate network that employs word embeddings determines which question to ask. Finally, the grammatical structure of the answer is analyzed to decide how the predicted question should be composed with respect to grammatical properties such as tense and plurality, so that it sounds as natural and human-like as possible. The resulting system performed satisfactorily overall. When exposed to evaluation answers from the educational sector, it asked, in the majority of cases, relevant follow-up questions that dug deeper into the user's answer. The narrow domain and the meager amount of annotated data naturally made the system very domain-specific, which is partially counteracted by the use of secondary predictions and default fallback questions. However, using the system out-of-the-box with an unrestricted answer is likely to produce a very vague follow-up question. With a substantial increase in the amount of annotated data, this would most likely improve considerably.
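To illustrate the three-stage pipeline described in the abstract, the following is a minimal conceptual sketch in Python. All function names, feature heuristics, and question templates here are hypothetical stand-ins invented for illustration; the thesis uses trained neural networks over linguistic features and word embeddings rather than the hand-written rules shown below.

```python
"""Hypothetical sketch of the three-stage follow-up-question pipeline:
1) decide whether a follow-up is needed, 2) predict which question to ask,
3) adapt its grammatical form to the user's answer."""


# Stage 1 (stand-in for the feature-based neural classifier):
# very short or low-content answers are assumed to need a follow-up.
def needs_follow_up(answer: str) -> bool:
    tokens = answer.lower().split()
    function_words = {"the", "a", "an", "it", "was", "is", "were"}
    content_words = [t for t in tokens if t not in function_words]
    return len(tokens) < 6 or len(content_words) / max(len(tokens), 1) < 0.5


# Stage 2 (stand-in for the embedding-based question predictor):
# a toy keyword match selects a question template, with a default fallback
# question as mentioned in the abstract.
QUESTION_TEMPLATES = {
    "teacher": "What did you think about the {noun}?",
    "course": "Which parts of the {noun} worked well for you?",
    "default": "Could you tell us a bit more about that?",
}


def predict_question(answer: str) -> str:
    for keyword in ("teacher", "course"):
        if keyword in answer.lower():
            return QUESTION_TEMPLATES[keyword].format(noun=keyword)
    return QUESTION_TEMPLATES["default"]


# Stage 3 (stand-in for the grammatical realization step):
# adjust surface form, e.g. plurality, to match the user's answer.
def realize(question: str, answer: str) -> str:
    if "teachers" in answer.lower():
        question = question.replace("the teacher", "the teachers")
    return question


if __name__ == "__main__":
    answer = "The teachers were ok."
    if needs_follow_up(answer):
        # Prints: "What did you think about the teachers?"
        print(realize(predict_question(answer), answer))
```

The sketch only mirrors the structure of the pipeline (gate, prediction, realization); in the actual system each rule-based stand-in would be replaced by the corresponding trained model.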
