Leveraging Large Language Models for Firm-Intelligence: A RAG Framework Approach

University essay from Lunds universitet/Statistiska institutionen

Abstract: In the wake of OpenAI's release of ChatGPT in November 2022, fine-tuned from the GPT-3.5 series of models that descend from the 175-billion-parameter GPT-3, the potential applications of Large Language Models (LLMs) in various sectors have become evident. One such application lies in hedge funds and trading desks, where knowledge sharing is paramount. These entities often possess a wealth of firm-specific knowledge that spans different research areas and personnel expertise. Applying LLMs to this knowledge is challenging due to its proprietary nature, the immense data and computational demands of training or fine-tuning LLMs, and inherent limitations of LLMs such as the tendency to fabricate facts. The Retrieval Augmented Generation (RAG) framework, which has recently gained traction (Shi et al., 2023), presents a solution: relevant documents are retrieved at query time and supplied to the model as context rather than encoded in its weights. This thesis explores the potential of creating a firm-intelligence unit using the RAG framework, leveraging research reports from Lund University Finance Society's Trading & Quantitative Research (TQR) department as a representative dataset. The envisioned AI Assistant aims to answer questions grounded in the TQR reports, admit ignorance when the reports do not contain an answer, and provide detailed answer sources. This study provides insights into the theory behind LLMs and the implementation of the RAG framework, and offers a comprehensive evaluation discussing results, limitations, and future prospects for firm-intelligence units.
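To make the RAG pattern described in the abstract concrete, the following is a minimal, self-contained Python sketch, not the thesis implementation: the corpus entries, the bag-of-words stand-in for an embedding model, and the function names (embed, retrieve, build_prompt) are all illustrative assumptions. It shows the core loop of retrieving the most relevant report chunks and assembling an augmented prompt that instructs the model to cite sources and admit ignorance.

    # Minimal sketch of a RAG pipeline over hypothetical TQR report excerpts.
    # All document contents and identifiers below are invented for illustration.
    from collections import Counter
    import math

    CORPUS = [
        ("tqr_report_01", "Momentum strategies on Nordic equities showed decaying returns after 2015."),
        ("tqr_report_02", "Pairs trading on OMXS30 constituents relies on cointegration tests."),
    ]

    def embed(text: str) -> Counter:
        """Bag-of-words 'embedding' stand-in; a real system would use a dense text encoder."""
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse term-count vectors."""
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query: str, k: int = 2):
        """Return the k corpus chunks most similar to the query."""
        q = embed(query)
        return sorted(CORPUS, key=lambda doc: cosine(q, embed(doc[1])), reverse=True)[:k]

    def build_prompt(query: str) -> str:
        """Assemble an augmented prompt: retrieved context, a citation instruction,
        and an explicit instruction to admit ignorance when the context is insufficient."""
        context = "\n".join(f"[{sid}] {text}" for sid, text in retrieve(query))
        return (
            "Answer using only the sources below and cite their ids. "
            "If the sources do not contain the answer, say you do not know.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
        )

    if __name__ == "__main__":
        # The assembled prompt would then be sent to an LLM via whatever API the firm uses.
        print(build_prompt("What did TQR find about momentum strategies?"))

Because the firm's knowledge lives in the retrieved context rather than the model's weights, this design sidesteps proprietary-data training and lets the assistant point back to the exact report passages it used.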
