Build More Capable LLMs with Retrieval Augmented Generation | by John Adeojo | Aug, 2023

How Retrieval Augmented Generation Can Enhance Your LLMs by Integrating a Knowledge Base

Image by author: Generated with Midjourney

ChatGPT is limited for many practical business use cases beyond code generation. The limitations stem from its training-data cutoff and the model’s propensity to hallucinate. At the time of writing, if you ask ChatGPT about events occurring after September 2021, you will probably receive a response like this:

Image by author

This isn’t helpful, so how can we go about rectifying it?

Option 1 — Train or fine-tune the model on up-to-date data.

Fine-tuning or training a model can be impractical and expensive. Even putting the costs aside, the effort required to prepare the data sets is reason enough to forgo this option.

Option 2 — Use retrieval augmented generation (RAG) methods.

RAG methods give a large language model access to an up-to-date knowledge base at inference time. This is much cheaper than training a model from scratch or fine-tuning, and much easier to implement. In this article, I show you how to apply RAG to your OpenAI model. We will put the model to the test with a short analysis of its ability to answer questions about the 2022 Russia-Ukraine conflict from a Wikipedia knowledge base.
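The retrieve-then-generate flow can be sketched in plain Python. Everything below is illustrative: the keyword-overlap retriever is a stand-in for a real embedding retriever (such as one built on sentence transformers over a document store), and `retrieve` and `build_prompt` are hypothetical helpers, not part of any library.

```python
# Minimal sketch of the RAG flow: retrieve relevant passages from a
# knowledge base, then prepend them to the user's question so the LLM
# answers from fresh context rather than stale training data.

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question.
    A real pipeline would rank by embedding similarity instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Augment the question with the retrieved context."""
    context = "\n".join(context_docs)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

knowledge_base = [
    "The conflict escalated in February 2022.",
    "Haystack is an open-source framework for LLM applications.",
]
question = "When did the conflict escalate?"
prompt = build_prompt(question, retrieve(question, knowledge_base))
print(prompt)  # this augmented prompt is what gets sent to the LLM
```

The key idea is that only the retrieval step touches the knowledge base; the model itself is unchanged, which is why updating the knowledge base is so much cheaper than retraining.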

Note: This topic, although sensitive, was chosen for the obvious reason that the current ChatGPT model has no knowledge of it.

You will need an OpenAI API key; you can grab one directly from the OpenAI website or by following this tutorial. The framework used for RAG is Haystack by Deepset, which is open source and provides APIs for building applications on top of large language models. We also use Sentence Transformers and the Transformers library from Hugging Face.
