Use LangChain’s Output Parser with ChatGPT for Structured Outputs | by Soner Yıldırım | Jun, 2023


Prompt template + LLM

A prompt template combined with an LLM is the simplest chain you can create with LangChain.

Using a prompt template has several advantages over manually building prompts with f-strings: templates can be reused across many inputs, and LangChain provides ready-to-use templates for common tasks such as querying a database.

We’ll use OpenAI’s ChatGPT as our LLM so we need to set up an API key.

import os
import openai

# Load environment variables from a local .env file (requires python-dotenv)
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

openai.api_key = os.environ['OPENAI_API_KEY']

For this code to work, you need an environment variable named OPENAI_API_KEY that holds the API key you obtained from the API Keys menu on the OpenAI website.
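Since we load the key with python-dotenv, one simple option is to put it in a .env file in the project root. A minimal sketch (the key value is a placeholder, not a real key):

```
OPENAI_API_KEY=sk-your-key-here
```

find_dotenv() searches upward from the current directory, so the file is picked up automatically when the script runs.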

Let’s start by creating a model. ChatOpenAI is LangChain’s abstraction for the ChatGPT API endpoint.

from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(temperature=0.0)

By default, ChatOpenAI uses a temperature value of 0.7. The temperature parameter controls the randomness of the output: higher values like 0.7 make the output more random, while lower values like 0.2 make it more focused and deterministic. We set it to 0.0 when creating the model instance so that the extraction results are deterministic.

The next step is to create the prompt template. We’ll create a template for extracting information from product reviews.

review_template = """
For the following review, extract the following information:

recommended: Does the buyer recommend the product?
Answer True if yes, False if not or unknown.

delivery_days: How many days did it take for the product
to arrive? If this information is not found, output -1.

setup: Extract any sentences about the setup of the product.

Format the output as JSON with the following keys:
recommended
delivery_days
setup

review: {review}
"""

from langchain.prompts import ChatPromptTemplate

prompt_template = ChatPromptTemplate.from_template(review_template)

The code snippet above creates a prompt template from the given prompt string. The {review} placeholder is registered as an input variable, which can be checked using the input_variables attribute:

prompt_template.input_variables

# output
['review']

We can now create an actual prompt using this template and a product review.

product_review = """
I got this product to plug my internet based phone for work from home (Avaya desktop phone).
It works! It arrived in 5 days, which was earlier than the estimated delivery date.
The setup was EXTREMELY easy. At completion, I plugged the phone into the
extender's ethernet port and made a few phone calls which all worked perfectly with
complete clarity. VERY happy with this purchase since a cordless headset is
around $250 (which I would have needed since the phone had to be at the ethernet
port on the wall). I recommend this product!
"""

messages = prompt_template.format_messages(review=product_review)

messages is a Python list that contains the actual prompt message. We can inspect it with messages[0].content, which prints the following prompt:

For the following review, extract the following information:

recommended: Does the buyer recommend the product? Answer True if yes, False if not or unknown.

delivery_days: How many days did it take for the product to arrive? If this information is not found, output -1.

setup: Extract any sentences about the setup of the product.

Format the output as JSON with the following keys: recommended delivery_days setup

review: I got this product to plug my internet based phone for work from home (Avaya desktop phone). It works! It arrived in 5 days, which was earlier than the estimated delivery date. The setup was EXTREMELY easy. At completion, I plugged the phone into the extender’s ethernet port and made a few phone calls which all worked perfectly with complete clarity. VERY happy with this purchase since a cordless headset is around $250 (which I would have needed since the phone had to be at the ethernet port on the wall). I recommend this product!

We have the model and prompt ready. The next step is to query the model using the prompt:

# chat is the model and messages is the prompt
response = chat(messages)
print(response.content)

# output
{
    "recommended": true,
    "delivery_days": 5,
    "setup": "The setup was EXTREMELY easy."
}

Although the response looks like JSON, it is actually a string, which makes it difficult to work with programmatically.

type(response.content)
# output
str
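To make the problem concrete, here is a minimal sketch using a reply string like the one above: a str cannot be indexed by key the way a dict can, and while json.loads works for strictly valid JSON, it is brittle.

```python
import json

# A reply string shaped like the model output shown above
reply = '{"recommended": true, "delivery_days": 5, "setup": "The setup was EXTREMELY easy."}'

# Indexing a str with a key raises TypeError: string indices must be integers
try:
    reply["recommended"]
except TypeError:
    print("reply is a plain string, not a dict")

# json.loads works when the reply is strictly valid JSON, but it fails as soon
# as the model adds surrounding prose or wraps the JSON in a Markdown fence.
parsed = json.loads(reply)
print(parsed["delivery_days"])  # 5
```

This fragility is exactly what a dedicated output parser is meant to absorb.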

We’ll now learn how to use an output parser together with the prompt template to make it easier to parse the output.
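As a rough preview of what an output parser does under the hood, here is a stdlib-only sketch (the helper name parse_json_block is hypothetical, not a LangChain function): it locates the JSON payload in the model's reply, strips an optional Markdown code fence, and loads the result into a Python dict.

```python
import json
import re

def parse_json_block(text: str) -> dict:
    # Hypothetical helper sketching the post-processing an output parser
    # performs: find the JSON payload, which may be wrapped in a ```json
    # Markdown fence, and load it into a dict.
    match = re.search(r"```(?:json)?\s*(\{.*\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

# A reply wrapped in a Markdown code fence, as chat models often return
reply = '```json\n{"recommended": true, "delivery_days": 5}\n```'
parsed = parse_json_block(reply)
print(parsed["delivery_days"])  # 5
```

LangChain's parsers go further, generating format instructions for the prompt and validating the reply against declared response schemas, as we will see next.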


