Reflection#

Reflection is a technique in which the output of an LLM is reviewed, by the same or a different LLM, in an additional prompting step. With this extra step, we can correct results the LLM got wrong in the first attempt.

import openai
from IPython.display import Markdown
import json

def prompt(message: str, model="gpt-3.5-turbo"):
    """A prompt helper function that sends a message to OpenAI
    and returns only the text response.
    """
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": message}]
    )
    return response.choices[0].message.content

Initial prompting step#

First, we prompt an LLM as usual. Our example task asks it to generate a trivial Jupyter notebook. Jupyter notebooks are files in JSON format.

first_notebook = prompt("""
Write Python code for adding two numbers `a` and `b`.
Output it as a Jupyter notebook in ipynb/json format.
""").strip().removeprefix("```json").removesuffix("```").strip()

first_file = "generated_notebook.ipynb"
with open(first_file, 'w') as file:
    file.write(first_notebook)
    
Markdown(f"[Open notebook]({first_file})")

This function can be used to test whether the response is valid JSON.

def is_valid_json(test_string):
    """Return True if test_string parses as JSON."""
    try:
        json.loads(test_string)
        return True
    except json.JSONDecodeError:
        return False

is_valid_json(first_notebook)
False
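One common reason a first response fails this check is that the model wraps the JSON in explanatory text or Markdown fences; the embedded payload can be valid even when the full response is not. (Whether that is what happened here depends on the model's actual output.) A small self-contained illustration, with the helper repeated for completeness:

```python
import json

def is_valid_json(test_string):
    try:
        json.loads(test_string)
        return True
    except ValueError:
        return False

# A typical raw LLM response: valid JSON buried in prose and fences.
response = 'Here is your notebook:\n```json\n{"cells": []}\n```'
print(is_valid_json(response))         # False: surrounding text breaks parsing
print(is_valid_json('{"cells": []}'))  # True: the embedded payload alone parses
```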

Reflection step#

We now take the output of the first prompt and ask the LLM to correct it, making sure the result is a valid Jupyter notebook file.

second_notebook = prompt(f"""
Take the following text and extract the Jupyter 
notebook ipynb/json from it:

{first_notebook}

Make sure the output is in ipynb/json format.
""").strip().removeprefix("```json").removesuffix("```").strip()

second_file = "modified_notebook.ipynb"
with open(second_file, 'w') as file:
    file.write(second_notebook)
    
Markdown(f"[Open notebook]({second_file})")

is_valid_json(second_notebook)
True
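The two steps above can be combined into a loop that keeps reflecting until the output validates. This is a sketch, not part of the original code: `reflect_until_valid` and `max_rounds` are illustrative names, and `prompt_fn` stands for any callable like the `prompt` helper defined earlier. The validity check is repeated so the snippet is self-contained.

```python
import json

def is_valid_json(text):
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def reflect_until_valid(task, prompt_fn, max_rounds=3):
    """Prompt once, then reflect until the response is valid JSON
    or max_rounds is exhausted."""
    text = prompt_fn(task)
    for _ in range(max_rounds):
        if is_valid_json(text):
            return text
        # Reflection step: feed the previous answer back for correction.
        text = prompt_fn(
            "Take the following text and extract the JSON from it:\n\n"
            f"{text}\n\nRespond with valid JSON only."
        )
    return text
```

With the `prompt` function from the beginning of this section, calling `reflect_until_valid(task, prompt)` would replace the two manual steps shown above, at the cost of up to `max_rounds` additional API calls.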