Generating Jupyter Notebooks using Reflection#
Reflection is a technique in which the output of an LLM is reviewed and revised in a second prompting step, by the same or a different LLM. With this additional step, we can correct mistakes the LLM made in its first response.
import openai
from IPython.display import Markdown
import json
from llm_utilities import prompt_scadsai_llm
from functools import partial
# select the prompt function and model
prompt = partial(prompt_scadsai_llm, model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct")
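The `partial` call binds the `model` keyword argument once, so later calls only need to pass the prompt text. A minimal sketch of this pattern with a stand-in function (hypothetical, since `prompt_scadsai_llm` requires access to the ScaDS.AI LLM server):

```python
from functools import partial

# Stand-in for prompt_scadsai_llm: echoes which model would answer.
def fake_prompt(message, model="default-model"):
    return f"[{model}] {message}"

# Bind the model once; subsequent calls only pass the prompt text.
prompt = partial(fake_prompt, model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct")

print(prompt("Hello"))
# [deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct] Hello
```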
Initial prompting step#
First, we prompt an LLM as usual. Our example task asks the LLM to generate a trivial Jupyter notebook. Jupyter notebooks are files in JSON format.
first_notebook = prompt("""
Write Python code for demonstrating plotting using seaborn with 10 plots.
Output it as a Jupyter notebook in ipynb/json format.
Add explanatory cells in between the code cells.
The notebook should contain an explanation of how to call the function.
Make sure there are line breaks after each code line.
""").strip().removeprefix("```json").removesuffix("```")
first_file = "generated_notebook.ipynb"
with open(first_file, 'w') as file:
    file.write(first_notebook)
Markdown(f"[Open notebook]({first_file})")
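LLM responses often wrap code in markdown fences. A small helper that removes such fences (a sketch; the function name `remove_code_fence` is our own, not part of any library):

```python
def remove_code_fence(text, language="json"):
    """Remove a leading ```<language> and a trailing ``` fence, if present."""
    text = text.strip()
    if text.startswith("```" + language):
        text = text[len("```" + language):]
    elif text.startswith("```"):
        text = text[3:]
    if text.endswith("```"):
        text = text[:-3]
    return text.strip()

print(remove_code_fence('```json\n{"cells": []}\n```'))
# {"cells": []}
```

Text without fences passes through unchanged, so the helper is safe to apply to every response.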
This function can be used to test whether the response is valid JSON.
def is_valid_json(test_string):
    try:
        json.loads(test_string)
        return True
    except ValueError:
        return False
is_valid_json(first_notebook)
False
Reflection step#
We now take the output of the first prompt and ask the LLM to modify it, to make sure the result is a valid Jupyter notebook file.
second_notebook = prompt(f"""
Take the following text and extract the Jupyter
notebook ipynb/json from it:
{first_notebook}
Make sure the output is in ipynb/json format only. No markdown fences please.
""").strip().removeprefix("```json").removesuffix("```")
second_file = "modified_notebook.ipynb"
with open(second_file, 'w') as file:
    file.write(second_notebook)
Markdown(f"[Open notebook]({second_file})")
is_valid_json(second_notebook)
True
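Validation and reflection can also be combined into a loop that re-prompts until the output is valid JSON or a maximum number of attempts is reached. A sketch using a mock prompt function in place of the LLM (the names `reflect_until_valid` and `mock_prompt` are our own; real usage would pass the `prompt` function from above):

```python
import json

def is_valid_json(test_string):
    try:
        json.loads(test_string)
        return True
    except ValueError:
        return False

def reflect_until_valid(prompt_function, task, max_attempts=3):
    """Prompt once, then repeatedly ask the model to fix its own
    output until it is valid JSON or max_attempts is reached."""
    result = prompt_function(task)
    for _ in range(max_attempts):
        if is_valid_json(result):
            return result
        result = prompt_function(
            "Take the following text and extract the Jupyter notebook "
            f"ipynb/json from it. Output JSON only:\n{result}")
    return result

# Mock LLM: answers with prose first, then valid JSON when asked to fix it.
responses = iter(['Here is the notebook: {"cells": []}', '{"cells": []}'])
mock_prompt = lambda message: next(responses)

print(reflect_until_valid(mock_prompt, "Write a notebook."))
# {"cells": []}
```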
Exercise#
Ask the LLM to produce a more complex example notebook. Try to find the limits of this approach.
Exercise#
Try different models such as meta-llama/Llama-3.3-70B-Instruct and mistral-7b-q4.