Prompting bio-image analysis tasks using LangChain#
In this notebook we demonstrate how to prompt for executing bio-image analysis tasks using chatGPT and LangChain.
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.tools import tool
from skimage.io import imread
from skimage.measure import label
import stackview
For accomplishing this, we need an image storage. To keep it simple, we use a dictionary.
image_storage = {}
To demonstrate bio-image analysis using English language, we define common bio-image analysis functions for loading images, segmenting and counting objects and showing results.
tools = []
@tools.append
@tool
def load_image(filename:str):
"""Useful for loading and image file and storing it."""
print("loading", filename)
image = imread(filename)
image_storage[filename] = image
return "The image is now stored as " + filename
@tools.append
@tool
def segment_bright_objects(image_name):
"""Useful for segmenting bright objects in an image that has been loaded and stored before."""
print("segmenting", image_name)
image = image_storage[image_name]
label_image = label(image > image.max() / 2)
label_image_name = "segmented_" + image_name
image_storage[label_image_name] = label_image
return "The segmented image has been stored as " + label_image_name
@tools.append
@tool
def show_image(image_name):
"""Useful for showing an image that has been loaded and stored before."""
print("showing", image_name)
image = image_storage[image_name]
display(stackview.insight(image))
return "The image " + image_name + " is shown above."
@tools.append
@tool
def count_objects(image_name):
"""Useful for counting objects in a segmented image that has been loaded and stored before."""
label_image = image_storage[image_name]
num_labels = label_image.max()
print("counting labels in ", image_name, ":", num_labels)
return f"The label image {image_name} contains {num_labels} labels."
We create some memory and a large language model based on OpenAI’s chatGPT.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
Given the list of tools, the large language model and the memory, we can create an agent.
agent = initialize_agent(
tools,
llm,
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
memory=memory
)
C:\Users\haase\miniconda3\envs\genai2\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The function `initialize_agent` was deprecated in LangChain 0.1.0 and will be removed in 0.3.0. Use Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.
warn_deprecated(
This agent can then respond to prompts.
agent.run("Please load the image data/blobs.tif and show it.")
C:\Users\haase\miniconda3\envs\genai2\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain 0.1.0 and will be removed in 0.3.0. Use invoke instead.
warn_deprecated(
loading data/blobs.tif
'The image data/blobs.tif has been successfully loaded.'
agent.run("Please segment the image data/blobs.tif .")
segmenting data/blobs.tif
'The segmented image has been stored as segmented_data/blobs.tif'
agent.run("Please show the segmented data/blobs.tif image.")
showing segmented_data/blobs.tif
|
'The segmented image has been shown.'
agent.run("How many objects are there in the segmented data/blobs.tif image?")
counting labels in segmented_data/blobs.tif : 64
'The segmented data/blobs.tif image contains 64 objects.'
Chaining operations#
We can also chain these operations in a single sentence and the agent
will figure out on it’s own how to do this.
# empty memory and start from scratch
image_memory = {}
agent.run("""
Please load the image data/blobs.tif,
segment bright objects in it,
count them and
show the segmentation result.
""")
loading data/blobs.tif
'The segmented data/blobs.tif image contains 64 objects.'
agent.run("How many objects were there?")
'There are 64 objects in the segmented image.'
Exercise#
Add another function that allows to extract quantitative parameters from the segmented objects, e.g. area, and measures the average area of objects.