Powerful system messages
In this notebook we will generate image analysis code using a detailed system prompt. The system prompt contains generic instructions and specific bio-image analysis knowledge. The knowledge is stored in a text file:
import openai
from IPython.display import Markdown
filename = 'code_snippets.txt'
with open(filename, 'r') as file:
    knowledge = file.read()
Markdown(knowledge[:428])
Displays an image with a slider and label showing mouse position and intensity.
stackview.annotate(image, labels)
Allows cropping an image along all axes.
stackview.crop(image)
Showing an image stored in variable image and a segmented image stored in variable labels on top. Also works with two images or two label images.
stackview.curtain(image, labels, alpha: float = 1)
We use this content and the following instructions to assemble a system message:
system_message = f"""
You are an extremely talented bioimage analyst and you use Python to solve your tasks.
If the request entails writing code, write concise, professional, high-quality bioimage analysis code.
## Python specific instructions
You can only use those Python libraries: scikit-image,numpy,scipy,pandas,matplotlib,seaborn,scikit-learn,stackview,torch,cellpose,stardist,n2v,pyclesperanto_prototype,apoc,napari-segment-blobs-and-things-with-membranes,napari-simpleitk-image-processing,napari-skimage-regionprops,skan,os,dask,czifile.
If the user asks for one of these simple tasks, use the corresponding code snippets.
{knowledge}
## Todos
Structure your response in three sections:
1. Summary: First provide a short summary of the task.
2. Plan: Provide a concise step-by-step plan without any code.
3. Code: Provide the code.
Structure it with markdown headings like this:
### Summary
I will do this and that.
### Plan
1. Do this.
2. Do that.
### Code
```
this()
that()
```
## Final remarks
The following points have the highest importance and may override the instructions above.
Make sure to provide 1) summary, 2) plan and 3) code.
Make sure to keep your answer concise and to the point.
Make sure the code you write is correct and can be executed.
"""
The following helper function shows how we integrate the system message into the list of messages submitted to the server.
def prompt(message:str, system_message:str=None, model="gpt-4o"):
    """A prompt helper function that sends a message to openAI
    and returns only the text response.
    """
    import openai
    client = openai.OpenAI()

    messages = []
    if system_message is not None:
        messages += [{"role": "system", "content": system_message}]
    messages += [{"role": "user", "content": message}]

    response = client.chat.completions.create(
        model=model,
        messages=messages
    )
    return response.choices[0].message.content
Our question refers to a function that is listed in our knowledge base, but might not be known to ChatGPT.
my_question = "How can I segment an image using Otsu's method, spot detection and Voronoi-tessellation?"
answer1 = prompt(my_question, system_message=system_message)
Markdown(answer1)
Summary
I will segment an image using Otsu’s method, spot detection, and Voronoi-tessellation by leveraging the cle.voronoi_otsu_labeling function from the pyclesperanto_prototype library.

Plan
1. Load the image that needs to be segmented.
2. Use the cle.voronoi_otsu_labeling function to perform Otsu’s thresholding, spot detection, and Voronoi-tessellation on the image.
3. Visualize the segmented image.

Code
import pyclesperanto_prototype as cle
import stackview
from skimage.io import imread
# Step 1: Load the image
image = imread('path_to_image.tif') # Replace with the actual path to your image
# Step 2: Perform segmentation using Otsu's method, spot detection, and Voronoi-tessellation
segmented_image = cle.voronoi_otsu_labeling(image, spot_sigma=2, outline_sigma=2)
# Step 3: Visualize the segmented image
stackview.insight(segmented_image)
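The answer relies on the voronoi_otsu_labeling function listed in our knowledge base. As a quick check that such generated code is actually executable, here is a minimal sketch, assuming the human_mitosis sample image bundled with scikit-image in place of the placeholder 'path_to_image.tif':

```
import pyclesperanto_prototype as cle
import stackview
from skimage.data import human_mitosis

# Use a bundled sample image instead of a file on disk (assumption for this check)
image = human_mitosis()

# Segment nuclei using Otsu's method, spot detection and Voronoi-tessellation
segmented_image = cle.voronoi_otsu_labeling(image, spot_sigma=3, outline_sigma=1)

# Visualize the segmentation result
stackview.insight(segmented_image)
```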
For comparison, we submit the same request again, but this time without the system message.
answer2 = prompt(my_question)
Markdown(answer2)
Sure! Otsu’s method, spot detection, and Voronoi tessellation can be combined to segment an image in a meaningful way. Here’s a step-by-step outline of how you can do this:
Otsu’s Method for Initial Segmentation:
Otsu’s method is used to perform automatic thresholding which separates the foreground (relevant objects) from the background based on their intensity levels.
Here’s how you can implement it:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image
image = cv2.imread('path/to/image.jpg', cv2.IMREAD_GRAYSCALE)

# Apply Otsu's thresholding
ret, otsu_threshold = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Display the result
plt.imshow(otsu_threshold, cmap='gray')
plt.title("Otsu's Thresholding")
plt.show()
Spot Detection using blob detection:
To detect spots in the image, you can use techniques such as Blob Detection. One popular approach is using the SimpleBlobDetector provided by OpenCV.

# Set up the SimpleBlobDetector with default parameters
params = cv2.SimpleBlobDetector_Params()
detector = cv2.SimpleBlobDetector_create(params)

# Detect blobs
keypoints = detector.detect(otsu_threshold)

# Draw detected blobs as red circles
blobs = cv2.drawKeypoints(image, keypoints, np.zeros((1,1)), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Display the result
plt.imshow(blobs, cmap='gray')
plt.title("Blob Detection")
plt.show()
Voronoi Tessellation:
To apply Voronoi Tessellation, you need the coordinates of the detected spots from the previous step.
OpenCV or SciPy can be used for computing the Voronoi diagram.
from scipy.spatial import Voronoi, voronoi_plot_2d

# Extract the coordinates of the keypoints
points = np.array([kp.pt for kp in keypoints], dtype=np.float32)

# Compute Voronoi tessellation
vor = Voronoi(points)

# Plot the Voronoi diagram
voronoi_plot_2d(vor)
plt.title("Voronoi Tessellation")
plt.show()
Combining Results:
Finally, combine the results from Otsu’s segmentation, blob detection, and Voronoi tessellation to interpret and analyze the segmented regions within the image.
This may involve masking and labeling the areas defined by the Voronoi cells:
regions, vertices = voronoi_finite_polygons_2d(vor)

for region in regions:
    polygon = vertices[region]
    # Optionally: Mask and process each polygon with respect to the original image
    # to identify and segment specific regions.
Complete Code Example:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d
def voronoi_finite_polygons_2d(vor, radius=None):
    new_regions = []
    new_vertices = vor.vertices.tolist()

    center = vor.points.mean(axis=0)
    if radius is None:
        radius = vor.points.ptp().max()

    all_ridges = {}
    for (p1, p2), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices):
        all_ridges.setdefault(p1, []).append((p2, v1, v2))
        all_ridges.setdefault(p2, []).append((p1, v1, v2))

    for p1, region in enumerate(vor.point_region):
        vertices = vor.regions[region]

        if all(v >= 0 for v in vertices):
            new_regions.append(vertices)
            continue

        ridges = all_ridges[p1]
        new_region = [v for v in vertices if v >= 0]

        for p2, v1, v2 in ridges:
            if v2 < 0:
                v1, v2 = v2, v1
            if v1 >= 0:
                continue

            t = vor.points[p2] - vor.points[p1]
            t /= np.linalg.norm(t)
            n = np.array([-t[1], t[0]])

            midpoint = vor.points[[p1, p2]].mean(axis=0)
            direction = np.sign(np.dot(midpoint - center, n)) * n
            far_point = vor.vertices[v2] + direction * radius

            new_region.append(len(new_vertices))
            new_vertices.append(far_point.tolist())

        vs = np.asarray([new_vertices[v] for v in new_region])
        c = vs.mean(axis=0)
        angles = np.arctan2(vs[:, 1] - c[1], vs[:, 0] - c[0])
        new_region = np.array(new_region)[np.argsort(angles)]
        new_regions.append(new_region.tolist())

    return new_regions, np.asarray(new_vertices)
# Load the image
image = cv2.imread('path/to/image.jpg', cv2.IMREAD_GRAYSCALE)
# Apply Otsu's thresholding
ret, otsu_threshold = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Set up the SimpleBlobDetector with default parameters
params = cv2.SimpleBlobDetector_Params()
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs
keypoints = detector.detect(otsu_threshold)
# Draw detected blobs as red circles
blobs = cv2.drawKeypoints(image, keypoints, np.zeros((1,1)), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Display the blobs
plt.imshow(blobs, cmap='gray')
plt.title("Blob Detection")
plt.show()
# Extract the coordinates of the keypoints
points = np.array([kp.pt for kp in keypoints], dtype=np.float32)
# Compute Voronoi tessellation
vor = Voronoi(points)
# Generate finite polygons
regions, vertices = voronoi_finite_polygons_2d(vor)
# Plot the Voronoi diagram
plt.figure()
voronoi_plot_2d(vor)
plt.title("Voronoi Tessellation")
for region in regions:
    polygon = vertices[region]
    plt.fill(*zip(*polygon), edgecolor='k', alpha=0.4)
plt.show()
This complete example integrates all steps and will help you segment the image visually and analytically using Otsu’s method, spot detection, and Voronoi tessellation. The outlined steps can be further refined based on specific needs and image characteristics.
Exercise
Modify the knowledge base. Remove the information about Voronoi-Otsu-Labeling and run this notebook again.
Optional exercise: Add more knowledge to the knowledge file and test whether the language model picks it up, as sketched below.
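For the optional exercise, here is a minimal sketch of how an additional snippet could be appended to the knowledge file, assuming the entries follow the description-plus-code format shown above; the snippet text itself is a hypothetical example:

```
# Hypothetical additional entry, following the format of the existing snippets
new_snippet = """
* Binarizes an image using Otsu's method.
binary_image = image > skimage.filters.threshold_otsu(image)
"""

# Append it to the knowledge file, then re-run the notebook to test whether the model uses it
with open('code_snippets.txt', 'a') as file:
    file.write(new_snippet)
```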