LLaVA#
In this notebook we will use LLaVA, a vision-language model, to inspect a natural image.
import openai
from skimage.io import imread
import stackview
from image_utilities import numpy_to_bytestream
import base64
from stackview._image_widget import _img_to_rgb
Example images#
First we load a natural image.
The LLaVA model can describe images via the Ollama API.
def prompt_ollama(prompt: str, image, model="llava"):
    """A prompt helper function that sends a text prompt and an image
    to Ollama and returns only the text response.
    """
    # encode the image as a base64 string for the data URL below
    rgb_image = _img_to_rgb(image)
    byte_stream = numpy_to_bytestream(rgb_image)
    base64_image = base64.b64encode(byte_stream).decode('utf-8')

    message = [{"role": "user", "content": [
        {"type": "text", "text": prompt},
        {
            "type": "image_url",
            "image_url": {
                "url": f"data:image/jpeg;base64,{base64_image}"
            }
        }]}]

    # setup connection to the LLM via Ollama's OpenAI-compatible endpoint
    client = openai.OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="none"  # Ollama ignores the key, but the client requires one
    )

    # submit prompt
    response = client.chat.completions.create(
        model=model,
        messages=message
    )

    # extract answer
    return response.choices[0].message.content
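The helper above ships the image as a base64-encoded JPEG inside a data URL. As a minimal, self-contained sketch of that encoding step, assuming Pillow is installed (a tiny synthetic RGB array stands in here for a real photo and for `numpy_to_bytestream` / `_img_to_rgb`):

```python
import base64
import io

import numpy as np
from PIL import Image

# tiny synthetic 2x2 RGB image standing in for a real photo
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# serialize to JPEG bytes in memory, then base64-encode them
buffer = io.BytesIO()
Image.fromarray(rgb).save(buffer, format="JPEG")
base64_image = base64.b64encode(buffer.getvalue()).decode("utf-8")

# this string goes into the "image_url" field of the chat message
data_url = f"data:image/jpeg;base64,{base64_image}"
print(data_url[:23])  # → data:image/jpeg;base64,
```

Any encoder that produces JPEG (or PNG) bytes works here; the chat endpoint only cares that the data URL prefix matches the actual image format.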
image = imread("data/real_cat.png")
stackview.insight(image)
prompt_ollama("what's in this image?", image, model="llava")
" This is an image of a cat sitting on top of a white microscope. The room appears to be indoors, and the focus is on the cat, which looks attentive or curious. The microscope has an adjustable arm and a magnifying glass, suggesting that it may be used for scientific observation or documentation. Additionally, there's a toy telescope in front of the cat, further indicating that this might be a space related to science, such as an astronomy or telescope repair station. The cat's attention is not on the microscope but rather directed off-camera. "
Exercise#
Load the MRI dataset and ask LLaVA about the image.