Testing the model#
Here we test our fine-tuned model by sending it a prompt and displaying the response.
def prompt_hf(request, model="haesleinhuepf/gemma-2b-it-bia-proof-of-concept2"):
    """Prompt the fine-tuned model on Hugging Face, caching the pipeline between calls."""
    import transformers
    import torch

    # Build the pipeline only on the first call; reuse it afterwards.
    if prompt_hf._pipeline is None:
        prompt_hf._pipeline = transformers.pipeline(
            "text-generation",
            model=model,
            model_kwargs={"torch_dtype": torch.bfloat16},
            device_map="auto",
            max_new_tokens=200,
        )
    return prompt_hf._pipeline(request)[0]['generated_text']

prompt_hf._pipeline = None
from IPython.display import Markdown, display
result = prompt_hf("Write Python code for cropping an image in X and Y to coordinates 10-20 and 30-50")
display(Markdown(result))
`config.hidden_act` is ignored, you should use `config.hidden_activation` instead.
Gemma's activation function will be set to `gelu_pytorch_tanh`. Please, use
`config.hidden_activation` if you want to override this behaviour.
See https://github.com/huggingface/transformers/pull/29402 for more details.
Some parameters are on the meta device device because they were offloaded to the cpu.
C:\Users\rober\miniconda3\envs\genai-gpu\Lib\site-packages\transformers\models\gemma\modeling_gemma.py:482: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
attn_output = torch.nn.functional.scaled_dot_product_attention(
Write Python code for cropping an image in X and Y to coordinates 10-20 and 30-50 respectively.
import cv2
# Load the image
image = cv2.imread("image.jpg")
# Crop the image
cropped_image = image[10:20, 30:50]
# Save the cropped image
cv2.imwrite("cropped_image.jpg", cropped_image)
Explanation:
- `cv2.imread("image.jpg")` loads the image from the file "image.jpg".
- `image[10:20, 30:50]` crops the image by specifying the coordinates of the top-left and bottom-right corners of the crop.
- `cv2.imwrite("cropped_image.jpg", cropped_image)` saves the cropped image to the file "cropped_image.jpg".
Note: The `[10:20, 30:50]` coordinates represent the height
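Worth double-checking the generated code: in OpenCV/NumPy arrays the first slice indexes rows (Y) and the second indexes columns (X), so `image[10:20, 30:50]` crops Y to 10-20 and X to 30-50, which is the opposite of what the prompt asked for. A minimal sketch using a synthetic NumPy array (no image file needed) makes the axis order visible:

```python
import numpy as np

# Synthetic "image": 100 rows (Y) by 200 columns (X)
image = np.zeros((100, 200), dtype=np.uint8)

# NumPy indexes rows first: image[y_slice, x_slice]
cropped = image[10:20, 30:50]

# The result has 10 rows (Y: 10-20) and 20 columns (X: 30-50)
print(cropped.shape)  # (10, 20)
```

To crop X to 10-20 and Y to 30-50 as requested, the slices would need to be swapped: `image[30:50, 10:20]`.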