Model Fine-Tuning Locally
In this section we will fine-tune our own model on local hardware. To execute the notebooks, a CUDA-compatible graphics card with at least 16 GB of memory is recommended.
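As a rough sanity check for whether a given model fits in 16 GB of GPU memory, one can estimate the memory taken by the weights alone from the parameter count and the numeric precision. The sketch below is illustrative; the function name and the example model size are assumptions, and real fine-tuning needs additional headroom for gradients, optimizer states, and activations.

```python
def estimate_weight_memory_gb(n_params_billions: float, bytes_per_param: float) -> float:
    """Approximate GPU memory needed for the model weights alone, in GB.

    1e9 parameters * bytes_per_param bytes, divided by 1e9 bytes per GB,
    simplifies to n_params_billions * bytes_per_param.
    """
    return n_params_billions * bytes_per_param

# Hypothetical 7B-parameter model:
print(estimate_weight_memory_gb(7.0, 2.0))  # float16 weights -> 14.0 GB
print(estimate_weight_memory_gb(7.0, 0.5))  # 4-bit quantized -> 3.5 GB
```

This is why quantized fine-tuning approaches are popular on consumer GPUs: at half precision the weights of a 7B model already nearly fill a 16 GB card, while 4-bit quantization leaves room for the training overhead.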