Blablador endpoint#
In this notebook we will use the Blablador infrastructure at Forschungszentrum Jülich. Before you can access it, you need to create an API key as explained on this page and store it in the environment variable BLABLADOR_API_KEY. Note that this method also uses the OpenAI Python API; we only change the base_url to point at the Blablador server.
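Before making any requests, it can help to verify that the key is actually available in the environment. The small helper below is a sketch for illustration (the function name ensure_api_key is an assumption, not part of any library):

```python
import os

def ensure_api_key(name: str = "BLABLADOR_API_KEY") -> str:
    """Return the API key stored in the given environment variable,
    or raise a helpful error if it is missing."""
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(
            f"{name} is not set; create an API key and export it in your shell."
        )
    return key
```

Calling ensure_api_key() before building the client gives a clearer error message than a failed HTTP request later on.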
import os
import openai
openai.__version__
'1.74.0'
def prompt_blablador(prompt: str, model="alias-llama3-huge"):
    """A prompt helper function that sends a prompt to Blablador (FZ Jülich)
    and returns only the text response.
    """
    # set up the connection to the LLM server
    client = openai.OpenAI(
        base_url="https://helmholtz-blablador.fz-juelich.de:8000/v1",
        api_key=os.environ.get("BLABLADOR_API_KEY"),
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # extract the answer text
    return response.choices[0].message.content
prompt_blablador("Hi!")
'Hello! How are you today? Is there something I can help you with, or would you like to chat?'
Exercise#
List the models available at the Blablador endpoint and try them out by specifying them when calling prompt_blablador().
client = openai.OpenAI(
    base_url="https://helmholtz-blablador.fz-juelich.de:8000/v1",
    api_key=os.environ.get("BLABLADOR_API_KEY"),
)
print("\n".join([model.id for model in client.models.list().data]))
1 - Llama3 405 the best general model and big context size
1 - Ministral 8b - the fast model
1 - Teuken-7B-instruct-research-v0.4 - The OpenGPT-X model
10 Mistral-Nemo-Instruct-2407 - Our fast-experimental - with a large context size
2 - Qwen3 30B A3B - a reasoning model from Alibaba from April 2025
3 - DeepCoder-14B-Preview - the code model from 09.04.2025
alias-code
alias-fast
alias-fast-experimental
alias-large
alias-llama3-huge
alias-opengptx
alias-reasoning
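The list above contains both concrete model names and stable alias-* shortcuts. One way to work with these in your own code is a small lookup table from task type to alias. This is a hypothetical convenience mapping for illustration; the grouping of aliases by task is an assumption based on the names above, not something the endpoint defines:

```python
# Hypothetical mapping from task type to the alias names the endpoint lists.
# The aliases come from client.models.list(); the grouping is an assumption.
TASK_ALIASES = {
    "general": "alias-large",
    "fast": "alias-fast",
    "code": "alias-code",
    "reasoning": "alias-reasoning",
}

def model_for(task: str) -> str:
    """Return the Blablador alias for a task type, defaulting to 'alias-large'."""
    return TASK_ALIASES.get(task, "alias-large")
```

You could then call, for example, prompt_blablador("Write a sorting function", model=model_for("code")).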