KISSKI / GWDG endpoint#

In this notebook we will use the KISSKI LLM service infrastructure. KISSKI is the German AI Service Center for Sensitive and Critical Infrastructure. Before you can access it, you need to create an API key by filling out this form; make sure to check the box “API access to our chat service”. Like the endpoints shown before, this one uses the OpenAI Python API; we only change the base_url.
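Since the helper below reads the key from the `KISSKI_API_KEY` environment variable, it can be useful to fail early with a clear message if the key is missing. A minimal sketch; `require_api_key` is a hypothetical helper name, not part of any library:

```python
import os

def require_api_key(name: str = "KISSKI_API_KEY") -> str:
    """Return the API key stored in the given environment variable,
    raising a helpful error if it is missing or empty."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"Environment variable {name} is not set. "
            "Request a key via the KISSKI form and export it first."
        )
    return key
```

Calling `require_api_key()` before the first request turns a confusing authentication error into an actionable one.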

import os
import openai
openai.__version__
'1.58.1'
def prompt_kisski(message: str, model="meta-llama-3.1-70b-instruct"):
    """A prompt helper function that sends a message to the KISSKI Chat AI API
    and returns only the text response.
    """
    import os
    import openai
    
    # convert message into the right format if necessary
    if isinstance(message, str):
        message = [{"role": "user", "content": message}]
    
    # set up the connection to the LLM; base_url and api_key are passed to
    # the constructor, because openai.OpenAI() raises an error if no key is
    # provided and OPENAI_API_KEY is not set in the environment
    client = openai.OpenAI(
        base_url="https://chat-ai.academiccloud.de/v1",
        api_key=os.environ.get("KISSKI_API_KEY"),
    )
    
    response = client.chat.completions.create(
        model=model,
        messages=message
    )
    
    # extract answer
    return response.choices[0].message.content
prompt_kisski("Hi!")
'Hello. How can I assist you today?'
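The helper accepts either a plain string or a ready-made list of messages, so multi-turn conversations and system prompts work as well. The conversion step at the top of the function can be factored out like this (a minimal sketch; `as_messages` is a hypothetical name introduced here for illustration):

```python
def as_messages(message):
    """Wrap a plain string as a single-turn user message list;
    pass lists of messages through unchanged."""
    if isinstance(message, str):
        return [{"role": "user", "content": message}]
    return message

# a plain string becomes a one-element conversation
print(as_messages("Hi!"))

# a list, e.g. including a system prompt, is left untouched
print(as_messages([
    {"role": "system", "content": "Answer briefly."},
    {"role": "user", "content": "Hi!"},
]))
```

The same list format can be handed directly to `prompt_kisski()` to steer the model with a system prompt.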

Exercise#

List the models available in the KISSKI endpoint and try them out by specifying them when calling prompt_kisski().

client = openai.OpenAI(
    base_url="https://chat-ai.academiccloud.de/v1",
    api_key=os.environ.get("KISSKI_API_KEY"),
)

print("\n".join([model.id for model in client.models.list().data]))
meta-llama-3.1-8b-instruct
internvl2.5-8b
deepseek-r1
deepseek-r1-distill-llama-70b
llama-3.3-70b-instruct
llama-3.1-nemotron-70b-instruct
llama-3.1-sauerkrautlm-70b-instruct
mistral-large-instruct
qwen2.5-vl-72b-instruct
qwen2.5-72b-instruct
qwen2.5-coder-32b-instruct
codestral-22b
meta-llama-3.1-8b-rag
occiglot-7b-eu5-instruct
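A possible start for the exercise is to pick a subset of the model ids printed above and pass each one to prompt_kisski(). The selection itself is plain string handling; as a sketch, keeping only the code-oriented models from the list above:

```python
# model ids as printed by client.models.list() above
model_ids = [
    "meta-llama-3.1-8b-instruct",
    "internvl2.5-8b",
    "deepseek-r1",
    "deepseek-r1-distill-llama-70b",
    "llama-3.3-70b-instruct",
    "llama-3.1-nemotron-70b-instruct",
    "llama-3.1-sauerkrautlm-70b-instruct",
    "mistral-large-instruct",
    "qwen2.5-vl-72b-instruct",
    "qwen2.5-72b-instruct",
    "qwen2.5-coder-32b-instruct",
    "codestral-22b",
    "meta-llama-3.1-8b-rag",
    "occiglot-7b-eu5-instruct",
]

# keep models whose id suggests they are tuned for code
coder_models = [m for m in model_ids if "coder" in m or "codestral" in m]
print(coder_models)
```

Each selected id could then be tried out as `prompt_kisski("Hi!", model=m)`; this requires a valid `KISSKI_API_KEY`.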