KISSKI / GWDG endpoint#

In this notebook we will use the KISSKI LLM service infrastructure. KISSKI is the German AI Service Center for Sensitive and Critical Infrastructure. Before you can access it, you need to create an API key by filling out this form; make sure to check the box “API access to our chat service”. As you will see, this method also uses the OpenAI API; we only change the base_url.

import os
import openai
openai.__version__
'1.58.1'
def prompt_kisski(message: str, model="meta-llama-3.1-70b-instruct"):
    """A prompt helper function that sends a message to the KISSKI Chat AI API
    and returns only the text response.
    """
    # convert the message into the right format if necessary
    if isinstance(message, str):
        message = [{"role": "user", "content": message}]
    
    # set up the connection to the LLM; base_url and api_key must be
    # passed when the client is created
    client = openai.OpenAI(
        base_url="https://chat-ai.academiccloud.de/v1",
        api_key=os.environ.get('KISSKI_API_KEY')
    )
    
    response = client.chat.completions.create(
        model=model,
        messages=message
    )
    
    # extract answer
    return response.choices[0].message.content
prompt_kisski("Hi!")
'Hello. Is there something I can help you with, or would you like to explore a specific topic or ask a question?'
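Because the helper wraps a bare string into the chat message format, you can also pass a complete messages list yourself, e.g. to add a system message. A minimal sketch of that conversion step in isolation (the helper name `to_messages` is hypothetical, not part of the notebook):

```python
def to_messages(message):
    """Wrap a bare string in the chat-completions message format;
    pass message lists (e.g. including a system message) through unchanged."""
    if isinstance(message, str):
        return [{"role": "user", "content": message}]
    return message

print(to_messages("Hi!"))
# [{'role': 'user', 'content': 'Hi!'}]
```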

Exercise#

List the models available in the KISSKI endpoint and try them out by specifying them when calling prompt_kisski().

client = openai.OpenAI(
    base_url="https://chat-ai.academiccloud.de/v1",
    api_key=os.environ.get('KISSKI_API_KEY')
)

print("\n".join([model.id for model in client.models.list().data]))
meta-llama-3.1-8b-instruct
internvl2-8b
meta-llama-3.1-70b-instruct
llama-3.1-nemotron-70b-instruct
llama-3.1-sauerkrautlm-70b-instruct
mistral-large-instruct
qwen2.5-72b-instruct
codestral-22b
qwen2-vl-72b-instruct
teuken-7b-instruct-research
occiglot-7b-eu5-instruct
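One way to approach the exercise is to loop over a few of the model ids listed above and send each the same prompt. This sketch assumes a valid KISSKI_API_KEY and the prompt_kisski helper defined earlier; the request line is commented out so the cell also runs without network access:

```python
# model ids taken from the listing above
models_to_try = ["meta-llama-3.1-8b-instruct", "codestral-22b"]

for model_id in models_to_try:
    print(f"--- {model_id} ---")
    # print(prompt_kisski("Hi!", model=model_id))  # uncomment to send the request
```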