Blablador endpoint#

In this notebook we will use the Blablador infrastructure at Forschungszentrum Jülich. Before you can access it, you need to create an API key as explained on this page. You will see that this method also uses the OpenAI API; we only change the base_url.
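
The helper function defined below expects your API key in an environment variable named BLABLADOR_API_KEY. As a quick sanity check before running the rest of the notebook, you could verify that the variable is set; this is just a sketch, and the variable name matches what the code below reads:

import os

# check that the Blablador API key is available as an environment variable
if os.environ.get("BLABLADOR_API_KEY") is None:
    print("Please set the BLABLADOR_API_KEY environment variable first.")
else:
    print("BLABLADOR_API_KEY found.")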

import os
import openai
openai.__version__
'1.35.14'
def prompt_blablador(message:str, model="gpt-3.5-turbo"):
    """A prompt helper function that sends a message to Blablador (FZ Jülich)
    and returns only the text response.
    """
    # convert message to the right format if necessary
    if isinstance(message, str):
        message = [{"role": "user", "content": message}]
    
    # setup connection to the LLM; enter your API key here directly
    # or store it in the BLABLADOR_API_KEY environment variable
    client = openai.OpenAI(
        base_url="https://helmholtz-blablador.fz-juelich.de:8000/v1",
        api_key=os.environ.get('BLABLADOR_API_KEY')
    )
    
    # submit the prompt to the server
    response = client.chat.completions.create(
        model=model,
        messages=message
    )
    
    # extract answer
    return response.choices[0].message.content
prompt_blablador("Hi!")
'\nHello! How can I assist you today?'
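
Because the helper only wraps plain strings into the chat format, you can also pass a full message list yourself, for example to include a system prompt. This is a small sketch; the system prompt text is made up for illustration:

prompt_blablador([
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Hi!"}
])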

Exercise#

List the models available at the Blablador endpoint and try them out by specifying them when calling prompt_blablador().

client = openai.OpenAI(
    base_url="https://helmholtz-blablador.fz-juelich.de:8000/v1",
    api_key=os.environ.get('BLABLADOR_API_KEY')
)

print("\n".join([model.id for model in client.models.list().data]))
1 - Mistral-7B-Instruct-v0.2 - the best option in general - fast and good
2 - Mixtral-8x7B-Instruct-v0.1 Slower with higher quality
3 - starcoder2-15b - A model for programming
4 - Cosmosage V3 - Answers your Cosmology and Astronomy questions (new version June 2024)
5 - GritLM-7B - For Chat AND Text Embeddings
6 - Llama3-8B-Instruct - A good model from META
alias-code
alias-embeddings
alias-fast
alias-large
alias-llama3-small
gpt-3.5-turbo
text-davinci-003
text-embedding-ada-002
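
To complete the exercise, pass one of the listed model names or aliases to prompt_blablador(). Which models are served may change over time, so treat this as a sketch using one of the aliases printed above:

# try a specific model from the list above
prompt_blablador("Hi!", model="alias-fast")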