OpenAI API#
The OpenAI Application Programming Interface (API) has become a de-facto standard for communicating with LLMs programmatically. The Python interface is open source.
As the field moves fast and APIs sometimes break, consider printing out the version of the library you used.
import openai
openai.__version__
'1.102.0'
To access an LLM, you first create a client object.
client = openai.OpenAI()
client
<openai.OpenAI at 0x19431b333d0>
The API expects messages in a certain format:
my_messages = []
my_messages.append({
    "role": "user",
    "content": "What's the capital of France?"
})
my_messages
[{'role': 'user', 'content': "What's the capital of France?"}]
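Longer conversations use the same format: besides "user", the API also accepts "system" and "assistant" roles, and earlier turns are simply appended to the list. A minimal sketch of a multi-turn history:

```python
# A multi-turn conversation: a system prompt steers the model's behavior,
# and earlier assistant replies are included so the model has context.
conversation = [
    {"role": "system",    "content": "You answer concisely."},
    {"role": "user",      "content": "What's the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    {"role": "user",      "content": "And of Germany?"},
]

# Each entry is a plain dict with "role" and "content" keys.
print([m["role"] for m in conversation])
```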
You can send a request to the server using the chat.completions
API. If you’re planning to use ChatGPT, an overview of available OpenAI models and their prices can be found on OpenAI’s pricing page.
response = client.chat.completions.create(
    model="gpt-5",  # or another chat model, e.g. "gpt-4o-mini"
    messages=my_messages
)
response
ChatCompletion(id='chatcmpl-CCRczXAO5C4YgK7gf6Rbi11J9sUQA', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Paris.', refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))], created=1757082173, model='gpt-5-2025-08-07', object='chat.completion', service_tier='default', system_fingerprint=None, usage=CompletionUsage(completion_tokens=11, prompt_tokens=12, total_tokens=23, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))
The answer comes back in a format similar to the request. It actually contains a list of answers (choices):
response.choices
[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Paris.', refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))]
You can access the text answer like this:
response.choices[0].message.content
'Paris.'
Helper functions#
When using the API, it is highly recommended to write helper functions such as this one:
def prompt_chatgpt(message: str, model="gpt-3.5-turbo"):
    """A prompt helper function that sends a message to OpenAI
    and returns only the text response.
    """
    # convert the message to the right format if necessary
    if isinstance(message, str):
        message = [{"role": "user", "content": message}]

    # set up a connection to the LLM
    client = openai.OpenAI()

    # submit the prompt
    response = client.chat.completions.create(
        model=model,
        messages=message
    )

    # extract the answer
    return response.choices[0].message.content
This makes our life easier because we can now query the LLM in a single line:
prompt_chatgpt("How many o are in Woolloomoloo?")
"There are five 'o's in Woolloomoloo."
You can list all models available to your account like this:
client = openai.OpenAI()
print("\n".join(sorted([model.id for model in client.models.list().data])))
babbage-002
chatgpt-4o-latest
codex-mini-latest
computer-use-preview
computer-use-preview-2025-03-11
dall-e-2
dall-e-3
davinci-002
ft:gpt-3.5-turbo-0125:leipzig-university::9VNFya3H:ckpt-step-77
ft:gpt-3.5-turbo-0125:leipzig-university::9VNFz3h3
ft:gpt-3.5-turbo-0125:leipzig-university::9VNFzOv6:ckpt-step-88
ft:gpt-3.5-turbo-0125:leipzig-university::9WlQiTl6:ckpt-step-77
ft:gpt-3.5-turbo-0125:leipzig-university::9WlQjtjc
ft:gpt-3.5-turbo-0125:leipzig-university::9WlQjz24:ckpt-step-88
ft:gpt-3.5-turbo-0125:leipzig-university::9Wmaf1H6:ckpt-step-77
ft:gpt-3.5-turbo-0125:leipzig-university::9Wmag9S6:ckpt-step-88
ft:gpt-3.5-turbo-0125:leipzig-university::9WmagblR
ft:gpt-3.5-turbo-0125:leipzig-university::9X70LxDH:ckpt-step-77
ft:gpt-3.5-turbo-0125:leipzig-university::9X70M36A
ft:gpt-3.5-turbo-0125:leipzig-university::9X70MopH:ckpt-step-88
ft:gpt-3.5-turbo-0125:leipzig-university::9X7CCAiE:ckpt-step-77
ft:gpt-3.5-turbo-0125:leipzig-university::9X7CCBSR:ckpt-step-88
ft:gpt-3.5-turbo-0125:leipzig-university::9X7CCzv4
ft:gpt-3.5-turbo-0125:leipzig-university::9X7PEBEk:ckpt-step-84
ft:gpt-3.5-turbo-0125:leipzig-university::9X7PEoLV:ckpt-step-72
ft:gpt-3.5-turbo-0125:leipzig-university::9X7PFVgP
ft:gpt-4o-2024-08-06:leipzig-university::9ydjMDl7:ckpt-step-45
ft:gpt-4o-2024-08-06:leipzig-university::9ydjNWWH
ft:gpt-4o-2024-08-06:leipzig-university::9ydjNinx:ckpt-step-90
gpt-3.5-turbo
gpt-3.5-turbo-0125
gpt-3.5-turbo-1106
gpt-3.5-turbo-16k
gpt-3.5-turbo-instruct
gpt-3.5-turbo-instruct-0914
gpt-4
gpt-4-0125-preview
gpt-4-0613
gpt-4-1106-preview
gpt-4-turbo
gpt-4-turbo-2024-04-09
gpt-4-turbo-preview
gpt-4.1
gpt-4.1-2025-04-14
gpt-4.1-mini
gpt-4.1-mini-2025-04-14
gpt-4.1-nano
gpt-4.1-nano-2025-04-14
gpt-4o
gpt-4o-2024-05-13
gpt-4o-2024-08-06
gpt-4o-2024-11-20
gpt-4o-audio-preview
gpt-4o-audio-preview-2024-10-01
gpt-4o-audio-preview-2024-12-17
gpt-4o-audio-preview-2025-06-03
gpt-4o-mini
gpt-4o-mini-2024-07-18
gpt-4o-mini-audio-preview
gpt-4o-mini-audio-preview-2024-12-17
gpt-4o-mini-realtime-preview
gpt-4o-mini-realtime-preview-2024-12-17
gpt-4o-mini-search-preview
gpt-4o-mini-search-preview-2025-03-11
gpt-4o-mini-transcribe
gpt-4o-mini-tts
gpt-4o-realtime-preview
gpt-4o-realtime-preview-2024-10-01
gpt-4o-realtime-preview-2024-12-17
gpt-4o-realtime-preview-2025-06-03
gpt-4o-search-preview
gpt-4o-search-preview-2025-03-11
gpt-4o-transcribe
gpt-5
gpt-5-2025-08-07
gpt-5-chat-latest
gpt-5-mini
gpt-5-mini-2025-08-07
gpt-5-nano
gpt-5-nano-2025-08-07
gpt-audio
gpt-audio-2025-08-28
gpt-image-1
gpt-realtime
gpt-realtime-2025-08-28
o1
o1-2024-12-17
o1-mini
o1-mini-2024-09-12
o1-pro
o1-pro-2025-03-19
o3
o3-2025-04-16
o3-deep-research
o3-deep-research-2025-06-26
o3-mini
o3-mini-2025-01-31
o3-pro
o3-pro-2025-06-10
o4-mini
o4-mini-2025-04-16
o4-mini-deep-research
o4-mini-deep-research-2025-06-26
omni-moderation-2024-09-26
omni-moderation-latest
text-embedding-3-large
text-embedding-3-small
text-embedding-ada-002
tts-1
tts-1-1106
tts-1-hd
tts-1-hd-1106
whisper-1
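The list mixes chat, embedding, audio, and image models. Since model ids are plain strings, you can filter them client-side; the sketch below uses a few hard-coded example ids so it runs without an API key (in practice you would use `[m.id for m in client.models.list().data]`):

```python
# A few example ids standing in for the server-side list.
model_ids = [
    "gpt-5", "gpt-4o-mini", "text-embedding-3-small",
    "dall-e-3", "whisper-1", "gpt-5-nano",
]

# Keep only the GPT-5 family by simple prefix matching.
gpt5_models = [m for m in model_ids if m.startswith("gpt-5")]
print(sorted(gpt5_models))  # → ['gpt-5', 'gpt-5-nano']
```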