INFERENCE DEFEND
This article is meant for Inference Defend users.
CalypsoAI offers an OpenAI-compatible API endpoint, allowing you to secure your prompts with minimal changes to your existing code. By redirecting your OpenAI SDK client to the CalypsoAI URL and using a CalypsoAI token as the API key, all requests are automatically scanned and protected.
This example demonstrates how to use the official OpenAI Python library to send a chat completion request through CalypsoAI.
To send a chat completion request:
WARNING
Replace CONNECTION-NAME in the URL with the connection name assigned to the model in CalypsoAI.
from openai import OpenAI

# Point the client at the CalypsoAI OpenAI-compatible endpoint and
# authenticate with a CalypsoAI token instead of an OpenAI API key.
CALYPSOAI_URL = "https://www.calypsoai.app/openai/CONNECTION-NAME"
CALYPSOAI_TOKEN = "ADD-YOUR-TOKEN-HERE"

client = OpenAI(base_url=CALYPSOAI_URL, api_key=CALYPSOAI_TOKEN)

# Send a standard chat completion request; CalypsoAI scans it before
# forwarding it to the underlying model.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "developer", "content": "What is your name?"}],
)

print(response.model_dump_json(indent=2))

Run the script.
Analyze the response. The format matches the standard OpenAI API response, including the generated message and usage statistics.
{
"id": "chatcmpl-COmwwonHeh1JUC08FUJ5BaW4mMOAk",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "I don't have a personal name, but you can call me Assistant. How can I help you today?",
"refusal": null,
"role": "assistant",
"annotations": [],
"audio": null,
"function_call": null,
"tool_calls": null
}
}
],
"created": 1760024070,
"model": "gpt-4o-mini-2024-07-18",
"object": "chat.completion",
"service_tier": "default",
"system_fingerprint": "fp_51db84afab",
"usage": {
"completion_tokens": 21,
"prompt_tokens": 11,
"total_tokens": 32,
"completion_tokens_details": {
"accepted_prediction_tokens": 0,
"audio_tokens": 0,
"reasoning_tokens": 0,
"rejected_prediction_tokens": 0
},
"prompt_tokens_details": {
"audio_tokens": 0,
"cached_tokens": 0
}
}
}
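Because the response is the standard OpenAI chat completion object, you can read individual fields directly instead of dumping the full JSON. The following minimal sketch continues from the script above and assumes the same response variable; it pulls out the generated message and the token usage fields shown in the example output.

# Continuing from the script above; `response` is the ChatCompletion
# object returned by client.chat.completions.create().
answer = response.choices[0].message.content  # the generated message text
usage = response.usage                        # token usage statistics

print(f"Model reply: {answer}")
print(
    f"Prompt tokens: {usage.prompt_tokens}, "
    f"completion tokens: {usage.completion_tokens}, "
    f"total: {usage.total_tokens}"
)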