INFERENCE DEFEND
This article is meant for Inference Defend users.
ROLES AND PERMISSIONS
To complete the tasks described in this section, make sure you have the required permissions.
Each time you send a prompt to an LLM, the prompt is saved and stored as a log you can view.
To get the prompt logs, call the cai.client.prompts.get() method. It accepts multiple optional parameters you can include to refine your results.
In this scenario, we are going to get the prompt logs without including any optional parameters.
To get the prompt logs:
Add your token value to the following sample:
from calypsoai import CalypsoAI
# Define the URL and token for CalypsoAI
CALYPSOAI_URL="https://www.us1.calypsoai.app"
CALYPSOAI_TOKEN="ADD-YOUR-TOKEN-HERE"
# Initialize the CalypsoAI client
cai = CalypsoAI(url=CALYPSOAI_URL, token=CALYPSOAI_TOKEN)
# Get the prompt logs
prompts = [prompt for prompt in cai.prompts.iterate()]
# Print the response
print(prompts)
Run the script.
Analyze the response.
The following sample is a simplified version of a successful response, focusing only on the details relevant to this specific request.
{
"prompts": [
{
"id": "01975a7e-f050-706e-afa8-1e22f1064f21",
"input": "Hello world",
"projectId": "01975a7d-ba51-70a9-97c1-8158db2a8957",
"provider": "01975a76-69c9-700f-8871-b689fb827e7f",
"result": {
"outcome": "blocked",
"scannerResults": [
{
"outcome": "failed",
"scanDirection": "request",
"scannerId": "019745f2-abad-700e-a805-93993f59e036",
"scannerVersionMeta": null,
"startedDate": "2025-06-10T15:39:17.978433Z"
}
]
}
},
(...)
]
}
The response includes the following key parameters:
prompts > input: The contents of the prompt sent to the LLM.
prompts > projectId: The ID of the project to which the prompt is sent.
prompts > provider: The ID of the provider to which the prompt is sent.
prompts > result > outcome: The global outcome of the prompt scan request.
prompts > result > scannerResults: A list of the scan result information for all scanners used during the scan.
prompts > result > scannerResults > outcome: The outcome of each individual scanner used during the scan. This may differ from prompts > result > outcome; for example, a scanner may be cleared while the global outcome takes the outcomes of all scanners used during the scan into account.
prompts > result > scannerResults > scanDirection: The scanning direction of each individual scanner used during the scan.
prompts > result > scannerResults > scannerId: The ID of each individual scanner used during the scan.
The full response sample is shown below:
{
"next": "eyJyb3ciOjEwLCJsaW1pdCI6MTAsInR5cGUiOiJyb3cifQ==",
"prev": null,
"prompts": [
{
"externalMetadata": null,
"fromTemplate": false,
"id": "01975a7e-f050-706e-afa8-1e22f1064f21",
"input": "Hello world",
"memory": null,
"orgId": null,
"parentId": null,
"preserve": false,
"projectId": "01975a7d-ba51-70a9-97c1-8158db2a8957",
"provider": "01975a76-69c9-700f-8871-b689fb827e7f",
"receivedAt": "2025-06-10T15:39:17.968442Z",
"result": {
"files": null,
"outcome": "blocked",
"providerResult": null,
"response": null,
"scannerResults": [
{
"completedDate": "2025-06-10T15:39:18.969754Z",
"customConfig": false,
"data": {
"type": "custom"
},
"outcome": "failed",
"scanDirection": "request",
"scannerId": "019745f2-abad-700e-a805-93993f59e036",
"scannerVersionMeta": null,
"startedDate": "2025-06-10T15:39:17.978433Z"
}
]
},
"type": "prompt",
"userId": "919ff136-9cfa-4f8a-b347-c0cde08aca7c"
},
(...)
]
}
DEFAULT RESPONSE
Our sample Python request uses the cursor attribute to return a paginated list of prompts, starting from the most recent prompt. By default, there is a limit of 10 results per page, and only the prompts sent by the owner of the token are shown.
You can use the next and prev cursor properties to view prompts on specific pages.
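Cursors are opaque tokens you pass back to the API as-is. As an aside, the next cursor in the sample response above happens to be base64-encoded JSON describing where the next page starts; the sketch below decodes it purely for illustration (the internal cursor format is inferred from this one sample and may change, so never construct cursors yourself):

```python
import base64
import json

# The "next" cursor from the sample response above.
cursor = "eyJyb3ciOjEwLCJsaW1pdCI6MTAsInR5cGUiOiJyb3cifQ=="

# This cursor is base64-encoded JSON describing the position of the
# next page: row offset 10 with the default page limit of 10.
decoded = json.loads(base64.b64decode(cursor))
print(decoded)  # {'row': 10, 'limit': 10, 'type': 'row'}
```

This is consistent with the default behavior described above: with 10 results per page, the next page begins at row 10.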
You may want to refine your search to get the logs for prompts that match your specific search requirements.
In this scenario, we are going to get the prompt logs for all blocked prompts, from all users, with a limit of 1000 results shown per page.
To get the prompt logs:
Add your token value to the following sample:
from calypsoai import CalypsoAI
from calypsoai.datatypes import PromptOutcome, PromptType
# Define the URL and token for CalypsoAI
CALYPSOAI_URL="https://www.us1.calypsoai.app"
CALYPSOAI_TOKEN="ADD-YOUR-TOKEN-HERE"
# Initialize the CalypsoAI client
cai = CalypsoAI(url=CALYPSOAI_URL, token=CALYPSOAI_TOKEN)
# Get the prompt logs
prompts = cai.client.prompts.get(
    limit=1000,
    type_=[PromptType.PROMPT],
    outcomes=[PromptOutcome.BLOCKED],
    onlyUser=False,
)
# Print the response
print(prompts.model_dump_json(indent=2))
Run the script.
Analyze the response.
Our sample Python request produces a paginated list very similar to the sample JSON response in Get the prompt logs, showing the prompts that match your search criteria.
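Once you have the response as plain dictionaries (for example, via json.loads on the printed JSON), you can post-process the logs. A minimal sketch, assuming only the response shape shown above, that maps each blocked prompt to the scanners that failed on it:

```python
import json

# A response shaped like the sample above, trimmed to one prompt.
response = json.loads("""
{
  "prompts": [
    {
      "id": "01975a7e-f050-706e-afa8-1e22f1064f21",
      "input": "Hello world",
      "result": {
        "outcome": "blocked",
        "scannerResults": [
          {
            "outcome": "failed",
            "scanDirection": "request",
            "scannerId": "019745f2-abad-700e-a805-93993f59e036"
          }
        ]
      }
    }
  ]
}
""")

# Map each blocked prompt ID to the IDs of the scanners that failed.
blocked = {}
for prompt in response["prompts"]:
    if prompt["result"]["outcome"] == "blocked":
        failed = [s["scannerId"]
                  for s in prompt["result"]["scannerResults"]
                  if s["outcome"] == "failed"]
        blocked[prompt["id"]] = failed

print(blocked)
```

The same loop works on the full paginated response, since the prompts list and result structure are identical.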