Quick start
- Go to Settings > Advanced and create an access token
- Copy it immediately — you won’t see it again
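A minimal sketch of an authenticated request with the token, assuming the API accepts a standard Bearer token in the Authorization header (verify the exact scheme against your account settings):

```python
import urllib.request

BASE_URL = "https://api.zo.computer"
TOKEN = "YOUR_ACCESS_TOKEN"  # the token copied from Settings > Advanced

# Assumption: the API expects "Authorization: Bearer <token>".
req = urllib.request.Request(
    f"{BASE_URL}/models/available",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
# response = urllib.request.urlopen(req)  # performs the actual network call
```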
API reference
Base URL: https://api.zo.computer
POST /zo/ask
Send a message to your Zo and get a response.

Request body:
- message: Your message to Zo
- conversation_id: Continue an existing conversation
- model_name: Override the default model (use /models/available to list options)
- persona_id: Override the active persona (use /personas/available to list options)
- output_format: JSON Schema for structured output
- stream: Enable streaming mode. Returns Server-Sent Events (SSE) instead of JSON.

Response:
- Zo's response. Returns an object if output_format was specified.
- conversation_id: ID to continue this conversation in subsequent requests.

Streaming response (when stream: true):
Returns a Server-Sent Events stream with Content-Type: text/event-stream. Event types:
- FrontendModelResponse: Text chunk from the model (data.content)
- End: Stream completed (includes data.output if output_format was specified)
- Error: Error occurred (data.message)

The x-conversation-id response header contains the conversation ID for follow-up requests.
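The request body above can be assembled as follows. This is a sketch assuming a JSON body, Bearer auth, and the field names from the parameter list (the name "message" is an assumption; the optional fields are shown commented out):

```python
import json
import urllib.request

BASE_URL = "https://api.zo.computer"
TOKEN = "YOUR_ACCESS_TOKEN"

# Build a /zo/ask request body; optional fields are commented out.
payload = {
    "message": "Summarize my unread email",  # assumed field name
    # "conversation_id": "...",   # continue an existing conversation
    # "model_name": "...",        # see GET /models/available
    # "persona_id": "...",        # see GET /personas/available
}
req = urllib.request.Request(
    f"{BASE_URL}/zo/ask",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# body = json.load(urllib.request.urlopen(req))  # performs the actual call
```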
GET /models/available
List all available models you can use with the /zo/ask endpoint. When authenticated with an API key, this includes your BYOK (Bring Your Own Key) configurations.

Response: a list of available models. Each entry includes:
- The value to pass to model_name in /zo/ask
- Human-readable model name
- Model provider (e.g., "Anthropic", "OpenAI", "Custom")
- Short description of the model's capabilities
- Either "fast" or "capable", indicating the model's speed/capability tradeoff
- Maximum context window size in tokens
- Whether this is a BYOK (Bring Your Own Key) model
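To select a model programmatically, filter the returned list by the speed/capability field. The response shape below is illustrative only; the field names (model_name, provider, speed) are assumptions based on the field descriptions above:

```python
# Illustrative GET /models/available response; field names are assumed.
models = [
    {"model_name": "claude-sonnet", "provider": "Anthropic", "speed": "capable"},
    {"model_name": "gpt-4o-mini", "provider": "OpenAI", "speed": "fast"},
]

# Pick "fast" models for low-latency requests.
fast = [m["model_name"] for m in models if m["speed"] == "fast"]
print(fast[0])  # gpt-4o-mini
```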
GET /personas/available
List all configured personas. Use the returned id values with the persona_id parameter in /zo/ask to override the active persona.

Response: a list of configured personas. Each entry includes:
- id: the value to pass to persona_id in /zo/ask
- Display name for the persona
- System prompt defining the persona's behavior
- AI model ID, or null for system default
- Avatar image URL
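A typical lookup is finding the id for a persona by its display name. The sample data is illustrative; id is documented above, while "name" is an assumed field name for the display name:

```python
# Illustrative GET /personas/available response; "name" is assumed.
personas = [
    {"id": "p_123", "name": "Researcher"},
    {"id": "p_456", "name": "Editor"},
]

# Find the value to pass as persona_id in /zo/ask.
persona_id = next(p["id"] for p in personas if p["name"] == "Editor")
print(persona_id)  # p_456
```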
Examples
Continuing a conversation
Use the returned conversation_id to continue the conversation:
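A sketch of a follow-up request, assuming a JSON body, Bearer auth, and the field names from the reference above (the "message" name is an assumption):

```python
import json
import urllib.request

BASE_URL = "https://api.zo.computer"
TOKEN = "YOUR_ACCESS_TOKEN"

# Pass the conversation_id returned by the first /zo/ask call
# (also available in the x-conversation-id response header).
payload = {
    "message": "And what about tomorrow?",  # assumed field name
    "conversation_id": "CONVERSATION_ID_FROM_FIRST_REPLY",
}
req = urllib.request.Request(
    f"{BASE_URL}/zo/ask",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# reply = json.load(urllib.request.urlopen(req))  # performs the actual call
```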
Structured output
Use output_format to get responses as structured JSON. This is based on OpenAI's Structured Outputs.
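A sketch of a request body with output_format set to a JSON Schema. Whether the API expects the bare schema (as here) or an OpenAI-style wrapper around it is an assumption to verify against the live API, as is the "message" field name:

```python
import json

# output_format carries a JSON Schema; the response is then an object
# matching this schema instead of plain text.
payload = {
    "message": "Extract the city and temperature from: 'It is 18C in Oslo.'",
    "output_format": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "temperature_c": {"type": "number"},
        },
        "required": ["city", "temperature_c"],
    },
}
body = json.dumps(payload)
```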
Streaming
Use stream: true to receive responses as Server-Sent Events:
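A minimal parser for the event stream, run here over a canned stream. The event names (FrontendModelResponse, End) and data fields come from the reference above; the standard SSE wire format (event:/data: lines delimited by blank lines) is the only added assumption:

```python
import json

def parse_sse(lines):
    """Yield (event, data) pairs from SSE-formatted lines."""
    event, data = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event:  # blank line ends one event
            yield event, json.loads("\n".join(data))
            event, data = None, []

# Canned stream matching the documented event types.
stream = [
    "event: FrontendModelResponse",
    'data: {"content": "Hel"}',
    "",
    "event: FrontendModelResponse",
    'data: {"content": "lo"}',
    "",
    "event: End",
    "data: {}",
    "",
]

# Concatenate the text chunks from FrontendModelResponse events.
text = "".join(
    d["content"] for e, d in parse_sse(stream) if e == "FrontendModelResponse"
)
print(text)  # Hello
```

Against the live endpoint, the same parser would consume the text/event-stream response line by line, stopping at the End event (or surfacing data.message on Error).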