Output Formats
Agent Endpoints support three output formats that control how the LLM structures its response. Set the output_format field when creating or updating an endpoint.
Text (Default)
The LLM responds naturally in plain text. No additional formatting instructions are added to the system prompt.
Create the endpoint:
curl -X POST https://api.aerostack.dev/api/agent-endpoints \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "summarizer",
    "workspace_id": "ws_abc123",
    "system_prompt": "You are a concise summarizer. Given any text, produce a clear summary.",
    "output_format": "text"
  }'

Call it:
curl -X POST https://api.aerostack.dev/api/run/summarizer \
  -H "Authorization: Bearer aek_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"input": "Long article text here..."}'

Response:
{
  "output": "The article discusses three main points about edge computing: reduced latency, improved reliability, and cost savings compared to centralized cloud architectures.",
  "usage": {
    "tokens_input": 892,
    "tokens_output": 34,
    "cost_cents": 0.05,
    "latency_ms": 1200,
    "iterations": 1
  }
}

JSON
The LLM is instructed to respond with valid JSON only. You can optionally provide an output_schema to tell the LLM exactly what shape the JSON should be.
Without a Schema
When output_format is json but no output_schema is set, the system prompt is augmented with:
Respond with valid JSON only. No other text.
The LLM will produce JSON, but the structure is up to it.
curl -X POST https://api.aerostack.dev/api/agent-endpoints \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "sentiment-analyzer",
    "workspace_id": "ws_abc123",
    "system_prompt": "Analyze the sentiment of the given text. Return sentiment, confidence, and key phrases.",
    "output_format": "json"
  }'

With a Schema
When output_schema is provided alongside output_format: "json", the schema is injected into the system prompt as a formatting instruction:
curl -X POST https://api.aerostack.dev/api/agent-endpoints \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "lead-classifier",
    "workspace_id": "ws_abc123",
    "system_prompt": "You are a lead classification agent. Given a company description, classify the lead.",
    "output_format": "json",
    "output_schema": {
      "type": "object",
      "properties": {
        "company_name": { "type": "string" },
        "industry": { "type": "string" },
        "size": { "type": "string", "enum": ["startup", "smb", "enterprise"] },
        "score": { "type": "number", "minimum": 0, "maximum": 100 },
        "recommended_action": { "type": "string" }
      },
      "required": ["company_name", "industry", "size", "score", "recommended_action"]
    }
  }'

Call it:
curl -X POST https://api.aerostack.dev/api/run/lead-classifier \
  -H "Authorization: Bearer aek_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"input": "TechFlow is a 50-person SaaS company building developer tools for CI/CD pipelines. They recently raised a Series A and are expanding their engineering team."}'

Response:
{
  "output": {
    "company_name": "TechFlow",
    "industry": "Developer Tools / SaaS",
    "size": "smb",
    "score": 78,
    "recommended_action": "Schedule demo call — strong fit for developer-focused platform"
  },
  "raw_output": "{\n \"company_name\": \"TechFlow\",\n \"industry\": \"Developer Tools / SaaS\",\n \"size\": \"smb\",\n \"score\": 78,\n \"recommended_action\": \"Schedule demo call — strong fit for developer-focused platform\"\n}",
  "usage": {
    "tokens_input": 456,
    "tokens_output": 92,
    "cost_cents": 0.11,
    "latency_ms": 1678,
    "iterations": 1
  }
}

When the LLM output is valid JSON, the output field contains the parsed object and raw_output contains the original text. If the JSON is invalid, output contains the raw text and raw_output is omitted.
JSON Parsing Behavior
The execution engine attempts to parse the LLM output as JSON. It handles common LLM quirks:
- Markdown code blocks: if the LLM wraps its JSON in a fenced ```json code block, the engine extracts the JSON from inside the block.
- Invalid JSON: if parsing fails, the raw text is returned as output without an error, and the raw_output field is omitted.
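This fallback behavior can be sketched client-side. The following is a minimal approximation of the documented rules, not the engine's actual implementation; the function name and exact regex are illustrative:

```python
import json
import re

def parse_llm_output(raw: str) -> dict:
    """Approximate the documented JSON parsing behavior (client-side sketch)."""
    # Strip a fenced ```json ... ``` wrapper if the model added one.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    candidate = match.group(1) if match else raw
    try:
        # Valid JSON: parsed object in "output", original text in "raw_output".
        return {"output": json.loads(candidate), "raw_output": raw}
    except json.JSONDecodeError:
        # Invalid JSON: raw text returned as "output", "raw_output" omitted.
        return {"output": raw}
```

A wrapper like this is also useful when you proxy the run endpoint through your own backend and want the same output shape for every format.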
The output_schema is passed to the LLM as instructions, not enforced by validation. The LLM will try to match the schema, but there is no guarantee. Always validate the response in your application code for critical use cases.
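Since the schema is advisory, validating in application code might look like the sketch below for the lead-classifier schema above. It uses only the standard library and hand-rolled checks; a real JSON Schema validator (for example the jsonschema package) would be more thorough:

```python
def validate_lead(obj: dict) -> list[str]:
    """Minimal check of the lead-classifier output_schema (sketch only)."""
    errors = []
    required = ["company_name", "industry", "size", "score", "recommended_action"]
    for key in required:
        if key not in obj:
            errors.append(f"missing required field: {key}")
    # Enum constraint on "size".
    if obj.get("size") not in ("startup", "smb", "enterprise"):
        errors.append("size must be one of: startup, smb, enterprise")
    # Numeric range constraint on "score".
    score = obj.get("score")
    if not isinstance(score, (int, float)) or not 0 <= score <= 100:
        errors.append("score must be a number between 0 and 100")
    return errors
```

An empty list means the response matched the schema; otherwise you can retry the call or fall back to a default value.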
Markdown
The LLM responds with formatted markdown text. No special system prompt modification is applied — the model naturally produces markdown when the format is set. This is useful for generating documentation, reports, or formatted content.
curl -X POST https://api.aerostack.dev/api/agent-endpoints \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "report-generator",
    "workspace_id": "ws_abc123",
    "system_prompt": "You are a report generator. Given data or a topic, produce a well-formatted markdown report with headings, bullet points, and tables where appropriate.",
    "output_format": "markdown"
  }'

Response:
{
  "output": "# Q1 Performance Report\n\n## Key Metrics\n\n| Metric | Value | Change |\n|--------|-------|--------|\n| Revenue | $1.2M | +15% |\n| Users | 8,400 | +22% |\n\n## Highlights\n\n- Revenue grew 15% quarter-over-quarter\n- User acquisition exceeded targets by 12%\n\n## Recommendations\n\n1. Increase investment in the developer tools vertical\n2. Expand the sales team to capitalize on momentum",
  "usage": {
    "tokens_input": 234,
    "tokens_output": 156,
    "cost_cents": 0.09,
    "latency_ms": 2100,
    "iterations": 1
  }
}

Changing Output Format
You can change the output format on an existing endpoint without creating a new one:
curl -X PATCH https://api.aerostack.dev/api/agent-endpoints/aep_your_endpoint_id \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "output_format": "json",
    "output_schema": {
      "type": "object",
      "properties": {
        "summary": { "type": "string" },
        "tags": { "type": "array", "items": { "type": "string" } }
      }
    }
  }'

Comparison
| Feature | text | json | markdown |
|---|---|---|---|
| Default | Yes | No | No |
| System prompt modified | No | Yes (adds JSON instruction) | No |
| Schema support | No | Yes (output_schema) | No |
| Parsed output | No | Yes (output as parsed object, original text in raw_output) | No |
| SSE token streaming | Yes | No (emits only done event) | Yes |
| Best for | Chat, summaries, free-form | APIs, data extraction, classification | Reports, docs, formatted content |