# Completions

Generate text with any LLM available in your project’s AI configuration.
## Basic completion

```ts
const result = await sdk.ai.complete({
  prompt: 'Write a haiku about Cloudflare Workers.',
  model: 'gpt-4o-mini',
})

console.log(result.text)
// → "Code runs at the edge / Milliseconds tick away / Cache holds the light still"
```

## Options
| Option | Type | Default | Description |
|---|---|---|---|
| `prompt` | `string` | — | The input prompt |
| `model` | `string` | project default | Model identifier |
| `maxTokens` | `number` | `512` | Maximum tokens to generate |
| `temperature` | `number` | `0.7` | Sampling temperature (0–2) |
| `system` | `string` | — | System message (for chat models) |
| `messages` | `Message[]` | — | Full conversation history |
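A quick sketch of how these options combine in one call; the stub `sdk` object below is a hypothetical stand-in for the real client, included only so the snippet is self-contained and runnable:

```ts
// Hypothetical stub standing in for the real client; the call shape matches the table above.
const sdk = {
  ai: {
    async complete(opts: { prompt: string; model?: string; maxTokens?: number; temperature?: number }) {
      // Per the options table, `model` falls back to the project default when omitted.
      return { text: `stub completion for: ${opts.prompt}`, model: opts.model ?? 'project-default' }
    },
  },
}

const result = await sdk.ai.complete({
  prompt: 'Summarize this support ticket in one sentence.',
  maxTokens: 64,     // cap the summary length
  temperature: 0.2,  // low temperature for more deterministic output
})
console.log(result.model) // → "project-default"
```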
## Chat-style conversation

```ts
const result = await sdk.ai.complete({
  model: 'gpt-4o',
  system: 'You are a helpful customer support agent for Acme Corp.',
  messages: [
    { role: 'user', content: "My order hasn't arrived." },
    { role: 'assistant', content: "I'm sorry to hear that. Can you share your order ID?" },
    { role: 'user', content: "It's ORDER-12345." },
  ],
})
```

## Response shape
```ts
interface CompletionResult {
  text: string
  model: string
  usage: {
    promptTokens: number
    completionTokens: number
    totalTokens: number
  }
  finishReason: 'stop' | 'length' | 'content_filter'
}
```

## Error handling
```ts
try {
  const result = await sdk.ai.complete({ prompt, model })
  console.log(result.text)
} catch (err) {
  if (err.code === 'RATE_LIMIT') {
    // Back off and retry
  } else if (err.code === 'CONTEXT_TOO_LONG') {
    // Truncate the prompt and retry
  }
}
```
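Rate-limit errors are usually transient, so a common pattern is to wrap the call in exponential backoff. A minimal sketch, assuming the `RATE_LIMIT` error code shown above; `withBackoff`, its parameters, and the fake client are illustrative, not part of the SDK:

```ts
// Retry an async function with exponential backoff when it throws a RATE_LIMIT error.
async function withBackoff<T>(fn: () => Promise<T>, maxAttempts = 4, baseMs = 250): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err: any) {
      if (err?.code !== 'RATE_LIMIT' || attempt + 1 >= maxAttempts) throw err
      // Wait 250ms, 500ms, 1000ms, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt))
    }
  }
}

// Fake client that fails twice with RATE_LIMIT, then succeeds; stands in for sdk.ai.complete.
let calls = 0
async function fakeComplete() {
  calls++
  if (calls < 3) throw Object.assign(new Error('rate limited'), { code: 'RATE_LIMIT' })
  return { text: 'ok', model: 'stub' }
}

const result = await withBackoff(fakeComplete)
console.log(result.text, 'after', calls, 'attempts') // → "ok after 3 attempts"
```

In production you would pass `() => sdk.ai.complete({ prompt, model })` as `fn` and tune `maxAttempts` and `baseMs` to your rate limits.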