
AI

Aerostack AI gives you LLM completions, vector embeddings, and token streaming via a single SDK — without managing API keys or provider SDKs directly.

Quick start

import { sdk } from '@aerostack/sdk'
 
// Text completion
const result = await sdk.ai.complete({
  prompt: 'Summarize this article in 3 bullet points:\n\n' + articleText,
  model: 'gpt-4o-mini',
  maxTokens: 256,
})
console.log(result.text)
 
// Embeddings
const embedding = await sdk.ai.embed('What is the capital of France?')
// Returns: number[] (1536-dimensional vector)

Completions

const result = await sdk.ai.complete({
  prompt: 'Write a product description for: ' + productName,
  model: 'gpt-4o-mini',   // or 'gpt-4o', 'claude-3-haiku', etc.
  maxTokens: 512,
  temperature: 0.7,
})
 
// result.text — the generated text
// result.usage — { promptTokens, completionTokens, totalTokens }
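The `usage` object makes it easy to track spend per request. Below is a minimal sketch of a cost estimator; the `Pricing` shape and the per-1K-token rates are illustrative assumptions only (not part of the SDK), so check your provider's current pricing before relying on the numbers:

```typescript
// Token usage shape returned by sdk.ai.complete (see above).
interface Usage {
  promptTokens: number
  completionTokens: number
  totalTokens: number
}

// Illustrative prices only; look up your provider's real rates.
interface Pricing {
  inputPer1K: number   // USD per 1K prompt tokens
  outputPer1K: number  // USD per 1K completion tokens
}

// Estimate the cost of a single completion in USD.
function estimateCostUSD(usage: Usage, pricing: Pricing): number {
  return (
    (usage.promptTokens / 1000) * pricing.inputPer1K +
    (usage.completionTokens / 1000) * pricing.outputPer1K
  )
}
```

You might log this alongside each request, e.g. `console.log(estimateCostUSD(result.usage, pricing))`.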

Embeddings

Generate vector representations for semantic search and similarity:

// Single text
const vector = await sdk.ai.embed('user search query')
 
// Store in database alongside your content
await sdk.db.query(
  'INSERT INTO documents (id, text, embedding) VALUES (?, ?, ?)',
  [id, text, JSON.stringify(vector)]
)
 
// Find similar documents
const queryVector = await sdk.ai.embed(searchQuery)
const similar = await sdk.search.query(queryVector, { table: 'documents', limit: 10 })
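If you prefer to rank results in application code (for example, after fetching candidate rows yourself) rather than through `sdk.search.query`, cosine similarity over the stored vectors is the usual metric. A self-contained sketch follows; `cosineSimilarity` is a helper defined here, not an SDK function:

```typescript
// Cosine similarity between two equal-length embedding vectors.
// Returns a value in [-1, 1]; higher means more semantically similar.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('vector length mismatch')
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

For example, to sort fetched documents by relevance to a query vector: `docs.sort((x, y) => cosineSimilarity(queryVector, y.embedding) - cosineSimilarity(queryVector, x.embedding))`.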

Streaming (via WebSocket)

For real-time token delivery to clients, push tokens over your Realtime channel; see the AI Streaming guide for the full setup.

// Stream from server and push tokens via WebSocket
for await (const token of await sdk.ai.streamCompletion({ prompt })) {
  sdk.socket.emit('ai:token', { token }, sessionChannel)
}
sdk.socket.emit('ai:done', {}, sessionChannel)
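On the consuming side, the same token stream can be accumulated into a full response. The helper below is a self-contained sketch (not an SDK API) that drains any `AsyncIterable<string>`, such as the stream `sdk.ai.streamCompletion` yields:

```typescript
// Collect streamed tokens into the complete response text.
async function collectTokens(stream: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const token of stream) {
    text += token
  }
  return text
}
```

For example: `const fullText = await collectTokens(await sdk.ai.streamCompletion({ prompt }))`.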

Model support

Configure your AI provider and model in Dashboard → AI → Configuration. Supported providers include OpenAI, Anthropic, and Cloudflare AI Workers.

AI requests are proxied through Aerostack’s edge — your provider API key is never exposed to the client.

Next steps