
Embeddings

Convert text into a high-dimensional vector for semantic search, similarity comparisons, and retrieval-augmented generation (RAG) pipelines.

Generate an embedding

const vector = await sdk.ai.embed('What is machine learning?')
// Returns: number[] (1536 dimensions for text-embedding-3-small)
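Embedding vectors are typically compared with cosine similarity: two texts with similar meaning produce vectors whose cosine similarity is close to 1. A minimal sketch in plain JavaScript, with no SDK dependency:

```javascript
// Cosine similarity between two embedding vectors.
// Returns a value in [-1, 1]; closer to 1 means more similar.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) throw new Error('dimension mismatch')
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

For example, `cosineSimilarity(await sdk.ai.embed('cat'), await sdk.ai.embed('kitten'))` should score higher than the same comparison against an unrelated sentence.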

Store embeddings in D1

const text = 'Cloudflare Workers run on V8 isolates at the edge.'
const vector = await sdk.ai.embed(text)
 
await sdk.db.query(
  'INSERT INTO knowledge_base (id, text, embedding) VALUES (?, ?, ?)',
  [crypto.randomUUID(), text, JSON.stringify(vector)]
)
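D1 has no native vector column type, which is why the example above serializes the embedding to a JSON string. When reading rows back, the `embedding` column must be parsed into a `number[]` before it can be scored. A small helper, assuming rows shaped like the insert above:

```javascript
// Parse the JSON-encoded embedding column back into a number[].
// Assumes `row` has the shape { id, text, embedding: string }.
function deserializeEmbedding(row) {
  return { ...row, embedding: JSON.parse(row.embedding) }
}
```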

Semantic search pipeline

// 1. Embed the user's query
const queryVector = await sdk.ai.embed(userQuery)
 
// 2. Find the most similar documents using vector search
const results = await sdk.search.query(queryVector, {
  table: 'knowledge_base',
  limit: 5,
})
 
// 3. Use results as context for a completion
const context = results.map(r => r.text).join('\n\n')
const answer = await sdk.ai.complete({
  prompt: `Based on the following context, answer this question: ${userQuery}\n\nContext:\n${context}`,
  model: 'gpt-4o-mini',
})
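If a vector-search endpoint is not available, step 2 can also be done in application code by scoring every stored row against the query vector. A hedged sketch (the `topK` helper is hypothetical and assumes each document's embedding is already parsed into a `number[]`); this is O(n) per query, so it only suits small tables:

```javascript
// Rank documents by cosine similarity to a query vector and keep the top k.
// Assumes each doc has an `embedding` array of the same dimension as the query.
function topK(queryVector, docs, k) {
  const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0)
  const norm = a => Math.sqrt(dot(a, a))
  const queryNorm = norm(queryVector)
  return docs
    .map(d => ({ ...d, score: dot(queryVector, d.embedding) / (queryNorm * norm(d.embedding)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
}
```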

Batch embeddings

const texts = ['text one', 'text two', 'text three']
const vectors = await Promise.all(texts.map(t => sdk.ai.embed(t)))
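`Promise.all` fires every request at once, which can trip provider rate limits on large lists. One common mitigation is to embed in fixed-size chunks and issue the batches sequentially; the chunk size below is an arbitrary assumption, not an SDK requirement:

```javascript
// Split an array into chunks of at most `size` items, so batches of
// embedding requests can be issued one chunk at a time.
function chunk(items, size) {
  const out = []
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size))
  return out
}

// Usage sketch (sdk.ai.embed as in the examples above):
// const vectors = []
// for (const batch of chunk(texts, 10)) {
//   vectors.push(...await Promise.all(batch.map(t => sdk.ai.embed(t))))
// }
```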

Model

The default embedding model is text-embedding-3-small (OpenAI). Configure your provider in Dashboard → AI → Configuration.