
Functions — Fullstack Edge Runtime

Aerostack Functions are fullstack edge functions running on Cloudflare Workers. Every function has native, in-process bindings to six platform primitives — Database, Cache, Queue, AI, Vector Search, and Storage. No HTTP calls. No SDK clients. Direct access at ~0ms latency.

Functions are the foundation layer of Aerostack. They power MCP servers, Skills, bot workflows, and (coming soon) AI Proxy and AI Endpoints. Everything you build on Aerostack runs on Functions under the hood.

Architecture

All six bindings are injected into your function’s env parameter. They are not HTTP calls — they are native Cloudflare bindings that execute within the same worker process. This means a database query or cache lookup adds microseconds, not milliseconds.
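The binding shapes below are inferred from the examples on this page; the real `Env` type is generated per project, so the names and signatures here are assumptions. As a minimal sketch of the read-through-cache pattern the `CACHE` binding enables, with `env.CACHE` mocked so the helper runs outside a Worker:

```typescript
// Binding shape inferred from this page's examples; the generated Env type
// in a real project may differ.
interface CacheBinding {
  get(key: string): Promise<string | null>
  put(key: string, value: string): Promise<void>
}

// Read-through cache: check CACHE first, fall back to a loader (e.g. a DB
// query), then populate the cache for the next call. With in-process
// bindings, both paths stay inside the worker.
async function cached(cache: CacheBinding, key: string, load: () => Promise<string>): Promise<string> {
  const hit = await cache.get(key)
  if (hit !== null) return hit
  const value = await load()
  await cache.put(key, value)
  return value
}

// In-memory stand-in for env.CACHE so the helper can be exercised locally.
const store = new Map<string, string>()
const mockCache: CacheBinding = {
  async get(key) { return store.get(key) ?? null },
  async put(key, value) { store.set(key, value) },
}

let loads = 0
const first = await cached(mockCache, 'users', async () => { loads++; return 'users:42' })
const second = await cached(mockCache, 'users', async () => { loads++; return 'users:42' })
console.log(first, second, loads) // users:42 users:42 1
```

The second call is served from the mock cache, so the loader runs exactly once; inside a Worker, the same helper would take `env.CACHE` directly.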

How Functions Compare

| | Traditional Serverless | Aerostack Functions |
|---|---|---|
| Database | `fetch('https://your-db-api/query')` — HTTP round trip | `env.DB.prepare('SELECT ...').all()` — in-process |
| Cache | `redis.get(key)` — network hop to Redis | `env.CACHE.get(key)` — same datacenter |
| Queues | `sqs.sendMessage(...)` — AWS API call | `env.QUEUE.send(...)` — native binding |
| AI | `openai.chat(...)` — external API | `env.AI.run(model, ...)` — built-in multi-provider |
| Vector | `pinecone.query(...)` — external API | `env.VECTORIZE.query(...)` — native binding |
| Storage | `s3.putObject(...)` — AWS API call | `env.STORAGE.put(...)` — native binding |
| Latency per call | 50-200ms per service call | ~0ms — all in-worker |
| Auth/config | Manage credentials for each service | Zero config — bindings are pre-wired |

The Cloudflare Workers Pattern

Aerostack functions follow the standard Cloudflare Workers interface. If you have written a Worker before, you already know how to write a function.

```typescript
// src/index.ts
export default {
  // Handle HTTP requests — required
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url)

    if (url.pathname === '/users') {
      const users = await env.DB.prepare('SELECT * FROM users WHERE active = 1').all()
      return Response.json(users.results)
    }

    return new Response('Not found', { status: 404 })
  },

  // Handle queue messages — optional
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      console.log('Processing job:', msg.body)
      msg.ack()
    }
  },

  // Handle cron triggers — optional
  async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext): Promise<void> {
    // Runs on a schedule (e.g. every hour)
    await env.DB.prepare('DELETE FROM sessions WHERE expires_at < ?').bind(Date.now()).run()
  }
}
```

Three handlers, one file, six platform bindings. That is the entire model.
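In practice the fetch and queue handlers meet in a handoff: the HTTP handler enqueues a job and responds immediately, and the queue handler processes it later. A hedged sketch of that pattern with the `QUEUE` binding mocked (`handleWebhook` and the message shape are illustrative, not an Aerostack API):

```typescript
// Sketch of the fetch → queue handoff: accept the request, enqueue a job,
// and respond 202 before the work runs. QueueBinding mirrors env.QUEUE.send
// as shown on this page; the mock is a local stand-in.
interface QueueBinding {
  send(body: unknown): Promise<void>
}

async function handleWebhook(payload: { event: string }, queue: QueueBinding): Promise<Response> {
  await queue.send({ job: 'process-webhook', event: payload.event })
  return new Response('accepted', { status: 202 })
}

// In-memory stand-in for env.QUEUE so the handler can be exercised locally.
const sent: unknown[] = []
const mockQueue: QueueBinding = { async send(body) { sent.push(body) } }

const res = await handleWebhook({ event: 'user.created' }, mockQueue)
console.log(res.status, sent.length) // 202 1
```

The enqueued message would arrive in the `queue(batch, env)` handler above as `msg.body`, decoupling slow work from the request path.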

What You Can Build

| Use case | What happens |
|---|---|
| Custom API backend | REST endpoints with DB, cache, and auth — no separate server needed |
| Bot intelligence layer | Function queries your DB, runs AI analysis, caches the result, returns it to a bot via MCP |
| Real-time data pipeline | Ingest webhook → enqueue job → process with AI → store in vector DB → cache result |
| RAG pipeline | Ingest documents → chunk → embed → store in vector search → query semantically from bots/MCP |
| Scheduled analytics | Cron function aggregates DB data → caches dashboard results → queues Slack notification |
| Smart form processor | Receive submission → validate → store in DB → queue confirmation email → respond |
| Webhook receiver | Accept incoming webhooks, validate signatures, enqueue for async processing |
| AI-powered search | User query → embed → vector search → re-rank with LLM → return results |
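Several rows above begin by chunking documents before embedding them. As one hedged sketch of that step, a simple fixed-size chunker with overlap (the sizes are illustrative assumptions, not Aerostack defaults):

```typescript
// Fixed-size chunker with overlap: step through the text in (size - overlap)
// strides so adjacent chunks share context at their boundaries.
function chunk(text: string, size = 200, overlap = 40): string[] {
  const chunks: string[] = []
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size))
    if (start + size >= text.length) break // last chunk reached the end
  }
  return chunks
}

const parts = chunk('x'.repeat(500))
console.log(parts.length, parts[0].length, parts[2].length) // 3 200 180
```

Each chunk would then be embedded (for example via `env.AI.run`) and written to vector search with `env.VECTORIZE`, per the pipeline rows above.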

Next Steps