# Using Skills with Any LLM
The same skill works with any LLM. Aerostack generates the right protocol format automatically: publish once, and your skill is accessible from Claude, ChatGPT, and Gemini without rebuilding anything.
## The Problem with Today’s Tool Ecosystem
Most AI tools are tied to a single LLM platform:
| Skill format | Claude | ChatGPT | Gemini |
|---|---|---|---|
| OpenAI Actions (GPT Store) | ❌ | ✅ | ❌ |
| MCP server | ✅ | ✅* | ❌ |
| Gemini Extensions | ❌ | ❌ | ✅ |
*OpenAI added MCP support in 2026, so MCP servers now work with both Claude and ChatGPT. Gemini still requires its own format.
Aerostack handles the translation layer — one skill definition, every format.
## Option 1: MCP (Claude, Cursor, Windsurf, ChatGPT)
The primary way to use skills is through your workspace gateway. Configure it once in your editor — all installed skills appear as MCP tools:
```json
{
  "mcpServers": {
    "my-workspace": {
      "url": "https://gateway.aerostack.dev/ws/my-workspace",
      "headers": { "Authorization": "Bearer mwt_…" }
    }
  }
}
```

Works with:
- Claude (via Claude Code, claude.ai with MCP beta)
- Cursor
- Windsurf
- VS Code Copilot
- ChatGPT (via MCP connector, 2026+)
- Any editor that supports MCP
Install a skill → it appears through this URL immediately.
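If you template this config across machines or teams, the entry follows a mechanical pattern. A small helper can generate it — illustrative only: the URL shape and header come from the example config above, and the helper itself is hypothetical, not part of any Aerostack SDK:

```typescript
// Shape of one entry under "mcpServers" in the editor config.
interface McpServerEntry {
  url: string;
  headers: Record<string, string>;
}

// Build a gateway entry for a workspace slug and token.
// URL pattern mirrors the example config above (assumption, not an official API).
function gatewayEntry(workspaceSlug: string, token: string): McpServerEntry {
  return {
    url: `https://gateway.aerostack.dev/ws/${workspaceSlug}`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// Compose the full config for one or more workspaces.
const config = {
  mcpServers: {
    'my-workspace': gatewayEntry('my-workspace', 'mwt_...'),
  },
};

console.log(JSON.stringify(config, null, 2));
```

The same generated object can be written into any of the editor config files listed in Option 4.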
## Option 2: OpenAI Function Format
If you’re using the OpenAI API directly in your code (not via an editor), fetch your skills as OpenAI function definitions:
```
GET https://gateway.aerostack.dev/ws/{workspaceSlug}/openai-tools
Authorization: Bearer mwt_...
```

Returns:
```json
[
  {
    "type": "function",
    "function": {
      "name": "github__create_issue",
      "description": "[github] Create a new issue on a GitHub repository",
      "parameters": {
        "type": "object",
        "properties": {
          "owner": { "type": "string", "description": "Repository owner" },
          "repo": { "type": "string", "description": "Repository name" },
          "title": { "type": "string", "description": "Issue title" },
          "body": { "type": "string", "description": "Issue body (markdown)" }
        },
        "required": ["owner", "repo", "title"]
      }
    }
  }
]
```

Use in your OpenAI API calls:
```typescript
import OpenAI from 'openai';

const client = new OpenAI();

// Fetch your installed skills in OpenAI format
const toolsResponse = await fetch('https://gateway.aerostack.dev/ws/my-workspace/openai-tools', {
  headers: { Authorization: `Bearer ${process.env.MWT_TOKEN}` }
});
const tools = await toolsResponse.json();

// Use in a chat completion
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  tools,
  messages: [{ role: 'user', content: 'Create a GitHub issue for the login bug' }]
});

// When OpenAI calls a tool, route it back through the Aerostack gateway
if (response.choices[0].finish_reason === 'tool_calls') {
  const toolCall = response.choices[0].message.tool_calls[0];
  const result = await callAerostackGateway(toolCall.function.name, JSON.parse(toolCall.function.arguments));
  // ...
}
```

## Option 3: Gemini Tool Declarations
For Google Gemini API users:
```
GET https://gateway.aerostack.dev/ws/{workspaceSlug}/gemini-tools
Authorization: Bearer mwt_...
```

Returns Gemini-compatible function declarations:
```json
[
  {
    "name": "github__create_issue",
    "description": "[github] Create a new issue on a GitHub repository",
    "parameters": {
      "type": "OBJECT",
      "properties": {
        "owner": { "type": "STRING", "description": "Repository owner" },
        "repo": { "type": "STRING", "description": "Repository name" },
        "title": { "type": "STRING", "description": "Issue title" }
      },
      "required": ["owner", "repo", "title"]
    }
  }
]
```

Use with the Gemini API:
```typescript
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

// Fetch tools in Gemini format
const toolsResponse = await fetch('https://gateway.aerostack.dev/ws/my-workspace/gemini-tools', {
  headers: { Authorization: `Bearer ${process.env.MWT_TOKEN}` }
});
const functions = await toolsResponse.json();

const model = genAI.getGenerativeModel({
  model: 'gemini-1.5-pro',
  tools: [{ functionDeclarations: functions }]
});
```

## Option 4: Install per Editor (Manual)
If you prefer to install skills directly into your editor’s config rather than via a shared workspace:
```bash
# Cursor
aerostack skill install @johndoe/github-skill --editor cursor
# → writes entry to ~/.cursor/mcp.json

# Windsurf
aerostack skill install @johndoe/github-skill --editor windsurf
# → writes entry to ~/.codeium/windsurf/mcp_config.json

# Claude Code
aerostack skill install @johndoe/github-skill --editor claude-code
# → writes entry to ~/.claude/mcp.json

# VS Code
aerostack skill install @johndoe/github-skill --editor vscode
# → writes entry to .vscode/mcp.json
```

## Which Approach to Use
| Situation | Recommended approach |
|---|---|
| Using Claude Code, Cursor, or Windsurf | Workspace gateway (MCP) |
| Building with the OpenAI API in code | `/openai-tools` endpoint |
| Building with the Gemini API in code | `/gemini-tools` endpoint |
| Team setup where everyone needs the same skills | Shared workspace gateway |
| Personal setup, specific editor | `--editor` flag |
The workspace gateway (Option 1) is the most powerful because:
- One config, all editors
- Add skills without touching editor configs
- Works with MCP-compatible LLMs automatically
## Publish Once, Reach All LLMs
When you publish a skill on Aerostack, the tool definitions in your `tools[]` array use JSON Schema — the same schema format used by MCP, OpenAI, and Gemini (with minor formatting differences). Aerostack generates the format-specific responses at request time.
You write the tool once. Every LLM can use it.
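The per-format generation is mechanical. As a rough sketch of the idea — not Aerostack's actual code, and with the schema types simplified — converting an OpenAI function definition into a Gemini declaration mostly means unwrapping the `function` envelope and uppercasing the JSON Schema type names:

```typescript
// Simplified JSON Schema shape covering the fields in the examples above.
type Schema = {
  type: string;
  description?: string;
  properties?: Record<string, Schema>;
  required?: string[];
  items?: Schema;
};

interface OpenAITool {
  type: 'function';
  function: { name: string; description: string; parameters: Schema };
}

// Gemini spells type names in uppercase (OBJECT, STRING, ...);
// recurse through nested properties and array items.
function toGeminiSchema(s: Schema): Schema {
  return {
    ...s,
    type: s.type.toUpperCase(),
    properties: s.properties
      ? Object.fromEntries(
          Object.entries(s.properties).map(([k, v]) => [k, toGeminiSchema(v)])
        )
      : undefined,
    items: s.items ? toGeminiSchema(s.items) : undefined,
  };
}

// Unwrap the OpenAI "function" envelope into a flat Gemini declaration.
function toGeminiDeclaration(tool: OpenAITool) {
  return {
    name: tool.function.name,
    description: tool.function.description,
    parameters: toGeminiSchema(tool.function.parameters),
  };
}
```

Applied to the `github__create_issue` definition from Option 2, this yields the flat, uppercase-typed declaration shown in Option 3.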