
Connect Tools

Every Aerostack bot connects to one MCP workspace. The workspace aggregates tools from multiple MCP servers, Skills, and Functions into a single tool catalog. When you add a new MCP server to the workspace, the bot can immediately use all its tools — no code changes, no redeployment.


How It Works

When a message arrives, the bot engine:

  1. Discovers tools — calls tools/list on the workspace gateway to get all available tool definitions
  2. Passes tools to the LLM — the LLM receives compressed tool schemas alongside the conversation
  3. LLM decides — if tools are needed, the LLM returns a tool call request
  4. Bot engine executes — calls tools/call on the workspace gateway with the tool name and arguments
  5. Result fed back — the tool result is added to the conversation and the LLM produces a response

This all happens automatically. You do not write routing logic — the LLM decides which tools to use based on the user’s message and the tool descriptions.
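The five steps above can be sketched as a small loop. Everything here is illustrative: `gateway.tools_list`, `gateway.tools_call`, and the `llm` callable are hypothetical stand-ins for the workspace gateway and model client, not actual Aerostack APIs.

```python
def run_agent_loop(gateway, llm, messages, max_iterations=10):
    """Sketch of the bot engine's loop (all names are illustrative)."""
    tools = gateway.tools_list()              # 1. discover tools
    for _ in range(max_iterations):
        reply = llm(messages, tools=tools)    # 2-3. LLM sees tools, decides
        if reply.get("tool_call") is None:
            return reply["content"]           # no tool needed: final answer
        call = reply["tool_call"]
        result = gateway.tools_call(          # 4. execute via the gateway
            call["name"], call["arguments"]
        )
        messages.append(                      # 5. feed the result back
            {"role": "tool", "name": call["name"], "content": result}
        )
    return "max iterations reached"
```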


Tool Namespacing

Tools are namespaced using the pattern serverSlug__toolName. This prevents collisions when multiple MCP servers expose tools with the same name.

| MCP Server Slug | Tool Name | Fully Qualified Name |
| --- | --- | --- |
| stripe | get_payment | stripe__get_payment |
| zendesk | create_ticket | zendesk__create_ticket |
| database | query_orders | database__query_orders |
| my-skill | summarize | my-skill__summarize |

The LLM sees and uses the fully qualified names. When the bot engine receives a tool call, it routes it to the correct MCP server via the workspace gateway.
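The `serverSlug__toolName` pattern can be expressed as two small helpers. Splitting on the first `__` is an assumption here, made so that slugs with hyphens (like `my-skill`) round-trip cleanly:

```python
def qualify(server_slug: str, tool_name: str) -> str:
    """Build the fully qualified tool name the LLM sees."""
    return f"{server_slug}__{tool_name}"

def route(qualified_name: str) -> tuple[str, str]:
    """Split a qualified name back into (server slug, tool name).

    Splits on the first '__'; this split rule is an assumption,
    not documented behavior.
    """
    server_slug, _, tool_name = qualified_name.partition("__")
    return server_slug, tool_name
```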


Adding Tools to Your Bot

From the Dashboard

  1. Navigate to Workspaces in the sidebar
  2. Open the workspace linked to your bot
  3. Click Add MCP Server and search the marketplace
  4. Select the MCP server and click Add
  5. The bot picks up the new tools within a few minutes (tool definitions are cached)

From the Bot Detail Page

If your bot does not have a workspace yet, or you want to create a new one:

  1. Navigate to Bots and open your bot
  2. In the Workspace section, click Create Workspace or Change Workspace
  3. Add MCP servers to the workspace

Tool definitions are cached for performance. After adding or removing MCP servers from a workspace, the bot picks up changes within a few minutes.
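The "picks up changes within a few minutes" behavior is consistent with a simple TTL cache in front of tool discovery. A minimal sketch, assuming a 300-second TTL (the actual cache duration is not documented here):

```python
import time

class ToolCache:
    """Cache tool definitions and refetch after a TTL expires.

    The 300-second default is an assumption, chosen to match the
    documented 'a few minutes' propagation delay.
    """

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch          # callable that lists tools from the gateway
        self._ttl = ttl_seconds
        self._tools = None
        self._fetched_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._tools is None or now - self._fetched_at > self._ttl:
            self._tools = self._fetch()
            self._fetched_at = now
        return self._tools
```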


Agent Loop vs Workflow: Two Ways to Use Tools

The execution mode determines how tools are invoked.

Agent Loop (LLM Picks Tools)

In agent loop mode, the LLM receives the full tool catalog and decides autonomously which tools to call. You influence behavior through the system prompt:

When users ask about order status, use the orders__get_order tool.
When users need a refund, use the stripe__create_refund tool.
Never call stripe__create_refund without first verifying the order exists.

The LLM may call zero tools (for simple conversational messages), one tool, or chain multiple tools across up to 10 loop iterations.

Use case: A customer asks “Where is my order #12345?” The LLM automatically calls orders__get_order with the order ID, reads the result, and generates a natural-language response with the tracking details.

Workflow (You Place mcp_tool Nodes)

In workflow mode, you explicitly place mcp_tool nodes in your graph. Each node specifies exactly which tool to call and what arguments to pass:

```json
{
  "type": "mcp_tool",
  "data": {
    "toolName": "stripe__get_payment",
    "arguments": "{\"payment_id\": \"{{payment_id}}\"}",
    "outputVariable": "payment_info"
  }
}
```

The result is stored in the outputVariable and can be referenced by downstream nodes using {{payment_info}} interpolation.
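The `{{variable}}` interpolation in the `arguments` string can be sketched as a substitute-then-parse step. This is a simplified model; the engine's actual escaping rules for values containing quotes are not documented here:

```python
import json
import re

def render_arguments(template: str, variables: dict) -> dict:
    """Substitute {{name}} placeholders, then parse the JSON arguments.

    Minimal sketch: values are inserted as plain strings, so values
    containing quotes would need escaping the real engine presumably handles.
    """
    rendered = re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )
    return json.loads(rendered)
```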

Use case: A refund workflow that always follows the same steps — look up order, check amount, process refund or escalate — regardless of how the user phrases their request.


Token Optimization

Aerostack automatically optimizes token usage when working with tools:

| Optimization | Details |
| --- | --- |
| Tool limit | Maximum 20 tools sent to the LLM per message |
| Smart loading | Tools are only included when the LLM needs them — simple messages skip tool definitions |
| Result truncation | Large tool results are automatically truncated (3000 chars in workflows) |
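The result-truncation rule can be sketched directly. The 3000-character limit matches the documented workflow default; the truncation marker appended here is illustrative:

```python
def truncate_result(result: str, limit: int = 3000) -> str:
    """Clip a large tool result before it re-enters the conversation.

    3000 chars is the documented workflow default; the '[truncated]'
    marker is an assumption for illustration.
    """
    if len(result) <= limit:
        return result
    return result[:limit] + " [truncated]"
```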

Managing Tool Count

Bots with more than 20 tools risk degraded results in two ways: only 20 tool definitions are sent per message, so some tools may be left out, and a larger catalog gives the model more options to weigh, which slows decision-making and increases errors. Strategies:

  1. Separate workspaces — put related tools in one workspace and use different bots for different domains
  2. Write clear descriptions — the LLM relies on tool descriptions to make decisions. Vague descriptions lead to incorrect tool selection
  3. Guide via system prompt — mention the most important tools by name in your system prompt
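One way to reason about the 20-tool cap is a selection pass that keeps explicitly prioritized tools first and fills the rest from the catalog. This is a hypothetical sketch of such a policy, not the platform's actual selection logic:

```python
def select_tools(catalog, priority, limit=20):
    """Pick at most `limit` tools, keeping `priority` names first.

    Illustrative only; how the platform actually chooses which 20
    tools to send is not documented.
    """
    by_name = {t["name"]: t for t in catalog}
    chosen = [by_name[n] for n in priority if n in by_name]
    for tool in catalog:
        if len(chosen) >= limit:
            break
        if tool not in chosen:
            chosen.append(tool)
    return chosen[:limit]
```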

Workspace Token Management

The bot engine automatically creates and manages workspace access tokens behind the scenes. You do not need to manage tokens manually — they are created when the bot is set up, rotated as needed, and refreshed before expiry.


Use Cases by Tool Type

Database MCP

Bot queries customer records, order history, and account data. The LLM formulates SQL queries or calls pre-defined query tools based on the user’s question.

Example: “Show me all orders from last week” triggers database__query_orders with a date filter.

Payment MCP (Stripe, etc.)

Bot checks payment status, processes refunds, and looks up invoices. Combined with identity verification (auth_gate), this enables fully automated payment support.

Example: Verified customer asks for a refund. Workflow calls stripe__create_refund and sends a confirmation message.

Ticketing MCP (Zendesk, Jira, etc.)

Bot searches existing tickets, creates new ones, and updates status. Useful for support bots that need to track issues across conversations.

Example: If the bot cannot resolve an issue, it creates a Zendesk ticket with the conversation summary and notifies the user with the ticket number.

Knowledge Base MCP

Bot searches documentation, FAQs, and internal knowledge bases. RAG-style retrieval gives the LLM relevant context for answering questions.

Example: Community manager bot searches the FAQ knowledge base when a user asks a product question, and cites the source article.

Custom Skills and Functions

Any custom logic you deploy as a Skill or Edge Function becomes available as a tool. Process images, run calculations, call internal APIs, or execute business logic.

Example: A code review bot calls a custom analyze_pr skill that clones the repo, runs linters, and returns a summary.


Best Practices

  1. Start with 3-5 tools. Add more as you identify gaps. Too many tools upfront confuses the LLM.

  2. Write specific tool descriptions. “Get the current status and tracking details for a specific order by order ID” is far better than “Get order info.”

  3. Use the system prompt to guide tool selection. Explicitly tell the LLM when to use which tools and in what order.

  4. Test with the test console. Send realistic messages and verify the bot calls the right tools with the right arguments before going live.

  5. Monitor tool usage in analytics. The analytics dashboard shows top tools used, failure rates, and latency. Identify tools that fail frequently and fix the underlying MCP server.


Next Steps