Building your own AI agent rather than using a packaged app like Claude Desktop or ChatGPT? Most agent frameworks support stdio MCP servers directly — no tunnel needed. This page shows the patterns for the most common frameworks.
OpenAI Agents SDK
The cleanest path if your agent uses OpenAI models:

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    # The context manager launches the stdio server and connects to it;
    # without it, the server is never started.
    async with MCPServerStdio(
        params={
            "command": "kataven-mcp",
            "env": {"KATAVEN_API_KEY": "sk_live_acme_..."},
        }
    ) as kataven_mcp:
        agent = Agent(
            name="Ops",
            instructions="You manage Kataven voice agents on the user's behalf.",
            mcp_servers=[kataven_mcp],
        )
        result = await Runner.run(agent, "Pause every running campaign so I can deploy.")
        print(result.final_output)

asyncio.run(main())
```
The model sees a full JSON Schema for each Kataven tool's arguments and fills in the fields itself.
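For illustration, a tool descriptor might look like the following. The tool name and fields here are invented for this example, not Kataven's actual API:

```json
{
  "name": "kataven_pause_campaign",
  "description": "Pause a running outbound campaign.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "campaign_id": {
        "type": "string",
        "description": "ID of the campaign to pause."
      }
    },
    "required": ["campaign_id"]
  }
}
```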
LangGraph
LangChain has a first-party MCP adapter that wraps any stdio server as LangChain tools:
```python
import asyncio

from langchain_anthropic import ChatAnthropic  # or ChatOpenAI
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async def main():
    client = MultiServerMCPClient({
        "kataven": {
            "command": "kataven-mcp",
            "args": [],
            "transport": "stdio",
            "env": {"KATAVEN_API_KEY": "sk_live_acme_..."},
        }
    })

    # Pull in tools at agent-creation time:
    tools = await client.get_tools()
    agent = create_react_agent(ChatAnthropic(model="claude-sonnet-4"), tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Place a test call to my cell."}]}
    )
    print(result["messages"][-1].content)

asyncio.run(main())
```
Agno
```python
import asyncio

from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.mcp import MCPTools

async def main():
    async with MCPTools(
        command="kataven-mcp",
        env={"KATAVEN_API_KEY": "sk_live_acme_..."},
    ) as kataven_tools:
        agent = Agent(
            model=Claude(id="claude-sonnet-4"),
            tools=[kataven_tools],
            instructions="You manage Kataven voice agents on the user's behalf.",
        )
        await agent.aprint_response("List my agents.")

asyncio.run(main())
```
AutoGen
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import StdioServerParams, mcp_server_tools

async def main():
    server_params = StdioServerParams(
        command="kataven-mcp",
        env={"KATAVEN_API_KEY": "sk_live_acme_..."},
    )
    tools = await mcp_server_tools(server_params)
    agent = AssistantAgent(
        name="ops",
        model_client=OpenAIChatCompletionClient(model="gpt-4o"),
        tools=tools,
    )
    result = await agent.run(task="List my agents.")
    print(result.messages[-1].content)

asyncio.run(main())
```
Official MCP TypeScript SDK
For Node / TypeScript agents, use @modelcontextprotocol/sdk directly:
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "kataven-mcp",
  env: { KATAVEN_API_KEY: "sk_live_acme_..." },
});

const client = new Client(
  { name: "my-agent", version: "1.0.0" },
  { capabilities: {} },
);
await client.connect(transport);

// List available tools:
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // ['kataven_list_agents', 'kataven_create_agent', ...]

// Call a tool:
const result = await client.callTool({
  name: "kataven_list_agents",
  arguments: {},
});
console.log(result);
```
Hand the resulting tools to whichever LLM client you’re using (OpenAI, Anthropic, etc.) following that client’s tool-use protocol.
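As a sketch of that hand-off, a small adapter can reshape `listTools()` results into the shape Anthropic's Messages API expects for its `tools` parameter. The `McpTool` type and `toAnthropicTools` helper below are our own names, not part of either SDK:

```typescript
// Mirrors the fields of a tool descriptor returned by client.listTools().
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

// Hypothetical helper: convert MCP tool descriptors to Anthropic tool definitions.
// Anthropic expects `input_schema` (snake_case) where MCP uses `inputSchema`.
function toAnthropicTools(tools: McpTool[]) {
  return tools.map((t) => ({
    name: t.name,
    description: t.description ?? "",
    input_schema: t.inputSchema,
  }));
}

// Pass the converted list as `tools` in a messages.create() call, then route
// any tool_use blocks in the response back through client.callTool().
const anthropicTools = toAnthropicTools([
  { name: "kataven_list_agents", inputSchema: { type: "object", properties: {} } },
]);
console.log(anthropicTools[0].name); // "kataven_list_agents"
```

OpenAI-style clients need the equivalent reshaping into `function`-type tool entries; the loop structure (send tools, detect a tool call, execute via MCP, return the result) is the same.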
Why stdio, not the tunneled HTTPS path?
Programmatic frameworks all support stdio because they’re already running locally — they can spawn subprocesses, pipe stdio, and cleanly shut them down. The HTTPS-tunnel path exists only for chat surfaces (ChatGPT, Claude.ai web, n8n) that can’t spawn local processes.
If you specifically need an HTTPS endpoint (e.g. you’re deploying your agent to a serverless environment that can’t run subprocesses), use the same tunnel recipe packaged AI clients use.
See also