Unified MCP Server

Installation & Usage

OpenAI · Anthropic · Gemini · Cohere · Grok & Groq · Claude.ai · Claude desktop · Cursor

OpenAI API:

OpenAI's Responses API supports remote MCP servers directly. Pass in our Streamable HTTP URL:

from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "unifiedMCP",
        "server_url": "https://mcp-api.unified.to/sse?token=XXXXXXXX&connection=YYYYYYY",
        "require_approval": "never",
        "allowed_tools": [],  # optionally restrict which tools the model may call
    }],
    input="list the candidates and then analyse the resumes from their applications",
)

OpenAI also supports sending in a list of MCP tools and having its API request that you call a specific tool with specific parameters. First call our MCP /tools endpoint, then take the output and include it in your prompt API call:

resp = client.responses.create(
    model="gpt-4.1",
    tools=tools,  # the list returned by our GET /tools endpoint
    input="list the candidates and then analyse the resumes from their applications",
)
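
Here, tools comes from our /tools endpoint. Fetching it might look like this (a minimal sketch; UNIFIED_API_URL and the Authorization header are placeholders, not the confirmed contract of the endpoint):

# Minimal sketch: fetch the MCP tool list from our API. The base URL and
# auth header below are placeholders; see our API reference for specifics.
import requests

tools = requests.get(
    f"{UNIFIED_API_URL}/tools",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
).json()  # pass this as the tools parameter above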

Once you call that MCP tool (using our /tools/{id}/call endpoint), create a new prompt that references the original response with a previous_response_id value.
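
A sketch of that follow-up call, using OpenAI's function_call_output input item for returning tool results; call_id and tool_output are illustrative names, not values from our API:

# Minimal sketch; call_id comes from the model's tool request and
# tool_output holds what our /tools/{id}/call endpoint returned.
import json

follow_up = client.responses.create(
    model="gpt-4.1",
    previous_response_id=resp.id,
    tools=tools,  # same tool list as before
    input=[{
        "type": "function_call_output",
        "call_id": call_id,
        "output": json.dumps(tool_output),
    }],
)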

Please see this article for more information.

Anthropic API:

Anthropic's Messages API lets you add MCP tools; it will return an intermediate response asking you to call a given tool, after which you continue the request by providing that tool's output.

You would first call our MCP /tools endpoint, then take the output and include it in your prompt API call:

resp = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,  # the list returned by our GET /tools endpoint
    messages=[{
        "role": "user",
        "content": "list the candidates and then analyse the resumes from their applications",
    }],
)

Anthropic's API will then return a response with one or more tool_use content blocks:

[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "list_candidates",
    "input": { "limit": "100" }
  }
]

Once you call that MCP tool (using our /tools/{id}/call endpoint), send the following back to the model in a subsequent user message:

[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "..."
  }
]
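
Putting it together, the follow-up request might look like this (a minimal sketch; resp is the original response and tool_output holds our endpoint's result, both names illustrative):

# Minimal sketch; tool_output holds what our /tools/{id}/call endpoint
# returned (a string or list of content blocks).
follow_up = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,  # same tool list as before
    messages=[
        {"role": "user", "content": "list the candidates and then analyse the resumes from their applications"},
        {"role": "assistant", "content": resp.content},  # includes the tool_use block
        {"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
            "content": tool_output,
        }]},
    ],
)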

Please see this article for more information on creating a message with the tools field.

Anthropic also recently released a beta that supports remote MCP servers directly:

const completion = await anthropic.beta.messages.create({
    model: latestModel,
    max_tokens: 1024,
    messages: [
        {
            role: 'user',
            content: message,
        },
    ],
    stream: false,
    mcp_servers: [
        {
            type: 'url',
            url: 'https://mcp-api.unified.to/mcp?token=XXXXXXXX&connection=YYYYYYY',
            name: 'unifiedMCP',
        },
    ],
    betas: ['mcp-client-2025-04-04'],
});

Google Gemini API:

Google Gemini uses a concept similar to Anthropic's tools, but calls it function_declarations.

First, request a list of tools with GET /tools?type=gemini, then pass those tools to the content-generation request.
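
For reference, a Gemini tools value wraps function declarations roughly like this (an illustrative shape only; the exact fields returned by GET /tools?type=gemini may differ, and the JavaScript SDK uses camelCase functionDeclarations):

# Illustrative shape only; the description text below is invented for the
# example and is not our endpoint's actual output.
tools = [{
    "function_declarations": [{
        "name": "list_candidates",
        "description": "List candidates from the connected ATS",
        "parameters": {
            "type": "object",
            "properties": {"limit": {"type": "string"}},
        },
    }],
}]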

const completion = await gemini.models.generateContent({
    model: latestModel,
    contents: message,
    config: {
        tools: tools, // the list returned by GET /tools?type=gemini
    },
});

Please see this article for more information.

Gemini will return a request for you to call that specific tool:

content {
    role: "model"
    parts {
      function_call {
        name: "list_candidates"
        args {
          fields {
            key: "limit"
            value {
              string_value: "100"
            }
          }
        }
      }
    }
}

When you respond with another message, include the MCP tool's response as a functionResponse part in the contents array:

{
  "role": "user",
  "parts": [
    {
      "functionResponse": {
        "name": "list_candidates",
        "response": {
          ...
        }
      }
    }
  ]
}
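
A sketch of that follow-up request, shown here with the Python google-genai SDK (equivalent to the JavaScript call above); tool_output is an illustrative name for our endpoint's result:

# Minimal sketch; tool_output holds what our /tools/{id}/call endpoint
# returned, as a dict.
follow_up = gemini.models.generate_content(
    model=latest_model,
    contents=[
        {"role": "user", "parts": [{"text": message}]},
        completion.candidates[0].content,  # the model turn with function_call
        {"role": "user", "parts": [{
            "function_response": {
                "name": "list_candidates",
                "response": tool_output,
            },
        }]},
    ],
    config={"tools": tools},
)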

Cohere:

Cohere's Chat API uses the same concept as Anthropic and Google Gemini. First, get a list of tools from our GET /tools?type=cohere endpoint, then add the result to your chat API call:

response = co.chat(
    model="command-a-03-2025", messages=messages, tools=tools
)

Then, when Cohere's response asks you to call a tool, call our POST /tools/{id}/call endpoint, using the tool call's arguments as the request payload.
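
A sketch of that loop, assuming Cohere's v2 response shape (response.message.tool_calls); the base URL, the tool-id lookup, and the request shape are placeholders, so check our API reference for the exact contract:

# Minimal sketch of handling Cohere's tool calls. UNIFIED_API_URL, API_TOKEN,
# and the tool_ids lookup are illustrative placeholders.
import json
import requests

for tool_call in response.message.tool_calls or []:
    args = json.loads(tool_call.function.arguments)
    # look up the matching tool's id in the GET /tools?type=cohere result
    tool_id = tool_ids[tool_call.function.name]
    result = requests.post(
        f"{UNIFIED_API_URL}/tools/{tool_id}/call",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=args,  # the tool call's arguments become the payload
    ).json()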

Grok & Groq:

Both x.ai's Grok and Groq (a different company) work the same way, which is similar to Anthropic and Google Gemini.

First get a list of tools from our GET /tools?type=grok or GET /tools?type=groq endpoint, then add that result to your chat API command:

const response = await client.chat.completions.create({
    // ...
    tools: tools,
    tool_choice: 'auto',
});

Respond to tool_calls in the Grok/Groq response:

const response_message = response.choices[0].message;
const tool_calls = response_message.tool_calls;

for (const tool_call of tool_calls) {
    const tool_id = tool_call.id;
    const function_args = tool_call.function.arguments;
    const function_name = tool_call.function.name;

    // call our POST /tools/${tool_id}/call endpoint with function_args in the
    // POST payload; result_content below holds that endpoint's output

    const messages = [
        // ... the prior conversation, including response_message ...
        {
            tool_call_id: tool_id,
            role: 'tool',
            name: function_name,
            content: result_content,
        },
    ];

    const second_response = await client.chat.completions.create({
        // ...
        messages,
    });
}

Claude.ai (online):

Go to claude.ai, then navigate to Settings > Integrations. Click on "Add custom integration". Enter the MCP URL:

https://mcp-api.unified.to/sse?token={connectionID}-{nonce}-{signature}

Make sure to provide your end-customer with the appropriate token value.

Claude (desktop client):

Edit the claude_desktop_config.json file:

{
    "mcpServers": {
        "unified-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "mcp-remote",
                "https://mcp-api.unified.to/sse?token=XXXXXXXX&connection=YYYYYYY",
                "--allow-http"
            ]
        }
    }
}

Make sure to provide your end-customer with the appropriate token value.

Cursor:

Navigate to Cursor > Settings > Cursor Settings > MCP and edit the MCP configuration. Replace unified-mcp with the name of your own application, and make sure to provide your end-customer with the appropriate token value.

{
    "mcpServers": {
        "unified-mcp": {
            "url": "https://mcp-api.unified.to/sse?token=XXXXXXXX&connection=YYYYYYY"
        }
    }
}

MCP is a new protocol and it is moving fast. We expect more LLM and agent clients to support its newer Streamable HTTP transport, and we expect the protocol itself to keep expanding quickly. Stay tuned as we keep pace.

Are we missing anything? Let us know