MCP Server
Unified has launched an MCP server that connects any Unified connection to LLM (Large Language Model) providers that support the newest MCP protocols. The available MCP tools are determined by the integration’s feature support and the connection’s requested permissions.
The Unified MCP server sits on top of our Unified API (which calls individual APIs), and hides all of the complexity of API calling. Each call to a tool will count as 1 API request on your plan.
Warning: Our MCP server is in beta and shouldn't be used for production systems (yet). Reach out to us if you would like to use it in your product.
URLs
- Streamable HTTP: https://mcp-api.unified.to/mcp
- SSE: https://mcp-api.unified.to/sse
- stdio: for real? it's 2025...
Authentication
You must provide a token to the MCP server, either as a token URL parameter (eg. ?token={token}) or in the Authorization header as a bearer token (eg. Authorization: bearer {token}).
The token is generated as follows: {connection_ID}-{nonce}-{signature}

| Part | Description |
|---|---|
| connection_ID | An end-customer's connection ID from Unified |
| nonce | A random UTF-8 string that is at least 8 characters long |
| signature | The SHA-256 HEX string result of the connection_ID, nonce, and your workspace secret (found in Settings > API Keys) |
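For illustration, token generation might look like this in Python. The exact input fed to SHA-256 (concatenation order and any separators) is an assumption in this sketch; verify it against your workspace settings:
import hashlib
import secrets

def make_token(connection_id: str, workspace_secret: str) -> str:
    # A random UTF-8 string of at least 8 characters
    nonce = secrets.token_hex(8)
    # Assumption: the signature is the SHA-256 hex digest of the connection ID,
    # nonce, and workspace secret concatenated in that order
    signature = hashlib.sha256(
        (connection_id + nonce + workspace_secret).encode("utf-8")
    ).hexdigest()
    return f"{connection_id}-{nonce}-{signature}"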
Installation & Usage
OpenAI API:
OpenAI's Responses API supports remote MCP servers directly. Pass in our Streamable HTTP URL as the server_url:
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "unifiedMCP",
        "server_url": "https://mcp-api.unified.to/mcp?token=XXXXXXXX",
        "require_approval": "never",
        "allowed_tools": [],
    }],
    input="list the candidates and then analyse the resumes from their applications",
)
OpenAI also supports sending in a list of MCP tools and having their API request that you call a specific tool with specific parameters. You would first call our MCP /tools endpoint, then take the output and include it in your prompt API call:
resp = client.responses.create(
    model="gpt-4.1",
    tools=tools,  # the tool list built from our /tools endpoint (see below)
    input="list the candidates and then analyse the resumes from their applications",
)
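One way to build that tools value is to map our GET /tools result (documented under Additional API Endpoints below) into OpenAI's function-tool format. A sketch; since our schema does not declare parameter types, string is assumed for every parameter:
# unified_tools is the array returned by GET /tools (see Additional API Endpoints)
tools = [{
    "type": "function",
    "name": t["id"],
    "description": t["description"],
    "parameters": {
        "type": "object",
        "properties": {
            p["name"]: {"type": "string", "description": p["description"]}
            for p in t["parameters"]
        },
        "required": [p["name"] for p in t["parameters"] if p["required"]],
    },
} for t in unified_tools]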
Once you call that MCP tool (using our /tools/{id}/call endpoint), you would create a new prompt and reference the original response with a previous_response_id value.
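A sketch of that round trip; call_unified_tool() is a hypothetical helper that wraps our POST /tools/{id}/call endpoint:
# The model's requested tool call arrives as a function_call output item
tool_call = next(item for item in resp.output if item.type == "function_call")

# Hypothetical helper wrapping POST /tools/{id}/call;
# tool_call.arguments is a JSON string of the requested parameters
result = call_unified_tool(tool_call.name, tool_call.arguments)

follow_up = client.responses.create(
    model="gpt-4.1",
    previous_response_id=resp.id,  # links this request to the original response
    input=[{
        "type": "function_call_output",
        "call_id": tool_call.call_id,
        "output": result,
    }],
)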
Please see this article for more information.
Anthropic API:
Anthropic's Messages API allows you to include MCP tools; it returns an intermediate response asking you to call a tool, after which you continue the request by providing the tool's output.
You would first call our MCP /tools endpoint, then take the output and include it in your prompt API call:
import anthropic

client = anthropic.Anthropic()

resp = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,  # from our /tools endpoint, in Anthropic's tool format
    messages=[{"role": "user", "content": "list the candidates and then analyse the resumes from their applications"}],
)
Anthropic's API will then return a response with one or more tool_use content blocks:
[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "list_candidates",
    "input": { "limit": "100" }
  }
]
Once you call that MCP tool (using our /tools/{id}/call endpoint), you would return the following back to the model in a subsequent user message:
[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "..."
  }
]
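Putting the pieces together, a sketch of the follow-up call (call_unified_tool() is again a hypothetical wrapper around POST /tools/{id}/call):
# Find the tool_use block the model emitted
tool_use = next(block for block in resp.content if block.type == "tool_use")

# Hypothetical helper wrapping POST /tools/{id}/call
result = call_unified_tool(tool_use.name, tool_use.input)

follow_up = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "list the candidates and then analyse the resumes from their applications"},
        {"role": "assistant", "content": resp.content},
        {"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": result,
        }]},
    ],
)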
Please see this article for more information.
Google Gemini API:
Google Gemini uses a similar concept to the tools of OpenAI and Anthropic, but calls it function_declarations. When creating a chat completion message, include the tools field with your list of function declarations. Gemini will return a request for you to call that specific tool:
content {
  role: "model"
  parts {
    function_call {
      name: "list_candidates"
      args {
        fields {
          key: "limit"
          value {
            string_value: "100"
          }
        }
      }
    }
  }
}
When you respond back with another message, you can include the MCP tool's response in the content array:
{
  "role": "user",
  "parts": [
    {
      "functionResponse": {
        "name": "list_candidates",
        "response": {
          ...
        }
      }
    }
  ]
}
Please see this article for more information.
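As a rough sketch with the google-generativeai Python SDK (the function_declarations value is assumed here to be built from our GET /tools result, with string-typed parameters):
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")

# function_declarations: our GET /tools result mapped into Gemini's schema
model = genai.GenerativeModel(
    "gemini-1.5-pro",
    tools=[{"function_declarations": function_declarations}],
)
resp = model.generate_content(
    "list the candidates and then analyse the resumes from their applications"
)

# The requested tool call arrives as a function_call part
part = resp.candidates[0].content.parts[0]
print(part.function_call.name, part.function_call.args)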
Claude.ai (online):
Go to claude.ai, then navigate to Settings > Integrations. Click on "Add custom integration". Enter the MCP URL:
https://mcp-api.unified.to/sse?token={connectionID}-{nonce}-{signature}
Make sure to provide your end-customer with the appropriate token value.
Claude (desktop client):
Edit the claude_desktop_config.json file:
{
  "mcpServers": {
    "unified-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp-api.unified.to/sse?token={connectionID}-{nonce}-{signature}",
        "--allow-http"
      ]
    }
  }
}
Make sure to provide your end-customer with the appropriate token value.
Cursor:
Navigate to Cursor > Settings > Cursor Settings > MCP and edit the MCP configuration. Replace unified-mcp with the name of your own application, and make sure to provide your end-customer with the appropriate token value.
{
  "mcpServers": {
    "unified-mcp": {
      "url": "https://mcp-api.unified.to/sse?token={connectionID}-{nonce}-{signature}"
    }
  }
}
MCP is a new protocol and it is moving fast. We expect more LLM and agent clients to support its newer Streamable HTTP transport, and we expect the protocol itself to continue expanding quickly. Stay tuned as we keep up.
Additional API Endpoints
GET /tools
Get the list of MCP tools associated with the connection. The result is an array of:
{
  id: string;
  description: string;
  parameters: {
    name: string;
    description: string;
    required: boolean;
  }[]
}
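A sketch of fetching the tool list, assuming these endpoints live on the same host as the MCP server and accept the same token parameter (connection_id, nonce, and signature as described under Authentication):
import requests

token = f"{connection_id}-{nonce}-{signature}"
tools = requests.get(
    "https://mcp-api.unified.to/tools",  # assumed host for the REST endpoints
    params={"token": token},
    timeout=30,
).json()
for tool in tools:
    print(tool["id"], "-", tool["description"])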
POST /tools/{id}/call
Call that tool and return the result. The request payload is an object containing the tool's parameters, with the parameter name as the key and the argument as the value. The response is:
{
  content: {
    type: 'text';
    text: string;
  }[],
  data: JSON-object
}[]
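And a sketch of calling a tool, under the same host and auth assumptions; the tool ID and parameters here are illustrative:
import requests

result = requests.post(
    "https://mcp-api.unified.to/tools/list_candidates/call",  # assumed host
    params={"token": token},
    json={"limit": "100"},  # parameters keyed by name, as described above
    timeout=30,
).json()
print(result[0]["content"][0]["text"])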