MCP Server
Unified has launched an MCP server that connects any Unified connection to LLM (Large Language Model) providers supporting the newest MCP protocols. The available MCP tools are determined by the integration's feature support and the connection's requested permissions.
The Unified MCP server sits on top of our Unified API (which calls the individual providers' APIs) and hides all of the complexity of API calling. Each tool call counts as 1 API request on your plan.
Warning: Our MCP server is in beta and shouldn't be used for production systems (yet). Reach out to us if you would like to use it in your product application.
Changelog
July 10, 2025
- Added structuredContent output when the MCP-Protocol-Version is 2025-06-18 or more recent
- Added type parameter to GET /tools to return tools in a specific LLM data model
- Added hide_sensitive parameter to POST /tools/{id}/call and the MCP server URL to remove PII/sensitive data from results (eg. hide_sensitive=true)
- Added permissions parameter to GET /tools, POST /tools/{id}/call, and the MCP server URL to restrict tools
- Added additional authentication mechanisms token and connection, to be used ONLY with LLM APIs
June 1, 2025
- Initial deploy
URLs
- Streamable HTTP: https://mcp-api.unified.to/mcp
- SSE: https://mcp-api.unified.to/sse
- stdin: not supported (for real? it's 2025...)
Authentication
You must provide a token to the MCP server, either as a token URL parameter (eg. ?token={token}) or in the Authorization header as a bearer token (eg. Authorization: bearer {token}).
There are two options:
- private direct LLM API; used when your application connects directly to the MCP server or gives the LLM API the MCP URL
- public end-user; used to give your end-user the MCP server URL
Private LLM API Authentication:
This token is NOT safe to give out publicly: it is a Unified.to workspace API key and grants access to all of your connections and your Unified.to account.
The token is exactly the same as a Unified.to workspace API key. You MUST also include a connection parameter containing the ID of the connection that you want to access.
Public End-User Authentication:
This token is safe to give out publicly as it doesn't leak any sensitive information, such as your Unified.to API token.
The token is generated as follows: {connection_ID}-{nonce}-{signature}
connection_ID | An end-customer's connection ID from Unified |
nonce | A random UTF-8 string that is at least 8 characters long |
signature | The SHA-256 HEX string result of the connection_ID, nonce, and your workspace secret (found in Settings > API Keys) |
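A minimal sketch of generating this token, assuming the three values are concatenated directly (with no separator) before hashing; confirm the exact input format against your Unified.to workspace settings:

```python
import hashlib

def make_public_token(connection_id: str, nonce: str, workspace_secret: str) -> str:
    # Sketch of the {connection_ID}-{nonce}-{signature} token described above.
    # Assumption: the SHA-256 input is connection_id + nonce + workspace_secret,
    # concatenated with no separator.
    if len(nonce) < 8:
        raise ValueError("nonce must be at least 8 characters long")
    signature = hashlib.sha256(
        (connection_id + nonce + workspace_secret).encode("utf-8")
    ).hexdigest()
    return f"{connection_id}-{nonce}-{signature}"
```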
MCP Server Options
Add these URL parameters to the MCP URL:
connection | The connection ID to access. Only used when token is a workspace API key. |
token | Either the public generated token or the workspace API key |
hide_sensitive | Hides sensitive (ie. PII) data from results. These fields include name, emails, telephones, ... |
permissions | A comma-delimited list of permissions from Unified.to |
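Putting the options together, a sketch of assembling the MCP URL (the parameter names come from the table above; the token, connection ID, and permission names are placeholder values):

```python
from urllib.parse import urlencode

# Sketch: building an MCP URL with the options above. All values here are
# hypothetical placeholders, including the permission names.
BASE = "https://mcp-api.unified.to/mcp"

params = {
    "token": "XXXXXXXX",       # workspace API key in this example...
    "connection": "conn_123",  # ...so the connection parameter is required
    "hide_sensitive": "true",
    "permissions": "ats_candidate_read,ats_application_read",
}
mcp_url = f"{BASE}?{urlencode(params)}"
```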
Installation & Usage
OpenAI API:
OpenAI's Responses API supports remote MCP servers directly. Pass in our Streamable HTTP URL:
from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "unifiedMCP",
        "server_url": "https://mcp-api.unified.to/mcp?token=XXXXXXXX",
        "require_approval": "never",
        "allowed_tools": [],
    }],
    input="list the candidates and then analyse the resumes from their applications",
)
OpenAI also supports sending in a list of MCP tools and having their API request that you call a specific tool with specific parameters. You would first call our MCP GET /tools endpoint, then take the output and include it in your prompt API call:
resp = client.responses.create(
model="gpt-4.1",
tools=$TOOLS,
input="list the candidates and then analyse the resumes from their applications",
)
Once you call that MCP tool (using our POST /tools/{id}/call endpoint), you would create a new prompt and reference the original response with a previous_response_id value.
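As a sketch, the tool's output can be passed back as a function_call_output input item in that follow-up request (field names follow OpenAI's documented function-calling flow; the helper name is ours):

```python
def function_call_output(call_id: str, output_text: str) -> dict:
    # Input item for the follow-up Responses API call; pair it with
    # previous_response_id so the model can continue where it left off.
    return {
        "type": "function_call_output",
        "call_id": call_id,
        "output": output_text,
    }
```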
Please see this article for more information.
Anthropic API:
Anthropic's Messages API allows you to pass in tool definitions and will return an intermediate response asking you to call that MCP tool, then continue the request once you provide its output.
You would first call our MCP GET /tools endpoint, then take the output and include it in your prompt API call:
resp = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=$TOOLS,
    messages=[{
        "role": "user",
        "content": "list the candidates and then analyse the resumes from their applications",
    }],
)
Anthropic's API will then return a response containing tool_use content blocks:
[
{
"type": "tool_use",
"id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
"name": "list_candidates",
"input": { "limit": "100" }
}
]
Once you call that MCP tool (using our POST /tools/{id}/call endpoint), you would return the following back to the model in a subsequent user message:
[
{
"type": "tool_result",
"tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
"content": "..."
}
]
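As a sketch, the text blocks from our POST /tools/{id}/call result (its shape is documented at the end of this page) can be folded into that tool_result block like so (the helper name is ours):

```python
def to_tool_result(tool_use_id: str, call_result: dict) -> dict:
    # call_result is the POST /tools/{id}/call response:
    # {"content": [{"type": "text", "text": "..."}], "structuredContent": {...}}
    text = "\n".join(
        block["text"]
        for block in call_result.get("content", [])
        if block.get("type") == "text"
    )
    return {
        "type": "tool_result",
        "tool_use_id": tool_use_id,
        "content": text,
    }
```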
Please see this article for more information.
Google Gemini API:
Google Gemini uses a similar concept to OpenAI's and Anthropic's tools, but calls it function_declarations. When creating a chat-completion message, include the tools field with a list of tools.
Gemini will return a request for you to call that specific tool:
content {
  role: "model"
  parts {
    function_call {
      name: "list_candidates"
      args {
        fields {
          key: "limit"
          value {
            string_value: "100"
          }
        }
      }
    }
  }
}
When you respond back with another message creation, you can include the MCP tool's response in the content array:
{
  "role": "user",
  "parts": [
    {
      "functionResponse": {
        "name": "list_candidates",
        "response": {
          ...
        }
      }
    }
  ]
}
Please see this article for more information.
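A sketch of building that functionResponse part from our POST /tools/{id}/call result (the helper name and the plain-text fallback shape are ours):

```python
def to_function_response(name: str, call_result: dict) -> dict:
    # Gemini-style functionResponse part; call_result is the
    # POST /tools/{id}/call response described at the end of this page.
    response = call_result.get("structuredContent")
    if response is None:
        # Fall back to the plain-text content blocks (fallback shape assumed).
        response = {"text": "\n".join(
            b["text"] for b in call_result.get("content", [])
            if b.get("type") == "text"
        )}
    return {"functionResponse": {"name": name, "response": response}}
```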
Claude.ai (online):
Go to claude.ai, then navigate to Settings > Integrations. Click on "Add custom integration". Enter the MCP URL:
https://mcp-api.unified.to/sse?token={connectionID}-{nonce}-{signature}
Make sure to provide your end-customer with the appropriate token value.
Claude (desktop client):
Edit the claude_desktop_config.json file:
{
"mcpServers": {
"unified-mcp": {
"command": "npx",
"args": [
"-y",
"mcp-remote",
"https://mcp-api.unified.to/sse?token={connectionID}-{nonce}-{signature}",
"--allow-http"
]
}
}
}
Make sure to provide your end-customer with the appropriate token value.
Cursor:
Navigate to Cursor > Settings > Cursor Settings > MCP and edit the MCP configuration. Replace unified-mcp with the name of your own application, and make sure to provide your end-customer with the appropriate token value.
{
"mcpServers": {
"unified-mcp": {
"url": "https://mcp-api.unified.to/sse?token={connectionID}-{nonce}-{signature}"
}
}
}
MCP is a new protocol and it is moving fast. We expect more LLM and agent clients to support its newer Streamable HTTP transport, and we expect the protocol itself to continue expanding quickly. Stay tuned as we keep pace.
Additional API Endpoints
GET /tools
Get a list of the MCP tools available to the connection. Each tool in the payload includes its parameters.
You can use the permissions parameter to restrict the tools. You can also use the type parameter to change the structure of the result for OpenAI's function calling (type=openai) or Google Gemini's function declarations (type=gemini).
The default (non-OpenAI, non-Gemini) result is an array of:
{
id: string;
description: string;
parameters: {
name: string;
description: string;
required: boolean;
}[]
}
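If you fetch the default shape but want OpenAI's function-calling format yourself (instead of passing type=openai), the conversion can be sketched as follows. Note the default shape carries no per-parameter type, so this sketch assumes string for every parameter:

```python
def to_openai_tool(tool: dict) -> dict:
    # tool follows the default GET /tools shape above. Assumption: every
    # parameter is typed as "string", since the default shape has no type field.
    return {
        "type": "function",
        "name": tool["id"],
        "description": tool["description"],
        "parameters": {
            "type": "object",
            "properties": {
                p["name"]: {"type": "string", "description": p["description"]}
                for p in tool["parameters"]
            },
            "required": [p["name"] for p in tool["parameters"] if p["required"]],
        },
    }
```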
POST /tools/{id}/call
Calls the tool and returns the result.
You can use the permissions parameter to restrict the tools, and the hide_sensitive parameter to hide PII/sensitive data from the results.
{
  content: {
    type: 'text';
    text: string;
  }[],
  structuredContent: JSON-object
}