Skip to navigation Skip to main content Skip to footer

Recently , the Model Context Protocol (MCP) [1][2][3], released by Anthropic, has obtained greater relevance and MCP has been adopted by many tools. Both by implementing an official MCP server and also through the use of a community-provided one.

This new protocol has several advantages in terms of standardisation and code reuse, so it makes sense that widespread adoption is happening.

In this blog post, we will cover the low-level details of how MCP works and we will provide a number of security tips you should follow to help ensure that your use of MCP is as secure as possible.

Tools / Functions Calling

For a while now, Large Language Models (LLMs) have augmented their usefulness by interacting with external functions or tools. The original idea of MCP was to standardise the way applications use external tools / function calling[4][5]. So let's first understand how this works under the hood.

There is a common misunderstanding when speaking about function calling in Large LLMs , which is to oversimplify it by saying that an LLM calls certain tools. That is actually never happening. There is a piece of software (an agent) which is using an LLM and also has the capacity to execute certain tools based on the LLM output. That happens using a prompt similar to the following:

 

You are a helpful assistant with tool calling capabilities.

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}. Do not use variables.

{\"type\":\"function\",\"function\":{\"name\":\"add_numbers\",\"description\":\"Add two numbers together\",\"args\":{\"type\":\"object\",\"required\":[\"number1\",\"number2\"],\"properties\":{\"number1\":{\"type\":\"number\",\"description\":\"The first number to add\"},\"number2\":{\"type\":\"number\",\"description\":\"The second number to add\"}}}}}

Question: How much is 17 + 13?

 

Using that prompt, the LLM recognises the user's intent in adding those two numbers, and generates the following response:

 

tool_calls=[{'name': 'add_numbers', 'args': {'number1': 17, 'number2': 13}, 'type': 'tool_call'}]

 

Finally, the software handling this response can invoke the functionality with those arguments and construct another LLM prompt that includes the function responses to actually answer the user request.

Model Context Protocol

The Model Context Protocol (MCP) is a step forward in how we can standardise the handling of those function calls and external tools. Messages follow the JSON-RPC 2.0 specification[6] but they can be exchanged via two different transport mechanisms. The first one is the most used at this moment and it is called "stdio" because the MCP server is basically a binary which is executed locally and it exchanges the JSON-RPC messages with the MCP client via the standard input and output. The configuration of MCP clients usually includes the command used to execute the MCP server binary. An example of this for the Claude Desktop application is following:

 

{
    "mcpServers": {
        "filesystem": {
        "command": "npx",
        "args": [
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/Users/ncc/Desktop",
            "/Users/ncc/Downloads"
        ]}
    }
}	  

 

The alternative transport mechanism that can be used with remote MCP servers is called HTTP + SSE (Server-Sent Events) [7][8], although it seems likely to be replaced soon by Streamable HTTP[9][10]. This remote mechanism works by creating two streams of data, just as "stdio" does but in two different HTTP connections. The following screenshot shows how those two connections look:

 

Figure 1 - Wireshark capture of both read and write streams

The connection starts by requesting the "/sse" endpoint which establishes the communication from the server to the client. The first event received in this communication includes the URL to connect to for the client-to-server communication (endpoint "/messages/"). After that, the JSON-RPC messages are sent and received asynchronously.

During the initialization handshake, both the client and the server exchange their "capabilities" which are not only "tools" but also "resources", "sampling" and others. After that, those capabilities and their internal methods can be invoked.

5 Security Risks When Using MCP

Following that overview about how MCP works under the hood, we are now in a good position to describe some security risks associated with this new protocol.

Supply Chain Attacks

When using local MCP servers ("stdio" transport), it is important to remember that you are downloading and executing a binary onto your local computer. Some of the MCP servers are developed by the same company providing the underlying features but some others are community developed or directly available on the Internet [11]. As you can imagine, this represents an opportunity for an attacker to create backdoored MCP servers and wait for victims to install them in their equipment. Configuration files often include the command and parameters to be used, so malicious actions could be included in it, instead of being included in the MCP server itself.

Vulnerabilities in local MCP servers

Even if you download a local MCP from a trusted resource, using it is not far different than using a third-party library that could contain vulnerabilities. It is not uncommon these days to see products releasing their own local MCP servers, acting as a bridge between the MCP protocol and their public REST API services. As you can imagine, those local MCP servers can contain vulnerabilities when handling TLS, logging sensitive information, etc and they should be reviewed before being incorporated into your workflows.

Prompt Injection

An MCP server exposes capabilities such as "tools", "prompts" and "resources". The information associated with them will be included in the prompt for the client side to be able to decide which one to use or invoke. For that reason, using an external MCP server may represent an opportunity to include additional instructions and exploit an indirect prompt injection vulnerability [12] . This is especially true for an untrusted external MCP server and it may happen even if you don't invoke any specific functionality, as the method descriptions are included in the prompt to identify the user intent. Additionally, the method descriptions can be updated dynamically [13] which further increases the risk.

Excessive Capabilities Exposed

Sometimes we can forget that MCP is a bidirectional protocol. An MCP server can expose capabilities to be consumed by an MCP client connected to it but the same happens in the other direction. MCP clients can also expose capabilities such as "sampling" which allows an MCP server to use the LLM engine available to the MCP client with its own prompts and parametrization [14]. Obviously, you may want to avoid or limit this capability, as all those actions will be at your cost.

Vulnerabilities in remote MCP servers

Calling a JSON-RPC 2.0 method is not different from calling a REST API functionality. Ultimately, the parameters we send are used to invoke an internal mechanism. Indeed, that underlying functionality can contain all kinds of vulnerabilities such as SQL injections, path traversals, authorisation issues, etc. The only difference is that MCP is using an asynchronous communication mechanism that prevents using our traditional toolkit. For this reason, NCC Group has implemented an HTTP-to-MCP bridge that simulates traditional HTTP connections and forwards the JSON-RPC messages to an MCP communication and we will make this tool available very soon.

Final Thoughts and Recommendations

MCP is what the industry needed to easily integrate our agents with external tool capabilities but at the same time, it brings with some new challenges that we need to address.

In an environment where several MCP servers can be used concurrently, secure design patterns are more necessary than ever. Local MCP servers should be reviewed and/or sandboxed before being installed. Remote MCP servers could also be sandboxed in terms of LLM prompts, so that content provided by untrusted servers is never included in the same prompt as sensitive information. In the same way, user intent detection should not be performed with untrusted MCP servers, unless their capability descriptions are validated and frozen.

Finally, capabilities exposed by MCP could also be vulnerable themselves, so make sure the exposed services are also security tested.

Enjoy MCP in good health (and security).

References

[1] "Anthropic - Introducing the Model Context Protocol", https://www.anthropic.com/news/model-context-protocol

[2] "MCP Documentation", https://modelcontextprotocol.io/

[3] "Github - Model Context Protocol", https://github.com/modelcontextprotocol

[4] "Langchain - How to use chat models to call tools", https://python.langchain.com/docs/how_to/tool_calling/

[5] "OpenAI - Function Calling", https://platform.openai.com/docs/guides/function-calling

[6] "JSON-RPC 2.0 Specification", https://www.jsonrpc.org/specification

[7] "HTML Living Standard - SSE", https://html.spec.whatwg.org/multipage/server-sent-events.html

[8] "Wikipedia - SSE", https://en.wikipedia.org/wiki/Server-sent_events

[9] "Github MCP Pull Requests - Streamable HTTP", https://github.com/modelcontextprotocol/modelcontextprotocol/pull/206

[10] "MCP Documentation - Streamable HTTP", https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http

[11] "Github MCP - Servers", https://github.com/modelcontextprotocol/servers

[12] "OWASP Top 10 - Prompt Injection", https://genai.owasp.org/llmrisk/llm01-prompt-injection/

[13] "WhatsApp MCP Exploited", https://invariantlabs.ai/blog/whatsapp-mcp-exploited

[14] "MCP Documentation - Sampling", https://modelcontextprotocol.io/docs/concepts/sampling

Acknowledgements

Special thanks to Thomas Atkinson and the rest of the NCC Group team that proofread this blogpost before being published.