MCP Protocol for LLMs
Discussions revolve around the Model Context Protocol (MCP), a standard for providing tools, resources, and context to LLMs via servers, including its benefits over direct APIs, implementations by companies like GitHub and Figma, and debates on its necessity and complexity.
Sample Comments
You just described how to write a tool the LLM can use. Not MCP! MCP is basically a tool that runs as a server, so it can be written in any programming language. Which is also its problem: now each MCP tool requires its own server, with all the complications that come with it, including runtime overhead, security model fragmentation, incompatibility…
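The "tool that runs in a server" point can be made concrete with a minimal sketch: an MCP server is essentially a process answering JSON-RPC requests such as `tools/list` and `tools/call`. This is an illustrative hand-rolled handler, not the official SDK, and the `get_weather` tool is a made-up example.

```python
import json

# Illustrative tool catalog a hypothetical server would advertise.
TOOLS = [
    {
        "name": "get_weather",  # placeholder tool, not a real MCP server
        "description": "Return the current weather for a city.",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def handle_request(req: dict) -> dict:
    """Answer the two JSON-RPC calls at the core of the protocol."""
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call":
        args = req["params"]["arguments"]
        # A real server would do actual work here; we echo for illustration.
        text = f"weather in {args['city']}: sunny"
        result = {"content": [{"type": "text", "text": text}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

if __name__ == "__main__":
    req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
    print(json.dumps(handle_request(req)))
```

Because the protocol is just JSON-RPC messages, the same handler could be written in any language, which is exactly the comment's point, for better and worse.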
Not all clients support that - currently it's limited to ChatGPT custom GPT actions. It's not a standard. Fortunately, Anthropic, Google, and OpenAI have agreed to adopt MCP as a shared protocol to enable models to use tools. This protocol mainly exists to simplify things for those building LLM-powered clients like Claude, ChatGPT, Cursor, etc. If you want an LLM (through API calls) to interact with your API, you can't just hand it an API key and expect it to work - you need to build an agent for that.
I'm not sure what you mean? LLMs can only be interacted with through prompting. Even the tool call response from OpenAI is them just wrapping the prompt on their side of the API with another prompt. So everything else is just adding behavior around it. MCP is a way to add behavior around LLM prompting for user convenience.
This response is spot on. People seem very confused about what MCP actually is. It's just a standard way to provide an LLM with tools, and even how that happens is up to the agent implementation. There are some other less common features, but the core is just about providing tool definitions and handling the tool_call. Useful, but basically just OpenAPI for LLMs.
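The "tool definitions plus tool_call handling" core that this comment describes can be sketched in a few lines. The schema shape below follows OpenAI's function-calling format; the model itself is stubbed out, and the `add` tool is a placeholder.

```python
import json

# Tool definition: a JSON schema the LLM sees alongside the prompt.
tools = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two integers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
    },
}]

# Local implementations the agent dispatches to.
IMPLEMENTATIONS = {"add": lambda a, b: a + b}

def handle_tool_call(tool_call: dict) -> str:
    """Run a model-issued tool_call against local code and return the result as text."""
    fn = IMPLEMENTATIONS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return str(fn(**args))

# What a model might return after seeing the definitions (stubbed, not a real API call):
fake_tool_call = {"name": "add", "arguments": '{"a": 2, "b": 3}'}
print(handle_tool_call(fake_tool_call))  # "5"
```

Whether those definitions arrive hand-written, generated from an OpenAPI spec, or fetched from an MCP server's `tools/list` is exactly the part MCP standardizes.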
I can specify and use tools with an LLM without MCP, so why do I need MCP?
Perhaps you haven't used many MCP servers, but those that I have used (GitHub, Atlassian, Glean, BuildKite, Figma, Google Workspace, etc.) work very well. They teach an LLM how to do exactly what you're saying - "use the API standards...your models/agents directly interact with those API endpoints." Most MCP servers don't sit in between the LLM and the API endpoints; they just teach it how to use the tools, and then the LLM calls the APIs directly as any HTTP client would
You can; most MCP servers are just wrappers around existing SDKs or even REST endpoints. I think it all comes down to discovery. MCP has a lot of natural language written into each of its "calls", allowing the LLM to understand context. MCP is also not stateless, but to keep it short: I believe it's just a way to make these tools more discoverable for the LLM. MCP doesn't do much that you can't do with other options; it just makes it easier on the LLM. That's my take as someone who wrote a few.
How so? The protocol doesn't obfuscate things. Your agent can easily expose the entire MCP conversation, but generally just exposes the call and response. This is no different than any other method of providing a tool for the LLM to call. You have some weird bone to pick with MCP which is making you irrationally unreceptive to any good-faith attempt to help you understand. If you want to expose tools to the LLM you have to provide a tool definition to the LLM for each tool and you have
Benefit: a standard, purpose-driven protocol for connecting agents (MCP Host/MCP Clients) to tools, resources, and prompts (MCP Server) that also exposes LLM services to said MCP Servers. The alternative you suggest is manually integrating each set of tools or data? Or maybe there's some misunderstanding about MCP? MCP currently has two transports, stdio and HTTP+SSE. The second one is, in fact, a "network-accessible API" as you call out.
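The two transports mentioned above typically surface as a client-side config entry. This fragment is modeled on the `mcpServers` config shape used by clients such as Claude Desktop; the server names, script path, and URL are placeholders, and not every client supports both forms.

```json
{
  "mcpServers": {
    "local-tools": {
      "command": "python",
      "args": ["my_mcp_server.py"]
    },
    "remote-tools": {
      "url": "https://example.com/mcp"
    }
  }
}
```

The stdio entry spawns a local process and exchanges JSON-RPC over its stdin/stdout; the URL entry points at a network-hosted server speaking the same protocol over HTTP.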
MCP is just an API/SDK. It's an interface. It can't be "hype" any more than a pipe would be. It's just a different pipe shape that fits better with how LLMs operate.