
I don't think that this is a fair comparison. HTTP actually has a specification that you need to follow if you need the web to _work_ (or at least, look good). MCP and others on the other hand are just you explaining to the LLM how you want it to format its responses.

MCP is barely worth being called a protocol; LLMs are now smart enough to follow instructions however you give them. When I played around with creating my own tool-using agent, I specified the tools in YAML, and asked it to use custom tags to 'call' them. It worked just fine.
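A minimal sketch of that kind of hand-rolled tool calling, assuming a hypothetical setup where tools are described in YAML inside the system prompt and the model emits `<tool>` tags with a JSON body (none of these names come from any real protocol):

```python
import json
import re

# Hypothetical convention: tool specs live in the system prompt as
# YAML, and the model 'calls' a tool via a custom <tool>...</tool> tag.
SYSTEM_PROMPT = """\
You may call a tool by emitting a <tool> tag whose body is JSON.

tools:
  - name: get_weather
    description: Return current weather for a city
    args:
      city: string
"""

def extract_tool_call(text: str):
    """Pull the first <tool>...</tool> block out of a model reply."""
    match = re.search(r"<tool>(.*?)</tool>", text, re.DOTALL)
    if match is None:
        return None  # the model is free to ignore the format entirely
    return json.loads(match.group(1))

# Suppose the model replied with:
reply = 'Checking. <tool>{"name": "get_weather", "args": {"city": "Oslo"}}</tool>'
call = extract_tool_call(reply)
print(call)  # {'name': 'get_weather', 'args': {'city': 'Oslo'}}
```

The whole 'protocol' is one regex and a `json.loads` on the agent side; the rest is the model's instruction following.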

More critically, there's nothing except its own training stopping the LLM from _not_ obeying whatever it is you give it, especially as the context fills up and/or the user tries to break it. No proper HTTP server would give you invalid responses 'just because'. Yeah, you could wrap the agent in a retry loop that re-prompts it with whatever formatting error occurred until the response parses, but I don't think any sane person would call that 'following the protocol', as the only entity it makes happy is whoever you're buying tokens from.



> MCP and others on the other hand are just you explaining to the LLM how you want it to format its responses.

There's a little bit more to MCP than that. Arguably the most useful part of MCP is more on the (MCP) server side, in regards to discoverability of tools. Having a standard protocol for listing tools, resources and prompts means all you need is the endpoint where the server is located, and you can just get the tool list in a couple of lines of code, pass it to the LLM and then invoke the tool(s) without hand-rolling a bunch of code.
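To make that concrete: MCP is JSON-RPC 2.0 underneath, and listing a server's tools is a single `tools/list` request. A sketch, with a simplified response a server might send back (transport details, stdio or HTTP, omitted):

```python
import json

# MCP rides on JSON-RPC 2.0; discovering tools is one request.
def make_list_tools_request(request_id: int) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

# A simplified example of what a server might return:
response = json.loads("""
{"jsonrpc": "2.0", "id": 1,
 "result": {"tools": [
    {"name": "read_file",
     "description": "Read a file from disk",
     "inputSchema": {"type": "object",
                     "properties": {"path": {"type": "string"}}}}]}}
""")

tool_names = [t["name"] for t in response["result"]["tools"]]
print(tool_names)  # ['read_file']
```

That list (name, description, input schema) is exactly what you pass along to the LLM, regardless of who wrote the server.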

This is probably more useful if you think in terms of using MCP tool servers written by other people. If you're only using your own stuff, then sure, you only write your own interface code once. But if you want to use tools provided by others, having a standard is handy (albeit certainly not required).

I like the way @01100011 phrased it in his analogy with Junit:

    "I kept thinking it was stupid because in my mind it
     barely did anything. But then I got it: it did barely
     do anything, but it did things you'd commonly need to
     do for testing and it did them in a somewhat
     standardized way that kept you from having to roll
     your own every time."


We already had many standard ways to communicate API documentation before.

MCP brings nothing new.


You should probably spend some time understanding what MCP and other protocols do before aggressively trashing the concept everywhere.

I suggest reading at least the abstract of the linked paper.

> MCP brings nothing new.

https://github.com/modelcontextprotocol/servers


Pro tip: you can just feed the LLM an OpenAPI spec and it will work just as well.
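For instance, flattening an OpenAPI spec into a tool list for the prompt is a few lines. A sketch over a minimal, made-up spec fragment:

```python
import json

# A minimal hypothetical OpenAPI fragment.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/weather/{city}": {
            "get": {
                "operationId": "getWeather",
                "summary": "Current weather for a city",
                "parameters": [
                    {"name": "city", "in": "path", "required": True,
                     "schema": {"type": "string"}},
                ],
            }
        }
    },
}

def spec_to_tool_list(spec: dict) -> list[dict]:
    """Flatten an OpenAPI spec into simple tool descriptions for a prompt."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            tools.append({
                "name": op["operationId"],
                "description": op.get("summary", ""),
                "endpoint": f"{method.upper()} {path}",
            })
    return tools

print(json.dumps(spec_to_tool_list(spec), indent=2))
```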


It's all 'clever prompting' all the way down. 'Chain of thought', excuse me, 'reasoning', 'agents'. It's all just prompt scaffolding and language models.

There is no way this ends the way the salespeople are saying it will. Indexing over a compressed space and doing search is not the future of computation.


You can (theoretically) constrain LLM output with a formal grammar. This works at the next-token selection step, rather than being just another prompt hack. You could also (theoretically) have a standard way to prompt an LLM API with formal-grammar constraints.

That would be a very useful feature.
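A toy illustration of the idea: at each decoding step, tokens the grammar forbids get masked out before sampling, so the constraint operates on token selection itself. Real systems (e.g. llama.cpp's GBNF grammars) do this against an actual tokenizer; this miniature uses whole words as 'tokens':

```python
# Grammar-constrained decoding in miniature: forbidden tokens are
# removed from the candidate set before picking the next token.

def constrain(logits: dict[str, float], allowed: set[str]) -> dict[str, float]:
    """Drop every candidate token the grammar does not permit here."""
    return {tok: score for tok, score in logits.items() if tok in allowed}

# Our toy "grammar" says a JSON object must open with '{'.
allowed_first = {"{"}

# The model's raw preferences favour starting with prose...
raw_logits = {"Sure": 2.5, "{": 1.1, "The": 0.7}

# ...but after masking, only the grammatically valid token survives.
masked = constrain(raw_logits, allowed_first)
best = max(masked, key=masked.get)
print(best)  # {
```

Unlike prompting, the model physically cannot emit an invalid token under this scheme, which is why it's a stronger guarantee than any instruction.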

MCP is not that, MCP is completely unnecessary bullshit.


The first part of this comment is great, but can you please avoid name-calling in the sense that the HN guidelines use the term?

"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3'." - https://news.ycombinator.com/newsguidelines.html

Your comment would be just fine without that last bit—and better still if you had replaced it with a good explanation of how MCP is not that.


Can you elaborate a bit more on "theoretical" formal grammars and constraints that would allow the LLM to use a search engine or git commands and produce the next tokens that take the results into account?

Here are some practical, non-theoretical projects based on a boring and imperfect standard (MCP) that provide LLMs capabilities to use many tools and APIs in the right situation: https://github.com/modelcontextprotocol/servers


You're confusing the agent and the LLM.

An LLM doesn't "use" anything. Your agent does that. The agent is the program that prompts the LLM and reads the response. (The agent typically runs locally on your device, the LLM is typically on a server somewhere and accessed through an API.)

For the agent to parse an LLM's reply properly you'd ideally want the LLM to give a response that adheres to a standard format. Hence grammars.
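The split looks roughly like this minimal agent-loop sketch, where `call_llm` and `run_tool` are hypothetical stand-ins for a model API call and a tool dispatcher:

```python
import json

# The agent owns the loop; the LLM only produces text.
def call_llm(messages):            # would hit a model API in practice
    return '{"tool": "search", "args": {"q": "MCP spec"}}'

def run_tool(name, args):          # would dispatch to a real tool
    return f"results for {args['q']}"

def agent_step(messages):
    reply = call_llm(messages)
    try:
        call = json.loads(reply)   # only works if the LLM kept the format
    except json.JSONDecodeError:
        return None                # nothing forces the model to comply
    return run_tool(call["tool"], call["args"])

print(agent_step([]))  # results for MCP spec
```

The `except` branch is exactly where grammars help: constrained decoding makes that branch unreachable instead of merely unlikely.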

I'm guessing your confusion stems from the fact that you've only ever used LLMs in a chat box on a website. That's OpenAI's business model, but not how LLMs will be used once the technology matures.



