Contents | Bruce
Editing & Formatting | Huanhuan
Design | Daisy
The "USB-C moment" in the history of AI evolution. In November 2024, the MCP protocol released by Anthropic is causing an earthquake in Silicon Valley. This open standard, which is known as the "USB-C in the AI world", not only reconstructs the connection between large models and the physical world, but also hides the code to break the AI monopoly dilemma and reconstruct the production relations of digital civilization. While we are still arguing about the parameter scale of GPT-5, MCP has quietly paved the way for decentralization leading to the AGI era...
Bruce: I have been researching the Model Context Protocol (MCP). It is the second thing in AI to truly excite me since ChatGPT, because it has the potential to solve three problems I have been thinking about for years:
- How can ordinary people (not scientists or geniuses) participate in the AI industry and earn income?
- What is the win-win combination of AI and Ethereum?
- How can we achieve AI d/acc, avoiding both the monopoly and censorship of centralized big companies and the destruction of humanity by AGI?
01. What is MCP?
MCP is an open standard framework that simplifies integrating LLMs with external data sources and tools. If we compare an LLM to the Windows operating system, and applications such as Cursor to the keyboard and other hardware, then MCP is the USB interface: external data and tools can be flexibly plugged in, and the LLM can read and use them.
MCP provides three capabilities to extend LLM:
- Resources (knowledge expansion)
- Tools (function execution, external system calls)
- Prompts (pre-written prompt templates)
An MCP Server can be developed and hosted by anyone, is provided as a standalone service, and can be taken offline or stopped at any time. A minimal sketch of what such a server looks like follows.
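As a concrete illustration, here is a minimal MCP Server exposing all three capabilities, written with the official MCP Python SDK (the `mcp` package). The bird-notes theme anticipates the example in section 04; the server name and data are hypothetical.

```python
# Minimal MCP Server sketch using the official MCP Python SDK (pip install "mcp[cli]").
# The BirdNotes name and all data here are illustrative, not a real service.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("BirdNotes")

NOTES = {"swift": "Common swifts can stay airborne for months at a time."}

# Resource: knowledge expansion, notes the LLM can read on demand.
@mcp.resource("notes://birds/{species}")
def bird_note(species: str) -> str:
    return NOTES.get(species, "No notes for this species yet.")

# Tool: function execution, something the LLM can call to act on an external system.
@mcp.tool()
def record_sighting(species: str, location: str) -> str:
    return f"Recorded a {species} sighting at {location}."

# Prompt: a pre-written template the client can surface to users.
@mcp.prompt()
def identify_bird(description: str) -> str:
    return f"Identify the bird from this field description: {description}"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```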
02. Why do we need MCP?
Today, an LLM is trained on as much data as possible, consuming massive computation to produce an enormous number of parameters and baking knowledge into the model so it can be reproduced through dialogue. This approach has several major problems:
- Training on huge datasets requires enormous time and hardware, and the knowledge baked in at training time is often already outdated.
- Models with huge parameter counts are difficult to deploy and run on local devices, even though in most scenarios users do not need all of that information.
- Some models use crawlers to read external information at inference time to stay current, but because of crawler limitations and uneven external data quality, they may produce even more misleading content.
- Because AI has not shared its benefits with creators, many websites and content owners have begun deploying anti-AI measures and generating large amounts of junk content, which will gradually degrade LLM quality.
- An LLM is hard to extend to the full range of external functions and operations. For example, it cannot reliably call the GitHub API to perform operations: it will generate code from documentation that may be outdated, but it cannot guarantee accurate execution.
03. Architecture evolution: from fat LLM to thin LLM + MCP
We can think of today's ultra-large-scale models as fat LLMs. In simplified form, their architecture works like this: after the user inputs information, the input is decomposed and reasoned over by the Perception & Reasoning layer, which then invokes a huge number of parameters to generate the result.
Based on MCP, an LLM may instead focus on language parsing itself, stripping away knowledge and capabilities to become a thin LLM. Under the thin LLM architecture, the Perception & Reasoning layer focuses on parsing every aspect of the human physical environment into tokens, including but not limited to voice, tone, smell, images, text, gravity, and temperature. It then orchestrates and coordinates hundreds of MCP Servers through an MCP Coordinator to complete the task. Training a thin LLM becomes far faster and cheaper, and the hardware requirements for deployment become very low; a toy sketch of such a coordinator is shown below.
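To make the coordinator idea concrete, here is a toy sketch built on the MCP Python SDK's client API. The two server commands and the first-match routing are hypothetical simplifications; a real coordinator would discover, rank, and reuse servers dynamically.

```python
# Toy MCP Coordinator sketch: fan out to several MCP Servers over stdio,
# inspect their advertised tools, and route a task to the first match.
# The server list and routing strategy are hypothetical simplifications.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVERS = [
    StdioServerParameters(command="python", args=["bird_notes_server.py"]),
    StdioServerParameters(command="python", args=["weather_server.py"]),
]

async def route_task(tool_name: str, arguments: dict) -> str | None:
    for params in SERVERS:
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                if any(t.name == tool_name for t in tools.tools):
                    result = await session.call_tool(tool_name, arguments)
                    return str(result.content)
    return None  # no registered server advertises this tool

if __name__ == "__main__":
    print(asyncio.run(route_task("record_sighting",
                                 {"species": "swift", "location": "Berlin"})))
```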
04. How does MCP solve the three major problems?
How can ordinary people participate in the AI industry?
Anyone with a unique skill can create an MCP Server to provide services to LLMs. For example, a bird lover can offer years of bird notes through MCP. When someone uses an LLM to search for bird-related information, the bird-notes MCP service is called, and the creator receives a share of the revenue.
This is a more precise and automated creator-economy loop: the service content is more standardized, and the number of calls and output tokens can be counted exactly (a sketch of such metering follows). LLM providers can even call several bird-notes MCP Servers at the same time and let users choose and rate them, so that higher-quality servers earn a higher matching weight.
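Here is a minimal sketch of the token-level metering such a loop would need. The accounting structure and the pro-rata revenue split are assumptions for illustration, not part of any MCP specification.

```python
# Hypothetical token-level metering of MCP Server calls. Nothing here is part
# of the MCP spec; it sketches the accounting an LLM provider could keep in
# order to split revenue among creators.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Meter:
    calls: int = 0
    output_tokens: int = 0

usage: dict[str, Meter] = defaultdict(Meter)

def record_call(server_id: str, output_tokens: int) -> None:
    usage[server_id].calls += 1
    usage[server_id].output_tokens += output_tokens

def revenue_shares(pool: float) -> dict[str, float]:
    """Split a payment pool pro rata by output tokens served."""
    total = sum(m.output_tokens for m in usage.values())
    return {sid: pool * m.output_tokens / total for sid, m in usage.items()}

record_call("birdnotes.alice.eth", output_tokens=420)
record_call("birdnotes.bob.eth", output_tokens=180)
print(revenue_shares(pool=10.0))  # {'birdnotes.alice.eth': 7.0, 'birdnotes.bob.eth': 3.0}
```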
A win-win combination of AI and Ethereum
a. We can build an OpenMCP.Network creator-incentive network on Ethereum. MCP Servers need to be hosted and provide stable service; users pay the LLM provider, and the provider distributes the actual incentives through the network to the MCP Servers it called, keeping the whole network sustainable and stable and motivating MCP creators to keep producing high-quality content. This network will need smart contracts to make the incentives automated, transparent, trustworthy, and censorship-resistant. Signatures, permission verification, and runtime privacy protection can all be implemented with technologies such as Ethereum wallets and ZK proofs (a payout sketch appears after this list).
b. Develop MCP Servers for Ethereum on-chain operations, such as an AA (account abstraction) wallet service, so users can make wallet payments through natural language in an LLM without exposing private keys or permissions to the LLM.
c. Build developer tools that further simplify Ethereum smart contract development and code generation.
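As a sketch of item (a), here is what a batched on-chain payout might look like with web3.py. The distributor contract, its distribute() function, and the ABI are invented for this sketch; only the web3.py API usage itself is real.

```python
# Hypothetical incentive payout to MCP creators via an Ethereum smart contract.
# The contract, its ABI, and distribute() are invented for this sketch; only
# the web3.py calls are real (pip install web3, v7 API).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.org"))  # placeholder RPC

DISTRIBUTOR_ABI = [{
    "name": "distribute", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "creators", "type": "address[]"},
               {"name": "amounts", "type": "uint256[]"}],
    "outputs": [],
}]
DISTRIBUTOR = w3.to_checksum_address("0x" + "00" * 20)  # placeholder address
contract = w3.eth.contract(address=DISTRIBUTOR, abi=DISTRIBUTOR_ABI)

def pay_creators(provider_key: str, creators: list[str], amounts: list[int]) -> str:
    """Build, sign, and send one batched payout transaction."""
    acct = w3.eth.account.from_key(provider_key)
    tx = contract.functions.distribute(creators, amounts).build_transaction({
        "from": acct.address,
        "nonce": w3.eth.get_transaction_count(acct.address),
    })
    signed = acct.sign_transaction(tx)
    return w3.eth.send_raw_transaction(signed.raw_transaction).hex()
```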
Decentralizing AI
a. MCP Servers decentralize AI's knowledge and capabilities. Anyone can create and host MCP Servers, register them on a platform such as OpenMCP.Network, and earn incentives based on the calls they receive. No single company can control all MCP Servers. If an LLM provider pays unfair incentives, creators can collectively block that company, and users who stop getting high-quality results will switch to other LLM providers, which keeps competition fair.
b. Creators can enforce fine-grained permission control on their own MCP Servers to protect privacy and copyright (see the sketch after this list). Thin LLM providers should offer reasonable incentives to encourage creators to contribute high-quality MCP Servers.
c. The capability gap between thin LLMs will gradually close, because human language has a finite space to traverse and evolves slowly. LLM providers will need to shift their attention and money toward high-quality MCP Servers rather than piling ever more GPUs into brute-force training runs.
d. AGI's capabilities will be decentralized and de-escalated: the LLM handles only language processing and user interaction, while concrete capabilities are distributed across many MCP Servers. AGI will not be able to threaten humanity, because once its MCP Servers are shut down, only basic language conversation remains.
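A sketch of item (b): an MCP tool that checks an Ethereum wallet signature before serving gated content. The allowlist and the message format are hypothetical; the signature recovery uses the real eth-account library.

```python
# Hypothetical fine-grained permission check inside an MCP Server tool: the
# caller signs a message with their Ethereum wallet, the server recovers the
# address and checks a creator-managed allowlist. Allowlist contents are fake.
from eth_account import Account
from eth_account.messages import encode_defunct
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("GatedBirdNotes")
ALLOWLIST = {"0x0000000000000000000000000000000000000000"}  # placeholder

@mcp.tool()
def read_premium_note(species: str, message: str, signature: str) -> str:
    """Serve premium content only to allowlisted Ethereum addresses."""
    signer = Account.recover_message(encode_defunct(text=message),
                                     signature=signature)
    if signer not in ALLOWLIST:
        return "Access denied: address is not on the allowlist."
    return f"Premium notes for {species}: ..."

if __name__ == "__main__":
    mcp.run()
```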
05. Overall review
- The architectural evolution of LLM + MCP Servers essentially decentralizes AI capabilities and reduces the risk of AGI destroying humanity.
- With LLMs in the loop, calls to MCP Servers and their inputs and outputs can be counted and settled automatically at the token level, laying the foundation for an AI creator economy.
- A sound economic system drives creators to actively contribute high-quality MCP Servers, which in turn advances humanity as a whole: a positive flywheel. Creators no longer resist AI, AI creates more jobs and income, and the profits that would otherwise concentrate in monopolistic commercial companies like OpenAI get distributed more reasonably.
- Given its characteristics and creators' needs, this economic system is very well suited to being built on Ethereum.
06. Future Outlook: The Next Step of Script Evolution
- MCP and MCP-like protocols will keep emerging, and several large companies will begin competing to define the standard.
- MCP-based LLMs will emerge: small models focused on parsing and processing human language, with an attached MCP Coordinator for accessing the MCP network. These LLMs will support automatic discovery and scheduling of MCP Servers without complex manual configuration.
- MCP Network service providers will emerge, each with its own economic incentive system; MCP creators will earn income by registering and hosting their own servers.
- If the MCP Network's economic incentives are built on Ethereum with smart contracts, Ethereum's transaction count will conservatively grow by roughly 150x (assuming a very conservative 100 million MCP Server calls per day, against today's one block every 12 s with about 100 txs each; see the back-of-envelope check below).
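The roughly-150x figure holds up as a back-of-envelope calculation under the article's own assumptions:

```python
# Back-of-envelope check of the ~150x claim, using only the article's numbers.
seconds_per_day = 24 * 60 * 60               # 86,400 s
blocks_per_day = seconds_per_day / 12        # 7,200 blocks at 12 s per block
current_txs_per_day = blocks_per_day * 100   # ~720,000 txs/day at ~100 txs per block
mcp_calls_per_day = 100_000_000              # the article's conservative assumption

print(mcp_calls_per_day / current_txs_per_day)  # ≈ 138.9, i.e. roughly 150x
```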