Author: Kevin, Researcher at BlockBooster
TLDR:
DeepSeek's emergence has broken the compute moat; compute optimization led by open-source models is becoming the new direction.
DeepSeek benefits the model and application layers up and down the industry, but hurts the compute protocols at the infrastructure layer.
DeepSeek's good news has inadvertently burst the last bubble in the Agent track, and DeFAI is the most likely place for new life to emerge.
The zero-sum game of project financing is likely coming to an end; a new model of community launch plus a small amount of VC funding may become the norm.
DeepSeek's impact will reverberate through the upstream and downstream of the AI industry this year. DeepSeek has shown that home consumer graphics cards can complete large-model training tasks that previously required fleets of high-end GPUs. The first moat around AI development, computing power, has begun to collapse. With algorithmic efficiency improving at roughly 68% per year while hardware performance climbs along Moore's Law, the valuation models entrenched over the past three years no longer apply. The next chapter of AI will be opened by open-source models.
Although Web3 AI protocols are completely different from Web2's, they will inevitably feel DeepSeek's influence. That influence will run through the upstream and downstream of Web3 AI, namely the infrastructure, middleware, model, and application layers, and give rise to new use cases.
Mapping the collaborative relationships among upstream and downstream protocols
Based on technical architecture, functional positioning, and actual use cases, I divide the ecosystem into an infrastructure layer, a middleware layer, a model layer, and an application layer, and map their dependencies:
Infrastructure Layer
The infrastructure layer provides decentralized underlying resources (compute, storage, L1s), including compute protocols such as Render, Akash, and io.net; storage protocols such as Arweave, Filecoin, and Storj; and L1s such as NEAR, Olas, and Fetch.ai.
Compute protocols support model training, inference, and framework execution; storage protocols hold training data, model parameters, and on-chain interaction records; L1s optimize data transmission and reduce latency through dedicated nodes.
Middleware Layer
The middleware layer bridges the infrastructure and upper-layer applications, providing development frameworks, data services, and privacy protection. Data labeling protocols include Grass, Masa, and Vana; development-framework protocols include Eliza, ARC, and Swarms; privacy-computing protocols include Phala, among others.
The data-service layer provides fuel for model training, the development frameworks rely on the compute and storage of the infrastructure layer, and the privacy-computing layer protects data during training and inference.
Model Layer
The model layer handles model development, training, and distribution, and includes open-source model training platforms such as Bittensor.
The model layer relies on the compute of the infrastructure layer and the data of the middleware layer; models are deployed on-chain through development frameworks; and model marketplaces deliver training results to the application layer.
Application Layer
The application layer consists of AI products for end users, including agents such as GOAT and AIXBT, and DeFAI protocols such as Griffain and Buzz.
The application layer calls pre-trained models from the model layer, relies on privacy computing from the middleware layer, and, for complex applications, requires real-time compute from the infrastructure layer.
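To keep these dependencies straight, here is a minimal sketch that encodes the four layers and the example protocols above as a small dependency graph. The layer groupings follow the article, but the data structure and the helper for walking dependencies are my own illustrative framing, not any project's actual API.

```python
# Illustrative only: the four-layer map described above as a small dependency graph.
# Layer groupings follow the article; the data structure is illustrative framing.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    examples: list[str]
    depends_on: list[str] = field(default_factory=list)

STACK = {
    "infrastructure": Layer("Infrastructure", ["Render", "Akash", "io.net", "Arweave", "Filecoin", "NEAR"]),
    "middleware": Layer("Middleware", ["Grass", "Masa", "Vana", "Eliza", "ARC", "Phala"],
                        depends_on=["infrastructure"]),
    "model": Layer("Model", ["Bittensor"],
                   depends_on=["infrastructure", "middleware"]),
    "application": Layer("Application", ["GOAT", "AIXBT", "Griffain", "Buzz"],
                         depends_on=["model", "middleware", "infrastructure"]),
}

def transitive_deps(key: str, seen: set[str] | None = None) -> set[str]:
    """Everything a layer ultimately rests on."""
    seen = set() if seen is None else seen
    for dep in STACK[key].depends_on:
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, seen)
    return seen

print(sorted(transitive_deps("application")))  # ['infrastructure', 'middleware', 'model']
```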
DeepSeek may have a negative impact on decentralized computing power
According to a sample survey, about 70% of Web3 AI projects actually call OpenAI or centralized cloud platforms, only 15% use decentralized GPUs (such as Bittensor subnet models), and the remaining 15% run hybrid architectures in which sensitive data is processed locally and general tasks are sent to the cloud.
The actual usage of decentralized compute protocols is far lower than expected and does not match their market valuations. There are three reasons: Web2 developers keep their existing toolchains when migrating to Web3; decentralized GPU platforms have not yet achieved a price advantage; and some projects dodge data-compliance reviews under the banner of "decentralization" while their actual compute still runs on centralized clouds.
AWS and GCP hold more than 90% of the AI compute market, while Akash's equivalent compute is only about 0.2% of AWS's. The moats of centralized cloud platforms include cluster management, RDMA high-speed networking, and elastic scaling. Decentralized cloud platforms offer Web3 variants of these technologies, but two defects cannot be engineered away: latency, since communication between distributed nodes is roughly six times slower than within centralized clouds, and toolchain fragmentation, since PyTorch and TensorFlow do not natively support decentralized scheduling.
DeepSeek cuts compute consumption by 50% through sparse training, and dynamic model pruning lets consumer-grade GPUs train models with tens of billions of parameters. Short-term market expectations for high-end GPU demand have been revised sharply downward, and the market potential of edge computing has been revalued. As shown in the figure above, before DeepSeek appeared, the vast majority of protocols and applications in the industry ran on platforms such as AWS; only a very small number of use cases were deployed on decentralized GPU networks, and those leaned on the latter's price advantage in consumer-grade compute while paying no attention to latency.
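For intuition, here is a minimal sketch of magnitude-based weight pruning using PyTorch's built-in pruning utilities. It is a generic illustration of the pruning idea on a consumer GPU, not DeepSeek's actual sparse-training recipe; the layer sizes and the 50% sparsity target are arbitrary choices for the example.

```python
# Illustrative only: generic magnitude pruning with torch.nn.utils.prune.
# This is NOT DeepSeek's method; layer sizes and sparsity level are arbitrary.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

device = "cuda" if torch.cuda.is_available() else "cpu"

# A small stand-in model; a real workload would be a transformer block.
model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
).to(device)

# Zero out the 50% of weights with the smallest magnitude in each Linear layer,
# mirroring the "halve the compute via sparsity" intuition in the text.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weight tensor

linear_weights = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
zeros = sum((w == 0).sum().item() for w in linear_weights)
total = sum(w.numel() for w in linear_weights)
print(f"weight sparsity: {zeros / total:.2%}")  # ~50.00%
```

Note that unstructured sparsity only turns into real speedups on hardware and kernels that exploit it; the sketch shows the mechanics, not the performance claim.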
This situation may be exacerbated further by DeepSeek. DeepSeek removes the constraints on long-tail developers, and low-cost, efficient inference models will spread at an unprecedented rate. In fact, the centralized cloud platforms mentioned above, along with many countries, have already begun deploying DeepSeek. The sharp drop in inference costs will give rise to a large number of front-end applications with enormous demand for consumer-grade GPUs. Facing this coming market, centralized cloud platforms will launch a new round of competition for users, not only against the leading platforms but also against countless smaller centralized clouds, and the most direct weapon is price cuts. It is foreseeable that 4090 prices on centralized platforms will fall, which is a disaster for Web3 compute platforms. When price is no longer the latter's only moat and the industry's compute platforms are forced to cut prices as well, the result is something io.net, Render, and Akash cannot bear: a price war would destroy their last remaining valuation ceiling, and the death spiral of declining revenue and user churn may push decentralized compute protocols to pivot in a new direction.
What DeepSeek specifically means for the industry's upstream and downstream protocols
As shown in the figure, I believe DeepSeek will affect the infrastructure layer, model layer, and application layer differently. On the positive side:
The application layer will benefit from the sharp reduction in inference costs: more applications can keep agents online around the clock and complete tasks in real time at low cost.
At the same time, DeepSeek's low model overhead allows DeFAI protocols to form more complex swarms, with thousands of agents serving a single use case, each with a fine-grained, clearly defined division of labor. This greatly improves the user experience and prevents user input from being incorrectly decomposed and executed by the model (a minimal sketch follows after the negative impacts below).
Application-layer developers can fine-tune models and feed DeFi-focused AI applications with prices, on-chain data and analytics, and protocol-governance data without paying high license fees.
Since DeepSeek's launch, the open-source model layer has proven its significance: opening high-end models to long-tail developers stimulates broad development enthusiasm.
The wall of compute built around high-end GPUs over the past three years has been completely torn down, giving developers more choices and setting the direction for open-source models. Going forward, AI models will compete on algorithms rather than compute, and that shift in belief will become the cornerstone of confidence for open-source model developers.
Subnets built specifically around DeepSeek will emerge one after another, model parameter counts will grow under the same compute budget, and more developers will join the open-source community.
On the negative side:
The inherent usage latency of compute protocols at the infrastructure layer is an objective fact that cannot be optimized away;
Moreover, a hybrid network mixing A100s and 4090s places higher demands on coordination algorithms, which is not a strength of decentralized platforms.
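To make the swarm idea from the application-layer point concrete, below is a minimal, framework-free sketch of how one user instruction might be decomposed into narrowly scoped agent tasks. The agent roles, function names, and routing rules are hypothetical illustrations rather than any specific protocol's design, and in practice the decomposition step would be produced by a fine-tuned model instead of hard-coded keywords.

```python
# Illustrative only: a toy router for an agent swarm. Agent roles, keywords,
# and routing logic are hypothetical, not any real protocol's design.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str                      # one narrow, clearly defined responsibility
    handle: Callable[[str], str]   # executes exactly one sub-task

def price_agent(task: str) -> str:
    return f"[price-agent] fetched quotes for: {task}"

def risk_agent(task: str) -> str:
    return f"[risk-agent] checked slippage and liquidity for: {task}"

def swap_agent(task: str) -> str:
    return f"[swap-agent] built swap transaction for: {task}"

SWARM = {
    "quote": Agent("price lookup", price_agent),
    "risk":  Agent("risk check", risk_agent),
    "swap":  Agent("swap construction", swap_agent),
}

def decompose(user_input: str) -> list[str]:
    # Stand-in for a fine-tuned model's planning step: split the instruction into
    # a pipeline where each step is small enough for a single specialized agent.
    return ["quote", "risk", "swap"] if "swap" in user_input.lower() else ["quote"]

def run(user_input: str) -> list[str]:
    return [SWARM[step].handle(user_input) for step in decompose(user_input)]

for result in run("Swap 1 ETH to USDC at the best available rate"):
    print(result)
```

The point of this shape is that each agent's scope stays narrow enough for cheap inference to keep all of them online at once, which is precisely the property the text argues DeepSeek makes affordable.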
DeepSeek bursts the last bubble in the Agent track, DeFAI may breed new life, and the industry's financing method will change
Agents are the industry's last hope for AI. DeepSeek's arrival lifted the compute constraint and painted the expectation of an application explosion, which should have been a huge positive for the Agent track; but because the industry is strongly correlated with US equities and Federal Reserve policy, the remaining bubble was punctured and the track's market cap fell to the floor.
In the wave of AI's integration with the industry, technological breakthroughs and market competition always advance hand in hand. The chain reaction triggered by the shock to NVIDIA's market cap is like a mirror, reflecting the deep dilemma of the industry's AI narrative: from on-chain Agents to DeFAI engines, the seemingly complete ecological map hides the harsh reality of weak technical infrastructure, hollow value logic, and capital dominance. The apparently prosperous on-chain ecosystem conceals chronic ailments: a flood of high-FDV tokens competing for limited liquidity, stale assets surviving on FOMO sentiment, and developers trapped in PVP involution that drains their capacity to innovate. When incremental capital and user growth hit a ceiling, the whole industry falls into the innovator's dilemma: desperate to break through the narrative, yet unable to shake off path dependence. This torn state is precisely the historic opportunity for AI Agents, not merely an upgrade of the technical toolbox but a reconstruction of the value-creation paradigm.
Over the past year, more and more teams in the industry have found that the traditional financing model is failing: the old routine of giving VCs a small allocation, tightly controlling the float, and waiting for an exchange listing to pump the price is no longer sustainable. Under the triple pressure of VCs tightening their purses, retail investors refusing to buy the top, and the high bar for listing on major exchanges, a playbook better adapted to the bear market is emerging: teaming up with top KOLs plus a handful of VCs, launching a large share of tokens to the community, and cold-starting at a low market cap.
Innovators such as Soon and Pump Fun are opening up new paths through "community launches": with the endorsement of top KOLs, 40%-60% of tokens are distributed directly to the community, projects launch at valuations as low as $10 million FDV, and they still raise millions of dollars. This model builds consensus FOMO through KOL influence, lets the team lock in profits up front, and trades high liquidity for market depth. Although it gives up short-term control of the float, it allows tokens to be repurchased at low prices in a bear market through compliant market making. In essence, this is a paradigm shift in the power structure: from a VC-dominated game of pass-the-parcel (institutions buy in, the exchange sells, retail pays) to a transparent game priced by community consensus, with the project team and the community forming a new symbiosis around the liquidity premium. As the industry enters a cycle of transparency, projects clinging to the old control logic may become afterimages of a bygone era amid the migration of power.
The market's short-term pain only proves the irreversibility of the long-term technological trend. When AI Agents cut the cost of on-chain interaction by two orders of magnitude, and when adaptive models keep improving the capital efficiency of DeFi protocols, the industry can expect the long-awaited Massive Adoption. This shift does not rest on concept hype or capital acceleration; it is rooted in technology penetrating real needs, just as the electricity revolution never stalled because light-bulb companies went bankrupt. Agents will ultimately become a genuine golden track once the bubble bursts, and DeFAI may be the fertile ground for that new life. When low-cost inference becomes routine, we may soon see use cases where hundreds of Agents combine into a swarm. At the same compute budget, substantially larger parameter counts mean Agents in the open-source era can be fine-tuned more thoroughly, so that even complex user instructions can be split into task pipelines in which each step is fully executable by a single Agent. With each Agent optimizing its own on-chain operations, the activity and liquidity of DeFi protocols as a whole may rise. More complex DeFi products led by DeFAI will appear, and that is where new opportunities emerge after the last round of bubbles burst.