Mira's public testnet launched yesterday. The project aims to build a trust layer for AI. Why does AI need a trust layer, and how does Mira try to provide one?

When people discuss AI, they tend to focus on how powerful it is; far less attention goes to the fact that AI also "hallucinates" and carries biases. What is an AI "hallucination"? Simply put, AI sometimes makes things up and delivers nonsense with a straight face. For example, if you ask an AI why the moon is pink, it may offer a series of plausible-sounding but entirely fabricated explanations.

AI's hallucinations and biases stem partly from today's technical approach. Generative AI produces output by predicting the "most likely" next token, which yields coherent, plausible-sounding text but gives no guarantee of truthfulness. On top of that, the training data itself contains errors, biases, and even fiction, all of which shape the output. In other words, these models learn patterns of human language rather than the facts themselves.
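To make that mechanism concrete, here is a minimal, purely illustrative Python sketch. The tokens and probabilities are invented; the point is only that a generative model selects continuations by likelihood, so a fluent but false answer can easily win out.

```python
# Purely illustrative: a language model picks the next token by probability,
# not by truth. The tokens and probabilities below are invented.
import random

def sample_next_token(distribution: dict[str, float]) -> str:
    """Sample a token in proportion to the model's predicted probability."""
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution for the prompt "The moon is pink because ..."
next_token_probs = {
    "of": 0.40,              # fluent continuation leading to a made-up explanation
    "atmospheric": 0.35,     # plausible-sounding, still unverified
    "light": 0.20,
    "[I don't know]": 0.05,  # refusals are rarely the most likely continuation
}

print(sample_next_token(next_token_probs))
# Whichever token wins, the choice optimizes fluency, not factuality.
```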

In short, the probability-based generation mechanism combined with data-driven training makes AI hallucination all but inevitable.

If such biased or hallucinated output appears only in casual knowledge or entertainment content, the consequences are limited for now. But in rigorous, high-stakes fields such as medicine, law, aviation, and finance, it can do serious harm. How to mitigate AI hallucinations and biases is therefore one of the core problems in AI's evolution. Some approaches use retrieval-augmented generation (combining the model with real-time databases so that verified facts are prioritized in the output); others introduce human feedback, correcting model errors through manual labeling and supervision.

The Mira project is another attempt to tackle AI bias and hallucination. In other words, Mira tries to build a trust layer for AI, one that reduces bias and hallucination and improves reliability. So, at the level of the overall framework, how does Mira do this?

The core of Mira's approach is to verify AI output through the consensus of multiple AI models. Mira itself is a verification network: it checks the reliability of AI output by leveraging agreement across multiple models. Just as important, that verification is carried out through decentralized consensus.

The key to the Mira network is therefore decentralized consensus verification, which is exactly what the crypto field is good at. At the same time, it exploits the synergy of multiple models, reducing bias and hallucination through a collective verification mode.

In terms of verification architecture, the starting point is independently verifiable claims. The Mira protocol converts complex content into claims that can be verified independently, and node operators participate in verifying them. To keep operators honest, crypto-economic incentives and penalties are applied, so that diverse AI models plus decentralized node operators together guarantee the reliability of the verification results.
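As a rough sketch of what verifying a single claim with multiple models could look like (the validator interface and the stub models below are assumptions; Mira does not publish its protocol at this level of detail):

```python
# Hedged sketch: multi-model claim verification via simple majority vote.
# The Validator interface and stub models are assumptions, not Mira's API.
from collections import Counter
from typing import Callable

# A "validator model" is abstracted as: claim text -> True (valid) / False (invalid).
Validator = Callable[[str], bool]

def verify_claim(claim: str, validators: list[Validator]) -> bool:
    """Each independent model judges the claim; the majority verdict wins."""
    votes = Counter(v(claim) for v in validators)
    return votes[True] > votes[False]

# Stub validators standing in for different underlying AI models.
validators = [
    lambda claim: "pink" not in claim,  # model A rejects the pink-moon claim
    lambda claim: "pink" not in claim,  # model B agrees
    lambda claim: True,                 # model C is mistaken (or biased)
]

print(verify_claim("The moon is pink.", validators))  # False: 2 of 3 reject it
```

Using several heterogeneous models means a single model's idiosyncratic bias is less likely to determine the verdict.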

Mira's network architecture consists of content conversion, distributed verification, and a consensus mechanism, which together make verification reliable. Content conversion is a key part. The network first decomposes candidate content (generally submitted by a customer) into distinct verifiable claims (ensuring the models can interpret them in the same context). The system distributes these claims to nodes, which verify them to determine each claim's validity; the results are then aggregated into a consensus and returned to the customer. In addition, to protect customer privacy, the candidate content is decomposed into claim pairs that are handed to different nodes through random sharding, preventing information leakage during verification.
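Putting the pieces together, here is an illustrative end-to-end pipeline under the flow just described: decompose, randomly shard, verify per node, aggregate a consensus. Every name here (decompose, Node, run_verification) is hypothetical, and the sentence-level decomposition is a toy stand-in for Mira's actual conversion step.

```python
# Illustrative pipeline: decompose -> shard randomly -> verify -> consensus.
# All names are hypothetical; this is not Mira's actual API.
import random
from collections import Counter

def decompose(candidate_content: str) -> list[str]:
    """Split candidate content into independently verifiable claims.
    (Toy version: one claim per sentence.)"""
    return [s.strip() for s in candidate_content.split(".") if s.strip()]

class Node:
    """Stand-in for a node operator running a validator model."""
    def verify(self, claim: str) -> bool:
        return "pink" not in claim  # placeholder for a real model's judgment

def run_verification(content: str, nodes: list[Node], nodes_per_claim: int = 3) -> dict[str, bool]:
    results = {}
    for claim in decompose(content):
        # Random sharding: each claim goes to a random subset of nodes,
        # so no single node sees the full content (the privacy property above).
        assigned = random.sample(nodes, nodes_per_claim)
        votes = Counter(node.verify(claim) for node in assigned)
        results[claim] = votes[True] > votes[False]
    return results  # the consensus verdicts returned to the customer

nodes = [Node() for _ in range(7)]
content = "The moon orbits the Earth. The moon is pink."
print(run_verification(content, nodes))
# {'The moon orbits the Earth': True, 'The moon is pink': False}
```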

Node operators run validator models, process claims, and submit verification results. Why would they participate? Because they can earn rewards, and those rewards derive from the value created for customers. Mira's purpose is to lower AI's error rate (hallucinations and biases); if it succeeds, it generates real value, for example by reducing errors in medicine, law, aviation, and finance, and customers are willing to pay for that. Of course, the sustainability and scale of that payment depend on whether the Mira network keeps delivering value to customers (a lower AI error rate). Meanwhile, to deter opportunistic nodes that answer randomly, nodes that persistently deviate from consensus have their staked tokens slashed. In short, an economic game keeps node operators verifying honestly.
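To illustrate that economic game, here is a hedged sketch of reward and slashing settlement. The stake sizes, reward per claim, and slash rate are all invented for illustration; Mira has not published these parameters.

```python
# Sketch of the incentive game: nodes earn rewards for matching consensus
# and are slashed for deviating. All parameters below are invented.
REWARD = 1.0       # hypothetical reward per claim for matching consensus
SLASH_RATE = 0.10  # hypothetical fraction of stake cut for deviating

def settle(stakes: dict[str, float], votes: dict[str, bool], consensus: bool) -> dict[str, float]:
    """Adjust each node's stake based on whether its vote matched consensus."""
    for node, vote in votes.items():
        if vote == consensus:
            stakes[node] += REWARD
        else:
            stakes[node] -= stakes[node] * SLASH_RATE  # slashing deters random answers
    return stakes

stakes = {"node-A": 100.0, "node-B": 100.0, "node-C": 100.0}
votes = {"node-A": False, "node-B": False, "node-C": True}  # node-C deviates
print(settle(stakes, votes, consensus=False))
# {'node-A': 101.0, 'node-B': 101.0, 'node-C': 90.0}
```

Under such a scheme, honest verification is the profitable long-run strategy, which is the point of the mechanism described above.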

Overall, Mira offers a new path to AI reliability: a decentralized consensus verification network built on multiple AI models that makes customers' AI services more reliable, reduces bias and hallucination, and meets the demand for higher accuracy and precision. On top of the value it delivers to customers, it generates rewards for participants in the Mira network. To sum it up in one sentence: Mira is trying to build a trust layer for AI, and that would help drive the deeper adoption of AI.

Currently, the AI agent frameworks Mira works with include ai16z, ARC, and others. Mira's public testnet launched yesterday. Users can join the public testnet through Klok, an LLM chat application based on Mira. With Klok, you can experience verified AI output (and compare it against unverified output) and earn Mira points. What those points will be used for has not yet been disclosed.