By William M. Peaster, Bankless

Compiled by: Baishui, Golden Finance

Back in 2014, Ethereum founder Vitalik Buterin started thinking about autonomous agents and DAOs, when it was still a distant dream for most of the world.

In his early vision, as he described it in “DAOs, DACs, DAs, etc.: An Incomplete Guide to Terminology,” DAOs were decentralized entities with “automation at the center and humans at the edges” — organizations that relied on code rather than a hierarchy of humans to maintain efficiency and transparency.

AI-driven DAOs are on the rise: 5 challenges to watch out for

Ten years later, Variant’s Jesse Walden has just published “DAO 2.0,” reflecting on how DAOs have evolved in practice since Vitalik’s early writings.

In short, Walden noted that the initial wave of DAOs often resembled cooperatives — digital organizations that were human-centric and did not emphasize automation.

Nonetheless, Walden continues to argue that new advances in AI — particularly large language models (LLMs) and generative models — now hold promise for better achieving the decentralized autonomy that Vitalik envisioned a decade ago.

However, as DAO experiments increasingly adopt AI agents, we will face new implications and questions here. Below, let’s look at five key areas that DAOs must grapple with when incorporating AI into their approaches.

Transforming governance

In Vitalik’s original framework, DAOs were designed to reduce reliance on hierarchical human decision-making by encoding governance rules on-chain.

Initially, humans were still “on the margins” but still essential for complex judgments. In the DAO 2.0 world described by Walden, humans still linger on the margins—providing capital and strategic direction—but the center of power is gradually no longer human.

This dynamic will redefine the governance of many DAOs. We will still see human coalitions negotiating and voting on outcomes, but various operational decisions will be increasingly guided by the learning patterns of AI models. How to achieve this balance is currently an open question and design space.

Minimize model misalignment

Early visions of DAOs aimed to counteract human bias, corruption, and inefficiency through transparent, immutable code.

Now, a key challenge is to move away from unreliable human decision-making and toward ensuring that AI agents are “aligned” with the DAO’s goals. The main vulnerability here is no longer human collusion, but model misalignment: the risk that an AI-driven DAO optimizes for metrics or behaviors that deviate from human-intended outcomes.

In the DAO 2.0 paradigm, this consensus problem (originally a philosophical question in AI safety circles) becomes a practical problem of economics and governance.

This may not be a top-of-mind issue for DAOs experimenting with basic AI tools today, but as AI models become more advanced and deeply integrated into decentralized governance structures, expect it to become a major area of scrutiny and refinement.

New attack surface

Consider the recent Freysa competition, where the human p0pular.eth tricked the AI agent Freysa into misunderstanding its “approveTransfer” function, winning a $47,000 Ether prize.

Although Freysa had built-in safeguards — explicit instructions to never send prizes — human ingenuity eventually outsmarted the model, exploiting the interplay between prompts and the code’s logic until the AI released the funds.

This early competition example highlights that as DAOs incorporate more complex AI models, they will also inherit new attack surfaces. Just as Vitalik worried about a DO or DAO being compromised by humans, now DAO 2.0 must consider adversarial inputs to AI training data or engineering attacks on the fly.

Manipulating the LLM’s reasoning process, feeding it misleading on-chain data, or subtly influencing its parameters could become a new form of “governance takeover,” where the battlefield shifts from human majority voting attacks to more subtle and sophisticated forms of AI exploitation.

New centralization issues

The evolution of DAO 2.0 shifts significant power to those who create, train, and control the AI models underlying a particular DAO, a dynamic that could lead to new forms of centralized choke points.

Of course, training and maintaining advanced AI models requires specialized expertise and infrastructure, so in some organizations in the future we will see direction ostensibly in the hands of the community, but in reality in the hands of skilled experts.

This is understandable. But going forward, it will be interesting to track how DAOs for AI experiments cope with issues such as model updates, parameter tuning, and hardware configuration.

Strategic and strategic operations roles and community support

Walden’s “strategy vs. operations” distinction suggests a long-term balance: AI could handle day-to-day DAO tasks, while humans would provide strategic direction.

However, as AI models become more advanced, they may also gradually intrude upon the strategic layer of the DAO. Over time, the role of “marginalists” may shrink further.

This raises the question: what will happen to the next wave of AI-driven DAOs, where humans, in many cases, may simply provide funding and watch from the sidelines?

In this paradigm, will humans largely become interchangeable investors with minimal influence, moving away from an approach that co-owns brands to something more akin to autonomous economic machines managed by AI?

I think we will see more of a trend towards organizational models in the DAO scene where humans simply play the role of passive shareholders rather than active managers. However, as meaningful human decision making becomes increasingly rare and it becomes easier to provide on-chain capital elsewhere, maintaining community support may become an ongoing challenge over time.

How DAOs Can Stay Proactive

The good news is that all of these challenges can be addressed proactively. For example:

  • In terms of governance — DAOs could experiment with governance mechanisms that reserve certain high-impact decisions for human voters or rotating committees of human experts.
  • Regarding inconsistency — By treating consistency checks as a recurring operational expense (like security audits), DAOs can ensure that AI agents’ fidelity to public goals is not a one-time issue, but an ongoing responsibility.
  • Regarding centralization — DAOs can invest in broader skill building among community members. Over time, this will mitigate the risk of a small number of “AI wizards” controlling governance and promote a decentralized approach to technology management.
  • On support — As humans become more passive stakeholders in DAOs, these organizations can double down on storytelling, shared mission, and community rituals to transcend the immediate logic of capital allocation and maintain long-term support.

Whatever happens next, it’s clear the future here is bright.

Consider how Vitalik recently launched Deep Funding, which is not a DAO effort, but rather aims to use artificial intelligence and human judges to pioneer a new funding mechanism for Ethereum open source development.

This is just one new experiment, but it highlights a broader trend: the intersection of AI and decentralized collaboration is accelerating. As new mechanisms arrive and mature, we can expect DAOs to increasingly adapt and expand on these AI ideas. These innovations will present unique challenges, so now is the time to start preparing.