In 2018, I wrote a short story – as part of a collection that formed the book Stories from 2045: AI and the future of work – called The Tao of DAO that imagined a world where decentralised autonomous organisations had replaced traditional companies. In that story, blockchain and AI converged to let anyone launch open projects with global contributors, paid fairly through reputation systems. Open-source versions of major platforms displaced corporate monopolies. Poverty was nearly eliminated. It was, I’ll admit, wildly optimistic.
Seven years later, none of that has happened. DAOs raised billions, governed poorly, got hacked, and mostly stalled. The dream of coordination without centralised control remained stubbornly out of reach. Smart contracts turned out to be too rigid, too brittle, and too stupid to replace the messy, contextual, adaptive work that human organisations actually do.
But here’s what I didn’t anticipate in 2018: the missing ingredient wasn’t better smart contracts. It was intelligence itself.
What follows is a thesis, not a case study. We are too early in the deployment of agentic AI to have definitive evidence that agent-augmented organisations outperform traditional ones. But I believe the convergence of AI agents, new organisational thinking, and lessons from the DAO experiment points toward a model with transformative potential – if we get the design right. This article makes the case for what that model could look like, where the foundations already exist, and where the hard problems remain unsolved.
The definition of intelligence I keep returning to comes from Robert Sternberg and William Salter: goal-directed adaptive behaviour. Three words, but the one that does all the heavy lifting is “adaptive.” Not the strongest. Not the fastest. Not the most efficient. Adaptive.
This is the same insight that sits at the heart of evolutionary biology. Darwin’s “survival of the fittest” – a phrase actually coined by Herbert Spencer – never meant strongest or smartest. It meant best adapted to the immediate, local environment. The species that thrives is the one that adjusts. The one that doesn’t is the one that dies, regardless of how powerful it once was.
If intelligence is goal-directed adaptive behaviour, then the most intelligent organisation is the one that is maximally adaptive in pursuing its goals. Which raises an uncomfortable question for most enterprises: how adaptive is your organisation, really?
When I speak to business leaders, I often ask them for their definition of innovation. Most struggle. The best definition I’ve encountered is deceptively simple: creativity that ships. Steve Jobs captured the spirit of this when he told the original Macintosh team that “real artists ship” – it’s not enough to have ideas; what matters is getting them to the point where they generate value.
For me, the most important word in “creativity that ships” isn’t “creativity” – it’s “that.” The word “that” represents the entire innovation process: the messy, friction-filled journey from idea to impact. And that process is where most organisations fail. Not because they lack creative people, but because bureaucracy, disconnected systems, organisational silos, and institutional inertia create friction at every turn.
The more friction you remove, the more ideas ship. The more ideas ship, the more adaptive you become. The more adaptive you become, the more intelligent your organisation is by Sternberg’s definition. Intelligence, innovation, and adaptiveness form a virtuous cycle – or, in most organisations, a vicious one.
AI is already removing friction. Agents draft documents, analyse data, automate workflows, and accelerate individual productivity. But these are point solutions – valuable, but incremental. The transformational opportunity is to connect these solutions together to create something far more powerful: a digital twin of the entire organisation.
A digital twin is a dynamic simulation model that mirrors how an organisation actually operates – its processes, decisions, resource allocations, and human behaviours. It lets you ask “what if?” at organisational scale. What I’m about to describe is aspirational – these are directions of travel, not finished products. None of these will materialise overnight; the data integration alone is a multi-year undertaking for most enterprises. But I believe there are three digital twins every organisation should be building toward if it wants to remain competitive in the next decade, and the organisations that start now will have a structural advantage that compounds over time.
The first is the front office or supply chain twin. This creates a simulation of the flow of goods and services across your entire value chain. Here’s a question I pose to retailers: if I run a marketing campaign that increases demand by 10%, can you tell me whether your suppliers will default on their commitments? Whether you have enough warehouse capacity? Enough distribution drivers? Enough retail floor space to fulfil the promise to the customer? Most organisations have disconnected supply chains and genuinely cannot answer these questions. The promise of AI agents is to wire these systems together into a living simulation where such scenarios can be modelled in minutes, not months. An agent swarm monitoring live data could adjust pricing, reroute logistics, or flag capacity constraints in real time – not at the next quarterly review.
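To make the "minutes, not months" claim concrete, here is a minimal sketch of the kind of what-if check a supply chain twin would make routine. Every figure, resource name, and the per-unit load model is invented for illustration; a real twin would run this against live, integrated data rather than hard-coded numbers.

```python
# Toy "what if demand rises 10%?" check against capacity constraints.
# All figures and resource names below are hypothetical.

def whatif_demand_check(baseline_units, uplift_pct, capacities):
    """Return projected volume and any resources that breach capacity."""
    projected = baseline_units * (100 + uplift_pct) // 100
    breaches = {}
    for resource, (per_unit_load, capacity) in capacities.items():
        load = projected * per_unit_load
        if load > capacity:
            breaches[resource] = round(load - capacity, 1)  # shortfall
    return projected, breaches

capacities = {
    "supplier_commitments": (1.0, 10_500),  # units suppliers can deliver
    "warehouse_slots":      (0.8, 9_000),   # storage slots per unit of flow
    "driver_hours":         (0.05, 620),    # delivery hours per unit
}

projected, breaches = whatif_demand_check(10_000, 10, capacities)
print(projected)  # 11000
print(breaches)   # {'supplier_commitments': 500.0}
```

The answer a retailer needs ("your suppliers default 500 units short; warehousing and drivers hold") falls out immediately once the constraints are wired together, which is exactly what most disconnected supply chains cannot do today.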
The second is the back office twin. Every organisation has back office processes – hiring, onboarding, offboarding, expenses, budgeting, compliance – and most of them are bureaucratic nightmares that actively hinder innovation. Consider some of the radical alternatives that already exist, and what they suggest about what’s possible.
W.L. Gore & Associates, the company behind Gore-Tex, has operated for over 65 years with a “lattice” structure: no job titles, no hierarchy, no predetermined communication channels. Their peer-based compensation system has every associate ranked by 20–30 colleagues, with pay following the contribution curve. They’ve been profitable every year since 1958 and have appeared on every Fortune “100 Best Companies to Work For” list since the ranking began.
Gore’s lattice is no fringe experiment, but it comes with important caveats. The model has worked in part because of the company’s specific culture, industry, and scale – Gore deliberately keeps facility sizes small to preserve the lattice dynamics. The positive outcomes are real and suggestive, but they developed under conditions that are difficult to replicate at scale through cultural effort alone.
This is precisely the gap AI agents could fill. Gore and similar organisations achieved radical transparency and peer governance through heroic and sustained cultural effort. AI agents have the potential to automate much of the coordination overhead – the constant assessment, information-sharing, and consensus-building – that previously made these models exhausting to sustain. The bureaucratic processes that once required entire departments could, in principle, be run end-to-end by specialised agent swarms: financial closing, regulatory compliance, vendor management, all orchestrated autonomously with humans providing strategic oversight and handling exceptions. Whether this potential is realised will depend on implementation quality and, critically, on verification – but the direction of travel is clear.
The third – and most radical – is the workforce digital twin. Most organisations allocate decision-making authority through fixed hierarchies. This is, bluntly, a crude mechanism. A hierarchy is a compression algorithm for trust: we give authority to people in senior positions because we lack the tools to assess who actually has the right expertise for each specific decision. But what if we didn’t have to compress?
At Satalia, the company I founded in 2008, we experimented with exactly this. We had no managers. Everyone chose their own work. People set their own salaries. AI optimised the allocation of people to projects – and there are more ways of allocating 60 employees to 60 projects than there are atoms in the observable universe. We used algorithmic approaches to ensure the right diverse group of experts swarmed around each opportunity, weighted by their relevant expertise, learning goals, and personal values.
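The allocation problem described above is a classic assignment problem. A toy sketch with entirely hypothetical fit scores: at 4×4 it can be brute-forced, but at 60×60 (60! is roughly 8×10^81 options) you need a proper optimiser such as the Hungarian algorithm or a mixed-integer solver.

```python
# Toy version of expertise-weighted people-to-project allocation.
# Brute force is fine for 4 people x 4 projects (4! = 24 assignments);
# real instances need an assignment solver, not enumeration.
from itertools import permutations

# fit[i][j]: how well person i fits project j (expertise, learning goals,
# and values rolled into one illustrative number)
fit = [
    [9, 2, 5, 1],
    [4, 8, 1, 3],
    [3, 7, 6, 2],
    [1, 4, 2, 8],
]

def best_assignment(fit):
    """Exhaustively find the assignment maximising total fit."""
    n = len(fit)
    best_score, best_perm = -1, None
    for perm in permutations(range(n)):  # perm[i] = project for person i
        score = sum(fit[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_score, best_perm

score, perm = best_assignment(fit)
print(score, perm)  # 31 (0, 1, 2, 3)
```

Notice that the optimum gives person 2 their second-best project so that person 1 can take the one they fit best; that kind of global trade-off is precisely what a local, manager-by-manager allocation process tends to miss.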
This is the direction a workforce digital twin points toward: understanding your people at a granular level and creating weighted, dynamic hierarchies where authority flows to expertise rather than position. Organisations like Buurtzorg – the Dutch healthcare provider with 10,000+ nurses operating in self-managing teams of 10–12, with only 2 directors and fewer than 50 back-office staff – offer suggestive evidence. Their overhead costs are 8% versus an industry average of 25%, and patient satisfaction is 30% higher than competitors. Haier, the Chinese appliance giant, eliminated 12,000 middle management positions and replaced them with 4,000+ microenterprises of 10–15 people, each with autonomous decision-making rights. Revenue now exceeds €47 billion.
These are impressive results, but honest analysis requires noting their context. Buurtzorg’s model works within a specific Dutch healthcare infrastructure and has had mixed results in international replication. Haier’s transformation was driven by Zhang Ruimin’s extraordinary personal authority – a decentralisation paradoxically imposed from the top in a Chinese corporate governance context where such authority was available. These models produced genuine outcomes, but they depended on conditions – exceptional founders, specific institutional contexts, favourable regulatory environments – that most organisations cannot simply reproduce. The question is whether AI agents can provide the coordination infrastructure that makes these approaches viable without requiring those exceptional preconditions. I believe they can, but this remains a thesis to be tested rather than a proven conclusion.
The workforce twin also points toward acceleration. Imagine being able to take an inexperienced graduate and meaningfully compress the journey to expert-level competence, because AI understands their learning style, their knowledge gaps, and the optimal sequence of experiences to close them. This is speculative – expertise in complex domains involves tacit knowledge and embodied experience that may not be fully compressible through any technology. But even partial progress toward this goal would be transformative. I’ll explore this dimension in a dedicated article, but the core insight is this: as we shift from Division of Labour to Division of Cognition, we need a workforce that can orchestrate, not just execute – and that demands a fundamentally different approach to talent development.
What these companies share is a vision of what I call the “liquid organisation” – a structure that can reshape itself in response to the demands of its environment, much as water takes the shape of its container.
Joost Minnaar and Pim de Morree, founders of Corporate Rebels, have visited over 150 pioneering organisations and identified eight trends that distinguish progressive companies from traditional ones. The shifts are stark: from hierarchical pyramids to networks of teams, from directive leadership to supportive leadership, from centralised to distributed decision-making, from secrecy to radical transparency, from job descriptions to talent and mastery. Frederic Laloux’s Reinventing Organizations maps a similar evolution, from command-and-control “Amber” organisations through competitive “Orange” to what he calls “Teal” – living systems characterised by self-management, wholeness, and evolutionary purpose.
The ideas are beautiful. But let’s be honest about their limitations. Zappos’ holacracy experiment – a bottom-up self-management system imposed top-down via CEO ultimatum – saw 18% of staff leave and was described as “the management equivalent of Dungeons and Dragons.” Valve’s flat structure, famously featuring desks on wheels, produced remarkable games but also informal power cliques, diversity problems, and decision paralysis on social issues. Morning Star, the world’s largest tomato processor, runs on pure self-management with zero managers and nearly $1 billion in revenue – but roughly half of experienced external hires leave within two years.
The pattern is clear: radical organisational models can work brilliantly in specific contexts but hit scaling limits. The coordination overhead of genuine self-management – the constant negotiation, the consensus-building, the information-sharing – climbs steeply as organisations grow, because the number of relationships that must be maintained grows quadratically with headcount. What these models need is a new kind of infrastructure. And this is where the most interesting convergence begins.
This is precisely what Decentralised Autonomous Organisations were supposed to provide. The original DAO promise was intoxicating: smart contracts replacing management, token-based democratic governance, censorship-resistant global coordination. But smart contracts proved too rigid for the nuance that real governance demands, voter turnout across DAOs averages below 20%, and in ten major DAOs, 1% of token holders control 90% of votes. Of 30,000 DAOs analysed, 53% are inactive. Vitalik Buterin himself observed that when people have to make decisions every week, participation starts strong but inevitably decays.
The problem was philosophical as much as technical. DAOs tried to encode human coordination into deterministic code. But coordination isn’t deterministic – it’s adaptive, contextual, and requires judgment. Smart contracts can execute rules perfectly; they cannot exercise discretion. They can enforce a vote; they cannot assess whether the question was the right one to ask.
This is where AI agents change the equation.
The first empirical study of agentic AI in DAO governance, published by Capponi et al. in October 2025, found strong alignment between AI agent decisions and human outcomes across 3,000+ proposals from major protocols. This is an encouraging starting point, though it’s worth noting that alignment with past human decisions measures consistency, not necessarily quality – if existing governance was shaped by low participation and whale-dominated voting, then faithfully replicating those patterns isn’t automatically progress. What matters is whether AI agents can improve governance quality over time, and that remains to be demonstrated.
That said, the potential is significant. AI agents could address the problems that crippled DAOs in three ways. They address voter apathy by enabling token holders to delegate to AI agents programmed with specific strategies, ensuring every proposal receives informed assessment. They reduce decision fatigue by handling routine decisions – treasury operations, parameter adjustments, compliance checks – while escalating only strategic decisions to human stakeholders. And most importantly, AI agents are adaptive rather than deterministic. Unlike smart contracts, they can interpret context, process natural language, learn from outcomes, and exercise something approximating judgment.
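One way to picture the delegation mechanism: a hypothetical sketch in which a human sets a mandate (strategy weights plus a confidence floor) and an agent votes on routine proposals, escalating anything it is unsure about. The scoring is deliberately trivial, and none of the names here reflect real DAO tooling.

```python
# Hypothetical sketch of vote delegation with escalation. The human sets
# the mandate; the agent votes only when its signal is strong enough.
from dataclasses import dataclass

@dataclass
class Mandate:
    weights: dict            # strategy weights set by the human delegator
    confidence_floor: float  # below this magnitude, escalate to the human

def agent_vote(proposal_features, mandate):
    """Weighted score: sign gives direction, magnitude gives confidence."""
    score = sum(mandate.weights.get(k, 0.0) * v
                for k, v in proposal_features.items())
    if abs(score) < mandate.confidence_floor:
        return "escalate"
    return "for" if score > 0 else "against"

mandate = Mandate(
    weights={"treasury_risk": -0.6, "protocol_growth": 0.5},
    confidence_floor=0.2,
)
print(agent_vote({"treasury_risk": 0.9, "protocol_growth": 0.2}, mandate))  # against
print(agent_vote({"treasury_risk": 0.1, "protocol_growth": 0.3}, mandate))  # escalate
```

The design point is the `confidence_floor`: it is the dial the human keeps hold of, and recalibrating it (and the weights) is the ongoing engagement that the next section argues cannot be delegated away.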
But let’s not romanticise this. Delegating your vote to an agent and walking away is just a more sophisticated form of the disengagement that killed DAOs in the first place. The real shift isn’t from “human votes” to “agent votes” – it’s from humans making every micro-decision to humans setting strategy and periodically recalibrating the agents that execute it. The human role in an agent-augmented DAO is more like a portfolio manager than a voter: you define the investment thesis, review performance, and adjust the mandate – you don’t trade every position yourself.
The honest question is whether humans will actually stay engaged in the recalibration role. The participation decay that Buterin identified – people showing up enthusiastically at first and then drifting away – is a human motivation problem, not a technical one. Adding an abstraction layer between humans and decisions might reduce friction, but it might also make disengagement easier. The moment humans stop actively shaping agent strategies, you’ve just built a faster path to the same governance decay. Solving this will require thoughtful incentive design and transparency mechanisms, not just better technology.
Buterin’s January 2026 framework calls explicitly for a DAO renaissance where AI augments – but doesn’t replace – human judgment. ai16z, launched in late 2024, became the first DAO led by an autonomous AI agent, reaching a $2.6 billion market cap. Autonolas created “Governatooorr,” an AI-enabled governance delegate. The hybrid human+AI DAO is no longer theoretical – it’s emerging, though it’s far too early to declare success.
But I want to push further than hybrid DAOs. What’s emerging is something more fundamental than bolting AI agents onto existing organisational structures. It’s an entirely new operating model – one I’d describe as the agentic decentralised organisation.
The Industrial Revolution gave us the Division of Labour: break complex work into simple, repeatable tasks and assign each to a specialist. This logic has shaped organisational design for 250 years. The pyramid, the functional silo, the assembly line, the outsourced call centre – all are expressions of the same principle: decompose and allocate human effort.
What AI agents enable is something categorically different: a Division of Cognition. Instead of dividing labour across people, you divide cognitive work across humans and AI agents according to their respective strengths. AI agents handle complex, high-volume cognitive workflows – analysing market data, generating creative variants, running financial models, managing compliance checks, orchestrating customer journeys. Humans provide strategic oversight, ethical judgment, creative direction, and exception handling. The human role shifts from “doer” to “orchestrator” – from performing tasks to supervising, directing, and curating the output of agent swarms.
This changes the fundamental unit of the organisation. The traditional building block is the department – marketing, finance, operations – each a functional silo with its own hierarchy. The agentic building block is what I’d call the “cell”: a small, multi-disciplinary team of perhaps 2–5 people who act as strategic orchestrators managing 50–100+ specialised AI agents that run end-to-end business processes autonomously.
These cells aren’t organised by function. They’re organised by mission. Not “the marketing department” but “the team reducing cart abandonment.” Not “the finance team” but “the cell optimising working capital across the supply chain.” Each cell has the autonomy to make real-time decisions – deploying agents, adjusting strategies, launching experiments – without waiting for approval from three layers of management. The traditional pyramid flattens into a networked, cell-based model where authority is distributed, speed is the default, and the organisation can reconfigure itself around new opportunities as fluidly as water finding a new path.
The implications, if this model proves viable, are profound. It could decouple cost from growth, since agents do the scaling. It could enable hyper-accelerated innovation, with small teams running rapid experiments through agent swarms at a pace traditional organisations cannot match. And it could create genuine real-time adaptability, with agents monitoring live data and adjusting pricing, logistics, or risk flags continuously rather than quarterly.
I should flag that the Division of Cognition raises hard questions I’m not going to fully resolve here – they deserve dedicated treatment in a follow-up piece. Who decides which cognitive tasks go to agents and which stay with humans, and how does that boundary evolve as agents become more capable? In the cell model, who is accountable when agents make consequential errors that no human specifically reviewed? Is there a risk that “orchestrator” gradually becomes a euphemism for “bystander” as the scope of human judgment narrows? These are not objections to the model – they’re design challenges that any serious implementation will need to address.
But – and this is critical – high autonomy without governance is anarchy. Agentic organisations require a fundamentally different approach to control. Slow, manual compliance checks and annual audits cannot govern systems that make thousands of decisions per hour. Instead, governance must be embedded directly into the AI workflows themselves.
This means dedicated “Control Agents” – specialised agents that act as automated, continuous auditors, monitoring every decision for compliance, fairness, and alignment with organisational values in real time. To make this concrete: imagine a marketing cell deploying 80 agents to generate and distribute campaign content across 30 markets. A Control Agent sits alongside them, scoring every piece of outbound copy against regulatory requirements, brand guidelines, and cultural sensitivity thresholds before it ships. It doesn’t just flag violations after the fact – it gates the workflow, preventing non-compliant content from ever reaching a customer. When edge cases arise that fall outside its confidence threshold, it escalates to a human orchestrator who makes the judgment call and, crucially, feeds that decision back into the system so the Control Agent learns.
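A minimal sketch of the gating logic just described, with invented thresholds and item names: ship when confidently compliant, block when confidently not, and escalate the uncertain band to a human orchestrator whose ruling is logged as feedback.

```python
# Illustrative Control Agent gate. Thresholds, scores, and item ids are
# all invented; a real deployment would score copy against regulatory,
# brand, and cultural-sensitivity checks rather than take a number in.

SHIP_ABOVE, BLOCK_BELOW = 0.8, 0.4   # the band in between escalates

def control_gate(compliance_score):
    """Map a compliance confidence score to a gating decision."""
    if compliance_score >= SHIP_ABOVE:
        return "ship"
    if compliance_score < BLOCK_BELOW:
        return "block"
    return "escalate"

feedback_log = []   # human rulings the Control Agent later learns from

def handle(item_id, compliance_score, human_ruling=None):
    """Gate an item; on escalation, apply and record the human's ruling."""
    decision = control_gate(compliance_score)
    if decision == "escalate" and human_ruling is not None:
        feedback_log.append((item_id, compliance_score, human_ruling))
        decision = human_ruling
    return decision

print(handle("ad-001", 0.93))                       # ship
print(handle("ad-002", 0.25))                       # block
print(handle("ad-003", 0.61, human_ruling="ship"))  # ship, via the human
```

The `feedback_log` is the crucial piece: every escalated judgment becomes a labelled example, which is how the gate improves rather than ossifying into the rigid blocker the next paragraph worries about.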
The obvious question is: who governs the governors? If smart contracts failed because they were too rigid, won’t Control Agents eventually become rigid blockers themselves? This is where the adaptiveness of AI agents is genuinely different from deterministic code. A smart contract either permits or blocks – there’s no nuance, no learning, no escalation path. A well-designed Control Agent operates on a spectrum of confidence, can flag uncertainty rather than just enforcing rules, and improves over time as it encounters new scenarios. It’s not perfect – no governance system is – but it’s categorically more capable of handling the messiness of real-world decisions than a static set of if-then rules.
The critical discipline is continuous verification: ensuring these agents are actually doing what they’re supposed to do, detecting drift, and maintaining alignment over time. This is an area of active development, and I believe it will become one of the defining challenges – and opportunities – in the agentic era.
Humans operate not “in the loop” but “above the loop” – setting strategy, defining boundaries, and intervening on high-stakes decisions and ethical questions that require human judgment. This is where the DAO parallel becomes vivid. A DAO’s smart contracts were supposed to be its governance layer – immutable, transparent, autonomous. They failed because they were too rigid. An agentic organisation’s Control Agents serve the same function but with the adaptiveness that smart contracts lacked. The agentic decentralised organisation is, in effect, a DAO that could actually work – not because it removes humans from the loop, but because it finds the right division of cognition between human judgment and machine execution.
If you’re a CEO reading this and thinking it all sounds very theoretical, here’s the practical framework. In a previous article, I argued that solving Shadow AI – the phenomenon of employees using AI tools without IT approval – requires three things. Let me reframe those same three priorities through the lens of innovation and adaptiveness.
First, enable AI agents at the edge. Shadow AI isn’t a crisis – it’s a signal that your employees are innovating faster than your governance can adapt. At WPP, we’ve seen employees build over 28,000 AI agents through our Agent Builder platform. Rather than restricting this, channel it – within sandboxed environments that protect IP and client data while giving people genuine room to experiment. This is how cells form organically – small teams discovering that a cluster of agents can automate an entire workflow, then taking ownership of the outcome. The most adaptive organisations will be those where innovation happens at the edges, not the centre.
Second, build centralised SuperAgent capability. Edge innovation is necessary but insufficient. You also need curated, verified expert agents deployed centrally – the institutional knowledge of your organisation encoded in AI. Our Agent Hub provides “Super Agents” in brand analytics, behavioural science, and creative strategy, arming 100,000+ employees with expertise that previously sat in specialist teams. This is both the beginning of a back office digital twin and the foundation for the Division of Cognition: making expert-level cognitive capability available to every cell in the organisation, on demand.
Third, choose the right partners to enable an adaptive back office. No organisation will build all of this alone. The technology partnerships you choose now – for infrastructure, for agent verification, for embedded governance – will determine whether your back office becomes a source of adaptive capability or remains a drag on innovation. And agent verification matters enormously: in a world where agents are making thousands of decisions autonomously, you need rigorous, continuous assurance that they’re doing what they’re supposed to do.
In my 2018 story, I wrote that “ironically, it was the same technologies that gave the Horsemen dominance that eventually destroyed them.” I was imagining a world where AI and blockchain together dismantled corporate monopolies and enabled truly decentralised coordination.
I stand by the vision, even if the timeline was optimistic. What I underestimated was how long it would take for the right kind of intelligence to emerge. Smart contracts gave us deterministic execution without judgment. AI agents give us adaptive behaviour directed toward goals – which, by Sternberg’s definition, is intelligence itself.
The liquid organisation, the Teal paradigm, the DAO revolution – these were all attempts to solve the same fundamental problem: how to coordinate human effort without sacrificing the adaptiveness that intelligence requires. Each approach worked in specific contexts but lacked the connective tissue to scale. Gore needed heroic culture. Buurtzorg needed extraordinary founders and a supportive national infrastructure. DAOs needed humans to show up and vote. Haier needed a CEO with near-absolute authority to impose decentralisation from the top.
The agentic decentralised organisation doesn’t rely on heroism or perfect participation – it distributes cognition across humans and agents in a way that has the potential to be scalable, governable, and genuinely adaptive. Whether it fulfils that potential depends on how seriously we take the design challenges: verification, accountability, the governance of autonomous agents, and the human motivation problem that technology alone cannot solve.
The DAO dream didn’t die in 2016. It was waiting for the right kind of intelligence to bring it to life. And now that intelligence is arriving – not as a single superintelligent system, but as swarms of specialised agents that could handle the coordination overhead, embed governance into every decision, and enable small teams of orchestrators to operate with the speed and scale that were previously the exclusive advantage of corporate giants.
The most intelligent organisation of the next decade won’t be the one with the most AI. It will be the one that is most adaptive – the one where creativity ships fastest, where the right expertise swarms around every opportunity, and where the Division of Cognition between humans and agents becomes not a threat to jobs but a catalyst for the most innovative era in organisational history.
Real artists ship. The question is whether your organisation is structured to let them.
Daniel Hulme is Chief AI Officer at WPP and founder of Conscium, an AI safety company focused on machine consciousness research. He founded Satalia in 2008 and co-founded Faculty AI. His 2018 article “2048 – Tao of DAO” is available on Medium.