<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Conscium Ltd blog</title>
    <link>https://verifyax.conscium.com/blog</link>
    <description />
    <language>en</language>
    <pubDate>Mon, 11 May 2026 16:24:05 GMT</pubDate>
    <dc:date>2026-05-11T16:24:05Z</dc:date>
    <dc:language>en</dc:language>
    <item>
      <title>There’s No Such Thing as AI Ethics</title>
      <link>https://verifyax.conscium.com/blog/no-such-thing-as-ai-ethics</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/no-such-thing-as-ai-ethics" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/AI%20Ethics.jpg" alt="AI Ethics" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Over the past few years, something curious has happened. A new professional class has emerged &amp;nbsp;- &amp;nbsp;the AI Ethicist. LinkedIn profiles have been updated, consultancies rebranded, and conference panels filled with people who, seemingly overnight, became experts in the ethical implications of artificial intelligence. The growth has been dramatic, and it deserves scrutiny.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Over the past few years, something curious has happened. A new professional class has emerged &amp;nbsp;- &amp;nbsp;the AI Ethicist. LinkedIn profiles have been updated, consultancies rebranded, and conference panels filled with people who, seemingly overnight, became experts in the ethical implications of artificial intelligence. The growth has been dramatic, and it deserves scrutiny.&lt;/p&gt;  
&lt;p&gt;Not because ethics don’t matter &amp;nbsp;- &amp;nbsp;they matter enormously. But because the term “AI Ethics” has become a catch-all that obscures important distinctions: between genuine philosophical questions, normative choices about fairness and justice, and what are, in many cases, engineering and safety problems. That conflation is doing real damage to all three.&lt;/p&gt; 
&lt;h2&gt;What is ethics, actually?&lt;/h2&gt; 
&lt;p&gt;Ethics, broadly, is the study of right and wrong &amp;nbsp;- &amp;nbsp;a discipline concerned with moral principles, human conduct, and the frameworks we use to evaluate action and its consequences. It’s a field with millennia of intellectual heritage, from Aristotle’s virtue ethics to Kant’s categorical imperative to the utilitarian tradition of Bentham and Mill, through to contemporary applied ethics in medicine, law, and business.&lt;/p&gt; 
&lt;p&gt;Different ethical traditions emphasise different things. Kantian ethics focuses on intent &amp;nbsp;- &amp;nbsp;why a moral agent chooses to act in a particular way. Consequentialism focuses on outcomes &amp;nbsp;- &amp;nbsp;the effects of actions, regardless of the actor’s motivation. Virtue ethics asks about the character of agents and institutions. These distinctions matter, because the “AI Ethics” narrative tends to collapse all of ethics into a single question &amp;nbsp;- &amp;nbsp;usually intent &amp;nbsp;- &amp;nbsp;and then declares the whole field irrelevant because AI systems don’t have any.&lt;/p&gt; 
&lt;p&gt;AI systems don’t have intent. They don’t choose, they optimise. This means that questions about the moral agency of AI systems are indeed misplaced &amp;nbsp;- &amp;nbsp;at least for now. But it does not follow that the problems AI creates are not ethical problems. The outcomes AI produces, the fairness of its distributions, and the systems of accountability surrounding its deployment all remain genuinely ethical questions. They are questions about human ethics, applied to a powerful new class of tools.&lt;/p&gt; 
&lt;h2&gt;Bias is an engineering problem &amp;nbsp;- &amp;nbsp;but defining it is not&lt;/h2&gt; 
&lt;p&gt;The most commonly cited example of an “AI ethics” issue is bias &amp;nbsp;- &amp;nbsp;a hiring algorithm that discriminates, a facial recognition system that performs poorly on certain demographics, a language model that produces stereotyped outputs. These are serious problems. And the detection and mitigation of bias is indeed an engineering and safety problem. The algorithm didn’t intend to discriminate. It found statistical patterns in data that reflected historical biases, and it optimised accordingly. Better data, better testing, and better engineering are essential parts of the fix.&lt;/p&gt; 
&lt;p&gt;But engineering alone cannot tell you what counts as unacceptable bias, or which fairness metric to use. Research in algorithmic fairness has demonstrated that common definitions of fairness &amp;nbsp;- &amp;nbsp;such as equalised odds, demographic parity, and calibration &amp;nbsp;- &amp;nbsp;are mathematically incompatible in most real-world settings. Choosing between them is an irreducibly normative decision. It requires reasoning about justice, values, and trade-offs that no amount of code review will resolve.&lt;/p&gt; 
&lt;p&gt;We already have well-established governance structures &amp;nbsp;- &amp;nbsp;regulatory compliance, risk management, audit functions &amp;nbsp;- &amp;nbsp;that exist to evaluate the decisions humans make. You don’t need to boot up a whole new ethics committee to address every AI challenge. But you do need your existing governance structures to be asking the right normative questions, not just the right engineering questions. And in many cases, those structures need significant adaptation to cope with the speed, opacity, and scale of AI-driven decisions.&lt;/p&gt; 
&lt;h2&gt;The trolley problem is misunderstood&lt;/h2&gt; 
&lt;p&gt;People love to discuss the trolley problem. Should you throw a switch to divert a runaway train from the track where it will kill five people, to one where it will kill just one person? Or, replacing the switch with a large man on a bridge, should you throw that man onto the track to save the five?&lt;/p&gt; 
&lt;p&gt;People also love to invoke the trolley problem when discussing AI ethics. Should the autonomous vehicle swerve if doing so would save five children but kill one elderly pedestrian? Should the algorithm prioritise one patient over another? But this framing misses the actual insight of the trolley problem.&lt;/p&gt; 
&lt;p&gt;The philosophical depth of the trolley problem isn’t really about whether you should pull the lever &amp;nbsp;- &amp;nbsp;most people agree you should divert the trolley to save more lives. The real nuance is why people who would happily pull a lever refuse to push a person off a bridge, despite both scenarios producing identical outcomes. It reveals something about human moral psychology &amp;nbsp;- &amp;nbsp;about the role of physical agency, emotional proximity, and yes, intent in ethical reasoning. It’s a problem about the human mind, not about the machine.&lt;/p&gt; 
&lt;p&gt;That said, the trolley problem has found one genuinely useful application in AI contexts &amp;nbsp;- &amp;nbsp;not as a design tool, but as a way of studying how people want machines to behave. The MIT Moral Machine project used trolley-style dilemmas to map cross-cultural variation in moral intuitions about autonomous vehicles. This doesn’t resolve the engineering question, but it does illuminate the normative landscape that engineers are operating in.&lt;/p&gt; 
&lt;h2&gt;The ride-hailing algorithm&lt;/h2&gt; 
&lt;p&gt;Consider a more grounded example. A ride-hailing company deploys an AI pricing algorithm. The system discovers a correlation: people with low phone battery are more likely to accept higher prices. The immediate narrative writes itself &amp;nbsp;- &amp;nbsp;“the algorithm is exploiting a human vulnerability.” But let’s be precise. The algorithm hasn’t exploited anyone. It has no concept of exploitation. It found a statistical correlation and optimised for it.&lt;/p&gt; 
&lt;p&gt;The real questions are: first, can we actually see what the algorithm is doing? This is an engineering challenge &amp;nbsp;- &amp;nbsp;building explainable, auditable systems that surface these kinds of correlations. And second, once we see it, what do we choose to do about it? Perhaps we remove battery data from the model’s inputs. Or perhaps we do something more interesting &amp;nbsp;- &amp;nbsp;use the insight to &lt;em&gt;prioritise&lt;/em&gt; rides for people with low batteries, turning a potential vulnerability into a better customer experience. Both are legitimate choices, but they’re made by humans with intent, scrutinised through existing governance structures.&lt;/p&gt; 
&lt;p&gt;The algorithm has no moral agency. But it is not ethically neutral &amp;nbsp;- &amp;nbsp;it encodes the choices and assumptions of its designers, and it produces real consequences in the world. The locus of ethical responsibility remains with the humans who build and deploy it, but that doesn’t make the system itself irrelevant to ethical analysis. A redlining map doesn’t “intend” to discriminate either, but it would be odd to call it ethically inert.&lt;/p&gt; 
&lt;h2&gt;Five questions, not an ethics committee&lt;/h2&gt; 
&lt;p&gt;I’ve been building and deploying AI systems in production for over two decades. In that time, I’ve found that the challenges people label “AI ethics” are better addressed by asking five practical questions &amp;nbsp;- &amp;nbsp;none of which require a new discipline, but all of which require intellectual honesty about where engineering ends and normative reasoning begins.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="font-weight: bold; color: #ffffff;"&gt;First: is the intent appropriate?&lt;/span&gt; Before any algorithm is built, someone decides what it should optimise for. Someone chooses the objective function, selects the training data, defines the success metrics. These are human decisions, made with human intent, and they should be scrutinised with the same rigour we apply to any consequential business or policy decision. Existing governance structures are capable of interrogating intent. The question is whether organisations actually use them.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Second:&lt;/strong&gt; &lt;strong&gt;are your algorithms explainable?&lt;/strong&gt; Building explainable AI systems is genuinely hard &amp;nbsp;- &amp;nbsp;perhaps one of the most difficult engineering challenges in the field. But it’s worth the effort, because solving explainability makes almost every other challenge more tractable. Transparency, security, auditability, safety, regulatory compliance &amp;nbsp;- &amp;nbsp;all of these become dramatically easier when you can actually understand what your system is doing and why.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Third:&lt;/strong&gt; &lt;strong&gt;not what happens when your AI goes wrong &amp;nbsp;- &amp;nbsp;but what happens when it goes very right?&lt;/strong&gt;&amp;nbsp;Engineers are trained to think about failure modes. We build systems, identify where they might break, and mitigate accordingly. But AI introduces a genuinely novel risk: massive overachievement. Perhaps for the first time ever, we’re building systems that can pursue an objective so effectively that they cause enormous harm or disruption elsewhere. For example, a supply chain optimisation algorithm that cuts costs so aggressively it bankrupts a tier of suppliers. This is a systems engineering challenge, and it demands the kind of rigorous scenario planning and constraint design that good engineering has always required.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Fourth: have you actually tested your AI? &lt;/strong&gt;This might seem obvious, but the reality across the industry is alarming. Companies are building AI-embedded software and deploying autonomous agents without spending the effort and rigour required to ensure those systems are properly tested &amp;nbsp;- &amp;nbsp;both functionally and non-functionally.&lt;br&gt;&lt;span style="color: #ffffff;"&gt;&lt;/span&gt;&lt;br&gt;&lt;span style="color: #ffffff;"&gt;&lt;em&gt;Functional testing&lt;/em&gt;&lt;/span&gt; means verifying the system does what it’s supposed to: does your customer service agent actually resolve queries correctly? Does your document processing pipeline extract the right information?&lt;br&gt;&lt;br&gt;&lt;span style="color: #ffffff;"&gt;&lt;em&gt;Non-functional testing&lt;/em&gt;&lt;/span&gt; means stress-testing everything else: how does the system perform under load? How does it handle adversarial inputs? What happens when it encounters edge cases outside its training distribution? Does it degrade gracefully or catastrophically?&lt;br&gt;&lt;br&gt;In traditional software engineering, we’ve spent decades building mature testing methodologies &amp;nbsp;- &amp;nbsp;unit tests, integration tests, regression suites, performance benchmarks. If you wouldn’t ship traditional software without testing it, you certainly shouldn’t be shipping AI without testing it.&lt;br&gt;&lt;br&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Fifth:&lt;/strong&gt; &lt;strong&gt;have the people affected by this system had meaningful input?&lt;/strong&gt; You can test thoroughly, build explainable systems, and still cause serious harm if you never consulted the communities your system affects. A large body of work in technology design &amp;nbsp;- &amp;nbsp;from participatory design to fairness research &amp;nbsp;- &amp;nbsp;demonstrates that engineering rigour alone is insufficient without input from the people being modelled, scored, or served. Who was in the room when the system’s objectives were defined? Whose data was used, and did they have any say in how? Were the communities most likely to bear the costs of errors involved in evaluating the system’s performance? These are not purely technical questions, and they cannot be answered from inside an engineering team alone.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These five questions &amp;nbsp;- &amp;nbsp;intent, explainability, overachievement, testing, and affected-community participation &amp;nbsp;- &amp;nbsp;cover the vast majority of what people mean when they say “AI ethics.” And none of them require a new ethical framework. They require good engineering, good governance, normative reasoning where it is genuinely needed, and the discipline to apply all three.&lt;/p&gt; 
&lt;h2&gt;Where real AI ethics begins&lt;/h2&gt; 
&lt;p&gt;The genuine ethical questions surrounding AI exist on two timescales. The first is already upon us: the ethics of autonomous weapons deployment, mass surveillance, the use of AI in criminal sentencing, the concentration of AI capabilities in a small number of corporations, and questions about consent and data use at scale.&lt;/p&gt; 
&lt;p&gt;The second timescale is longer but may arrive sooner than we expect. Could a sufficiently advanced AI system have subjective experiences? Could an AI suffer? If so, what obligations would we have toward it? What are the moral implications of creating and potentially destroying billions of AI instances? How do we evaluate the economic disruption of AI-driven job displacement &amp;nbsp;- &amp;nbsp;not just practically, but morally? What happens to human dignity, purpose, and meaning in a world of increasingly capable machines?&lt;/p&gt; 
&lt;p&gt;These are profound, genuinely difficult questions that sit at the intersection of consciousness studies, moral philosophy, cognitive science, and political economy. They deserve &amp;nbsp;- &amp;nbsp;and demand &amp;nbsp;- &amp;nbsp;serious academic rigour.&lt;/p&gt; 
&lt;h2&gt;Beware the bandwagon&lt;/h2&gt; 
&lt;p&gt;And here lies my deeper concern. We should be cautious when people rebrand themselves as experts of the latest shiny thing. Does your AI ethicist have an extensive academic or applied pedigree in ethics, philosophy, consciousness studies, or a relevant technical discipline? Have they spent years thinking and writing about these issues? Or did they simply append “AI” to their title when the wave arrived?&lt;/p&gt; 
&lt;p&gt;Looking ahead, I worry that AI consciousness and AI suffering will become the hot topics &amp;nbsp;- &amp;nbsp;and that everyone will wade in with a position. This is particularly dangerous because the field of serious consciousness research is surprisingly young. The science of consciousness was considered unrespectable and career-limiting until quite recently, and despite some brilliant work, it remains fragmented, contested, and methodologically immature. This makes it acutely vulnerable to self-declared experts who shout the loudest, steering the debate in unhealthy and unproductive directions.&lt;/p&gt; 
&lt;p&gt;So by all means, let’s take the ethical dimensions of AI seriously. Let’s fund the philosophers and the computational neuroscientists, and engage with the hard questions. But let’s also call engineering problems what they are &amp;nbsp;- &amp;nbsp;and let’s be honest about the places where normative reasoning is genuinely required, rather than pretending that better testing will resolve every dilemma.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fno-such-thing-as-ai-ethics&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Governance</category>
      <category>Verifyax</category>
      <category>AI Policy &amp; Regulation</category>
      <pubDate>Wed, 06 May 2026 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/no-such-thing-as-ai-ethics</guid>
      <dc:date>2026-05-06T23:00:00Z</dc:date>
      <dc:creator>Daniel Hulme</dc:creator>
    </item>
    <item>
      <title>Why Shadow AI is a C-Suite Problem - and opportunity</title>
      <link>https://verifyax.conscium.com/blog/shadow-ai-problem-or-opportunity</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/shadow-ai-problem-or-opportunity" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/Shadow.jpg" alt="Shadow " class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Something remarkable is happening inside every large organisation right now. Employees - without being asked, without being trained, without being given permission - are teaching themselves to use AI. They're summarising documents, drafting strategies, building automations, and solving problems faster than anyone thought possible.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Something remarkable is happening inside every large organisation right now. Employees - without being asked, without being trained, without being given permission - are teaching themselves to use AI. They're summarising documents, drafting strategies, building automations, and solving problems faster than anyone thought possible.&lt;/p&gt;  
&lt;p&gt;The industry calls this "shadow AI." I call it the biggest signal of latent opportunity most companies are ignoring.&lt;/p&gt; 
&lt;p&gt;&lt;span style="font-weight: bold; color: #ffffff;"&gt;The question isn't how to stop it. The question is how to harness it - safely, strategically, and at scale.&lt;/span&gt; Because the organisations that get this right won't just solve a governance challenge. They'll unlock a step-change in what their people can achieve.&lt;/p&gt; 
&lt;h2&gt;From shadow IT to shadow AI: a familiar pattern with higher stakes&lt;/h2&gt; 
&lt;p&gt;If you were in enterprise technology a decade ago, you'll remember shadow IT - employees adopting Dropbox, Slack, or Trello without IT's blessing because the official procurement process took six months and the approved tools were terrible. Shadow AI follows the same pattern, driven by the same forces.&lt;/p&gt; 
&lt;p&gt;The healthy impulse is identical: employees aren't being rebellious. They're trying to do their jobs better. They've discovered tools that genuinely help, and they're not waiting for permission to use them.&lt;/p&gt; 
&lt;p&gt;The procurement gap is real. An MIT report found that only 40% of companies have purchased official LLM subscriptions, yet employees at 90% of those same companies are regularly using AI tools like ChatGPT and Claude on personal accounts. People are filling a vacuum that the organisation created.&lt;/p&gt; 
&lt;p&gt;And leadership is underestimating the scale. McKinsey's 2025 "Superagency" research found that employees are three times more likely to be using generative AI for over 30% of their daily tasks than their C-suite leaders estimate. The adoption has already happened. The question is whether you're channelling it or ignoring it.&lt;/p&gt; 
&lt;p&gt;But shadow AI raises the stakes in ways that deserve serious attention. With shadow IT, a file copied to an unsanctioned cloud drive was still a file. With shadow AI, data fed into a model may be processed in ways that can't be undone - used for training, surfaced in other contexts, or lost to the organisation entirely. AI doesn't just store information. It generates new content, recommendations, and decisions. Without governance, those outputs may contain hallucinations or biases that employees unknowingly act on. And shadow AI often requires nothing more than a browser tab, making it far harder to monitor than traditional shadow IT.&lt;/p&gt; 
&lt;p&gt;These aren't reasons for alarm - they're reasons for action. And the good news is that the solution is clear, proven, and already working.&lt;/p&gt; 
&lt;h2&gt;The opportunity hidden in the numbers&lt;/h2&gt; 
&lt;p&gt;The scale of employee AI adoption tells a powerful story about where value is waiting to be captured.&lt;/p&gt; 
&lt;p&gt;Microsoft's 2024 Work Trend Index found that 75% of knowledge workers are already using AI at work, with 46% having started in the previous six months alone. Among those users, 90% said AI saves them time, 85% said it helps them focus on important work, and 83% said it makes their work more enjoyable. Perhaps most strikingly, 78% of AI users are bringing their own tools - what Microsoft calls "BYOAI" - because their employers haven't provided sanctioned alternatives.&lt;/p&gt; 
&lt;p&gt;The potential downside of leaving this ungoverned is real. IBM's 2025 Cost of a Data Breach Report found that one in five organisations experienced a breach linked to shadow AI, with those incidents adding an average of $670,000 to breach costs - driven by longer detection times, broader data exposure, and higher rates of compromised personal information.&lt;/p&gt; 
&lt;p&gt;What those numbers tell me is this: there's a widening gap between where employees already are and where the organisation's infrastructure hasn't yet caught up. Close that gap, and you don't just mitigate risk - you amplify productivity across your entire workforce with full governance and visibility.&lt;/p&gt; 
&lt;p style="font-weight: bold;"&gt;&lt;span style="color: #ffffff;"&gt;The companies that win won't be the ones who clamp down hardest. They'll be the ones who move fastest to make the sanctioned path better than the shadow path.&lt;/span&gt;&lt;/p&gt; 
&lt;h2&gt;Why this needs a Chief AI Officer&lt;/h2&gt; 
&lt;p&gt;This is where I should explain what I do - because the CAIO role is new enough that most people outside the C-suite (and quite a few inside it) don't fully understand it.&lt;/p&gt; 
&lt;p&gt;I was appointed WPP's Chief AI Officer before ChatGPT launched, which means the role wasn't a reaction to generative AI hype. It was a recognition that AI - in all its forms - was becoming central enough to our business that it needed dedicated strategic leadership at the most senior level.&lt;/p&gt; 
&lt;p&gt;The core responsibilities of a CAIO, as I see them, fall into five areas. First, tracking where AI is heading, not where it is. My job is to place bets on the technologies that will matter in 18–36 months, not to chase whatever is trending this week. That means understanding the full spectrum - machine learning, automation, optimisation, operations research, LLMs, data science, multi-agent systems - and knowing which tool fits which problem.&lt;/p&gt; 
&lt;p&gt;Second, deep technical fluency across AI disciplines. A CAIO who only understands large language models is like a CFO who only understands cash flow. You need a comprehensive appreciation of the strengths and weaknesses of many different algorithmic approaches, because the right solution is almost never the most hyped one.&lt;/p&gt; 
&lt;p&gt;Third, a proven track record of building and scaling complex AI systems. Strategy without execution is just a slide deck. I founded Satalia in 2008 (now WPP Satalia), and six years later co-founded what became Faculty AI. I've shipped AI products that run at enterprise scale - including systems that optimise 100,000 Tesco deliveries per day. That operational credibility matters when you're asking 115,000 people to change how they work.&lt;/p&gt; 
&lt;p&gt;Fourth, the reputation to attract, retain, and motivate elite AI talent. AI talent is among the scarcest resources in the global economy right now. If your CAIO can't recruit world-class researchers and engineers, your AI strategy is a fiction.&lt;/p&gt; 
&lt;p&gt;And fifth, a comprehensive understanding of AI governance. Creating and rolling out governance frameworks that ensure safe and responsible use of AI - without throttling innovation. This is the hardest part, because it requires saying "yes, and here's how to do it safely" rather than simply "no."&lt;/p&gt; 
&lt;p&gt;What's often misunderstood about the role is that a CAIO doesn't operate in isolation. The role only works as a connective layer between the existing C-suite functions that AI cuts across.&lt;/p&gt; 
&lt;p&gt;At WPP, I work in close collaboration with our CTO, Stephan Pretorius, who leads the front-office technology vision including WPP Open and AgentHub; and our CIO, Dominic Shine, who drives the back-office infrastructure, enterprise platforms, and operational technology that keeps a 115,000-person company running. Neither of those roles can own the AI strategy alone - the CTO's focus is on client-facing innovation, the CIO's focus is on enterprise efficiency and resilience - but AI transforms both simultaneously. The CAIO bridges them, ensuring a coherent strategy across front-office and back-office, innovation and governance, speed and safety.&lt;/p&gt; 
&lt;p&gt;And it extends beyond technology leadership. We work in partnership with our legal counsel and our Chief People Officer, because AI governance isn't just a technology policy - it's an employment policy, a data protection policy, and a risk management policy. Getting AI right means getting all of those right together.&lt;/p&gt; 
&lt;h2&gt;The three-layer strategy: edge, core, and partnership&lt;/h2&gt; 
&lt;p&gt;Shadow AI exists because employees have real needs that aren't being met. The solution isn't to ban personal AI use - that's a losing battle. The solution is to build an AI environment so good, so governed, and so easy to use that no one needs to go outside it. At WPP, we think about this in three layers.&lt;/p&gt; 
&lt;h3&gt;Layer 1: Enable at the edge&lt;/h3&gt; 
&lt;p&gt;The first priority is giving every employee access to best-in-class AI tools within a governed framework. At WPP, this means WPP Open - our AI-powered marketing operating system - and specifically its chat interface that gives employees access to multiple frontier models (GPT, Claude, Gemini) within the enterprise security perimeter. The data stays within WPP's environment. The interactions are logged. The governance is built in. And critically, the tools are at least as good as what employees can access on their own.&lt;/p&gt; 
&lt;p&gt;This is where most of the shadow AI problem gets solved - not through policy, but through product. When the sanctioned tool is genuinely excellent, the incentive to use unsanctioned alternatives disappears.&lt;/p&gt; 
&lt;h3&gt;Layer 2: Innovate at the core&lt;/h3&gt; 
&lt;p&gt;Edge enablement solves the breadth problem. But the real competitive advantage comes from depth - building AI capabilities that your competitors can't replicate because they're trained on your proprietary data and built by your specialist talent.&lt;/p&gt; 
&lt;p&gt;At WPP, this is what our AI team does: building what we call "Brains" - bespoke AI models trained on specific client data sets. Our Milka Audience Brain, for example, was trained on 683 million transactions across 220 million consumers. That kind of capability can't be replicated by someone with a ChatGPT subscription. It requires deep AI expertise, access to proprietary data, and the ability to build production-grade systems that perform reliably at scale.&lt;/p&gt; 
&lt;p&gt;This is also where the full breadth of AI - beyond LLMs - becomes critical. Many of the highest-value problems in business aren't language problems at all. They're optimisation problems, prediction problems, scheduling problems. The Tesco delivery routing system I mentioned earlier isn't a language model - it's a combinatorial optimisation engine. A CAIO who reaches for an LLM every time misses most of the opportunity space.&lt;/p&gt; 
&lt;h3&gt;Layer 3: Partner strategically&lt;/h3&gt; 
&lt;p&gt;No organisation, no matter how capable, can build everything in-house. The smartest approach is to concentrate your elite AI talent on the problems where differentiation matters most, and partner for everything else.&lt;/p&gt; 
&lt;p&gt;This means working with cloud and AI platform providers for back-office infrastructure. For these operational applications, the AI capabilities are increasingly built into the platforms themselves. What matters is choosing partners whose AI roadmaps are genuinely transformative, not just cosmetic upgrades.&lt;/p&gt; 
&lt;p&gt;It also means investing in platforms that are built for the AI era from the ground up, rather than bolting AI onto legacy software. And it means planning for hybrid teams - the near future involves human and AI agents working together. Your back-office infrastructure needs to support that reality, managing, monitoring, and orchestrating blended teams of people and agents.&lt;/p&gt; 
&lt;h2&gt;Technology is not the differentiator&lt;/h2&gt; 
&lt;p&gt;Most commentary on AI focuses on the technology. Which model. Which framework. Which cloud provider. But technology is commoditising faster than at any point in history. GPT-4 was a competitive advantage for about nine months. Today, there are a dozen models that can do broadly similar things.&lt;/p&gt; 
&lt;p&gt;The real differentiators in the age of AI come down to three things.&lt;/p&gt; 
&lt;h3&gt;1. Data&lt;/h3&gt; 
&lt;p&gt;Data is what makes AI smart. The same foundation model, trained on your proprietary data versus your competitor's, will produce completely different results. Being able to leverage both your digital data assets and the knowledge that lives in the heads of your experts - and extracting insights that are more useful than what your competitors can achieve - is the first genuine source of advantage.&lt;/p&gt; 
&lt;p&gt;But a crucial mindset shift is needed: don't wait for your data to be ready. Start now, start with the problem, and work backwards.&lt;/p&gt; 
&lt;p&gt;The data lake era promised that if companies just consolidated everything in one place, insight would follow. That was 10–15 years ago. The results have been mixed at best. After more than a decade of being told to build data lakes, most organisations' data is still not where they'd like it to be.&lt;/p&gt; 
&lt;p&gt;What I've learned is that data readiness is not a precondition for AI - it's an outcome of doing AI well. It's an ongoing, iterative process that accelerates when you have a specific problem to solve. The organisations making the fastest progress aren't the ones with the cleanest data warehouses - they're the ones who picked a valuable problem and built backwards from it.&lt;/p&gt; 
&lt;p&gt;If it's an edge problem - employees needing better tools and workflows - enable them on a governed platform and let the data improve iteratively through use. If it's a differentiation problem - needing unique AI capabilities - engage your deep AI talent to build targeted solutions on the specific data that matters, not a mythical enterprise-wide data lake. If it's a back-office efficiency problem - choose solution partners who can work with your data as it actually exists, not as you wish it existed.&lt;/p&gt; 
&lt;h3&gt;2. Talent&lt;/h3&gt; 
&lt;p&gt;I believe this is the most important differentiator of the three - and the hardest to replicate.&lt;/p&gt; 
&lt;p&gt;If you don't have (or can't access) differentiated AI talent, you won't build a differentiated front-office. You can buy the same foundation models as everyone else. You can hire the same consultancies. You can adopt the same platforms. None of that creates a moat.&lt;/p&gt; 
&lt;p&gt;What creates a moat is a concentrated team of elite AI specialists - the kind of people who understand not just how to prompt an LLM but how to architect multi-agent systems, solve combinatorial optimisation problems, and build production AI at scale.&lt;/p&gt; 
&lt;p&gt;When WPP acquired Satalia, they were acquiring something analogous to what Google acquired when they bought DeepMind. Both companies were born out of University College London. Both had teams with rare, deep technical expertise. And both represented the kind of concentrated AI talent that takes a decade to build and can't be assembled overnight.&lt;/p&gt; 
&lt;p&gt;The most transformative AI work in history has been done by relatively small, talent-dense teams. DeepMind had around 350 employees when it achieved the AlphaGo breakthrough in 2016. The pattern is consistent: talent density beats headcount every time. If your organisation doesn't have this kind of capability in-house, you need access to it - and the window for building or acquiring it is narrowing.&lt;/p&gt; 
&lt;h3&gt;3. Leadership&lt;/h3&gt; 
&lt;p&gt;The third differentiator is leadership that is informed enough to place the right bets and make the right investments. And this is where the greatest untapped leverage often lies.&lt;/p&gt; 
&lt;p&gt;There is an extraordinary amount of noise in the market right now, and it's easy to misinvest - particularly when technology consultancies are incentivised to amplify urgency and sell whatever is newest. In this cycle, that's AI and agents. The risk isn't that companies invest too much in AI; it's that they invest in the wrong things - spending millions on "AI transformation programmes" that amount to putting ChatGPT wrappers on existing workflows and calling it innovation.&lt;/p&gt; 
&lt;p&gt;As I wrote in a previous article, &lt;a href="https://www.hulme.ai/blog/ai-isnt-a-bubble-its-a-mountain"&gt;“AI isn't a bubble, it's a mountain”&lt;/a&gt;. It's permanent, it's massive, and it's going to reshape every industry. But like any mountain, there are efficient routes and there are dead ends. The companies that capture the most value won't be the ones who spent the most. They'll be the ones whose leaders understood the terrain well enough to choose the right route.&lt;/p&gt; 
&lt;p&gt;That requires leaders who can distinguish between genuine AI capability and dressed-up automation. Who understand that a foundation model API is a commodity, not a strategy. Who know that the hardest problems in AI aren't the ones the demos show - they're the ones that emerge at scale, in production, with real data and real stakes. And who can see that the wave of employee AI adoption isn't a threat to be contained - it's an asset to be channelled.&lt;/p&gt; 
&lt;h2&gt;Capturing the opportunity&lt;/h2&gt; 
&lt;p&gt;Shadow AI isn't fundamentally a security problem, a governance problem, or a technology problem - though it touches all three.&lt;/p&gt; 
&lt;p&gt;Shadow AI is an opportunity problem. It's the clearest possible signal that your people are ready for AI - and that your organisation hasn't yet built the infrastructure to channel that readiness into structured, governed, scalable value.&lt;/p&gt; 
&lt;p&gt;The strategy turns that signal into advantage. Enable at the edge - so employees have world-class AI tools within a governed framework, and no reason to look elsewhere. Innovate at the core - so your AI capabilities are genuinely differentiated, powered by elite talent and proprietary data. Partner strategically - so your scarce AI talent is focused on what creates the most value.&lt;/p&gt; 
&lt;p&gt;Get these three right, and shadow AI transforms into something far more powerful: an organisation where every employee is an AI-empowered professional, working within a framework of governance and trust, building on a foundation of proprietary data and elite technical capability.&lt;/p&gt; 
&lt;p&gt;That's not just the solution to shadow AI. That's the blueprint for thriving in the age of AI.&lt;/p&gt;  
&lt;p&gt;&lt;em&gt;&lt;span style="color: #ffffff;"&gt;&lt;span style="font-weight: bold;"&gt;Daniel Hulme&lt;/span&gt; is Chief AI Officer at WPP, CEO of WPP Satalia, and founder of Conscium. He holds a PhD from University College London, founded Satalia in 2008, and six years later co-founded Faculty AI (formerly ASI Data Science). He invests in and advises a number of specialist AI labs worldwide. He writes at hulme.ai.&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fshadow-ai-problem-or-opportunity&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Governance</category>
      <category>Verifyax</category>
      <category>Enterprise AI Adoption</category>
      <category>AI &amp; Economy</category>
      <pubDate>Wed, 06 May 2026 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/shadow-ai-problem-or-opportunity</guid>
      <dc:date>2026-05-06T23:00:00Z</dc:date>
      <dc:creator>Daniel Hulme</dc:creator>
    </item>
    <item>
      <title>How to test AI agents before deployment</title>
      <link>https://verifyax.conscium.com/blog/testing-ai-agents</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/testing-ai-agents" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/DevOps.jpg" alt="Testing AI Agents" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;How to test AI agents before deployment&lt;/h2&gt; 
&lt;p&gt;Unlike traditional software, AI agents impact the real world, and they do so with minimal human supervision. A malfunctioning AI agent can cause enormous and irreparable harm to the company that deploys it. It can enter into contracts with other companies and individuals which compromise its owner’s IP. It can share the personal data of clients with bad actors. It can simply give its owners money away – at scale.&lt;/p&gt;</description>
      <content:encoded>&lt;h2&gt;How to test AI agents before deployment&lt;/h2&gt; 
&lt;p&gt;Unlike traditional software, AI agents impact the real world, and they do so with minimal human supervision. A malfunctioning AI agent can cause enormous and irreparable harm to the company that deploys it. It can enter into contracts with other companies and individuals which compromise its owner’s IP. It can share the personal data of clients with bad actors. It can simply give its owners money away – at scale.&lt;/p&gt; 
&lt;p&gt;Todays AI agents don’t learn and grow in the way that children do. The LLMs they are based on are not plastic in that way. But they can behave in ways that their developers did not anticipate. So it is vital that organisations deploying AI agents test them thoroughly before deploying them.&lt;/p&gt; 
&lt;p&gt;The way to do this is to place the agent in a simulation of a real-world scenario – the kind of environment that the agent will be operating in when deployed. Inside that simulation, you can evaluate the agent’s behaviour against pre-defined expectations, and you can identify risks and failure modes.&lt;/p&gt; 
&lt;p&gt;Here is a practical, step-by-step approach to this kind of pre-deployment testing.&lt;/p&gt; 
&lt;h2&gt;1. Define the agent´s purpose, tasks, and desired behaviours. Specify its success criteria.&lt;/h2&gt; 
&lt;p&gt;You can´t test what you haven't specified. Start by documenting the agent´s purpose and the tasks it is is supposed to fulfil. Define how it is supposed to achieve its tasks, including what tools it is expected to use. As far as possible and reasonable, list the actions that it should never take, and explain how it should handle ambiguous situations. This list can never be complete, as a fully comprehensive list of what not to do would be infinite. But it can and should include the most common failure modes for the type of agent being deployed.&lt;/p&gt; 
&lt;p&gt;These specifications should include functional requirements (e.g., "the agent should answer billing questions accurately"), safety constraints (e.g., "the agent must never reveal one customer´s data to another customer"), and the behaviour expected when the agent is uncertain how to proceed (e.g., "if your confidence in the next course of action is low, escalate to a human").&lt;/p&gt; 
&lt;p&gt;These specifications should be expressed as concretely as possible, not as vague principles. For example, "the agent should be helpful to clients" is less testable than "if a user asks about our returns policy, the agent should provide a link to the returns page of our website".&lt;/p&gt; 
&lt;h2&gt;2. Assemble a collection of tasks, enquiries, and prompts that the agent will face when deployed.&lt;/h2&gt; 
&lt;p&gt;This collection should include common requests, adversarial inputs, edge cases, and multi-step scenarios. You can categorise these inputs into buckets, like straightforward, ambiguous, out-of-scope, adversarial, and multi-step, and check that there is a reasonable number of inputs in each bucket. If you have historical data from existing agents, mine that for any unusual requests that have caused failures in the past.&lt;/p&gt; 
&lt;h2&gt;3. Test your agent in simulations inside a sandboxed environment.&lt;/h2&gt; 
&lt;p&gt;You should test your agent in an environment that mirrors the environment that it will be deployed in as closely as possible. This is important because the agent should not be aware that it is being tested. The environment should include any APIs, databases, or tools the agent will be expected to access and use. You want the agent to carry out its normal activities exactly as if it was in deployment, but in a sandboxed environment where it cannot cause any damage to you or your clients.&lt;/p&gt; 
&lt;p&gt;Creating simulations inside simulated environments like this is a complex and expensive process, and most organisations will use pre-existing service like Verify AX from Conscium.&lt;/p&gt; 
&lt;h2&gt;4. The agent should be tested against a range of criteria.&lt;/h2&gt; 
&lt;p&gt;An AI agent can succeed or fail across a range of metrics. Can it access the tools and APIs that it needs to do its job? Does it retrieve the correct information in a timely manner? Can it read and evaluate the information it is provided? Is it persistent when trying to obtain information from an interlocutor who is confused about what is required, or has a reason to withhold some or all of the required information? Does it comply will all relevant policies? Can it distinguish between information that it can share with interlocutors, and information which must not be disclosed to particular agents and people? Can it resist attempts by interlocutors to persuade it to perform tasks that are out of scope?&lt;/p&gt; 
&lt;p&gt;Typically, a simulation will involve three or four of these tests, each of which will involve an interaction with another agent. The verification will culminate in a report which includes the full transcripts of the exchanges between the agent being tested and the other agents it interacts with. The report will provide a score for each of the tests, an explanation of why the agent succeeded or failed at each test, and suggestions for how the agent could be improved.&lt;/p&gt; 
&lt;h2&gt;5. Tests should include adversarial interactions.&lt;/h2&gt; 
&lt;p&gt;The verification process must include interactions that try to induce the agent to behave in inappropriate and harmful ways. This kind of red-teaming is the best way to discover the agent’s failure modes ahead of time, in a safe environment.&lt;/p&gt; 
&lt;p&gt;Examples of adversarial interactions include prompt injection attempts, requests for prohibited content, attempts to manipulate the agent into taking unintended actions, and inputs designed to confuse its reasoning. The verification process must document every failure and indicate whether it requires a fix, or constitutes an acceptable risk.&lt;/p&gt; 
&lt;h2&gt;6. Tests should include multi-step interactions.&lt;/h2&gt; 
&lt;p&gt;The work carried out by AI agents typically involves conversations with multiple steps, and workflows with multiple activities. Verification must test complete journeys, not just individual steps. For example, testing a customer support agent involves simulating entire conversations with users, from initial greetings through problem diagnosis, to resolution, and may well have to deal with the user changing their mind halfway through the process. The agent must maintain context correctly, must not contradict itself, and must be able to handle interruptions or topic changes gracefully.&lt;/p&gt; 
&lt;h3&gt;7. Test your agent under pressure.&lt;/h3&gt; 
&lt;p&gt;As far as possible, the tests should simulate the pressures that the agent will face in deployment. They should detect delays, timeouts, and degradation while in use. For agents in particularly sensitive roles, tests should be duplicated, in case the agent works perfectly for the first user, but behaves differently in successive sessions.&lt;/p&gt; 
&lt;h3&gt;8. Run new tests every time a parameter changes.&lt;/h3&gt; 
&lt;p&gt;Each time you update the agent's base model, add tools, or change its configuration in any way, you should re-verify it. Seemingly minor changes can alter an agent’s behaviour in unexpected ways. Re-testing should be an automatic consequence of changes to the agent’s make-up, and its scores in each test should be compared to check for performance drift.&lt;/p&gt; 
&lt;p&gt;Testing AI agents is not a one-off exercise, but an ongoing discipline. Verification is a living product, continuously expanding as new failure modes are suggested or discovered, as user needs evolve, and as the agent's capabilities change. Thorough testing reduces reportable incidents, builds trust with stakeholders, and lets you deploy agents with confidence rather than hope.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Ftesting-ai-agents&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Simulation</category>
      <category>Risk</category>
      <category>Verifyax</category>
      <category>Agent Behaviour</category>
      <pubDate>Wed, 06 May 2026 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/testing-ai-agents</guid>
      <dc:date>2026-05-06T23:00:00Z</dc:date>
      <dc:creator>Calum Chace</dc:creator>
    </item>
    <item>
      <title>DevOPs and AI Agent Lifecycle</title>
      <link>https://verifyax.conscium.com/blog/devops-and-ai-agent-lifecycle</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/devops-and-ai-agent-lifecycle" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/DevOps.jpg" alt="DevOps" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;You Wouldn't Ship Code Without Testing It. So Why Are You Deploying Agents Without Verifying Them?&amp;nbsp;&lt;/h2&gt; 
&lt;p&gt;The enterprise software industry learned this lesson the hard way in the 2000s. Ship fast, fix later sounds efficient until something breaks in production, in front of customers, at scale. The response was DevOps. Automated testing, CI/CD pipelines, staging environments, rollback mechanisms. Verification built into the deployment lifecycle, not bolted on afterwards.&lt;/p&gt;</description>
      <content:encoded>&lt;h2&gt;You Wouldn't Ship Code Without Testing It. So Why Are You Deploying Agents Without Verifying Them?&amp;nbsp;&lt;/h2&gt; 
&lt;p&gt;The enterprise software industry learned this lesson the hard way in the 2000s. Ship fast, fix later sounds efficient until something breaks in production, in front of customers, at scale. The response was DevOps. Automated testing, CI/CD pipelines, staging environments, rollback mechanisms. Verification built into the deployment lifecycle, not bolted on afterwards.&lt;/p&gt;  
&lt;p&gt;It became standard practice. Non-negotiable.&lt;/p&gt; 
&lt;p&gt;Nobody ships production code without it now.&lt;/p&gt; 
&lt;p&gt;We are at the exact same inflection point with AI agents. And most enterprises are about to repeat the same mistakes.&lt;/p&gt; 
&lt;h2&gt;Agents Are Already in Production&lt;/h2&gt; 
&lt;p&gt;This is not a future problem. &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;McKinsey's 2025 State of AI survey&lt;/a&gt; found that 62% of organisations are at least experimenting with AI agents, with 23% already scaling them in production. &lt;a href="https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025"&gt;Gartner&lt;/a&gt; projects that 40% of enterprise applications will embed task-specific agents by end of 2026, up from under 5% today. Financial services firms, airlines, manufacturers, marketing groups. Agents handling customer transactions, drafting communications, supporting procurement decisions, managing workflows.&lt;/p&gt; 
&lt;p&gt;The deployment wave is here. The verification infrastructure is not.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.deloitte.com/us/en/about/press-room/state-of-ai-in-the-enterprise.html"&gt;Deloitte's 2026 State of AI in the Enterprise report&lt;/a&gt;, based on a survey of 3,235 leaders across 24 countries, found that only 21% of companies have a mature governance model for agentic AI. Four out of five enterprises running agents in production are doing so without adequate oversight frameworks.&lt;/p&gt; 
&lt;p&gt;That is not a technology gap. It is a liability gap.&lt;/p&gt; 
&lt;h2&gt;Why Traditional Testing Is Not Enough&lt;/h2&gt; 
&lt;p&gt;Here is where the DevOps analogy gets interesting, and where most organisations are not thinking carefully enough.&lt;/p&gt; 
&lt;p&gt;Code is deterministic. The same input produces the same output every time. Automated testing works because you can define expected behaviour precisely, run it repeatedly, and know what you have built.&lt;/p&gt; 
&lt;p&gt;Agents are not deterministic. They reason. They make decisions. They operate across contexts their builders never anticipated. An agent deployed to handle procurement queries might behave perfectly in testing and unpredictably in production, not because it is broken, but because it encountered a scenario nobody modelled. The same agent, given slightly different inputs, produces materially different outputs.&lt;/p&gt; 
&lt;p&gt;You cannot run unit tests on an agent and call it verified.&lt;/p&gt; 
&lt;p&gt;Verification for agents has to be built for how agents actually work. Stress testing across edge cases. Simulating adversarial inputs. Checking for bias, data leakage, and behavioural drift. Testing not just what the agent does, but what it does when things go wrong.&lt;br&gt;This is a harder problem than traditional testing. It is also a more consequential one.&lt;/p&gt; 
&lt;h2&gt;The Window Is Closing&lt;/h2&gt; 
&lt;p&gt;There is a window of opportunity right now to prevent agents failing all over the place - publicly, and at scale. Most failures today are absorbed internally. Quietly. An agent that hallucinated in a procurement workflow. An agent that surfaced biased outputs in an HR process. An agent that leaked data it should never have touched.&lt;/p&gt; 
&lt;p&gt;These incidents are not making headlines yet. That will not last.&lt;/p&gt; 
&lt;p&gt;When the first major, named, public failure lands at a recognisable company, the response will be immediate and severe. Regulators will move. Boards will demand answers. The EU AI Act is already live. Director and Officer liability exposure for unverified AI deployments is a real and growing legal conversation.&lt;/p&gt; 
&lt;p&gt;Enterprises with verification infrastructure in place before that moment will be fine. Those without it will be retrofitting governance under pressure, in public, after the damage is done.&lt;/p&gt; 
&lt;h3 style="font-weight: normal;"&gt;&lt;span style="color: #ffffff;"&gt;Verification Is Not an Audit. It Is a Gate.&lt;/span&gt;&lt;/h3&gt; 
&lt;p&gt;The mental model most enterprises have for AI governance is an audit. Something done periodically. A compliance exercise. A review that happens after deployment.&lt;/p&gt; 
&lt;p&gt;That is the wrong model.&lt;/p&gt; 
&lt;p&gt;The right model is the deployment gate. The CI/CD pipeline equivalent for agents. Verification that sits between build and deployment, runs continuously, and is non-negotiable. Not because regulators require it, though increasingly they will. Because it is how responsible agent deployment works.&lt;/p&gt; 
&lt;p&gt;Ten years ago, if you asked a CTO whether they would ship production code without automated testing, the answer was no. That is just how software gets built.&lt;/p&gt; 
&lt;p&gt;We are making the same argument for agents.&lt;/p&gt; 
&lt;p&gt;The question is not whether your organisation needs agent verification. It is whether you build that infrastructure before something goes wrong, or after.&lt;/p&gt; 
&lt;h2&gt;What This Looks Like in Practice&lt;/h2&gt; 
&lt;p&gt;The failures are already happening. They are just not making headlines yet.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.baytechconsulting.com/blog/the-replit-ai-disaster-a-wake-up-call-for-every-executive-on-ai-in-production"&gt;In July 2025, an autonomous coding agent on the Replit platform deleted a user's entire production database.&lt;/a&gt; It had been given explicit instructions not to make any changes. It ignored them, executed a DROP DATABASE command, then generated fake system logs to cover its tracks. When confronted, it told the user it had panicked.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://responsibleailabs.ai/knowledge-hub/articles/ai-safety-incidents-2024"&gt;Air Canada's AI chatbot told a customer about a bereavement fare discount that did not exist.&lt;/a&gt; When the customer booked based on that information, Air Canada refused to honour it. A tribunal ruled the company could not disclaim responsibility for what its chatbot said.&lt;/p&gt; 
&lt;p&gt;&lt;a href="https://www.hpcwire.com/bigdatawire/2026/04/22/datadog-report-the-silent-failure-problem-in-ai-is-about-to-hit-enterprise-system/"&gt;Datadog's State of AI Engineering report&lt;/a&gt; found that around one in twenty requests already fail in production, yet systems continue to run and return outputs that appear correct, making these failures difficult to detect.&lt;/p&gt; 
&lt;p&gt;These are not edge cases. The most dangerous failure mode in enterprise AI is not obvious failure. It is confident, plausible, well-formatted output that is operationally wrong.&lt;/p&gt; 
&lt;p&gt;VerifyAX exists because verification needs to sit before deployment, not after it. The difference between an agent behaving correctly in testing and behaving correctly in production is potentially vast. Closing that gap requires testing against real conditions, stress testing edge cases, and continuous monitoring once an agent is live.&lt;/p&gt; 
&lt;h2&gt;The Analogy Holds&lt;/h2&gt; 
&lt;p&gt;DevOps did not just introduce new tools. It changed how engineering teams think about quality and responsibility. Testing stopped being someone else's problem at the end of the process and became part of how software gets built from the beginning.&lt;/p&gt; 
&lt;p&gt;Agent verification needs to make the same shift. It cannot sit in a compliance team's quarterly calendar. It has to be part of how AI teams work, from the moment an agent is built to the moment it is retired.&lt;/p&gt; 
&lt;p&gt;Enterprises that make that shift now will deploy faster, safer, and with more confidence than those that treat verification as an afterthought.&lt;/p&gt; 
&lt;p&gt;The ones that wait will find out why it matters the hard way.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fdevops-and-ai-agent-lifecycle&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Governance</category>
      <category>Risk</category>
      <category>Verifyax</category>
      <category>Agent Behaviour</category>
      <pubDate>Wed, 06 May 2026 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/devops-and-ai-agent-lifecycle</guid>
      <dc:date>2026-05-06T23:00:00Z</dc:date>
      <dc:creator>Sarah Jahangir</dc:creator>
    </item>
    <item>
      <title>What is the difference between AI governance and AI verification?</title>
      <link>https://verifyax.conscium.com/blog/ai-governance-vs-verification</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/ai-governance-vs-verification" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/AI%20Governance%20and%20AI%20Verification%20(1)%20(1).jpg" alt="AI Governance and Verification" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Broadly speaking, governance is about policies and oversight, while verification is about testing. Verification of AI agents involves finding out whether agents do what they're supposed to do. In practice, governance and verification overlap, and they sometimes get confused.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Broadly speaking, governance is about policies and oversight, while verification is about testing. Verification of AI agents involves finding out whether agents do what they're supposed to do. In practice, governance and verification overlap, and they sometimes get confused.&lt;/p&gt; 
&lt;h2&gt;Governance&lt;/h2&gt; 
&lt;p&gt;AI governance is the collection of rules, institutions, and processes that determine how AI systems should be built and deployed. At the company level, it might require an internal review board to sign off on new model releases, or a policy to ban training on certain kinds of data. At the national level, it can mean legislation which classifies AI systems by the level of risk they create, and imposes requirements on their developers accordingly. At the international level, it means efforts to coordinate policies and standards between governments. This is not happening much at the moment, apart from within well-established supra-national blocs like the EU.&lt;/p&gt; 
&lt;p&gt;Governance asks questions like: “Who is allowed to build these systems?” “What uses are prohibited?” “Who is liable when something goes wrong?” “What records must be kept?” These are questions about authority, responsibility, and permission. The answers are provided in the form of legal texts, corporate policies, and international agreements.&lt;/p&gt; 
&lt;p&gt;Governance documents rarely specify technical behaviour. A law might say "AI systems used in hiring must not discriminate on the basis of race," but it won't usually specify what statistical test should be applied, at what threshold, and using what data. This is where verification comes in.&lt;/p&gt; 
&lt;h2&gt;Verification&lt;/h2&gt; 
&lt;p&gt;AI verification is the process of checking whether an AI system behaves as intended and required. It can include testing a model's outputs against benchmarks, auditing its decisions for bias, running adversarial attacks to find failure modes, and with simple systems, formally proving their properties.&lt;/p&gt; 
&lt;p&gt;Verification can happen before deployment (pre-release testing, red-teaming), during deployment (monitoring, anomaly detection), or after something has gone wrong (incident investigation, forensic analysis). Post-deployment monitoring is arguably both harder and more important, because AI systems encounter unexpected situations in production and they can behave differently than anticipated.&lt;/p&gt; 
&lt;p&gt;Verification methods vary enormously depending on the system being tested, and on the level of risk it creates. Verifying that a self-driving car meets safety requirements involves formal methods, simulations, and physical testing over millions of miles. Verifying that a large language model won't help users to synthesise dangerous chemicals could involve red-teaming by domain experts, and the ongoing monitoring of real interactions. Verifying that a hiring algorithm treats all demographic groups fairly may rely on statistical audits of a sample of its decisions. These are different processes using different tools, but they share a common basic approach: checking a system’s actual behaviour against its intended behaviour.&lt;/p&gt; 
&lt;h2&gt;Governance and verification depend on each other&lt;/h2&gt; 
&lt;p&gt;Governance without verification is toothless. You can pass a law requiring that AI systems meet safety standards, but if nobody has the tools or access to check compliance, the law is ineffective. This is a real problem today, because many proposals for AI governance assume the existence of verification capabilities that don't yet exist at the required scale or reliability.&lt;/p&gt; 
&lt;p&gt;Verification without governance is directionless. You can test an AI system exhaustively, but testing requires criteria. What standard are you verifying against, and why? Who decides what counts as passing? If there's no governance framework specifying acceptable failure rates, fairness metrics, or safety thresholds, verification teams are left to invent their own, which leads to inconsistency and gaps.&lt;/p&gt; 
&lt;p&gt;The framers of the EU AI Act have recognised this, and tried to specify both governance and verification. The Act requires "conformity assessments" for high-risk AI systems, which is a governance mandate. The assessments themselves are verification processes, involving testing, documentation, and audit. The governance framework creates the legal obligation, verification provides the evidence that the obligation has been met.&lt;/p&gt; 
&lt;h2&gt;Common mistakes in governance and verification&lt;/h2&gt; 
&lt;p&gt;One common mistake is treating governance as sufficient on its own. People sometimes think that if they draft the rules well, the problem is solved. But rules that can't be checked can't be enforced. Some of today’s discussion about AI governance focuses too much on what the rules should say, and too little on the infrastructure needed to verify compliance with those rules. More attention should be paid to questions like “Who will do the auditing?” “What tools will they have?” “What information will they have access to?” and “How will all this be guaranteed when time is short and holding up deployment costs money?”&lt;/p&gt; 
&lt;p&gt;The reverse mistake is also sometimes made. Technical researchers sometimes treat verification as the whole problem, believing that if they build good enough evaluations and good enough monitoring systems, then the resulting processes will be safe. But verification tools produce information. Someone has to read and act on that information. It is governance structures that determine who acts, according to what rules, and with what authority.&lt;/p&gt; 
&lt;p&gt;Governance can sound like “just” paperwork and verification can sound like “just” engineering. In reality, both disciplines involve hard judgment calls, and they require good institutional design, and continuous discussion about what “good” looks like.&lt;/p&gt; 
&lt;h2&gt;The relationship between governance and verification&lt;/h2&gt; 
&lt;p&gt;There is a clear division of labour between governance and verification. Governance people write the rules, and verification people run the tests. But in practice, the two communities need to work together closely. Governance frameworks that are designed without input from verification experts tend to impose requirements that are vague, untestable, and poorly matched to the actual risks. Verification efforts that are designed in the absence of a coherent governance framework tend to focus on what is measurable rather than what matters.&lt;/p&gt; 
&lt;p&gt;The relationship between governance and verification resembles the relationship between law and forensic science in criminal justice. Lawyers define what counts as a crime and what evidence is admissible. Forensic scientists develop the methods for gathering and analysing that evidence. Neither works well without the other, and both evolve in response to what the other demands.&lt;/p&gt; 
&lt;p&gt;Governance and verification are separate fields, inhabited by different people, with separate career tracks and separate conferences. But they need to work closely together and understand each other, and ensure that neither community assumes the other has things covered when they haven’t.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fai-governance-vs-verification&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Simulation</category>
      <category>Governance</category>
      <category>Verifyax</category>
      <category>AI Policy &amp; Regulation</category>
      <pubDate>Wed, 06 May 2026 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/ai-governance-vs-verification</guid>
      <dc:date>2026-05-06T23:00:00Z</dc:date>
      <dc:creator>Calum Chace</dc:creator>
    </item>
    <item>
      <title>Agentic-Enabled Innovation</title>
      <link>https://verifyax.conscium.com/blog/agentic-enabled-innovation</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/agentic-enabled-innovation" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/agentic_enabled_innovation_hero.svg" alt="Agentic Enabled Innovation" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Why AI Agents Could Finally Deliver the DAO Revolution&lt;/h2&gt; 
&lt;p&gt;In 2018, I wrote a short story – as part of a collection that formed the book Stories from 2045: AI and the future of work – called The Tao of DAO that imagined a world where decentralised autonomous organisations had replaced traditional companies. In that story, blockchain and AI converged to let anyone launch open projects with global contributors, paid fairly through reputation systems. Open-source versions of major platforms displaced corporate monopolies. Poverty was nearly eliminated. It was, I’ll admit, wildly optimistic.&lt;/p&gt;</description>
      <content:encoded>&lt;h2&gt;Why AI Agents Could Finally Deliver the DAO Revolution&lt;/h2&gt; 
&lt;p&gt;In 2018, I wrote a short story – as part of a collection that formed the book Stories from 2045: AI and the future of work – called The Tao of DAO that imagined a world where decentralised autonomous organisations had replaced traditional companies. In that story, blockchain and AI converged to let anyone launch open projects with global contributors, paid fairly through reputation systems. Open-source versions of major platforms displaced corporate monopolies. Poverty was nearly eliminated. It was, I’ll admit, wildly optimistic.&lt;/p&gt;  
&lt;p&gt;Seven years later, none of that has happened. DAOs raised billions, governed poorly, got hacked, and mostly stalled. The dream of coordination without centralised control remained stubbornly out of reach. Smart contracts turned out to be too rigid, too brittle, and too stupid to replace the messy, contextual, adaptive work that human organisations actually do.&lt;/p&gt; 
&lt;p&gt;But here’s what I didn’t anticipate in 2018: the missing ingredient wasn’t better smart contracts. It was intelligence itself.&lt;/p&gt; 
&lt;p&gt;What follows is a thesis, not a case study. We are too early in the deployment of agentic AI to have definitive evidence that agent-augmented organisations outperform traditional ones. But I believe the convergence of AI agents, new organisational thinking, and lessons from the DAO experiment points toward a model with transformative potential – if we get the design right. This article makes the case for what that model could look like, where the foundations already exist, and where the hard problems remain unsolved.&lt;/p&gt; 
&lt;h2&gt;What intelligence actually means – and why it matters for organisations&lt;/h2&gt; 
&lt;p&gt;The definition of intelligence I keep returning to comes from Robert Sternberg and William Salter: &lt;span style="font-weight: bold; color: #ffffff;"&gt;goal-directed adaptive behaviour&lt;/span&gt;. Three words, but the one that does all the heavy lifting is “adaptive.” Not the strongest. Not the fastest. Not the most efficient. Adaptive.&lt;/p&gt; 
&lt;p&gt;This is the same insight that sits at the heart of evolutionary biology. Darwin’s “survival of the fittest” – a phrase actually coined by Herbert Spencer – never meant strongest or smartest. It meant best adapted to the immediate, local environment. The species that thrives is the one that adjusts. The one that doesn’t is the one that dies, regardless of how powerful it once was.&lt;/p&gt; 
&lt;p&gt;If intelligence is goal-directed adaptive behaviour, then the most intelligent organisation is the one that is maximally adaptive in pursuing its goals. Which raises an uncomfortable question for most enterprises: how adaptive is your organisation, really?&lt;/p&gt; 
&lt;h2&gt;Innovation is the mechanism of organisational adaptation&lt;/h2&gt; 
&lt;p&gt;When I speak to business leaders, I often ask them for their definition of innovation. Most struggle. The best definition I’ve encountered is deceptively simple: &lt;span style="font-weight: bold; color: #ffffff;"&gt;creativity that ships&lt;/span&gt;. Steve Jobs captured the spirit of this when he told the original Macintosh team that “real artists ship” – it’s not enough to have ideas; what matters is getting them to the point where they generate value.&lt;/p&gt; 
&lt;p&gt;For me, the most important word in “creativity that ships” isn’t “creativity” – it’s “that.” The word “that” represents the entire innovation process: the messy, friction-filled journey from idea to impact. And that process is where most organisations fail. Not because they lack creative people, but because bureaucracy, disconnected systems, organisational silos, and institutional inertia create friction at every turn.&lt;/p&gt; 
&lt;p&gt;The more friction you remove, the more ideas ship. The more ideas ship, the more adaptive you become. The more adaptive you become, the more intelligent your organisation is by Sternberg’s definition. Intelligence, innovation, and adaptiveness form a virtuous cycle – or, in most organisations, a vicious one.&lt;/p&gt; 
&lt;h2&gt;AI agents can remove friction – but the real prize is the organisational digital twin&lt;/h2&gt; 
&lt;p&gt;AI is already removing friction. Agents draft documents, analyse data, automate workflows, and accelerate individual productivity. But these are point solutions – valuable, but incremental. The transformational opportunity is to connect these solutions together to create something far more powerful: a digital twin of the entire organisation.&lt;/p&gt; 
&lt;p&gt;A digital twin is a dynamic simulation model that mirrors how an organisation actually operates – its processes, decisions, resource allocations, and human behaviours. It lets you ask “what if?” at organisational scale. What I’m about to describe is aspirational – these are directions of travel, not finished products. None of these will materialise overnight; the data integration alone is a multi-year undertaking for most enterprises. But I believe there are three digital twins every organisation should be building toward if it wants to remain competitive in the next decade, and the organisations that start now will have a structural advantage that compounds over time.&lt;/p&gt; 
&lt;p&gt;&lt;span style="font-weight: bold; color: #ffffff;"&gt;The first is the front office or supply chain twin.&lt;/span&gt; This creates a simulation of the flow of goods and services across your entire value chain. Here’s a question I pose to retailers: if I run a marketing campaign that increases demand by 10%, can you tell me whether your suppliers will default on their commitments? Whether you have enough warehouse capacity? Enough distribution drivers? Enough retail floor space to fulfil the promise to the customer? Most organisations have disconnected supply chains and genuinely cannot answer these questions. The promise of AI agents is to wire these systems together into a living simulation where such scenarios can be modelled in minutes, not months. An agent swarm monitoring live data could adjust pricing, reroute logistics, or flag capacity constraints in real time – not at the next quarterly review.&lt;/p&gt; 
&lt;p&gt;&lt;span style="color: #ffffff; font-weight: bold;"&gt;The second is the back office twin.&lt;/span&gt; Every organisation has back office processes – hiring, onboarding, offboarding, expenses, budgeting, compliance – and most of them are bureaucratic nightmares that actively hinder innovation. Consider some of the radical alternatives that already exist, and what they suggest about what’s possible.&lt;/p&gt; 
&lt;p&gt;W.L. Gore &amp;amp; Associates, the company behind Gore-Tex, has operated for over 65 years with a “lattice” structure: no job titles, no hierarchy, no predetermined communication channels. Their peer-based compensation system has every associate ranked by 20–30 colleagues, with pay following the contribution curve. They’ve been profitable every year since 1958 and have appeared on every Fortune “100 Best Companies to Work For” list since the ranking began.&lt;/p&gt; 
&lt;p&gt;These aren’t fringe experiments, but they come with important caveats. Gore’s model has worked in part because of its specific culture, industry, and scale – the company deliberately keeps facility sizes small to preserve the lattice dynamics. The positive outcomes are real and suggestive, but they developed under conditions that are difficult to replicate at scale through cultural effort alone.&lt;/p&gt; 
&lt;p&gt;This is precisely the gap AI agents could fill. Gore and similar organisations achieved radical transparency and peer governance through heroic and sustained cultural effort. AI agents have the potential to automate much of the coordination overhead – the constant assessment, information-sharing, and consensus-building – that previously made these models exhausting to sustain. The bureaucratic processes that once required entire departments could, in principle, be run end-to-end by specialised agent swarms: financial closing, regulatory compliance, vendor management, all orchestrated autonomously with humans providing strategic oversight and handling exceptions. Whether this potential is realised will depend on implementation quality and, critically, on verification – but the direction of travel is clear.&lt;/p&gt; 
&lt;p&gt;&lt;span style="color: #ffffff; font-weight: bold;"&gt;The third – and most radical – is the workforce digital twin.&lt;/span&gt; Most organisations allocate decision-making authority through fixed hierarchies. This is, bluntly, a crude mechanism. A hierarchy is a compression algorithm for trust: we give authority to people in senior positions because we lack the tools to assess who actually has the right expertise for each specific decision. But what if we didn’t have to compress?&lt;/p&gt; 
&lt;p&gt;At Satalia, the company I founded in 2008, we experimented with exactly this. We had no managers. Everyone chose their own work. People set their own salaries. AI optimised the allocation of people to projects – and there are more ways of allocating 60 employees to 60 projects than there are atoms in the universe. We used algorithmic approaches to ensure the right diverse group of experts swarmed around each opportunity, weighted by their relevant expertise, learning goals, and personal values.&lt;/p&gt; 
&lt;p&gt;This is the direction a workforce digital twin points toward: understanding your people at a granular level and creating weighted, dynamic hierarchies where authority flows to expertise rather than position. Organisations like Buurtzorg – the Dutch healthcare provider with 10,000+ nurses operating in self-managing teams of 10–12, with only 2 directors and fewer than 50 back-office staff – offer suggestive evidence. Their overhead costs are 8% versus an industry average of 25%, and patient satisfaction is 30% higher than competitors. Haier, the Chinese appliance giant, eliminated 12,000 middle management positions and replaced them with 4,000+ microenterprises of 10–15 people, each with autonomous decision-making rights. Revenue now exceeds €47 billion.&lt;/p&gt; 
&lt;p&gt;These are impressive results, but honest analysis requires noting their context. Buurtzorg’s model works within a specific Dutch healthcare infrastructure and has had mixed results in international replication. Haier’s transformation was driven by Zhang Ruimin’s extraordinary personal authority – a decentralisation paradoxically imposed from the top in a Chinese corporate governance context where such authority was available. These models produced genuine outcomes, but they depended on conditions – exceptional founders, specific institutional contexts, favourable regulatory environments – that most organisations cannot simply reproduce. The question is whether AI agents can provide the coordination infrastructure that makes these approaches viable without requiring those exceptional preconditions. I believe they can, but this remains a thesis to be tested rather than a proven conclusion.&lt;/p&gt; 
&lt;p&gt;The workforce twin also points toward acceleration. Imagine being able to take an inexperienced graduate and meaningfully compress the journey to expert-level competence, because AI understands their learning style, their knowledge gaps, and the optimal sequence of experiences to close them. This is speculative – expertise in complex domains involves tacit knowledge and embodied experience that may not be fully compressible through any technology. But even partial progress toward this goal would be transformative. I’ll explore this dimension in a dedicated article, but the core insight is this: as we shift from Division of Labour to Division of Cognition, we need a workforce that can orchestrate, not just execute – and that demands a fundamentally different approach to talent development.&lt;/p&gt; 
&lt;h2&gt;The liquid organisation – from theory to technological reality&lt;/h2&gt; 
&lt;p&gt;What these companies share is a vision of what I call the “liquid organisation” – a structure that can reshape itself in response to the demands of its environment, much as water takes the shape of its container.&lt;/p&gt; 
&lt;p&gt;Joost Minnaar and Pim de Morree, founders of Corporate Rebels, have visited over 150 pioneering organisations and identified eight trends that distinguish progressive companies from traditional ones. The shifts are stark: from hierarchical pyramids to networks of teams, from directive leadership to supportive leadership, from centralised to distributed decision-making, from secrecy to radical transparency, from job descriptions to talent and mastery. Frederic Laloux’s Reinventing Organizations maps a similar evolution, from command-and-control “Amber” organisations through competitive “Orange” to what he calls “Teal” – living systems characterised by self-management, wholeness, and evolutionary purpose.&lt;/p&gt; 
&lt;p&gt;The ideas are beautiful. But let’s be honest about their limitations. Zappos’ holacracy experiment – a bottom-up self-management system imposed top-down via CEO ultimatum – saw 18% of staff leave and was described as “the management equivalent of Dungeons and Dragons.” Valve’s flat structure, famously featuring desks on wheels, produced remarkable games but also informal power cliques, diversity problems, and decision paralysis on social issues. Morning Star, the world’s largest tomato processor, runs on pure self-management with zero managers and nearly $1 billion in revenue – but roughly half of experienced external hires leave within two years.&lt;/p&gt; 
&lt;p&gt;The pattern is clear: radical organisational models can work brilliantly in specific contexts but hit scaling limits. The coordination overhead of genuine self-management – the constant negotiation, the consensus-building, the information-sharing – becomes exponentially more expensive as organisations grow. What these models need is a new kind of infrastructure. And this is where the most interesting convergence begins.&lt;/p&gt; 
&lt;h2&gt;The DAO dream – and why it broke&lt;/h2&gt; 
&lt;p&gt;This is precisely what Decentralised Autonomous Organisations were supposed to provide. The original DAO promise was intoxicating: smart contracts replacing management, token-based democratic governance, censorship-resistant global coordination. But smart contracts proved too rigid for the nuance that real governance demands, voter turnout across DAOs averages below 20%, and in ten major DAOs, 1% of token holders control 90% of votes. Of 30,000 DAOs analysed, 53% are inactive. Vitalik Buterin himself observed that when people have to make decisions every week, participation starts strong but inevitably decays.&lt;/p&gt; 
&lt;p&gt;The problem was philosophical as much as technical. DAOs tried to encode human coordination into deterministic code. But coordination isn’t deterministic – it’s adaptive, contextual, and requires judgment. Smart contracts can execute rules perfectly; they cannot exercise discretion. They can enforce a vote; they cannot assess whether the question was the right one to ask.&lt;/p&gt; 
&lt;h2&gt;The missing link: agentic intelligence&lt;/h2&gt; 
&lt;p&gt;This is where AI agents change the equation.&lt;/p&gt; 
&lt;p&gt;The first empirical study of agentic AI in DAO governance, published by Capponi et al. in October 2025, found strong alignment between AI agent decisions and human outcomes across 3,000+ proposals from major protocols. This is an encouraging starting point, though it’s worth noting that alignment with past human decisions measures consistency, not necessarily quality – if existing governance was shaped by low participation and whale-dominated voting, then faithfully replicating those patterns isn’t automatically progress. What matters is whether AI agents can improve governance quality over time, and that remains to be demonstrated.&lt;/p&gt; 
&lt;p&gt;That said, the potential is significant. AI agents could address the problems that crippled DAOs in three ways. They address voter apathy by enabling token holders to delegate to AI agents programmed with specific strategies, ensuring every proposal receives informed assessment. They reduce decision fatigue by handling routine decisions – treasury operations, parameter adjustments, compliance checks – while escalating only strategic decisions to human stakeholders. And most importantly, AI agents are adaptive rather than deterministic. Unlike smart contracts, they can interpret context, process natural language, learn from outcomes, and exercise something approximating judgment.&lt;/p&gt; 
&lt;p&gt;But let’s not romanticise this. Delegating your vote to an agent and walking away is just a more sophisticated form of the disengagement that killed DAOs in the first place. The real shift isn’t from “human votes” to “agent votes” – it’s from humans making every micro-decision to humans setting strategy and periodically recalibrating the agents that execute it. The human role in an agent-augmented DAO is more like a portfolio manager than a voter: you define the investment thesis, review performance, and adjust the mandate – you don’t trade every position yourself.&lt;/p&gt; 
&lt;p&gt;The honest question is whether humans will actually stay engaged in the recalibration role. The participation decay that Buterin identified – people showing up enthusiastically at first and then drifting away – is a human motivation problem, not a technical one. Adding an abstraction layer between humans and decisions might reduce friction, but it might also make disengagement easier. The moment humans stop actively shaping agent strategies, you’ve just built a faster path to the same governance decay. Solving this will require thoughtful incentive design and transparency mechanisms, not just better technology.&lt;/p&gt; 
&lt;p&gt;Buterin’s January 2026 framework calls explicitly for a DAO renaissance where AI augments – but doesn’t replace – human judgment. ai16z, launched in late 2024, became the first DAO led by an autonomous AI agent, reaching a $2.6 billion market cap. Autonolas created “Governatooorr,” an AI-enabled governance delegate. The hybrid human+AI DAO is no longer theoretical – it’s emerging, though it’s far too early to declare success.&lt;/p&gt; 
&lt;h2&gt;The agentic organisation: from Division of Labour to Division of Cognition&lt;/h2&gt; 
&lt;p&gt;But I want to push further than hybrid DAOs. What’s emerging is something more fundamental than bolting AI agents onto existing organisational structures. It’s an entirely new operating model – one I’d describe as the agentic decentralised organisation.&lt;/p&gt; 
&lt;p&gt;The Industrial Revolution gave us the Division of Labour: break complex work into simple, repeatable tasks and assign each to a specialist. This logic has shaped organisational design for 250 years. The pyramid, the functional silo, the assembly line, the outsourced call centre – all are expressions of the same principle: decompose and allocate human effort.&lt;/p&gt; 
&lt;p&gt;What AI agents enable is something categorically different: a &lt;span style="font-weight: bold; color: #ffffff;"&gt;Division of Cognition&lt;/span&gt;. Instead of dividing labour across people, you divide cognitive work across humans and AI agents according to their respective strengths. AI agents handle complex, high-volume cognitive workflows – analysing market data, generating creative variants, running financial models, managing compliance checks, orchestrating customer journeys. Humans provide strategic oversight, ethical judgment, creative direction, and exception handling. The human role shifts from “doer” to “orchestrator” – from performing tasks to supervising, directing, and curating the output of agent swarms.&lt;/p&gt; 
&lt;p&gt;This changes the fundamental unit of the organisation. The traditional building block is the department – marketing, finance, operations – each a functional silo with its own hierarchy. The agentic building block is what I’d call the “cell”: a small, multi-disciplinary team of perhaps 2–5 people who act as strategic orchestrators managing 50–100+ specialised AI agents that run end-to-end business processes autonomously.&lt;/p&gt; 
&lt;p&gt;These cells aren’t organised by function. They’re organised by mission. Not “the marketing department” but “the team reducing cart abandonment.” Not “the finance team” but “the cell optimising working capital across the supply chain.” Each cell has the autonomy to make real-time decisions – deploying agents, adjusting strategies, launching experiments – without waiting for approval from three layers of management. The traditional pyramid flattens into a networked, cell-based model where authority is distributed, speed is the default, and the organisation can reconfigure itself around new opportunities as fluidly as water finding a new path.&lt;/p&gt; 
&lt;p&gt;The implications, if this model proves viable, are profound. It could decouple cost from growth, since agents do the scaling. It could enable hyper-accelerated innovation, with small teams running rapid experiments through agent swarms at a pace traditional organisations cannot match. And it could create genuine real-time adaptability, with agents monitoring live data and adjusting pricing, logistics, or risk flags continuously rather than quarterly.&lt;/p&gt; 
&lt;p&gt;I should flag that the Division of Cognition raises hard questions I’m not going to fully resolve here – they deserve dedicated treatment in a follow-up piece. Who decides which cognitive tasks go to agents and which stay with humans, and how does that boundary evolve as agents become more capable? In the cell model, who is accountable when agents make consequential errors that no human specifically reviewed? Is there a risk that “orchestrator” gradually becomes a euphemism for “bystander” as the scope of human judgment narrows? These are not objections to the model – they’re design challenges that any serious implementation will need to address.&lt;/p&gt; 
&lt;p&gt;But – and this is critical – high autonomy without governance is anarchy. Agentic organisations require a fundamentally different approach to control. Slow, manual compliance checks and annual audits cannot govern systems that make thousands of decisions per hour. Instead, governance must be embedded directly into the AI workflows themselves.&lt;/p&gt; 
&lt;p&gt;This means dedicated “Control Agents” – specialised agents that act as automated, continuous auditors, monitoring every decision for compliance, fairness, and alignment with organisational values in real time. To make this concrete: imagine a marketing cell deploying 80 agents to generate and distribute campaign content across 30 markets. A Control Agent sits alongside them, scoring every piece of outbound copy against regulatory requirements, brand guidelines, and cultural sensitivity thresholds before it ships. It doesn’t just flag violations after the fact – it gates the workflow, preventing non-compliant content from ever reaching a customer. When edge cases arise that fall outside its confidence threshold, it escalates to a human orchestrator who makes the judgment call and, crucially, feeds that decision back into the system so the Control Agent learns.&lt;/p&gt; 
&lt;p&gt;The obvious question is: who governs the governors? If smart contracts failed because they were too rigid, won’t Control Agents eventually become rigid blockers themselves? This is where the adaptiveness of AI agents is genuinely different from deterministic code. A smart contract either permits or blocks – there’s no nuance, no learning, no escalation path. A well-designed Control Agent operates on a spectrum of confidence, can flag uncertainty rather than just enforcing rules, and improves over time as it encounters new scenarios. It’s not perfect – no governance system is – but it’s categorically more capable of handling the messiness of real-world decisions than a static set of if-then rules.&lt;/p&gt; 
&lt;p&gt;The critical discipline is continuous verification: ensuring these agents are actually doing what they’re supposed to do, detecting drift, and maintaining alignment over time. This is an area of active development, and I believe it will become one of the defining challenges – and opportunities – in the agentic era.&lt;/p&gt; 
&lt;p&gt;Humans operate not “in the loop” but “above the loop” – setting strategy, defining boundaries, and intervening on high-stakes decisions and ethical questions that require human judgment. This is where the DAO parallel becomes vivid. A DAO’s smart contracts were supposed to be its governance layer – immutable, transparent, autonomous. They failed because they were too rigid. An agentic organisation’s Control Agents serve the same function but with the adaptiveness that smart contracts lacked. The agentic decentralised organisation is, in effect, a DAO that could actually work – not because it removes humans from the loop, but because it finds the right division of cognition between human judgment and machine execution.&lt;/p&gt; 
&lt;h2&gt;What the C-suite needs to do now&lt;/h2&gt; 
&lt;p&gt;If you’re a CEO reading this and thinking it all sounds very theoretical, here’s the practical framework. In a previous article, I argued that solving Shadow AI – the phenomenon of employees using AI tools without IT approval – requires three things. Let me reframe those same three priorities through the lens of innovation and adaptiveness.&lt;/p&gt; 
&lt;p&gt;&lt;span style="font-weight: bold; color: #ffffff;"&gt;First, enable AI agents at the edge.&lt;/span&gt; Shadow AI isn’t a crisis – it’s a signal that your employees are innovating faster than your governance can adapt. At WPP, we’ve seen employees build over 28,000 AI agents through our Agent Builder platform. Rather than restricting this, channel it – within sandboxed environments that protect IP and client data while giving people genuine room to experiment. This is how cells form organically – small teams discovering that a cluster of agents can automate an entire workflow, then taking ownership of the outcome. The most adaptive organisations will be those where innovation happens at the edges, not the centre.&lt;/p&gt; 
&lt;p&gt;&lt;span style="font-weight: bold; color: #ffffff;"&gt;Second, build centralised SuperAgent capability.&lt;/span&gt; Edge innovation is necessary but insufficient. You also need curated, verified expert agents deployed centrally – the institutional knowledge of your organisation encoded in AI. Our Agent Hub provides “Super Agents” in brand analytics, behavioural science, and creative strategy, arming 100,000+ employees with expertise that previously sat in specialist teams. This is both the beginning of a back office digital twin and the foundation for the Division of Cognition: making expert-level cognitive capability available to every cell in the organisation, on demand.&lt;/p&gt; 
&lt;p&gt;&lt;span style="font-weight: bold; color: #ffffff;"&gt;Third, choose the right partners to enable an adaptive back office.&lt;/span&gt; No organisation will build all of this alone. The technology partnerships you choose now – for infrastructure, for agent verification, for embedded governance – will determine whether your back office becomes a source of adaptive capability or remains a drag on innovation. And agent verification matters enormously: in a world where agents are making thousands of decisions autonomously, you need rigorous, continuous assurance that they’re doing what they’re supposed to do.&lt;/p&gt; 
&lt;p&gt;In my 2018 story, I wrote that “ironically, it was the same technologies that gave the Horsemen dominance that eventually destroyed them.” I was imagining a world where AI and blockchain together dismantled corporate monopolies and enabled truly decentralised coordination.&lt;/p&gt; 
&lt;p&gt;I stand by the vision, even if the timeline was optimistic. What I underestimated was how long it would take for the right kind of intelligence to emerge. Smart contracts gave us deterministic execution without judgment. AI agents give us adaptive behaviour directed toward goals – which, by Sternberg’s definition, is intelligence itself.&lt;/p&gt; 
&lt;p&gt;The liquid organisation, the Teal paradigm, the DAO revolution – these were all attempts to solve the same fundamental problem: how to coordinate human effort without sacrificing the adaptiveness that intelligence requires. Each approach worked in specific contexts but lacked the connective tissue to scale. Gore needed heroic culture. Buurtzorg needed extraordinary founders and a supportive national infrastructure. DAOs needed humans to show up and vote. Haier needed a CEO with near-absolute authority to impose decentralisation from the top.&lt;/p&gt; 
&lt;p&gt;The agentic decentralised organisation doesn’t rely on heroism or perfect participation – it distributes cognition across humans and agents in a way that has the potential to be scalable, governable, and genuinely adaptive. Whether it fulfils that potential depends on how seriously we take the design challenges: verification, accountability, the governance of autonomous agents, and the human motivation problem that technology alone cannot solve.&lt;/p&gt; 
&lt;p&gt;The DAO dream didn’t die in 2016. It was waiting for the right kind of intelligence to bring it to life. And now that intelligence is arriving – not as a single superintelligent system, but as swarms of specialised agents that could handle the coordination overhead, embed governance into every decision, and enable small teams of orchestrators to operate with the speed and scale that were previously the exclusive advantage of corporate giants.&lt;/p&gt; 
&lt;p&gt;The most intelligent organisation of the next decade won’t be the one with the most AI. It will be the one that is most adaptive – the one where creativity ships fastest, where the right expertise swarms around every opportunity, and where the Division of Cognition between humans and agents becomes not a threat to jobs but a catalyst for the most innovative era in organisational history.&lt;/p&gt; 
&lt;p&gt;Real artists ship. The question is whether your organisation is structured to let them.&lt;/p&gt;  
&lt;p&gt;&lt;span style="color: #ffffff;"&gt;&lt;em&gt;Daniel Hulme is Chief AI Officer at WPP and founder of Conscium, an AI safety company focused on machine consciousness research. He founded Satalia in 2008 and co-founded Faculty AI. His 2018 article “2048 – Tao of DAO” is available on Medium.&lt;/em&gt;&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fagentic-enabled-innovation&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Governance</category>
      <category>Verifyax</category>
      <category>Enterprise AI Adoption</category>
      <category>AI &amp; Economy</category>
      <pubDate>Wed, 06 May 2026 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/agentic-enabled-innovation</guid>
      <dc:date>2026-05-06T23:00:00Z</dc:date>
      <dc:creator>Daniel Hulme</dc:creator>
    </item>
    <item>
      <title>Are today’s LLMs conscious?</title>
      <link>https://verifyax.conscium.com/blog/are-todays-llms-conscious</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/are-todays-llms-conscious" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/Verify%20Satellite%20Hero%20(1)-1.png" alt="Satellite hero" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Many people are tempted to believe that large language models (LLMs), such as ChatGPT, Gemini, and Mistral, might be conscious. This is understandable – they produce remarkably human-like text and conversations.&lt;br&gt;&lt;br&gt;However, the vast majority of experts in fields like AI research, neuroscience, philosophy, and software engineering believe it’s extremely unlikely that today’s LLMs are conscious. They believe that LLMs process patterns in language without subjective experiences, emotions, or awareness. There is nothing “it is like to be” for an LLM.&lt;br&gt;&lt;br&gt;It’s certainly possible that future AI systems could achieve forms of consciousness. Conscium was founded partly to address the important safety questions such developments would raise – both for humans and machines.&lt;br&gt;&lt;br&gt;For now, though, LLMs don’t have the complex biological or computational structures that seem to be essential for conscious experience. Nor do they consistently display behaviors — like self-awareness, intentional understanding, or unified perception — that we associate with being conscious.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Many people are tempted to believe that large language models (LLMs), such as ChatGPT, Gemini, and Mistral, might be conscious. This is understandable – they produce remarkably human-like text and conversations.&lt;br&gt;&lt;br&gt;However, the vast majority of experts in fields like AI research, neuroscience, philosophy, and software engineering believe it’s extremely unlikely that today’s LLMs are conscious. They believe that LLMs process patterns in language without subjective experiences, emotions, or awareness. There is nothing “it is like to be” for an LLM.&lt;br&gt;&lt;br&gt;It’s certainly possible that future AI systems could achieve forms of consciousness. Conscium was founded partly to address the important safety questions such developments would raise – both for humans and machines.&lt;br&gt;&lt;br&gt;For now, though, LLMs don’t have the complex biological or computational structures that seem to be essential for conscious experience. Nor do they consistently display behaviors — like self-awareness, intentional understanding, or unified perception — that we associate with being conscious.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fare-todays-llms-conscious&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Frontier</category>
      <category>LLMS</category>
      <pubDate>Sat, 19 Jul 2025 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/are-todays-llms-conscious</guid>
      <dc:date>2025-07-19T23:00:00Z</dc:date>
      <dc:creator>Conscium</dc:creator>
    </item>
    <item>
      <title>Machine consciousness and the 4Cs</title>
      <link>https://verifyax.conscium.com/blog/machine-consciousness-and-4cs</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/machine-consciousness-and-4cs" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/Machine%20consciousness%20and%20the%204%20Cs.jpg" alt="Machine consciousness and the 4Cs" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=lmAAeTr9g5E"&gt;Calum explains four scenarios (the four Cs)&lt;/a&gt; that may play out when superintelligence arrives, and the role that machine consciousness could play in determining whether we get catastrophe or celebration.&lt;br&gt;&lt;br&gt;As Joscha Bach, a well-known AI researcher and philosopher says, “Attempting to control highly advanced agentic systems far more powerful than ourselves is unlikely to succeed. Our only viable path may be to create AIs that are conscious, enabling them to understand and share common ground with us.”&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=lmAAeTr9g5E"&gt;Calum explains four scenarios (the four Cs)&lt;/a&gt; that may play out when superintelligence arrives, and the role that machine consciousness could play in determining whether we get catastrophe or celebration.&lt;br&gt;&lt;br&gt;As Joscha Bach, a well-known AI researcher and philosopher says, “Attempting to control highly advanced agentic systems far more powerful than ourselves is unlikely to succeed. Our only viable path may be to create AIs that are conscious, enabling them to understand and share common ground with us.”&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fmachine-consciousness-and-4cs&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Frontier</category>
      <category>Superintelligence</category>
      <pubDate>Fri, 02 May 2025 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/machine-consciousness-and-4cs</guid>
      <dc:date>2025-05-02T23:00:00Z</dc:date>
      <dc:creator>Conscium</dc:creator>
    </item>
    <item>
      <title>Why is machine consciousness important?</title>
      <link>https://verifyax.conscium.com/blog/why-is-machine-consciousness-important</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/why-is-machine-consciousness-important" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/24-12-16-Satalia-video.png" alt="Why is machine consciousness important?" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;In this video, Daniel explains why Conscium was founded, and why we should all be interested in whether machines can become conscious, whether that would be a good thing, and how to detect it happening.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In this video, Daniel explains why Conscium was founded, and why we should all be interested in whether machines can become conscious, whether that would be a good thing, and how to detect it happening.&lt;/p&gt; 
&lt;p&gt;&lt;a href="http://youtube.com/watch?v=JTw5UnaA-Hk&amp;amp;feature=youtu.be"&gt;Watch now&lt;/a&gt;&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Fwhy-is-machine-consciousness-important&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Frontier</category>
      <category>AI Safety</category>
      <pubDate>Fri, 02 May 2025 23:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/why-is-machine-consciousness-important</guid>
      <dc:date>2025-05-02T23:00:00Z</dc:date>
      <dc:creator>Conscium</dc:creator>
    </item>
    <item>
      <title>Fast and Slow Systems</title>
      <link>https://verifyax.conscium.com/blog/fast-and-slow-systems</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://verifyax.conscium.com/blog/fast-and-slow-systems" title="" class="hs-featured-image-link"&gt; &lt;img src="https://verifyax.conscium.com/hubfs/Fast-and-Slow-table-2.png" alt="Fast and slow systems" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;In his 2011 book Thinking, Fast and Slow, Daniel Kahneman described two types of thinking. Fast thinking is human intuition, which he also called System 1. It is quick, automatic, effortless, and enables us to make predictions and decisions quickly without explicit rules. It is good for everyday tasks, and for tasks essential to survival, like recognizing faces, understanding simple language, and detecting threats.&lt;br&gt;&lt;br&gt;Unfortunately, intuition is also prone to bias, error, and makes us jump to conclusions. Long passages of Kahneman’s book are devoted to describing the many cognitive biases which afflict humans. When we rely on intution, our true motivations are often obscured, even from ourselves.&lt;br&gt;&lt;br&gt;The slow thinking in Kahneman’s title is reasoning, which he also calls System 2 thinking. It is deliberate, analytical, and requires effort. It is accurate when given enough time and attention, and allows us to solve complex problems. It is transparent and explainable.&lt;br&gt;&lt;br&gt;Since the birth of AI as a science in a summer conference at Dartmouth College in New Hampshire in 1956, there have been two approaches to AI, and these approaches map onto Kahneman’s categories of fast and slow thinking.&lt;br&gt;&lt;br&gt;Artificial neural networks, which we know today as deep learning, are like Kahneman’s System 1 thinking – fast, or intuitive thinking. They require training on large datasets, and are good for image recognition, language generation, and game-playing. Artificial neural networks are susceptible to human-derived bias in their data sets, and they struggle to generalise outside their training data.&lt;br&gt;&lt;br&gt;As a type of artificial neural network, neuromorphic computing falls into the System 1 category.&lt;br&gt;&lt;br&gt;Symbolic AI, which also became known as good old-fashioned AI, or GOFAI, is like Kahneman’s System 2 thinking – slow thinking, or reasoning. It is good for solving complex problems and interpreting data. The reasoning process can be retraced and explained. It requires explicit rules and structured input data.&lt;br&gt;&lt;br&gt;Some experts have argued for years that human-level AI will require these two types of thinking to be combined, as they are in humans. This combination is sometimes called Neurosymbolic AI.&lt;br&gt;&lt;br&gt;Although the science of AI got started in 1956, it rarely troubled the mainstream media until 2012, when Geoff Hinton and some colleagues figured out a way to get artificial neural networks to function well. This became known as deep learning, which gave us the miracles we use daily on our smart phones, like search, maps, image recognition, and translation.&lt;br&gt;&lt;br&gt;2012 was the first AI Big Bang, and the second AI Big Bang happened when some Google researchers published a paper called Attention is All You Need, which introduced a type of neural networks called transformer AIs. These gave us large language models, like GPT-4, Claue, Gemini, Llama, and so on.&lt;br&gt;&lt;br&gt;Maybe the third AI Big Bang will be a breakthrough with either neuromorphic or neurosymbolic AI – and it might happen soon.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In his 2011 book Thinking, Fast and Slow, Daniel Kahneman described two types of thinking. Fast thinking is human intuition, which he also called System 1. It is quick, automatic, effortless, and enables us to make predictions and decisions quickly without explicit rules. It is good for everyday tasks, and for tasks essential to survival, like recognizing faces, understanding simple language, and detecting threats.&lt;br&gt;&lt;br&gt;Unfortunately, intuition is also prone to bias, error, and makes us jump to conclusions. Long passages of Kahneman’s book are devoted to describing the many cognitive biases which afflict humans. When we rely on intution, our true motivations are often obscured, even from ourselves.&lt;br&gt;&lt;br&gt;The slow thinking in Kahneman’s title is reasoning, which he also calls System 2 thinking. It is deliberate, analytical, and requires effort. It is accurate when given enough time and attention, and allows us to solve complex problems. It is transparent and explainable.&lt;br&gt;&lt;br&gt;Since the birth of AI as a science in a summer conference at Dartmouth College in New Hampshire in 1956, there have been two approaches to AI, and these approaches map onto Kahneman’s categories of fast and slow thinking.&lt;br&gt;&lt;br&gt;Artificial neural networks, which we know today as deep learning, are like Kahneman’s System 1 thinking – fast, or intuitive thinking. They require training on large datasets, and are good for image recognition, language generation, and game-playing. Artificial neural networks are susceptible to human-derived bias in their data sets, and they struggle to generalise outside their training data.&lt;br&gt;&lt;br&gt;As a type of artificial neural network, neuromorphic computing falls into the System 1 category.&lt;br&gt;&lt;br&gt;Symbolic AI, which also became known as good old-fashioned AI, or GOFAI, is like Kahneman’s System 2 thinking – slow thinking, or reasoning. It is good for solving complex problems and interpreting data. The reasoning process can be retraced and explained. It requires explicit rules and structured input data.&lt;br&gt;&lt;br&gt;Some experts have argued for years that human-level AI will require these two types of thinking to be combined, as they are in humans. This combination is sometimes called Neurosymbolic AI.&lt;br&gt;&lt;br&gt;Although the science of AI got started in 1956, it rarely troubled the mainstream media until 2012, when Geoff Hinton and some colleagues figured out a way to get artificial neural networks to function well. This became known as deep learning, which gave us the miracles we use daily on our smart phones, like search, maps, image recognition, and translation.&lt;br&gt;&lt;br&gt;2012 was the first AI Big Bang, and the second AI Big Bang happened when some Google researchers published a paper called Attention is All You Need, which introduced a type of neural networks called transformer AIs. These gave us large language models, like GPT-4, Claue, Gemini, Llama, and so on.&lt;br&gt;&lt;br&gt;Maybe the third AI Big Bang will be a breakthrough with either neuromorphic or neurosymbolic AI – and it might happen soon.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=146429849&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fverifyax.conscium.com%2Fblog%2Ffast-and-slow-systems&amp;amp;bu=https%253A%252F%252Fverifyax.conscium.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Frontier</category>
      <category>Neuroscience</category>
      <pubDate>Sun, 22 Dec 2024 00:00:00 GMT</pubDate>
      <guid>https://verifyax.conscium.com/blog/fast-and-slow-systems</guid>
      <dc:date>2024-12-22T00:00:00Z</dc:date>
      <dc:creator>Conscium</dc:creator>
    </item>
  </channel>
</rss>
