There’s No Such Thing as AI Ethics

Written by h2o | May 8, 2026 7:15:22 AM

Over the past few years, something curious has happened. A new professional class has emerged  -  the AI Ethicist. LinkedIn profiles have been updated, consultancies rebranded, and conference panels filled with people who, seemingly overnight, became experts in the ethical implications of artificial intelligence. The growth has been dramatic, and it deserves scrutiny.

Not because ethics don’t matter  -  they matter enormously. But because the term “AI Ethics” has become a catch-all that obscures important distinctions: between genuine philosophical questions, normative choices about fairness and justice, and what are, in many cases, engineering and safety problems. That conflation is doing real damage to all three.

What is ethics, actually?

Ethics, broadly, is the study of right and wrong  -  a discipline concerned with moral principles, human conduct, and the frameworks we use to evaluate action and its consequences. It’s a field with millennia of intellectual heritage, from Aristotle’s virtue ethics to Kant’s categorical imperative to the utilitarian tradition of Bentham and Mill, through to contemporary applied ethics in medicine, law, and business.

Different ethical traditions emphasise different things. Kantian ethics focuses on intent  -  why a moral agent chooses to act in a particular way. Consequentialism focuses on outcomes  -  the effects of actions, regardless of the actor’s motivation. Virtue ethics asks about the character of agents and institutions. These distinctions matter, because the “AI Ethics” narrative tends to collapse all of ethics into a single question  -  usually intent  -  and then declares the whole field irrelevant because AI systems don’t have any.

AI systems don’t have intent. They don’t choose, they optimise. This means that questions about the moral agency of AI systems are indeed misplaced  -  at least for now. But it does not follow that the problems AI creates are not ethical problems. The outcomes AI produces, the fairness of its distributions, and the systems of accountability surrounding its deployment all remain genuinely ethical questions. They are questions about human ethics, applied to a powerful new class of tools.

Bias is an engineering problem  -  but defining it is not

The most commonly cited example of an “AI ethics” issue is bias  -  a hiring algorithm that discriminates, a facial recognition system that performs poorly on certain demographics, a language model that produces stereotyped outputs. These are serious problems. And the detection and mitigation of bias is indeed an engineering and safety problem. The algorithm didn’t intend to discriminate. It found statistical patterns in data that reflected historical biases, and it optimised accordingly. Better data, better testing, and better engineering are essential parts of the fix.

But engineering alone cannot tell you what counts as unacceptable bias, or which fairness metric to use. Research in algorithmic fairness has demonstrated that common definitions of fairness  -  such as equalised odds, demographic parity, and calibration  -  are mathematically incompatible in most real-world settings. Choosing between them is an irreducibly normative decision. It requires reasoning about justice, values, and trade-offs that no amount of code review will resolve.

We already have well-established governance structures  -  regulatory compliance, risk management, audit functions  -  that exist to evaluate the decisions humans make. You don’t need to boot up a whole new ethics committee to address every AI challenge. But you do need your existing governance structures to be asking the right normative questions, not just the right engineering questions. And in many cases, those structures need significant adaptation to cope with the speed, opacity, and scale of AI-driven decisions.

The trolley problem is misunderstood

People love to discuss the trolley problem. Should you throw a switch to divert a runaway train from the track where it will kill five people, to one where it will kill just one person? Or, replacing the switch with a large man on a bridge, should you throw that man onto the track to save the five?

People also love to invoke the trolley problem when discussing AI ethics. Should the autonomous vehicle swerve if doing so would save five children but kill one elderly pedestrian? Should the algorithm prioritise one patient over another? But this framing misses the actual insight of the trolley problem.

The philosophical depth of the trolley problem isn’t really about whether you should pull the lever  -  most people agree you should divert the trolley to save more lives. The real nuance is why people who would happily pull a lever refuse to push a person off a bridge, despite both scenarios producing identical outcomes. It reveals something about human moral psychology  -  about the role of physical agency, emotional proximity, and yes, intent in ethical reasoning. It’s a problem about the human mind, not about the machine.

That said, the trolley problem has found one genuinely useful application in AI contexts  -  not as a design tool, but as a way of studying how people want machines to behave. The MIT Moral Machine project used trolley-style dilemmas to map cross-cultural variation in moral intuitions about autonomous vehicles. This doesn’t resolve the engineering question, but it does illuminate the normative landscape that engineers are operating in.

The ride-hailing algorithm

Consider a more grounded example. A ride-hailing company deploys an AI pricing algorithm. The system discovers a correlation: people with low phone battery are more likely to accept higher prices. The immediate narrative writes itself  -  “the algorithm is exploiting a human vulnerability.” But let’s be precise. The algorithm hasn’t exploited anyone. It has no concept of exploitation. It found a statistical correlation and optimised for it.

The real questions are: first, can we actually see what the algorithm is doing? This is an engineering challenge  -  building explainable, auditable systems that surface these kinds of correlations. And second, once we see it, what do we choose to do about it? Perhaps we remove battery data from the model’s inputs. Or perhaps we do something more interesting  -  use the insight to prioritise rides for people with low batteries, turning a potential vulnerability into a better customer experience. Both are legitimate choices, but they’re made by humans with intent, scrutinised through existing governance structures.

The algorithm has no moral agency. But it is not ethically neutral  -  it encodes the choices and assumptions of its designers, and it produces real consequences in the world. The locus of ethical responsibility remains with the humans who build and deploy it, but that doesn’t make the system itself irrelevant to ethical analysis. A redlining map doesn’t “intend” to discriminate either, but it would be odd to call it ethically inert.

Five questions, not an ethics committee

I’ve been building and deploying AI systems in production for over two decades. In that time, I’ve found that the challenges people label “AI ethics” are better addressed by asking five practical questions  -  none of which require a new discipline, but all of which require intellectual honesty about where engineering ends and normative reasoning begins.

  • First: is the intent appropriate? Before any algorithm is built, someone decides what it should optimise for. Someone chooses the objective function, selects the training data, defines the success metrics. These are human decisions, made with human intent, and they should be scrutinised with the same rigour we apply to any consequential business or policy decision. Existing governance structures are capable of interrogating intent. The question is whether organisations actually use them.

  • Second: are your algorithms explainable? Building explainable AI systems is genuinely hard  -  perhaps one of the most difficult engineering challenges in the field. But it’s worth the effort, because solving explainability makes almost every other challenge more tractable. Transparency, security, auditability, safety, regulatory compliance  -  all of these become dramatically easier when you can actually understand what your system is doing and why.
  • Third: not what happens when your AI goes wrong  -  but what happens when it goes very right? Engineers are trained to think about failure modes. We build systems, identify where they might break, and mitigate accordingly. But AI introduces a genuinely novel risk: massive overachievement. Perhaps for the first time ever, we’re building systems that can pursue an objective so effectively that they cause enormous harm or disruption elsewhere. For example, a supply chain optimisation algorithm that cuts costs so aggressively it bankrupts a tier of suppliers. This is a systems engineering challenge, and it demands the kind of rigorous scenario planning and constraint design that good engineering has always required.
  • Fourth: have you actually tested your AI? This might seem obvious, but the reality across the industry is alarming. Companies are building AI-embedded software and deploying autonomous agents without spending the effort and rigour required to ensure those systems are properly tested  -  both functionally and non-functionally.

    Functional testing means verifying the system does what it’s supposed to: does your customer service agent actually resolve queries correctly? Does your document processing pipeline extract the right information?

    Non-functional testing means stress-testing everything else: how does the system perform under load? How does it handle adversarial inputs? What happens when it encounters edge cases outside its training distribution? Does it degrade gracefully or catastrophically?

    In traditional software engineering, we’ve spent decades building mature testing methodologies  -  unit tests, integration tests, regression suites, performance benchmarks. If you wouldn’t ship traditional software without testing it, you certainly shouldn’t be shipping AI without testing it.

  • Fifth: have the people affected by this system had meaningful input? You can test thoroughly, build explainable systems, and still cause serious harm if you never consulted the communities your system affects. A large body of work in technology design  -  from participatory design to fairness research  -  demonstrates that engineering rigour alone is insufficient without input from the people being modelled, scored, or served. Who was in the room when the system’s objectives were defined? Whose data was used, and did they have any say in how? Were the communities most likely to bear the costs of errors involved in evaluating the system’s performance? These are not purely technical questions, and they cannot be answered from inside an engineering team alone.

These five questions  -  intent, explainability, overachievement, testing, and affected-community participation  -  cover the vast majority of what people mean when they say “AI ethics.” And none of them require a new ethical framework. They require good engineering, good governance, normative reasoning where it is genuinely needed, and the discipline to apply all three.

Where real AI ethics begins

The genuine ethical questions surrounding AI exist on two timescales. The first is already upon us: the ethics of autonomous weapons deployment, mass surveillance, the use of AI in criminal sentencing, the concentration of AI capabilities in a small number of corporations, and questions about consent and data use at scale.

The second timescale is longer but may arrive sooner than we expect. Could a sufficiently advanced AI system have subjective experiences? Could an AI suffer? If so, what obligations would we have toward it? What are the moral implications of creating and potentially destroying billions of AI instances? How do we evaluate the economic disruption of AI-driven job displacement  -  not just practically, but morally? What happens to human dignity, purpose, and meaning in a world of increasingly capable machines?

These are profound, genuinely difficult questions that sit at the intersection of consciousness studies, moral philosophy, cognitive science, and political economy. They deserve  -  and demand  -  serious academic rigour.

Beware the bandwagon

And here lies my deeper concern. We should be cautious when people rebrand themselves as experts of the latest shiny thing. Does your AI ethicist have an extensive academic or applied pedigree in ethics, philosophy, consciousness studies, or a relevant technical discipline? Have they spent years thinking and writing about these issues? Or did they simply append “AI” to their title when the wave arrived?

Looking ahead, I worry that AI consciousness and AI suffering will become the hot topics  -  and that everyone will wade in with a position. This is particularly dangerous because the field of serious consciousness research is surprisingly young. The science of consciousness was considered unrespectable and career-limiting until quite recently, and despite some brilliant work, it remains fragmented, contested, and methodologically immature. This makes it acutely vulnerable to self-declared experts who shout the loudest, steering the debate in unhealthy and unproductive directions.

So by all means, let’s take the ethical dimensions of AI seriously. Let’s fund the philosophers and the computational neuroscientists, and engage with the hard questions. But let’s also call engineering problems what they are  -  and let’s be honest about the places where normative reasoning is genuinely required, rather than pretending that better testing will resolve every dilemma.