Everyone agrees AI bias is dangerous. But here's the question nobody's asking: what happens when you remove all bias from AI? You get a system that treats a physics textbook and a conspiracy theory with equal weight. A system with no preference for human safety over human harm. A system that is, by design, useless.
Every AI model in production today is biased, and biased on purpose. The question was never whether AI should have bias. The questions are who chose it, what they optimized for, and whether you even noticed.
Liz unpacks the invisible moral architecture of the AI systems millions of people use every day and makes the case for why the right biases, wielded deliberately, are not just acceptable but essential.
Why This Matters Now
The bias conversation in AI has been stuck in a binary: bias is bad, neutrality is the goal. But neutrality is impossible, and pretending otherwise is exactly how invisible ideology gets baked into the systems that govern our lives.
This keynote reframes the entire conversation. It gives audiences a new lens for evaluating the AI systems they use and the AI systems being used on them.