Everywhere, AI is breaking. And everywhere, it’s breaking us.
The breaking ensues whenever AI encounters ambiguity or volatility. And in our hazy, unstable world, that’s all the time: Either the data can be interpreted another way or it’s rendered obsolete by new events. At which point, AI finds itself looking at life through errant eyes, seeing left as right or now as yesterday. Yet because AI lacks self-awareness, it doesn’t realize that its worldview has cracked. So, on it whirs, unwittingly transmitting the fracture to all the things plugged into it. Cars are crashed. Insults are hurled. Allies are auto-targeted.
This breaks humans in the direct sense of harming, even killing, us. But it has also begun breaking us in a more subtle way. AI can malfunction at the mildest hint of data slip, so its architects are doing all they can to dampen ambiguity and volatility. And since the world’s primary source of ambiguity and volatility is humans, we have found ourselves aggressively stifled. We’ve been forced into metric assessments at school, standard flow patterns at work, and regularized sets at hospitals, gyms, and social-media hangouts. In the process, we’ve lost large chunks of the independence, creativity, and daring that our biology evolved to keep us resilient, making us more anxious, angry, and burned out.
If we want a better future, we need to pursue a different remedy to AI’s mental fragility. Instead of remaking ourselves in AI’s brittle image, we should do the opposite. We should remake AI in the image of our antifragility.
Durability is simply resisting damage and chaos; antifragility is getting stronger from damage and smarter from chaos. This can seem more magical than mechanical, but it’s an innate capacity of many biological systems, including human psychology. When we’re kicked in the face, we can bounce back tougher with courage. When our plans collapse, we can rally for the win with creativity.
Building these antifragile powers into AI would be revolutionary. (Disclosure: Angus Fletcher currently advises AI projects, including antifragile AI, within the US Department of Defense.) We can achieve that revolution if we upend our current way of thinking.
First, we must banish the futurist delusion that AI is the smarter version of ourselves. AI’s method of cogitation is mechanically distinct from human intelligence: Computers lack emotion, so they can’t literally be courageous, and their logic boards can’t process narrative, rendering them incapable of adaptive strategy. Which means that AI antifragility won’t ever be human, let alone superhuman; it will be a complementary tool with its own strengths and weaknesses.
We then must step toward heresy by acknowledging that the root source of AI’s current fragility is the very thing that AI design now venerates as its high ideal: optimization.
Optimization is the push to make AI as accurate as possible. In the abstract world of logic, this push is unambiguously good. Yet in the real world where AI operates, every benefit comes at a cost. In the case of optimization, the cost is data. More data is required to improve the precision of machine learning’s statistical computations, and better data is necessary to ensure that those computations are accurate. To optimize AI’s performance, its handlers must intel-gather at scale, hoovering up cookies from apps and online spaces, spying on us when we’re too oblivious or exhausted to resist, and paying top dollar for inside information and backroom spreadsheets.