If AI Is Predicting Your Future, Are You Still Free?

Part of being human is being able to defy the odds. Algorithmic prophecies undermine that.


As you read these words, there are likely dozens of algorithms making predictions about you. It was probably an algorithm that determined that you would be exposed to this article because it predicted you would read it. Algorithmic predictions can determine whether you get a loan or a job or an apartment or insurance, and much more.

These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through academic literature for the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people—beings who are supposed to be infused with agency and free will.

Defying the odds is at the heart of what it means to be human. Our greatest heroes are those who defied their odds: Abraham Lincoln, Mahatma Gandhi, Marie Curie, Helen Keller, Rosa Parks, Nelson Mandela, and beyond. They all succeeded wildly beyond expectations. Every schoolteacher knows kids who have achieved more than the hand they were dealt. In addition to improving everyone's baseline, we want a society that allows and stimulates actions that defy the odds. Yet the more we use AI to categorize people, predict their future, and treat them accordingly, the more we narrow human agency, which will in turn expose us to uncharted risks.

Human beings have been using prediction since before the Oracle of Delphi. Wars were waged on the basis of those predictions. In more recent decades, prediction has been used to inform practices such as setting insurance premiums. Those forecasts tended to be about large groups of people—for example, how many people out of 100,000 will crash their cars. Some of those individuals would be more careful and lucky than others, but premiums were roughly homogeneous (except for broad categories like age groups) under the assumption that pooling risks allows the higher costs of the less careful and lucky to be offset by the relatively lower costs of the careful and lucky. The larger the pool, the more predictable and stable premiums were.
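
The stabilizing effect of pooling is just the law of large numbers, and a few lines of simulation show it. Here is a minimal sketch with invented numbers (a hypothetical 3 percent chance of a crash and a $10,000 claim):

```python
import random

def premium_per_person(pool_size, crash_prob=0.03, claim_cost=10_000):
    """Simulate one year: each member crashes with probability crash_prob,
    and the pool splits the total claims evenly as a break-even premium."""
    crashes = sum(1 for _ in range(pool_size) if random.random() < crash_prob)
    return crashes * claim_cost / pool_size

random.seed(0)
for size in (100, 10_000, 100_000):
    years = [premium_per_person(size) for _ in range(50)]
    print(f"pool of {size:>7,}: premiums ranged from "
          f"${min(years):,.0f} to ${max(years):,.0f} over 50 simulated years")
```

In a typical run, the 100-person pool's break-even premium swings by hundreds of dollars from year to year, while the 100,000-person pool stays close to the expected $300.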

Today, prediction is mostly done through machine learning algorithms that use statistics to fill in the blanks of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms that are applied to human behavior use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. Under such a model, insurance is no longer about pooling risk from large sets of people. Rather, predictions have become individualized, and you are increasingly paying your own way, according to your personal risk scores—which raises a new set of ethical concerns.

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

Institutions today, however, often try to pass off predictions as if they were a model of objective reality. And even when AI’s forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because human beings are bad at understanding probability and partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score.)
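
In code, that incentive structure collapses the probability at the moment of decision: whatever the scores mean, the choice itself is all-or-nothing. A toy sketch (all names and scores invented):

```python
def hire(candidates):
    """Pick the candidate with the lowest predicted risk. The scores are
    probabilities, but the decision is binary: the 0.75-risk candidate is
    treated as if failure were certain and is never chosen."""
    return min(candidates, key=lambda c: c["predicted_risk"])

candidates = [
    {"name": "A", "predicted_risk": 0.75},
    {"name": "B", "predicted_risk": 0.40},
    {"name": "C", "predicted_risk": 0.55},
]
print(hire(candidates)["name"])  # always "B"; A's 25 percent chance of succeeding never matters
```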

The ways we are using predictions raise ethical issues that lead back to one of the oldest debates in philosophy: If there is an omniscient God, can we be said to be truly free? If God already knows all that is going to happen, that means whatever is going to happen has been predetermined—otherwise it would be unknowable. The implication is that our feeling of free will is nothing but that: a feeling. This view is called theological fatalism.

What is worrying about this argument, above and beyond questions about God, is the idea that, if accurate forecasts are possible (regardless of who makes them), then that which has been forecasted has already been determined. In the age of AI, this worry becomes all the more salient, since predictive analytics are constantly targeting people.

One major ethical problem is that by making forecasts about human behavior just like we make forecasts about the weather, we are treating people like things. Part of what it means to treat a person with respect is to acknowledge their agency and ability to change themselves and their circumstances. If we decide that we know what someone’s future will be before it arrives, and treat them accordingly, we are not giving them the opportunity to act freely and defy the odds of that prediction.

A second, related ethical problem with predicting human behavior is that by treating people like things, we are creating self-fulfilling prophecies. Predictions are rarely neutral. More often than not, the act of prediction intervenes in the reality it purports to merely observe. For example, when Facebook predicts that a post will go viral, it maximizes exposure to that post, and lo and behold, the post goes viral. Or, let’s return to the example of the algorithm that determines you are unlikely to be a good employee. Your inability to get a job might be explained not by the algorithm’s accuracy, but by the algorithm itself recommending against hiring you and by companies taking its advice. Getting blacklisted by an algorithm can severely restrict your options in life.
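
The feedback loop is easy to make concrete. Here is a deliberately simplified sketch of a feed that promotes whatever it predicts will go viral and then refits its estimate on the views that promotion produced (post names and scores are invented):

```python
def run_feed(predictions, rounds=5, audience=1_000):
    """Each round, promote the top-predicted post, then re-estimate
    'virality' from the view counts the promotion itself created."""
    views = {post: 0 for post in predictions}
    for _ in range(rounds):
        top = max(predictions, key=predictions.get)  # the feed's pick
        views[top] += audience                       # exposure follows the forecast
        total = sum(views.values())
        predictions = {p: views[p] / total for p in predictions}  # read back as confirmation
    return views, predictions

views, final = run_feed({"post_a": 0.6, "post_b": 0.5})
print(views)  # {'post_a': 5000, 'post_b': 0}: a small initial edge becomes total dominance
print(final)  # {'post_a': 1.0, 'post_b': 0.0}: the forecast manufactured its own evidence
```

Note that the model never tests post_b at all, so the data it collects can only ever agree with its own prediction.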

The philosophers who were concerned with theological fatalism in the past worried that if God is omniscient and omnipotent, then it’s hard not to blame God for evil. As David Hume wrote, “To reconcile the […] contingency of human actions with prescience […] and yet free the Deity from being the author of sin, has been found hitherto to exceed all the power of philosophy.” In the case of AI, if predictive analytics are partly creating the reality they purport to predict, then they are partly responsible for the negative trends we are experiencing in the digital age, from increasing inequality to polarization, misinformation, and harm to children and teenagers.

Ultimately, the extensive use of predictive analytics robs us of the opportunity to have an open future in which we can make a difference, and this can have a destructive impact on society at large.

Throughout history, we have come up with ways of living that challenge fatalism. We go to great lengths to educate our children, hoping that everything we invest will lead them to have better lives than they otherwise would. We make an effort to improve our habits in the hopes of enjoying better health. We praise good behavior to encourage more of it, and to acknowledge that people could have made worse choices. We punish wrongdoers, at least partly to disincentivize them and others from transgressing social norms, and partly to blame people who we think should’ve acted better. We strive to structure our societies on the basis of merit.

None of those social practices that are so fundamental to our way of life would make any sense if we thought or behaved as if people’s destinies were sealed. Praise and blame would be entirely inappropriate. Imagine a world without grades, fines, incentives, or punishments of any kind; a world without any attempts to change the future; a world in which people live in absolute resignation to a prophecy. It’s almost unthinkable. If the future of every company could be forecast with precision, the financial markets as we know them would instantly collapse, and with them, our economy. Though this extreme possibility is unlikely to happen, we don’t want to go down a road that gets us closer to it.

There is an irresolvable tension between the practice of predicting human behavior and the belief in free will as part of our everyday life. A healthy degree of uncertainty about what is to come motivates us to want to do better, and it keeps possibilities open. The desire to leave no potential data point uncollected with the objective of mapping out our future is incompatible with treating individuals as masters of their own lives.

We have to choose between treating human beings as mechanistic machines whose future can and should be predicted (in which case it would be nonsensical to believe in meritocracy), or treating each other as agents (in which case making people the target of individual predictions is inappropriate). It would never occur to us to put a tractor or other machine in jail. If human beings are like tractors, then we shouldn’t jail them either. If, on the other hand, human beings are different from machines, and we want to continue to impart praise and blame, then we shouldn’t treat people as things by predicting what they are going to do next as if they had no say in the matter.

Predictions are not innocuous. The extensive use of predictive analytics can even change the way human beings think about themselves. There is value in believing in free will. Research in psychology has shown that undermining people’s confidence in free will increases cheating, aggression, and conformity and decreases helpful behavior and positive feelings like gratitude and authenticity. The more we use predictive analytics on people, the more we conceptualize human beings as nothing more than the result of their circumstances, and the more people are likely to experience themselves as devoid of agency and powerless in the face of hardship. The less we allow people opportunities to defy the odds, the more we will be guilty of condemning them, and society, to the status quo.

By deciding the fate of human beings on the basis of predictive algorithms, we are turning people into robots. People’s creativity in challenging probabilities has helped save entire nations. Think of Roosevelt and Churchill during World War II. They overcame unspeakable difficulties in their personal and professional lives and helped save the world from totalitarianism in the process. The ability to defy the odds is one of the greatest gifts of humanity, and we undermine it at our peril.

