How AI predictions are forecasting an autonomy-less future for consumers

As humans, we have been making predictions since before the Oracle of Delphi, and wars were waged on the basis of those forecasts. In more recent decades, prediction has been used to inform practices such as setting insurance premiums. Those forecasts tended to be about large groups of people: for example, how many people out of 100,000 will crash their cars.

Today, prediction is mostly done through machine learning algorithms that use statistics to fill in the blanks of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms that are applied to human behaviour use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. 
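To make that "filling in the blanks" concrete, here is a minimal illustrative sketch of the idea behind statistical next-word prediction. It is a toy bigram model built on a made-up corpus, not how any production language system actually works; real models are vastly larger and more sophisticated, but the principle of predicting the most plausible continuation from observed frequencies is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the enormous language databases real systems use.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most plausible continuation of `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", the most frequent word after "the"
```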

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

It seems, however, that institutions today try to pass off predictions as if they were a model of objective reality. And even when AI’s forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score.)
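The following sketch shows how that hardening happens in practice. The candidate names, scores, and the 0.50 cut-off are all invented for illustration; no real hiring system is being described. The point is only that a threshold turns a probability into a binary outcome:

```python
# Hypothetical candidates with model-assigned risk scores (probabilities, not facts).
candidates = {"candidate_a": 0.75, "candidate_b": 0.40, "candidate_c": 0.55}

RISK_THRESHOLD = 0.50  # an arbitrary cut-off chosen by the institution

for name, risk in sorted(candidates.items(), key=lambda item: item[1]):
    # The probabilistic score is collapsed into a binary yes/no outcome:
    # a 75 percent "risk" is treated exactly like a certainty.
    decision = "reject" if risk >= RISK_THRESHOLD else "consider"
    print(f"{name}: predicted risk {risk:.0%} -> {decision}")
```

Notice that candidate_a is never hired, so the 25 percent chance that the prediction was wrong is never tested.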

A major ethical problem with modern-day predictions is that by making forecasts about human behaviour just as we make forecasts about the weather, we are treating people like things and stripping them of their autonomy. TikTok and the music industry are a great example of this. Rather than fund artists making diverse and artistic music, labels are funding artists in a money-first fashion, prioritising songs that are predicted to be more likely to go viral on the app.

Part of what it means to treat a person, or a community of people, with respect is to acknowledge their agency and their ability to change themselves and their circumstances. If we decide that we know what someone’s future will be before it arrives, and treat them accordingly, we are not giving them the opportunity to act freely and defy the odds of that prediction.

A second, related ethical problem with predicting human behaviour is that by treating people like things, we are creating self-fulfilling prophecies. Predictions are rarely neutral. More often than not, the act of prediction intervenes in the reality it purports to merely observe. For example, when Facebook predicts that a post will go viral, it maximises exposure to that post, and lo and behold, the post goes viral. 
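A toy simulation makes the feedback loop visible. The boost factor, engagement rate, and starting numbers are all assumptions chosen for illustration, not a description of any real ranking system; what matters is the structure, where the prediction itself drives the exposure that confirms it:

```python
# Two identical posts; the model happens to predict one will go viral.
posts = {"post_a": {"predicted_viral": True, "views": 100},
         "post_b": {"predicted_viral": False, "views": 100}}

ENGAGEMENT_RATE = 0.1  # assume 10% of viewers share, attracting new viewers

for step in range(10):
    for post in posts.values():
        # The platform boosts exposure for posts it predicts will go viral...
        boost = 3.0 if post["predicted_viral"] else 1.0
        # ...so the prediction feeds the very engagement it claimed to foresee.
        post["views"] += int(post["views"] * ENGAGEMENT_RATE * boost)

for name, post in posts.items():
    print(name, "views:", post["views"])
# post_a ends with several times the views of post_b:
# the forecast produced its own confirmation.
```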

Furthermore, the extensive use of predictive analytics robs us of the opportunity to have an open future in which we can make a difference, and this can have a destructive impact on society at large.

There is an irresolvable tension between the practice of predicting human behaviour and the belief in free will as part of our everyday life. A healthy degree of uncertainty about what is to come motivates us to want to do better, and it keeps possibilities open. The desire to leave no potential data point uncollected with the objective of mapping out our future is incompatible with treating individuals as masters of their own lives.

Predictions are not innocuous. The extensive use of predictive analytics can even change the way human beings think about themselves. There is value in believing in free will. Research in psychology has shown that undermining people’s confidence in free will increases cheating, aggression, and conformity and decreases helpful behaviour and positive feelings like gratitude and authenticity. The more we use predictive analytics on people, the more we conceptualise human beings as nothing more than the result of their circumstances, and the more people are likely to experience themselves as devoid of agency and powerless in the face of hardship. The less we allow people opportunities to defy the odds, the more we will be guilty of condemning them, and society, to the status quo.
