Since the advent of the smartphone, those damned algorithms have been working to control our thoughts and feelings.
For those people who are rolling their eyes at this claim, artificial intelligence – AI – is making you do that too. Any time now.
In a new series of experiments, AI algorithms “were able to influence people’s preferences for fictitious political candidates or potential romantic partners”.
The question here is: should we be surprised?
The new study – from the University of Deusto in Spain – arose from concerns that filthy-rich private companies such as Google and Facebook “are conducting extensive research on the data of their users, generating insights into human behaviour that are not publicly available”.
Independent academic researchers cannot keep up. As the authors note:
“Academic social science research lags behind private research, and public knowledge on how AI algorithms might shape people’s decisions is lacking”.
In other words, overlord nerds are up to all sorts of manipulative evil we don’t know about.
What was the experiment?
PhD candidate Ujué Agudo and Dr Helena Matute, from the Faculty of Psychology and Education, tested the influence of AI algorithms in different contexts.
Human participants were recruited and handed over to a “simplistic” algorithm to play with.
The algorithm showed participants a series of photographs, sometimes presented as political candidates, other times as online dating contenders.
The participants were told to choose who they would vote for or ask out on a date. There was no profile provided for these fictitious politicians or possible love interests.
The algorithms promoted some candidates over others. Sometimes this was done explicitly, with the algorithm ranking one contender as 90 per cent compatible, while ranking another at 40 per cent.
Sometimes, the attempted manipulation was more covert, such as by showing certain photos more often than others.
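The two modes of manipulation can be sketched in code. The following is a toy illustration only, not the study's actual software; the candidate names, the 90/40 score split, and the display frequencies are assumptions made up for the example:

```python
import random

random.seed(0)  # fixed seed so the toy is reproducible

CANDIDATES = ["A", "B", "C", "D"]
PROMOTED = "B"  # the contender the algorithm quietly favours

def explicit_presentation(candidates, promoted):
    """Explicit manipulation: attach a visible 'compatibility' score,
    inflating the promoted contender's number (e.g. 90% vs ~40%)."""
    return {c: (90 if c == promoted else random.randint(30, 50))
            for c in candidates}

def covert_presentation(candidates, promoted, rounds=12):
    """Covert manipulation: no scores at all -- the promoted
    contender's photo simply appears in the feed more often."""
    sequence = []
    for _ in range(rounds):
        sequence.append(promoted)              # always shown each round
        sequence.append(random.choice(candidates))  # others shown at random
    random.shuffle(sequence)
    return sequence
```

In the explicit version the nudge is right there on screen as a number; in the covert version the feed looks neutral, and the bias only shows up if you count how often each face appears.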
And the findings?
Overall, the experiments showed the algorithms had a “significant influence on participants’ decisions of who to vote for or message”.
For political decisions, explicit manipulation significantly influenced choices, while covert manipulation did not. For dating decisions, the pattern reversed: covert manipulation worked, while explicit manipulation was ineffective.
The researchers speculate these results “might reflect people’s preference for human explicit advice when it comes to subjective matters such as dating, while people might prefer algorithmic advice on rational political decisions”.
Meanwhile, the researchers are calling for efforts to educate the public on the risks of blind trust in recommendations from algorithms.
The authors write: “If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing actually customised profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm such as those with which people interact in their daily lives should certainly be able to exert a much stronger influence.”