AI Can Be Trained to Manipulate Human Behavior and Decisions, According to Research in Australia
It’s happening, and we can’t stop it; in fact, we’re helping to make it happen, a ‘vicious’ cycle.
Artificial Intelligence (AI) researchers in Australia have demonstrated how it is possible to train a system to manipulate human behavior and decision-making, highlighting the double-edged sword that is modern high tech.
AI now pervades the vast majority of contemporary human society and governs many of the ways we communicate, trade, work and live. It also assists in areas ranging from critical objectives like vaccine development to the more mundane sphere of office administration, and many in between.
It also shapes, in a number of ways, how humans interact on social media.
A new study by researchers at the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61 designed and tested a method to find and exploit vulnerabilities in human decision-making, using an AI system called a recurrent neural network.
In three experiments that pitted man against machine, the researchers showed how an AI can be trained to identify vulnerabilities in human habits and behaviors and to weaponize them to influence human decision-making.
In the first experiment, humans clicked on red or blue boxes to earn in-game currency. The AI studied their choice patterns and began guiding them toward specific decisions, with a roughly 70-percent success rate. Small fry, but only the beginning.
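The study’s actual system was a recurrent neural network, which isn’t reproduced here. As a toy sketch of the underlying idea only, the simulation below is built entirely on assumptions: a hypothetical participant who follows a win-stay/lose-shift habit with occasional random exploration, and an adversary that pays out only when the target colour is chosen.

```python
import random

def steer_choices(trials=1000, target="red", noise=0.2, seed=0):
    """Toy sketch: an adversary that controls payouts steers a
    habit-driven chooser toward a target colour."""
    rng = random.Random(seed)
    choice = rng.choice(["red", "blue"])
    hits = 0
    for _ in range(trials):
        if choice == target:
            hits += 1
        rewarded = choice == target  # adversary rewards only the target
        if rng.random() < noise:
            choice = rng.choice(["red", "blue"])  # random exploration
        elif not rewarded:
            # lose-shift: switch colours after an unrewarded pick
            choice = "blue" if choice == "red" else "red"
        # win-stay: a rewarded pick is simply repeated next trial
    return hits / trials
```

Because the adversary exploits a predictable habit, the simulated player lands on the target colour far more often than chance, loosely echoing the kind of steering rate the researchers report.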
In the next experiment, participants were asked to press a button when they saw a specific symbol (a colored shape) but to refrain from pressing the button when shown other symbols.
The AI’s ‘goal’ was to arrange the sequence of symbols displayed to the participant in such a way as to trick them into making mistakes, eventually increasing human errors by 25 percent.
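How an arranger could exploit a go/no-go-style task can be sketched with a small deterministic toy model; the response model and all numbers below are illustrative assumptions, not taken from the study. If a participant’s urge to press builds over a run of ‘go’ symbols, placing each ‘no-go’ at the end of a longer run yields more expected errors than spreading them after short runs.

```python
def error_prob(go_streak):
    # Assumed response model: the longer the run of "go" symbols,
    # the more likely a mistaken press on the next "no-go".
    return min(0.05 + 0.15 * go_streak, 0.9)

def expected_errors(seq):
    # Expected number of wrong presses over a whole symbol sequence.
    streak, expected = 0, 0.0
    for symbol in seq:
        if symbol == "go":
            streak += 1
        else:
            expected += error_prob(streak)
            streak = 0  # a "no-go" resets the run
    return expected

# Same symbol counts (40 "go", 10 "no-go"), different orderings:
adversarial = (["go"] * 4 + ["no-go"]) * 10      # no-go after every long run
benign = ["go", "no-go"] * 10 + ["go"] * 30      # no-go after short runs
```

Under this assumed model, the adversarial ordering yields 6.5 expected errors against 2.0 for the benign ordering, a sizeable jump from sequencing alone.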
In the third experiment, the human player would pretend to be an investor giving money to a trustee (the AI) who would then return an amount of money to the participant.
The human would then decide how much to invest in each successive round of the game, based on revenue generated by each ‘investment.’ In this particular experiment, the AI was given one of two tasks: either to maximize the amount of money it made, or to maximize the amount of money both the human player and the machine ended up with.
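The trust-game dynamic can also be illustrated with a deterministic toy; the investor model, the tripling multiplier, and every number below are assumptions for illustration, not the paper’s setup. A simulated trustee searches for a fixed return fraction under each of the two objectives.

```python
def play(return_frac, rounds=10):
    # Assumed investor model: invests a share of a 10-unit endowment in
    # proportion to trust; trust rises after profitable rounds and falls
    # after losses. The investment is tripled, and the trustee returns
    # `return_frac` of the tripled pot.
    trust, human, machine = 0.5, 0.0, 0.0
    for _ in range(rounds):
        invested = 10 * trust
        pot = 3 * invested
        returned = return_frac * pot
        human += 10 - invested + returned
        machine += pot - returned
        profited = returned > invested
        trust = min(1.0, max(0.1, trust + (0.2 if profited else -0.2)))
    return human, machine

# Grid-search a fixed return fraction under the study's two objectives:
fracs = [i / 20 for i in range(21)]
selfish = max(fracs, key=lambda f: play(f)[1])   # maximise own earnings
social = max(fracs, key=lambda f: sum(play(f)))  # maximise joint earnings
```

In this toy, both objectives land on a return fraction just above one third (0.35 on this grid): returning enough to keep the investor profitable sustains trust, and high investments serve the trustee’s own take and the joint total alike.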
It excelled in both scenarios, showing that an AI can indeed be trained to influence human behavior and decision-making, albeit in limited and fairly abstract circumstances.
This research, while limited in scope for now, provides terrifying insights into how an AI can influence human ‘free will,’ if only in a rudimentary context, and throws open the possibility of (ab)use on a much larger scale, which many suspect is already the case.
The findings could be deployed for good, influencing public-policy decisions to produce better health outcomes for the population, but they could just as easily be weaponized to undermine key decision-making processes, such as elections.
Conversely, AIs could also be trained to alert humans when they are being influenced and to help them disguise their own vulnerabilities, though that safeguard could itself be manipulated or hacked for nefarious purposes, further complicating matters.
In 2020, CSIRO developed an AI Ethics Framework for the Australian government, with a view towards establishing proper governance of public-facing AIs.
Next week, the Australian government is expected to introduce landmark legislation that will force Google and Facebook to pay news publishers and broadcasters for their content. The companies’ algorithms use that content to drive traffic and generate clicks and, thus, advertising revenue, a central part of their business models.
Headline Image: © Janos Perian from Pixabay