The logo of the social media platform Reddit (Artur Widak/NurPhoto via Getty Images)
Reddit users who were unwittingly subjected to an AI-powered experiment have hit back at scientists for conducting research on them without permission – and have sparked a wider debate about such experiments.
The social media site Reddit is split into “subreddits”, each dedicated to a particular community and run by its own volunteer moderators. Members of one subreddit, called r/ChangeMyView because it invites people to discuss potentially contentious issues, were recently informed by its moderators that researchers at the University of Zurich, Switzerland, had been using the site as an online laboratory.
The team’s experiment seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing that they were AI-generated, to gauge people’s reactions. These included comments posing as a survivor of rape and as a trauma counsellor specialising in abuse, among others. A description of how the researchers generated the comments suggests that they instructed the artificial intelligence models that the Reddit users “have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns”.
A draft version of the study’s findings suggests the AI comments were between three and six times more persuasive in altering people’s viewpoints than human users were, as measured by the proportion of comments that other users marked as having changed their minds. “Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts,” the authors wrote. “This hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities.”
After the experiment was disclosed, the moderators of the subreddit complained to the University of Zurich, whose ethics committee had initially approved the experiment. After receiving a response to their complaint, the moderators informed the community about the alleged manipulation, though, at the researchers’ request, they didn’t name the individuals responsible.
The experiment has been criticised by other academics. “In these times in which so much criticism is being levelled – in my view, fairly – against tech companies for not respecting people’s autonomy, it’s especially important for researchers to hold themselves to higher standards,” says Carissa Véliz at the University of Oxford. “And in this case, these researchers didn’t.”
Before conducting research involving humans or animals, academics are required to demonstrate that their work will be conducted ethically by submitting it to a university ethics committee for approval, and the study in question was approved by the University of Zurich. Véliz questions this decision. “The study was based on manipulation and deceit with non-consenting research subjects,” she says. “That seems like it was unjustified. The study could have been designed differently so people were consenting subjects.”
“Deception can be OK in research, but I’m not sure this case was reasonable,” says Matt Hodgkinson at the Directory of Open Access Journals, who is a member of the council of the Committee on Publication Ethics but is commenting in a personal capacity. “I find it ironic that they needed to lie to the LLM to claim the participants had given consent – do chatbots have better ethics than universities?”
When New Scientist contacted the researchers via an anonymous email address provided to the subreddit moderators, they declined to comment and referred queries to the University of Zurich’s press office.
A spokesperson for the university says that “the researchers themselves are responsible for carrying out the project and publishing the results” and that the ethics committee had advised that the experiment would be “exceptionally challenging” and participants “should be informed as much as possible”.
The University of Zurich “intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies”, says the spokesperson. An investigation is under way and the researchers have decided not to formally publish the paper, says the spokesperson, who declined to name the individuals involved.