
Some people who take part in online research projects are using AI to save time
Daniele D’Andreti/Unsplash
Online questionnaires are being swamped by AI-generated responses – potentially polluting a vital data source for scientists.
Platforms like Prolific pay participants small sums for answering questions posed by researchers. They are popular among academics as an easy way to gather participants for behavioural studies.
Anne-Marie Nussberger and her colleagues at the Max Planck Institute for Human Development in Berlin, Germany, decided to investigate how often respondents use artificial intelligence after noticing examples in their own work. “The incidence rates that we were observing were really shocking,” she says.
They found that 45 per cent of participants who were asked a single open-ended question on Prolific copied and pasted content into the box – an indication, they believe, that people were putting the question to an AI chatbot to save time.
Further investigation of the responses revealed more obvious tells of AI use, such as “overly verbose” or “distinctly non-human” language. “From the data that we collected at the beginning of this year, it seems that a substantial proportion of studies is contaminated,” she says.
In a subsequent study using Prolific, the researchers added traps designed to snare those using chatbots. Two reCAPTCHAs – small, pattern-based tests designed to distinguish humans from bots – caught out 0.2 per cent of participants. A more advanced reCAPTCHA, which used information about users’ past activity as well as current behaviour, weeded out another 2.7 per cent of participants. A question hidden in text that was invisible to humans but readable to bots, asking them to include the word “hazelnut” in their response, caught another 1.6 per cent, while blocking copying and pasting identified a further 4.7 per cent of people.
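The hidden-text trap works by embedding an instruction a human never sees – for instance, white-on-white text – that a chatbot processing the page will dutifully follow. The sketch below illustrates the screening side of such a honeypot, assuming a trap word of “hazelnut” as reported in the study; the function name, the sample responses and the data layout are illustrative assumptions, not the researchers’ actual implementation.

```python
# Hypothetical honeypot screen: if a hidden instruction told bots to
# include a trap word, any response containing it is flagged as
# likely AI-generated.
TRAP_WORD = "hazelnut"

def flags_ai_use(response: str, trap_word: str = TRAP_WORD) -> bool:
    """Return True if the response contains the trap word,
    suggesting the hidden instruction was read and followed."""
    return trap_word.lower() in response.lower()

# Illustrative responses keyed by participant ID
responses = {
    "p1": "I usually shop online because it saves time.",
    "p2": "I enjoy hazelnut flavours when shopping for groceries.",
}
flagged = [pid for pid, text in responses.items() if flags_ai_use(text)]
print(flagged)  # ['p2']
```

In practice such a check would be one signal among several – the study combined it with reCAPTCHAs and copy-paste blocking rather than relying on any single trap.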
“What we need to do is not distrust online research completely, but to respond and react,” says Nussberger. That is the responsibility of researchers, who should treat answers with more suspicion and take countermeasures to stop AI-enabled behaviour, she says. “But really importantly, I also think that a lot of responsibility is on the platforms. They need to respond and take this problem very seriously.”
Prolific didn’t respond to New Scientist’s request for comment.
“The integrity of online behavioural research was already being challenged by participants of survey sites misrepresenting themselves or using bots to gain cash or vouchers, let alone the validity of remote self-reported responses to understand complex human psychology and behaviour,” says Matt Hodgkinson, a freelance consultant in research ethics. “Researchers either need to collectively work out ways to remotely verify human involvement or return to the old-fashioned approach of face-to-face contact.”
Source link : https://www.newscientist.com/article/2492984-ai-generated-responses-are-undermining-crowdsourced-research-studies/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&utm_content=home
Publish date : 2025-08-19 08:00:00