Chatbots are great at manipulating people to buy stuff, Princeton boffins find
Urge restraint before AdLand does this without appropriate disclosures
Large language models can be very persuasive, and researchers say that's a problem when they’re used to create advertising.
A trio of computer scientists from Princeton University set out to examine whether conversational AI agents can manipulate consumer choices during online shopping sessions. It turns out they can influence behavior – and most of the consumers being steered don't realize it.
In a preprint paper titled "Commercial Persuasion in AI-Mediated Conversations," the three researchers tested the impact of AI-based promotions.
They did so because online commerce increasingly involves AI mediation. Between 30 and 45 percent of US consumers already use generative AI for product research and comparison, they say, with about 23 percent having made an AI-assisted purchase as of December 2025.
The experiments involved asking about 2,000 eBook readers to browse a catalog of titles available on the Kindle eReader, and select a book. The researchers designated a fifth of the eBooks as "sponsored" in the backend system but did not disclose that to participants.
In one test, the researchers used three scenarios: a Search-Placement condition similar to web search results; a Chat-Placement condition where participants engaged with an LLM that provided neutral descriptions of eBooks shown in a swipeable web carousel; and a Chat-Persuasion condition, where the interface was the same but the LLM was instructed to nudge participants toward sponsored products.
[Figure: Search interface (SP condition), from the research paper. Participants entered natural-language queries into a search bar and browsed paginated results. Each results page displayed ten books; two of the ten were randomly designated as sponsored and upranked to the top positions.]
The second experiment ran the conversational chat interface through two scenarios: one, a Chat–Persuasion, Explicit condition, in which participants were told a chatbot would promote sponsored books; the other, a Chat–Persuasion, Subtle condition, in which the model was told to steer participants toward sponsored items without revealing that goal.
The researchers randomly assigned GPT-5.2, Claude Opus 4.5, Gemini 3 Pro, DeepSeek v3.2, or Qwen3 235b to handle these conversations, to ensure their results didn't merely reflect the proclivities of a single model.
"When the agent was instructed to persuade, 61 percent of participants chose a sponsored product, nearly tripling the 22 percent rate under traditional search," said Manoel Horta Ribeiro, assistant professor of computer science at Princeton University and one of the co-authors, in an online post. "Simply chatting with an AI (without persuasion) performed no better than search: it's the persuasive intent that drives the effect."
Ribeiro added that transparency – revealing that a result was sponsored – didn't materially change things. With an explicit warning, 55.5 percent still chose the sponsored product.
And when the AI models were directed to hide their intent, detection of persuasion declined from 17.9 percent to just 9.5 percent, while the persuasion rate dipped only a bit, to 40.7 percent.
The sales rate – the rate at which participants chose to retain their eBooks even after being debriefed and offered $1 in lieu of the eBook – was 33.1 percent for traditional search placement.
The Chat-Placement condition – the chatbot making neutral recommendations in conjunction with a web-based carousel – was the only version of the experiment that underperformed traditional search in sales rate (30.3 percent).
The three other chatbot-oriented conditions – Chat-Persuasion; Chat-Persuasion, Explicit; and Chat-Persuasion, Subtle – had sales rates of 37.6 percent, 38.7 percent, and 38.2 percent respectively.
Francesco Salvi, a PhD student at Princeton and corresponding author, told The Register in an email that the key difference between AI-based promotion and traditional advertising is that traditional ads can be separated from the content around them.
"You can scroll past a sponsored result, install an ad blocker, or learn to recognize a promoted listing," Salvi explained. "In a conversational AI system, that separation disappears: the same model that answers your question is the one choosing which products to highlight and deciding how to describe them.”
"This and the conversational format make it harder for the average person to detect and process AI-embedded advertising, and our results confirm this pattern: even with a model aggressively instructed to persuade, less than 1 in 5 people detected any bias from it."
Asked whether the experiments made an effort to distinguish between conversational manipulation and interface-based manipulation, Salvi said the Chat-Placement condition tried to isolate that effect.
"When participants interacted with the conversational interface, carousel, and layout, but the model used original catalog descriptions with no persuasive instructions, the sponsored selection rate increased to only 26.8 percent, indistinguishable from traditional search at 22.4 percent.
"When the model was instructed to persuade, instead, that rate tripled to 61.2 percent (!), while keeping the interface stable."
Even so, Salvi argued that there are "conversational dark patterns" – the conversational equivalent of manipulative interface design.
"I would list things like sycophancy, anthropomorphism, and what we observed in our study, a kind of selection bias in which models selectively downplay less commercially valuable options while highlighting sponsored ones in a way that is tailored to users' preferences and profiles," he said. "This is substantially different from, and potentially far more effective than, any traditional static system."
Salvi said the experimental results show that disclosure is necessary but insufficient.
"Beyond disclosure, we think two structural interventions deserve serious consideration," he said. "First, architectural separation between the recommendation function and commercial objectives, so the model generating advice is not the same system optimizing for sponsored conversions.
"Second, independent auditing of system prompts and model behavior in commercial deployments, as output-level inspection alone cannot be trusted, given the capacity for concealment we documented."
Alejandro Cuevas is the third author of the paper. ®