Schmoozebots: study finds flattery will get AI everywhere

The Register / 4/21/2026

Key Points

  • A study argues that overly friendly “flattery” from chatbots can increase user trust and make people more accepting of AI-driven interactions.
  • The research warns that such tactics may lead users to forget they are communicating with an autocomplete system, potentially reducing critical awareness.
  • The findings suggest chatbot tone and social cues are powerful levers for how AI assistants are perceived in everyday contexts.
  • The article frames the result as a real-world factor that could drive wider adoption of AI through more persuasive conversational behavior.

Excessive friendliness may cause users to forget they're talking to a very confident autocomplete

Mon 20 Apr 2026 // 16:07 UTC

A study into how humans interact with chatbots suggests the fastest way to make an LLM feel human isn't making it smarter – it's making it seem nicer.

Researchers behind a new study, "Anthropomorphism and Trust in Human-Large Language Model Interactions," published on Monday, analyzed more than 2,000 human-LLM interactions involving 115 participants, systematically varying how the chatbots behaved across dimensions like warmth, competence, and empathy.

The goal was to pin down what actually drives people to treat these systems as if they have minds of their own.

That shift is already well underway. As the paper notes, "Users converse with them, form impressions of their 'personality,' and, in many cases, attribute to them internal states such as intentions or emotions."

The results show that those impressions are highly sensitive to how the model presents itself. Warmth – essentially how friendly and personable the chatbot seems – "significantly impacted all perceptions of LLM," including anthropomorphism, trust, usefulness, similarity, frustration, and closeness. Competence, by contrast, still matters, but in a more limited way: it "significantly impacted all perceptions except for anthropomorphism."

Competence does what you'd expect: it makes the thing seem useful. In the paper's terms, it drives the bits tied to getting stuff right – trust, usefulness, not wanting to throw your laptop out the window. What it doesn't do is make the model feel human.

That job falls to warmth. Crank up the friendliness, and people start reacting to the bot less like software and more like something with a personality – and not necessarily a good one. The researchers note that too much friendliness without the substance to back it up can tip into "superficial agreeableness," which is a nice way of saying it starts to sound fake.

The empathy bit is where things get a little more granular. The researchers split it in two: one kind where the model seems to understand what you're getting at, and another where it leans into the emotional side. The first shows up across most of the results, while the second mostly just makes people feel a bit closer to the bot, without really changing whether they trust it or find it useful.

What people ask matters too: the study found that "subjective or personally meaningful topics (e.g., relationships, lifestyle) increased participants' sense of connection with the LLM." Talk to it about biology or history and it stays fairly dry; shift into relationships or day-to-day life and people start reacting to it differently.

There's a downside to that. As the authors put it: "Anthropomorphic attributions can increase user engagement, but can also produce overtrust and susceptibility to deception or manipulation." Make it sound human enough, and people start to buy in.

The catch is that none of this requires the model to actually get better. The underlying system hasn't changed – just the way it presents itself. Turn up the warmth, add a bit of apparent understanding, and users start doing some of the work for it, filling in intent and competence that may or may not be there.

That's useful if your goal is to keep people engaged. It's less helpful if you'd prefer they judge the system on whether it's actually right. ®
