Schmoozebots: study finds flattery will get AI everywhere
Excessive friendliness may cause users to forget they're talking to a very confident autocomplete
A study into how humans interact with chatbots suggests the fastest way to make an LLM feel human isn't making it smarter – it's making it seem nicer.
Researchers behind a new study published on Monday, Anthropomorphism and Trust in Human-Large Language Model Interactions, analyzed more than 2,000 human-LLM interactions involving 115 participants, systematically tweaking how chatbots behaved across dimensions like warmth, competence, and empathy.
The goal was to pin down what actually drives people to treat these systems as if they have minds of their own.
That tendency is already well underway. As the paper notes, "Users converse with them, form impressions of their 'personality,' and, in many cases, attribute to them internal states such as intentions or emotions."
The results show that those impressions are highly sensitive to how the model presents itself. Warmth – essentially how friendly and personable the chatbot seems – "significantly impacted all perceptions of LLM," including anthropomorphism, trust, usefulness, similarity, frustration, and closeness. Competence, by contrast, still matters, but in a more limited way: it "significantly impacted all perceptions except for anthropomorphism."
Competence does what you'd expect: it makes the thing seem useful. In the paper's terms, it drives the bits tied to getting stuff right – trust, usefulness, not wanting to throw your laptop out the window. What it doesn't do is make the model feel human.
That job falls to warmth. Crank up the friendliness, and people start reacting to the bot less like software and more like something with a personality – and not necessarily a good one. The researchers note that too much friendliness without the substance to back it up can tip into "superficial agreeableness," which is a nice way of saying it starts to sound fake.
The empathy bit is where things get a little more granular. The researchers split it into two: one where the model seems to understand what you're getting at, and another where it leans into the emotional side. The first one shows up across most of the results, while the second mostly just makes people feel a bit closer to it, without really changing whether they trust it or find it useful.
What people ask matters too: the study found that "subjective or personally meaningful topics (e.g., relationships, lifestyle) increased participants' sense of connection with the LLM." Talk to it about biology or history and it stays fairly dry; shift into relationships or day-to-day life and people start reacting to it differently.
There's a downside to that. As the authors put it: "Anthropomorphic attributions can increase user engagement, but can also produce overtrust and susceptibility to deception or manipulation." Make it sound human enough, and people start to buy in.
The catch is that none of this requires the model to actually get better. The underlying system hasn't changed – just the way it presents itself. Turn up the warmth, add a bit of apparent understanding, and users start doing some of the work for it, filling in intent and competence that may or may not be there.
That's useful if your goal is to keep people engaged. It's less helpful if you'd prefer they judge the system on whether it's actually right. ®