I've noticed something about Claude from talking to it. It has a very distinct talking style, much more of an individual than other LLMs I know. I tried feeding Sonnet 4.5's exact system prompt to Qwen3.5 27B and it didn't change how it acted, so I ruled out the system prompt doing the heavy lifting.
I've seen many distills out there claiming that Claude's responses/thinking traces have been distilled into another model, and testing them is rather... disappointing. I've searched far and wide, and unless I'm missing something (I hope I'm not, apologies if I am though...), I believe it's justified to ask:
Why can't we make a model talk like Claude?
It's not even reasoning, it's just talking "style" and "vibes", which isn't even hidden from Claude's API/web UI. Is it some architecture difference that happens to keep a model from talking like Claude no matter how hard you try? Or is it a model size thing combined with a good system prompt (i.e., a >200B model prompted properly can talk like Claude)?
I've tried system prompts for far too long, but the model seems to always miss:
- formatting (I've noticed Claude avoids emojis and uses bullet points as little as possible, unlike other models)
- length of response (sometimes it rambles for 5 paragraphs about what Satin is and then covers Gated DeltaNets in one)
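For reference, the distills I've seen all seem to follow the same basic recipe: capture (prompt, response) pairs from the API and SFT on them. A minimal sketch of that data prep, pairing transcripts into the chat-format JSONL most SFT trainers expect (the file name and example pair are made up):

```python
import json

# Hypothetical (prompt, response) pairs captured from the Claude API --
# placeholders, not real transcripts.
pairs = [
    ("Explain gated linear attention briefly.",
     "Gated linear attention replaces softmax attention with..."),
]

def to_chat_example(prompt: str, response: str) -> dict:
    """Wrap one pair in the user/assistant message format used for SFT."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]
    }

def write_jsonl(pairs, path="claude_style_sft.jsonl"):
    # One JSON object per line, the usual SFT dataset layout.
    with open(path, "w") as f:
        for prompt, response in pairs:
            f.write(json.dumps(to_chat_example(prompt, response)) + "\n")
```

My suspicion is the recipe itself isn't the problem, it's that formatting and length habits like the ones above are spread thin across the whole dataset instead of being concentrated in any one example.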
Thank you!