I feel like if they made a local model focused specifically on RP it would be god tier even if tiny

Reddit r/LocalLLaMA / 3/24/2026


Key Points

  • The author argues that a small local model specifically optimized for roleplay (RP) could outperform general large models by having higher-quality, purpose-built writing data.
  • They suggest the bottleneck is dataset quality and the lack of strong, non-repetitive, low-“slop” training material in large models.
  • They note that “good writing” is subjective, with different users preferring different styles (e.g., some liking purple prose).
  • The post proposes crowdsourcing dataset/quality improvements and humorously suggests renting a GPU to fine-tune a model on preferred data.

Like, we’ve seen that the large models don’t actually have that great of datasets. So imagine a local model that is filled to the brim with good quality writing, without repeats and without slop. Can we crowdsource the work or something 😂
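The "without repeats" part of a crowdsourced dataset can at least be mechanized. A minimal sketch (function names and threshold are illustrative, not from the post) of near-duplicate filtering via character n-gram Jaccard similarity:

```python
# Hypothetical sketch: filter a crowdsourced RP dataset for near-duplicates.
# All names and the 0.8 threshold are illustrative choices, not a standard.

def ngrams(text, n=3):
    """Lowercased character n-grams of a string, as a set."""
    t = text.lower()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two n-gram sets (0.0 when either is empty)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def dedupe(samples, threshold=0.8):
    """Keep each sample only if it isn't too similar to one already kept."""
    kept, kept_grams = [], []
    for s in samples:
        g = ngrams(s)
        if all(jaccard(g, kg) < threshold for kg in kept_grams):
            kept.append(s)
            kept_grams.append(g)
    return kept
```

For a real corpus you'd likely swap the pairwise loop for MinHash-style hashing so it scales past a few thousand samples, but the filtering idea is the same.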

But then I suppose the problem is that everyone has different opinions of what’s good. I’ve seen people love purple prose!

Maybe the real solution is me just renting a GPU and training it on shit lol

submitted by /u/Borkato