Why is Gemma 4 31B so bad at long context?

Reddit r/LocalLLaMA / 4/16/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The post asks why Gemma 4 31B performs poorly on long-context prompts (20K+ tokens): the model announces a requested action, such as “put that to the file,” and then abruptly stops without actually performing it.

Question: I'm using it for text translation, and on each large prompt (20K+ tokens) it stops with a remark like “now I'm going to put that to the file” or some other operation I asked for in the prompt, but it does nothing and just stops. I'm running it through opencode, and this is really annoying. Any suggestions for improving this, please?
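One possible culprit worth ruling out (an assumption, not something the post confirms): if the model is served through a local runtime like Ollama, the default context window (`num_ctx`) is typically far smaller than 20K tokens, so long prompts are silently truncated and the model loses the instructions it was about to carry out. A minimal sketch of raising the window via a Modelfile, assuming an Ollama backend and a placeholder model tag:

```shell
# Sketch: build a variant of the model with a larger context window.
# "your-gemma-tag" is a placeholder; substitute the actual tag you pulled.
cat > Modelfile <<'EOF'
FROM your-gemma-tag
PARAMETER num_ctx 32768
EOF

# Then create the variant and point opencode at it:
#   ollama create gemma-longctx -f Modelfile
grep "num_ctx" Modelfile
```

If the backend is something else (llama.cpp, vLLM, etc.), the equivalent knob is its context-length flag; the general diagnostic is the same: confirm the serving context window actually exceeds the prompt size.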

submitted by /u/Steus_au