"Gotta say my first experience with the model didn't go that well."
I asked Gemma 4 26b to code a simple single page breakout game to test its coding abilities and it just started going full schizophrenic
Reddit r/LocalLLaMA / 4/3/2026
💬 Opinion · Signals & Early Trends · Tools & Practical Usage
Key Points
- The post describes an experiment in which the user asked the Gemma 4 26B model to code a simple single-page Breakout game to assess its coding ability.
- According to the report, the model’s output quickly became erratic and “schizophrenic,” failing to produce coherent results for the requested task.
- The submission frames the experience as a negative first impression of the model’s practical coding performance in this specific context.
- The post was shared on r/LocalLLaMA, a forum focused on running models locally, so it reflects users testing consumer/local model behavior rather than an official benchmark or release.
Related Articles
- Black Hat USA (AI Business)
- Black Hat Asia (AI Business)
- 90,000 Tech Workers Got Fired This Year and Everyone Is Blaming AI, but That's Not the Whole Story (Dev.to)
- Microsoft's $10 Billion Japan Bet Shows the Next AI Battleground Is National Infrastructure (Dev.to)
- TII Releases Falcon Perception: A 0.6B-Parameter Early-Fusion Transformer for Open-Vocabulary Grounding and Segmentation from Natural Language Prompts (MarkTechPost)