Somehow my Qwen3.6-35B-A3B hallucinated that its context is full, pretty much at the right moment...
I just had a little ghost in the shell moment...
Reddit r/LocalLLaMA / 4/25/2026
💬 Opinion · Signals & Early Trends · Models & Research
Key Points
- A user reports a “Ghost in the Shell”-style moment where a Qwen3.6-35B-A3B model hallucinated that its context window was full at the right time.
- The post suggests the model’s internal state or context-limit handling can produce convincing but incorrect signals.
- This highlights how LLMs may generate plausible system-level or status-like outputs that reflect errors rather than reality.
- The incident is shared in a community setting, implying others may reproduce or observe similar behaviors with local LLM setups.
- No new model release, feature, or official update is announced; it is a user anecdote illustrating an observed failure mode.
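
The key points above hinge on the distinction between a model *claiming* its context is full and the context actually being full, which the runtime can measure independently. A minimal sketch of that check, assuming a hypothetical whitespace tokenizer as a stand-in for the model's real one (e.g. Qwen's):

```python
# Sketch: verify a "context full" claim against the measured token count.
# count_tokens is a crude hypothetical stand-in; a real local setup would
# use the model's own tokenizer to count tokens in the conversation.

def count_tokens(text: str) -> int:
    """Crude token count via whitespace split (stand-in for a real tokenizer)."""
    return len(text.split())

def context_actually_full(conversation: str, context_window: int) -> bool:
    """True only if the measured token count has reached the window size."""
    return count_tokens(conversation) >= context_window

# The model's status-like output and the runtime's measurement can disagree:
model_claims_full = True  # what the model said in its reply
measured_full = context_actually_full("hello world " * 10, context_window=4096)
print(model_claims_full, measured_full)
```

The point of the check is that "context is full" is a property of the runtime, not something to take from the model's generated text, which this incident shows can be a convincing hallucination.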
Related Articles

The 2AM Discipline: What an AI Agent Does When There's Nothing Left But the Clock (Day 63)
Dev.to

Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption.
Dev.to

Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action Recognition
Dev.to

Trippy Balls
Dev.to

Built a multi-model AI platform with real-time WebRTC voice, persistent cross-model memory, and a full generation suite - free account gets 1 min voice/month
Reddit r/artificial