Unnoticed Gemma-4 Feature - it admits that it does not know...

Reddit r/LocalLLaMA / 4/5/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Models & Research

Key Points

  • The post argues that Gemma-4 (tested as the E4b Q8 variant) includes a feature where it clearly admits it does not know a specific research study at the start of a conversation rather than guessing.
  • It contrasts Gemma-4’s behavior with Qwen3.5, which the author says often makes broad assumptions or hallucinates content with high confidence.
  • The author highlights the importance of early “I can’t confirm” responses as a reliability signal for users seeking accurate information.
  • The post suggests this may reflect a change in training or policy where admitting uncertainty is penalized less than trying to speculate and potentially failing.
  • The main takeaway is that Gemma-4’s uncertainty handling could reduce user over-trust and improve real-world usefulness compared with models that confidently guess.

Although Qwen3.5 is a great series of models, it is prone to making very broad assumptions and hallucinating content, and it does so with great confidence, so you may believe what it says.

In contrast, Gemma-4 (specifically, I tested the E4b Q8 version) admits that it does not know right at the start of the conversation:

Therefore, I cannot confirm familiarity with a single, specific research study by that name. However, I am generally familiar with the factors that researchers and military trainers study regarding attrition in elite training programs... 

That is a very important feature, and it may hint at a change in the model training routine, where admitting to not knowing something is penalized less than guessing and then failing.
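The trade-off the post describes can be made concrete with a toy expected-reward calculation. The reward values below are purely illustrative assumptions (not from any actual training setup): if a wrong confident answer is penalized more heavily than an honest "I don't know", then abstaining becomes the better policy below some confidence threshold.

```python
# Hypothetical reward scheme illustrating the post's point: a wrong
# confident answer costs more than an honest "I don't know".
REWARD_CORRECT = 1.0   # assumed reward for a correct answer
REWARD_ABSTAIN = 0.0   # assumed: no penalty for admitting uncertainty
PENALTY_WRONG = -2.0   # assumed: wrong guesses are penalized harder

def expected_reward_of_guessing(p_correct: float) -> float:
    """Expected reward if the model guesses, being right with probability p_correct."""
    return p_correct * REWARD_CORRECT + (1 - p_correct) * PENALTY_WRONG

def should_abstain(p_correct: float) -> bool:
    """Abstain whenever guessing has a lower expected reward than saying 'I don't know'."""
    return expected_reward_of_guessing(p_correct) < REWARD_ABSTAIN

# With these numbers the break-even confidence is 2/3: below roughly
# 67% confidence, "I don't know" is the reward-maximizing response.
print(should_abstain(0.5))  # True  -> better to admit uncertainty
print(should_abstain(0.9))  # False -> confident enough to answer
```

Under a scheme like this, a model that says "I cannot confirm" at low confidence is not being evasive; it is simply following the incentives its training imposed.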

submitted by /u/mtomas7