Selective Neuron Amplification for Training-Free Task Enhancement

arXiv cs.LG / 4/9/2026


Key Points

  • The paper argues that some LLM failures on tasks they “should” understand are caused by internal circuits not being sufficiently activated during inference rather than by missing knowledge.
  • It introduces Selective Neuron Amplification (SNA), an inference-time technique that boosts the influence of task-relevant neurons without changing the model's parameters.
  • The authors report that SNA is most helpful when the model is uncertain and has limited benefit when the model is already confident.
  • The results suggest a path for training-free performance improvements by manipulating activation strength rather than retraining or fine-tuning.

Abstract

Large language models often fail on tasks they seem to already understand. In our experiments, this appears to be less about missing knowledge and more about certain internal circuits not being strongly activated during inference. We explore Selective Neuron Amplification (SNA), which increases the influence of task-relevant neurons without changing the model's parameters. The method operates at inference time and does not permanently alter the model. SNA helps mainly when the model is uncertain and has little effect when the model is already confident. This suggests that some model failures are due to weak activation rather than lack of capability.
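
The paper's exact procedure for choosing and scaling neurons is not spelled out in this summary, but the general mechanism can be sketched with a PyTorch forward hook: scale a chosen subset of hidden activations during the forward pass, leaving the weights untouched. Everything below is an illustrative assumption rather than the authors' method: the model (gpt2), the layer index, the neuron indices, and the scale factor alpha are placeholders, and in practice the "task-relevant" neurons would be found by some attribution or probing procedure.

```python
# Minimal sketch of inference-time neuron amplification via a forward hook.
# All specific values (model, layer, neuron indices, alpha) are assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small causal LM, chosen purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6                                      # hypothetical target block
neuron_ids = torch.tensor([13, 512, 1024, 2047])   # hypothetical "task-relevant" units
alpha = 2.0                                        # amplification factor (assumption)

def amplify(module, inputs, output):
    # Scale only the selected hidden units of the MLP activation;
    # every other activation passes through unchanged.
    output[..., neuron_ids] = output[..., neuron_ids] * alpha
    return output

# Hook the post-activation output of one transformer block's MLP.
handle = model.transformer.h[layer_idx].mlp.act.register_forward_hook(amplify)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0]))

# The intervention is temporary: removing the hook restores the original model.
handle.remove()
```

Because the change lives in a removable hook, the base weights are never modified, which matches the training-free framing above. Following the key points, one natural refinement of this sketch would be to apply the hook conditionally, for instance only when the model's next-token distribution has high entropy, since the reported gains concentrate in the uncertain regime.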