DeCoVec: Building Decoding Space based Task Vector for Large Language Models via In-Context Learning
arXiv cs.CL / 4/14/2026
Key Points
- DeCoVec introduces a training-free, non-invasive method to steer large language models by constructing “task vectors” in the decoding space using in-context learning.
- It derives the task vector as the difference between the output logit distributions produced by few-shot and zero-shot prompts, then injects this vector during generation to steer decoding (see the sketch after this list).
- Experiments on seven LLMs (0.5B–9B) across TruthfulQA, Math-500, and AQUA-RAT show consistent improvements over standard few-shot baselines, with reported gains of up to +5.50 points in average accuracy.
- The approach also mitigates degenerate generation and logical flaws, remains robust to demonstration ordering, and adds no extra input-token cost.
- By avoiding weight updates and auxiliary models, DeCoVec aims to make LLM steering more flexible and scalable than prior task-vector approaches that require fine-tuning or invasive state manipulation.
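To make the decoding-space mechanism concrete, here is a minimal sketch assuming a HuggingFace causal LM. The model name, the scalar `alpha`, the use of raw logits rather than normalized distributions, the greedy injection loop, and reusing a single last-position vector at every step are all illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch: build a task vector in logit space from few-shot minus
# zero-shot prompts, then add it to the logits at each decoding step.
# All specifics below (model choice, alpha, greedy loop) are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder; any small causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def last_logits(prompt: str) -> torch.Tensor:
    """Logits over the vocabulary at the final position of `prompt`."""
    ids = tok(prompt, return_tensors="pt").input_ids
    return model(ids).logits[0, -1]

few_shot = (
    "Q: 2 + 2 = ?\nA: 4\n"
    "Q: 7 - 3 = ?\nA: 4\n"
    "Q: 5 + 8 = ?\nA:"
)
zero_shot = "Q: 5 + 8 = ?\nA:"

# "Task vector" in the decoding space: few-shot minus zero-shot logits.
task_vector = last_logits(few_shot) - last_logits(zero_shot)

@torch.no_grad()
def generate_steered(prompt: str, alpha: float = 1.0, max_new: int = 16) -> str:
    """Greedy decoding with the task vector added to each step's logits."""
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new):
        logits = model(ids).logits[0, -1] + alpha * task_vector
        next_id = logits.argmax().view(1, 1)
        ids = torch.cat([ids, next_id], dim=-1)
        if tok.eos_token_id is not None and next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)

# The zero-shot prompt alone is fed at inference, so the demonstrations
# add no input tokens; their effect is carried only by the task vector.
print(generate_steered(zero_shot))
```

Note the design point this illustrates: because the vector lives in the output (decoding) space rather than in hidden states or weights, the steering requires no fine-tuning, no auxiliary model, and no access to the model's internals beyond its logits.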