Knowledge Vector of Logical Reasoning in Large Language Models
arXiv cs.CL / 4/28/2026
Key Points
- The paper examines how large language models internally represent different types of logical reasoning (deductive, inductive, and abductive) and how these representations relate to one another.
- It finds that each reasoning type can be represented as a distinct, reasoning-specific knowledge vector in a linear embedding space, with relatively weak dependence between the vectors (a difference-of-means construction of such a vector is sketched after this list).
- Motivated by cognitive science and evidence that reasoning chains from one type can help another, the authors propose a refinement method to make these vectors complementary rather than isolated.
- The proposed complementary subspace-constrained refinement framework combines a complementary loss (to share helpful auxiliary knowledge across reasoning types) with a subspace-constraint loss (to preserve each type's unique characteristics), yielding consistent performance gains in steering experiments; a sketch of the two losses follows this list.
- A mechanistic-interpretability analysis further identifies which reasoning features are shared and which are unique across the logical reasoning vectors in LLMs (see the shared/unique decomposition sketch below).
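To make the knowledge-vector idea concrete, here is a minimal sketch of one common way such a vector can be derived: as a difference of mean hidden activations between prompts of one reasoning type and neutral prompts, then added back during generation to steer the model. The activations, the hidden size, and the `steer` helper are synthetic placeholders for illustration, not the paper's actual extraction pipeline.

```python
# Hypothetical sketch: deriving a reasoning-specific "knowledge vector" as a
# difference of mean hidden activations, then steering by adding it back.
# Activations are synthetic here; in practice they would come from a fixed
# layer of an LLM on deductive vs. neutral prompts.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                                 # hidden size (stand-in)

# Synthetic layer activations: one row per prompt.
acts_deductive = rng.normal(0.5, 1.0, size=(200, d))   # deductive prompts
acts_neutral = rng.normal(0.0, 1.0, size=(200, d))     # neutral prompts

# Knowledge vector: mean activation difference between the two prompt sets.
v_deductive = acts_deductive.mean(axis=0) - acts_neutral.mean(axis=0)
v_deductive /= np.linalg.norm(v_deductive)             # unit-normalize

def steer(hidden_state: np.ndarray, vector: np.ndarray, alpha: float = 2.0):
    """Shift a hidden state along the vector with strength alpha."""
    return hidden_state + alpha * vector

h = rng.normal(size=d)
h_steered = steer(h, v_deductive)
print(float(v_deductive @ (h_steered - h)))            # recovers alpha
```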
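The two training signals of the refinement framework can be illustrated with a small optimization sketch. The specific loss forms below (a cosine pull toward the other types' vectors for the complementary term, and an off-subspace penalty against the original vector's span for the constraint term) are assumptions chosen for illustration; the paper's exact definitions may differ.

```python
# Hypothetical sketch of the two training signals, assuming the refined
# vectors are free parameters initialized from the extracted ones.
import torch

torch.manual_seed(0)
d = 64
types = ["deductive", "inductive", "abductive"]

# Original extracted vectors (frozen) and refined vectors (trainable).
orig = {t: torch.nn.functional.normalize(torch.randn(d), dim=0) for t in types}
refined = {t: torch.nn.Parameter(orig[t].clone()) for t in types}
opt = torch.optim.Adam(refined.values(), lr=1e-2)

def complementary_loss(t: str) -> torch.Tensor:
    """Pull the refined vector toward the other types' mean direction
    so helpful auxiliary knowledge is shared (assumed form)."""
    others = torch.stack([orig[u] for u in types if u != t]).mean(dim=0)
    return 1.0 - torch.nn.functional.cosine_similarity(
        refined[t], others, dim=0)

def subspace_loss(t: str) -> torch.Tensor:
    """Penalize the component of the refined vector that leaves the
    original vector's span, preserving type-unique structure (assumed form)."""
    v = orig[t]
    proj = (refined[t] @ v) * v                # projection onto span(v)
    return torch.norm(refined[t] - proj) ** 2  # off-subspace energy

lam = 0.5                                      # trade-off weight (assumed)
for step in range(200):
    opt.zero_grad()
    loss = sum(complementary_loss(t) + lam * subspace_loss(t) for t in types)
    loss.backward()
    opt.step()
```

The trade-off weight `lam` balances knowledge sharing against preserving each vector's original subspace; at `lam = 0` the vectors collapse toward a common direction, while large `lam` keeps them isolated.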
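One simple way to probe shared versus unique structure across the vectors is to project each onto a common direction and inspect the residual. This decomposition is a stand-in for the paper's analysis; the vectors here are random placeholders.

```python
# Hypothetical shared-vs-unique decomposition: project each reasoning vector
# onto the normalized mean direction (shared part) and keep the residual
# (unique part).
import numpy as np

rng = np.random.default_rng(1)
vecs = {t: rng.normal(size=64) for t in ["deductive", "inductive", "abductive"]}
vecs = {t: v / np.linalg.norm(v) for t, v in vecs.items()}

# Shared direction: normalized mean of the three unit vectors.
shared = np.mean(list(vecs.values()), axis=0)
shared /= np.linalg.norm(shared)

for t, v in vecs.items():
    shared_part = (v @ shared) * shared        # component along shared axis
    unique_part = v - shared_part              # residual, type-specific
    print(t, f"shared={np.linalg.norm(shared_part):.2f}",
          f"unique={np.linalg.norm(unique_part):.2f}")
```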