Are Finer Citations Always Better? Rethinking Granularity for Attributed Generation
arXiv cs.CL / 4/6/2026
Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that citation granularity (sentence vs paragraph vs document) is a key design lever for attributed generation, but that simply choosing finer citations for human verifiability is not necessarily optimal for model performance.
- Across four model scales (8B–120B), enforcing fine-grained (sentence-level) citations significantly degrades attribution quality: the best-performing granularity setting outperforms it by 16% to 276%.
- The study finds a consistent optimum at intermediate granularity, with paragraph-level citations producing the highest attribution quality, while overly coarse citations add distracting noise.
- The performance penalty for fine-grained constraints varies non-monotonically with model scale, with larger models being disproportionately harmed—suggesting sentence-level “atomic” citation units interfere with the multi-sentence semantic synthesis these models rely on.
- It concludes that improving attribution requires aligning citation granularity with the model’s natural semantic scope, and that citation-optimal granularity can substantially improve attribution while preserving or even improving answer correctness.
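The three granularity levels compared in the paper can be made concrete with a small sketch. The snippet below splits a source document into citable units at sentence, paragraph, or document granularity; the function name and the naive regex-based sentence splitter are illustrative placeholders, not the paper's actual pipeline.

```python
import re

def citation_units(doc: str, granularity: str) -> list[str]:
    """Split a source document into citable units at a chosen granularity.

    Granularity choices mirror the levels compared in the paper:
    'sentence', 'paragraph', or 'document'. The splitting rules here
    are simplistic placeholders for illustration only.
    """
    if granularity == "document":
        return [doc.strip()]
    paragraphs = [p.strip() for p in doc.split("\n\n") if p.strip()]
    if granularity == "paragraph":
        return paragraphs
    if granularity == "sentence":
        # Naive splitter on sentence-final punctuation; a real system
        # would use a proper sentence tokenizer.
        sentences = []
        for p in paragraphs:
            sentences.extend(
                s.strip() for s in re.split(r"(?<=[.!?])\s+", p) if s.strip()
            )
        return sentences
    raise ValueError(f"unknown granularity: {granularity}")

doc = "Cats purr. Purring may aid healing.\n\nDogs bark. Barking signals alarm."
print(len(citation_units(doc, "document")))   # 1 unit
print(len(citation_units(doc, "paragraph")))  # 2 units
print(len(citation_units(doc, "sentence")))   # 4 units
```

Finer granularity yields more, smaller units for a model to cite; the paper's finding is that the intermediate (paragraph) level tends to give the best attribution quality.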
Related Articles

It turns out this neural network draws for free. I found out by accident.
Dev.to

Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption.
Dev.to

Three-Layer Memory Governance: Core, Provisional, Private
Dev.to

I Researched AI Prompting So You Don’t Have To
Dev.to

Top AI Tools Every Growing Business Should Use in 2026
Dev.to