LiRA: A Multi-Agent Framework for Reliable and Readable Literature Review Generation
arXiv cs.CL / 3/23/2026
Key Points
- LiRA introduces a multi-agent workflow with specialized agents that outline content, write subsections, edit, and review, automating literature-review generation end to end.
- It is evaluated on SciReviewGen and a proprietary ScienceDirect dataset, where LiRA outperforms baselines such as AutoSurvey and MASS-Survey in writing quality and citation quality while preserving similarity to human reviews.
- The study demonstrates robustness to reviewer model variation and viability in real-world document retrieval scenarios, supporting the practical utility of agentic LLM workflows for scientific writing.
- The results indicate that substantial improvements can be achieved without domain-specific tuning, suggesting LiRA's approach may scale to broader systematic-review tasks.
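The outline-write-edit-review workflow described above can be sketched as a simple staged pipeline. Everything below is an illustrative assumption, not LiRA's actual implementation: the agent names, the `Draft` container, and the string-based stubs standing in for LLM calls are all hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an agentic outline -> write -> edit -> review
# pipeline in the spirit of LiRA. Each *_agent function is a stub; in a
# real system it would prompt an LLM with retrieved papers.

@dataclass
class Draft:
    outline: list
    sections: dict = field(default_factory=dict)  # heading -> edited text
    reviews: list = field(default_factory=list)   # reviewer feedback notes

def outline_agent(topic, papers):
    # Stub: propose section headings for the review.
    return [f"{topic}: background", f"{topic}: methods", f"{topic}: open problems"]

def writer_agent(heading, papers):
    # Stub: draft one subsection, citing the retrieved papers.
    cites = ", ".join(p["id"] for p in papers)
    return f"## {heading}\n(survey text citing {cites})"

def editor_agent(text):
    # Stub: polish the drafted subsection.
    return text.replace("(survey text", "(edited survey text")

def reviewer_agent(sections):
    # Stub: produce per-section review feedback.
    return [f"reviewed: {heading}" for heading in sections]

def run_pipeline(topic, papers):
    draft = Draft(outline=outline_agent(topic, papers))
    for heading in draft.outline:
        draft.sections[heading] = editor_agent(writer_agent(heading, papers))
    draft.reviews = reviewer_agent(draft.sections)
    return draft

papers = [{"id": "smith2024"}, {"id": "lee2025"}]
draft = run_pipeline("Literature review agents", papers)
```

The point of the staged design is that each agent has one narrow responsibility, so a reviewer model can be swapped out (as the robustness experiments vary it) without touching the outlining or writing stages.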