Evaluating LLM-Driven Summarisation of Parliamentary Debates with Computational Argumentation
arXiv cs.CL / 4/22/2026
Key Points
- The paper addresses how LLM-generated summaries of parliamentary debates can make complex policy discussions more accessible to outside audiences.
- It highlights that existing automated summarisation metrics often correlate poorly with human assessments of faithfulness (consistency between summary and source).
- The authors propose a formal evaluation framework that uses computational argumentation to structure and assess the argumentative content surrounding contested proposals.
- The proposed method focuses on formal properties that test whether the reasoning supporting or opposing policy outcomes is faithfully preserved in the summary.
- The framework is demonstrated via a case study using European Parliament debate materials and corresponding LLM-driven summaries.
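To make the idea concrete, here is a minimal sketch of the kind of check such a framework might perform: represent the debate and its summary as Dung-style abstract argumentation frameworks, compute which arguments are accepted under grounded semantics, and test whether each shared argument has the same acceptance status in both. The argument names and attack relations below are hypothetical illustrations, not taken from the paper.

```python
def grounded_extension(args, attacks):
    """Least fixpoint of the characteristic function: iteratively
    accept every argument all of whose attackers are counter-attacked
    by an already-accepted argument."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    accepted = set()
    while True:
        new = {a for a in args
               if all(any((d, b) in attacks for d in accepted)
                      for b in attackers[a])}
        if new == accepted:
            return accepted
        accepted = new

# Hypothetical debate: a1 (proposal), a2 (objection to a1),
# a3 (rebuttal of a2)
debate_args = {"a1", "a2", "a3"}
debate_attacks = {("a2", "a1"), ("a3", "a2")}

# Hypothetical summary that keeps the objection a2 but drops the
# rebuttal a3 -- this flips the status of the proposal a1
summary_args = {"a1", "a2"}
summary_attacks = {("a2", "a1")}

src = grounded_extension(debate_args, debate_attacks)
summ = grounded_extension(summary_args, summary_attacks)

# Faithfulness check: every argument the summary retains should keep
# the acceptance status it had in the full debate
shared = summary_args & debate_args
faithful = all((a in src) == (a in summ) for a in shared)
print(sorted(src), sorted(summ), faithful)  # ['a1', 'a3'] ['a2'] False
```

In this toy example the summary is judged unfaithful: dropping the rebuttal reverses which side of the contested proposal is accepted, even though every sentence the summary kept is individually accurate. This is exactly the kind of reasoning-level distortion that surface-overlap metrics tend to miss.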


