Evaluating LLM-Based Goal Extraction in Requirements Engineering: Prompting Strategies and Their Limitations
arXiv cs.AI / 4/27/2026
Key Points
- The paper proposes automating parts of goal-oriented requirements engineering by extracting functional goals from software documentation using a multi-phase LLM pipeline (actor identification, then high- and low-level goal extraction).
- It evaluates prompting strategies, including several in-context learning variants and similarity-based selection of few-shot examples, to understand how the provided context affects extraction quality.
- A generation–critic feedback loop using two LLMs is introduced, and the study finds this zero-shot critic setup outperforms standalone few-shot prompting.
- While the pipeline reaches 61% accuracy for low-level goal identification, results suggest the approach is best used to accelerate human-led extraction rather than fully replace manual work.
- The authors report that combining the feedback mechanism with few-shot prompting offers no additional benefit, and they plan future improvements via retrieval-augmented generation (RAG) and chain-of-thought prompting.
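
The generation–critic loop described above can be sketched as an iterative refinement procedure: one model proposes goals, a second model critiques them, and the generator revises until the critic accepts. The sketch below uses stub functions in place of real LLM calls; all function names, the toy rejection criterion, and the stopping logic are illustrative assumptions, not the paper's actual prompts or models.

```python
from typing import Optional

def generator(document: str, feedback: Optional[str] = None) -> list:
    """Stand-in for the goal-extraction LLM: proposes candidate goals.

    Here we naively split sentences; a real generator would prompt an
    LLM and, on later rounds, revise its output using the critic's feedback.
    """
    goals = [g.strip() for g in document.split(".") if g.strip()]
    if feedback:
        # Toy revision: drop any goal the critic previously rejected.
        goals = [g for g in goals if g not in feedback]
    return goals

def critic(goals: list) -> Optional[str]:
    """Stand-in for the zero-shot critic LLM: flags goals it rejects.

    Toy criterion: reject fragments too short to be a meaningful goal.
    Returns None when all goals are accepted.
    """
    rejected = [g for g in goals if len(g.split()) < 3]
    return "; ".join(rejected) if rejected else None

def extract_goals(document: str, max_rounds: int = 3) -> list:
    """Run the generation-critic feedback loop for up to max_rounds."""
    feedback = None
    goals = []
    for _ in range(max_rounds):
        goals = generator(document, feedback)
        feedback = critic(goals)
        if feedback is None:  # critic accepts all goals: stop iterating
            break
    return goals
```

In the paper's setup the critic operates zero-shot, which is what makes the result notable: this simple accept/reject loop beat standalone few-shot prompting for goal extraction.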