Evaluating LLM-Based Goal Extraction in Requirements Engineering: Prompting Strategies and Their Limitations

arXiv cs.AI / 4/27/2026


Key Points

  • The paper proposes automating parts of goal-oriented requirements engineering by extracting functional goals from software documentation using a multi-phase LLM pipeline (actor identification, then high- and low-level goal extraction).
  • It evaluates prompting strategies, including different in-context learning variants and similarity measures between inputs and example shots, to understand how context affects extraction quality.
  • A generation–critic feedback loop using two LLMs is introduced, and the study finds this zero-shot critic setup outperforms standalone few-shot prompting (a sketch of the loop follows this list).
  • While the pipeline reaches 61% accuracy for low-level goal identification, results suggest the approach is best used to accelerate human-led extraction rather than fully replace manual work.
  • The authors report that combining the feedback mechanism with few-shot provides no advantage and plan future improvements via RAG and chain-of-thought prompting.
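
The critic loop highlighted above maps naturally onto a small piece of orchestration code. The sketch below is a reconstruction, not the authors' implementation: the OpenAI client, the model name, the prompt wording, and the 'APPROVED' stopping convention are all assumptions made for illustration.

```python
from openai import OpenAI

# Hypothetical client choice; the paper does not name a provider or model.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_llm(system_prompt: str, user_prompt: str, model: str = "gpt-4o-mini") -> str:
    """Single chat-completion call; the model here is illustrative only."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

def generate_with_critic(document: str, max_rounds: int = 3) -> str:
    """Generator proposes low-level goals; a zero-shot critic reviews them.

    The loop stops when the critic approves or after max_rounds iterations.
    The prompts and the 'APPROVED' convention are assumptions for this sketch.
    """
    goals = call_llm(
        "You extract low-level functional goals from software documentation.",
        f"Document:\n{document}\n\nList the low-level functional goals.",
    )
    for _ in range(max_rounds):
        critique = call_llm(
            "You are a requirements-engineering reviewer. Reply 'APPROVED' if the "
            "goal list is complete and correct; otherwise explain what is wrong.",
            f"Document:\n{document}\n\nExtracted goals:\n{goals}",
        )
        if critique.strip().startswith("APPROVED"):
            break
        # Feed the critic's feedback back to the generator for a revised attempt.
        goals = call_llm(
            "You extract low-level functional goals from software documentation.",
            f"Document:\n{document}\n\nPrevious attempt:\n{goals}\n\n"
            f"Reviewer feedback:\n{critique}\n\nProduce a revised goal list.",
        )
    return goals
```

Note that in this setup the critic runs zero-shot, which matches the configuration the study found to outperform standalone few-shot prompting.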

Abstract

Due to the textual and repetitive nature of many Requirements Engineering (RE) artefacts, Large Language Models (LLMs) have proven useful for automating their generation and processing. In this paper, we discuss a possible approach for automating the Goal-Oriented Requirements Engineering (GORE) process by extracting functional goals from software documentation through three phases: actor identification, high-level goal extraction, and low-level goal extraction. To implement these functionalities, we propose a chain of LLMs fed with engineered prompts. We experimented with different variants of in-context learning and measured the similarities between input data and in-context examples to better investigate their impact. Another key element is the generation-critic mechanism, implemented as a feedback loop involving two LLMs. The pipeline achieved 61% accuracy in low-level goal identification, the final stage; these results indicate the approach is best suited as a tool to accelerate manual extraction rather than as a full replacement. The feedback-loop mechanism with zero-shot prompting outperformed stand-alone few-shot prompting, and an ablation study suggests that performance degrades slightly without the feedback cycle. However, we found that combining the feedback mechanism with few-shot prompting delivers no advantage, possibly suggesting that the primary performance ceiling is the prompting strategy applied to the 'critic' LLM. Alongside refining both the quantity and quality of the shot examples, future research will integrate Retrieval-Augmented Generation (RAG) and Chain-of-Thought (CoT) prompting to improve accuracy.
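
To make the three-phase chain concrete, here is a minimal sketch of how actor identification could feed the two goal-extraction phases. The `llm` callable, the prompt texts, and the return format are assumptions; the paper's engineered prompts are not reproduced here.

```python
from typing import Callable

# Stand-in for any chat-completion call: (system_prompt, user_prompt) -> reply text.
# The paper does not tie the pipeline to a specific provider or model.
LLM = Callable[[str, str], str]

def extract_goals(document: str, llm: LLM) -> dict[str, str]:
    """Chain the three phases from the paper: each phase's output is fed,
    together with the source document, into the next engineered prompt."""
    # Phase 1: identify the actors mentioned in the documentation.
    actors = llm(
        "You identify the actors (users, roles, external systems) in software documentation.",
        f"Document:\n{document}\n\nList the actors, one per line.",
    )
    # Phase 2: extract high-level functional goals for those actors.
    high_level = llm(
        "You extract high-level functional goals from software documentation.",
        f"Document:\n{document}\n\nActors:\n{actors}\n\nList one or more high-level goals per actor.",
    )
    # Phase 3: decompose high-level goals into low-level goals
    # (the final stage, where the paper reports 61% accuracy).
    low_level = llm(
        "You decompose high-level functional goals into low-level functional goals.",
        f"Document:\n{document}\n\nHigh-level goals:\n{high_level}\n\nList the low-level goals.",
    )
    return {"actors": actors, "high_level": high_level, "low_level": low_level}
```

Injecting the `llm` callable keeps the chain independent of any one provider and makes it straightforward to wrap individual phases in the generation-critic loop sketched under the Key Points.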