Scalable Identification and Prioritization of Requisition-Specific Personal Competencies Using Large Language Models

arXiv cs.LG / 4/2/2026


Key Points

  • The paper argues that current AI recruitment systems often miss requisition (req)-specific personal competencies (PCs) that go beyond generic job categories.
  • It proposes an LLM-based method that combines dynamic few-shot prompting, reflection-driven self-improvement, similarity filtering, and multi-stage validation to extract and rank req-specific PCs.
  • Evaluated on Program Manager requisitions, the approach achieves an average accuracy of 0.76 for identifying the highest-priority req-specific PCs, nearing human expert inter-rater reliability.
  • The method also keeps the out-of-scope rate low (0.07), meaning it rarely proposes competencies that fall outside the intended scope of the requisition.
  • The overall contribution is a more scalable workflow for deriving fine-grained competency signals from requisitions, potentially improving how recruiters and HR teams operationalize candidate evaluation criteria.
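The pipeline described in the key points (dynamic few-shot selection, LLM extraction, similarity filtering, validation) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the LLM call is replaced by a stub that pools few-shot labels, and a toy word-overlap score stands in for whatever embedding similarity the paper uses. All function names and data are hypothetical.

```python
# Hedged sketch of a dynamic few-shot + similarity-filter pipeline for
# extracting req-specific PCs. Stand-ins: jaccard() replaces embedding
# similarity; extract_pcs() replaces the actual LLM extraction step.

def jaccard(a: str, b: str) -> float:
    """Toy word-overlap similarity (assumption: real system uses embeddings)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select_few_shot(req: str, labeled: list[tuple[str, list[str]]], k: int = 2):
    """Dynamic few-shot: pick the k labeled reqs most similar to this req."""
    return sorted(labeled, key=lambda ex: jaccard(req, ex[0]), reverse=True)[:k]

def extract_pcs(req: str, shots: list[tuple[str, list[str]]]) -> list[str]:
    """Stub for the LLM extraction call: here we just pool few-shot labels."""
    pcs: list[str] = []
    for _, labels in shots:
        pcs.extend(labels)
    return pcs

def filter_and_rank(pcs: list[str], req: str, taxonomy: set[str]) -> list[str]:
    """Similarity filtering + a scope check (one stage of validation):
    keep only in-taxonomy PCs, then rank by relevance to the req text."""
    kept = list(dict.fromkeys(pc for pc in pcs if pc in taxonomy))
    return sorted(kept, key=lambda pc: jaccard(pc, req), reverse=True)

# Hypothetical data illustrating the flow end to end.
labeled = [
    ("program manager coordinating cross functional teams",
     ["stakeholder management", "communication"]),
    ("software engineer building backend services",
     ["problem solving"]),
]
taxonomy = {"stakeholder management", "communication", "problem solving"}
req = "senior program manager leading cross functional delivery teams"

shots = select_few_shot(req, labeled, k=1)
ranked = filter_and_rank(extract_pcs(req, shots), req, taxonomy)
print(ranked)
```

The reflection-based self-improvement step from the paper is omitted here; in the described approach it would wrap the extraction call in a critique-and-retry loop before the filtered list is handed to validation.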

Abstract

AI-powered recruitment tools are increasingly adopted in personnel selection, yet they struggle to capture the requisition (req)-specific personal competencies (PCs) that distinguish successful candidates beyond job categories. We propose a large language model (LLM)-based approach to identify and prioritize req-specific PCs from reqs. Our approach integrates dynamic few-shot prompting, reflection-based self-improvement, similarity-based filtering, and multi-stage validation. Applied to a dataset of Program Manager reqs, our approach correctly identifies the highest-priority req-specific PCs with an average accuracy of 0.76, approaching human expert inter-rater reliability, and maintains a low out-of-scope rate of 0.07.