On Reasoning Behind Next Occupation Recommendation

arXiv cs.CL / 4/24/2026


Key Points

  • The paper proposes a two-step reasoning framework for next-occupation recommendation with large language models (LLMs), generating a user-specific “reason” from past education and career history before recommending the next job.
  • Because LLMs may not naturally align with real career paths or unobserved motivations, the authors fine-tune LLMs to improve both reasoning quality and occupation prediction performance.
  • They construct high-quality “oracle” reasons using an LLM-as-a-Judge evaluated by factuality, coherence, and utility, then use these oracle reasons to fine-tune smaller LLMs for reason generation and next-occupation prediction.
  • Experiments indicate the method boosts occupation prediction accuracy to levels comparable with fully supervised approaches, beats unsupervised baselines, and performs best when a single fine-tuned model handles both reasoning and prediction together.
  • The results also show that next-occupation accuracy is sensitive to the quality of the generated reasons, highlighting reasoning generation as a key driver of performance.
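The two-step pipeline in the first bullet can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two functions are hypothetical stand-ins for the fine-tuned reason-generator and occupation-predictor LLM calls, and the toy logic inside them exists only to make the example runnable.

```python
# Hypothetical sketch of the two-step framework: a reason generator first
# summarizes the user's education/career history into a textual "reason",
# which the occupation predictor then consumes alongside the history.
# Both functions are stubs standing in for fine-tuned LLM calls.

def generate_reason(history: list[str]) -> str:
    """Step 1: derive a user-specific 'reason' from past history (stub)."""
    return f"Prefers roles building on: {', '.join(history)}"

def predict_next_occupation(history: list[str], reason: str) -> str:
    """Step 2: recommend the next occupation conditioned on the reason (stub)."""
    if "data" in reason.lower():
        return "Data Scientist"
    return "Software Engineer"

history = ["BSc Statistics", "Data Analyst", "Machine Learning Engineer"]
reason = generate_reason(history)
prediction = predict_next_occupation(history, reason)
print(prediction)  # -> Data Scientist
```

In the paper's best-performing setup, a single fine-tuned model plays both roles; the two functions here would then be two prompts to the same model rather than two separate models.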

Abstract

In this work, we develop a novel reasoning approach to enhance the performance of large language models (LLMs) in future occupation prediction. In this approach, a reason generator first derives a "reason" for a user from his/her past education and career history. The reason summarizes the user's preferences and is used as the input of an occupation predictor to recommend the user's next occupation. This two-step occupation prediction approach is, however, non-trivial, as LLMs are not aligned with career paths or the unobserved reasons behind each occupation decision. We therefore propose to fine-tune LLMs to improve their reasoning and occupation prediction performance. We first derive high-quality oracle reasons, as measured by factuality, coherence, and utility criteria, using an LLM-as-a-Judge. These oracle reasons are then used to fine-tune small LLMs to perform reason generation and next occupation prediction. Our extensive experiments show that: (a) our approach effectively enhances LLMs' accuracy in next occupation prediction, making them comparable to fully supervised methods and superior to unsupervised methods; (b) a single LLM fine-tuned to perform reason generation and occupation prediction outperforms two LLMs fine-tuned to perform the tasks separately; and (c) the next occupation prediction accuracy depends on the quality of the generated reasons. Our code is available at https://github.com/Sarasarahhhhh/job_prediction.
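The oracle-reason construction described in the abstract can be sketched as a selection step: candidate reasons are scored by an LLM-as-a-Judge on factuality, coherence, and utility, and the best-scoring candidate is kept as the fine-tuning target. This is an illustrative assumption about how the judge scores are combined; the function names and the hard-coded scores below are hypothetical, standing in for actual judge-model outputs.

```python
# Hypothetical sketch of oracle-reason selection: each candidate reason
# carries per-criterion scores (factuality, coherence, utility) from an
# LLM-as-a-Judge; we keep the candidate with the highest mean score.

def select_oracle_reason(candidates: list[str],
                         judge_scores: dict[str, dict[str, float]]) -> str:
    """Return the candidate whose mean judge score is highest."""
    def mean_score(cand: str) -> float:
        scores = judge_scores[cand]
        return sum(scores.values()) / len(scores)
    return max(candidates, key=mean_score)

# Toy judge outputs (stand-ins for real LLM-as-a-Judge scores in [0, 1]).
scores = {
    "Reason A": {"factuality": 0.9, "coherence": 0.6, "utility": 0.7},
    "Reason B": {"factuality": 0.8, "coherence": 0.9, "utility": 0.8},
}
best = select_oracle_reason(["Reason A", "Reason B"], scores)
print(best)  # -> Reason B
```

Averaging the three criteria is one simple aggregation choice; the paper does not pin down the exact combination rule in this abstract, so any weighted variant would slot into `mean_score` the same way.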