FinTrace: Holistic Trajectory-Level Evaluation of LLM Tool Calling for Long-Horizon Financial Tasks

arXiv cs.AI · April 14, 2026


Key Points

  • FinTrace is introduced as a new trajectory-level benchmark for evaluating LLM tool calling in long-horizon financial tasks, addressing limitations of existing call-level metrics and narrow scenarios.
  • The benchmark includes 800 expert-annotated trajectories across 34 real-world financial task categories and uses a rubric with nine metrics across four axes: action correctness, execution efficiency, process quality, and output quality.
  • Evaluations of 13 LLMs show a recurring gap: models can often select the right tools, but struggle with information utilization and producing high-quality final answers.
  • To go beyond diagnosis, the paper constructs FinTrace-Training, an 8,196-trajectory preference dataset with tool-augmented contexts and preference pairs for financial tool calling.
  • Fine-tuning Qwen-3.5-9B with supervised fine-tuning plus DPO improves intermediate reasoning/process metrics and reduces failure modes, but end-to-end final answer quality remains a bottleneck.
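The rubric described above scores each trajectory on nine metrics grouped into four axes. A minimal sketch of how such per-metric scores might be rolled up into axis and overall scores follows; the metric names and their grouping are illustrative assumptions, not the paper's actual rubric definitions.

```python
from statistics import mean

# Hypothetical grouping of nine metrics into the four axes named in the
# paper (action correctness, execution efficiency, process quality,
# output quality). Metric names are assumed for illustration.
RUBRIC_AXES = {
    "action_correctness": ["tool_selection", "argument_validity"],
    "execution_efficiency": ["redundant_calls", "step_count"],
    "process_quality": ["information_utilization", "reasoning_coherence"],
    "output_quality": ["answer_accuracy", "answer_completeness", "formatting"],
}

def score_trajectory(metric_scores: dict) -> dict:
    """Aggregate per-metric scores in [0, 1] into per-axis means
    plus an unweighted overall mean across the four axes."""
    axis_scores = {
        axis: mean(metric_scores[m] for m in metrics)
        for axis, metrics in RUBRIC_AXES.items()
    }
    axis_scores["overall"] = mean(axis_scores.values())
    return axis_scores

# Toy trajectory mirroring the paper's finding: strong tool selection,
# weak information utilization and final answer quality.
scores = score_trajectory({
    "tool_selection": 0.9, "argument_validity": 0.8,
    "redundant_calls": 0.7, "step_count": 0.6,
    "information_utilization": 0.4, "reasoning_coherence": 0.5,
    "answer_accuracy": 0.3, "answer_completeness": 0.4, "formatting": 0.8,
})
```

Axis-level scores like these make the diagnosed gap visible directly: a trajectory can score high on action correctness while process and output quality drag the overall score down.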

Abstract

Recent studies demonstrate that tool-calling capability enables large language models (LLMs) to interact with external environments for long-horizon financial tasks. While existing benchmarks have begun evaluating financial tool calling, they focus on limited scenarios and rely on call-level metrics that fail to capture trajectory-level reasoning quality. To address this gap, we introduce FinTrace, a benchmark comprising 800 expert-annotated trajectories spanning 34 real-world financial task categories across multiple difficulty levels. FinTrace employs a rubric-based evaluation protocol with nine metrics organized along four axes -- action correctness, execution efficiency, process quality, and output quality -- enabling fine-grained assessment of LLM tool-calling behavior. Our evaluation of 13 LLMs reveals that while frontier models achieve strong tool selection, all models struggle with information utilization and final answer quality, exposing a critical gap between invoking the right tools and reasoning effectively over their outputs. To move beyond diagnosis, we construct FinTrace-Training, the first trajectory-level preference dataset for financial tool-calling, containing 8,196 curated trajectories with tool-augmented contexts and preference pairs. We fine-tune Qwen-3.5-9B using supervised fine-tuning followed by direct preference optimization (DPO) and show that training on FinTrace-Training consistently improves intermediate reasoning metrics, with DPO more effectively suppressing failure modes. However, end-to-end answer quality remains a bottleneck, indicating that trajectory-level improvements do not yet fully propagate to final output quality.
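The DPO stage mentioned above optimizes the standard direct preference optimization objective over chosen/rejected trajectory pairs. A minimal sketch of that loss on a single pair, using toy log-probabilities rather than anything from the paper's setup:

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one preference pair:
    -log sigmoid(beta * (policy log-ratio margin - reference margin)).
    Lower loss means the policy prefers the chosen trajectory more
    strongly (relative to the frozen reference model) than the rejected one."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Policy identical to the reference: margin 0, loss = ln 2.
loss_equal = dpo_loss(-10.0, -12.0, -10.0, -12.0)

# Policy has shifted probability mass toward the chosen trajectory:
# positive margin, loss below ln 2.
loss_better = dpo_loss(-8.0, -12.0, -10.0, -12.0)
```

In a trajectory-level setting like FinTrace-Training, the log-probabilities would be summed over the whole multi-step tool-calling trajectory (with its tool-augmented context) rather than a single response, which is what lets the objective reward better intermediate behavior and not just the final answer.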