R2IF: Aligning Reasoning with Decisions via Composite Rewards for Interpretable LLM Function Calling

arXiv cs.LG / 4/23/2026

📰 News · Models & Research

Key Points

  • The paper introduces R2IF, a reasoning-aware reinforcement learning framework designed to align an LLM’s internal reasoning with its external tool-call decisions in function calling.
  • R2IF uses a composite reward that combines format/correctness constraints with a Chain-of-Thought Effectiveness Reward (CER) and a Specification-Modification-Value (SMV) reward.
  • The method is optimized with GRPO and evaluated on BFCL/ACEBench to improve both tool-calling accuracy and the interpretability of the model’s reasoning.
  • Experiments show R2IF achieves up to a 34.62% improvement over baselines (e.g., Llama3.2-3B on BFCL) while maintaining positive average CoT effectiveness (0.05 for Llama3.2-3B).

Abstract

Function calling empowers large language models (LLMs) to interface with external tools, yet existing RL-based approaches suffer from misalignment between reasoning processes and tool-call decisions. We propose R2IF, a reasoning-aware RL framework for interpretable function calling, adopting a composite reward integrating format/correctness constraints, Chain-of-Thought Effectiveness Reward (CER), and Specification-Modification-Value (SMV) reward, optimized via GRPO. Experiments on BFCL/ACEBench show R2IF outperforms baselines by up to 34.62% (Llama3.2-3B on BFCL) with positive Average CoT Effectiveness (0.05 for Llama3.2-3B), enhancing both function-calling accuracy and interpretability for reliable tool-augmented LLM deployment.
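The composite reward described above can be sketched as a weighted sum of its components, followed by GRPO-style group-relative advantage normalization. This is an illustrative sketch only: the component names (format/correctness, CER, SMV) come from the paper, but the weights, the toy scoring functions, and the helper names (`composite_reward`, `grpo_advantages`) are assumptions, not the authors' implementation.

```python
import statistics

# Toy stand-ins for the paper's reward components; the real scorers are
# model- and benchmark-specific (hypothetical signatures).
def format_reward(output: str) -> float:
    """1.0 if the tool call is syntactically well-formed (here: bracketed)."""
    return 1.0 if output.startswith("[") and output.endswith("]") else 0.0

def correctness_reward(output: str, gold: str) -> float:
    """1.0 if the tool call exactly matches the reference call."""
    return 1.0 if output == gold else 0.0

def composite_reward(output: str, gold: str, cer: float, smv: float,
                     weights=(1.0, 2.0, 0.5, 0.5)) -> float:
    """Weighted sum of format, correctness, CER, and SMV terms.

    cer/smv are assumed to be precomputed scores for the sampled CoT;
    the weights here are illustrative, not from the paper.
    """
    w_fmt, w_cor, w_cer, w_smv = weights
    return (w_fmt * format_reward(output)
            + w_cor * correctness_reward(output, gold)
            + w_cer * cer
            + w_smv * smv)

def grpo_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantages: normalize rewards within one sampled group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mu) / sigma for r in rewards]
```

In GRPO, each prompt's group of sampled completions is scored with the composite reward and the advantages are computed relative to the group mean, so no separate value model is needed.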