Reciprocal Co-Training (RCT): Coupling Gradient-Based and Non-Differentiable Models via Reinforcement Learning

arXiv cs.CL / April 21, 2026


Key Points

  • The paper proposes Reciprocal Co-Training (RCT), a framework that couples an LLM with a non-differentiable Random Forest (RF) classifier despite their incompatible training paradigms.
  • It uses reinforcement learning to create a bidirectional feedback loop: the LLM improves using RF probability signals, while the RF benefits from LLM-derived embeddings that augment its feature space.
  • Tabular data are converted into standardized text representations so the LLM can process them and produce useful embeddings for the RF.
  • Experiments on three medical datasets show consistent performance improvements for both models, with particularly strong gains for the LLM; ablations attribute the improvements to iterative refinement, hybrid reward design, and dimensionality control.
  • The authors position RCT as a general mechanism for integrating otherwise incompatible model families by enabling reciprocal adaptation.
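The loop described in the key points can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the `llm_embed` stub (a random projection here), the feature-augmentation step, and the reward definition (RF probability of the true class) are all assumptions about how such a round might look.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def llm_embed(X):
    """Stand-in for LLM embeddings of text-serialized rows (hypothetical)."""
    return rng.normal(size=(len(X), 64))

def rct_round(X_tab, y, n_dims=32):
    """One simplified reciprocal co-training round."""
    # 1. LLM side: embed rows, keeping only n_dims dims ("dimensionality control").
    emb = llm_embed(X_tab)[:, :n_dims]
    # 2. RF side: augment the raw tabular features with the LLM embeddings.
    X_aug = np.hstack([X_tab, emb])
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y)
    # 3. Feedback: RF probabilities of the true class serve as reward
    #    signals for a reinforcement-learning update of the LLM (omitted).
    probs = rf.predict_proba(X_aug)
    rewards = probs[np.arange(len(y)), y]
    return rf, rewards

X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)
rf, rewards = rct_round(X, y)
```

In the paper this round would be iterated, with the LLM's policy updated from `rewards` before the next embedding pass; the sketch stops before that RL step.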

Abstract

Large language models (LLMs) and classical machine learning methods offer complementary strengths for predictive modeling, yet their fundamentally different representations and training paradigms hinder effective integration: LLMs rely on gradient-based optimization over textual data, whereas models such as Random Forests (RF) employ non-differentiable feature partitioning. This work introduces a reciprocal co-training framework that couples an LLM with an RF classifier via reinforcement learning, creating an iterative feedback loop in which each model improves using signals from the other. Tabular data are reformulated into standardized textual representations for the LLM, whose embeddings augment the RF feature space, while calibrated RF probability estimates provide feedback signals that guide reinforcement learning updates of the LLM. Experiments across three medical datasets demonstrate consistent performance gains for both models, with particularly strong effects for the LLM. Ablation analyses show that iterative refinement, hybrid reward design, and dimensionality control jointly contribute to these gains. The proposed framework provides a general mechanism that allows incompatible model families to leverage each other's strengths through bidirectional adaptation.
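The "standardized textual representations" mentioned in the abstract are typically produced by serializing each tabular row into a templated sentence. The template below is an assumption for illustration; the paper's exact format may differ.

```python
def serialize_row(row: dict) -> str:
    """Serialize a feature dict into a standardized 'feature is value'
    string an LLM can embed. Sorting keys keeps the format stable
    across rows (a hypothetical convention, not the paper's)."""
    return "; ".join(f"{k} is {v}" for k, v in sorted(row.items()))

text = serialize_row({"age": 63, "sex": "male", "chol": 233})
# "age is 63; chol is 233; sex is male"
```

A fixed field order and template matter here: the LLM's embeddings feed the RF as features, so inconsistent serialization would add noise to the augmented feature space.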