When Valid Signals Fail: Regime Boundaries Between LLM Features and RL Trading Policies

arXiv cs.CL / 4/14/2026


Key Points

  • The paper proposes a pipeline where a frozen LLM converts daily news/filings into fixed-dimensional numerical features that feed a PPO reinforcement learning trading agent.
  • It uses an automated prompt-optimization loop that tunes the extraction prompt as a discrete hyperparameter against Information Coefficient (Spearman rank correlation) rather than standard NLP losses.
  • While the optimized prompts can yield genuinely predictive features (IC above 0.15 on held-out data), those features can fail to improve trading performance under distribution shift caused by a macroeconomic shock.
  • In the stressed regime the LLM-derived features add noise and the augmented agent underperforms a price-only baseline, though performance can recover in calmer periods.
  • The study emphasizes a “feature-level validity vs policy-level robustness” gap under distribution shift, with macroeconomic state variables remaining the most robust drivers of improvement.
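The Information Coefficient mentioned in the points above is simply the Spearman rank correlation between a feature's predicted scores and realized returns. A minimal sketch (function name is illustrative, not from the paper):

```python
from scipy.stats import spearmanr

def information_coefficient(predicted_scores, realized_returns):
    """Information Coefficient: Spearman rank correlation between a
    feature's cross-sectional predictions and the returns that were
    actually realized. Values near +1 mean the feature ranks assets
    almost perfectly; values near 0 mean it carries no rank signal."""
    ic, _p_value = spearmanr(predicted_scores, realized_returns)
    return ic
```

Because Spearman correlation operates on ranks, it rewards getting the *ordering* of returns right rather than their magnitudes, which is why an IC above 0.15 can be meaningful even when point forecasts are noisy.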

Abstract

Can large language models (LLMs) generate continuous numerical features that improve reinforcement learning (RL) trading agents? We build a modular pipeline where a frozen LLM serves as a stateless feature extractor, transforming unstructured daily news and filings into a fixed-dimensional vector consumed by a downstream PPO agent. We introduce an automated prompt-optimization loop that treats the extraction prompt as a discrete hyperparameter and tunes it directly against the Information Coefficient (the Spearman rank correlation between predicted and realized returns) rather than standard NLP losses. The optimized prompt discovers genuinely predictive features (IC above 0.15 on held-out data). However, these valid intermediate representations do not automatically translate into downstream task performance: during a distribution shift caused by a macroeconomic shock, LLM-derived features add noise, and the augmented agent underperforms a price-only baseline. In a calmer test regime the agent recovers, yet macroeconomic state variables remain the most robust driver of policy improvement. Our findings highlight a gap between feature-level validity and policy-level robustness that parallels known challenges in transfer learning under distribution shift.
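The abstract's prompt-optimization loop can be sketched as a discrete hyperparameter search: each candidate prompt is scored by the held-out IC of the features it extracts, and the best-scoring prompt wins. A minimal sketch, assuming a hypothetical `extract_score(prompt, doc)` that stands in for the frozen LLM's feature extraction (the paper does not specify this API):

```python
from scipy.stats import spearmanr

def select_prompt(candidate_prompts, extract_score, documents, realized_returns):
    """Treat the extraction prompt as a discrete hyperparameter: score
    each candidate by the Information Coefficient of the features it
    produces on held-out data, and return the best prompt and its IC."""
    best_prompt, best_ic = None, float("-inf")
    for prompt in candidate_prompts:
        # Frozen LLM maps each document to a scalar score under this prompt
        # (extract_score is a stand-in for that call).
        scores = [extract_score(prompt, doc) for doc in documents]
        candidate_ic, _ = spearmanr(scores, realized_returns)
        if candidate_ic > best_ic:
            best_prompt, best_ic = prompt, candidate_ic
    return best_prompt, best_ic
```

The key design point from the abstract is the objective: the loop optimizes rank correlation with realized returns directly, not a conventional NLP loss, so prompt quality is judged by downstream predictive value rather than linguistic fidelity.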