Moira: Language-driven Hierarchical Reinforcement Learning for Pair Trading

arXiv cs.AI / 5/5/2026


Key Points

  • The paper introduces Moira, a language-driven hierarchical reinforcement learning framework for pair trading where high-level semantic decisions (pair selection) constrain lower-level execution.
  • It frames pair trading as a hierarchical RL problem under delayed and ambiguous feedback, tackling the credit-assignment challenge of attributing outcomes to the abstraction level versus the execution level.
  • Both the high-level and low-level policies are parameterized by large language models (LLMs), and the method optimizes them solely via prompt updates rather than gradient-based fine-tuning.
  • By explicitly separating abstraction selection from execution, the approach reduces non-stationarity across hierarchical levels and enables targeted adaptation under delayed rewards.
  • Experiments on real market data report consistent improvements over traditional and LLM-based baselines, supporting the effectiveness of language-driven hierarchical RL.

Abstract

Many sequential decision-making problems exhibit hierarchical structure, where high-level semantic choices constrain downstream actions and feedback is delayed and ambiguous. Learning in such settings is challenging due to credit assignment: performance degradation may arise from flawed abstractions, suboptimal execution, or their interaction. We study this challenge through pair trading, a domain that naturally combines long-horizon semantic reasoning for asset pair selection with short-horizon execution under partial observability. We formulate pair trading as a hierarchical reinforcement learning problem and propose a language-driven optimization framework in which both high-level and low-level policies are parameterized by large language models (LLMs) and optimized exclusively through prompt updates. Our approach leverages pretrained LLMs as hierarchical policies and uses trajectory- and episode-level textual feedback to adapt abstractions and execution without gradient-based fine-tuning. By explicitly separating abstraction selection from execution, the framework reduces non-stationarity across hierarchical levels and enables targeted adaptation under delayed feedback. Experiments on real-world market data show consistent improvements over traditional and LLM-based baselines, demonstrating the effectiveness of language-driven hierarchical reinforcement learning.
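The abstract describes a two-level loop: an LLM-based high-level policy selects the asset pair, an LLM-based low-level policy executes trades on its spread, and both are adapted purely by appending textual feedback to their prompts rather than by gradient updates. A minimal sketch of that loop is below; it is illustrative only, not the paper's implementation. All function names (`high_level_policy`, `low_level_policy`, `update_prompt`, `run_episode`) and the stubbed rule-based "LLM" behaviors are assumptions made for the sketch.

```python
# Hypothetical sketch of a language-driven hierarchical loop in the spirit of
# the abstract. The LLM calls are replaced by deterministic stubs so the
# sketch is self-contained and runnable.

def high_level_policy(prompt, candidate_pairs):
    """Stub for the LLM pair-selection policy: pick the pair whose first
    ticker is mentioned in the (textual) prompt, else the first candidate."""
    for pair in candidate_pairs:
        if pair[0] in prompt:
            return pair
    return candidate_pairs[0]

def low_level_policy(prompt, spread):
    """Stub for the LLM execution policy: a mean-reversion threshold rule
    whose aggressiveness is 'read' from the prompt text."""
    threshold = 1.0 if "conservative" in prompt else 0.5
    if spread > threshold:
        return "short_spread"
    if spread < -threshold:
        return "long_spread"
    return "hold"

def update_prompt(prompt, episode_return):
    """Stub for prompt-based optimization: append textual feedback instead
    of performing gradient-based fine-tuning."""
    feedback = "be conservative" if episode_return < 0 else "keep current style"
    return prompt + f" Feedback: {feedback}."

def run_episode(hl_prompt, ll_prompt, candidate_pairs, spreads):
    """One episode: abstraction selection, then step-by-step execution,
    then a toy episode-level return used as textual feedback."""
    pair = high_level_policy(hl_prompt, candidate_pairs)
    actions = [low_level_policy(ll_prompt, s) for s in spreads]
    # Toy reward: trading a large spread pays off, trading a small one costs.
    ret = sum((1 if abs(s) > 1.0 else -1)
              for a, s in zip(actions, spreads) if a != "hold")
    return pair, actions, ret

pairs = [("KO", "PEP"), ("XOM", "CVX")]
pair, actions, ret = run_episode("Trade XOM against CVX.", "", pairs,
                                 spreads=[0.3, 1.4, -1.6])
new_ll_prompt = update_prompt("", ret)  # episode-level textual adaptation
```

The point of the sketch is the separation the paper emphasizes: the high-level prompt governs only pair selection, the low-level prompt only execution, and each can receive targeted textual feedback without disturbing the other level.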