MemGuard-Alpha: Detecting and Filtering Memorization-Contaminated Signals in LLM-Based Financial Forecasting via Membership Inference and Cross-Model Disagreement

arXiv cs.LG / March 31, 2026


Key Points

  • MemGuard-Alpha is presented as a zero-cost, post-generation framework to detect and filter memorization-contaminated signals that can cause look-ahead bias in LLM-based financial forecasting.
  • The approach combines a MemGuard Composite Score (MCS), which aggregates multiple membership inference attack signals with temporal proximity features, and Cross-Model Memorization Disagreement (CMMD), which leverages different training cutoff dates across LLMs to flag memorized outputs.
  • Experiments across seven LLMs, 50 S&P 100 stocks, and 42,800 prompts over 2019–2024 show substantially improved trading performance after filtering, including a higher Sharpe ratio (4.11 vs 2.76) and much larger average daily returns for “clean” signals.
  • The paper reports a clear memorization signature: in-sample accuracy increases with contamination while out-of-sample accuracy declines, directly illustrating that memorization inflates apparent model performance.
  • The authors argue prior mitigations like retraining or input anonymization are costly or information-losing, positioning MemGuard-Alpha as a practical real-time filtering alternative for quantitative strategies.
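To make the CMMD idea concrete: a signal is suspect when models whose training data could include the target date agree with each other but disagree with models whose cutoff precedes it. The sketch below is illustrative only, not the paper's implementation; the model names, cutoff dates, and the simple consensus rule are all assumptions.

```python
from datetime import date

# Hypothetical training cutoff dates for three models (illustrative only).
CUTOFFS = {
    "model_a": date(2023, 4, 30),
    "model_b": date(2023, 12, 31),
    "model_c": date(2021, 9, 30),
}

def cmmd_flag(signals: dict[str, int], target_date: date) -> bool:
    """Flag a signal as likely memorized.

    `signals` maps model name -> directional prediction (+1/-1) for
    `target_date`. If models whose training window covers the target
    date agree with one another but disagree with models whose cutoff
    precedes it, the agreement plausibly reflects memorization rather
    than genuine reasoning.
    """
    exposed = [s for m, s in signals.items() if CUTOFFS[m] >= target_date]
    unexposed = [s for m, s in signals.items() if CUTOFFS[m] < target_date]
    if not exposed or not unexposed:
        return False  # no cutoff boundary to compare across
    # Consensus within each group, disagreement across groups -> flag.
    exposed_consensus = len(set(exposed)) == 1
    unexposed_consensus = len(set(unexposed)) == 1
    return exposed_consensus and unexposed_consensus and exposed[0] != unexposed[0]

# A date inside model_a/model_b's training windows but after model_c's cutoff:
print(cmmd_flag({"model_a": 1, "model_b": 1, "model_c": -1}, date(2022, 6, 1)))  # → True
```

The actual CMMD algorithm aggregates over many prompts and seven models; the point here is only the cutoff-boundary comparison.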

Abstract

Large language models (LLMs) are increasingly used to generate financial alpha signals, yet growing evidence shows that LLMs memorize historical financial data from their training corpora, producing spurious predictive accuracy that collapses out-of-sample. This memorization-induced look-ahead bias threatens the validity of LLM-based quantitative strategies. Prior remedies -- model retraining and input anonymization -- are either prohibitively expensive or introduce significant information loss. No existing method offers practical, zero-cost signal-level filtering for real-time trading. We introduce MemGuard-Alpha, a post-generation framework comprising two algorithms: (i) the MemGuard Composite Score (MCS), which combines five membership inference attack (MIA) methods with temporal proximity features via logistic regression, achieving Cohen's d = 18.57 for contamination separation (d = 0.39-1.37 using MIA features alone); and (ii) Cross-Model Memorization Disagreement (CMMD), which exploits variation in training cutoff dates across LLMs to separate memorized signals from genuine reasoning. Evaluated across seven LLMs (124M-7B parameters), 50 S&P 100 stocks, 42,800 prompts, and five MIA methods over 5.5 years (2019-2024), CMMD achieves a Sharpe ratio of 4.11 versus 2.76 for unfiltered signals (49% improvement). Clean signals produce 14.48 bps average daily return versus 2.13 bps for tainted signals (7x difference). A striking crossover pattern emerges: in-sample accuracy rises with contamination (40.8% to 52.5%) while out-of-sample accuracy falls (47% to 42%), providing direct evidence that memorization inflates apparent accuracy at the cost of generalization.
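As a rough sketch of how an MCS-style composite might be scored at inference time: a logistic model over per-prompt MIA scores plus a temporal-proximity feature. The feature names, weights, and threshold below are hypothetical placeholders, not the paper's fitted values; in MemGuard-Alpha the weights would come from logistic regression on labeled contamination data.

```python
import math

# Hypothetical weights for a fitted logistic model (placeholders, not
# the paper's coefficients). The five feature names stand in for the
# five MIA methods combined by the MCS.
WEIGHTS = {
    "loss_attack": 1.2,         # loss/perplexity-based MIA score
    "zlib_ratio": 0.8,          # zlib-entropy-calibrated MIA score
    "min_k_prob": 1.5,          # Min-K%-style probability MIA score
    "neighborhood": 0.6,        # neighborhood-comparison MIA score
    "reference_diff": 0.9,      # reference-model-calibrated MIA score
    "temporal_proximity": 2.0,  # closeness of target date to training cutoff
}
BIAS = -3.0

def memguard_composite_score(features: dict[str, float]) -> float:
    """Sigmoid of a weighted sum of MIA + temporal features, in (0, 1).

    Higher values mean the signal is more likely contaminated by
    memorization; missing features default to 0.
    """
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def is_clean(features: dict[str, float], threshold: float = 0.5) -> bool:
    """Signal-level filter: keep only signals below the MCS threshold."""
    return memguard_composite_score(features) < threshold
```

Because scoring is a single weighted sum per generated signal, filtering of this kind adds essentially no cost at trading time, which is the "zero-cost, post-generation" property the paper emphasizes.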