CoFEE: Reasoning Control for LLM-Based Feature Discovery

arXiv cs.AI / 4/25/2026


Key Points

  • The paper frames feature discovery from complex unstructured data as a reasoning problem that must find predictive abstractions while avoiding leakage, proxy signals, and post-outcome information.
  • It introduces CoFEE (Cognitive Feature Engineering Engine), a reasoning-control framework that forces an LLM to follow structured “cognitive behaviors” during feature generation.
  • The enforced behaviors include backward chaining from outcomes, subgoal decomposition, verification against observability/leakage criteria, and explicit backtracking of unproductive reasoning paths.
  • In controlled comparisons against unconstrained “vanilla” LLM prompting, CoFEE produces features with higher empirical predictability, achieving a 15.2% higher Success Rate Score while generating 29% fewer features and reducing costs by 53.3%.
  • Held-out feature evaluation suggests that reasoning control can improve both the quality and efficiency of LLM-based feature discovery beyond the data used for discovery.
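To make the enforced behaviors concrete, here is a minimal, hypothetical sketch of a reasoning-control loop in the spirit the key points describe: candidate features are checked against observability and leakage criteria (the verification behavior), and rejected candidates are recorded rather than silently discarded (explicit backtracking). All names and flags below are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A proposed feature (in CoFEE, proposals would come from the LLM)."""
    name: str
    uses_post_outcome_data: bool        # hypothetical leakage flag
    observable_at_prediction_time: bool  # hypothetical observability flag

def verify(c: Candidate) -> bool:
    """Verification behavior: reject features that leak or are unobservable."""
    return c.observable_at_prediction_time and not c.uses_post_outcome_data

def discover(candidates: list[Candidate], max_features: int):
    """Filter candidates, keeping an explicit record of rejected paths."""
    accepted, rejected = [], []
    for c in candidates:
        if verify(c):
            accepted.append(c.name)
        else:
            rejected.append(c.name)  # backtracking: log the unproductive path
        if len(accepted) >= max_features:
            break
    return accepted, rejected
```

In the actual framework these checks constrain the LLM's reasoning during generation rather than post-filtering a fixed list; the sketch only illustrates why verification plus recorded backtracking can yield fewer, cleaner features than unconstrained generation.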

Abstract

Feature discovery from complex unstructured data is fundamentally a reasoning problem: it requires identifying abstractions that are predictive of a target outcome while avoiding leakage, proxies, and post-outcome signals. With the advent of ever-improving Large Language Models (LLMs), we propose a structured approach to this challenge. LLMs are well suited to the task because they can process large amounts of information, but unconstrained feature generation can produce weak features. In this work, we study reasoning control in LLMs by inducing cognitive behaviors that improve feature discovery. We introduce CoFEE (Cognitive Feature Engineering Engine), a reasoning-control framework that enforces cognitive behaviors in how the LLM reasons during feature discovery. From a machine learning perspective, these cognitive behaviors act as structured inductive biases over the space of candidate features generated by the model. The behaviors, which have been exploited with success in ML models, include backward chaining from outcomes, subgoal decomposition, verification against observability and leakage criteria, and explicit backtracking of rejected reasoning paths. In a controlled comparison, we show that enforcing cognitive behaviors yields features with higher empirical predictability than unconstrained vanilla LLM prompting. CoFEE achieves an average Success Rate Score 15.2% higher than the vanilla approach, while generating 29% fewer features and reducing costs by 53.3%. Using held-out feature evaluation, we assess whether cognitively induced features generalize beyond the data used for discovery. Our results indicate that, in our evaluated setting, reasoning control is associated with improvements in the quality and efficiency of LLM-based feature discovery.