Evolving Demonstration Optimization for Chain-of-Thought Feature Transformation
arXiv cs.AI / 3/12/2026
Key Points
- The paper presents a closed-loop framework that distills trajectory-level transformation experiences from reinforcement learning into a library of downstream-verified feature transformations, which then guides LLM-based feature transformation.
- It uses a diversity-aware selector to assemble in-context demonstrations and adds chain-of-thought guidance that steers the generation of transformed features toward higher downstream performance.
- Experiments on diverse tabular benchmarks show the approach outperforms classical and LLM-based baselines and is more stable than one-shot generation.
- The framework generalizes across API-based and open-source LLMs and remains robust across different downstream evaluators.
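The diversity-aware selection step described above can be sketched as a greedy trade-off between an experience's downstream score and its distance from already-chosen demonstrations. This is a minimal illustration, not the paper's actual algorithm: the `features`/`score` fields, the Euclidean distance, and the `alpha` weighting are all assumptions made for the sketch.

```python
import math

def diversity_aware_select(library, k, alpha=0.5):
    """Greedily pick k experiences, balancing downstream score against
    diversity (max-min distance to the already-selected set).
    Hypothetical sketch; field names and weighting are assumptions."""
    def dist(a, b):
        # Euclidean distance between two experience embeddings
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    selected = []
    remaining = list(library)
    while remaining and len(selected) < k:
        # Score each candidate: alpha * verified downstream score
        # + (1 - alpha) * distance to its nearest selected neighbor
        best = max(
            remaining,
            key=lambda e: alpha * e["score"]
            + (1 - alpha)
            * min((dist(e["features"], s["features"]) for s in selected),
                  default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy experience library of verified feature transformations
lib = [
    {"id": "log(x1)", "features": [1.0, 0.0], "score": 0.80},
    {"id": "x1*x2",   "features": [0.0, 1.0], "score": 0.70},
    {"id": "log(x2)", "features": [0.9, 0.1], "score": 0.75},
]
demos = diversity_aware_select(lib, k=2)
# Picks the top-scoring experience first, then a dissimilar one
# rather than the near-duplicate log transform.
```

Such a selector would keep the in-context demonstrations from collapsing onto near-identical transformations, which is one plausible reason the paper reports greater stability than one-shot generation.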