Can Large Language Models Reinvent Foundational Algorithms?
arXiv cs.AI / 4/8/2026
Key Points
- The paper investigates whether large language models can “reinvent” established foundational computer science algorithms after they are intentionally unlearned from the model’s pretrained knowledge.
- It introduces an Unlearn-and-Reinvent pipeline that uses a GRPO-based, on-policy unlearning method to remove specific algorithms (e.g., Dijkstra’s, Euclid’s) and then evaluates reinvention in a controlled setting.
- Experiments across 10 target algorithms, three open-weight models, and multiple hint levels show that the best-performing model (Qwen3-4B-Thinking-2507) reinvents 50% of algorithms with no hints, rising to 70% with hint level 1 and 90% with hint level 2.
- The study finds that hints significantly help with simpler algorithms, while even step-by-step hints can fail for more complex ones; test-time reinforcement learning, however, enables successful reinvention of the Strassen algorithm at higher hint levels.
- Analysis and ablations suggest that a generative verifier during the reinvention phase is crucial for maintaining reasoning quality and avoiding “thought collapse,” revealing both potential and limitations of LLM-based algorithmic innovation.
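To make the GRPO-based unlearning step in the pipeline concrete, here is a minimal, illustrative Python sketch of the group-relative advantage computation at the heart of GRPO. The reward function is a stand-in assumption for illustration: it penalizes completions that reproduce the target algorithm (detected here by a simple keyword match), so on-policy training would push the model away from that knowledge. This is not the paper's implementation, just a sketch of the underlying technique.

```python
# Sketch of GRPO-style group-relative advantages, with a toy
# "unlearning" reward that penalizes reproducing a target algorithm.
from statistics import mean, stdev

def grpo_advantages(rewards):
    """Normalize each reward against its sampling group (GRPO-style):
    advantage_i = (r_i - group_mean) / group_std."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 1.0
    sigma = sigma or 1.0  # guard against zero variance in the group
    return [(r - mu) / sigma for r in rewards]

def unlearning_reward(completion, forbidden="dijkstra"):
    """Toy reward (assumed, not from the paper): -1 if the completion
    mentions the forbidden algorithm, +1 otherwise."""
    return -1.0 if forbidden in completion.lower() else 1.0

# One sampled group of completions for the same prompt:
group = [
    "Use Dijkstra's algorithm with a priority queue.",
    "Repeatedly relax the closest unvisited node.",
    "Sort all edges and grow a tree greedily.",
]
rewards = [unlearning_reward(c) for c in group]
advantages = grpo_advantages(rewards)
# The completion naming the forbidden algorithm gets a negative
# advantage, so the policy update discourages it.
```

In a real GRPO update these advantages would weight per-token log-probability gradients under a clipped objective; the sketch only shows the group-normalized reward signal that makes the unlearning on-policy.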