KumoRFM-2: Scaling Foundation Models for Relational Learning
arXiv cs.LG / 4/15/2026
Key Points
- KumoRFM-2 is a new pre-trained foundation model designed for relational data, enabling in-context learning and fine-tuning across diverse predictive tasks.
- Unlike tabular foundation models, it natively processes connected tables together, without manual flattening or target-variable generation, while preserving temporal consistency (see the context-building sketch after this list).
- The model is pre-trained on a mix of synthetic and real-world data across four representation axes (row/column within tables and foreign-key/cross-sample across databases).
- A key architectural/training change is injecting task information earlier in the model, which improves selection of task-relevant columns and robustness to noisy inputs (see the gating sketch below).
- Experiments on 41 benchmarks show gains of up to 8% over both supervised baselines and other foundation models, including strong performance under cold-start and noisy conditions, and the model scales to billion-scale relational datasets.
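To make the no-flattening, no-leakage claim concrete, here is a minimal context-building sketch. The `Row`, `Table`, and `build_context` names are hypothetical illustrations, not the paper's API: the idea is to gather neighboring rows by following foreign keys across connected tables, dropping any row timestamped at or after the prediction cutoff, instead of joining everything into one flat feature vector.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Row:
    values: dict                               # column name -> value
    timestamp: datetime                        # event time of this row
    fks: dict = field(default_factory=dict)    # fk column -> (table name, row id)

@dataclass
class Table:
    name: str
    rows: dict  # row id -> Row

def build_context(tables, table_name, row_id, cutoff, depth=2):
    """Collect a temporally consistent subgraph around one entity.

    Hypothetical helper: rather than flattening joined tables into one
    wide vector, keep each neighboring row as-is, and skip any row whose
    timestamp is at or after the prediction cutoff (no future leakage).
    """
    seed = tables[table_name].rows[row_id]
    context = [(table_name, seed)]
    frontier = [seed]
    for _ in range(depth):
        next_frontier = []
        for row in frontier:
            for fk_table, fk_id in row.fks.values():
                neighbor = tables[fk_table].rows.get(fk_id)
                if neighbor is None or neighbor.timestamp >= cutoff:
                    continue  # missing, or from the "future": exclude it
                context.append((fk_table, neighbor))
                next_frontier.append(neighbor)
        frontier = next_frontier
    return context  # list of (table name, row) pairs, not a flat vector
```

At training time the cutoff would sit at the label's timestamp, which is what "preserving temporal consistency" guards: no context row may encode information from after the moment being predicted.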
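The earlier-task-injection point can be sketched in the same hedged spirit. The gating below is an assumed mechanism, not the paper's architecture: a task embedding scores every column before the main encoder runs, so task-irrelevant or noisy columns are down-weighted from the start rather than filtered only after deep encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def early_task_gating(col_feats, task_emb, w_gate):
    """Down-weight task-irrelevant columns before the main encoder.

    Hypothetical mechanism for illustration:
    col_feats: (num_cols, d)  per-column embeddings
    task_emb:  (d,)           embedding of the task description
    w_gate:    (d, d)         learned gating projection
    """
    scores = col_feats @ (w_gate @ task_emb)   # relevance score per column
    gates = 1.0 / (1.0 + np.exp(-scores))      # sigmoid gate in (0, 1)
    return gates[:, None] * col_feats          # gated column features

d, num_cols = 16, 5
col_feats = rng.normal(size=(num_cols, d))
task_emb = rng.normal(size=d)
w_gate = rng.normal(size=(d, d)) / np.sqrt(d)
print(early_task_gating(col_feats, task_emb, w_gate).shape)  # (5, 16)
```

Applying such a gate before, rather than after, the column mixer is one plausible way a model would gain the reported robustness to noisy inputs: irrelevant columns never get the chance to dominate the representation.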