Machines acquire scientific taste from institutional traces
arXiv cs.AI / 3/18/2026
Key Points
- A new study demonstrates that fine-tuning language models on journal publication decisions enables them to exhibit evaluative judgment about which ideas deserve pursuit, a capability that neither frontier models nor human experts without comparable adaptation display.
- In a held-out benchmark using four quality tiers for research pitches in management, frontier models averaged 31% accuracy while panels of editors reached 42% by majority vote.
- Fine-tuned models trained on years of publication records surpass frontier models and expert panels, with the best single model achieving 59% accuracy and calibrated confidence, including 100% accuracy on its highest-confidence predictions.
- The learned judgment transfers to tasks the models were not trained on, such as pairwise comparisons and one-sentence summaries, and when trained on economics publication records it reaches about 70% accuracy.
- The findings suggest a scalable method to triage the expanding volume of scientific production across disciplines where quality cannot be easily verified, effectively depositing scientific taste into the institutional record.
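The headline numbers above combine two evaluation views: overall accuracy on a four-tier quality classification, and accuracy restricted to a model's most confident predictions (the basis of the "100% accuracy on its highest-confidence predictions" claim). A minimal sketch of both metrics, with hypothetical function names and toy data not taken from the paper:

```python
def tier_accuracy(preds, labels):
    """Fraction of pitches assigned to the correct quality tier (0-3)."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def top_confidence_accuracy(preds, labels, confs, frac=0.1):
    """Accuracy on the top `frac` most confident predictions.

    Rising accuracy as `frac` shrinks indicates the model's
    confidence scores are informative (well calibrated for triage).
    """
    k = max(1, int(len(preds) * frac))
    order = sorted(range(len(preds)), key=lambda i: confs[i], reverse=True)
    top = order[:k]
    return sum(preds[i] == labels[i] for i in top) / k

# Toy illustration (not the paper's data): six pitches, four tiers.
preds  = [1, 2, 3, 0, 1, 2]
labels = [1, 2, 0, 0, 3, 2]
confs  = [0.9, 0.95, 0.4, 0.8, 0.3, 0.85]

overall = tier_accuracy(preds, labels)          # 4/6 correct
top     = top_confidence_accuracy(preds, labels, confs)  # top-1 here
```

In a triage setting, the second metric is the one that matters: a journal could auto-route only the slice of submissions where the model's confidence exceeds a threshold and leave the rest to human editors.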
Related Articles

Hey dev.to community – sharing my journey with Prompt Builder, Insta Posts, and practical SEO
Dev.to

How to Build Passive Income with AI in 2026: A Developer's Practical Guide
Dev.to

The Research That Doesn't Exist
Dev.to

Easing veterans' burden of training junior engineers: generating PLC-control "ladder diagrams" with AI
日経XTECH

Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI
TechCrunch