Code Sharing In Prediction Model Research: A Scoping Review
arXiv cs.AI / 4/10/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- A scoping review of PubMed Central Open Access prediction model papers found that only 12.2% included code-sharing statements, though this share increased over time, reaching 15.8% in 2025.
- Code sharing was more prevalent in studies citing TRIPOD+AI than in studies citing TRIPOD alone, with substantial variation across journals and countries.
- The study used an LLM-assisted pipeline to extract code availability statements and evaluate repositories, revealing major heterogeneity in reproducibility-related features.
- While most repositories included a README (80.5%), fewer declared their dependencies (37.6%), pinned dependency versions (21.6%), or used a modular code structure (42.4%), limiting reusability.
- The results aim to support development of TRIPOD-Code, a reporting guideline extension that goes beyond "code availability" to set clearer expectations for documentation, dependencies, licensing, and executable structure.
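The repository features tallied above (README presence, declared dependencies, pinned versions) can be approximated with simple filesystem heuristics. This is an illustrative sketch only, not the paper's LLM-assisted pipeline: the `audit_repo` function, the manifest filename list, and the `==` pinning check are all assumptions for demonstration.

```python
import re
from pathlib import Path

# Hypothetical manifest filenames treated as "declared dependencies"
# (an assumption; the paper's actual criteria are not reproduced here).
DEP_MANIFESTS = {"requirements.txt", "environment.yml", "pyproject.toml",
                 "setup.py", "DESCRIPTION", "renv.lock", "Pipfile"}

def audit_repo(repo: Path) -> dict:
    """Return coarse reproducibility signals for a checked-out repository."""
    files = {p.name for p in repo.rglob("*") if p.is_file()}
    has_readme = any(name.lower().startswith("readme") for name in files)
    manifests = files & DEP_MANIFESTS
    pinned = False
    req = repo / "requirements.txt"
    if req.is_file():
        # Crude proxy: a dependency counts as "pinned" if any line uses ==.
        pinned = any(re.search(r"==\s*[\w.]+", line)
                     for line in req.read_text().splitlines())
    return {"readme": has_readme,
            "declares_dependencies": bool(manifests),
            "pins_versions": pinned}
```

Run against a cloned repository directory, such heuristics could reproduce the kind of per-repository feature table the review aggregates, though a real audit would also need to handle non-Python ecosystems and lockfile formats.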