Beyond Detection: Governing GenAI in Academic Peer Review as a Sociotechnical Challenge
arXiv cs.AI / 3/24/2026
Key Points
- The paper studies how generative AI is being discussed and experienced in academic peer review, combining social media discourse analysis (448 posts) with interviews of area and program chairs from major AI/HCI conferences.
- It finds broad consensus that GenAI can be acceptable for limited supportive tasks (e.g., improving clarity and structuring feedback) but that core evaluative judgments—such as assessing novelty, contributions, and acceptance—should remain human responsibilities.
- Participants raise sociotechnical risks including epistemic harm, over-standardization, unclear accountability, and adversarial threats like prompt injection.
- The work argues that institutional strain and ambiguous policies shift enforcement burdens onto individual scholars, disproportionately impacting junior authors and reviewers.
- It concludes that governing GenAI in peer review should rely on explicit, role-specific controls and enforceable boundaries for “support vs. evaluation,” rather than blanket bans or detection-only approaches.