On the Carbon Footprint of Economic Research in the Age of Generative AI
arXiv cs.AI / March 31, 2026
Key Points
- The paper argues that Green AI research should measure the carbon footprint of end-to-end computational workflows using GenAI tools, not just the emissions of training or model inference.
- It models prompts as decision policies that determine what gets executed and when iteration stops, framing researcher–system discretion as a controllable lever for emissions.
- The authors map recent Green AI work into seven themes, noting that training footprint remains the largest area while inference efficiency and system-level optimization are accelerating.
- Benchmarking a modern economic survey workflow (LDA-based topic mapping) with GenAI-assisted coding shows that generic "green" prompt wording has no reliable effect, whereas operational constraints and decision-rule prompts produce large, stable CO₂e reductions.
- The results suggest that human-in-the-loop governance and rule-based prompt constraints can align GenAI productivity with environmental efficiency while leaving decision-equivalent outputs unchanged.
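The framing of prompts as decision policies can be sketched in code. The following is a minimal illustrative model, not the paper's implementation: a prompt is reduced to two levers, which steps execute and when iteration stops, and total CO₂e is the energy of the executed steps times an assumed grid carbon intensity. All names, step energies, and the intensity figure are hypothetical.

```python
from dataclasses import dataclass

# Assumed average grid carbon intensity (kg CO2e per kWh); hypothetical value.
CARBON_INTENSITY_KG_PER_KWH = 0.4

@dataclass
class Step:
    name: str
    energy_kwh: float  # hypothetical per-run energy cost

def workflow_co2e(steps, max_iters, stop_rule):
    """Sum CO2e over executed steps until the prompt's decision rule stops iteration."""
    total_kwh = 0.0
    for i in range(max_iters):
        for step in steps:
            total_kwh += step.energy_kwh
        if stop_rule(i):  # decision rule encoded by the prompt
            break
    return total_kwh * CARBON_INTENSITY_KG_PER_KWH

steps = [Step("generate_code", 0.05), Step("run_lda_mapping", 0.20)]

# Unconstrained policy: iterate to the cap regardless of progress.
loose = workflow_co2e(steps, max_iters=10, stop_rule=lambda i: False)

# Rule-based policy: a prompt constraint that stops after the second pass.
strict = workflow_co2e(steps, max_iters=10, stop_rule=lambda i: i >= 1)
```

Under these toy numbers the rule-based policy cuts emissions fivefold (0.2 kg vs. 1.0 kg CO₂e) purely by changing when iteration stops, which mirrors the paper's claim that decision-rule constraints, not "green" wording, are the operative lever.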