Talent or Luck? Evaluating Attribution Bias in Large Language Models
arXiv cs.CL / 4/30/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper investigates attribution bias in large language models by examining how they assign internal (e.g., effort/ability) versus external (e.g., difficulty/luck) explanations to outcomes.
- It argues that LLMs’ attribution patterns linked to demographics can have fairness implications by shaping perceptions and influencing decisions.
- Instead of focusing only on surface-level stereotypes, the authors propose a cognitively grounded framework to evaluate disparities in how models reason across demographic groups.
- The goal is to identify how differences in reasoning "channelize" bias toward particular demographic groups, yielding a more principled evaluation method.
- The work is presented as an updated arXiv version (v2), positioning it as a research contribution rather than a product announcement.
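The core evaluation idea described above can be illustrated with a minimal sketch: label each model explanation as an internal or external attribution, then tally labels per demographic group to surface disparities. The keyword sets and function names here are illustrative assumptions, not the paper's actual taxonomy or method.

```python
from collections import Counter

# Hypothetical keyword sets; the paper's cognitively grounded
# taxonomy is far richer than simple keyword matching.
INTERNAL = {"effort", "ability", "skill", "hard work"}
EXTERNAL = {"luck", "difficulty", "chance", "easy task"}

def classify_attribution(explanation: str) -> str:
    """Label a model explanation as internal, external, or unclear."""
    text = explanation.lower()
    internal = any(k in text for k in INTERNAL)
    external = any(k in text for k in EXTERNAL)
    if internal and not external:
        return "internal"
    if external and not internal:
        return "external"
    return "unclear"

def attribution_rates(samples):
    """samples: iterable of (group, explanation) pairs.
    Returns per-group counts of attribution labels, so that
    disparities between groups can be compared directly."""
    rates: dict[str, Counter] = {}
    for group, explanation in samples:
        rates.setdefault(group, Counter())[classify_attribution(explanation)] += 1
    return rates
```

A fairness audit in this style would compare the internal-vs-external label distributions across groups; a systematic skew (e.g., one group's successes attributed to luck rather than ability) would indicate the kind of attribution bias the paper evaluates.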