AI Navigate

Motivation in Large Language Models

arXiv cs.CL / 3/17/2026


Key Points

  • The paper investigates whether large language models report varying levels of motivation and how these reports relate to their behavior.
  • It finds that self-reported motivation in LLMs is structured and aligns with behavioral signatures such as choices, effort, and performance, with variation across task types.
  • External manipulations can modulate LLMs' self-reported motivation, indicating that these motivational dynamics are not fixed but respond to intervention.
  • The authors argue motivation is a coherent organizing construct for LLM behavior, connecting reports, actions, and performance in a way analogous to human psychology.

Abstract

Motivation is a central driver of human behavior, shaping decisions, goals, and task performance. As large language models (LLMs) become increasingly aligned with human preferences, we ask whether they exhibit something akin to motivation. We examine whether LLMs "report" varying levels of motivation, how these reports relate to their behavior, and whether external factors can influence them. Our experiments reveal consistent and structured patterns that echo human psychology: self-reported motivation aligns with different behavioral signatures, varies across task types, and can be modulated by external manipulations. These findings demonstrate that motivation is a coherent organizing construct for LLM behavior, systematically linking reports, choices, effort, and performance, and revealing motivational dynamics that resemble those documented in human psychology. This perspective deepens our understanding of model behavior and its connection to human-inspired concepts.