From Prediction to Justification: Aligning Sentiment Reasoning with Human Rationale via Reinforcement Learning

arXiv cs.AI / 4/16/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that aspect-based sentiment analysis (ABSA) models are often accurate but lack the explicit, human-like causal reasoning behind sentiment labels.
  • It proposes ABSA-R1, a large language model framework that follows a “reason-before-predict” paradigm by generating natural-language justifications before outputting sentiment.
  • A Cognition-Aligned Reward Model is introduced to enforce consistency between the model’s reasoning path and the final emotional label during reinforcement learning.
  • The approach adds a performance-driven rejection sampling strategy, inspired by metacognitive monitoring, to focus generation on hard cases where internal reasoning is uncertain or inconsistent.
  • Experiments on four benchmarks show that adding explicit reasoning improves both interpretability and downstream sentiment classification/triplet extraction versus non-reasoning baselines.
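The consistency idea behind the Cognition-Aligned Reward Model can be sketched as a toy reward function. This is only an illustration under stated assumptions, not the paper's implementation: it assumes the model emits `<think>…</think><answer>…</answer>` output, and it substitutes a crude lexical-cue check for the learned reward model. All names (`parse_output`, `SENTIMENT_CUES`, `cognition_aligned_reward`) are hypothetical.

```python
import re

# Illustrative polarity cues standing in for a learned consistency judge.
SENTIMENT_CUES = {
    "positive": {"great", "love", "excellent", "delicious"},
    "negative": {"bad", "hate", "terrible", "slow"},
    "neutral": set(),
}

def parse_output(text):
    """Split a generation into (reasoning, predicted_label)."""
    think = re.search(r"<think>(.*?)</think>", text, re.S)
    ans = re.search(r"<answer>(.*?)</answer>", text, re.S)
    reasoning = think.group(1).strip() if think else ""
    label = ans.group(1).strip().lower() if ans else ""
    return reasoning, label

def cognition_aligned_reward(text, gold_label):
    """Reward = label correctness + a bonus when the reasoning
    contains cues that support the predicted polarity."""
    reasoning, label = parse_output(text)
    r_correct = 1.0 if label == gold_label else 0.0
    words = set(re.findall(r"[a-z']+", reasoning.lower()))
    r_consistent = 0.5 if words & SENTIMENT_CUES.get(label, set()) else 0.0
    return r_correct + r_consistent
```

In an RL loop (e.g. PPO or GRPO-style training), a reward shaped like this penalizes generations whose justification does not support the emitted label, which is the consistency constraint the key points describe.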

Abstract

While Aspect-based Sentiment Analysis (ABSA) systems have achieved high accuracy in identifying sentiment polarities, they often operate as "black boxes," lacking the explicit reasoning capabilities characteristic of human affective cognition. Humans do not merely categorize sentiment; they construct causal explanations for their judgments. To bridge this gap, we propose ABSA-R1, a large language model framework designed to mimic this "reason-before-predict" cognitive process. By leveraging reinforcement learning (RL), ABSA-R1 learns to articulate the why behind the what, generating natural language justifications that ground its sentiment predictions. We introduce a Cognition-Aligned Reward Model (formerly sentiment-aware reward model) that enforces consistency between the generated reasoning path and the final emotional label. Furthermore, inspired by metacognitive monitoring, we implement a performance-driven rejection sampling strategy that selectively targets hard cases where the model's internal reasoning is uncertain or inconsistent. Experimental results on four benchmarks demonstrate that equipping models with this explicit reasoning capability not only enhances interpretability but also yields superior performance in sentiment classification and triplet extraction compared to non-reasoning baselines.
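The rejection-sampling strategy in the abstract can be illustrated with a minimal sketch: sample several completions per example, score them, and keep only the examples where the rewards disagree, treating high reward spread as a proxy for uncertain or inconsistent internal reasoning. This is an assumption-laden reading, not the paper's method; `select_hard_cases`, the `threshold` value, and the caller-supplied `sample_fn`/`reward_fn` are all hypothetical.

```python
from statistics import pstdev

def select_hard_cases(examples, sample_fn, reward_fn, k=8, threshold=0.3):
    """Keep examples whose k sampled generations receive inconsistent
    rewards -- a crude metacognitive 'uncertainty' signal.

    sample_fn(example) -> one generated output (assumed interface)
    reward_fn(output, gold_label) -> scalar reward (assumed interface)
    """
    hard = []
    for ex in examples:
        rewards = [reward_fn(sample_fn(ex), ex["label"]) for _ in range(k)]
        # High spread across samples = the model's answers disagree,
        # so this example is routed to further RL training.
        if pstdev(rewards) > threshold:
            hard.append(ex)
    return hard
```

Focusing generation budget on such disagreement cases, rather than on examples the model already answers consistently, is the "performance-driven" aspect the abstract attributes to metacognitive monitoring.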