Beyond Cosine Similarity: Zero-Initialized Residual Complex Projection for Aspect-Based Sentiment Analysis

arXiv cs.CL / 3/31/2026


Key Points

  • The paper addresses Aspect-Based Sentiment Analysis (ABSA) difficulties caused by representation entanglement, where aspect meaning and sentiment polarity become conflated in embedding spaces.
  • It introduces Zero-Initialized Residual Complex Projection (ZRCP), projecting text features into a complex semantic space so that phase helps disentangle sentiment polarities while amplitude captures semantic intensity and lexical richness.
  • To reduce contrastive learning’s false-negative collisions (especially for high-frequency aspects), the method adds an Anti-collision Masked Angle Loss that preserves cohesion within the same polarity and enlarges the discriminative margin across polarities by over 50%.
  • Experiments report a new state-of-the-art Macro-F1 of 0.8851, supported by geometric analyses showing that constraining complex amplitude too strongly harms subjective representation learning.
  • Overall, the framework combines complex-valued representation learning with loss engineering to achieve more robust, fine-grained sentiment-aspect disentanglement.
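The summary above gives only the high-level idea of ZRCP; the sketch below is an illustrative reconstruction, not the paper's implementation. The function name `zrcp`, the weight matrices `W_re`/`W_im`, and the scalar gate `alpha` are all assumed: the key properties mirrored from the description are the zero-initialized residual (training starts from the unchanged real features) and the split of the complex output into phase (polarity) and amplitude (intensity).

```python
import numpy as np

def zrcp(x, W_re, W_im, alpha):
    """Sketch of a Zero-Initialized Residual Complex Projection.

    x      : real-valued encoder features, shape (d,)
    W_re   : weights producing the real part of the projection, (d, d)
    W_im   : weights producing the imaginary part, (d, d)
    alpha  : residual gate, initialized to 0.0 so the module starts
             as the identity on the real features
    """
    z = x.astype(complex)                 # embed features in the complex plane
    delta = W_re @ x + 1j * (W_im @ x)    # learned complex projection
    z = z + alpha * delta                 # zero-init residual branch
    phase = np.angle(z)                   # phase: carries sentiment polarity
    amplitude = np.abs(z)                 # amplitude: left unconstrained
    return z, phase, amplitude
```

With `alpha = 0.0` the output equals the input features, so the complex branch is learned gradually rather than disrupting pretrained representations from the start.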

Abstract

Aspect-Based Sentiment Analysis (ABSA) is fundamentally challenged by representation entanglement, where aspect semantics and sentiment polarities are often conflated in real-valued embedding spaces. Furthermore, standard contrastive learning suffers from false-negative collisions, severely degrading performance on high-frequency aspects. In this paper, we propose a novel framework featuring a Zero-Initialized Residual Complex Projection (ZRCP) and an Anti-collision Masked Angle Loss, inspired by quantum projection and entanglement ideas. Our approach projects textual features into a complex semantic space, systematically utilizing the phase to disentangle sentiment polarities while allowing the amplitude to encode the semantic intensity and lexical richness of subjective descriptions. To tackle the collision bottleneck, we introduce an anti-collision mask that preserves intra-polarity aspect cohesion while expanding the inter-polarity discriminative margin by over 50%. Experimental results demonstrate that our framework achieves a state-of-the-art Macro-F1 score of 0.8851. Deep geometric analyses further reveal that explicitly penalizing the complex amplitude severely over-regularizes subjective representations, showing that our unconstrained-amplitude, phase-driven objective is crucial for robust, fine-grained sentiment disentanglement.
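The abstract describes the anti-collision mask only in outline. The sketch below is a hypothetical rendering of how such a masked angle loss could behave: same-polarity pairs are treated as positives (pulled together in phase, so they never act as false negatives), while cross-polarity pairs are pushed apart until they exceed an angular margin. The function name, the pairwise formulation, and the `margin` value are all assumptions, not the paper's actual objective.

```python
import numpy as np

def masked_angle_loss(phases, polarities, margin=0.5):
    """Illustrative anti-collision masked angle loss (details assumed).

    phases     : (n,) phase angles of complex aspect representations
    polarities : (n,) integer sentiment labels (e.g. 0=neg, 1=pos)
    margin     : minimum angular separation enforced across polarities
    """
    n = len(phases)
    loss, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # wrapped angular distance in [0, pi]
            dphi = np.abs(np.angle(np.exp(1j * (phases[i] - phases[j]))))
            if polarities[i] == polarities[j]:
                # cohesion term: the mask treats same-polarity pairs as
                # positives, avoiding false-negative collisions
                loss += dphi
            else:
                # discrimination term: cross-polarity pairs are pushed
                # beyond the angular margin (hinge)
                loss += max(0.0, margin - dphi)
            count += 1
    return loss / count
```

A mask of this form goes to zero once same-polarity phases coincide and cross-polarity phases are at least `margin` apart, which matches the stated goal of cohesion within a polarity and a wider margin between polarities.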