REZE: Representation Regularization for Domain-adaptive Text Embedding Pre-finetuning

arXiv cs.CL / 4/21/2026


Key Points

  • The paper argues that contrastive pre-finetuning (PFT) on heterogeneous, scattered domain tasks can inject task-induced bias that causes uncontrolled representation shifts and degrades embedding performance.
  • It introduces REZE, a representation regularization method that constrains representation shift during embedding pre-finetuning by analyzing anchor-positive pair relations in an eigenspace.
  • REZE measures task-wise dispersion per eigencomponent to find task-variant directions, then applies adaptive soft-shrinkage to suppress task-specific noise while preserving task-invariant semantic structure.
  • Experiments across multiple embedding backbones and specialized benchmarks show REZE generally outperforms standard PFT and isotropy-based post-hoc regularization, and maintains stability where existing PFT variants may collapse.
  • Additional embedding-space analyses indicate that REZE produces controlled shifts that align with the original embedding manifold, supporting the idea that representation-shift control is crucial for robust domain-adaptive embedding pre-finetuning.

Abstract

Recent text embedding models are often adapted to specialized domains via contrastive pre-finetuning (PFT) on a naive collection of scattered, heterogeneous tasks. However, this approach often introduces task-induced bias alongside domain knowledge, leading to uncontrolled representation shifts that distort the pretrained embedding geometry and cause substantial performance degradation. To address this issue, we propose REZE, a representation regularization framework that explicitly controls representation shift during embedding pre-finetuning. REZE operates on the relations of anchor-positive pairs and decomposes them in an eigenspace. It then measures task-wise dispersion along each eigencomponent to identify task-variant directions and applies adaptive soft-shrinkage to suppress task-induced noise while preserving task-invariant semantic structure, without inference-time overhead. Experiments across multiple embedding backbones and specialized benchmarks show that REZE outperforms standard pre-finetuning and isotropy-oriented post-hoc regularization in most settings, remaining stable where existing PFT variants collapse. Embedding space analyses further confirm that REZE induces controlled shifts aligned with the original embedding manifold, underscoring representation shift control as a key principle for robust embedding pre-finetuning under heterogeneous supervision.
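The pipeline the abstract describes — eigendecompose anchor-positive relation vectors, measure task-wise dispersion per eigencomponent, then soft-shrink the task-variant components — can be sketched roughly as follows. This is a hypothetical NumPy illustration of the general idea, not the paper's implementation: the function names (`reze_sketch`, `soft_shrink`), the use of anchor-positive differences as relation vectors, and the dispersion-proportional threshold are all assumptions made here for concreteness.

```python
import numpy as np

def soft_shrink(x, thresh):
    # Soft-shrinkage operator: sign(x) * max(|x| - thresh, 0).
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def reze_sketch(relations_by_task, alpha=1.0):
    """Hypothetical sketch of REZE-style regularization.

    relations_by_task: list of (n_i, d) arrays, each holding
    anchor-positive relation vectors (here: embedding differences)
    for one task. Returns regularized relation vectors, pooled.
    """
    pooled = np.vstack(relations_by_task)                  # (N, d)

    # 1) Eigendecompose the covariance of the pooled relations.
    cov = np.cov(pooled, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)                 # eigvecs: (d, d)

    # 2) Task-wise dispersion per eigencomponent: variance of the
    #    per-task mean projections along each eigendirection. High
    #    dispersion marks a task-variant direction.
    task_means = np.stack([(r @ eigvecs).mean(axis=0)
                           for r in relations_by_task])    # (T, d)
    dispersion = task_means.var(axis=0)                    # (d,)

    # 3) Adaptive soft-shrinkage: components with higher task-wise
    #    dispersion get a larger threshold (assumed schedule).
    thresh = alpha * dispersion / (dispersion.max() + 1e-12)
    proj_reg = soft_shrink(pooled @ eigvecs, thresh)       # broadcast per component

    # 4) Rotate back to the original embedding space.
    return proj_reg @ eigvecs.T
```

Because the eigenbasis is orthogonal, setting `alpha=0` recovers the input exactly, and any positive `alpha` only attenuates the task-variant directions, leaving task-invariant structure (low-dispersion components) largely intact — which matches the paper's stated goal of controlled, manifold-aligned shifts.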