AI Navigate

Secure Linear Alignment of Large Language Models

arXiv cs.AI / 3/20/2026


Key Points

  • The paper investigates how independently trained language models exhibit representational convergence and proposes a privacy-preserving cross-silo inference framework leveraging this phenomenon.
  • It learns an affine transformation on a shared public dataset to align final hidden states across models and uses homomorphic encryption to protect client queries during inference, achieving sub-second latency while preserving security guarantees.
  • The approach is empirically evaluated on embedding classification and out-of-distribution detection, showing minimal performance degradation across model pairs and, in some cases, enabling text generation across independently trained models.
  • This method enables secure cross-model collaboration under privacy, data-sharing, or competitive constraints, opening new application domains where direct data or model sharing is restricted.
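The affine alignment step in the bullets above can be sketched with ordinary least squares: fit `W` and `b` so that one model's final hidden states map onto another's over a shared public dataset. The model dimensions and synthetic embeddings below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy stand-ins for final hidden states of two independently trained models
# evaluated on the same shared public dataset (shapes are illustrative).
rng = np.random.default_rng(0)
d_a, d_b, n = 16, 12, 200

H_a = rng.standard_normal((n, d_a))   # model A embeddings
W_true = rng.standard_normal((d_a, d_b))
b_true = rng.standard_normal(d_b)
H_b = H_a @ W_true + b_true           # model B embeddings (exactly linear here)

# Learn the affine map H_b ≈ H_a @ W + b: append a bias column to H_a
# and solve the augmented least-squares problem in closed form.
X = np.hstack([H_a, np.ones((n, 1))])
W_aug, *_ = np.linalg.lstsq(X, H_b, rcond=None)
W, b = W_aug[:-1], W_aug[-1]

# Embeddings from model A, pushed through the map, land in model B's space.
H_b_pred = H_a @ W + b
err = np.linalg.norm(H_b_pred - H_b) / np.linalg.norm(H_b)
```

Because the learned map is a single affine layer, it is exactly the kind of operation that cheap homomorphic-encryption schemes can evaluate, which is what makes the sub-second encrypted inference claim plausible.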

Abstract

Language models increasingly appear to learn similar representations, despite differences in training objectives, architectures, and data modalities. This emerging compatibility between independently trained models introduces new opportunities for cross-model alignment to downstream objectives. Moreover, it unlocks new potential application domains, such as settings where security, privacy, or competitive constraints prohibit direct data or model sharing. In this work, we propose a privacy-preserving framework that exploits representational convergence to enable cross-silo inference between independent language models. The framework learns an affine transformation over a shared public dataset and applies homomorphic encryption to protect client queries during inference. By encrypting only the linear alignment and classification operations, the method achieves sub-second inference latency while maintaining strong security guarantees. We support this framework with an empirical investigation into representational convergence, in which we learn linear transformations between the final hidden states of independent models. We evaluate these cross-model mappings on embedding classification and out-of-distribution detection, observing minimal performance degradation across model pairs. Additionally, we show for the first time that linear alignment sometimes enables text generation across independently trained models.
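The abstract notes that only the linear alignment and classification operations run under encryption. A textbook additively homomorphic scheme (Paillier, with toy parameters that are nowhere near secure, and not necessarily the scheme the paper uses) shows why a linear layer can be evaluated directly on ciphertexts: multiplying ciphertexts adds plaintexts, and raising a ciphertext to a plaintext weight scales its plaintext.

```python
import math, random

# Minimal textbook Paillier (TOY parameters -- real systems use >1024-bit primes).
p, q = 1789, 1907
n, n2 = p * q, (p * q) ** 2
g = n + 1                              # standard simplified generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    m = ((x - 1) // n) * mu % n
    return m - n if m > n // 2 else m  # map back to signed integers

# Client encrypts an integer-quantized embedding; the server never sees it.
x = [3, -2, 5, 1]                      # hypothetical client embedding
w = [2, 4, -1, 3]                      # server-side plaintext linear weights
enc_x = [encrypt(v) for v in x]

# Homomorphic dot product: prod_i E(x_i)^{w_i} = E(sum_i w_i * x_i).
acc = 1
for c, wi in zip(enc_x, w):
    acc = acc * pow(c, wi % n, n2) % n2

result = decrypt(acc)                  # 3*2 + (-2)*4 + 5*(-1) + 1*3 = -4
```

The server computes the aligned-and-classified score without decrypting the query; only the client, holding the secret key, recovers the result. Nonlinear steps would be far more expensive under encryption, which is why restricting the encrypted portion to linear operations keeps latency sub-second.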