AuthorMix: Modular Authorship Style Transfer via Layer-wise Adapter Mixing

arXiv cs.CL / 3/25/2026


Key Points

  • The paper introduces AuthorMix, a modular approach to authorship style transfer that rewrites text in the style of a target author while preserving the original text’s meaning.
  • Instead of training one large model for all styles, it trains lightweight, style-specific LoRA adapters on high-resource authors and then adapts to new targets using learned, layer-wise adapter mixing.
  • AuthorMix is designed to require only a small number of example texts for each new target author, reducing the high cost and limited flexibility of prior single-model approaches.
  • The authors report that AuthorMix outperforms existing state-of-the-art style-transfer baselines, as well as GPT-5.1, with the largest gains in low-resource target settings and improved meaning preservation.

Abstract

The task of authorship style transfer involves rewriting text in the style of a target author while preserving the meaning of the original text. Existing style transfer methods train a single model on large corpora to model all target styles at once: this high-cost approach offers limited flexibility for target-specific adaptation, and often sacrifices meaning preservation for style transfer. In this paper, we propose AuthorMix: a lightweight, modular, and interpretable style transfer framework. We train individual, style-specific LoRA adapters on a small set of high-resource authors, allowing the rapid training of specialized adaptation models for each new target via learned, layer-wise adapter mixing, using only a handful of target style training examples. AuthorMix outperforms existing, SoTA style-transfer baselines -- as well as GPT-5.1 -- for low-resource targets, achieving the highest overall score and substantially improving meaning preservation.
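The core idea of layer-wise adapter mixing can be illustrated with a small sketch. The following is an illustrative toy implementation, not the paper's actual code: all names, shapes, and the softmax parameterization of the mixing weights are assumptions. It shows the general shape of the technique, where the base weights and the source-author LoRA factors stay frozen, and adapting to a new target author trains only one small vector of mixing logits per layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, K, L = 16, 4, 3, 2   # hidden size, LoRA rank, source adapters, layers

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Frozen base weights and frozen source-author LoRA factors (A: r x d, B: d x r),
# one (A, B) pair per high-resource author per layer.
W = [rng.standard_normal((d, d)) for _ in range(L)]
A = [[rng.standard_normal((r, d)) * 0.1 for _ in range(K)] for _ in range(L)]
B = [[rng.standard_normal((d, r)) * 0.1 for _ in range(K)] for _ in range(L)]

# The only trainable parameters for a new target author: one K-dim logit
# vector per layer, turned into mixing weights by a softmax.
mix_logits = [np.zeros(K) for _ in range(L)]

def forward(x):
    h = x
    for l in range(L):
        alpha = softmax(mix_logits[l])              # layer-wise mixing weights
        delta = sum(alpha[k] * (B[l][k] @ A[l][k])  # blended low-rank update
                    for k in range(K))
        h = np.tanh((W[l] + delta) @ h)             # adapted layer
    return h

x = rng.standard_normal(d)
y = forward(x)
print(y.shape)  # (16,)
```

Because only `L * K` logits are trained, a handful of target-style examples can plausibly suffice, which matches the low-resource setting the paper targets.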