Intrinsic Fingerprint of LLMs: Continue Training is NOT All You Need to Steal A Model!

arXiv cs.CL / 4/27/2026


Key Points

  • The paper argues that commonly proposed watermarking approaches may not withstand continued training, leaving LLM attribution and copyright protection vulnerable.
  • It proposes a robust LLM fingerprinting method based on intrinsic model characteristics, specifically the standard deviation distributions of attention parameter matrices across layers.
  • The authors report that these distribution “signatures” remain stable even after extensive continued training and can be used to identify model lineage and detect potential infringement.
  • Experiments across multiple model families validate the method’s effectiveness for model authentication.
  • The study presents evidence that Huawei’s recently released Pangu Pro MoE model may have been upcycled from Qwen-2.5 14B rather than trained from scratch, suggesting possible plagiarism and IP/copyright violations.
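The fingerprinting idea described above can be sketched in a few lines: collect one attention parameter matrix per layer, take each matrix's standard deviation to form a per-layer fingerprint vector, and compare vectors by correlation. This is a minimal illustration, not the authors' implementation; the synthetic weight matrices, layer count, and similarity threshold below are all assumptions for demonstration.

```python
import numpy as np

def layer_std_fingerprint(attn_matrices):
    """Fingerprint = vector of per-layer standard deviations of attention weights."""
    return np.array([m.std() for m in attn_matrices])

def fingerprint_similarity(fp_a, fp_b):
    """Pearson correlation between two fingerprint vectors."""
    return float(np.corrcoef(fp_a, fp_b)[0, 1])

# Hypothetical stand-ins for one attention projection matrix per layer
# of a 24-layer model (real use would load them from a checkpoint).
rng = np.random.default_rng(0)
base = [rng.normal(0.0, 0.01 + 0.002 * i, size=(256, 256)) for i in range(24)]

# "Continued training" modeled as a small additive perturbation of the weights:
# the per-layer std profile barely moves.
tuned = [m + rng.normal(0.0, 0.001, size=m.shape) for m in base]

# An independently trained model with its own, different std profile.
other = [rng.normal(0.0, 0.03 - 0.001 * i, size=(256, 256)) for i in range(24)]

fp_base = layer_std_fingerprint(base)
fp_tuned = layer_std_fingerprint(tuned)
fp_other = layer_std_fingerprint(other)

print(fingerprint_similarity(fp_base, fp_tuned))  # near 1.0: lineage preserved
print(fingerprint_similarity(fp_base, fp_other))  # low: unrelated model
```

The key property the paper reports is exactly what the toy perturbation shows: continued training shifts individual weights, but the layer-wise std distribution is a coarse statistic that stays close to the parent model's signature.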

Abstract

Large language models (LLMs) face significant copyright and intellectual property challenges as the cost of training increases and model reuse becomes prevalent. While watermarking techniques have been proposed to protect model ownership, they may not be robust to continued training and development, posing serious threats to model attribution and copyright protection. This work introduces a simple yet effective approach to robust LLM fingerprinting based on intrinsic model characteristics. We discover that the standard deviation distributions of attention parameter matrices across different layers exhibit distinctive patterns that remain stable even after extensive continued training. These parameter distribution signatures serve as robust fingerprints that can reliably identify model lineage and detect potential copyright infringement. Our experimental validation across multiple model families demonstrates the effectiveness of our method for model authentication. Notably, our investigation uncovers evidence that the recently released Pangu Pro MoE model from Huawei was derived from the Qwen-2.5 14B model through upcycling techniques rather than trained from scratch, highlighting potential cases of model plagiarism, copyright violation, and information fabrication. These findings underscore the critical importance of developing robust fingerprinting methods for protecting intellectual property in large-scale model development and emphasize that deliberate continued training alone is insufficient to completely obscure model origins.