Evaluation without Generation: Non-Generative Assessment of Harmful Model Specialization with Applications to CSAM

arXiv cs.LG / April 29, 2026


Key Points

  • The paper highlights a governance challenge: auditing open-weight generative models for harmful specialization is difficult to scale with standard prompt-based generative evaluation.
  • It proposes “Evaluation without Generation,” arguing that when output generation is legally or ethically constrained (e.g., CSAM), capabilities should be inferred from the model’s state such as parameters or internal representations rather than outputs.
  • The authors introduce “Gaussian probing,” which measures how LoRA adaptors perturb a model’s internal representations using responses to Gaussian latent ensembles.
  • They report that Gaussian probing can reliably separate benign from harmful specialization without sampling any outputs, and that it works in high-risk domains, including the detection of CSAM-specialized models.
  • The method is also shown to be robust against adversarial manipulation like weight rescaling, suggesting practical resilience for platform-level auditing.

Abstract

Auditing the fine-tunes of open-weight generative models for harmful specialization has become a new governance challenge for model hosting platforms. The standard toolkit, generative evaluation via curated prompts or red-teaming, does not scale to platform-level auditing and breaks down entirely for domains like CSAM where generation is legally constrained. This motivates the Evaluation without Generation problem: assessing model capabilities without producing outputs. We argue that in such settings, capability must be inferred from the model's state, either its parameters or internal representations, rather than its outputs. We introduce Gaussian probing, a method that characterizes how LoRA adaptors perturb a model's internal representations by measuring responses to Gaussian latent ensembles. Unlike raw-weight baselines, Gaussian probing reliably distinguishes benign from harmful specialization without sampling outputs. We demonstrate effectiveness in high-risk domains, including detecting models specialized for child sexual abuse material (CSAM), where output-based evaluation is legally and ethically constrained. Our results show that Gaussian probing provides a scalable non-generative alternative for evaluating high-risk generative systems and remains robust to weight rescaling, a representative adversarial manipulation.
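The paper does not spell out implementation details here, but the core idea of Gaussian probing, measuring how a LoRA adaptor perturbs a layer's response to an ensemble of Gaussian inputs, can be illustrated with a toy sketch. The example below is an assumption-laden simplification: the "layer" is a single weight matrix, the LoRA adaptor is a low-rank update `B @ A`, and the probe score (mean relative perturbation over Gaussian probes) is a hypothetical stand-in for whatever statistic the authors actually compute.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_probes = 64, 4, 256  # hidden size, LoRA rank, probe ensemble size

# Hypothetical base-layer weight and a LoRA adaptor (low-rank update B @ A).
W = rng.normal(size=(d, d)) / np.sqrt(d)
B = rng.normal(size=(d, r)) / np.sqrt(d)
A = rng.normal(size=(r, d)) / np.sqrt(r)

def gaussian_probe(W_base, delta, n_probes, rng):
    """Toy probe of how an adaptor perturbs a layer's representations.

    Feeds an ensemble of standard-Gaussian vectors z through the base
    weight and the adaptor update, and returns the mean relative
    perturbation ||delta @ z|| / ||W_base @ z||. No model outputs are
    ever sampled; only internal responses are compared.
    """
    Z = rng.normal(size=(W_base.shape[1], n_probes))
    base_resp = W_base @ Z          # base-model layer response
    pert_resp = delta @ Z           # response shift induced by the adaptor
    return float(np.mean(
        np.linalg.norm(pert_resp, axis=0) / np.linalg.norm(base_resp, axis=0)
    ))

score = gaussian_probe(W, B @ A, n_probes, rng)
```

In an audit setting, such per-layer scores would be aggregated across a fine-tune's adaptors and thresholded or classified; the key property is that the statistic is computed from parameters and Gaussian latents alone, never from generated content.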