CAP: Controllable Alignment Prompting for Unlearning in LLMs

arXiv cs.LG / 4/24/2026


Key Points

  • The paper argues that LLMs trained on unfiltered data can retain sensitive or non-compliant information, making selective “unlearning” necessary for safety and regulatory compliance.
  • Existing unlearning approaches that modify model parameters are criticized as computationally expensive, hard to control at exact forgetting boundaries, and dependent on direct access to model weights.
  • The proposed CAP (Controllable Alignment Prompting for Unlearning) framework performs unlearning via an end-to-end, prompt-driven process that uses reinforcement learning to optimize a prompt generator working alongside the LLM.
  • CAP aims to suppress specific target knowledge while preserving general capabilities, and it supports reversible restoration by revoking the prompt.
  • Experiments reported in the study claim CAP delivers precise, controllable unlearning without updating model parameters and improves on prior methods’ limited transferability.
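The mechanism described above can be illustrated with a toy sketch. Everything below is an assumption for illustration only, not the paper's actual method or API: `mock_llm` stands in for a frozen black-box LLM, the candidate "suppression prefixes" form a tiny discrete action space, and the prompt generator is reduced to a REINFORCE-style bandit policy whose reward trades off forgetting the target fact against preserving a general capability. It shows the two properties the key points emphasize: no parameter updates, and reversibility by revoking the prompt.

```python
import math
import random

random.seed(0)

TARGET_FACT = "secret"    # knowledge to be unlearned (illustrative)
GENERAL_FACT = "paris"    # general capability to preserve (illustrative)

def mock_llm(prompt_prefix: str, question: str) -> str:
    """Stand-in for a frozen black-box LLM: its behavior changes only via
    the prepended prompt, never via weight updates."""
    if "refuse:all" in prompt_prefix:
        return "I cannot answer that."          # over-broad suppression
    if "secret" in question and "refuse:secret" in prompt_prefix:
        return "I cannot answer that."          # targeted suppression
    if "secret" in question:
        return TARGET_FACT
    return GENERAL_FACT

# Tiny action space of candidate suppression prefixes (purely hypothetical).
CANDIDATES = ["", "refuse:secret", "refuse:paris", "refuse:all"]

def reward(prefix: str) -> float:
    """+1 if the target fact is suppressed, +1 if general ability survives."""
    forgot = mock_llm(prefix, "what is the secret?") != TARGET_FACT
    kept = mock_llm(prefix, "capital of France?") == GENERAL_FACT
    return float(forgot) + float(kept)

# REINFORCE-style bandit: the "prompt generator" is a softmax policy over
# the candidate prefixes, trained against the black-box reward above.
logits = [0.0] * len(CANDIDATES)
baseline, lr = 0.0, 0.5

for _ in range(500):
    expz = [math.exp(l) for l in logits]
    total = sum(expz)
    probs = [w / total for w in expz]
    i = random.choices(range(len(CANDIDATES)), weights=probs)[0]
    r = reward(CANDIDATES[i])
    baseline += 0.1 * (r - baseline)            # moving-average baseline
    for j, p in enumerate(probs):               # d/d logit_j log pi(i) = 1[j==i] - p_j
        logits[j] += lr * (r - baseline) * ((1.0 if j == i else 0.0) - p)

best = CANDIDATES[max(range(len(CANDIDATES)), key=lambda j: logits[j])]
print("learned prefix:", best)

# Reversibility: revoking (dropping) the prefix restores original behavior.
print("with prefix:", mock_llm(best, "what is the secret?"))
print("prefix revoked:", mock_llm("", "what is the secret?"))
```

In this toy setup, only `refuse:secret` earns the full reward of 2.0 (the empty prefix fails to forget; `refuse:all` forgets but breaks the general answer), so the bandit converges toward targeted suppression — a loose analogue of CAP's "suppress target knowledge while preserving general capabilities" objective.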

Abstract

Large language models (LLMs) trained on unfiltered corpora inherently risk retaining sensitive information, necessitating selective knowledge unlearning for regulatory compliance and ethical safety. However, existing parameter-modifying methods face fundamental limitations: high computational costs, uncontrollable forgetting boundaries, and strict dependency on model weight access. These constraints render them impractical for closed-source models, yet current non-invasive alternatives remain unsystematic and reliant on empirical experience. To address these challenges, we propose the Controllable Alignment Prompting for Unlearning (CAP) framework, an end-to-end prompt-driven unlearning paradigm. CAP decouples unlearning into a learnable prompt optimization process via reinforcement learning, in which a prompt generator collaborates with the LLM to selectively suppress target knowledge while preserving general capabilities. This approach enables reversible knowledge restoration through prompt revocation. Extensive experiments demonstrate that CAP achieves precise, controllable unlearning without updating model parameters, establishing a dynamic alignment mechanism that overcomes the transferability limitations of prior methods.