Visual prompting reimagined: The power of activation prompts

arXiv cs.CV / 4/9/2026


Key Points

  • The paper introduces “activation prompts” (AP), extending visual prompting (VP) by applying universal perturbations to intermediate activation maps rather than only to the input.
  • It argues, through both theory and experiments, that VP's performance and efficiency are intrinsically limited, and that AP can outperform VP because its perturbations are injected deeper inside the model rather than at the input.
  • AP shows close relationships to normalization tuning in CNNs and vision transformers, but with distinct, model-dependent layer preferences for where prompts are most effective.
  • Across extensive experiments on 29 datasets and multiple architectures, AP achieves higher accuracy and better efficiency than VP and parameter-efficient fine-tuning baselines, including improvements in time, parameters, memory, and throughput.

Abstract

Visual prompting (VP) has emerged as a popular method for repurposing pretrained vision models for downstream tasks. Unlike conventional fine-tuning, VP introduces a universal perturbation directly into the input data to enable task-specific adaptation without modifying model parameters. However, a noticeable performance gap remains between VP and conventional fine-tuning, leaving an unexplored space, in both theory and practice, for understanding and advancing input-level VP. To this end, we introduce a generalized concept, termed activation prompt (AP), which extends input-level VP by allowing universal perturbations to be applied to activation maps within the intermediate layers of the model. Using AP as an analytical tool to revisit VP, we demonstrate the intrinsic limitations of VP in both performance and efficiency, revealing why input-level prompting can be less effective than AP, which exhibits a model-dependent layer preference. We show that AP is closely related to normalization tuning in convolutional neural networks and vision transformers, although each model type has distinct layer preferences for prompting. We also theoretically elucidate the rationale behind this preference by analyzing global features across layers. Through extensive experiments spanning 29 datasets and various model architectures, we provide a comprehensive performance analysis of AP, comparing it with VP and parameter-efficient fine-tuning baselines. Our results demonstrate AP's superiority in both accuracy and efficiency, considering time, parameter count, memory usage, and throughput.
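To make the VP-versus-AP distinction concrete, here is a minimal sketch using a toy frozen two-layer network in NumPy. It is an illustration of where each prompt is added, not the paper's implementation: the model, shapes, and prompt initializations are all hypothetical stand-ins for a real pretrained CNN or vision transformer, and no prompt optimization is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "pretrained model": x -> h = relu(W1 @ x) -> y = W2 @ h.
# In the paper's setting this would be a pretrained CNN or ViT with
# frozen weights; here two random matrices stand in for it.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def forward(x, input_prompt=None, activation_prompt=None):
    """Run the frozen model, optionally adding a universal perturbation
    at the input (VP) or at an intermediate activation map (AP)."""
    if input_prompt is not None:
        x = x + input_prompt            # VP: perturb the raw input
    h = np.maximum(W1 @ x, 0.0)         # frozen layer 1 + ReLU
    if activation_prompt is not None:
        h = h + activation_prompt       # AP: perturb the activation map
    return W2 @ h                       # frozen layer 2

x = rng.standard_normal(4)
delta_vp = 0.1 * rng.standard_normal(4)   # learnable input-level prompt
delta_ap = 0.1 * rng.standard_normal(8)   # learnable activation-level prompt

y_vp = forward(x, input_prompt=delta_vp)        # input-level VP
y_ap = forward(x, activation_prompt=delta_ap)   # intermediate-layer AP
```

In a real use of either method, `delta_vp` or `delta_ap` would be trained on the downstream task (the model weights stay frozen), and for AP the layer at which the prompt is injected would be chosen per architecture, reflecting the model-dependent layer preference the paper analyzes.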