Visual prompting reimagined: The power of activation prompts
arXiv cs.CV / 4/9/2026
Key Points
- The paper introduces “activation prompts” (AP), which extend visual prompting (VP) by applying a learnable universal perturbation to an intermediate activation map rather than only to the input (a minimal sketch follows this list).
- Through both theory and experiments, it argues that VP’s performance and efficiency are intrinsically limited, and that AP can outperform VP because the depth at which the perturbation is injected matters.
- AP is closely related to normalization tuning in both CNNs and vision transformers, but the layers where prompts are most effective differ by architecture (see the normalization-tuning sketch after the first code block below).
- Across extensive experiments on 29 datasets and multiple architectures, AP achieves higher accuracy and better efficiency than VP and parameter-efficient fine-tuning baselines, with gains in training time, parameter count, memory footprint, and throughput.
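To make the idea concrete, here is a minimal PyTorch sketch of an activation prompt: a single learnable tensor added to one intermediate activation map of a frozen backbone, trained with an ordinary task loss. The `ActivationPrompt` class, the choice of ResNet-50’s `layer3`, and the tensor shape are illustrative assumptions, not the paper’s actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen pretrained backbone; only the prompt parameters are trained.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False

class ActivationPrompt(nn.Module):
    """A learnable, input-agnostic additive perturbation applied to an
    intermediate activation map instead of the input pixels."""
    def __init__(self, shape):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(shape))

    def forward(self, activation):
        # Broadcasts the shared prompt over the batch dimension.
        return activation + self.delta

# Illustrative layer choice: hook the prompt onto layer3's output
# (1024 x 14 x 14 for 224x224 inputs). Returning a tensor from a
# forward hook replaces the layer's output with the prompted one.
prompt = ActivationPrompt((1, 1024, 14, 14))
backbone.layer3.register_forward_hook(lambda module, inputs, out: prompt(out))

# Only the prompt is optimized; the backbone stays frozen.
optimizer = torch.optim.Adam(prompt.parameters(), lr=1e-2)

x = torch.randn(4, 3, 224, 224)          # dummy batch
labels = torch.randint(0, 1000, (4,))    # dummy targets
loss = nn.functional.cross_entropy(backbone(x), labels)
loss.backward()                          # gradients flow only into delta
optimizer.step()
```

Adding `delta` to `x` itself instead would recover ordinary visual prompting; AP generalizes this by letting the injection point move into the network.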
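The connection to normalization tuning is also easy to see in code: an additive shift on a layer’s activations plays a role similar to retuning that layer’s normalization bias. The sketch below shows the normalization-tuning baseline for comparison, unfreezing only the affine (scale/shift) parameters of the normalization layers; again, the specifics are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False

# Unfreeze only the affine parameters (weight = scale, bias = shift)
# of the normalization layers; everything else stays frozen.
for m in backbone.modules():
    if isinstance(m, (nn.BatchNorm2d, nn.LayerNorm, nn.GroupNorm)):
        for p in m.parameters():
            p.requires_grad = True

trainable = [p for p in backbone.parameters() if p.requires_grad]
print(f"trainable parameters: {sum(p.numel() for p in trainable):,}")
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```

Whereas normalization tuning adapts every normalization layer at once, an activation prompt concentrates all of its capacity at one chosen depth, which is where the paper’s model-dependent layer preferences come in.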