Dictionary-Aligned Concept Control for Safeguarding Multimodal LLMs

arXiv cs.AI / 4/13/2026


Key Points

  • The paper proposes Dictionary-Aligned Concept Control (DACO) to safeguard multimodal LLMs by steering frozen-model activations at inference time against evolving malicious queries.
  • DACO curates a dictionary of 15,000 multimodal concepts by retrieving over 400,000 caption-image stimuli (released as the DACO-400K dataset) and summarizing their activations into concept directions.
  • It uses a Sparse Autoencoder (SAE) plus dictionary-aligned sparse coding to enable more granular intervention on specific safety-related concepts without broadly disrupting other capabilities.
  • The framework includes a new steering approach that initializes SAE training with the concept dictionary and automatically annotates SAE atoms’ semantics for safer control.
  • Experiments across multiple MLLMs (e.g., QwenVL, LLaVA, InternVL) and benchmarks (e.g., MM-SafetyBench, JailBreakV) report significant safety improvements while preserving general-purpose functionality.
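The dictionary-aligned intervention described above can be pictured as sparse coding followed by selective atom editing. The sketch below is illustrative, not the paper's actual algorithm: the function names (`sparse_code`, `steer`), the ISTA solver, and all hyperparameters are assumptions, and real MLLM activations would be high-dimensional hidden states rather than toy vectors.

```python
import numpy as np

def sparse_code(activation, dictionary, n_iters=50, lam=0.1, lr=0.1):
    """Approximate the activation as a sparse combination of dictionary
    atoms via ISTA (a standard L1 sparse-coding solver; the paper's own
    solver may differ).  `dictionary`: (n_atoms, d_model),
    `activation`: (d_model,)."""
    codes = np.zeros(dictionary.shape[0])
    for _ in range(n_iters):
        residual = activation - codes @ dictionary   # (d_model,)
        codes = codes + lr * (dictionary @ residual) # gradient step
        # soft-thresholding enforces sparsity (L1 penalty)
        codes = np.sign(codes) * np.maximum(np.abs(codes) - lr * lam, 0.0)
    return codes

def steer(activation, dictionary, unsafe_atoms, scale=0.0):
    """Suppress (scale=0) or rescale specific concept atoms, then
    reconstruct, leaving the part of the activation the dictionary
    does not explain unchanged."""
    codes = sparse_code(activation, dictionary)
    edited = codes.copy()
    edited[unsafe_atoms] *= scale
    residual = activation - codes @ dictionary       # unexplained part
    return edited @ dictionary + residual
```

Because only the coefficients of the targeted atoms are edited and the unexplained residual is preserved, concepts not in `unsafe_atoms` pass through essentially untouched, which is the granularity the Key Points refer to.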

Abstract

Multimodal Large Language Models (MLLMs) have been shown to be vulnerable to malicious queries that can elicit unsafe responses. Recent work uses prompt engineering, response classification, or finetuning to improve MLLM safety. Nevertheless, such approaches are often ineffective against evolving malicious patterns, may require rerunning the query, or demand heavy computational resources. Steering the activations of a frozen model at inference time has recently emerged as a flexible and effective solution. However, existing steering methods for MLLMs typically handle only a narrow set of safety-related concepts or struggle to adjust specific concepts without affecting others. To address these challenges, we introduce Dictionary-Aligned Concept Control (DACO), a framework that utilizes a curated concept dictionary and a Sparse Autoencoder (SAE) to provide granular control over MLLM activations. First, we curate a dictionary of 15,000 multimodal concepts by retrieving over 400,000 caption-image stimuli and summarizing their activations into concept directions. We name the dataset DACO-400K. Second, we show that the curated dictionary can be used to intervene on activations via sparse coding. Third, we propose a new steering approach that uses our dictionary to initialize the training of an SAE and automatically annotate the semantics of the SAE atoms for safeguarding MLLMs. Experiments on multiple MLLMs (e.g., QwenVL, LLaVA, InternVL) across safety benchmarks (e.g., MM-SafetyBench, JailBreakV) show that DACO significantly improves MLLM safety while maintaining general-purpose capabilities.
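The abstract's third step, initializing SAE training with the concept dictionary and auto-annotating the resulting atoms, can be sketched as follows. Everything here is a hypothetical illustration of the idea, not the paper's implementation: the function names, the tied-transpose encoder initialization, and the nearest-concept (cosine similarity) labeling rule are all assumptions.

```python
import numpy as np

def init_sae_from_dictionary(concept_dirs, d_model, n_atoms, seed=0):
    """Initialize SAE weights from a concept dictionary: the first decoder
    rows are set to the unit-norm concept directions, remaining atoms are
    small random vectors, and the encoder is the decoder transpose
    (a common tied initialization; assumed here, not taken from the paper)."""
    rng = np.random.default_rng(seed)
    W_dec = rng.normal(scale=0.01, size=(n_atoms, d_model))
    k = min(len(concept_dirs), n_atoms)
    W_dec[:k] = concept_dirs[:k]
    W_dec /= np.linalg.norm(W_dec, axis=1, keepdims=True)
    W_enc = W_dec.T.copy()
    return W_enc, W_dec

def annotate_atoms(W_dec, concept_dirs, labels):
    """Label each (trained) SAE atom with the name of the most similar
    dictionary concept by cosine similarity, so safety-relevant atoms
    can be identified for steering."""
    D = concept_dirs / np.linalg.norm(concept_dirs, axis=1, keepdims=True)
    A = W_dec / np.linalg.norm(W_dec, axis=1, keepdims=True)
    sims = A @ D.T                      # (n_atoms, n_concepts)
    return [labels[i] for i in sims.argmax(axis=1)]
```

After training, the annotation step gives each SAE atom a human-readable concept name, which is what lets a safety intervention target, say, a "weapons" atom specifically rather than steering along an unlabeled direction.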