GUARD-SLM: Token Activation-Based Defense Against Jailbreak Attacks for Small Language Models

arXiv cs.AI / 4/1/2026


Key Points

  • The paper studies how 9 jailbreak attacks affect 7 small language models (SLMs) and 3 large language models (LLMs), finding that SLMs remain highly vulnerable to prompts that bypass safety alignment.
  • It analyzes hidden-layer activations across different layers and architectures, showing that different input types produce distinguishable internal representation patterns that relate to jailbreak behavior.
  • The authors propose GUARD-SLM, a lightweight token activation-based defense that filters malicious prompts directly in the representation space during inference while preserving benign requests.
  • The work highlights the limited robustness of existing jailbreak defenses against heterogeneous attacks and offers a practical pathway toward secure deployment of SLMs in resource-constrained environments.

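The filtering idea in the third bullet can be illustrated with a minimal sketch. The paper's exact mechanism is not detailed here, so this example assumes a simple instantiation: mean-pooled hidden-layer activations of benign prompts form a reference region, and an incoming prompt is flagged when its activation vector falls outside that region. The activation vectors below are synthetic stand-ins, and the centroid-plus-threshold rule is a hypothetical simplification, not GUARD-SLM's actual detector.

```python
import numpy as np

# Illustrative sketch only: we assume a representation-space filter that flags
# prompts whose (mean-pooled) hidden-layer activations lie far from a benign
# reference distribution. Activations here are synthetic stand-ins.
rng = np.random.default_rng(0)

# Benign prompts cluster near the origin; "jailbreak" prompts are shifted.
benign_acts = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
shifted_acts = rng.normal(loc=3.0, scale=1.0, size=(50, 16))

# Fit a simple benign reference: centroid + distance threshold chosen so that
# roughly 1% of benign calibration prompts would be falsely flagged.
centroid = benign_acts.mean(axis=0)
calib_dists = np.linalg.norm(benign_acts - centroid, axis=1)
threshold = np.quantile(calib_dists, 0.99)

def flag_malicious(act: np.ndarray) -> bool:
    """Return True if the activation vector falls outside the benign region."""
    return float(np.linalg.norm(act - centroid)) > threshold

benign_flagged = sum(flag_malicious(a) for a in benign_acts)
shifted_flagged = sum(flag_malicious(a) for a in shifted_acts)
print(benign_flagged, shifted_flagged)
```

Because the filter operates on activation vectors rather than raw text, it adds only a distance computation per prompt at inference time, which matches the paper's emphasis on lightweight defenses for edge deployment.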
Abstract

Small Language Models (SLMs) are emerging as efficient and economically viable alternatives to Large Language Models (LLMs), offering competitive performance with significantly lower computational costs and latency. These advantages make SLMs suitable for efficient deployment on resource-constrained edge devices. However, existing jailbreak defenses show limited robustness against heterogeneous attacks, largely due to an incomplete understanding of the internal representations across different layers of language models that facilitate jailbreak behaviors. In this paper, we conduct a comprehensive empirical study of 9 jailbreak attacks across 7 SLMs and 3 LLMs. Our analysis shows that SLMs remain highly vulnerable to malicious prompts that bypass safety alignment. We analyze hidden-layer activations across different layers and model architectures, revealing that different input types form distinguishable patterns in the internal representation space. Based on this observation, we propose GUARD-SLM, a lightweight token activation-based method that operates in the representation space to filter malicious prompts during inference while preserving benign ones. Our findings highlight robustness limitations across layers of language models and provide a practical direction for secure small language model deployment.