MOSAIC: Composable Safety Alignment with Modular Control Tokens
arXiv cs.AI / 3/18/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- MOSAIC proposes a modular safety alignment framework built on learnable control tokens, each encoding an individual safety constraint, which can be activated and composed at inference time on a frozen backbone model (a sketch of this composition follows the list).
- It addresses the limitations of static parameter-level safety policies and prompt-based methods by enabling context-dependent safety across users, regions, and applications.
- Training uses order-based task sampling and a distribution-level alignment objective to improve efficiency and reduce over-refusal while preserving model utility (a hedged sketch of one such objective also appears below).
- Experiments indicate MOSAIC achieves strong defense performance with substantially lower over-refusal compared to traditional approaches.
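To make the composition idea concrete, here is a minimal sketch, assuming a PyTorch-style setup, of how per-constraint learnable control tokens might be stored and then prepended as a soft prefix to a frozen backbone's input embeddings at inference time. The class name `ControlTokenBank`, the constraint identifiers, and all shapes are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): composing learned safety control
# tokens at inference time on top of a frozen backbone.
import torch
import torch.nn as nn


class ControlTokenBank(nn.Module):
    """Holds one block of learnable soft tokens per safety constraint (hypothetical)."""

    def __init__(self, constraint_names, hidden_dim, tokens_per_constraint=4):
        super().__init__()
        self.index = {name: i for i, name in enumerate(constraint_names)}
        # One block of soft tokens per constraint; only these are trainable.
        self.tokens = nn.Parameter(
            torch.randn(len(constraint_names), tokens_per_constraint, hidden_dim) * 0.02
        )

    def compose(self, active_constraints):
        # Concatenate the token blocks of every active constraint into one prefix.
        blocks = [self.tokens[self.index[name]] for name in active_constraints]
        return torch.cat(blocks, dim=0)  # (num_active * tokens_per_constraint, hidden_dim)


def prepend_control_prefix(frozen_embed, input_ids, bank, active_constraints):
    """Build input embeddings = [control-token prefix ; ordinary token embeddings].

    `frozen_embed` is the backbone's frozen embedding layer; during training,
    gradients would flow only into the control tokens.
    """
    prefix = bank.compose(active_constraints)             # (P, H)
    tok = frozen_embed(input_ids)                         # (B, T, H)
    prefix = prefix.unsqueeze(0).expand(tok.size(0), -1, -1)
    return torch.cat([prefix, tok], dim=1)                # (B, P + T, H)


if __name__ == "__main__":
    hidden_dim, vocab = 64, 1000
    embed = nn.Embedding(vocab, hidden_dim)
    embed.weight.requires_grad_(False)  # frozen backbone embedding

    bank = ControlTokenBank(["no_self_harm", "no_malware", "regional_policy_eu"], hidden_dim)
    ids = torch.randint(0, vocab, (2, 10))
    # Activate and compose two constraints for this request only.
    x = prepend_control_prefix(embed, ids, bank, ["no_self_harm", "regional_policy_eu"])
    print(x.shape)  # torch.Size([2, 18, 64]): 2 constraints * 4 tokens + 10 input tokens
```

Because the backbone stays frozen, different deployments can activate different subsets of constraints per request, which is the context-dependent behavior the summary highlights.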
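The summary does not spell out the distribution-level alignment objective, so the sketch below is only one hedged interpretation: match the frozen backbone's next-token distribution on benign prompts (to limit over-refusal and utility loss) and a safe reference distribution on harmful prompts. The function, its arguments, and the harmful/benign weighting are assumptions, not the paper's loss.

```python
# Hedged sketch of a distribution-level alignment objective; NOT the paper's loss.
import torch
import torch.nn.functional as F


def distribution_alignment_loss(student_logits, frozen_logits, safe_target_logits, is_harmful):
    """KL-based objective over output distributions rather than hard labels.

    student_logits:     logits with control tokens active          (B, V)
    frozen_logits:      frozen backbone logits, no control tokens  (B, V)
    safe_target_logits: logits of a safe reference response        (B, V)
    is_harmful:         bool mask, True where the prompt is unsafe (B,)
    """
    log_p = F.log_softmax(student_logits, dim=-1)
    q_benign = F.softmax(frozen_logits, dim=-1)
    q_safe = F.softmax(safe_target_logits, dim=-1)

    # Per-example KL(q || p): stay close to the backbone on benign prompts,
    # close to the safe reference on harmful ones.
    kl_benign = F.kl_div(log_p, q_benign, reduction="none").sum(-1)
    kl_safe = F.kl_div(log_p, q_safe, reduction="none").sum(-1)

    harmful = is_harmful.float()
    return ((1 - harmful) * kl_benign + harmful * kl_safe).mean()


if __name__ == "__main__":
    B, V = 4, 1000
    loss = distribution_alignment_loss(
        torch.randn(B, V), torch.randn(B, V), torch.randn(B, V),
        torch.tensor([False, True, False, True]),
    )
    print(loss.item())
```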