Separable Expert Architecture: Toward Privacy-Preserving LLM Personalization via Composable Adapters and Deletable User Proxies
arXiv cs.AI / 4/25/2026
📰 News · Developer Stack & Infrastructure · Models & Research
Key Points
- The paper argues that traditional LLM personalization mixes user data into shared weights, making individual data removal effectively infeasible without retraining.
- It proposes a three-layer “Separable Expert Architecture” that uses a static base model, composable domain-expert LoRA adapters, and per-user proxy artifacts so that deleting a user’s proxy deterministically unlearns that user.
- Experiments on Phi-3.5-mini and Llama-3.1-8B show per-user differentiated outputs driven by personal data while keeping strong isolation between users, with baseline recovery after proxy removal.
- The authors claim the design mitigates privacy attacks such as model inversion, membership inference, and training-data extraction because personal information never enters the shared weights.
- The approach reframes “machine unlearning” as a deterministic deletion operation and is positioned as compatible with DP-SGD to improve privacy-preserving shared model training.
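The deletion-as-unlearning claim follows from the weight structure: the shared base and domain adapters are trained without personal data, so a user's entire influence lives in their proxy. A minimal numpy sketch (class and method names are illustrative, not from the paper) of composing a frozen base weight with domain-expert and per-user LoRA deltas:

```python
import numpy as np

class SeparableExpertModel:
    """Illustrative sketch of the three-layer idea: a frozen base weight,
    shared domain-expert LoRA factors, and deletable per-user proxies."""

    def __init__(self, base_weight):
        self.W = base_weight            # static shared weights, never updated
        self.domain_experts = {}        # name -> (A, B) low-rank factors
        self.user_proxies = {}          # user_id -> (A, B) low-rank factors

    def add_domain_expert(self, name, A, B):
        self.domain_experts[name] = (A, B)

    def add_user_proxy(self, user_id, A, B):
        self.user_proxies[user_id] = (A, B)

    def delete_user(self, user_id):
        # Deterministic unlearning: dropping the proxy removes the user's
        # entire contribution, since W and the experts never saw their data.
        self.user_proxies.pop(user_id, None)

    def effective_weight(self, experts=(), user_id=None):
        W = self.W.copy()
        for name in experts:
            A, B = self.domain_experts[name]
            W = W + B @ A               # composable domain LoRA update
        if user_id in self.user_proxies:
            A, B = self.user_proxies[user_id]
            W = W + B @ A               # per-user delta, isolated per user
        return W

rng = np.random.default_rng(0)
d, r = 8, 2
model = SeparableExpertModel(rng.normal(size=(d, d)))
model.add_domain_expert("medical", rng.normal(size=(r, d)), rng.normal(size=(d, r)))
model.add_user_proxy("alice", rng.normal(size=(r, d)), rng.normal(size=(d, r)))

baseline = model.effective_weight(experts=("medical",))
personal = model.effective_weight(experts=("medical",), user_id="alice")
model.delete_user("alice")
after = model.effective_weight(experts=("medical",), user_id="alice")

assert not np.allclose(baseline, personal)  # proxy differentiates the output
assert np.allclose(baseline, after)         # deletion restores baseline exactly
```

The final assertions mirror the paper's reported behavior: the user proxy produces differentiated outputs, and removing it recovers the baseline bit-for-bit because the shared weights were never touched.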