Fusian: Multi-LoRA Fusion for Fine-Grained Continuous MBTI Personality Control in Large Language Models
arXiv cs.CL · March 17, 2026
Key Points
- Fusian introduces a two-stage framework for fine-grained, continuous personality control in LLMs: the first stage collects a trajectory of LoRA adapter checkpoints during SFT, mapping out a trait's continuous manifold.
- In the second stage, a reinforcement learning policy dynamically fuses several of these frozen adapters, sampling fusion weights from a Dirichlet distribution to hit a target trait intensity.
- Experiments on the Qwen3-14B model show Fusian achieves high precision in matching user-specified personality intensities and outperforms baseline methods.
- The approach enables continuous, nuanced personality control beyond discrete categories, with potential implications for more personalized and controllable assistant interactions.
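The fusion step described above can be sketched in a few lines. This is a minimal illustrative mock, not the paper's code: the function name, shapes, and the fixed Dirichlet prior are all assumptions (in Fusian, an RL policy would produce the concentration parameters). Each LoRA adapter contributes a low-rank update `B @ A`, and the sampled weights combine the frozen adapters convexly:

```python
import numpy as np

def fuse_lora_deltas(adapters, weights):
    """Convexly combine frozen LoRA adapters, each given as a (B, A) pair.

    Hypothetical sketch of Fusian-style multi-adapter fusion; the paper's
    actual fusion mechanism may differ in detail.
    """
    assert np.isclose(weights.sum(), 1.0), "fusion weights must be convex"
    return sum(w * (B @ A) for w, (B, A) in zip(weights, adapters))

rng = np.random.default_rng(0)
d, r = 8, 2  # toy hidden size and LoRA rank

# Three frozen adapters, standing in for checkpoints sampled along
# a trait's SFT trajectory (stage one of the framework).
adapters = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
            for _ in range(3)]

# Stage two: sample fusion weights from a Dirichlet distribution.
# Here the concentration is fixed; in Fusian an RL policy would set it
# to steer toward a user-specified trait intensity.
weights = rng.dirichlet(alpha=np.ones(3))

delta_W = fuse_lora_deltas(adapters, weights)
print(delta_W.shape)  # (8, 8)
```

Because the weights lie on the simplex, the fused update interpolates smoothly between the adapters, which is what makes continuous (rather than discrete) trait control possible.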