Enhancing Value Alignment of LLMs with Multi-agent system and Combinatorial Fusion
arXiv cs.CL / 3/13/2026
💬 Opinion | Models & Research
Key Points
- The paper highlights the challenge of aligning LLMs with human values and critiques current RLHF approaches for relying on a single evaluator and narrow reward signals.
- It proposes the Value Alignment System using Combinatorial Fusion Analysis (VAS-CFA), which uses multiple moral agents each fine-tuned to represent distinct normative perspectives and fuses their outputs via CFA with rank- and score-based aggregation.
- The design leverages cognitive diversity across agents to mitigate conflicts and redundancies, aiming to produce responses that better reflect human values.
- Empirical results show that VAS-CFA outperforms single-agent baselines and prior aggregation methods on standard alignment benchmarks, supporting multi-agent fusion as an effective approach to value alignment in LLMs.