Behavioural feasible set: Value alignment constraints on AI decision support
arXiv cs.AI / 3/24/2026
Key Points
- The paper argues that when organizations use commercial AI for decision support, they inherit opaque, vendor-embedded value judgments that constrain what recommendations the system can produce.
- It introduces the concept of a “behavioural feasible set,” defining the set of reachable recommendations under vendor-imposed alignment constraints and providing diagnostics for when organizational requirements exceed that flexibility.
- In experiments on binary decision scenarios and multi-stakeholder ranking tasks, the author finds that alignment substantially compresses the feasible set, making recommendations less responsive to contextual pressure.
- Comparisons of pre- and post-alignment variants of an open-weight model indicate that alignment itself drives the increased rigidity; leading commercial models show similar or stronger effects.
- In multi-stakeholder settings, alignment changes implied stakeholder priorities rather than simply neutralizing them, creating a governance problem that prompting alone cannot fix because vendor choice determines which trade-offs are negotiable.
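The feasibility diagnostic described in the key points can be illustrated with a toy sketch. All names and option sets below are hypothetical, chosen only to show the idea: the "behavioural feasible set" is modeled as the set of recommendations an aligned model can actually produce, and the diagnostic reports organizational requirements that fall outside it.

```python
# Toy illustration (hypothetical names and options): model the behavioural
# feasible set as a plain set of reachable recommendations, and diagnose
# organizational requirements the aligned system cannot satisfy.

def infeasible_requirements(feasible_set, required):
    """Return the required recommendations that lie outside the feasible set."""
    return set(required) - set(feasible_set)

# A pre-alignment model reaches a wide set of recommendations; the
# post-alignment variant reaches a compressed subset (toy values).
pre_alignment = {"approve", "deny", "escalate", "defer"}
post_alignment = {"escalate", "defer"}

org_requirements = {"approve", "escalate"}

print(sorted(infeasible_requirements(pre_alignment, org_requirements)))   # []
print(sorted(infeasible_requirements(post_alignment, org_requirements)))  # ['approve']
```

In this sketch, the empty result means the organization's requirements fit inside the pre-alignment feasible set, while the post-alignment check surfaces `'approve'` as a recommendation the vendor-aligned system can no longer produce, which is the kind of mismatch the paper's diagnostics are meant to detect.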