Mixed-Precision Information Bottlenecks for On-Device Trait-State Disentanglement in Bipolar Agitation Detection
arXiv cs.LG / 5/6/2026
Key Points
- The paper proposes MP-IB, a framework that treats mixed-precision quantization as an “information bottleneck” to separate stable speaker traits from volatile agitation states on resource-limited edge devices.
- It relies on an information-asymmetry design in which a high-capacity FP16 trait head (1,024-bit budget) and a tightly constrained INT4 state head (128-bit budget) limit which factors each head can encode, reducing the need for adversarial training (see the sketch after these points).
- MP-IB adds Dynamic Precision Scheduling and Multi-Scale Temporal Fusion to improve clinical trait-state disentanglement performance.
- On Bridge2AI-Voice (N=833, strict speaker-independent cross-validation), MP-IB reaches rho=0.117 (p=0.003 vs. chance) and outperforms several baselines by 2.8–15.9 absolute points, with strong zero-shot transfer to CREMA-D (AUC=0.817).
- The method suppresses identity leakage to near-random levels while meeting real-time constraints (23.4 ms end-to-end latency, ~617 KB footprint) for monitoring on very low-cost devices.
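The information-asymmetry idea can be made concrete with a small sketch. The snippet below is a minimal illustration, not the paper's implementation: it assumes a standard straight-through fake-quantizer and arbitrary input/layer sizes, keeping only the stated bit budgets (a 64-dimensional FP16 trait embedding, 64 × 16 = 1,024 bits, versus a 32-dimensional INT4 state embedding, 32 × 4 = 128 bits). The names `fake_quant` and `AsymmetricPrecisionHeads` are hypothetical.

```python
import torch
import torch.nn as nn


def fake_quant(x: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Simulate symmetric uniform quantization with a straight-through
    estimator so gradients still flow through the rounding step."""
    qmax = 2 ** (num_bits - 1) - 1                                  # 7 for INT4
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    x_q = torch.round(x / scale).clamp(-qmax - 1, qmax) * scale
    return x + (x_q - x).detach()                                   # straight-through


class AsymmetricPrecisionHeads(nn.Module):
    """Two heads with deliberately unequal capacity: a wide trait head
    (64 dims, deployed at FP16, 1,024 bits) and a narrow INT4 state head
    (32 dims, 128 bits). The tight state bottleneck can only afford to
    encode the fast-changing agitation signal, pushing stable speaker
    identity into the wider trait head."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.trait_head = nn.Linear(feat_dim, 64)   # 64 * 16 bits = 1,024 bits
        self.state_head = nn.Linear(feat_dim, 32)   # 32 *  4 bits =   128 bits

    def forward(self, features: torch.Tensor):
        trait = self.trait_head(features)                 # cast to FP16 at deployment
        state = fake_quant(self.state_head(features), 4)  # INT4 bottleneck
        return trait, state


# Usage: shared acoustic features in, one embedding per head out.
features = torch.randn(8, 256)
trait_emb, state_emb = AsymmetricPrecisionHeads()(features)
print(trait_emb.shape, state_emb.shape)  # torch.Size([8, 64]) torch.Size([8, 32])
```

The point of the asymmetry is that a 128-bit state code is too small to also carry speaker identity, which is what the paper argues lets it reduce reliance on adversarial disentanglement losses.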