Multi-Axis Trust Modeling for Interpretable Account Hijacking Detection
arXiv cs.AI · March 17, 2026
Key Points
- The work introduces a Hadith-inspired multi-axis trust modeling framework for interpretable account-hijacking detection, mapping five trust axes (long-term integrity, behavioral precision, contextual continuity, cumulative reputation, and anomaly evidence) into 26 behavioral features.
- It adds lightweight temporal features to capture short-horizon changes across consecutive activity windows, enhancing the trust-based representation.
- Experiments on the CLUE-LDS cloud activity dataset with injected hijacking show that a Random Forest trained on the trust features achieves near-perfect detection, substantially outperforming models based on raw event counts, simple baselines, and unsupervised anomaly detectors.
- On the CERT Insider Threat datasets, which exhibit extreme class imbalance and sparse malicious behavior, the temporal features improve ROC-AUC (from 0.776 to 0.844) and PR-AUC (from 0.072 to 0.264), and yield robust gains in leakage-controlled scenarios (ROC-AUC from 0.627 to 0.715).
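The pipeline described above can be sketched in a few lines: per-window trust features are augmented with first differences between consecutive activity windows (the "lightweight temporal features"), and a Random Forest is trained on the combined representation. The feature values, the injected shift, and the window counts below are all synthetic placeholders, not the paper's data; only the overall shape (26 trust features, window deltas, Random Forest) follows the summary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's 26 trust features: one row of
# scores per activity window across the five trust axes.
n_windows, n_features = 400, 26
X = rng.normal(size=(n_windows, n_features))

# Inject a synthetic "hijacked" segment whose feature distribution shifts.
y = np.zeros(n_windows, dtype=int)
y[300:] = 1
X[300:] += 1.5

# Lightweight temporal features: first differences between consecutive
# windows, capturing short-horizon behavioral change.
deltas = np.diff(X, axis=0, prepend=X[:1])
X_aug = np.hstack([X, deltas])  # 26 static + 26 delta features

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_aug, y)
scores = clf.predict_proba(X_aug)[:, 1]
```

In a real evaluation the model would of course be scored on held-out windows; this sketch only illustrates how the delta features are appended to the static trust representation.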