DT-BEHRT: Disease Trajectory-aware Transformer for Interpretable Patient Representation Learning
arXiv cs.LG / 3/12/2026
Key Points
- DT-BEHRT introduces a graph-enhanced, trajectory-aware Transformer for learning interpretable patient representations from longitudinal EHR data.
- The model explicitly disentangles diagnosis-centric interactions within organ systems and captures asynchronous disease progression to reflect real-world clinical trajectories.
- A tailored pre-training scheme combines trajectory-level code masking with ontology-informed ancestor prediction to improve semantic alignment across modules.
- On benchmark datasets, DT-BEHRT achieves strong predictive performance and yields clinically interpretable representations aligned with disease-centered reasoning, with the code released on GitHub.
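The pre-training scheme in the third bullet — masking a contiguous span of diagnosis codes and asking the model to predict both the masked codes and their ontology ancestors — can be illustrated with a minimal sketch. This is not the paper's implementation; the toy ontology, code names, and `make_pretraining_example` helper are all hypothetical, shown only to make the two-target setup concrete.

```python
import random

# Toy ICD-style ontology mapping each code to its ancestor category.
# These mappings are illustrative, not taken from the paper.
ONTOLOGY = {
    "E11.9": "E11",      # type 2 diabetes without complications -> diabetes
    "E11.2": "E11",      # type 2 diabetes with kidney complications -> diabetes
    "I10":   "I00-I99",  # essential hypertension -> circulatory diseases
    "I25.1": "I00-I99",  # atherosclerotic heart disease -> circulatory diseases
}

def make_pretraining_example(trajectory, mask_span=2, seed=0):
    """Mask a contiguous span of codes (trajectory-level masking) and
    return the masked input plus two prediction targets per masked
    position: the original code and its ontology ancestor."""
    rng = random.Random(seed)
    start = rng.randrange(0, len(trajectory) - mask_span + 1)
    masked = list(trajectory)
    code_targets, ancestor_targets = {}, {}
    for i in range(start, start + mask_span):
        code_targets[i] = trajectory[i]                # masked-code target
        ancestor_targets[i] = ONTOLOGY[trajectory[i]]  # ancestor target
        masked[i] = "[MASK]"
    return masked, code_targets, ancestor_targets

traj = ["E11.9", "I10", "E11.2", "I25.1"]
x, y_code, y_anc = make_pretraining_example(traj, mask_span=2, seed=1)
```

In a real setup, the model's loss would sum a cross-entropy term over `y_code` and another over `y_anc`, so that supervision flows both from exact codes and from their coarser ontology categories.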
Related Articles

Report: Observations of "Self-Referential Recursion" and "Stateful Emulation" in LLMs
note

Dialogue with Master Zhuge Liang (Kongming) (a ChatGPT roleplay), Part 45: "Galactic Civilization and the Dark Matter Engine"
note

GPT-5.4 mini/nano Arrives: Small, High-Performance Models, 2x Faster and Available on the Free Plan
note

Why a Perfect-Memory AI Agent Without Persona Drift is Architecturally Impossible
Dev.to
OCP: Orthogonal Constrained Projection for Sparse Scaling in Industrial Commodity Recommendation
arXiv cs.LG