TypeBandit: Type-Level Context Allocation and Reweighting for Effective Attribute Completion in Heterogeneous Graph Neural Networks
arXiv cs.LG / 5/1/2026
Key Points
- The paper studies heterogeneous graphs and formalizes “type-dependent information asymmetry,” where different node types contribute very different amounts of useful signal for attribute completion.
- It proposes TypeBandit, a model-agnostic method that allocates a global sampling budget across node types and then samples representative nodes per type to form shared contextual signals.
- TypeBandit is designed as a lightweight type-aware front end that can plug into existing heterogeneous GNN backbones such as R-GCN, HetGNN, HGT, and SimpleHGN without requiring new architectures.
- The method includes a hybrid pretraining strategy combining structural degree priors with feature propagation to produce a more reliable initializer than degree-only pretraining.
- Experiments on DBLP, IMDB, and ACM show practically meaningful, dataset-dependent improvements under fixed-split evaluation, with extensive ablations and OGBN-MAG tests supporting its effectiveness and efficiency.
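The budget-allocation-and-sampling front end described in the second bullet can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function names (`allocate_budget`, `sample_context`) and the proportional allocation rule over per-type usefulness scores are assumptions made for the example.

```python
import numpy as np

def allocate_budget(type_scores: dict, total_budget: int) -> dict:
    """Split `total_budget` across node types in proportion to a
    per-type usefulness score (e.g. a bandit-style reward estimate).
    The proportional rule is an illustrative assumption."""
    total = sum(type_scores.values())
    return {t: int(round(total_budget * s / total))
            for t, s in type_scores.items()}

def sample_context(features_by_type: dict, alloc: dict, rng=None) -> dict:
    """For each type, sample `alloc[t]` node-feature rows without
    replacement and average them into one shared context vector."""
    rng = rng or np.random.default_rng(0)
    context = {}
    for t, X in features_by_type.items():
        k = min(alloc.get(t, 0), len(X))
        if k == 0:
            continue
        idx = rng.choice(len(X), size=k, replace=False)
        context[t] = X[idx].mean(axis=0)
    return context

# Toy usage: three node types with different estimated usefulness.
scores = {"author": 3.0, "paper": 6.0, "venue": 1.0}
feats = {t: np.random.default_rng(1).normal(size=(50, 8)) for t in scores}
alloc = allocate_budget(scores, total_budget=20)
ctx = sample_context(feats, alloc)
print(alloc)  # → {'author': 6, 'paper': 12, 'venue': 2}
```

The per-type context vectors would then be consumed by a heterogeneous GNN backbone; because the front end only produces auxiliary signals, the backbone architecture itself is left unchanged.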
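The hybrid pretraining idea in the fourth bullet, blending a structural degree prior with feature propagation to initialize missing attributes, can be illustrated with a minimal sketch. The blend weight `alpha`, the normalization choices, and the helper name `hybrid_init` are assumptions for the example; the paper's exact formulation is not given in this summary.

```python
import numpy as np

def hybrid_init(adj: np.ndarray, X: np.ndarray, has_feat: np.ndarray,
                alpha: float = 0.5, steps: int = 2) -> np.ndarray:
    """Initialize missing node attributes as a blend of (a) features
    propagated from attributed neighbors and (b) a degree-based prior.

    adj: (N, N) adjacency matrix; X: (N, d) feature matrix (rows of
    nodes without features are ignored); has_feat: (N,) boolean mask.
    """
    N, d = X.shape
    deg = adj.sum(axis=1, keepdims=True)
    # Structural degree prior: normalized degree broadcast over dims.
    deg_prior = np.tile(deg / (deg.max() + 1e-9), (1, d))
    # Simple feature propagation: average neighbor features, keeping
    # known features clamped to their observed values each step.
    H = np.where(has_feat[:, None], X, 0.0)
    for _ in range(steps):
        H = adj @ H / np.maximum(deg, 1.0)
        H[has_feat] = X[has_feat]
    out = X.copy()
    miss = ~has_feat
    out[miss] = alpha * H[miss] + (1 - alpha) * deg_prior[miss]
    return out

# Toy usage: a 3-node path graph where the middle node lacks features.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 0.], [0., 1.]])
mask = np.array([True, False, True])
init = hybrid_init(adj, X, mask)
```

The intuition matching the bullet: a degree-only prior carries no semantic information, while pure propagation fails for poorly connected nodes, so mixing the two yields a more reliable initializer for the completion model.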