LLM-Enhanced Energy Contrastive Learning for Out-of-Distribution Detection in Text-Attributed Graphs
arXiv cs.AI / 3/24/2026
Key Points
- The paper targets node-level out-of-distribution (OOD) detection in text-attributed graphs, where training/test distribution mismatch can severely degrade node classification performance.
- It introduces LECT (LLM-Enhanced Energy Contrastive Learning), combining large language models (LLMs) with energy-based contrastive learning to separate in-distribution (IND) from OOD nodes.
- LECT generates dependency-aware pseudo-OOD samples by leveraging the semantic understanding and contextual knowledge of LLMs, enabling higher-quality OOD augmentation.
- Experiments across six benchmark datasets show LECT consistently outperforms existing state-of-the-art baselines while maintaining both high classification accuracy and robust OOD detection.
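The summary above does not include the paper's formulas, but energy-based OOD detectors of this kind typically score a node by the negative log-sum-exp of its classifier logits: in-distribution nodes get low energy, OOD nodes get high energy. A minimal sketch of that standard energy score (function names and the thresholding helper are illustrative, not from the paper):

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Standard energy score E(x) = -T * logsumexp(f(x) / T).

    Lower energy indicates a more confident, in-distribution prediction;
    higher energy suggests the node may be out-of-distribution.
    """
    z = np.asarray(logits, dtype=float) / temperature
    # Numerically stable log-sum-exp along the class dimension
    m = z.max(axis=-1, keepdims=True)
    lse = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -temperature * lse

def detect_ood(logits, threshold, temperature=1.0):
    # Flag nodes whose energy exceeds the chosen threshold as OOD.
    # The threshold is typically tuned on held-out in-distribution data.
    return energy_score(logits, temperature) > threshold
```

For intuition: a confidently classified node with logits like `[10, 0, 0]` receives much lower energy than a node with flat logits `[0, 0, 0]`, so the latter is more likely to cross an OOD threshold. The contrastive training in LECT is designed to widen exactly this energy gap between IND nodes and the LLM-generated pseudo-OOD samples.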