Disentangle-then-Refine: LLM-Guided Decoupling and Structure-Aware Refinement for Graph Contrastive Learning
arXiv cs.AI / 4/17/2026
Key Points
- Traditional graph contrastive learning for text-attributed graphs relies on random stochastic augmentations, which can mix task-relevant signals with noise.
- The proposed SDM-SCR framework uses an LLM-driven Semantic Decoupling Module to transform raw text attributes into two asymmetric views: one carrying the semantic signal and one isolating noise.
- A Semantic Consistency Regularization step applies a spectral, structure-aware selective filter that enforces consistency only in the signal subspace while suppressing high-frequency noise.
- The “Disentangle-then-Refine” design aims to purify semantic signals and reduce issues such as LLM hallucinations without causing harmful over-smoothing.
- Experiments reported for SDM-SCR indicate state-of-the-art performance, improving both accuracy and efficiency over prior approaches.
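The structure-aware selective filtering idea from the key points can be sketched with a spectral low-pass projector: build the normalized graph Laplacian, keep only its lowest-frequency (smoothest) eigendirections, and penalize disagreement between two embedding views only inside that subspace, leaving high-frequency components unconstrained. This is a minimal illustration of the general technique, not the paper's implementation; the function names, the dense eigendecomposition, and the choice of `k` are assumptions for the sketch.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def signal_subspace_consistency(A, z1, z2, k=2):
    """Penalize disagreement between two node-embedding views only inside
    the k lowest-frequency Laplacian eigendirections (the 'signal'
    subspace); high-frequency differences are ignored."""
    L = normalized_laplacian(A)
    _, U = np.linalg.eigh(L)          # eigenvectors, ascending eigenvalues
    P = U[:, :k] @ U[:, :k].T         # projector onto low-frequency subspace
    diff = P @ (z1 - z2)              # view disagreement, signal part only
    return float((diff ** 2).mean())

# Toy example: a 4-node path graph and two slightly perturbed views.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))
z2 = z1 + 0.1 * rng.normal(size=(4, 8))
loss = signal_subspace_consistency(A, z1, z2)
```

Because the loss is computed after projection, it is always bounded above by the full-space discrepancy between the two views, which is the sense in which consistency is enforced "only in the signal subspace" rather than by uniformly smoothing everything.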