AI Navigate

GaLoRA: Parameter-Efficient Graph-Aware LLMs for Node Classification

arXiv cs.LG / 3/12/2026


Key Points

  • GaLoRA is a parameter-efficient framework that integrates structural information from graphs into large language models to improve node classification on text-attributed graphs (TAGs).
  • It achieves competitive performance with only 0.24% of the parameters required by full LLM fine-tuning.
  • The method is validated on three real-world datasets, demonstrating effective fusion of structural and textual information in TAGs.
  • This work shows a scalable approach to leveraging graph structure in LLMs without large-scale fine-tuning, enabling more practical deployment.

Abstract

The rapid rise of large language models (LLMs) and their ability to capture semantic relationships have led to their adoption across a wide range of applications. Text-attributed graphs (TAGs) are a notable example, where LLMs can be combined with graph neural networks to improve node classification performance. In a TAG, each node is associated with textual content; such graphs arise in many domains, including social networks, citation graphs, and recommendation systems. Learning effectively from TAGs yields better joint representations of a graph's structure and text, improving decision-making in these domains. We present GaLoRA, a parameter-efficient framework that integrates structural information into LLMs. GaLoRA demonstrates competitive performance on node classification over TAGs, performing on par with state-of-the-art models with just 0.24% of the parameter count required by full LLM fine-tuning. We experiment with three real-world datasets to showcase GaLoRA's effectiveness in combining structural and semantic information on TAGs.
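The 0.24% figure reflects the economics of low-rank adaptation, which the "LoRA" in GaLoRA suggests: instead of updating a full weight matrix, a low-rank adapter trains two small factor matrices, so the trainable parameter count scales with the chosen rank rather than the layer size. A minimal sketch of that arithmetic (the layer dimensions and rank below are illustrative assumptions, not values taken from the paper):

```python
def full_param_count(d_in: int, d_out: int) -> int:
    # Full fine-tuning updates the entire d_out x d_in weight matrix.
    return d_in * d_out

def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    # A low-rank adapter factors the weight update dW into
    # B (d_out x rank) and A (rank x d_in), training only B and A.
    return rank * (d_out + d_in)

# Hypothetical transformer projection layer: 4096 x 4096, adapter rank 8.
full = full_param_count(4096, 4096)        # 16,777,216 parameters
lora = lora_param_count(4096, 4096, 8)     # 65,536 parameters
print(f"trainable fraction: {lora / full:.4%}")  # → trainable fraction: 0.3906%
```

Summed over the adapted layers (and any task head), this ratio lands in the sub-percent regime the paper reports; the exact 0.24% depends on GaLoRA's specific rank and layer choices, which are not detailed here.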