Decoding Functional Networks for Visual Categories via GNNs

arXiv cs.CV / 4/1/2026


Key Points

  • The paper studies how large-scale brain functional networks encode visual categories by using parcel-level graphs built from 7T fMRI data from the Natural Scenes Dataset.
  • It trains a signed Graph Neural Network that models positive and negative interactions, uses an edge-masking mechanism to enforce sparsity, and applies class-specific saliency to identify which connectivity patterns matter.
  • The approach successfully decodes category-specific functional connectivity states for categories such as sports, food, and vehicles.
  • Results highlight reproducible subnetworks that align with ventral and dorsal visual pathways, suggesting the learned representations are biologically meaningful.
  • Overall, the work links machine-learning methods with neuroscience by moving from voxel-level category selectivity toward a connectivity-based view of visual processing.
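The signed message passing and edge-mask sparsification described in the key points can be sketched as follows. This is a minimal NumPy toy with random data and hypothetical dimensions, not the paper's actual architecture: the split into positive/negative adjacency matrices, the sigmoid edge mask, and the per-sign weight matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parcel-level functional graph: 8 parcels, signed correlations.
n_parcels, n_feat = 8, 4
A = rng.uniform(-1, 1, (n_parcels, n_parcels))
A = (A + A.T) / 2                  # symmetric functional connectivity
np.fill_diagonal(A, 0.0)
X = rng.normal(size=(n_parcels, n_feat))   # parcel features

# Split the signed graph into positive and negative interaction graphs.
A_pos = np.maximum(A, 0.0)
A_neg = np.maximum(-A, 0.0)

# Edge mask (random logits here; learned in practice), squashed to (0, 1):
# multiplying edges by the mask softly sparsifies the graph.
mask_logits = rng.normal(size=A.shape)
mask = 1.0 / (1.0 + np.exp(-mask_logits))
A_pos, A_neg = A_pos * mask, A_neg * mask

# One signed message-passing layer: separate weights per edge sign,
# negative interactions subtract from the aggregated message.
W_pos = rng.normal(scale=0.1, size=(n_feat, n_feat))
W_neg = rng.normal(scale=0.1, size=(n_feat, n_feat))
H = np.maximum(A_pos @ X @ W_pos - A_neg @ X @ W_neg, 0.0)  # ReLU

print(H.shape)  # updated parcel features for the next layer
```

In a full model, several such layers would feed a pooled readout for category classification, with the mask logits trained jointly under a sparsity penalty.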

Abstract

Understanding how large-scale brain networks represent visual categories is fundamental to linking perception and cortical organization. Using high-resolution 7T fMRI from the Natural Scenes Dataset, we construct parcel-level functional graphs and train a signed Graph Neural Network that models both positive and negative interactions, with a sparse edge mask and class-specific saliency. The model accurately decodes category-specific functional connectivity states (sports, food, vehicles) and reveals reproducible, biologically meaningful subnetworks along the ventral and dorsal visual pathways. This framework bridges machine learning and neuroscience by extending voxel-level category selectivity to a connectivity-based representation of visual processing.
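The class-specific saliency mentioned in the abstract asks which edges a category score depends on. A minimal sketch, assuming a one-layer linear GNN with mean pooling (a stand-in for the paper's model, whose details are not given here): for such a model the gradient of the class score with respect to each edge weight is analytic, and its magnitude serves as an edge-saliency map.

```python
import numpy as np

rng = np.random.default_rng(1)
n, f = 6, 3
A = rng.normal(size=(n, n))      # signed connectivity (toy)
X = rng.normal(size=(n, f))      # parcel features
W = rng.normal(size=(f, f))      # layer weights
w_c = rng.normal(size=f)         # readout vector for one category

def class_score(A):
    # Mean-pooled linear GNN readout for category c.
    return (A @ X @ W).mean(axis=0) @ w_c

# Edge saliency: |d score / d A[i, j]|. For this linear model,
# d score / d A[i, j] = (X[j] @ W @ w_c) / n, independent of i.
grad = np.tile((X @ W @ w_c) / n, (n, 1))
saliency = np.abs(grad)

# Sanity check one entry against a finite difference.
eps = 1e-6
A2 = A.copy(); A2[2, 4] += eps
fd = (class_score(A2) - class_score(A)) / eps
print(np.isclose(fd, grad[2, 4]))  # True
```

With a nonlinear network the gradient would come from autodiff rather than a closed form, but the interpretation is the same: high-saliency edges for a category mark the connectivity patterns that drive its decoding.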