Can We Build Scene Graphs, Not Classify Them? FlowSG: Progressive Image-Conditioned Scene Graph Generation with Flow Matching

arXiv cs.CV / 4/22/2026


Key Points

  • FlowSG reframes Scene Graph Generation (SGG) as a progressive, generative task using continuous-time flow matching, rather than treating it as a one-shot classification problem.
  • The method uses a VQ-VAE to quantize scene-graph representations into discrete tokens, then employs a graph Transformer to jointly evolve bounding boxes and categorical tokens via a velocity field and flow-conditioned message passing.
  • Training combines a flow-matching loss for geometric refinement with a discrete-flow objective for object and predicate tokens, enabling few-step inference (a hypothetical training sketch follows this list).
  • Experiments on Visual Genome (VG) and PSG (with both closed- and open-vocabulary settings) report consistent improvements in predicate recall/mean recall and graph-level metrics, including an average ~3-point gain over USG-Par.
  • FlowSG is designed to be plug-and-play with standard detectors and segmenters, suggesting practical integration potential for image-conditioned scene graph synthesis.
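
The paper is summarized here without reference code, so the following is a minimal sketch of what the joint objective could look like, assuming a rectified (linear-interpolation) flow-matching path for box geometry and a simple token-corruption schedule for the discrete objective. Names such as graph_transformer, token_head, and the tensor shapes are illustrative placeholders, not FlowSG's actual interfaces.

```python
import torch
import torch.nn.functional as F

def flowsg_style_training_step(graph_transformer, token_head,
                               boxes_target, tokens_target,
                               image_feats, vocab_size):
    """Hypothetical joint objective: flow matching on continuous box geometry
    plus a discrete-flow (token-corruption) loss on categorical scene-graph tokens."""
    B, N, _ = boxes_target.shape                              # batch, objects, 4 box coords
    device = boxes_target.device
    t = torch.rand(B, 1, 1, device=device)                    # flow time in (0, 1)

    # --- Continuous part: linear path x_t = (1 - t) * x_0 + t * x_1 ---
    boxes_noise = torch.randn_like(boxes_target)              # x_0 ~ N(0, I)
    boxes_t = (1.0 - t) * boxes_noise + t * boxes_target
    target_velocity = boxes_target - boxes_noise              # constant velocity along the path

    # --- Discrete part: corrupt tokens toward uniform noise as t -> 0 ---
    keep = torch.rand(B, N, device=device) < t.squeeze(-1)    # keep clean tokens with prob t
    random_tokens = torch.randint(0, vocab_size, (B, N), device=device)
    tokens_t = torch.where(keep, tokens_target, random_tokens)

    # One image-conditioned backbone couples geometry and semantics
    pred_velocity, node_states = graph_transformer(boxes_t, tokens_t,
                                                   t.squeeze(-1), image_feats)
    token_logits = token_head(node_states)                    # per-node categorical posteriors

    loss_geom = F.mse_loss(pred_velocity, target_velocity)
    loss_token = F.cross_entropy(token_logits.reshape(-1, vocab_size),
                                 tokens_target.reshape(-1))
    return loss_geom + loss_token
```

The key design choice this sketch illustrates is that a single noised state carries both continuous geometry and discrete semantics, so one forward pass can supervise the velocity field and the token posteriors together.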

Abstract

Scene Graph Generation (SGG) unifies object localization and visual relationship reasoning by predicting boxes and subject-predicate-object triples. Yet most pipelines treat SGG as a one-shot, deterministic classification problem rather than a genuinely progressive, generative task. We propose FlowSG, which recasts SGG as continuous-time transport on a hybrid discrete-continuous state: starting from a noised graph, the model progressively grows an image-conditioned scene graph through constraint-aware refinements that jointly synthesize nodes (objects) and edges (predicates). Specifically, we first leverage a VQ-VAE to quantize a scene graph (e.g., continuous visual features) into compact, predictable tokens; a graph Transformer then (i) predicts a conditional velocity field to transport continuous geometry (boxes) and (ii) updates discrete posteriors for categorical tokens (object features and predicate labels), coupling semantics and geometry via flow-conditioned message aggregation. Training combines flow-matching losses for geometry with a discrete-flow objective for tokens, yielding few-step inference and plug-and-play compatibility with standard detectors and segmenters. Extensive experiments on VG and PSG under closed- and open-vocabulary protocols show consistent gains in predicate R/mR and graph-level metrics, validating the mixed discrete-continuous generative formulation over one-shot classification baselines, with an average improvement of about 3 points over the state-of-the-art USG-Par.
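
To make the "few-step inference" claim concrete, here is a speculative Euler-integration loop over the learned velocity field, with tokens re-sampled from the predicted posteriors at each step. The step count, the sampling strategy, and the graph_transformer / token_head signatures are assumptions carried over from the training sketch above, not details confirmed by the paper.

```python
import torch

@torch.no_grad()
def flowsg_style_sampling(graph_transformer, token_head, image_feats,
                          num_objects, vocab_size, num_steps=8):
    """Hypothetical few-step sampler: Euler steps transport noisy boxes toward the
    image-conditioned scene graph while categorical tokens are iteratively re-sampled."""
    B = image_feats.shape[0]
    device = image_feats.device

    boxes = torch.randn(B, num_objects, 4, device=device)                   # noised geometry
    tokens = torch.randint(0, vocab_size, (B, num_objects), device=device)  # noised tokens
    dt = 1.0 / num_steps

    for step in range(num_steps):
        t = torch.full((B, 1), step * dt, device=device)
        velocity, node_states = graph_transformer(boxes, tokens, t, image_feats)
        boxes = boxes + dt * velocity                                        # Euler update of geometry

        # Re-sample tokens from the updated categorical posteriors
        probs = token_head(node_states).softmax(dim=-1)
        tokens = torch.distributions.Categorical(probs=probs).sample()

    return boxes, tokens   # final boxes and discrete tokens to decode into objects and predicates
```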