ReLIC-SGG: Relation Lattice Completion for Open-Vocabulary Scene Graph Generation

arXiv cs.CV / 4/27/2026


Key Points

  • ReLIC-SGG addresses open-vocabulary scene graph generation by recognizing that many annotated triplets are incomplete and that unannotated relations should not be treated as definite negatives.
  • It introduces a semantic relation lattice that captures similarity, entailment, and contradiction among open-vocabulary predicates to better infer missing positive relations.
  • The method uses visual-language compatibility, graph context, and semantic consistency to recover relations across different granularities (e.g., "on" vs. "standing on", "resting on", "supported by").
  • ReLIC-SGG formulates training as a positive-unlabeled learning objective to reduce false-negative supervision and employs lattice-guided decoding to output more compact, semantically consistent graphs.
  • Experiments across conventional, open-vocabulary, and panoptic benchmarks show improved recognition of rare/unseen predicates and better recovery of missing relations.
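To make the lattice idea concrete, here is a minimal sketch of a predicate lattice with similarity, entailment, and contradiction edges. The class name, the specific edges, and the consistency check are illustrative assumptions; the paper's actual lattice construction and scoring are not specified here.

```python
# Hypothetical sketch of a semantic relation lattice over open-vocabulary
# predicates. Edge choices below are illustrative, not ReLIC-SGG's actual data.
from collections import defaultdict

class RelationLattice:
    """Similarity, entailment, and contradiction edges among predicates."""

    def __init__(self):
        self.entails = defaultdict(set)      # fine-grained -> coarser predicate
        self.similar = defaultdict(set)      # near-synonym pairs (symmetric)
        self.contradicts = defaultdict(set)  # mutually exclusive pairs

    def add_entailment(self, fine, coarse):
        self.entails[fine].add(coarse)

    def add_similarity(self, a, b):
        self.similar[a].add(b)
        self.similar[b].add(a)

    def add_contradiction(self, a, b):
        self.contradicts[a].add(b)
        self.contradicts[b].add(a)

    def closure(self, predicate):
        """All predicates implied by `predicate` via transitive entailment."""
        seen, stack = set(), [predicate]
        while stack:
            p = stack.pop()
            for q in self.entails[p]:
                if q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

    def consistent(self, predicates):
        """True if no pair in the entailment-expanded set contradicts."""
        expanded = set(predicates)
        for p in predicates:
            expanded |= self.closure(p)
        return all(q not in self.contradicts[p]
                   for p in expanded for q in expanded)

# The granularity example from the paper: "standing on" entails "on".
lattice = RelationLattice()
lattice.add_entailment("standing on", "on")
lattice.add_entailment("resting on", "on")
lattice.add_similarity("resting on", "supported by")
lattice.add_contradiction("on", "under")

print(lattice.closure("standing on"))                # {'on'}
print(lattice.consistent({"standing on", "under"}))  # False
```

A structure like this supports both missing-positive inference (a predicted "standing on" licenses "on" via the closure) and a decoding-time filter that rejects predicate sets containing a contradiction.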

Abstract

Open-vocabulary scene graph generation (SGG) aims to describe visual scenes with flexible relation phrases beyond a fixed predicate set. Existing methods usually treat annotated triplets as positives and all unannotated object-pair relations as negatives. However, scene graph annotations are inherently incomplete: many valid relations are missing, and the same interaction can be described at different granularities, e.g., "on", "standing on", "resting on", and "supported by". This issue becomes more severe in open-vocabulary SGG due to the much larger relation space. We propose ReLIC-SGG, a relation-incompleteness-aware framework that treats unannotated relations as latent variables rather than definite negatives. ReLIC-SGG builds a semantic relation lattice to model similarity, entailment, and contradiction among open-vocabulary predicates, and uses it to infer missing positive relations from visual-language compatibility, graph context, and semantic consistency. A positive-unlabeled graph learning objective further reduces false-negative supervision, while lattice-guided decoding produces compact and semantically consistent scene graphs. Experiments on conventional, open-vocabulary, and panoptic SGG benchmarks show that ReLIC-SGG improves rare and unseen predicate recognition and better recovers missing relations.
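The positive-unlabeled objective can be illustrated with the standard non-negative PU risk estimator (Kiryo et al., 2017), which the abstract's "positive-unlabeled graph learning objective" presumably generalizes; the function below is a generic sketch, not the paper's exact loss, and the class prior `pi` is an assumed hyperparameter.

```python
# Illustrative non-negative PU (nnPU) risk: annotated triplets are positives,
# unannotated pairs are unlabeled (a pi-weighted mix of positives and
# negatives) rather than definite negatives. Generic sketch, not ReLIC-SGG's
# exact formulation.
import numpy as np

def sigmoid_loss(scores, label):
    # l(s, y) = 1 / (1 + exp(y * s)): small when the score agrees with y.
    return 1.0 / (1.0 + np.exp(label * scores))

def nnpu_risk(pos_scores, unl_scores, pi):
    """Non-negative PU risk over positive and unlabeled relation scores.

    pi: assumed prior probability that an unlabeled pair is truly positive.
    """
    risk_p_pos = np.mean(sigmoid_loss(pos_scores, +1.0))  # positives as positive
    risk_p_neg = np.mean(sigmoid_loss(pos_scores, -1.0))  # positives as negative
    risk_u_neg = np.mean(sigmoid_loss(unl_scores, -1.0))  # unlabeled as negative
    # Corrected negative risk, clamped at zero to prevent overfitting
    # the unlabeled set as all-negative.
    neg_risk = risk_u_neg - pi * risk_p_neg
    return pi * risk_p_pos + max(0.0, neg_risk)

# Usage: scores for annotated triplets vs. unannotated object pairs.
pos = np.array([2.0, 3.0])
unl = np.array([-1.0, 0.5, -2.0])
risk = nnpu_risk(pos, unl, pi=0.3)
```

Compared with treating all unannotated pairs as negatives, the `pi * risk_p_neg` correction removes the expected contribution of the hidden positives among them, which is exactly the false-negative supervision the paper aims to reduce.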