Learning to Reason: Targeted Knowledge Discovery and Fuzzy Logic Update for Robust Image Recognition

arXiv cs.CV / 5/1/2026


Key Points

  • The paper proposes a new approach to integrate domain knowledge into deep neural networks for better generalization in image recognition, addressing the challenge that useful symbolic rules are often unavailable in real-world tasks.
  • It introduces a Differentiable Knowledge Unit (DKU) that uses implication rules plus fuzzy inference to compute adjustments that modulate classifier logits and refine class probabilities.
  • The method learns implicit “concepts” without concept labels by training dedicated concept classifiers whose probabilities feed into the DKU alongside the main class probabilities.
  • The authors design a rule base with bidirectional logical relations between concepts and classes, and enforce that concepts remain distinct from each other and separable with respect to classes to provide a clean training signal.
  • Experiments on PASCAL-VOC, COCO, and MedMNIST show improved performance, including gains over baselines in domain-generalization and hard-sample ablation studies.
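The logit-modulation step summarized in the key points can be sketched as a fuzzy implication evaluated per rule. This is a minimal illustration under stated assumptions, not the paper's implementation: the Łukasiewicz implication, the `(concept, class)` rule format, and the additive logit update are all choices made here for concreteness.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lukasiewicz_implication(a, b):
    # Fuzzy truth of "a implies b": min(1, 1 - a + b).
    return np.minimum(1.0, 1.0 - a + b)

def dku_adjust(class_logits, concept_probs, rules, scale=1.0):
    """Hypothetical sketch of a DKU-style adjustment (not the authors' code).

    rules: list of (concept_idx, class_idx) pairs read as "concept => class".
    For each rule we measure how strongly it is violated (implication
    truth < 1) and push the implied class logit up accordingly.
    """
    class_probs = softmax(class_logits)
    adjustment = np.zeros_like(class_logits)
    for c_idx, y_idx in rules:
        truth = lukasiewicz_implication(concept_probs[..., c_idx],
                                        class_probs[..., y_idx])
        # Low truth means the concept fires but the class does not;
        # raise the class logit to restore logical consistency.
        adjustment[..., y_idx] += (1.0 - truth)
    refined_logits = class_logits + scale * adjustment
    return softmax(refined_logits)
```

For example, with `rules = [(0, 0)]` and a strongly firing concept 0, the refined probability of class 0 rises relative to the plain softmax; because the adjustment is differentiable in the concept probabilities, the supervision loss on the refined output also carries gradient back into the concept classifiers.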

Abstract

Integrating domain knowledge into deep neural networks is a promising way to improve generalization. Existing methods either encode prior knowledge in the loss function or apply post-processing modules, but both depend on identifying useful symbolic knowledge to integrate. Since such rules are often unavailable in real-world vision tasks, we propose a method for targeted knowledge discovery. At its core is a Differentiable Knowledge Unit (DKU) that modulates the classifier logits to yield refined class probabilities. The DKU uses implication rules to represent relationships between task classes and implicit concepts learned entirely from the main task supervision, without requiring concept labels. Concepts are identified by dedicated classifiers, whose probabilities are passed to the DKU alongside the primary class probabilities; the DKU then computes a logic-based adjustment vector via fuzzy inference and applies it to the primary class logits. When the concept classifiers represent concepts that do not support the logical rule structure, the resulting adjustments do not minimize the supervision loss. Consequently, optimizing the supervision loss on the adjusted class probabilities implicitly trains the concept classifiers. We construct the rule base so that bidirectional logical relations connect concepts and classes, and we enforce the concepts to be distinct from each other and separable from the classes, which provides a clean supervision signal for concept learning. We evaluate our method on the PASCAL-VOC, COCO, and MedMNIST datasets and demonstrate improvements from our knowledge integration across all three. Domain-generalization and hard-sample ablation studies further show that our implicit knowledge discovery and integration outperforms the baseline.
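The distinctness requirement in the abstract (concepts distinct from each other and separable from the classes) could, for instance, be realized as a batch-level similarity penalty. The cosine-similarity formulation below is a hypothetical sketch of such a regularizer, not the authors' actual loss; the function name and both penalty terms are assumptions.

```python
import numpy as np

def _row_normalize(X, eps=1e-8):
    # Normalize each row to unit length for cosine similarity.
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)

def distinctness_penalty(concept_probs, class_probs):
    """Hypothetical penalty keeping concepts distinct from each other
    and from the task classes, measured over a batch.

    concept_probs: (batch, n_concepts); class_probs: (batch, n_classes).
    Each concept/class is represented by its activation vector over the batch.
    """
    Cn = _row_normalize(concept_probs.T)   # (n_concepts, batch)
    Yn = _row_normalize(class_probs.T)     # (n_classes, batch)

    # Off-diagonal concept-concept cosine similarities.
    sim_cc = Cn @ Cn.T
    n = sim_cc.shape[0]
    concept_term = np.mean(sim_cc[~np.eye(n, dtype=bool)] ** 2)

    # Concept-class cosine similarities.
    class_term = np.mean((Cn @ Yn.T) ** 2)
    return concept_term + class_term
```

Concepts that fire on disjoint subsets of the batch yield a near-zero concept term, while redundant concepts (near-identical activation patterns) are penalized, which matches the abstract's goal of a clean, non-degenerate training signal for the concept classifiers.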