Non-monotonic causal discovery with Kolmogorov-Arnold Fuzzy Cognitive Maps

arXiv cs.AI / 4/8/2026


Key Points

  • The paper introduces Kolmogorov-Arnold Fuzzy Cognitive Maps (KA-FCMs) to address a core limitation of standard Fuzzy Cognitive Maps: their monotonic activation and scalar edge weights make it difficult to represent non-monotonic causal dependencies.
  • KA-FCMs replace fixed scalar synaptic weights with learnable univariate B-spline functions on edges, moving nonlinearity from node aggregation to the causal influence stage.
  • The authors argue this design can represent arbitrary non-monotonic causal relationships without increasing graph density or adding hidden layers, preserving the graph-based interpretability typical of FCMs.
  • KA-FCMs are evaluated on tasks spanning non-monotonic inference (Yerkes–Dodson law), symbolic regression, and chaotic time-series forecasting, where they outperform standard FCMs and achieve competitive accuracy versus multi-layer perceptrons.
  • The approach is presented as enabling explicit extraction of mathematical laws from learned edge functions, combining performance with interpretability.
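The edge mechanism described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it uses a piecewise-linear interpolant over a fixed grid as a stand-in for the learnable B-splines, and the function names (`edge_fn`, `kafcm_step`) and the shared-grid simplification are assumptions for illustration. It shows how a single edge can encode an inverted-U (Yerkes–Dodson-like) influence that a scalar weight cannot represent.

```python
import numpy as np

def edge_fn(x, grid, coeffs):
    """Learnable univariate edge function. Here a piecewise-linear
    spline over a fixed grid stands in for the paper's B-splines."""
    return np.interp(x, grid, coeffs)

def kafcm_step(state, grid, coeffs, squash=np.tanh):
    """One KA-FCM inference step: each edge (i -> j) applies its own
    learned function to the source activation; node j sums the
    transformed influences and squashes the result. `coeffs[i][j]`
    holds the spline coefficients of edge i -> j, or None if absent."""
    n = len(state)
    new_state = np.zeros(n)
    for j in range(n):
        total = sum(edge_fn(state[i], grid, coeffs[i][j])
                    for i in range(n) if coeffs[i][j] is not None)
        new_state[j] = squash(total)
    return new_state

# Two concepts; edge 0 -> 1 encodes an inverted-U influence
# (non-monotonic, impossible with a single scalar weight).
grid = np.linspace(-1.0, 1.0, 9)
inv_u = 1.0 - grid**2
coeffs = [[None, inv_u], [None, None]]

low  = kafcm_step(np.array([-1.0, 0.0]), grid, coeffs)[1]
mid  = kafcm_step(np.array([ 0.0, 0.0]), grid, coeffs)[1]
high = kafcm_step(np.array([ 1.0, 0.0]), grid, coeffs)[1]
```

Because the nonlinearity lives on the edge itself, the learned coefficients can afterwards be read off and fitted to a closed-form expression, which is the interpretability claim the paper makes.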

Abstract

Fuzzy Cognitive Maps constitute a neuro-symbolic paradigm for modeling complex dynamic systems, widely adopted for their inherent interpretability and recurrent inference capabilities. However, the standard FCM formulation, characterized by scalar synaptic weights and monotonic activation functions, is fundamentally constrained in modeling non-monotonic causal dependencies, thereby limiting its efficacy in systems governed by saturation effects or periodic dynamics. To overcome this topological restriction, this research proposes the Kolmogorov-Arnold Fuzzy Cognitive Map (KA-FCM), a novel architecture that redefines the causal transmission mechanism. Drawing upon the Kolmogorov-Arnold representation theorem, static scalar weights are replaced with learnable, univariate B-spline functions located on the model edges. This fundamental modification shifts the non-linearity from the nodes' aggregation phase directly to the causal influence phase, allowing the modeling of arbitrary, non-monotonic causal relationships without increasing the graph density or introducing hidden layers. The proposed architecture is validated against both baselines (standard FCM trained with Particle Swarm Optimization) and universal black-box approximators (Multi-Layer Perceptron) across three distinct domains: non-monotonic inference (Yerkes-Dodson law), symbolic regression, and chaotic time-series forecasting. Experimental results demonstrate that KA-FCMs significantly outperform conventional architectures and achieve competitive accuracy relative to MLPs, while preserving graph-based interpretability and enabling the explicit extraction of mathematical laws from the learned edges.