Non-monotonic causal discovery with Kolmogorov-Arnold Fuzzy Cognitive Maps
arXiv cs.AI / 4/8/2026
Key Points
- The paper introduces Kolmogorov-Arnold Fuzzy Cognitive Maps (KA-FCMs) to address a core limitation of standard Fuzzy Cognitive Maps: their monotonic activation and scalar edge weights make it difficult to represent non-monotonic causal dependencies.
- KA-FCMs replace fixed scalar synaptic weights with learnable univariate B-spline functions on edges, moving nonlinearity from node aggregation to the causal influence stage.
- The authors argue this design can represent arbitrary non-monotonic causal relationships without increasing graph density or adding hidden layers, preserving the graph-based interpretability typical of FCMs.
- KA-FCMs are evaluated on tasks spanning non-monotonic inference (Yerkes–Dodson law), symbolic regression, and chaotic time-series forecasting, where they outperform standard FCMs and achieve competitive accuracy versus multi-layer perceptrons.
- The approach is presented as enabling explicit extraction of mathematical laws from learned edge functions, combining performance with interpretability.
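The core mechanism in the key points above — replacing each scalar edge weight with a learnable univariate spline — can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's implementation: it uses a degree-1 (piecewise-linear) B-spline via `np.interp` for the edge functions, and the names `edge_phi`, `kafcm_step`, and the coefficient layout `C[i, j]` are assumptions for the sake of the example.

```python
import numpy as np

def edge_phi(x, knots, coeffs):
    """Evaluate a degree-1 B-spline edge function at activation x in [0, 1].

    A piecewise-linear interpolant over fixed knots is the simplest B-spline;
    the coefficients are the learnable parameters of the edge.
    """
    return np.interp(x, knots, coeffs)

def kafcm_step(x, knots, C, squash=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """One hypothetical KA-FCM update: node j sums phi_ij(x_i), then squashes.

    x: (n,) current node activations
    C: (n, n, K) spline coefficients; C[i, j] parameterises edge i -> j
    """
    n = x.size
    net = np.array([sum(edge_phi(x[i], knots, C[i, j]) for i in range(n))
                    for j in range(n)])
    return squash(net)

# A non-monotonic edge (inverted-U, Yerkes-Dodson-like): influence peaks at a
# moderate input and falls off on both sides. No scalar weight w with
# phi(x) = w * x can represent this shape.
knots = np.array([0.0, 0.5, 1.0])
inverted_u = np.array([0.0, 1.0, 0.0])
print(edge_phi(0.5, knots, inverted_u))  # peak influence at mid-range input
print(edge_phi(0.1, knots, inverted_u), edge_phi(0.9, knots, inverted_u))
```

Note how the nonlinearity lives on the edge itself: the node still performs a plain sum followed by a squashing function, so the graph topology (and hence the FCM-style interpretability) is unchanged, and each learned edge curve can be read off directly — which is what makes the "explicit extraction of mathematical laws" claim plausible.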