Could the best LLM be able to generate a symbolic AI that is superior to itself, or is there something superior about matrices vs graphs?

Reddit r/artificial / 5/3/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The article discusses whether a top-performing LLM could generate a symbolic AI system that outperforms the LLM that generated it, challenging the notion that neural approaches are always superior.
  • It contrasts discrete symbolic representations (if/then rules, jumps, function calls, abstractions) with continuous “fuzzy” representations such as matrices.
  • The author asks for a principled explanation—potentially drawing on information-theoretic ideas like Shannon information—that would clarify why one representation class may be inherently better than the other.
  • It frames the question as an open research/analysis problem rather than reporting a new system or result, emphasizing the gap between what DNNs excel at and what symbolic methods might still offer.
  • Overall, the piece invites debate on the comparative strengths of symbolic AI versus DNN/LLM-based methods, including whether symbolic systems could be superior under some conditions.

Deep neural network AIs have beaten symbolic AIs on many tasks, but is there a chance that symbolic AIs written by DNNs (LLMs) could beat them?

And if not, why not?

My gut tells me no: discrete symbolic systems (ifs, jumps, function calls, abstractions, etc.) are inferior to fuzzy matrices. But I'm curious whether there is a formula or principled result that explains why (something like Shannon's information paper).
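There is no single accepted formula answering this, but one toy way to frame the comparison in Shannon's terms is to count the bits each representation can store at finite precision. The sketch below is purely illustrative (the function names, sizes, and the 32-bit precision assumption are all hypothetical, not from the post): it compares an exhaustive if/then rule table over boolean inputs with a dense weight matrix stored as fixed-precision floats.

```python
import math

def rule_table_bits(n_inputs: int, n_outputs: int) -> float:
    """Bits needed for a complete if/then lookup table over
    n_inputs boolean conditions, each case mapping to one of
    n_outputs discrete actions."""
    n_cases = 2 ** n_inputs
    return n_cases * math.log2(n_outputs)

def weight_matrix_bits(rows: int, cols: int, bits_per_weight: int = 32) -> int:
    """Bits needed for a dense weight matrix at fixed precision
    (e.g. float32)."""
    return rows * cols * bits_per_weight

print(rule_table_bits(10, 4))      # 1024 cases * 2 bits = 2048.0
print(weight_matrix_bits(16, 16))  # 256 weights * 32 bits = 8192
```

The point of the toy count is only that at finite precision both representations have finite capacity, so any claimed superiority of matrices over symbols would have to come from something other than raw bit content, e.g. trainability or smoothness under gradient descent.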

submitted by /u/breck