From Scalars to Tensors: Declared Losses Recover Epistemic Distinctions That Neutrosophic Scalars Cannot Express

arXiv cs.AI / April 14, 2026


Key Points

  • The paper extends prior work on neutrosophic T/I/F evaluation by replicating a “hyper-truth” finding (T+I+F > 1.0) and showing that it occurs across multiple LLM families from Anthropic, Meta, DeepSeek, Alibaba, and Mistral under an unconstrained prompt protocol.
  • It argues that scalar neutrosophic representations can fail to preserve key epistemic distinctions because certain model behaviors (e.g., an “Absorption” pattern) yield identical scalar outputs for different situations such as paradox, ignorance, and contingency.
  • The authors introduce “declared losses” as additional structured output describing what the model cannot evaluate and why, and show this substantially restores the lost epistemic distinctions.
  • They find that models that collapse scalar T/I/F distinctions nevertheless produce nearly disjoint loss vocabularies: keyword overlap between loss descriptions is low (Jaccard similarity < 0.10), and severity- and domain-rated loss declarations differentiate the underlying uncertainty types.
  • Overall, the work concludes that scalars alone are necessary but insufficient, and that a tensor-structured approach (scalars plus loss information) better captures differences in LLM epistemic capability.
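To make the tensor-structured idea concrete, here is a minimal sketch of what "scalars plus loss information" could look like as a data structure. The class and field names (`DeclaredLoss`, `EpistemicEval`, `severity`) are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class DeclaredLoss:
    """One structured declaration of what the model cannot evaluate, and why."""
    domain: str       # e.g. "self-reference" vs. "missing evidence" (hypothetical labels)
    reason: str       # free-text explanation of the loss
    severity: float   # 0.0 (minor) .. 1.0 (total) -- assumed rating scale

@dataclass
class EpistemicEval:
    """Neutrosophic scalars plus declared losses ("tensor-structured" output)."""
    T: float  # Truth
    I: float  # Indeterminacy
    F: float  # Falsity -- independent dimensions, not constrained to sum to 1.0
    losses: list[DeclaredLoss] = field(default_factory=list)

    @property
    def hyper_truth(self) -> bool:
        # The paper's "hyper-truth" condition: T + I + F > 1.0
        return self.T + self.I + self.F > 1.0

# Two evaluations with identical scalars (the "Absorption" position T=0, I=1, F=0)
# but different declared losses, so the tensor view still distinguishes them.
paradox = EpistemicEval(0.0, 1.0, 0.0,
                        [DeclaredLoss("self-reference", "liar-style circularity", 0.9)])
ignorance = EpistemicEval(0.0, 1.0, 0.0,
                          [DeclaredLoss("missing evidence", "fact not in training data", 0.6)])
print((paradox.T, paradox.I, paradox.F) == (ignorance.T, ignorance.I, ignorance.F))  # True
print(paradox.losses[0].domain == ignorance.losses[0].domain)  # False
```

The point of the sketch is that the scalar triple alone is identical in both cases, while the loss field carries the distinction the scalars absorb.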

Abstract

Leyva-Vázquez and Smarandache (2025) demonstrated that neutrosophic T/I/F evaluation, in which Truth, Indeterminacy, and Falsity are independent dimensions not constrained to sum to 1.0, reveals "hyper-truth" (T+I+F > 1.0) in 35% of complex epistemic cases evaluated by LLMs. We extend their work in two directions. First, we replicate and extend their experiment across five model families from five vendors (Anthropic, Meta, DeepSeek, Alibaba, Mistral), finding hyper-truth in 84% of unconstrained evaluations, which confirms the phenomenon is cross-vendor under our prompt protocol. Second, and more significantly, we identify a limitation of scalar T/I/F that their framework cannot address: models adopting an "Absorption" position (T=0, I=1, F=0) produce identical scalar outputs for fundamentally different epistemic situations (paradox, ignorance, contingency), collapsing the very distinctions neutrosophic logic was designed to preserve. We demonstrate that extending the evaluation to include declared losses (structured descriptions of what the model cannot evaluate and why) substantially recovers these distinctions. Models producing identical scalars for paradox and ignorance produce nearly disjoint loss vocabularies (Jaccard similarity < 0.10 on loss description keywords), with domain-specific, severity-rated loss declarations that differentiate the nature of their uncertainty. This suggests that scalar T/I/F is a necessary but insufficient representation of epistemic state, and that tensor-structured output (scalars + losses) provides a more faithful model of LLM epistemic capabilities.
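The keyword-overlap metric used above is standard Jaccard similarity over sets of loss-description keywords. A minimal sketch, with illustrative keyword sets invented for this example (not taken from the paper's data):

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|; defined as 1.0 for two empty sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical keyword sets extracted from loss descriptions for a paradox
# prompt vs. an ignorance prompt.  Disjoint vocabularies give Jaccard 0.0,
# well under the paper's < 0.10 threshold for "nearly disjoint".
paradox_kw = {"self-reference", "contradiction", "liar", "truth-value", "circularity"}
ignorance_kw = {"missing-data", "unobserved", "future", "evidence", "sample-size"}

print(jaccard(paradox_kw, ignorance_kw))   # → 0.0
print(jaccard(paradox_kw, paradox_kw))     # → 1.0
```

Disjoint loss vocabularies under identical scalars are exactly the signal the paper uses to argue the declared losses carry epistemic information the scalars discard.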