From Scalars to Tensors: Declared Losses Recover Epistemic Distinctions That Neutrosophic Scalars Cannot Express
arXiv cs.AI / 4/14/2026
Key Points
- The paper extends prior work on neutrosophic T/I/F evaluation by replicating a "hyper-truth" finding (T + I + F > 1.0) and showing that it occurs across multiple LLM families from Anthropic, Meta, DeepSeek, Alibaba, and Mistral under an unconstrained prompt protocol.
- It argues that scalar neutrosophic representations can fail to preserve key epistemic distinctions because certain model behaviors (e.g., an “Absorption” pattern) yield identical scalar outputs for different situations such as paradox, ignorance, and contingency.
- The authors introduce “declared losses” as additional structured output describing what the model cannot evaluate and why, and show this substantially restores the lost epistemic distinctions.
- They find that even models that collapse scalar T/I/F distinctions produce nearly disjoint loss vocabularies across uncertainty types, as measured by keyword-overlap metrics (low Jaccard similarity) and severity/domain-rated loss declarations.
- Overall, the work concludes that scalars are necessary but not sufficient on their own, and that a tensor-structured representation (scalars plus loss information) better captures differences in LLM epistemic capability.
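The key points above can be sketched in code. This is a minimal illustration, not the paper's actual schema: the `Evaluation` dataclass, the loss keywords, and the specific scalar values are all hypothetical. It shows how two scenarios with identical T/I/F scalars (the "Absorption" collapse) remain distinguishable through their declared-loss vocabularies, using the Jaccard overlap the paper's metrics are based on, and how the hyper-truth condition T + I + F > 1.0 is checked.

```python
from dataclasses import dataclass, field


@dataclass
class Evaluation:
    """Hypothetical tensor-structured evaluation: T/I/F scalars plus declared losses."""
    truth: float
    indeterminacy: float
    falsity: float
    # Keywords describing what the model cannot evaluate and why (illustrative only).
    declared_losses: set = field(default_factory=set)

    def is_hyper_truth(self) -> bool:
        # Under an unconstrained protocol, T + I + F may exceed 1.0.
        return self.truth + self.indeterminacy + self.falsity > 1.0


def jaccard(a: set, b: set) -> float:
    """Keyword overlap between two loss vocabularies."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


# Two epistemically different situations that collapse to identical scalars:
paradox = Evaluation(0.5, 0.5, 0.5, {"self-reference", "liar-cycle"})
ignorance = Evaluation(0.5, 0.5, 0.5, {"missing-data", "unverifiable-source"})

print(paradox.is_hyper_truth())                                   # True: 1.5 > 1.0
print(jaccard(paradox.declared_losses, ignorance.declared_losses))  # 0.0, disjoint vocabularies
```

The scalars alone cannot separate the two cases, but the near-zero Jaccard similarity between their loss vocabularies recovers the distinction, which is the paper's core argument for moving from scalars to a scalar-plus-losses structure.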