From indicators to biology: the calibration problem in artificial consciousness
arXiv cs.AI / 3/31/2026
Key Points
- The paper argues that "artificial consciousness" research is shifting from behavior-only evaluations to assessments of internal architecture, scoring systems against theory-derived indicators and probabilistically updating credence in consciousness claims.
- It contends that this indicator-based approach is not epistemically well calibrated: consciousness science remains fragmented across competing theories, and the proposed indicators lack independent validation.
- The authors emphasize that there is no established ground truth for artificial phenomenality, making current probabilistic attribution of consciousness to existing AI systems premature.
- They propose a near-term, more defensible direction: biologically grounded engineering efforts such as biohybrid, neuromorphic, and connectome-scale systems to narrow the empirical gap with living biology.
- Overall, the work frames current evaluation methods as an intermediate research program with significant validation limitations, recommending a shift toward biology-anchored experimentation.