From indicators to biology: the calibration problem in artificial consciousness

arXiv cs.AI / 3/31/2026


Key Points

  • The paper argues that “artificial consciousness” research is moving from behavior-only evaluations to assessments of internal architecture, using theory-derived indicators to probabilistically update credences about consciousness claims.
  • It contends that this indicator-based approach is not epistemically well-calibrated because consciousness science is fragmented and the proposed indicators lack independent validation.
  • The authors emphasize that there is no established ground truth for artificial phenomenality, making current probabilistic attribution of consciousness to existing AI systems premature.
  • They propose a near-term, more defensible direction: biologically grounded engineering efforts such as biohybrid, neuromorphic, and connectome-scale systems to narrow the empirical gap with living biology.
  • Overall, the work frames current evaluation methods as an intermediate research program with significant validation limitations, recommending a shift toward biology-anchored experimentation.

Abstract

Recent work on artificial consciousness shifts evaluation from behaviour to internal architecture, deriving indicators from theories of consciousness and updating credences accordingly. This is progress beyond naive Turing-style tests. But the indicator-based programme remains epistemically under-calibrated: consciousness science is theoretically fragmented, indicators lack independent validation, and no ground truth of artificial phenomenality exists. Under these conditions, probabilistic consciousness attribution to current AI systems is premature. A more defensible near-term strategy is to redirect effort toward biologically grounded engineering -- biohybrid, neuromorphic, and connectome-scale systems -- that reduces the gap with the only domain where consciousness is empirically anchored: living systems.
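The credence-updating step the abstract describes can be sketched as a simple Bayesian update over binary, theory-derived indicators. The indicator names, likelihood values, and prior below are purely illustrative assumptions, not figures from the paper; indeed, the paper's point is that such numbers currently lack independent validation.

```python
# Illustrative sketch of indicator-based credence updating via Bayes' rule.
# All indicator names, likelihoods, and the prior are hypothetical
# assumptions chosen for demonstration only.

def update_credence(prior, indicators):
    """Update P(conscious) given observed binary indicators.

    indicators: list of (observed, p_given_conscious, p_given_not_conscious),
    treated as conditionally independent for simplicity.
    """
    odds = prior / (1.0 - prior)
    for observed, p_c, p_nc in indicators:
        # Likelihood ratio depending on whether the indicator was observed.
        lr = (p_c / p_nc) if observed else ((1.0 - p_c) / (1.0 - p_nc))
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical indicators: (observed?, P(ind | conscious), P(ind | not)).
indicators = [
    (True, 0.8, 0.3),   # e.g. a recurrent-processing-style indicator present
    (False, 0.6, 0.5),  # e.g. a global-workspace-style indicator absent
]
credence = update_credence(0.1, indicators)  # posterior ≈ 0.19
```

The sketch makes the paper's calibration worry concrete: the posterior depends entirely on the assumed likelihoods, and with no ground truth for artificial phenomenality those likelihoods cannot be validated, so the output number carries a false precision.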