The Query Channel: Information-Theoretic Limits of Masking-Based Explanations
arXiv cs.AI / 4/21/2026
Key Points
- The paper reinterprets masking-based post-hoc explanation methods (e.g., KernelSHAP, LIME) as a communication problem over a “query channel,” where each masked model evaluation is treated like a channel use.
- It characterizes the complexity of an explanation through the entropy of the hypothesis class and defines a per-query identification capacity that limits how much information each query can deliver.
- A strong converse result shows that when the required explanation recovery rate exceeds this capacity, exact recovery becomes impossible: the probability of error goes to one regardless of the explainer/decoder sequence.
- The authors also provide a matching achievability theorem: at rates below capacity, reliable exact recovery is possible using a sparse maximum-likelihood decoder.
- Experiments and benchmarks (including a Monte Carlo mutual-information estimator) identify information-theoretic regimes where exact recovery is feasible yet common convex surrogates still fail, and analyze how resolution/tokenization choices and noise degrade the "channel."
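The channel view in the points above can be made concrete with a small Monte Carlo sketch. The setup below is a hypothetical toy instance (not the paper's construction): hypotheses are single important features out of `d`, a query is a binary mask, and the response indicates whether the mask keeps the important feature, flipped with noise probability `eps`. The per-query mutual information I(H; Y) is then estimated and compared against the entropy of the hypothesis class, which bounds how many queries exact recovery needs.

```python
import math
import random

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution given as a list of probabilities."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Toy setup (hypothetical, illustrative only): each hypothesis is one
# important feature index out of d; the noisy "query channel" flips the
# clean response with probability eps.
d, eps = 8, 0.1
hypotheses = list(range(d))  # H uniform over d candidate explanations

def response_prob(h, mask):
    """P(Y = 1 | hypothesis h, mask) under the noisy toy channel."""
    clean = 1 if mask[h] else 0
    return clean * (1 - eps) + (1 - clean) * eps

def mi_per_query(mask, n_samples=20000, seed=0):
    """Monte Carlo estimate of I(H; Y) = H(Y) - H(Y|H) for one masked query."""
    rng = random.Random(seed)
    ones = 0
    cond_ent = 0.0
    for _ in range(n_samples):
        h = rng.choice(hypotheses)
        p1 = response_prob(h, mask)
        if rng.random() < p1:
            ones += 1
        cond_ent += entropy([p1, 1 - p1])  # accumulate H(Y | H = h)
    p_y1 = ones / n_samples
    return entropy([p_y1, 1 - p_y1]) - cond_ent / n_samples

mask = [1, 1, 1, 1, 0, 0, 0, 0]  # a balanced mask keeping half the features
per_query_bits = mi_per_query(mask)
total_bits_needed = math.log2(len(hypotheses))  # entropy of the hypothesis class
print(per_query_bits, total_bits_needed)
```

With these numbers each balanced query carries roughly half a bit (noise costs about H(0.1) ≈ 0.47 bits of the ideal 1 bit), while identifying the hypothesis requires log2(8) = 3 bits total, so several queries are unavoidable; this is the per-query capacity limit the converse result formalizes.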