Word Recovery in Large Language Models Enables Character-Level Tokenization Robustness
arXiv cs.CL / 3/12/2026
Key Points
- The paper identifies 'word recovery' as a core mechanism enabling LLMs to process character-level inputs despite non-canonical tokenization.
- It introduces a decoding-based method to detect word recovery, showing that hidden states reconstruct the canonical word-level token identities from character-level inputs (a minimal probing sketch follows this list).
- It provides causal evidence: ablating the corresponding subspace from the hidden states degrades downstream task performance, indicating the representation is actually used (see the ablation sketch below).
- An in-depth attention analysis reveals that in-group attention among characters belonging to the same canonical token is critical for word recovery; masking this attention in early layers reduces both recovery scores and task performance (a mask-construction sketch appears below).
- The work thus offers a mechanistic explanation for tokenization robustness, with word recovery as the central mechanism shaping how LLMs handle character-level inputs.
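
The decoding-based detection can be pictured with a logit-lens-style probe: feed a word as individual characters and check, layer by layer, whether the hidden state at the final character already decodes to the word's canonical token. Below is a minimal sketch assuming a HuggingFace causal LM; the model choice, the example word, and the raw logit-lens readout (rather than a trained decoder) are illustrative stand-ins, not the paper's exact setup.

```python
# Minimal sketch: does the hidden state at the last character decode to the
# canonical word-level token? (Illustrative; not the paper's exact method.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # illustrative model choice
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

word = " hello"
canonical = tok(word)["input_ids"]
assert len(canonical) == 1, "pick a word that is a single canonical token"
canonical_id = canonical[0]
char_ids = [tok(c)["input_ids"][0] for c in word]  # forced character-level split

with torch.no_grad():
    out = model(torch.tensor([char_ids]))

# Logit-lens readout: project each layer's last-position state through the
# unembedding matrix and track the rank of the canonical token. Intermediate
# layers skip the final layer norm, so a trained linear probe would be a more
# faithful detector; this is only the cheapest possible stand-in.
W_U = model.get_output_embeddings().weight  # (vocab, d_model)
for layer, h in enumerate(out.hidden_states):
    logits = h[0, -1] @ W_U.T
    rank = int((logits > logits[canonical_id]).sum())
    print(f"layer {layer:2d}: canonical-token rank = {rank}")
```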
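The causal test can be approximated with a forward hook that projects a candidate subspace out of one layer's residual stream; if downstream performance drops, that subspace is causally used. A sketch under assumptions: the orthonormal basis `U` (e.g. estimated from probe weights) and the layer index are placeholders, and the paper's actual ablation procedure may differ.

```python
# Illustrative subspace ablation via a PyTorch forward hook.
import torch

def make_subspace_ablation_hook(U: torch.Tensor):
    """U: (d_model, k) orthonormal basis of the subspace to remove."""
    P = U @ U.T  # (d_model, d_model) projection onto the subspace

    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output
        h = h - h @ P  # zero out the component lying in the subspace
        return (h,) + output[1:] if isinstance(output, tuple) else h

    return hook

# Hypothetical usage: ablate at block 6 of a GPT-2-style model, then re-run
# the downstream evaluation and compare against the unablated baseline.
# U = ...  # e.g. top-k singular vectors of a word-recovery probe's weights
# handle = model.transformer.h[6].register_forward_hook(
#     make_subspace_ablation_hook(U))
# ... evaluate downstream task ...
# handle.remove()
```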
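The attention analysis hinges on which query/key pairs count as "in-group": distinct character positions whose characters belong to the same canonical token. Building that pair mask is mechanical; the sketch below constructs it, assuming per-position group ids are known from the segmentation. Actually applying it as a block in early layers requires patching the model's attention (e.g. adding -inf to the masked scores), which is model-specific and omitted here.

```python
# Illustrative construction of the "in-group" attention mask.
import torch

def in_group_attention_mask(group_ids: list[int]) -> torch.Tensor:
    """Return a (seq, seq) boolean mask, True where attention is BLOCKED:
    pairs of distinct character positions inside the same canonical token."""
    g = torch.tensor(group_ids)
    same_group = g[:, None] == g[None, :]
    not_self = ~torch.eye(len(g), dtype=torch.bool)
    return same_group & not_self  # attention to one's own position survives

# Two canonical tokens split into characters: "cat" -> group 0, "do" -> group 1.
mask = in_group_attention_mask([0, 0, 0, 1, 1])
print(mask.int())  # 1s exactly on cross-character pairs within each word
```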