Beyond N-gram: Data-Aware X-GRAM Extraction for Efficient Embedding Parameter Scaling
arXiv cs.CL / 4/24/2026
📰 News · Developer Stack & Infrastructure · Models & Research
Key Points
- The paper introduces X-GRAM, a frequency-aware dynamic token-injection method to improve the parameter efficiency of large token-indexed lookup tables used for memory-augmented scaling.
- It attributes the limitations of prior methods to Zipfian under-training of long-tail tokens, differing demand patterns across layers, and “slot collapse” that yields redundant embeddings.
- X-GRAM uses hybrid hashing and alias mixing to compress the long tail while keeping head capacity, and then refines retrieved vectors with a normalized SwiGLU + ShortConv module to better capture diverse local n-gram features.
- The extracted signals are injected into attention value streams and inter-layer residuals via depth-aware gating, creating a memory-to-context alignment that decouples model capacity from FLOPs.
- Experiments at 0.73B and 1.15B model scales report gains of up to +4.4 average accuracy over the vanilla backbone and +3.2 over strong retrieval baselines, achieved with substantially smaller tables in a 50% setting; code is released on GitHub.
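To make the compression idea concrete, here is a minimal sketch of a frequency-aware hybrid embedding table: head (frequent) tokens keep dedicated rows, while long-tail tokens share a much smaller table addressed by two hash functions whose slots are blended ("alias mixing"). All class names, sizes, and the mixing scheme are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

class HybridHashEmbedding:
    """Toy hybrid table: dedicated head rows + hashed, alias-mixed tail slots.
    (Hypothetical sketch; not the paper's implementation.)"""

    def __init__(self, vocab_size, head_size, tail_slots, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.head_size = head_size          # tokens [0, head_size) are "head"
        self.tail_slots = tail_slots        # shared slots for the long tail
        self.head = rng.normal(size=(head_size, dim)) * 0.02
        self.tail = rng.normal(size=(tail_slots, dim)) * 0.02
        # per-token mixing weight between the two hashed slots
        self.alpha = rng.uniform(size=vocab_size)

    def __call__(self, token_id):
        if token_id < self.head_size:
            return self.head[token_id]      # dedicated row: full head capacity
        # two independent hashes into the small shared tail table
        h1 = hash(("h1", token_id)) % self.tail_slots
        h2 = hash(("h2", token_id)) % self.tail_slots
        a = self.alpha[token_id]
        # alias mixing: convex combination of the two shared slots,
        # so distinct tail tokens that collide on one hash still differ
        return a * self.tail[h1] + (1 - a) * self.tail[h2]

emb = HybridHashEmbedding(vocab_size=50_000, head_size=8_192,
                          tail_slots=2_048, dim=64)
v_head = emb(5)        # head token: its own unique vector
v_tail = emb(40_000)   # tail token: mixed vector from two shared slots
```

With these toy sizes the tail shrinks from ~41.8K dedicated rows to 2,048 shared slots, which is the kind of table reduction the 50% setting alludes to; the real method would also need a frequency cutoff learned from corpus statistics.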
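The injection path can likewise be sketched: a retrieved memory vector is refined (here by a SwiGLU-style gate; the paper additionally uses a short convolution and normalization) and added to a layer's residual stream through a depth-dependent scalar gate. The linear depth schedule and all weights below are invented for illustration; the paper's gating is presumably learned.

```python
import numpy as np

def swiglu(x, W_gate, W_up):
    # SwiGLU-style gating: SiLU(x @ W_gate) elementwise-times (x @ W_up)
    g = x @ W_gate
    return (g * (1.0 / (1.0 + np.exp(-g)))) * (x @ W_up)

def inject(residual, memory, layer_idx, num_layers, W_gate, W_up, W_out):
    """Depth-aware gated injection of a refined memory vector (toy version)."""
    refined = swiglu(memory, W_gate, W_up) @ W_out
    # toy schedule: no injection at layer 0, full injection at the last layer
    depth_gate = layer_idx / (num_layers - 1)
    return residual + depth_gate * refined

rng = np.random.default_rng(1)
dim, hidden, num_layers = 16, 32, 12
W_gate = rng.normal(size=(dim, hidden)) * 0.1
W_up   = rng.normal(size=(dim, hidden)) * 0.1
W_out  = rng.normal(size=(hidden, dim)) * 0.1
residual = rng.normal(size=(4, dim))   # 4 positions in the residual stream
memory   = rng.normal(size=(4, dim))   # retrieved memory vectors, one per position
out_first = inject(residual, memory, 0, num_layers, W_gate, W_up, W_out)
out_last  = inject(residual, memory, num_layers - 1, num_layers, W_gate, W_up, W_out)
```

Because the gate is a scalar per layer rather than per parameter, the extra FLOPs are a small constant over the backbone, which is consistent with the summary's claim that capacity is decoupled from compute.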