The Power of Order: Fooling LLMs with Adversarial Table Permutations
arXiv cs.LG / 5/4/2026
Key Points
- The paper finds that modern LLMs are sensitive to the *layout* of tabular inputs: row/column rearrangements that leave the table's underlying meaning unchanged can still alter model outputs.
- It introduces Adversarial Table Permutation (ATP), a gradient-based method that searches for worst-case row/column permutations that maximally disrupt model outputs.
- Extensive experiments show that ATP substantially degrades performance across many LLMs, including newer and widely used architectures.
- The results suggest a pervasive weakness in how current LLMs handle structured data, highlighting the need for permutation-robust model designs for reliable real-world use.
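The search objective described in the key points above can be sketched with a brute-force baseline. This is a minimal illustration, not the paper's method: ATP uses gradient guidance to search the permutation space, whereas the loop below enumerates it exhaustively (feasible only for tiny tables), and `score_fn` is a hypothetical placeholder for the model's loss on the permuted table.

```python
import itertools

def permute_table(rows, row_order, col_order):
    """Reorder the rows and columns of a table (a list of lists).

    The permutation changes only layout, never cell contents -- the kind of
    semantics-preserving transformation the paper studies."""
    return [[rows[r][c] for c in col_order] for r in row_order]

def worst_case_permutation(rows, score_fn):
    """Exhaustively find the row/column permutation that maximizes score_fn.

    score_fn stands in for the model's loss on the permuted table. The
    paper's ATP replaces this brute-force enumeration with a gradient-guided
    search, which is what makes the attack scale to realistic tables."""
    best, best_score = None, float("-inf")
    for row_order in itertools.permutations(range(len(rows))):
        for col_order in itertools.permutations(range(len(rows[0]))):
            s = score_fn(permute_table(rows, row_order, col_order))
            if s > best_score:
                best, best_score = (row_order, col_order), s
    return best, best_score
```

Plugging in a real model's loss for `score_fn` turns this into the attack objective: find the layout of the *same* data on which the model performs worst. The gap between the original and worst-case scores is a direct measure of permutation (non-)robustness.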