Improving Robustness of Tabular Retrieval via Representational Stability

arXiv cs.CL, April 28, 2026


Key Points

  • Transformer-based table retrieval can be highly sensitive to how tables are serialized (e.g., CSV/TSV/HTML/Markdown/DDL), producing different embeddings and retrieval outcomes even when table meaning is unchanged.
  • The paper proposes treating serialization-specific embeddings as noisy views of a shared semantic signal and using a centroid (average) representation as a canonical target to suppress format-induced variation.
  • Experiments across multiple benchmarks and retriever families show that centroid representations generally outperform individual serialization formats in aggregate pairwise comparisons (including MPNet, BGE-M3, ReasonIR, and SPLADE).
  • The authors also introduce a lightweight residual bottleneck adapter on top of a frozen encoder that maps embeddings from a single serialization toward centroid targets, improving robustness for dense retrievers but with smaller benefits for sparse lexical retrieval.
  • Overall, the work identifies serialization sensitivity as a key driver of retrieval variance and demonstrates a post hoc geometric correction approach to achieve serialization-invariant table retrieval.
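The centroid idea in the points above can be sketched concretely: embed the same table under several serializations, average the vectors, and renormalize. This is a minimal illustration, not the paper's code; the toy "embeddings" (a shared base direction plus per-format noise) and the function name are our own assumptions.

```python
import numpy as np

def centroid_embedding(embeddings):
    """Average per-format embeddings of one table and L2-normalize.

    `embeddings` maps a serialization format name (e.g. "csv", "html",
    "markdown") to that format's embedding vector for the same table.
    Averaging suppresses format-specific variation while keeping the
    shared semantic signal.
    """
    stacked = np.stack(list(embeddings.values()))  # (n_formats, dim)
    centroid = stacked.mean(axis=0)
    norm = np.linalg.norm(centroid)
    return centroid / norm if norm > 0 else centroid

# Toy example: three serialization "views" sharing a common direction
# plus independent format-induced noise.
rng = np.random.default_rng(0)
base = rng.normal(size=128)                 # the table's semantic content
views = {fmt: base + 0.3 * rng.normal(size=128)
         for fmt in ("csv", "html", "markdown")}
c = centroid_embedding(views)
```

Because the noise terms partially cancel under averaging, the centroid typically sits closer (in cosine similarity) to the shared signal than the average individual view does.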

Abstract

Transformer-based table retrieval systems flatten structured tables into token sequences, making retrieval sensitive to the choice of serialization even when table semantics remain unchanged. We show that semantically equivalent serializations, such as `csv`, `tsv`, `html`, `markdown`, and `ddl`, can produce substantially different embeddings and retrieval results across multiple benchmarks and retriever families. To address this instability, we treat serialization embeddings as noisy views of a shared semantic signal and use their centroid as a canonical target representation. We show that centroid averaging suppresses format-specific variation and can recover the semantic content common to different serializations when format-induced shifts differ across tables. Empirically, centroid representations outrank individual formats in aggregate pairwise comparisons across `MPNet`, `BGE-M3`, `ReasonIR`, and `SPLADE`. We further introduce a lightweight residual bottleneck adapter on top of a frozen encoder that maps single-serialization embeddings toward centroid targets while preserving variance and enforcing covariance regularization. The adapter improves robustness for several dense retrievers, though gains are model-dependent and weaker for sparse lexical retrieval. These results identify serialization sensitivity as a major source of retrieval variance and show the promise of post hoc geometric correction for serialization-invariant table retrieval.
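The abstract's residual bottleneck adapter and its training objective can be sketched as follows. This is a speculative reconstruction: the paper gives only the ingredients (residual bottleneck, centroid alignment, variance preservation, covariance regularization), so the layer sizes, activation, loss weights, and the VICReg-style form of the variance/covariance penalties are all our assumptions.

```python
import torch
import torch.nn as nn

class ResidualBottleneckAdapter(nn.Module):
    """Sketch of a residual bottleneck adapter: z + up(act(down(z))).

    Applied on top of a frozen encoder, so only these two small linear
    layers are trained; at initialization the residual path keeps the
    output close to the input embedding.
    """
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return z + self.up(self.act(self.down(z)))

def adapter_loss(pred, target, var_w=1.0, cov_w=0.04):
    """Align adapted embeddings to centroid targets while keeping
    per-dimension variance up and off-diagonal covariance down
    (a VICReg-style guess at the regularizers the abstract names)."""
    align = (pred - target).pow(2).mean()            # pull toward centroids
    std = pred.std(dim=0)
    var_pen = torch.relu(1.0 - std).mean()           # preserve variance
    centered = pred - pred.mean(dim=0)
    cov = (centered.T @ centered) / (pred.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_pen = off_diag.pow(2).sum() / pred.shape[1]  # decorrelate dimensions
    return align + var_w * var_pen + cov_w * cov_pen

# Usage: adapt a batch of single-serialization embeddings toward centroids.
adapter = ResidualBottleneckAdapter(dim=384)
z = torch.randn(8, 384)                    # embeddings from a frozen encoder
loss = adapter_loss(adapter(z), torch.randn(8, 384))  # targets = centroids in practice
```

The residual form means the adapter only has to learn the (small) format-induced correction, which matches the paper's finding that a lightweight module suffices for dense retrievers.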