TabEmb: Joint Semantic-Structure Embedding for Table Annotation

arXiv cs.LG / April 22, 2026

📰 NewsModels & Research

Key Points

  • The paper addresses table annotation by noting that tables require representations that jointly capture each column’s semantics and the relationships between columns, unlike text where semantic embeddings alone often work.
  • It argues that prior approaches that flatten 2D tables into 1D token sequences (then encode with pretrained language models like BERT) suffer from weaker semantic quality, poorer generalization to unseen/rare values, and degraded structural modeling.
  • TabEmb improves on this by decoupling semantic encoding from structural modeling: an LLM generates semantic embeddings per column, and a graph-based module models inter-column relationships to produce joint semantic-structural representations (see the sketch after this list).
  • Experiments indicate that TabEmb consistently outperforms strong baselines across multiple table annotation tasks, and the authors provide code and datasets.
  • The work is positioned as an alternative to 2D-to-1D flattening that sidesteps context-length limitations by preserving structured interactions via a graph over columns.
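
A minimal sketch of this decoupled design, assuming PyTorch. The `llm_encode` stand-in, the fully connected column graph, and the single GCN-style layer are illustrative assumptions; the paper's actual graph construction and structural module may differ.

```python
# Hedged sketch of TabEmb-style decoupling (not the authors' code):
# (1) an LLM encoder produces one semantic embedding per column,
# (2) a graph layer over columns mixes in inter-column structure.
import torch
import torch.nn as nn

class ColumnGraphLayer(nn.Module):
    """One GCN-style layer: each column embedding is updated with a
    degree-normalized average of its neighbors' embeddings."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).rsqrt().diag()
        return torch.relu(self.lin(d @ a @ d @ h))

def embed_columns(columns, llm_encode):
    """Step 1 (semantics): encode each column independently.
    `llm_encode` is a placeholder for any LLM text-embedding model."""
    return torch.stack([llm_encode(" ".join(map(str, col))) for col in columns])

# Toy usage: a random "LLM" stands in for a real embedding model.
columns = [["Paris", "Berlin"], ["France", "Germany"], ["2.1M", "3.6M"]]
fake_llm = lambda text: torch.randn(768)
h = embed_columns(columns, fake_llm)           # (num_cols, 768) semantic embeddings
adj = torch.ones(len(columns), len(columns))   # assumed fully connected column graph
joint = ColumnGraphLayer(768)(h, adj)          # joint semantic-structural representations
```

The key design point is that the graph layer only sees per-column embeddings, so the LLM never has to fit the whole flattened table into its context window.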

Abstract

Table annotation is crucial for making web and enterprise tables usable in downstream NLP applications. Unlike textual data, where learning semantically rich token or sentence embeddings often suffices, tables are structured combinations of columns whose useful representations must jointly capture each column's semantics and the inter-column relationships. Existing models learn by linearizing the 2D table into a 1D token sequence and encoding it with pretrained language models (PLMs) such as BERT. However, this leads to limited semantic quality and weaker generalization to unseen or rare values compared to modern LLMs, and to degraded structural modeling due to 2D-to-1D flattening and context-length constraints. We propose TabEmb, which directly targets these limitations by decoupling semantic encoding from structural modeling. An LLM first produces semantically rich embeddings for each column, and a graph-based module over columns then injects relationships into the embeddings, yielding joint semantic-structural representations for table annotation. Experiments show that TabEmb consistently outperforms strong baselines on different table annotation tasks. Source code and datasets are available at https://github.com/hoseinzadeehsan/TabEmb
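
For concreteness, here is a minimal sketch of how the joint column embeddings could feed a table annotation task, assuming column-type annotation is framed as per-column classification. The label set and the random stand-in for the joint embeddings are hypothetical.

```python
# Hedged sketch: a per-column type classifier over joint embeddings.
import torch
import torch.nn as nn

type_labels = ["city", "country", "population"]  # hypothetical label set
joint = torch.randn(3, 768)                      # stand-in for TabEmb's joint column embeddings
classifier = nn.Linear(768, len(type_labels))

pred = classifier(joint).argmax(dim=1)           # one predicted type per column
print([type_labels[i] for i in pred])
```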