TouchAI: Exploring human-AI perceptual alignment in touch through language model representations

arXiv cs.CL / 4/29/2026

Key Points

  • The study examines “perceptual alignment” between large language models (LLMs) and human touch experiences, a dimension often overlooked compared with visual alignment.
  • Researchers introduced the “textile hand” task, where people described differences between two handled textile samples (target and reference) to an LLM, and the model inferred the target using similarity in a high-dimensional embedding space.
  • Results indicate partial perceptual alignment exists, but it varies widely across different textile types and materials.
  • While the LLM aligns well for some textiles such as silk satin, it performs poorly for others like cotton denim, and participants generally felt the model’s predictions did not closely match their touch experiences.
  • The authors discuss potential reasons for this variance and argue that improving human-AI perceptual alignment could enhance future everyday applications involving touch-based understanding.

Abstract

Aligning large language model (LLM) behaviour with human intent is critical for future AI. An important yet often overlooked aspect of this alignment is perceptual alignment. Perceptual modalities like touch are more multifaceted and nuanced than other sensory modalities such as vision. This work investigates how well LLMs align with human touch experiences using the "textile hand" task. We created a "Guess What Textile" interaction in which participants were given two textile samples -- a target and a reference -- to handle. Without seeing them, participants described the differences between them to the LLM. Using these descriptions, the LLM attempted to identify the target textile by assessing similarity within its high-dimensional embedding space. Our results suggest that a degree of perceptual alignment exists but varies significantly among textile samples. For example, LLM predictions are well aligned for silk satin, but not for cotton denim. Moreover, participants did not perceive the LLM's predictions as closely matching their textile experiences. This is only a first exploration of perceptual alignment around touch, exemplified through textile hand. We discuss possible sources of this alignment variance, and how better human-AI perceptual alignment could benefit future everyday tasks.
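The identification step described in the abstract -- matching a participant's verbal description against candidate textiles in an embedding space -- can be sketched as a nearest-neighbour lookup by cosine similarity. The sketch below uses tiny hand-made 3-dimensional vectors standing in for real LLM embeddings; the function names, the candidate set, and the toy vectors are all hypothetical illustrations, not the paper's actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def guess_target(description_emb: np.ndarray, candidate_embs: dict) -> tuple:
    """Return the candidate textile whose embedding is most similar to the
    embedding of the participant's description, plus all similarity scores."""
    scores = {name: cosine_similarity(description_emb, emb)
              for name, emb in candidate_embs.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy 3-d vectors standing in for high-dimensional LLM embeddings (hypothetical).
candidates = {
    "silk satin":   np.array([0.9, 0.1, 0.2]),
    "cotton denim": np.array([0.1, 0.9, 0.3]),
}
# Embedding of a description like "smoother and more slippery than the reference".
description = np.array([0.8, 0.2, 0.1])

best, scores = guess_target(description, candidates)
print(best)  # -> silk satin
```

In the real task the description and textile names would be embedded with the LLM's embedding model; the per-textile variance the paper reports would show up here as some descriptions landing near the wrong candidate.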