Object Referring-Guided Scanpath Prediction with Perception-Enhanced Vision-Language Models
arXiv cs.CV / 4/23/2026
📰 News · Models & Research
Key Points
- The paper introduces Object Referring-guided Scanpath Prediction (ORSP), which predicts human visual attention scanpaths for a target object specified by a referring expression.
- It proposes ScanVLA, a model that uses a vision-language model (VLM) to extract and fuse visually and linguistically aligned representations from an input image and the referring text.
- To improve fine-grained positional accuracy, the work adds a History Enhanced Scanpath Decoder (HESD) that leverages past fixation positions when predicting the next fixation.
- The approach also attaches a frozen Segmentation LoRA as an auxiliary module that localizes the referred object more precisely without adding significant compute or inference-time cost.
- Experiments show ScanVLA substantially outperforms prior scanpath prediction methods in the object-referring setting.
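The History Enhanced Scanpath Decoder described above works autoregressively: each predicted fixation is appended to the history, which then conditions the next prediction. The toy sketch below illustrates that rollout pattern only; the function names, the placeholder "step halfway toward the target" policy, and the normalized-coordinate convention are all illustrative assumptions, not ScanVLA's actual VLM-based decoder.

```python
from typing import List, Tuple

Fixation = Tuple[float, float]  # normalized (x, y) image coordinates, hypothetical convention


def predict_next_fixation(history: List[Fixation], target_center: Fixation) -> Fixation:
    """Toy stand-in for the history-conditioned decoder: move the gaze
    halfway from the most recent fixation toward the referred object's
    center. The real model would condition on fused vision-language
    features, not just coordinates."""
    x, y = history[-1]
    tx, ty = target_center
    return ((x + tx) / 2.0, (y + ty) / 2.0)


def rollout_scanpath(start: Fixation, target_center: Fixation, steps: int) -> List[Fixation]:
    """Autoregressive rollout: each new fixation joins the history and
    conditions the following prediction, mirroring how past fixation
    positions feed back into a history-enhanced decoder."""
    path = [start]
    for _ in range(steps):
        path.append(predict_next_fixation(path, target_center))
    return path
```

The feedback loop, where `path` grows as it is consumed, is the essential structure; swapping in a learned predictor would not change the rollout skeleton.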