
GazeQwen: Lightweight Gaze-Conditioned LLM Modulation for Streaming Video Understanding

arXiv cs.AI · March 30, 2026


Key Points

  • The paper introduces GazeQwen, a lightweight, parameter-efficient method that lets multimodal LLMs exploit eye-gaze information for streaming video understanding, a setting in which prior models failed to incorporate gaze cues effectively.
  • GazeQwen uses a compact gaze resampler (about 1–5M trainable parameters) that encodes V-JEPA 2.1 video features plus fixation-based positional encodings, generating additive residuals injected into chosen LLM decoder layers via forward hooks.
  • An optional second training stage further improves gaze integration by adding LoRA modules to the underlying open-source MLLM.
  • On the StreamGaze benchmark (all 10 tasks), GazeQwen achieves 63.9% accuracy, outperforming the same Qwen2.5-VL-7B backbone with gaze treated as visual prompts (+16.1 points) and surpassing GPT-4o among tested models (+10.5 points).
  • The results indicate that learning optimal “where to inject gaze” inside an LLM can be more effective than simply increasing model size or refining prompt engineering.
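The core mechanism described above — a small trainable resampler producing additive residuals that forward hooks inject into frozen decoder layers — can be sketched as follows. This is a hypothetical minimal illustration, not the GazeQwen implementation: module names, dimensions, and the toy stand-in for a decoder layer are all assumptions.

```python
import torch
import torch.nn as nn

class GazeResampler(nn.Module):
    """Compact trainable module (illustrative of the paper's ~1-5M-parameter
    resampler) that maps gaze-conditioned video features to residuals."""
    def __init__(self, feat_dim=256, hidden_dim=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, gaze_feats):
        # gaze_feats: (batch, tokens, feat_dim) video + fixation features
        return self.proj(gaze_feats)  # (batch, tokens, hidden_dim)

def make_injection_hook(residual):
    """Return a forward hook that adds `residual` to a layer's hidden states.
    Real decoder layers may return tuples; handle both cases."""
    def hook(module, inputs, output):
        if isinstance(output, tuple):
            return (output[0] + residual,) + output[1:]
        return output + residual
    return hook

# Toy frozen "decoder layer" standing in for an LLM block (assumption).
layer = nn.Linear(1024, 1024)
resampler = GazeResampler()
residual = resampler(torch.randn(1, 8, 256))
handle = layer.register_forward_hook(make_injection_hook(residual))
out = layer(torch.randn(1, 8, 1024))  # hook adds the gaze residual
handle.remove()
print(out.shape)  # torch.Size([1, 8, 1024])
```

Because only the resampler is trained, the backbone LLM's weights stay untouched; the hook handle can be removed to recover the unmodified model.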

Abstract

Current multimodal large language models (MLLMs) cannot effectively utilize eye-gaze information for video understanding, even when gaze cues are supplied via visual overlays or text descriptions. We introduce GazeQwen, a parameter-efficient approach that equips an open-source MLLM with gaze awareness through hidden-state modulation. At its core is a compact gaze resampler (~1–5M trainable parameters) that encodes V-JEPA 2.1 video features together with fixation-derived positional encodings and produces additive residuals injected into selected LLM decoder layers via forward hooks. An optional second training stage adds low-rank adapters (LoRA) to the LLM for tighter integration. Evaluated on all 10 tasks of the StreamGaze benchmark, GazeQwen reaches 63.9% accuracy, a +16.1-point gain over the same Qwen2.5-VL-7B backbone with gaze as visual prompts and +10.5 points over GPT-4o, the highest score among all open-source and proprietary models tested. These results suggest that learning where to inject gaze within an LLM is more effective than scaling model size or engineering better prompts. All code and checkpoints are available at https://github.com/phamtrongthang123/gazeqwen.
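The optional second stage mentioned in the abstract attaches low-rank adapters (LoRA) to the frozen LLM. A minimal sketch of the LoRA idea, assuming a plain linear layer as the adapted target (this is a generic illustration, not the paper's code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A, B of rank r."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base LLM weights stay frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # zero init: identity update at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(1024, 1024))
x = torch.randn(2, 1024)
# At initialization the LoRA path contributes zero, so outputs match the base.
assert torch.allclose(layer(x), layer.base(x))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 1024 = 16384 trainable parameters
```

Only the two small rank-`r` matrices are trained, which keeps the second stage parameter-efficient, consistent with the paper's emphasis on lightweight adaptation.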
