AI Navigate

Hybrid Self-evolving Structured Memory for GUI Agents

arXiv cs.AI / 3/12/2026

📰 News · Models & Research

Key Points

  • HyMEM is a graph-based memory that couples discrete high-level symbolic nodes with continuous trajectory embeddings to enable structured, multi-hop retrieval in GUI agents.
  • It supports self-evolution through node update operations and on-the-fly working-memory refreshing during inference, inspired by human memory organization.
  • Extensive experiments show HyMEM consistently improves open-source GUI agents, enabling 7B/8B backbones to match or surpass strong closed-source models, notably boosting Qwen2.5-VL-7B by +22.5% and outperforming Gemini2.5-Pro-Vision and GPT-4o.
  • By providing a memory-augmented approach, the work has broad implications for GUI automation tasks involving long-horizon workflows and diverse interfaces.
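The core idea in the first two points can be sketched as a graph whose nodes pair a discrete symbolic label with a continuous trajectory embedding, retrieved by combining embedding similarity with graph-edge expansion. This is an illustrative sketch only: the class names, the cosine-similarity scoring, and the breadth-first expansion scheme are assumptions, not the paper's actual implementation.

```python
import math

class MemoryNode:
    """Hypothetical hybrid memory node: a discrete symbolic label
    coupled with a continuous trajectory embedding."""
    def __init__(self, label, embedding):
        self.label = label            # discrete high-level symbolic summary
        self.embedding = embedding    # continuous trajectory vector
        self.neighbors = []           # graph edges to related nodes

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-9)

def multi_hop_retrieve(query_emb, entry_nodes, hops=2, top_k=3):
    """Illustrative multi-hop retrieval: start from embedding-similar
    entry nodes, then expand along graph edges for a few hops."""
    frontier = sorted(entry_nodes,
                      key=lambda n: -cosine(query_emb, n.embedding))[:top_k]
    visited, scored = set(), []
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            if id(node) in visited:
                continue
            visited.add(id(node))
            scored.append((cosine(query_emb, node.embedding), node))
            next_frontier.extend(node.neighbors)  # follow structured edges
        frontier = next_frontier
    scored.sort(key=lambda t: -t[0])
    return [n for _, n in scored[:top_k]]
```

The graph edges let retrieval reach nodes that a flat embedding search would miss, which is what distinguishes this structured organization from flat retrieval over summaries or embeddings alone.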

Abstract

The remarkable progress of vision-language models (VLMs) has enabled GUI agents to interact with computers in a human-like manner. Yet real-world computer-use tasks remain difficult due to long-horizon workflows, diverse interfaces, and frequent intermediate errors. Prior work equips agents with external memory built from large collections of trajectories, but relies on flat retrieval over discrete summaries or continuous embeddings, falling short of the structured organization and self-evolving characteristics of human memory. Inspired by the brain, we propose Hybrid Self-evolving Structured Memory (HyMEM), a graph-based memory that couples discrete high-level symbolic nodes with continuous trajectory embeddings. HyMEM maintains a graph structure to support multi-hop retrieval, self-evolution via node update operations, and on-the-fly working-memory refreshing during inference. Extensive experiments show that HyMEM consistently improves open-source GUI agents, enabling 7B/8B backbones to match or surpass strong closed-source models; notably, it boosts Qwen2.5-VL-7B by +22.5% and outperforms Gemini2.5-Pro-Vision and GPT-4o.
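The self-evolution and working-memory ideas in the abstract can be sketched as two operations: merging a new trajectory into an existing node (or adding a fresh one) and, at inference time, refreshing a small working set of query-relevant nodes. All names, the running-average merge rule, and the similarity threshold are illustrative assumptions, not the paper's actual node-update operations.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-9)

class Memory:
    """Hypothetical self-evolving memory over labeled embedding nodes."""
    def __init__(self, merge_threshold=0.9):
        self.nodes = []  # each node: {"label", "embedding", "count"}
        self.merge_threshold = merge_threshold

    def evolve(self, label, embedding):
        """Node update: merge a new trajectory into the closest existing
        node via a running-average embedding, or create a new node."""
        best = max(self.nodes,
                   key=lambda n: cosine(embedding, n["embedding"]),
                   default=None)
        if best and cosine(embedding, best["embedding"]) >= self.merge_threshold:
            c = best["count"]
            best["embedding"] = [(c * e + x) / (c + 1)
                                 for e, x in zip(best["embedding"], embedding)]
            best["count"] = c + 1
            return best
        node = {"label": label, "embedding": list(embedding), "count": 1}
        self.nodes.append(node)
        return node

    def refresh_working_memory(self, query_embedding, k=2):
        """On-the-fly refresh: keep only the k nodes most similar
        to the current query during inference."""
        ranked = sorted(self.nodes,
                        key=lambda n: -cosine(query_embedding, n["embedding"]))
        return ranked[:k]
```

The merge threshold controls how aggressively the memory consolidates similar experiences, mirroring the consolidation behavior the abstract attributes to human memory organization.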