Towards Scalable Lightweight GUI Agents via Multi-role Orchestration

arXiv cs.AI / 4/16/2026


Key Points

  • The paper proposes LAMO, a framework for making lightweight multimodal LLM-based GUI agents scalable enough for complex real-world tasks on resource-constrained devices.
  • LAMO improves GUI capability via role-oriented data synthesis and a two-stage training approach: supervised fine-tuning using Perplexity-Weighted Cross-Entropy for distillation and visual perception enhancement, followed by reinforcement learning for cooperative role exploration.
  • The resulting model, LAMO-3B, is designed for task scalability with both monolithic execution and multi-agent-system (MAS)-style orchestration.
  • By integrating with external planners as a plug-and-play policy executor, LAMO-3B can continuously leverage planner improvements to raise its achievable performance ceiling.
  • The authors report extensive static and online evaluations demonstrating the effectiveness of the framework and training strategy.
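The Perplexity-Weighted Cross-Entropy objective mentioned above can be illustrated with a minimal sketch. The paper's exact weighting scheme is not given here, so this assumes a simple per-batch normalization in which each sequence's cross-entropy is weighted by its perplexity, upweighting harder sequences during distillation; function and variable names are illustrative, not the paper's API:

```python
import math

def perplexity_weighted_ce(token_logprobs_per_seq):
    """Hypothetical sketch of Perplexity-Weighted Cross-Entropy.

    Input: per-sequence lists of token log-probabilities under the model.
    Each sequence's mean cross-entropy (negative log-likelihood) is
    reweighted by its perplexity, so higher-perplexity (harder) sequences
    contribute more to the batch loss. The normalization
    weight_i = ppl_i / sum_j ppl_j is an assumption for illustration.
    """
    # Mean negative log-likelihood per sequence.
    ces = [-sum(lp) / len(lp) for lp in token_logprobs_per_seq]
    # Perplexity is exp of the mean NLL.
    ppls = [math.exp(ce) for ce in ces]
    z = sum(ppls)
    weights = [p / z for p in ppls]
    # Weighted batch loss.
    return sum(w * ce for w, ce in zip(weights, ces))
```

Compared with a plain mean over sequences, this weighting shifts gradient mass toward sequences the model finds surprising, which is one plausible way a distillation loss could prioritize GUI-specific knowledge the student has not yet absorbed.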

Abstract

Autonomous Graphical User Interface (GUI) agents powered by Multimodal Large Language Models (MLLMs) enable digital automation on end-user devices. While scaling both parameters and data has yielded substantial gains, advanced methods still suffer from prohibitive deployment costs on resource-constrained devices. When facing complex in-the-wild scenarios, lightweight GUI agents are bottlenecked by limited capacity and poor task scalability under end-to-end episodic learning, impeding adaptation to multi-agent systems (MAS), while training multiple skill-specific experts remains costly. Can we strike an effective trade-off in this cost-scalability dilemma, enabling lightweight MLLMs to participate in realistic GUI workflows? To address these challenges, we propose the LAMO framework, which endows a lightweight MLLM with GUI-specific knowledge and task scalability, allowing multi-role orchestration to expand its capability boundary for GUI automation. LAMO combines role-oriented data synthesis with a two-stage training recipe: (i) supervised fine-tuning with Perplexity-Weighted Cross-Entropy optimization for knowledge distillation and visual perception enhancement, and (ii) reinforcement learning for role-oriented cooperative exploration. With LAMO, we develop a task-scalable native GUI agent, LAMO-3B, supporting monolithic execution and MAS-style orchestration. When paired with advanced planners as a plug-and-play policy executor, LAMO-3B can continuously benefit from planner advances, enabling a higher performance ceiling. Extensive static and online evaluations validate the effectiveness of our design.
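The plug-and-play planner/executor split described in the abstract can be sketched as a simple orchestration loop: an external planner decomposes the task into subgoals, and a lightweight executor (the role LAMO-3B plays) grounds each subgoal into a concrete GUI action. All interfaces and names below are illustrative assumptions, not the paper's actual API:

```python
from typing import Optional, Protocol

class Planner(Protocol):
    """External planner: proposes the next subgoal, or None when done."""
    def next_subgoal(self, task: str, history: list) -> Optional[str]: ...

class Executor(Protocol):
    """Lightweight GUI policy: grounds a subgoal into a concrete action."""
    def act(self, subgoal: str, screenshot: bytes) -> str: ...

def run_episode(task, planner, executor, get_screenshot, apply_action,
                max_steps=20):
    """Hedged sketch of MAS-style orchestration.

    Because the executor only consumes subgoals and screenshots, the
    planner can be swapped out freely -- the plug-and-play property the
    paper attributes to LAMO-3B as a policy executor.
    """
    history = []
    for _ in range(max_steps):
        subgoal = planner.next_subgoal(task, history)
        if subgoal is None:  # planner signals task completion
            break
        action = executor.act(subgoal, get_screenshot())
        apply_action(action)
        history.append((subgoal, action))
    return history
```

Under this decoupling, upgrading only the `Planner` implementation raises end-to-end task success without retraining the executor, which is the mechanism behind the "higher performance ceiling" claim.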