From Imitation to Discrimination: Progressive Curriculum Learning for Robust Web Navigation

arXiv cs.LG / 4/15/2026


Key Points

  • The paper argues that text-based web agents struggle because real-world HTML is noisy and heterogeneous, and standard SFT both fails to discriminate against plausible wrong elements and generalizes poorly to new layouts.
  • It introduces the Triton dataset (590k instances) built with Structural-Semantic Hard Negative Mining and a Dual-Agent Consensus pipeline to generate hard distractors and cross-domain navigation tasks with verification.
  • A progressive curriculum is used to train three 32B models targeting different abilities: imitation (Triton-SFT-32B), robust discrimination via Odds Ratio Preference Optimization (Triton-ORPO-32B), and long-horizon consistency via Group Relative Policy Optimization (Triton-GRPO-32B).
  • On Mind2Web, Triton-GRPO-32B achieves state-of-the-art open-source performance with a 58.7% Step Success Rate, reportedly surpassing GPT-4.5 and Claude-4.5 by more than 16 percentage points, suggesting that curriculum- and data-driven improvements can beat raw parameter scale for web navigation.

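The paper's actual mining procedure is not reproduced in this digest. As a rough, hypothetical sketch of the idea behind Structural-Semantic Hard Negative Mining, the element features, weights, and scoring function below are illustrative assumptions: distractors are ranked high when they look topologically like the gold element (same tag, similar DOM depth) but refer to something semantically different.

```python
# Hypothetical sketch of Structural-Semantic Hard Negative Mining.
# The Element features and the 0.6/0.4 weights are illustrative assumptions,
# not the paper's actual representation or scoring.
from dataclasses import dataclass

@dataclass
class Element:
    tag: str    # HTML tag name
    depth: int  # depth in the DOM tree
    text: str   # visible text / accessible name

def structural_similarity(a: Element, b: Element) -> float:
    """Topological similarity: same tag and close DOM depth score higher."""
    tag_match = 1.0 if a.tag == b.tag else 0.0
    depth_sim = 1.0 / (1.0 + abs(a.depth - b.depth))
    return 0.6 * tag_match + 0.4 * depth_sim

def semantic_overlap(a: Element, b: Element) -> float:
    """Crude token-overlap stand-in for a learned semantic similarity model."""
    ta, tb = set(a.text.lower().split()), set(b.text.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def mine_hard_negatives(gold: Element, candidates: list[Element], k: int = 3):
    """Rank non-gold elements that are structurally close to the gold target
    but semantically distinct: plausible-looking wrong answers."""
    scored = []
    for c in candidates:
        if c is gold:
            continue
        score = structural_similarity(gold, c) * (1.0 - semantic_overlap(gold, c))
        scored.append((score, c))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for _, c in scored[:k]]
```

On a product page, for example, a "Checkout" button at the same depth as the gold "Add to cart" button would outrank an unrelated link in the navigation bar, which is exactly the kind of plausible-but-wrong element SFT-only agents fail to reject.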
Abstract

Text-based web agents offer computational efficiency for autonomous web navigation, yet developing robust agents remains challenging due to the noisy and heterogeneous nature of real-world HTML. Standard Supervised Fine-Tuning (SFT) approaches fail in two critical dimensions: they lack discrimination capabilities to reject plausible but incorrect elements in densely populated pages, and exhibit limited generalization to unseen website layouts. To address these challenges, we introduce the Triton dataset (590k instances) and a progressive training curriculum. Triton is constructed via Structural-Semantic Hard Negative Mining, which explicitly mines topologically similar distractors, and a Dual-Agent Consensus pipeline that synthesizes diverse cross-domain tasks with strict verification. Building upon this foundation, our progressive curriculum produces three models: Triton-SFT-32B for basic imitation, Triton-ORPO-32B for robust discrimination via Odds Ratio Preference Optimization, and Triton-GRPO-32B for long-horizon consistency through Group Relative Policy Optimization. Empirical evaluation on Mind2Web demonstrates that Triton-GRPO-32B achieves state-of-the-art performance among open-source models with 58.7% Step Success Rate, surpassing GPT-4.5 (42.4%) and Claude-4.5 (41.4%) by over 16 percentage points, validating that a specialized data curriculum outweighs raw parameter scale for web navigation.
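The abstract names the two optimization objectives without spelling them out. As a minimal sketch of the published formulations (ORPO: Hong et al.; GRPO: the DeepSeek line of work), the functions below use scalar sequence-level log-probabilities and rewards as stand-ins for the per-token quantities a real trainer would compute; how Triton instantiates them is an assumption here.

```python
import math

def orpo_odds_ratio_loss(logp_chosen: float, logp_rejected: float) -> float:
    """ORPO preference term: with p = exp(log-prob), odds(y) = p / (1 - p),
    and the loss is -log sigmoid(log odds(chosen) - log odds(rejected)).
    Added (scaled by a weight) to the SFT NLL on the chosen response, it
    pushes probability away from hard negatives without a reference model."""
    def log_odds(logp: float) -> float:
        return logp - math.log(1.0 - math.exp(logp))
    log_or = log_odds(logp_chosen) - log_odds(logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))  # -log sigmoid(log_or)

def grpo_group_advantages(rewards: list[float]) -> list[float]:
    """GRPO advantage: each sampled trajectory's reward is normalized by the
    mean and std of its own sampling group, so long-horizon navigation can be
    optimized without training a separate value critic."""
    mu = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mu) ** 2 for r in rewards) / len(rewards))
    return [(r - mu) / (std + 1e-8) for r in rewards]
```

Assigning high probability to the gold element and low probability to a hard negative drives the ORPO term toward zero, while a group of navigation rollouts with rewards `[1, 0, 1, 0]` yields advantages near `[+1, -1, +1, -1]`, rewarding the trajectories that completed the task.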