LATTICE: Evaluating Decision Support Utility of Crypto Agents

arXiv cs.AI / 4/30/2026


Key Points

  • The paper introduces LATTICE, a benchmark aimed at evaluating how well crypto agents support users' decision-making in realistic, user-facing copilot scenarios.
  • It defines six evaluation dimensions and 16 end-to-end task types covering the full crypto-copilot workflow, focusing specifically on decision support rather than only reasoning or final outcomes.
  • LATTICE uses LLM judges to score agent outputs across dimensions and tasks at scale, avoiding reliance on ground-truth labels from expert annotators or external data sources (see the sketch after this list).
  • The authors evaluate six production-level crypto copilots on 1,200 diverse queries and find similar overall scores but larger differences at the dimension and task levels, indicating trade-offs that depend on user priorities.
  • To enable reproducible research and continuous improvement, they open-source the LATTICE code and data and emphasize that judge rubrics can be audited and updated as new criteria and feedback emerge.
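
To make the rubric-based judging idea concrete, here is a minimal, hypothetical sketch of how such scoring could be wired up. The dimension names, the 1-5 scale, and the `call_llm_judge` helper are illustrative assumptions, not the paper's actual rubrics or code; LATTICE's real dimensions, tasks, and judge prompts are defined in its open-sourced release.

```python
# Hypothetical sketch of rubric-based LLM-judge scoring (not LATTICE's actual code).
# Dimension names and call_llm_judge() are illustrative placeholders.
import json

# Rubrics kept as plain data so they can be audited and updated as criteria evolve.
RUBRICS = {
    "relevance": "Does the answer address the user's actual decision question?",
    "evidence": "Are claims grounded in cited on-chain or market data?",
    "actionability": "Does the answer give the user a clear next step?",
    # ...remaining dimensions would follow the same pattern
}


def call_llm_judge(prompt: str) -> str:
    """Placeholder for any chat-completion API that returns JSON text."""
    raise NotImplementedError("wire this to your LLM provider")


def score_response(query: str, agent_answer: str) -> dict[str, int]:
    """Score one agent answer on every rubric dimension (1-5 scale)."""
    scores = {}
    for dimension, criterion in RUBRICS.items():
        prompt = (
            f"Query: {query}\n"
            f"Agent answer: {agent_answer}\n"
            f"Criterion ({dimension}): {criterion}\n"
            'Return JSON like {"score": <1-5>, "reason": "<one sentence>"}.'
        )
        verdict = json.loads(call_llm_judge(prompt))
        scores[dimension] = verdict["score"]
    return scores
```

Because the rubrics live as editable data rather than fixed labels, new criteria or human feedback can be folded in by updating the rubric text and re-running the judges, which is the extensibility property the key points describe.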

Abstract

We introduce LATTICE, a benchmark for evaluating the decision support utility of crypto agents in realistic user-facing scenarios. Prior crypto agent benchmarks mainly focus on reasoning-based or outcome-based evaluation, but do not assess agents' ability to assist user decision-making. LATTICE addresses this gap by: (1) defining six evaluation dimensions that capture key decision support properties; (2) proposing 16 task types that span the end-to-end crypto copilot workflow; and (3) using LLM judges to automatically score agent outputs based on these dimensions and tasks. Crucially, the dimensions and tasks are designed to be evaluable at scale using LLM judges, without relying on ground truth from expert annotators or external data sources. In lieu of these dependencies, LATTICE's LLM judge rubrics can be continually audited and updated given new dimensions, tasks, criteria, and human feedback, thus promoting reliable and extensible evaluation. While other benchmarks often compare foundation models sharing a generic agent framework, we use LATTICE to assess production-level agents used in actual crypto copilot products, reflecting the importance of orchestration and UI/UX design in determining agent quality. In this paper, we evaluate six real-world crypto copilots on 1,200 diverse queries and report breakdowns across dimensions, tasks, and query categories. Our experiments show that most of the tested copilots achieve comparable aggregate scores, but differ more significantly on dimension-level and task-level performance. This pattern suggests meaningful trade-offs in decision support quality: users with different priorities may be better served by different copilots than the aggregate rankings alone would indicate. To support reproducible research, we open-source all LATTICE code and data used in this paper.