AI Navigate

Sim2Act: Robust Simulation-to-Decision Learning via Adversarial Calibration and Group-Relative Perturbation

arXiv cs.LG / Mar 11, 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • Sim2Act is a new simulation-to-decision learning framework designed to improve the robustness of both the simulator and the policy, addressing prediction errors in decision-critical regions.
  • It introduces adversarial calibration, which re-weights simulation errors on decision-critical state-action pairs to better align surrogate fidelity with actual decision impact.
  • The framework includes a group-relative perturbation technique that stabilizes policy learning under simulator uncertainty without imposing overly cautious constraints.
  • Experiments on a range of supply chain benchmarks show that Sim2Act improves simulation robustness and achieves more stable decision-making under different kinds of perturbations.
  • The method is particularly relevant for safe, reliable policy training in digital environments for mission-critical applications such as supply chains and industrial systems.
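The adversarial calibration idea in the bullets above can be pictured as a re-weighted simulation loss. The sketch below is an illustration under assumptions, not the paper's actual implementation: it uses a closed-form softmax adversary (with a hypothetical `temperature` knob) to up-weight the state-action pairs where the simulator errs most, standing in for the "decision-critical" regions the paper identifies.

```python
import numpy as np

def adversarially_calibrated_loss(sim_pred, real_out, temperature=1.0):
    """Sketch of an adversarially re-weighted simulation loss.

    Per-sample squared errors are re-weighted by a softmax adversary
    that up-weights the state-action pairs where the simulator is
    least accurate (a stand-in for 'decision-critical' regions;
    hypothetical interface, not the paper's implementation).
    """
    errors = (sim_pred - real_out) ** 2        # per-sample simulation error
    # Inner adversary: entropy-regularized worst-case weights have the
    # closed-form softmax solution, so no inner optimization loop is needed.
    weights = np.exp(errors / temperature)
    weights /= weights.sum()
    # The weighted loss always upper-bounds the plain mean-squared error,
    # concentrating training pressure on the worst-simulated pairs.
    return float(np.sum(weights * errors))
```

Because the softmax weights favor large errors, this loss is never smaller than the unweighted mean error, which is the sense in which the adversary "focuses" simulator training.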

Computer Science > Machine Learning

arXiv:2603.09053 (cs)
[Submitted on 10 Mar 2026]

Title:Sim2Act: Robust Simulation-to-Decision Learning via Adversarial Calibration and Group-Relative Perturbation

Abstract: Simulation-to-decision learning enables safe policy training in digital environments without risking real-world deployment, and has become essential in mission-critical domains such as supply chains and industrial systems. However, simulators learned from noisy or biased real-world data often exhibit prediction errors in decision-critical regions, leading to unstable action ranking and unreliable policies. Existing approaches either focus on improving average simulation fidelity or adopt conservative regularization, which may cause policy collapse by discarding high-risk high-reward actions.
We propose Sim2Act, a robust simulation-to-decision framework that addresses both simulator and policy robustness. First, we introduce an adversarial calibration mechanism that re-weights simulation errors in decision-critical state-action pairs to align surrogate fidelity with downstream decision impact. Second, we develop a group-relative perturbation strategy that stabilizes policy learning under simulator uncertainty without enforcing overly pessimistic constraints. Extensive experiments on multiple supply chain benchmarks demonstrate improved simulation robustness and more stable decision performance under structured and unstructured perturbations.
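The abstract does not detail the group-relative perturbation strategy; the sketch below illustrates one plausible reading, borrowing the group-normalization idea familiar from group-relative policy optimization: score an action under a group of perturbed simulator parameters and normalize returns within that group, rather than penalizing the worst case. All names here (`reward_fn`, `n_perturb`, `noise_scale`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_relative_advantages(reward_fn, action, n_perturb=8, noise_scale=0.1):
    """Sketch: score one action under a group of perturbed simulator
    parameters and return group-normalized (relative) advantages.

    Hypothetical interface: `reward_fn(action, eps)` is a learned
    simulator whose parameters are perturbed by noise `eps`.
    """
    # Sample a group of simulator perturbations and evaluate the action in each
    returns = np.array([reward_fn(action, rng.normal(0.0, noise_scale))
                        for _ in range(n_perturb)])
    # Group-relative normalization: center and scale by the group's own
    # statistics, so the policy gradient signal stays comparable across
    # uncertainty levels without a pessimistic worst-case penalty.
    return (returns - returns.mean()) / (returns.std() + 1e-8)
```

Normalizing within the perturbation group, instead of taking a minimum over it, is one way to avoid the "overly pessimistic constraints" the abstract contrasts against, since no single bad perturbation can veto an action outright.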
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09053 [cs.LG]
  (or arXiv:2603.09053v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09053

Submission history

From: Hongyu Cao
[v1] Tue, 10 Mar 2026 00:51:47 UTC (552 KB)