Rule-based High-Level Coaching for Goal-Conditioned Reinforcement Learning in Search-and-Rescue UAV Missions Under Limited-Simulation Training
arXiv cs.RO / April 30, 2026
Key Points
- The paper introduces a hierarchical UAV decision-making framework for search-and-rescue scenarios that pairs a fixed, rule-based high-level advisor with an online goal-conditioned reinforcement learning (RL) low-level controller.
- The high-level advisor is precompiled from a structured task specification into deterministic, interpretable rules, enabling mission- and safety-aware guidance in the form of recommended and avoided actions plus arbitration weights.
- The low-level RL component is trained online under limited-simulation conditions (including a strict no-pretraining regime) using task-defined dense rewards and enhanced experience replay that leverages rule-derived metadata.
- Experiments on battery-aware multi-goal delivery and moving-target delivery in obstacle-rich environments show improved early safety and sample efficiency, mainly by reducing collision-related terminations, while still allowing online adaptation to scenario dynamics.
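The interplay described above, where fixed rules emit recommended/avoided actions and an arbitration weight that biases a learned low-level policy, can be sketched as follows. All function names, thresholds, and the blending scheme are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch: a rule-based advisor compiled from a task spec emits
# recommended/avoided actions plus an arbitration weight, which is then
# blended with a low-level RL policy's action preferences.
from dataclasses import dataclass

ACTIONS = ["north", "south", "east", "west", "hover", "return_to_base"]

@dataclass
class Advice:
    recommended: set   # actions the rules favor
    avoided: set       # actions the rules discourage
    weight: float      # arbitration weight: how strongly rules override RL

def advise(battery_frac: float, dist_to_obstacle: float, goal_dir: str) -> Advice:
    """Deterministic, interpretable rules (assumed thresholds)."""
    # Rule 1: low battery -> strongly recommend returning to base.
    if battery_frac < 0.2:
        return Advice({"return_to_base"}, set(ACTIONS) - {"return_to_base"}, 0.9)
    # Rule 2: obstacle nearby -> hover, avoid pushing toward the goal.
    if dist_to_obstacle < 2.0:
        return Advice({"hover"}, {goal_dir}, 0.6)
    # Default: gently recommend moving toward the current goal.
    return Advice({goal_dir}, set(), 0.2)

def arbitrate(rl_probs: dict, advice: Advice) -> str:
    """Blend RL preferences with rule advice: boost recommended actions,
    suppress avoided ones, scaled by the arbitration weight."""
    scores = {}
    for action, p in rl_probs.items():
        bonus = advice.weight if action in advice.recommended else 0.0
        penalty = advice.weight if action in advice.avoided else 0.0
        scores[action] = (1.0 - advice.weight) * p + bonus - penalty
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rl_probs = {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # untrained, uniform RL policy
    # Low battery: the rule overrides the uniform policy.
    print(arbitrate(rl_probs, advise(0.1, 10.0, "north")))
```

The rule-derived metadata (which rule fired, the arbitration weight) could also be stored alongside transitions to drive the enhanced experience replay the paper describes, though the replay mechanics are not detailed here.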