Captioning Daily Activity Images in Early Childhood Education: Benchmark and Algorithm

arXiv cs.CV / 4/3/2026


Key Points

  • The paper addresses two limitations in early childhood education (ECE) image captioning: the scarcity of domain-specific datasets, and standard training paradigms that yield generic descriptions or unstable optimization on hard samples.
  • It introduces ECAC, a large-scale benchmark with 256,121 real-world ECE daily activity images, expert-level captions, and fine-grained labels, along with a domain-specific evaluation protocol (Teaching Toy Recognition Score, TTS) for professional object naming accuracy.
  • To improve fine-grained recognition, it proposes RSRS, a hybrid training framework that conditionally switches between reinforcement learning and supervised fine-tuning to stabilize optimization and reduce “advantage collapse.”
  • Using ECAC and RSRS, the authors develop KinderMM-Cap-3B, a domain-adapted multimodal LLM, reporting a TTS of 51.06 and improved caption quality over prior baselines, suggesting usefulness for specialized educational applications.
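The paper does not spell out the TTS formula here, but a plausible minimal sketch, assuming TTS scores how many annotated professional object names (e.g., specific teaching-toy terms) appear in a generated caption, might look like the following. The function name and example data are hypothetical, not from the paper:

```python
def teaching_toy_recognition_score(pred_caption: str, gold_names: list[str]) -> float:
    """Hypothetical TTS-style metric sketch: the fraction of annotated
    professional object names found verbatim (case-insensitively) in the
    generated caption, scaled to 0-100. The paper's actual protocol may
    use fuzzier matching or per-category weighting."""
    pred = pred_caption.lower()
    hits = sum(1 for name in gold_names if name.lower() in pred)
    return 100.0 * hits / max(len(gold_names), 1)

# Hypothetical example: both expert-labeled object names are recovered.
score = teaching_toy_recognition_score(
    "Children stack Froebel gift blocks on the carpet.",
    ["Froebel gift blocks", "carpet"],
)
```

A corpus-level TTS would then presumably average this per-image score over the evaluation set.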

Abstract

Image captioning for Early Childhood Education (ECE) is essential for automated activity understanding and educational assessment. However, existing methods face two key challenges. First, the lack of large-scale, domain-specific datasets limits the model's ability to capture fine-grained semantic concepts unique to ECE scenarios, resulting in generic and imprecise descriptions. Second, conventional training paradigms exhibit limitations in enhancing professional object description capability, as supervised learning tends to favor high-frequency expressions, while reinforcement learning may suffer from unstable optimization on difficult samples. To address these limitations, we introduce ECAC, a large-scale benchmark for ECE daily activity image captioning, comprising 256,121 real-world images annotated with expert-level captions and fine-grained labels. ECAC is further equipped with a domain-oriented evaluation protocol, the Teaching Toy Recognition Score (TTS), to explicitly measure professional object naming accuracy. Furthermore, we propose RSRS (Reward-Conditional Switch of Reinforcement Learning and Supervised Fine-Tuning), a hybrid training framework that dynamically alternates between RL and supervised optimization. By rerouting hard samples with zero rewards to supervised fine-tuning, RSRS effectively mitigates advantage collapse and enables stable optimization for fine-grained recognition. Leveraging ECAC and RSRS, we develop KinderMM-Cap-3B, a domain-adapted multimodal large language model. Extensive experiments demonstrate that our model achieves a TTS of 51.06, substantially outperforming state-of-the-art baselines while maintaining superior caption quality, highlighting its potential for specialized educational applications.
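The switching logic described in the abstract, rerouting zero-reward hard samples from RL to supervised fine-tuning, can be sketched as follows. This assumes a GRPO-style group-relative baseline (mean reward over sampled rollouts); when every rollout earns zero reward, all advantages are zero and the RL gradient vanishes (the "advantage collapse" the paper describes), so the sample falls back to SFT. The callables `rl_loss_fn` and `sft_loss_fn` are hypothetical stand-ins for the real policy-gradient and cross-entropy objectives:

```python
def rsrs_loss(rewards, rl_loss_fn, sft_loss_fn):
    """Sketch of a Reward-Conditional Switch between RL and SFT.

    rewards:     per-rollout rewards for one training sample.
    rl_loss_fn:  hypothetical RL objective, taking per-rollout advantages.
    sft_loss_fn: hypothetical supervised loss on the reference caption.
    """
    if all(r == 0 for r in rewards):
        # Zero-reward group => zero advantages => no RL signal.
        # Reroute this hard sample to supervised fine-tuning instead.
        return sft_loss_fn()
    mean_r = sum(rewards) / len(rewards)
    advantages = [r - mean_r for r in rewards]  # group-relative advantages
    return rl_loss_fn(advantages)
```

The key design point is that the switch is conditioned on the observed rewards per sample, so easy samples keep receiving RL updates while hard samples still contribute a stable supervised gradient.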