Tutor-Student Reinforcement Learning: A Dynamic Curriculum for Robust Deepfake Detection

arXiv cs.CV / March 26, 2026


Key Points

  • The paper argues that conventional supervised deepfake detection training is suboptimal because it treats all samples with equal importance, which can hinder robust generalization.
  • It introduces a Tutor-Student Reinforcement Learning (TSRL) framework that formulates curriculum learning as a Markov Decision Process where a PPO-based “Tutor” dynamically re-weights each training sample’s loss.
  • The Tutor’s state includes both visual features and training-history signals (e.g., EMA loss and forgetting counts), enabling it to focus on high-value “hard-but-learnable” examples.
  • The Tutor is rewarded according to the Student deepfake detector’s immediate performance change, specifically prediction flips from incorrect to correct, shaping a curriculum that improves training efficiency.
  • Experiments reportedly show better generalization against previously unseen deepfake manipulation techniques versus traditional uniform training.
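The re-weighting mechanism from the bullets above can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: the names (`SampleState`, `tutor_weight`, `weighted_batch_loss`) and the heuristic weighting rule are assumptions; in TSRL the weight would come from a learned PPO policy rather than a hand-written function.

```python
from dataclasses import dataclass

@dataclass
class SampleState:
    """Hypothetical per-sample training-history state the Tutor observes
    (the real state also includes visual features)."""
    ema_loss: float     # exponential moving average of this sample's loss
    forget_count: int   # times the sample flipped correct -> incorrect

def update_ema(prev_ema: float, loss: float, beta: float = 0.9) -> float:
    """Standard EMA update for a sample's historical loss."""
    return beta * prev_ema + (1.0 - beta) * loss

def tutor_weight(state: SampleState) -> float:
    """Placeholder policy: favor hard-but-learnable samples (moderate EMA
    loss, little forgetting). A stand-in for the learned PPO Tutor."""
    hardness = min(state.ema_loss, 1.0)
    learnability = 1.0 / (1.0 + state.forget_count)
    return max(0.0, min(1.0, hardness * learnability))

def weighted_batch_loss(losses, states):
    """Re-weight a batch: each sample's loss is scaled by its Tutor weight."""
    weights = [tutor_weight(s) for s in states]
    return sum(w * l for w, l in zip(weights, losses)) / max(sum(weights), 1e-8)
```

In an actual training loop, `weighted_batch_loss` would replace the uniform mean over per-sample losses before backpropagation, so the Student's gradients are dominated by the samples the Tutor deems high-value.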

Abstract

Standard supervised training for deepfake detection treats all samples with uniform importance, which can be suboptimal for learning robust and generalizable features. In this work, we propose a novel Tutor-Student Reinforcement Learning (TSRL) framework to dynamically optimize the training curriculum. Our method models the training process as a Markov Decision Process where a "Tutor" agent learns to guide a "Student" (the deepfake detector). The Tutor, implemented as a Proximal Policy Optimization (PPO) agent, observes a rich state representation for each training sample, encapsulating not only its visual features but also its historical learning dynamics, such as EMA loss and forgetting counts. Based on this state, the Tutor takes an action by assigning a continuous weight in [0, 1] to the sample's loss, thereby dynamically re-weighting the training batch. The Tutor is rewarded based on the Student's immediate performance change, specifically rewarding transitions from incorrect to correct predictions. This strategy encourages the Tutor to learn a curriculum that prioritizes high-value samples, such as hard-but-learnable examples, leading to a more efficient and effective training process. We demonstrate that this adaptive curriculum improves the Student's generalization capabilities against unseen manipulation techniques compared to traditional training methods. Code is available at https://github.com/wannac1/TSRL.
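The abstract's reward signal and forgetting counter are both driven by per-sample prediction flips between consecutive evaluations. A minimal sketch, assuming a simple unit reward per flip: the function name `step_signals` and the choice to give zero reward for all other transitions are illustrative assumptions; the paper only specifies that incorrect-to-correct transitions are rewarded, and that correct-to-incorrect flips are what the forgetting count tracks.

```python
def step_signals(prev_correct, now_correct):
    """Compute per-sample Tutor rewards and forgetting increments from
    prediction flips between two consecutive checks of the Student.

    prev_correct / now_correct: lists of bools, one per sample.
    Returns (rewards, forget_increments), aligned with the inputs.
    """
    rewards, forgets = [], []
    for p, n in zip(prev_correct, now_correct):
        # Reward the Tutor when the Student flips incorrect -> correct.
        rewards.append(1.0 if (not p and n) else 0.0)
        # Count correct -> incorrect flips as a forgetting event.
        forgets.append(1 if (p and not n) else 0)
    return rewards, forgets
```

The forgetting increments feed back into the Tutor's state for the next step, closing the loop: samples that are repeatedly forgotten accumulate history that the PPO policy can condition on when assigning future weights.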