Plagiarism or Productivity? Students' Moral Disengagement and Behavioral Intentions to Use ChatGPT in Academic Writing

arXiv cs.AI / 3/23/2026


Key Points

  • The study identifies five moral disengagement mechanisms—moral justification, euphemistic labeling, displacement of responsibility, minimizing consequences, and attribution of blame—as predictors of Filipino college students' attitudes, subjective norms, and perceived behavioral control toward using ChatGPT in academic writing.
  • Among these mechanisms, attribution of blame has the strongest influence on attitudes and perceived control, making it a key driver of students' behavioral intention to use ChatGPT.
  • Attitudes toward using ChatGPT are the most powerful predictor of behavioral intention, and the model explains more than half of the variance in intention.
  • The findings suggest that institutional gaps and unclear rules help justify AI use, underscoring the need for clear academic integrity policies, ethical guidance, and classroom support, while noting that intention-based models may not fully capture student behavior.

Abstract

This study examined how moral disengagement influences Filipino college students' intention to use ChatGPT in academic writing. The model tested five mechanisms: moral justification, euphemistic labeling, displacement of responsibility, minimizing consequences, and attribution of blame. These mechanisms were analyzed as predictors of attitudes, subjective norms, and perceived behavioral control, which then predicted behavioral intention. A total of 418 students with ChatGPT experience participated. The results showed that several moral disengagement mechanisms influenced students' attitudes and sense of control. Among the predictors, attribution of blame had the strongest influence, while attitudes had the highest impact on behavioral intention. The model explained more than half of the variation in intention. These results suggest that students often rely on institutional gaps and peer behavior to justify AI use. Many believe it is acceptable to use ChatGPT for learning or when rules are unclear. This shows a need for clear academic integrity policies, ethical guidance, and classroom support. The study also recognizes that intention-based models may not fully explain student behavior. Emotional factors, peer influence, and convenience can also affect decisions. The results provide useful insights for schools that aim to support responsible and informed AI use in higher education.