Simulating Validity: Modal Decoupling in MLLM-Generated Feedback on Science Drawings

arXiv cs.AI / 5/1/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates whether multimodal LLM-generated feedback on students’ hand-drawn science diagrams is actually grounded in the specific visual evidence in those drawings.
  • It finds frequent “grounding failures” consistent with modal decoupling, including object, attribute, and relation mismatches as well as “false absence,” where the model incorrectly treats depicted elements as missing.
  • From 150 middle-school drawings produced in a kinetic molecular theory unit, the study generated 300 GPT-5.1 feedback instances and found that 41.3% contained at least one grounding error.
  • An “inventory-list-first” workflow (sketched below) reduced several error types and the overall error rate, but roughly one in three outputs remained flawed, with false absence the dominant failure mode.
  • The authors conclude that surface plausibility is a poor signal for spotting invalid feedback, and that generating valid feedback will likely require grounding mechanisms beyond standard prompting strategies.

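The inventory-list-first workflow lends itself to a two-stage prompt chain: first elicit an explicit inventory of what is visibly present in the drawing, then generate feedback conditioned on that inventory. The following is a minimal Python sketch of that idea, not the authors' implementation: `call_mllm` is a hypothetical wrapper (the study used GPT-5.1, but its API setup is not described here), and the prompt wording is illustrative rather than the study's actual instrument.

```python
def call_mllm(image_path: str, prompt: str) -> str:
    """Hypothetical wrapper around a multimodal LLM call (e.g., GPT-5.1).

    Replace the body with your client of choice; the study's actual
    API usage is not described in this summary.
    """
    raise NotImplementedError


def inventory_first_feedback(image_path: str, task: str) -> dict[str, str]:
    # Stage 1: commit the model to an explicit inventory of what is
    # visibly present -- objects, their attributes, and relations --
    # before any evaluative judgment is made.
    inventory = call_mllm(
        image_path,
        "List only what is visibly present in this student drawing: "
        "every object, its attributes (e.g., count, size, arrangement), "
        "and the relations between objects. Do not evaluate or advise.",
    )
    # Stage 2: generate feedback conditioned on that inventory, so each
    # claim can be checked against an enumerated piece of evidence.
    feedback = call_mllm(
        image_path,
        f"Task: {task}\n"
        f"Inventory of the drawing:\n{inventory}\n"
        "Give feedback on the drawing. Ground every claim in an inventory "
        "item, and only report an element as missing if the inventory "
        "confirms its absence.",
    )
    return {"inventory": inventory, "feedback": feedback}
```

The point of the first stage is to force the model to commit to the visual evidence before evaluating, so that any mismatch or false absence in the feedback can be traced back to a concrete inventory claim.
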
Abstract

In science education, students frequently construct hand-drawn visual models of scientific phenomena. These drawings encode information through visual objects, their attributes, and the relations between them. Multimodal large language models (MLLMs) are increasingly used to generate feedback on students' hand-drawn scientific models. However, the validity of such feedback depends on whether the model's claims are grounded in the specific visual evidence of the student drawing. This study uncovers grounding failures, consistent with modal decoupling, in off-the-shelf MLLM feedback, where outputs remain pedagogically plausible in form while contradicting the drawing or treating depicted elements as missing. Using N = 150 middle-school drawings from a kinetic molecular theory unit spanning five modeling tasks and three competence levels, we generated N = 300 feedback instances with GPT-5.1. All outputs were coded for four grounding error types: object mismatch, attribute mismatch, relation mismatch, and false absence. Grounding failures were common: 41.3% of feedback instances contained at least one error. An inventory-list-first workflow reduced several error categories and lowered the overall error rate, but it did not resolve the underlying limitation: approximately one in three outputs remained flawed, with false absence as the dominant failure mode. Moreover, feedback that appeared visually grounded offered little diagnostic value for identifying invalid instances. The findings indicate that modal decoupling is a substantial limitation and that valid feedback will require grounding mechanisms beyond common prompting strategies.
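To make the coding scheme concrete, here is a small sketch, assuming a per-instance set-of-errors representation (not the authors' instrument), of the four grounding error types and the "at least one error" aggregation behind the 41.3% figure. The 124-of-300 count in the check is back-calculated from the reported percentage, not taken from the paper.

```python
from enum import Enum


class GroundingError(Enum):
    """Four grounding error types coded in the study (paraphrased)."""
    OBJECT_MISMATCH = "object mismatch"        # feedback names an object the drawing lacks
    ATTRIBUTE_MISMATCH = "attribute mismatch"  # wrong claim about size, count, or arrangement
    RELATION_MISMATCH = "relation mismatch"    # wrong claim about how depicted elements relate
    FALSE_ABSENCE = "false absence"            # a depicted element is treated as missing


def any_error_rate(coded: list[set[GroundingError]]) -> float:
    """Share of feedback instances coded with at least one grounding error."""
    return sum(1 for errors in coded if errors) / len(coded)


# Back-of-the-envelope check: 124 flawed instances out of 300
# reproduces the reported headline rate of ~41.3%.
coded = [{GroundingError.FALSE_ABSENCE}] * 124 + [set()] * 176
assert round(any_error_rate(coded) * 100, 1) == 41.3
```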