Unlocking Positive Transfer in Incrementally Learning Surgical Instruments: A Self-reflection Hierarchical Prompt Framework
arXiv cs.CV / 4/6/2026
Key Points
- The paper addresses incremental learning for surgical video scene parsing, where models must learn to segment an expanding set of instruments over time without catastrophic forgetting.
- It proposes a self-reflection hierarchical prompt framework that enables both positive forward transfer (reusing past knowledge to learn new instrument classes) and positive backward transfer (improving earlier learned classes after learning new ones).
- The method uses a frozen pre-trained model with dynamically appended instrument-aware prompts organized in a hierarchical prompt parsing tree, exposing shared knowledge for easier learning of new classes.
- To strengthen backward transfer while preserving old capabilities, it applies self-reflection refinement using directed-weighted graph propagation informed by knowledge associations in the tree.
- Experiments indicate the framework works with both CNN-based and transformer-based (foundation) models, outperforming competing approaches by more than 5% and 11%, respectively, on two public benchmarks.
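The interplay of forward and backward transfer described above can be illustrated with a minimal sketch, in pure Python. All names here (`PromptNode`, `HierarchicalPromptPool`, `refine_backward`) are hypothetical illustrations of the general idea — a frozen backbone with a growing tree of prompts, new prompts initialized from shared parent knowledge, and graph-weighted propagation back to older prompts — not the paper's actual implementation.

```python
class PromptNode:
    """A prompt vector stored at one node of the prompt parsing tree."""
    def __init__(self, name, vector, parent=None):
        self.name = name
        self.vector = list(vector)
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)


class HierarchicalPromptPool:
    """Frozen backbone plus a growing tree of instrument-aware prompts."""
    def __init__(self, dim):
        self.dim = dim
        self.root = PromptNode("shared", [0.0] * dim)  # shared knowledge
        self.leaves = {}  # instrument class name -> leaf prompt node

    def add_instrument(self, name):
        # Forward transfer: initialize the new prompt from the shared
        # parent's vector instead of from scratch.
        node = PromptNode(name, self.root.vector, parent=self.root)
        self.leaves[name] = node
        return node

    def refine_backward(self, edges, lr=0.1):
        # Backward transfer: propagate knowledge along a directed,
        # weighted graph of (src, dst, weight) edges so that prompts for
        # earlier classes absorb related new knowledge.
        for src, dst, w in edges:
            s, d = self.leaves[src], self.leaves[dst]
            d.vector = [dv + lr * w * (sv - dv)
                        for sv, dv in zip(s.vector, d.vector)]


pool = HierarchicalPromptPool(dim=4)
grasper = pool.add_instrument("grasper")
# Simulate task-specific learning on the new prompt only (backbone frozen).
grasper.vector = [1.0, 0.0, 0.0, 0.0]
scissors = pool.add_instrument("scissors")
scissors.vector = [0.0, 1.0, 0.0, 0.0]
# Knowledge from the newly learned "scissors" prompt flows back along a
# weighted edge to refine the older "grasper" prompt.
pool.refine_backward([("scissors", "grasper", 0.5)])
print(grasper.vector)  # old prompt nudged toward related new knowledge
```

Only the prompts are ever trained here; the backbone is untouched, which is what bounds catastrophic forgetting in prompt-based continual learning.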