A Deformable Attention-Based Detection Transformer with Cross-Scale Feature Fusion for Industrial Coil Spring Inspection
arXiv cs.CV / 3/17/2026
Key Points
- MSD-DETR introduces a structural re-parameterization strategy that decouples training-time multi-branch topology from inference-time efficiency, improving feature extraction while preserving real-time performance.
- It employs a deformable attention mechanism enabling content-adaptive spatial sampling to focus on defect-relevant regions despite morphological diversity and scale variations in coil springs.
- The approach uses cross-scale feature fusion with GSConv modules and VoVGSCSP blocks for effective multi-resolution information aggregation.
- On a real-world locomotive coil spring dataset, MSD-DETR achieves 92.4% mAP@0.5 at 98 FPS, outperforming YOLOv8 and RT-DETR while maintaining comparable speed, setting a new benchmark for industrial coil spring inspection.
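The first key point, structural re-parameterization, relies on the linearity of convolution: a training-time multi-branch block (e.g. a 3×3 conv, a 1×1 conv, and an identity path summed together) can be algebraically folded into a single 3×3 conv for inference, so the richer training topology costs nothing at deployment. The paper's exact branch design is not detailed here; the sketch below is a minimal RepVGG-style illustration (single channel, no batch norm, all names hypothetical) showing that the fused kernel reproduces the multi-branch output.

```python
import numpy as np

def conv2d(x, k):
    """Naive single-channel 'same' cross-correlation with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k3 = rng.standard_normal((3, 3))   # 3x3 branch
k1 = rng.standard_normal((1, 1))   # 1x1 branch

# Training-time multi-branch output: 3x3 conv + 1x1 conv + identity.
y_multi = conv2d(x, k3) + conv2d(x, k1) + x

# Inference-time fusion: the 1x1 kernel and the identity path both act
# only on the center tap of an equivalent 3x3 kernel.
k_fused = k3.copy()
k_fused[1, 1] += k1[0, 0] + 1.0

y_single = conv2d(x, k_fused)
print(np.allclose(y_multi, y_single))  # → True
```

In a real network each branch also carries its own batch norm, whose scale and shift are folded into the kernel and bias before the branches are summed; the linearity argument is identical.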