PASTA: Vision Transformer Patch Aggregation for Weakly Supervised Target and Anomaly Segmentation

arXiv cs.CV / 4/14/2026


Key Points

  • PASTA proposes a new pipeline that segments targets and anomalies under weak image-level supervision, aimed at industrial and agricultural settings such as steel scrap recycling and weeding, where unseen anomalies must be handled.
  • Instead of requiring dense pixel-level annotations, it identifies targets and anomalies by comparing the distributions of an observed scene and a nominal reference in the feature space of a self-supervised Vision Transformer (ViT).
  • Pixel-level region estimates are produced by zero-shot object segmentation, guided by semantic text prompts to the Segment Anything Model 3 (SAM 3).
  • Evaluated on a custom steel scrap recycling dataset and a plant dataset, the approach cuts training time by 75.8% while achieving strong segmentation performance: up to 88.3% IoU for targets and up to 63.5% IoU for anomalies.

Abstract

Detecting unseen anomalies in unstructured environments presents a critical challenge for industrial and agricultural applications such as material recycling and weeding. Existing perception systems frequently fail to satisfy the strict operational requirements of these domains, specifically real-time processing, pixel-level segmentation precision, and robust accuracy, due to their reliance on exhaustively annotated datasets. To address these limitations, we propose a pipeline for object segmentation and classification using weak image-level supervision, called 'Patch Aggregation for Segmentation of Targets and Anomalies' (PASTA). By comparing an observed scene with a nominal reference, PASTA identifies Target and Anomaly objects through distribution analysis in self-supervised Vision Transformer (ViT) feature spaces. Our pipeline utilizes semantic text prompts via the Segment Anything Model 3 to guide zero-shot object segmentation. Evaluations on a custom steel scrap recycling dataset and a plant dataset demonstrate a 75.8% training time reduction relative to domain-specific baselines. While being domain-agnostic, our method achieves superior Target (up to 88.3% IoU) and Anomaly (up to 63.5% IoU) segmentation performance in the industrial and agricultural domains.
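The abstract describes comparing an observed scene against a nominal reference via distribution analysis in ViT feature space, but does not spell out the mechanics. A minimal sketch of one plausible form of this idea is nearest-neighbor cosine distance of scene patch embeddings against a bank of nominal-reference patch embeddings; the function name, the threshold, and the exact scoring rule below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def anomaly_scores(scene_patches, nominal_patches, threshold=0.5):
    """Score scene patches by distance to a nominal reference bank.

    scene_patches:   (N, D) ViT patch embeddings of the observed scene.
    nominal_patches: (M, D) patch embeddings pooled from nominal images.
    Returns per-patch cosine distances and a boolean anomaly mask.
    (Illustrative sketch; not the paper's exact formulation.)
    """
    # L2-normalize so dot products become cosine similarities.
    s = scene_patches / np.linalg.norm(scene_patches, axis=1, keepdims=True)
    n = nominal_patches / np.linalg.norm(nominal_patches, axis=1, keepdims=True)
    # Distance of each scene patch to its nearest nominal patch:
    # a patch well explained by the reference scores near 0.
    dist = 1.0 - (s @ n.T).max(axis=1)
    return dist, dist > threshold

# Toy example: nominal bank spans the coordinate axes; the third scene
# patch points in a direction unlike any nominal patch and is flagged.
nominal = np.eye(8)
scene = np.vstack([np.eye(8)[:2], np.ones((1, 8))])
dist, mask = anomaly_scores(scene, nominal)
```

Patches flagged this way could then be handed to a promptable segmenter (as the paper does with SAM 3's text prompts) to refine the coarse patch-level evidence into pixel-level masks.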