Automated Segmentation and Tracking of Group Housed Pigs Using Foundation Models

arXiv cs.CV / 4/7/2026


Key Points

  • The paper proposes a foundation-model (vision-language) centered pipeline for label-efficient, automated segmentation and tracking of group-housed nursery pigs in precision livestock farming.
  • It combines pretrained backbones with lightweight farm-specific adaptation via modular post-processing, reducing reliance on extensive per-farm labeled data and retraining.
  • Baseline detection using Grounding-DINO performs well in daytime but degrades under night-vision and heavy occlusion, leading the authors to add temporal tracking logic.
  • Short-term segmentation with Grounded-SAM2, evaluated on 550 one-minute video clips, achieved over 80% fully correct tracks after post-processing, with most remaining errors tied to mask quality or duplicated labels.
  • For long-duration identity consistency, the study introduces a long-term tracking pipeline (initialization, tracking, matching, mask refinement, re-identification, and quality control) and reports strong metrics on a 132-minute continuous video with no identity switches.
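The matching step in a tracking pipeline like the one above can be illustrated with a minimal tracking-by-detection sketch: per-frame detections are greedily assigned to existing tracks by bounding-box IoU. This is a standard heuristic for illustration only, not necessarily the paper's exact matching rule; the function names and the 0.5 threshold are assumptions.

```python
def box_iou(a, b):
    """IoU between two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_match(tracks, detections, iou_thresh=0.5):
    """Greedily assign detections to tracks by descending IoU, one-to-one.

    tracks: {track_id: box}; detections: list of boxes.
    Returns (matches {track_id: det_index}, unmatched track ids,
    unmatched detection indices).
    """
    pairs = [(box_iou(t_box, d_box), tid, di)
             for tid, t_box in tracks.items()
             for di, d_box in enumerate(detections)]
    pairs.sort(key=lambda p: p[0], reverse=True)
    matches, used_t, used_d = {}, set(), set()
    for iou, tid, di in pairs:
        if iou < iou_thresh:
            break  # remaining pairs overlap too little to match
        if tid in used_t or di in used_d:
            continue  # track or detection already claimed
        matches[tid] = di
        used_t.add(tid)
        used_d.add(di)
    unmatched_tracks = [t for t in tracks if t not in used_t]
    unmatched_dets = [d for d in range(len(detections)) if d not in used_d]
    return matches, unmatched_tracks, unmatched_dets
```

In a full pipeline, unmatched detections would spawn new tracks or trigger re-identification, while unmatched tracks become candidates for occlusion handling and post-hoc quality control.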

Abstract

Foundation models (FMs) are reshaping computer vision by reducing reliance on task-specific supervised learning and leveraging general visual representations learned at scale. In precision livestock farming, most pipelines remain dominated by supervised learning models that require extensive labeled data, repeated retraining, and farm-specific tuning. This study presents an FM-centered workflow for automated monitoring of group-housed nursery pigs, in which pretrained vision-language FMs serve as general visual backbones and farm-specific adaptation is achieved through modular post-processing. Grounding-DINO was first applied to 1,418 annotated images to establish baseline detection performance. While detection accuracy was high under daytime conditions, performance degraded under night-vision and heavy occlusion, motivating the integration of temporal tracking logic. Building on these detections, short-term video segmentation with Grounded-SAM2 was evaluated on 550 one-minute video clips; after post-processing, over 80% of 4,927 active tracks were fully correct, with most remaining errors arising from inaccurate masks or duplicated labels. To support identity consistency over extended durations, we further developed a long-term tracking pipeline integrating initialization, tracking, matching, mask refinement, re-identification, and post-hoc quality control. This system was evaluated on a continuous 132-minute video and maintained stable identities throughout. On 132 uniformly sampled ground-truth frames, the system achieved a mean region similarity (J) of 0.83, contour accuracy (F) of 0.92, J&F of 0.87, MOTA of 0.99, and MOTP of 90.7%, with no identity switches. Overall, this work demonstrates how FM prior knowledge can be combined with lightweight, task-specific logic to enable scalable, label-efficient, and long-duration monitoring in pig production.
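The metrics quoted above have standard definitions that can be sketched directly: region similarity J is the IoU between predicted and ground-truth masks (as in the DAVIS benchmark), and MOTA is the usual CLEAR-MOT formula. The helper names below are illustrative, not from the paper.

```python
import numpy as np

def region_similarity(pred_mask, gt_mask):
    """J: intersection-over-union between two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks count as a perfect match.
    return inter / union if union else 1.0

def mota(num_fn, num_fp, num_idsw, num_gt):
    """CLEAR-MOT accuracy: 1 - (misses + false positives + ID switches) / GT."""
    return 1.0 - (num_fn + num_fp + num_idsw) / num_gt
```

With zero identity switches, as reported for the 132-minute video, MOTA is driven entirely by residual misses and false positives.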