Self-Supervised Learning of Plant Image Representations

arXiv cs.CV / 5/1/2026


Key Points

  • The paper studies self-supervised learning (SSL) for learning plant image representations to reduce dependence on scarce expert-labeled data in biodiversity monitoring.
  • It finds that several standard SSL augmentations (e.g., Gaussian blur, grayscale conversion, solarization) can hurt fine-grained plant species recognition by removing subtle discriminative cues.
  • The authors propose alternative, plant-suited transformations such as affine and posterization, which better preserve features needed for fine-grained tasks.
  • Training SimDINOv2 on the iNaturalist 2021 Plantae subset produces substantially stronger representations than training on ImageNet-1K, underscoring the value of domain-specific data.
  • Across ViT-Base and ViT-Large, the resulting models are competitive and sometimes outperform strong supervised baselines (Pl@ntCLEF, BioCLIP) on downstream recognition, especially in few-shot settings.
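To make the augmentation distinction concrete, here is a minimal pure-Python sketch (not the paper's actual pipeline, which presumably uses standard library implementations such as torchvision's `RandomPosterize` and `RandomAffine`) of the two plant-suited transformations. Posterization quantizes each 8-bit channel value by keeping only its top bits, coarsening color while leaving edges and venation patterns intact; an affine translation shifts the image geometrically without discarding any local texture. By contrast, blur or grayscale conversion destroys exactly the fine texture and color cues that separate similar species.

```python
def posterize(value: int, bits: int) -> int:
    """Quantize an 8-bit channel value, keeping only the top `bits` bits.

    Coarsens color levels while preserving edges and local texture.
    """
    mask = 0xFF & ~((1 << (8 - bits)) - 1)
    return value & mask


def translate(img: list[list[int]], dx: int, dy: int, fill: int = 0) -> list[list[int]]:
    """Shift a 2D grayscale grid by (dx, dy), padding vacated cells with `fill`.

    A toy stand-in for a general affine transform: geometry changes,
    but every surviving pixel keeps its original value.
    """
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out
```

For example, `posterize(100, 2)` keeps only the two most significant bits of 100 (binary `01100100`), yielding 64, and `translate([[1, 2], [3, 4]], 1, 0)` shifts the grid one column right, producing `[[0, 1], [0, 3]]`.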

Abstract

Automated plant recognition plays a crucial role in biodiversity monitoring and conservation, yet current approaches rely heavily on supervised learning, which is limited by the availability of expert-labeled data. Self-supervised learning (SSL) offers a scalable alternative, but existing methods and training protocols are largely designed for coarse-grained visual tasks and may not transfer well to fine-grained domains such as plant species recognition. In this work, we investigate SSL for plant image representation learning. We show that commonly used augmentations in SSL pipelines - such as Gaussian blur, grayscale conversion, and solarization - are detrimental in the context of plant images, as they remove subtle discriminative cues essential for fine-grained recognition. We instead identify alternative transformations, including affine and posterization, that are better suited to this domain. We further demonstrate that training SimDINOv2 on the iNaturalist 2021 Plantae subset yields significantly stronger representations than training on ImageNet-1K, highlighting the importance of domain-specific data for SSL. Our findings are consistent across both ViT-Base and ViT-Large architectures. Moreover, our models achieve competitive performance and sometimes outperform strong supervised baselines, Pl@ntCLEF and BioCLIP, on downstream plant recognition tasks in few-shot settings. Overall, our results highlight the critical importance of domain-adapted augmentation strategies and dataset selection in self-supervised learning, and provide practical guidelines for building scalable models for biodiversity monitoring.