A Benchmark Study of Segmentation Models and Adaptation Strategies for Landslide Detection from Satellite Imagery

arXiv cs.CV / April 21, 2026


Key Points

  • The study benchmarks multiple segmentation approaches for satellite-based landslide detection, including CNNs, transformer-based models, and large pretrained foundation models under consistent protocols.
  • Using the Globally Distributed Coseismic Landslide Dataset (GDCLD), the authors compare representative architectures and foundation models to quantify their relative segmentation performance.
  • The paper evaluates adaptation strategies, showing that parameter-efficient fine-tuning methods such as LoRA and AdaLoRA can cut trainable parameters by up to 95% while maintaining accuracy close to full fine-tuning.
  • It also analyzes robustness and generalization by testing performance under distribution shift using validation versus held-out test sets.
  • Overall, the results indicate that transformer-based segmentation models are strong for this task and that parameter-efficient fine-tuning is a practical path for adapting large models to landslide detection.

Abstract

Landslide detection from high-resolution satellite imagery is a critical task for disaster response and risk assessment, yet the relative effectiveness of modern segmentation architectures and fine-tuning strategies for this problem remains insufficiently understood. In this work, we present a systematic benchmarking study of convolutional neural networks, transformer-based segmentation models, and large pretrained foundation models for landslide detection. Using the Globally Distributed Coseismic Landslide Dataset (GDCLD), we evaluate representative CNN- and transformer-based segmentation models alongside large pretrained foundation models under consistent training and evaluation protocols. In addition, we compare full fine-tuning with parameter-efficient fine-tuning methods, including LoRA and AdaLoRA, to assess their performance-efficiency tradeoffs. Experimental results show that transformer-based models achieve strong segmentation performance, while parameter-efficient fine-tuning reduces trainable parameters by up to 95% with accuracy comparable to full fine-tuning. We further analyze generalization under distribution shift by comparing validation and held-out test performance.
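The paper's ~95% reduction in trainable parameters follows directly from the low-rank structure of LoRA: instead of updating a full weight matrix W, only two small factors A and B are trained. The sketch below is not the authors' code; it is a minimal NumPy illustration of the LoRA update and its per-layer parameter count, with the layer width and rank (d_in = d_out = 1024, r = 8) chosen as plausible assumed values rather than numbers taken from the paper.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass through a linear layer with a frozen pretrained weight W
    plus a trainable low-rank LoRA update: y = x @ (W + alpha * B @ A).T

    W: (d_out, d_in) frozen pretrained weight
    A: (r, d_in) trainable down-projection factor
    B: (d_out, r) trainable up-projection factor
    """
    return x @ (W + alpha * (B @ A)).T

# Per-layer trainable parameter counts (assumed sizes, not from the paper).
d_in, d_out, r = 1024, 1024, 8       # r is the LoRA rank
full_params = d_in * d_out           # trainable params under full fine-tuning
lora_params = r * (d_in + d_out)     # trainable params under LoRA

reduction = 1 - lora_params / full_params
print(f"full: {full_params}, LoRA: {lora_params}, reduction: {reduction:.1%}")
# For this layer the low-rank factors are well under 5% of the full weight,
# consistent with the paper's "up to 95%" reduction at the model level.

# With the standard LoRA initialization B = 0, the adapted layer starts out
# identical to the frozen pretrained layer.
x = np.random.randn(2, d_in)
W = np.random.randn(d_out, d_in) * 0.01
A = np.random.randn(r, d_in) * 0.01
B = np.zeros((d_out, r))
y = lora_forward(x, W, A, B)
assert np.allclose(y, x @ W.T)
```

AdaLoRA refines this scheme by reallocating the rank budget r across layers during training, but the parameter-count arithmetic above is the same.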