LoRA-NIR: Low-Rank Adaptation of Vision Transformers for Remote Sensing with Near-Infrared Imagery


ÜLKÜ İ., TANRIÖVER Ö. Ö., AKAGÜNDÜZ E.

IEEE Geoscience and Remote Sensing Letters, vol. 21, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 21
  • Publication Date: 2024
  • DOI: 10.1109/lgrs.2024.3449372
  • Journal Name: IEEE Geoscience and Remote Sensing Letters
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Aquatic Science & Fisheries Abstracts (ASFA), Communication Abstracts, Compendex, Geobase, INSPEC, Metadex, Civil Engineering Abstracts
  • Keywords: Infrared images, low-rank adaptation (LoRA), semantic segmentation, vision transformer (ViT)
  • Ankara University Affiliated: Yes

Abstract

Plant health can be monitored dynamically using multispectral sensors that measure near-infrared (NIR) reflectance. Despite this potential, obtaining and annotating high-resolution NIR images poses a significant challenge for training deep neural networks. Typically, large networks pre-trained on the RGB domain are fine-tuned on infrared images. This practice introduces a domain-shift issue because of the differing visual characteristics of RGB and NIR images. As an alternative to full fine-tuning, low-rank adaptation (LoRA) enables more efficient training by optimizing rank-decomposition matrices while keeping the original network weights frozen. However, existing parameter-efficient adaptation strategies for remote sensing images focus on RGB images and overlook domain-shift issues in the NIR domain. Therefore, this study investigates the potential benefits of using vision transformer (ViT) backbones pre-trained in the RGB domain, with low-rank adaptation, for downstream tasks in the NIR domain. Extensive experiments demonstrate that employing LoRA with pre-trained ViT backbones yields the best performance for downstream tasks applied to NIR images.
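To illustrate the LoRA mechanism the abstract refers to (trainable rank-decomposition matrices added to a frozen pre-trained layer), the sketch below wraps a standard linear projection in PyTorch. It is a minimal, generic example, not the authors' implementation: the `LoRALinear` class, the rank `r=4`, the scaling factor, and the ViT-like dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # keep pre-trained weights frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # Rank-decomposition matrices: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the learned low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)


# Hypothetical usage: adapt a 768-dimensional projection inside a ViT block.
proj = nn.Linear(768, 768)
adapted = LoRALinear(proj, r=4)
out = adapted(torch.randn(2, 197, 768))   # (batch, tokens, dim)
print(out.shape)                          # torch.Size([2, 197, 768])
```

Only `lora_A` and `lora_B` receive gradients, so the number of trainable parameters is a small fraction of the full backbone, which is what makes this form of adaptation parameter-efficient.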