Land cover mapping with Sentinel-2 imagery using deep learning semantic segmentation models

Authored by
Oleksandr Honcharov, Viktoriia Hnatushenko
Abstract

Land cover mapping is essential for environmental monitoring and for evaluating the effects of human activities. Recent studies have demonstrated the effective application of specific deep learning models for tasks such as wetland mapping. Nonetheless, it remains unclear which advanced models developed for natural images are best suited to remote sensing data. This study focuses on the segmentation of agricultural fields in satellite imagery to distinguish cultivated from non-cultivated areas. We used Sentinel-2 imagery acquired over Ukraine during the summer of 2023, capturing the country's varied land cover. The models were trained to differentiate three principal classes: water, fields, and background. We selected and optimised five semantic segmentation models, each representing a distinct architectural variant derived from U-Net. All models performed robustly, with overall accuracy ranging from 80% to 89.2%. The best-performing models were U-Net with Residual Blocks and U-Net with Residual Blocks and Batch Normalisation, whereas U-Net with LeakyReLU Activation offered considerably faster inference. The findings suggest that semantic segmentation models are well suited to efficient land cover mapping from multispectral satellite imagery and establish a reliable benchmark for assessing future advances in this domain.
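
To illustrate the kind of building block the abstract refers to, the following minimal sketch shows a residual block with batch normalisation and LeakyReLU of the sort used in the U-Net variants compared in the paper. It is not the authors' code: the framework (PyTorch), channel counts, LeakyReLU slope, and input band count are illustrative assumptions; only the three target classes (water, fields, background) come from the abstract.

# Minimal sketch of a residual block with batch normalisation, as used in
# U-Net-style encoders/decoders. All hyperparameters here are assumptions.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with batch normalisation and a skip connection."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # LeakyReLU is one of the activation variants compared in the paper;
        # the negative slope value is an assumption.
        self.act = nn.LeakyReLU(negative_slope=0.01)
        # 1x1 projection so the skip connection matches the output channels.
        self.skip = (
            nn.Identity()
            if in_channels == out_channels
            else nn.Conv2d(in_channels, out_channels, kernel_size=1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.skip(x)
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + residual)


if __name__ == "__main__":
    # Example: a hypothetical 10-band Sentinel-2 patch passed through one block;
    # in a full U-Net such blocks would feed a 3-class segmentation head
    # (water, fields, background).
    block = ResidualBlock(in_channels=10, out_channels=64)
    patch = torch.randn(1, 10, 256, 256)
    print(block(patch).shape)  # torch.Size([1, 64, 256, 256])

In a full model, stacking such blocks in the encoder and decoder is what distinguishes the "U-Net with Residual Blocks" variants from the plain U-Net baseline; the batch-normalised version simply keeps the BatchNorm2d layers shown above.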

Organisational unit(s)
Institut für Photogrammetrie und Geoinformation
External organisation(s)
Ukrainian State University of Science and Technologies
Type
Article in conference proceedings
Pages
1-18
Number of pages
18
Publication date
01.02.2025
Publication status
Published
Peer-reviewed
Yes
ASJC Scopus subject areas
General Computer Science
Sustainable Development Goals
SDG 15 – Life on Land
Electronic version(s)
https://ceur-ws.org/Vol-3909/Paper_1.pdf (Access: Open)