
Crop mapping with multi-temporal and multi-sensor images

Importance of the work

Agriculture is a fast-growing sector in terms of its utilisation of EO-based products. It plays a crucial role in the global economy and is rapidly evolving, particularly due to climate change and increasing production demand. As populations and consumption grow, land, water and energy become progressively threatened resources, so smarter and more efficient land-cover monitoring offers a way to address several of these challenges. This study aims to exploit open and freely accessible cloud-computing tools and datasets for crop mapping, with the potential to deliver disruptive added value in EO to stakeholders worldwide, from policy makers to commercial and private users.


(a) Example of moving mean profiles (upper panel) of Sentinel-2 Normalised Difference Vegetation Index (NDVI) and bare-soil index (BSI) and (lower panel) VV, VH and VI (vegetation index) σ0 backscatter intensities. (b) Optical image of NDVI distribution over a Dutch land section. (c) Crop-type parcels correctly classified through machine learning, overlaid in blue. Credits: Beatrice Gottardi, ESA. Contains modified Copernicus Sentinel data (2019).
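The two Sentinel-2 indices shown in the figure have standard definitions from surface reflectance bands. A minimal sketch of how they can be computed, assuming reflectance values are already available as arrays (the band mapping in the comment is the usual Sentinel-2 convention, not taken from this article):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def bsi(swir, red, nir, blue):
    """Bare-Soil Index:
    ((SWIR + RED) - (NIR + BLUE)) / ((SWIR + RED) + (NIR + BLUE))."""
    swir, red = np.asarray(swir, dtype=float), np.asarray(red, dtype=float)
    nir, blue = np.asarray(nir, dtype=float), np.asarray(blue, dtype=float)
    return ((swir + red) - (nir + blue)) / ((swir + red) + (nir + blue))

# Illustrative reflectances (Sentinel-2: blue=B2, red=B4, nir=B8, swir=B11).
print(ndvi(nir=0.45, red=0.10))                            # dense vegetation -> high NDVI
print(bsi(swir=0.30, red=0.25, nir=0.20, blue=0.05))       # bare soil -> positive BSI
```

Dense vegetation pushes NDVI towards 1, while exposed soil raises BSI, which is why the two profiles in panel (a) move in opposition over the growing season.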

Overview

The project consisted of developing a machine-learning Random Forest (RF) classifier that identifies major regional crop types from radar and multispectral images. It combines 12-day moving median composites of Copernicus Sentinel-1 and Sentinel-2 imagery across one agricultural year (2017–2018). Commonly used vegetation indices, such as the Normalised Difference Vegetation Index (NDVI), were used in addition to image band intensities. The workflow was applied to the territory of the Netherlands, with ground truth from the national Basic Register of Crop Plots. The study relies on a parcel-based classification, developed entirely on Google Earth Engine.
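The workflow above can be sketched in miniature: each parcel gets one feature vector built from its 12-day median composites over the year, and an RF classifier is trained on labelled parcels. This is a toy illustration with synthetic data, not the ESA implementation (which runs on Google Earth Engine); the parcel counts, feature choices and RF settings here are all assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical parcel-level feature table: per parcel, ~30 twelve-day
# composite periods of a few features (e.g. NDVI, VV and VH backscatter),
# flattened into a single vector.
n_parcels, n_periods, n_features = 600, 30, 3
X = rng.normal(size=(n_parcels, n_periods * n_features))
y = rng.integers(0, 5, size=n_parcels)  # 5 illustrative crop classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train the Random Forest on labelled parcels, then score held-out parcels.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```

With real composites, the temporal dimension is what lets the forest separate crops by phenology: two classes with similar single-date spectra can still differ in when they green up and when they are harvested.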


Findings

Combining multi-sensor and multi-temporal images improves classification over areas with significant cloud cover throughout the year and provides more information about growth phenology. The accuracy assessment gives promising results, particularly for the detection of some crop classes (corn, wheat, sugar beet, onions and tulips), which range between 85% and 96% accuracy. Among these, higher accuracy was observed for winter crops of the same species, as accuracy depends on the observation period ingested into the model. A comparison with a Sentinel-1-only classification shows just a small reduction, about 1%, in overall accuracy. When the model was applied to the subsequent year, however, only a few crop classes were detected with good accuracy, so other approaches will need to be investigated.
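Per-class figures like the 85–96% range reported above are typically read off a confusion matrix. A small sketch of that computation with made-up labels (the parcel labels below are invented for illustration, not the study's results):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical reference vs predicted crop classes for ten parcels.
classes = ["corn", "wheat", "sugar beet", "onions", "tulips"]
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
y_pred = np.array([0, 0, 1, 0, 2, 2, 3, 3, 4, 3])

cm = confusion_matrix(y_true, y_pred)  # rows: reference, columns: predicted

# Producer's accuracy (recall) per class:
# correctly classified parcels / total reference parcels of that class.
producers = cm.diagonal() / cm.sum(axis=1)
for name, acc in zip(classes, producers):
    print(f"{name}: {acc:.0%}")
```

Reporting accuracy per class, rather than only overall, is what reveals that some crops (and the winter variants in particular) are much easier to separate than others.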
