RepreSent

Over the last decade, machine learning and deep learning have developed at an astonishing pace across many domains, ushering in a new data-driven era of science and technology. Following this trend, deep learning has also revolutionized the field of remote sensing and has been successfully applied to several Earth observation (EO) tasks, e.g., land cover classification, semantic segmentation, change detection, and disaster mapping. However, most of the deep learning-based methods developed for remote sensing are supervised, and a major pitfall of supervised deep learning techniques is their dependence on a large and representative corpus of labeled data, which is expensive and time-consuming to obtain in Earth observation. Thanks to the Copernicus program of the European Union and the European Space Agency (ESA), a massive amount of unlabeled EO data is currently available, yet supervised methods do not effectively exploit this abundant pool of unlabeled data.

In the computer vision literature, paradigms that rely on few or no labels have developed rapidly, e.g., unsupervised learning, transfer learning, self-supervised learning, semi-supervised learning, weakly supervised learning, and meta learning.
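
As a concrete illustration of one such paradigm, the minimal sketch below shows a contrastive self-supervised objective (an NT-Xent-style loss) that learns an image representation from unlabeled patches alone. The encoder, augmentations, and hyperparameters are illustrative assumptions, not the project's actual models or training pipeline.

    # Minimal sketch of contrastive self-supervised learning on unlabeled EO patches.
    # Everything here (architecture, augmentations, band count) is an illustrative assumption.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallEncoder(nn.Module):
        """Toy CNN encoder standing in for a real EO backbone (e.g., a ResNet)."""
        def __init__(self, in_channels=4, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, dim),
            )
        def forward(self, x):
            return self.net(x)

    def nt_xent_loss(z1, z2, temperature=0.1):
        """Contrastive loss: two augmented views of the same patch attract,
        all other patches in the batch repel."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
        sim = z @ z.t() / temperature                        # pairwise cosine similarities
        n = z1.shape[0]
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float("-inf"))           # exclude self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    # Unlabeled patches only: two random views per patch, no labels needed.
    encoder = SmallEncoder()
    patches = torch.randn(16, 4, 64, 64)                    # dummy 4-band image crops
    view1 = patches + 0.05 * torch.randn_like(patches)       # stand-ins for real augmentations
    view2 = torch.flip(patches, dims=[-1])
    loss = nt_xent_loss(encoder(view1), encoder(view2))
    loss.backward()

In practice, an encoder pretrained this way would later be fine-tuned or probed on a downstream EO task with far fewer labels than a fully supervised model would require.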

The main technical objective of this project is to harness the combined power of artificial intelligence and Earth observation by exploiting the above non-supervised learning paradigms. To this end, it is crucial to develop non-supervised solutions for impactful use cases that leverage unlabeled EO data. The consortium has four partners, namely DLR, EPFL, VTT, and e-GEOS.

The project further aims to devise methods that generalize well, adapting easily to different EO tasks and geographies.

These technical objectives will be fulfilled by defining suitable use cases from different thematic areas, e.g., agriculture and forestry, together with test sites in different locations. The non-supervised learning-based methods will be evaluated with standard quantitative performance indices, complemented by qualitative analyses of their generalization capability and versatility across different EO tasks.

RepreSent CCN: Extension of RepreSent for Scaling-up

The initial phase of the RepreSent project enabled us to delve into the field of representation learning and showcased its potential to make EO data and methodologies broadly accessible without requiring extensive labeling effort. From a technical perspective, the project has been successful, as evidenced by the positive feedback from the academic community on our publications and presentations, and by the growing interest in exploring, replicating, and extending our methodologies. Furthermore, reaching our current technology readiness level (TRL) has opened communication with a variety of stakeholders, including those in fields such as forest farming, environmental monitoring, and urban planning.

The project extension has three partners, namely DLR, VTT, and e-GEOS. The main objectives are:

  • The first objective focuses on improving the accuracy and timeliness of EO-based forest mapping using multi-temporal and multi-sensor data (in contrast to the bi-temporal and single/bi-sensor approaches studied earlier), leveraging self-supervised learning (SSL) methods.
  • The second objective is to enhance our ability to detect building anomalies. Given the varied and changing nature of urban environments, we aim to expand the study area, which will allow us to better understand anomaly patterns across different urban landscapes.
  • The third objective focuses on refining cloud detection methods. Having found that self-supervised learning can reach accuracy comparable to state-of-the-art supervised methods on a small-scale cloud dataset, we plan to extend our experiments to the CloudSEN12 dataset, which provides globally distributed data for validating our approach (see the sketch after this list).
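
As a rough illustration of this pretrain-then-fine-tune idea, the sketch below freezes an encoder assumed to have been pretrained with SSL on unlabeled imagery and trains only a lightweight per-pixel head on a small labeled batch of cloud masks. The model, checkpoint name, and data are hypothetical stand-ins, not the project's actual cloud-detection pipeline or the CloudSEN12 loading code.

    # Sketch: fine-tune a small per-pixel cloud head on top of a frozen SSL-pretrained encoder.
    # All names, shapes, and data below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ConvEncoder(nn.Module):
        """Fully convolutional encoder assumed to be SSL-pretrained on unlabeled imagery."""
        def __init__(self, in_channels=13, width=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            )
        def forward(self, x):
            return self.features(x)

    encoder = ConvEncoder()
    # encoder.load_state_dict(torch.load("ssl_pretrained.pt"))  # hypothetical checkpoint
    for p in encoder.parameters():                              # freeze pretrained features
        p.requires_grad = False

    head = nn.Conv2d(64, 2, kernel_size=1)                      # per-pixel classes: clear / cloud
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Dummy labeled batch standing in for 13-band image chips with binary cloud masks.
    images = torch.randn(4, 13, 128, 128)
    masks = torch.randint(0, 2, (4, 128, 128))

    for _ in range(5):                                          # a few fine-tuning steps
        logits = head(encoder(images))
        loss = criterion(logits, masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()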