International Geoscience and Remote Sensing Symposium

In 2021 a joint initiative of Belgium and The Netherlands

Tutorials

On Saturday 10 and Sunday 11 July 2021 a series of half-day and full day tutorials will be organised virtually via the online event platform.

IGARSS 2021 participants can register for a tutorial via the IGARSS 2021 official registration form which will be open as of April 10, 2021.

Participation Fees

              Full-Day   Half-Day
Participants  US $100    US $50

More information about each tutorial can be found in the full programme below.

Questions regarding Tutorials may be directed to tutorials@igarss2021.com.


FD-1: Sentinel-1 Persistent Scattering Interferometry for Ground Motion

Presented by Dinh Ho Tong Minh

Part I

Sat, 10 Jul, 12:00 - 16:00 (UTC)
Sat, 10 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sat, 10 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sat, 10 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)

Part II

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
Persistent Scattering Interferometry has proven its ability to provide millimeter-scale measurements of Earth-surface deformation over time. Even though the topic can be challenging, this tutorial makes it much easier to understand. In detail, the tutorial explains how to apply radar interferometry techniques to real-world Sentinel-1 images using user-oriented (no coding skills required!) open-source software. After a quick summary of the theory, the tutorial presents how to process TOPS Sentinel-1 SAR data to identify and monitor ground deformation. After one full day of training, participants will gain an intuitive understanding of the background of radar interferometry and will be able to produce time series of ground motion from a stack of SAR images.
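The core arithmetic behind PSI time series is the conversion of unwrapped interferometric phase to line-of-sight (LOS) displacement, d = -(λ/4π)·Δφ. The sketch below is an illustrative pure-Python example, not part of the tutorial software; the synthetic phase values and the linear-fit helper are invented for demonstration:

```python
import math

# Sentinel-1 C-band wavelength in metres (approx. 5.55 cm)
WAVELENGTH = 0.0555

def phase_to_los_mm(unwrapped_phase_rad):
    """Convert unwrapped interferometric phase (radians) to line-of-sight
    displacement in millimetres. The negative sign follows the convention
    that increasing phase means motion away from the sensor."""
    return [-(WAVELENGTH / (4 * math.pi)) * p * 1000.0
            for p in unwrapped_phase_rad]

def linear_velocity(days, disp_mm):
    """Least-squares linear fit: mean LOS velocity in mm/year."""
    n = len(days)
    mx = sum(days) / n
    my = sum(disp_mm) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(days, disp_mm)) / \
            sum((x - mx) ** 2 for x in days)
    return slope * 365.25  # mm/day -> mm/year

# Synthetic example: a persistent scatterer subsiding steadily,
# observed every 12 days (the Sentinel-1 repeat cycle)
days = [0, 12, 24, 36, 48, 60]
phase = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # unwrapped phase, radians
disp = phase_to_los_mm(phase)
print(round(linear_velocity(days, disp), 2), "mm/year")
```

A 0.1 rad phase step per 12-day cycle corresponds here to roughly -13 mm/year of LOS motion, illustrating how small phase changes map to millimetre-scale deformation rates.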

FD-2: Machine Learning in Remote Sensing - Theory and Applications for Earth Observation

Presented by Ronny Hänsch, Devis Tuia, Andrea Marinoni, Ribana Roscher

Part I

Sat, 10 Jul, 12:00 - 16:00 (UTC)
Sat, 10 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sat, 10 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sat, 10 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)

Part II

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
Despite the wide and often successful application of machine learning techniques to analyse and interpret remotely sensed data, the complexity, special requirements, and selective applicability of these methods often hinder their use to full potential. The gap between sensor- and application-specific expertise on the one hand, and deep insight into and understanding of existing machine learning methods on the other, often leads to suboptimal results, unnecessary or even harmful optimizations, and biased evaluations. The aim of this tutorial is threefold: first, to provide insight into and a deep understanding of the algorithmic principles behind state-of-the-art machine learning approaches, including Random Forests and Convolutional Networks, feature learning, regularization priors, explainable AI, and multimodal data fusion. Second, to illustrate the benefits and limitations of machine learning with practical examples, including recommendations about proper preprocessing, initialization, and sampling, available sources of data and benchmarks, and human-machine interaction to generate training data. Third, to inspire new ideas by discussing unusual applications from remote sensing and other domains.
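To give a flavour of the Random Forest idea mentioned above, here is a minimal self-contained sketch of bootstrap-aggregated decision stumps (one-level trees) classifying toy two-band "pixels". Everything in it (data, helper names) is invented for illustration and is not the presenters' material:

```python
import random

def fit_stump(X, y):
    """Best single-feature threshold rule by training accuracy;
    the inverted rule is also considered via the 'flip' option."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] > t else 0 for row in X]
            acc = sum(p == label for p, label in zip(preds, y)) / len(y)
            for flip, a in ((False, acc), (True, 1 - acc)):
                if best is None or a > best[0]:
                    best = (a, f, t, flip)
    return best[1:]                      # (feature, threshold, flip)

def stump_predict(stump, row):
    f, t, flip = stump
    p = 1 if row[f] > t else 0
    return 1 - p if flip else p

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]   # bootstrap sample
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, row):
    votes = sum(stump_predict(s, row) for s in forest)
    return 1 if votes * 2 >= len(forest) else 0

# Toy two-band 'pixels': class 1 has a clearly higher band-0 value
X = [[0.1, 0.5], [0.2, 0.4], [0.15, 0.6],
     [0.8, 0.5], [0.9, 0.4], [0.85, 0.55]]
y = [0, 0, 0, 1, 1, 1]
forest = fit_forest(X, y)
print([forest_predict(forest, row) for row in X])
```

Real Random Forests use full decision trees and random feature subsets at every split; the bootstrap-plus-voting structure, however, is exactly the one sketched here.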

FD-3: Natural disasters and hazards monitoring using Earth Observation data

Presented by Ramona Pelich, Marco Chini, Wataru Takeuchi, Young-Joo Kwak, Vitaliy Yurchenko

Part I

Sat, 10 Jul, 12:00 - 16:00 (UTC)
Sat, 10 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sat, 10 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sat, 10 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)

Part II

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
In recent years, natural disasters, i.e., hydro-geo-meteorological hazards and risks, have been frequently experienced by both developing and developed countries. 2020 was another year with numerous devastating water-related disasters hitting many regions across the globe. For example, in September 2020 southeastern France and northern Italy were affected by deadly flash floods caused by record rainfall, while severe flooding and landslides hit greater Jakarta in January 2020. This tutorial comprises the basic theoretical and experimental information essential for an emergency hazard and risk mapping process based on advanced satellite Earth Observation (EO) data, including both SAR and optical data. First, the tutorial gives a better understanding of disaster risk in the early stage by means of EO data available immediately after a disaster occurs. Then, after several comprehensive lectures focused on floods and landslides, a hands-on session will give all participants the opportunity to learn more about the practical EO tools available for rapid-response information. This full-day tutorial will demonstrate the implementation of disaster risk reduction and sustainable monitoring for effective emergency response and management, bridging decision and action activities.
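A common first step in rapid SAR flood mapping is thresholding low-backscatter (water) pixels, often with an automatic threshold such as Otsu's method. The following is a hedged, self-contained sketch on synthetic backscatter values in dB; the numbers and cluster parameters are invented for illustration and do not come from the tutorial material:

```python
import random

def otsu_threshold(values, bins=64):
    """Otsu's method: choose the threshold that maximises the
    between-class variance of a (roughly bimodal) histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = lo, -1.0, 0, 0.0
    for i, h in enumerate(hist):
        w0 += h
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        sum0 += (lo + (i + 0.5) * width) * h
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

# Synthetic sigma0 values in dB: open water ~ -20 dB, land ~ -8 dB
rng = random.Random(42)
water = [rng.gauss(-20, 1.0) for _ in range(200)]
land = [rng.gauss(-8, 1.5) for _ in range(300)]
t = otsu_threshold(water + land)
flood_mask = [v < t for v in water + land]   # True = flagged as water
print(round(t, 1), sum(flood_mask))
```

Operational flood mapping adds many refinements (speckle filtering, region growing, exclusion masks), but automatic thresholding of the backscatter histogram is the usual starting point.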

FD-5: From Big EO Data to Digital Twins: Hybrid AI and Quantum based Paradigms

Presented by Mihai Datcu

Part I

Sat, 10 Jul, 12:00 - 16:00 (UTC)
Sat, 10 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sat, 10 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sat, 10 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)

Part II

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
Digital and sensing technologies, i.e. Big Data, are revolutionary developments massively impacting the Earth Observation (EO) domains, while Artificial Intelligence (AI) now provides the methods to valorize Big Data. Today the accepted trend assumes that the more data we analyze, the smarter the analysis paradigms will perform. However, the data deluge, data diversity, and the broad range of specialized applications pose major new challenges. From the perspective of data valorization and applications, the use of multi-mission and related data for global applications still needs more effort. From the methodological side, the challenges relate to reproducibility, trustworthiness, physics awareness and, above all, the explainability of methods and results. The tutorial introduces and explains solutions based on the concept of Digital Twins. A Digital Twin is the convergence of remote sensing physical mechanisms tightly connected, communicating and continuously learning from data and mathematical models, data analytics, simulations and user interaction. The presentation covers the major developments of hybrid, physics-aware AI paradigms at the convergence of forward modelling, inverse problems and machine learning, to discover causalities and make predictions that maximize the information extracted from EO and related non-EO data. The tutorial explains how to automate the entire chain from multi-sensor EO and non-EO data to the physical parameters required in applications, filling the gaps and generating relevant, understandable layers of information. Digital Twins are technologies looking at the evolution of EO for at least the next two decades. The present explosive advance of AI methods was obtained thanks mainly to two factors: the advancement of theoretical bases and the performance evolution of IT, i.e. computation, storage and communication.
Today we are at the edge of a Quantum revolution, impacting technologies in communication, computing, sensing and metrology. Quantum computers and simulators are becoming ever more widely accessible, and will thus definitely impact the EO domains. In this context the tutorial lays the foundations of information processing from the perspective of quantum computing and algorithms for EO. The presentation will cover an introduction to quantum information theory, quantum algorithms and computers, presenting first results and analysing the main perspectives for EO applications.
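To make the quantum-computing viewpoint concrete, the state of a single qubit is just a two-component complex vector, and gates are unitary matrices acting on it. This toy sketch (not from the tutorial material) applies a Hadamard gate to |0> and reads out the measurement probabilities:

```python
import math

def hadamard(state):
    """Apply H = (1/sqrt(2)) [[1, 1], [1, -1]] to a one-qubit state
    given as amplitudes [a, b] for the basis states |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities are |amplitude|^2."""
    return [abs(amp) ** 2 for amp in state]

state = hadamard([1, 0])           # equal superposition of |0> and |1>
print(probabilities(state))        # ~[0.5, 0.5]
# Applying H twice returns |0>: quantum gates are reversible
print(probabilities(hadamard(state)))
```

Quantum algorithms exploit exactly this structure at scale: n qubits give a 2^n-dimensional state vector, which classical simulation can no longer afford to store explicitly.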

FD-6: Scalable Machine Learning with High Performance and Cloud Computing

Presented by Gabriele Cavallaro, Shahbaz Memon, Rocco Sedona

Part I

Sat, 10 Jul, 12:00 - 16:00 (UTC)
Sat, 10 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sat, 10 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sat, 10 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)

Part II

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
Recent advances in remote sensors with higher spectral, spatial, and temporal resolutions have significantly increased data volumes, which poses the challenge of processing and analyzing the resulting massive data in a timely fashion to support practical applications. Meanwhile, computationally demanding Machine Learning (ML) and Deep Learning (DL) techniques (e.g., deep neural networks with massive numbers of tunable parameters) demand parallel algorithms with high scalability. Therefore, data-intensive computing approaches have become indispensable tools to deal with the challenges posed by applications in geoscience and Remote Sensing (RS). In recent years, high-performance and distributed computing have advanced rapidly in terms of both hardware architectures and software. For instance, the popular graphics processing unit (GPU) has evolved into a highly parallel many-core processor with tremendous computing power and high memory bandwidth. Moreover, recent High Performance Computing (HPC) architectures and parallel programming have been shaped by the rapid advancement of DL and hardware accelerators such as modern GPUs. ML and DL have already brought crucial achievements in solving RS data classification problems. State-of-the-art results have been achieved by deep networks with backbones based on convolutional transformations (e.g., Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs)). Their hierarchical architecture, composed of stacked repetitive operations, enables the extraction of useful informative features from raw data and the modelling of high-level semantic content of RS data. On the one hand, DL can lead to more accurate classification of land cover classes when networks are trained over large annotated RS datasets. On the other hand, deep networks pose challenges in terms of training time.
In fact, using large datasets to train a DL model requires non-negligible time. In this scenario, approaches relying on local workstations provide only limited capabilities. Although modern commodity computers and laptops are becoming more powerful in terms of multi-core configurations and GPUs, limited computational power and memory remain an issue for fast training of large, high-accuracy models from correspondingly large amounts of data. Therefore, highly scalable, parallel and distributed architectures (such as HPC systems and Cloud Computing services) are a necessary solution to train DL classifiers in a reasonable amount of time while providing high accuracy in recognition tasks. The tutorial aims to provide a complete overview for an audience that is not familiar with these topics. It will follow a two-fold approach, from selected background lectures to practical hands-on exercises, so that participants can carry out their own research after the tutorial. The tutorial will discuss the fundamentals of what a supercomputer and a cloud consist of, and how we can take advantage of such systems to solve RS problems that require fast and highly scalable methods, such as realistic real-time scenarios.
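The synchronous data-parallel pattern used to scale training on such systems can be sketched in the abstract: each worker computes a gradient on its own data shard, the gradients are averaged (the "all-reduce" step), and every worker applies the same update. The toy single-process simulation below uses an invented one-parameter linear model; it illustrates the communication pattern only, not any particular framework:

```python
def grad_shard(w, shard):
    """Gradient of mean squared error for the model y = w*x
    on one worker's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def sync_data_parallel_step(w, shards, lr):
    """One synchronous step: each 'worker' computes a local gradient
    (in parallel in practice), gradients are averaged (all-reduce),
    then all workers apply the identical update."""
    grads = [grad_shard(w, s) for s in shards]
    g = sum(grads) / len(grads)          # the all-reduce: average
    return w - lr * g

# Data generated by y = 3x, split round-robin across 4 'workers'
data = [(x, 3.0 * x) for x in range(1, 17)]
shards = [data[i::4] for i in range(4)]
w = 0.0
for _ in range(200):
    w = sync_data_parallel_step(w, shards, lr=0.005)
print(round(w, 3))   # converges to the true slope 3.0
```

In practice this is exactly the pattern implemented by tools such as Horovod or PyTorch's DistributedDataParallel, with the averaging performed over MPI or NCCL across GPUs and nodes.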

HD-2: All you need to know about learning with limited labels for remote sensing

Presented by Angelica Aviles-Rivero, Rihuan Ke, Lichao Mou, Sudipan Saha, Carola-Bibiane Schonlieb, Xiaoxiang Zhu

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
In recent years, machine learning has experienced astonishing development in all domains. The advent of Deep Learning (DL) – since the pioneering work of Hinton in 2012 – changed the perspective of the community, which adopted DL as the go-to technique for different remote sensing tasks such as classification, segmentation and detection. However, a major drawback of these techniques is their high dependence on a large and well-representative corpus of labelled data. In several real-world problems this is a strong assumption, as annotated data contain strong human bias and, for many domains including remote sensing, labels are expensive and time-consuming to obtain. Motivated by these drawbacks, different paradigms that rely on fewer labels have experienced fast development, including Semi-Supervised Learning and Weakly Supervised Learning, which aim to exploit the inherent relationship between a very small set of labelled data and a huge amount of unlabelled data. In this tutorial, we aim to draw attention to current developments in learning with fewer labels for remote sensing data analysis. We will start by introducing the topic and giving an overview of the body of literature in the area. We will then present current challenges when dealing with remote sensing data, including street-level video, hyperspectral and time series data. We will close our tutorial by summarising the current challenges and opportunities in this domain. Some open questions related to the topic will also be discussed at the end.

SCHEDULE
PART 1
PART 1.A Tutorial Overview
PART 1.B Semi-Supervised Learning for Remote Sensing Data: Classic, Deep Learning and Hybrid Techniques
PART 1.C Weakly Supervised Learning for Time Series Remote Sensing Data Analysis
PART 2
PART 2.A Deep Semi-Supervised Learning for Street Level Data Analysis
PART 2.B Semantic Understanding of Remote Sensing Imagery Using Sparse Labels
PART 2.C Closing Remarks
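One of the simplest semi-supervised strategies in this family is pseudo-labelling: unlabelled samples that a model classifies confidently are added to the training set. The sketch below uses a nearest-centroid classifier with a distance-margin confidence rule; all names, data and the margin value are invented for illustration:

```python
import math

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def pseudo_label(labelled, unlabelled, confidence_margin=0.5):
    """One round of pseudo-labelling with a nearest-centroid classifier.
    An unlabelled point joins the training set only if it is clearly
    closer to one class centroid than to the runner-up."""
    classes = sorted({c for _, c in labelled})
    cents = {c: centroid([p for p, cc in labelled if cc == c])
             for c in classes}
    grown = list(labelled)
    for p in unlabelled:
        d = sorted((math.dist(p, cents[c]), c) for c in classes)
        if d[1][0] - d[0][0] > confidence_margin:   # confident prediction
            grown.append((p, d[0][1]))
    return grown

labelled = [([0.0, 0.0], "water"), ([5.0, 5.0], "land")]
unlabelled = [[0.5, 0.2], [4.8, 5.1], [2.5, 2.5]]   # last one is ambiguous
grown = pseudo_label(labelled, unlabelled)
print(len(grown))   # the ambiguous point is left out
```

Deep semi-supervised methods replace the centroid classifier with a neural network and the margin with a softmax-confidence or consistency criterion, but the self-training loop has the same shape.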

HD-3: Hands-on openEO: access cloud platforms using your preferred programming language

Presented by Pieter Kempeneers, Jeroen Dries, Dainius Masiliunas

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
openEO is a user-driven, open-source application programming interface (API) for deploying Earth Observation data analysis workflows on cloud platforms (backends). Users can program in their preferred language (e.g., Python, R, JavaScript), while the API establishes uniform communication with any backend that has implemented the openEO API. Examples of such backends are the Earth Observation Data Centre for Water Resources Monitoring GmbH (EODC), Google Earth Engine (GEE), Mundialis Actinia, Sinergise Sentinel Hub, VITO GeoPyspark, EURAC WCPS, and the JRC Big Data Analytics Platform. In addition, a number of DIASes can already be accessed. In this tutorial, the basic principles and functionalities of the openEO API will be introduced first. The participants will then interact with real backends and experience the flexibility of the API. They will create virtual data cubes from Earth Observation collections (Sentinel-1/2) and deploy their Python or R workflows in the cloud.
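A typical openEO workflow in the Python client looks roughly like the sketch below. It requires the `openeo` client package, an account on a backend, and network access; the backend URL, collection id, extents and band names are illustrative and vary per backend, so treat this as a hedged example rather than a guaranteed recipe:

```python
import openeo

# Connect to a backend (here the VITO backend, as an example) and log in
connection = openeo.connect("https://openeo.vito.be").authenticate_oidc()

# Build a virtual data cube from a Sentinel-2 collection
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.0, "south": 51.1, "east": 5.1, "north": 51.2},
    temporal_extent=["2021-06-01", "2021-06-30"],
    bands=["B04", "B08"],            # red and near-infrared
)

# Lazily defined processing: nothing runs until download/execute
ndvi = cube.ndvi()                   # NDVI from the red/NIR bands
composite = ndvi.max_time()          # temporal maximum composite

# Execute the process graph on the backend and fetch the result
composite.download("ndvi_max.tiff")
```

The same process graph can be sent unchanged to any other backend that implements the required openEO processes, which is precisely the portability the API is designed for.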

HD-4: Compressed Sensing, Finite-Rate-of-Sampling, and Sub-Nyquist Processing for Synthetic Aperture Radar

Presented by Kumar Vijay Mishra, Raghu G. Raj

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
In the past few years, new approaches to radar signal processing have been introduced that allow the radar to perform signal detection and parameter estimation from far fewer measurements than required by Nyquist sampling. These reduced-rate radars model the received signal as having a finite rate of innovation and employ the sub-Nyquist framework to obtain low-rate samples of the signal. Sub-Nyquist radars exploit the fact that the target scene is sparse, facilitating the use of compressed sensing (CS) methods in signal recovery. CS may also be applied to full-rate samples to facilitate reduced-rate processing of the sampled signal. Recent developments in reduced-rate sampling break the link between common radar design trade-offs such as range resolution and transmit bandwidth; dwell time and Doppler resolution; spatial resolution and number of antenna elements; and continuous-wave radar sweep time and range resolution. Several pulse-Doppler radar systems are based on these principles. Contrary to other CS-based designs, sub-Nyquist formulations directly address reduced-rate analog sampling in space and time, avoid a prohibitive dictionary size, and are robust to noise and clutter. Temporal sub-Nyquist processing estimates the target locations using less bandwidth than conventional systems. This also paves the way to cognitive radars that share their transmit spectrum with other communication services, thereby providing a robust solution for coexistence in spectrally crowded environments. Without impairing Doppler resolution, these systems also reduce the dwell time by transmitting interleaved radar pulses sparsely within a coherent processing interval, or "slow time". Extensions to the spatial domain have been proposed in the context of multiple-input multiple-output array radars, where few antenna elements are used without degradation in angular resolution.
For each setting, state-of-the-art hardware prototypes have been designed to demonstrate the real-time feasibility of sub-Nyquist radars. Recently, these concepts have also been applied to imaging systems such as synthetic aperture radar (SAR), inverse SAR (ISAR), and interferometric SAR (InSAR). In fact, SAR was one of the first applications of CS methods. SAR imaging data are not naturally sparse in the range-time domain; however, they are often sparse in other domains, such as wavelets. The motivation for applying sub-Nyquist methods is to address the challenge of oversampled data in SAR processing. This tutorial introduces the audience to reduced-rate sampling methods with a focus on SAR, ISAR, and InSAR systems. It will provide an overview of the signal processing theory needed to apply reduced-rate sampling to conventional radars, followed by its recent applications to imaging radars.
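The sparse-recovery step at the heart of CS processing can be illustrated with the classic Iterative Shrinkage-Thresholding Algorithm (ISTA), which alternates a gradient step on the data-fit term with soft-thresholding to promote sparsity. The tiny measurement matrix below is hand-crafted for illustration; real radar dictionaries are vastly larger:

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def ista(A, y, lam=0.3, step=0.1, iters=500):
    """ISTA for  min ||A x - y||^2 + lam * ||x||_1 :
    gradient step on the quadratic term, then shrinkage."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]       # residual
        grad = [sum(A[m][j] * r[m] for m in range(len(A)))     # A^T r
                for j in range(n)]
        x = [soft(xi + 2 * step * g, step * lam)
             for xi, g in zip(x, grad)]
    return x

# 3 measurements of a 4-sample signal with a single nonzero entry:
# fewer measurements than unknowns, yet sparsity makes recovery possible
A = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
x_true = [0.0, 0.0, 0.0, 2.0]
y = matvec(A, x_true)
x_hat = ista(A, y)
print([round(v, 2) for v in x_hat])
```

The recovered fourth entry is slightly shrunk below 2.0 by the l1 penalty, which is the expected lasso bias; accelerated variants (FISTA) and greedy methods (OMP) solve the same problem faster.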

HD-6: The ARTMO toolbox for analyzing and processing remote sensing data into biophysical variables

Presented by Jochem Verrelst, Jorge Vicent

Sun, 11 Jul, 12:00 - 16:00 (UTC)
Sun, 11 Jul, 20:00 - 00:00 China Standard Time (UTC +8)
Sun, 11 Jul, 14:00 - 18:00 Central Europe Summer Time (UTC +2)
Sun, 11 Jul, 05:00 - 09:00 Pacific Daylight Time (UTC -7)
This tutorial will focus on the use of the ARTMO (Automated Radiative Transfer Models Operator) and ALG (Atmospheric Look-up table Generator) radiative transfer models (RTMs), retrieval toolboxes and post-processing tools (https://artmotoolbox.com/) for the generation and interpretation of hyperspectral data. ARTMO and ALG bring together a diverse collection of leaf, canopy and atmosphere RTMs in a synchronized, user-friendly GUI environment. Essential tools are provided to create all kinds of look-up tables (LUTs), which can subsequently be used for mapping applications from optical images. A LUT, or user-collected field data, can then be fed into three types of mapping toolboxes: (1) parametric regression (e.g. vegetation indices), (2) nonparametric methods (e.g. machine learning methods), or (3) LUT-based inversion strategies. Each of these toolboxes provides various optimization algorithms so that the best-performing strategy can be applied for mapping applications. When coupled with an atmosphere RTM, retrieval can take place directly from top-of-atmosphere radiance data. Further, ARTMO's RTM post-processing tools include: (1) global sensitivity analysis, (2) emulation, i.e. approximating RTMs through machine learning, and (3) synthetic scene generation. Here we plan to present ARTMO's mapping capabilities at bottom- and top-of-atmosphere level using coupled leaf-canopy-atmosphere RTMs. The tutorial will consist of a brief theoretical session and a practical session, where the following topics will be addressed:
1. Basics of leaf, canopy and atmosphere RTMs: generation of RTM simulations.
2. Overview of retrieval methods: parametric, nonparametric, inversion and hybrid methods; coupling of top-of-canopy simulations with atmospheric RTM simulations for the generation of top-of-atmosphere radiance data.
3. Principles of emulation of hyperspectral data, and applications such as global sensitivity analysis and scene generation.
In the practical session we will learn to work with the ARTMO toolboxes, which provide practical solutions for the above topics. Step-by-step tutorials, demonstration cases and demo data will be provided.
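The LUT-based inversion strategy can be sketched in a few lines: simulate spectra over a grid of parameter values with an RTM, then return the parameter whose simulated spectrum best matches an observation under a cost function such as RMSE. The "RTM" below is an invented toy function standing in for PROSAIL-like models, and the grid and band constants are illustrative:

```python
import math

def toy_canopy_rtm(lai):
    """Toy 'RTM': three-band canopy reflectance as a saturating function
    of leaf area index (a stand-in for a real model such as PROSAIL)."""
    band_k = (0.5, 0.3, 0.8)             # per-band extinction, invented
    return [0.05 + 0.4 * (1 - math.exp(-k * lai)) for k in band_k]

def build_lut(n=121, lai_max=6.0):
    """Simulate spectra over a regular LAI grid: the look-up table."""
    lut = []
    for i in range(n):
        lai = lai_max * i / (n - 1)
        lut.append((lai, toy_canopy_rtm(lai)))
    return lut

def invert(spectrum, lut):
    """LUT inversion: the parameter whose simulated spectrum minimises
    the RMSE cost against the observed spectrum."""
    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return min(lut, key=lambda entry: rmse(entry[1], spectrum))[0]

lut = build_lut()
obs = toy_canopy_rtm(2.5)    # pretend this spectrum came from an image pixel
print(invert(obs, lut))      # recovers LAI = 2.5
```

The ARTMO inversion toolbox generalizes this scheme with multiple cost functions, noise handling and multi-solution regularization, but the simulate-then-match structure is the same.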