Supervised, Semi-supervised, and Unsupervised Learning for Hyperspectral Regression


Abstract

In this chapter, we present an entire workflow for hyperspectral regression based on supervised, semi-supervised, and unsupervised learning. Hyperspectral regression is defined as the estimation of continuous parameters such as chlorophyll a, soil moisture, or soil texture from hyperspectral input data. The main challenges in hyperspectral regression are the high dimensionality and strong correlation of the input data, combined with small ground-truth datasets and dataset shift. The presented workflow is divided into three levels. (1) At the data level, the data is pre-processed, dataset shift is addressed, and the dataset is split appropriately. (2) The feature level covers unsupervised dimensionality reduction, unsupervised clustering, and manual feature engineering and feature selection. The unsupervised approaches include autoencoders (AE), t-distributed stochastic neighbor embedding (t-SNE), and uniform manifold approximation and projection (UMAP). (3) At the model level, the most commonly used supervised and semi-supervised machine learning models are presented, including random forests (RF), convolutional neural networks (CNN), and supervised self-organizing maps (SOM). We address the process of model selection, hyperparameter optimization, and model evaluation. Finally, we give an overview of upcoming trends in hyperspectral regression. Additionally, we provide comprehensive code examples and accompanying materials in the form of a hyperspectral dataset and Python notebooks via GitHub [98, 100].
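The three levels of the workflow can be illustrated in a minimal sketch using scikit-learn. This is not the chapter's actual code (see the linked GitHub materials for that); the synthetic data, the use of PCA as a stand-in for the unsupervised dimensionality-reduction step, and all parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for hyperspectral data: a few latent factors
# spread across many correlated spectral bands (illustrative only).
rng = np.random.default_rng(42)
n_samples, n_bands, n_latent = 300, 120, 5
latent = rng.normal(size=(n_samples, n_latent))
loadings = rng.normal(size=(n_latent, n_bands))
X = latent @ loadings + 0.1 * rng.normal(size=(n_samples, n_bands))
y = latent.sum(axis=1) + 0.1 * rng.normal(size=n_samples)  # continuous target

# (1) Data level: split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# (2) Feature level: unsupervised dimensionality reduction,
# fitted on the training data only to avoid information leakage.
pca = PCA(n_components=10).fit(X_train)
X_train_red = pca.transform(X_train)
X_test_red = pca.transform(X_test)

# (3) Model level: random forest regression and evaluation.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train_red, y_train)
score = r2_score(y_test, rf.predict(X_test_red))
```

The same skeleton applies when PCA is swapped for an autoencoder, t-SNE, or UMAP at the feature level, or the random forest for a CNN or supervised SOM at the model level.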

Publication
In: Prasad S., Chanussot J. (eds.) Hyperspectral Image Analysis. Advances in Computer Vision and Pattern Recognition. Springer, Cham.
Felix M. Riese

Consultant at Roche (CH) and MBA Fellow at CDI (FR).