# Matrix Cofactorization for Joint Unmixing and Classification of Hyperspectral Images

## Summary (2 min read)

### Introduction

- Index Terms: supervised learning, spectral unmixing, cofactorization, hyperspectral images.
- In particular, classification algorithms have received considerable attention from the scientific community.
- In the specific case of hyperspectral images (HSI), images capture a very rich signal since each pixel is a sampling of the reflectance spectrum of the corresponding area, typically in the visible and infrared spectral domains with hundreds of measurements.
- The core concept is to express the two problems of interest, namely spectral unmixing and classification, as factorization problems and then to introduce a coupling term to intertwine the two estimations.
- Finally, the method is tested and compared to other unmixing and classification methods in Section IV.

### II. PROBLEM STATEMENT

- As presented in Sections II-A and II-B, spectral unmixing and supervised classification are commonly expressed as factorization problems.
- In the proposed model, the link is made between the abundance matrix and the feature matrix.
- More precisely, the coupling term is expressed as a clustering term over the abundance vectors where the attribution vectors to the clusters are also the feature vectors of the classification as detailed in Section II-C.

### A. Spectral unmixing

- These abundance vectors describe the mixture contained in the pixel.
- In addition to the data fitting term, two penalization terms are considered in the proposed unmixing model.
- The term $\iota_{\mathbb{R}_+^{R \times P}}(A)$ is the indicator function of the nonnegative orthant; it enforces a nonnegativity constraint, ensuring an additive decomposition of the spectra.
- The second penalization, $\lambda_a \|A\|_1$, is a sparsity term promoting the idea that only a few endmembers are active in a given pixel.
- In the following work, the choice has been made to discard the estimation of the endmember matrix for the sake of simplicity.
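Since the endmember matrix is fixed (its estimation being discarded, as noted above), the unmixing model reduces to a sparse nonnegative least-squares problem. A minimal proximal-gradient sketch, with all function names and parameter values illustrative rather than taken from the paper:

```python
import numpy as np

def unmix(Y, M, lam=0.1, n_iter=500):
    """Sparse nonnegative unmixing with a known endmember matrix M:
        min_{A >= 0}  0.5 * ||Y - M A||_F^2 + lam * ||A||_1
    Y: (L, P) observed pixel spectra, M: (L, R) endmembers, A: (R, P) abundances.
    """
    R, P = M.shape[1], Y.shape[1]
    A = np.zeros((R, P))
    step = 1.0 / np.linalg.norm(M, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = M.T @ (M @ A - Y)             # gradient of the data-fitting term
        # proximal step: soft-threshold by step*lam, then project onto A >= 0
        A = np.maximum(A - step * grad - step * lam, 0.0)
    return A
```

Setting `lam` to 0 recovers plain nonnegative least squares solved by projected gradient descent.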

### B. Classification

- Numerous decision rules have been proposed to carry out classification.
- The weighting coefficients $d_p$ adjust the cost function with respect to the sizes of the training and test sets, in particular in the case of unbalanced classes.
- Moreover, the nonlinear mapping φ(·) is chosen as a sigmoid, which makes the proposed classifier interpretable as a one layer neural network.
- The second considered penalization is a spatial regularization enforced through a smoothed weighted vectorial total variation norm (vTV).
- They are computed beforehand using external data containing information on the spatial structures, e.g., a panchromatic image or a LIDAR image [11].
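The classifier sketched above, a sigmoid mapping with per-sample weights $d_p$, can be illustrated as a weighted logistic regression trained by gradient descent. This is a simplified binary sketch under assumed names and defaults, not the paper's exact model (which also carries the vTV spatial regularization):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_classifier(Z, y, d, lr=0.1, n_iter=2000):
    """One-layer sigmoid classifier trained with a per-sample weighted
    cross-entropy: sum_p d_p * CE(y_p, sigmoid(w^T z_p)).
    Z: (K, P) feature vectors, y: (P,) binary labels in {0, 1},
    d: (P,) weights, e.g. compensating unbalanced classes.
    """
    K, P = Z.shape
    w = np.zeros(K)
    for _ in range(n_iter):
        q = sigmoid(w @ Z)                 # predicted probabilities, shape (P,)
        grad = Z @ (d * (q - y)) / P       # gradient of the weighted cross-entropy
        w -= lr * grad
    return w
```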

### C. Clustering

- To define a global cofactorization problem, a relation is drawn between the activation matrices of the two factorization problems, namely the abundance matrix and the feature matrix.
- Abundance vectors are clustered and the resulting attribution vectors are then used as feature vectors for the classification.
- Thus, the resulting clustering method is a particular instance of k-means where the attribution vectors are relaxed and can be interpreted as the collection of probabilities of belonging to each of the clusters.

### D. Multi-objective problem

- The two factorization problems corresponding to the spectral unmixing and classification tasks have been expressed and the link between these two problems has been set up through the clustering term.
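Schematically, the global problem can then be pictured as the sum of the three cost functions. The symbols below ($\mathbf{Z}$ for the shared attribution/feature matrix, $\mathbf{B}$ for cluster centroids, $\mathbf{W}$ for classifier weights, $\mathbf{C}$ for labels, $\mathcal{L}$ for the classification loss, and the weights $\lambda$) are assumptions for illustration, since this summary does not reproduce the paper's problem (8):

```latex
\min_{\mathbf{A}\ge 0,\;\mathbf{Z},\;\mathbf{B},\;\mathbf{W}}\;
\underbrace{\tfrac{1}{2}\lVert \mathbf{Y}-\mathbf{M}\mathbf{A}\rVert_F^2
+\lambda_a \lVert \mathbf{A}\rVert_1}_{\text{unmixing}}
\;+\;
\underbrace{\mathcal{L}\!\left(\mathbf{C},\,\phi(\mathbf{W}\mathbf{Z})\right)}_{\text{classification}}
\;+\;
\underbrace{\lambda_c \lVert \mathbf{A}-\mathbf{B}\mathbf{Z}\rVert_F^2}_{\text{clustering coupling}}
```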

### III. OPTIMIZATION SCHEME

- The proposed global optimization problem (8) is nonconvex and non-smooth.
- Such problems are usually very challenging to solve.
- The concept of this algorithm is to perform a proximal gradient descent alternately with respect to each variable.
- In the present case, the partial gradients are easily computed and are all globally Lipschitz.
- As for the proximal operators, they are well known [12], except for f0(·).
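Each PALM update is thus a gradient step on the smooth coupling, with step size set by the partial Lipschitz constant, followed by the proximal operator of that block's nonsmooth term. A one-variable sketch on a toy objective (illustrative only; the paper alternates such steps over all block variables):

```python
import numpy as np

def palm_step(x, grad_fn, lipschitz, prox_fn):
    """One PALM update for a single block variable: a gradient step on the
    smooth part with step 1/L, followed by the proximal operator of the
    block's nonsmooth term."""
    step = 1.0 / lipschitz
    return prox_fn(x - step * grad_fn(x), step)

# Toy problem: min_{x >= 0} 0.5 * (x - 3)^2 + |x|
grad = lambda x: x - 3.0                     # smooth part, Lipschitz constant L = 1
prox = lambda v, t: np.maximum(v - t, 0.0)   # prox of t*|.| plus nonnegativity
x = 0.0
for _ in range(100):
    x = palm_step(x, grad, 1.0, prox)        # converges to x = 2
```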

### IV. EXPERIMENTS

- Data generation – The HSI used to perform the experiments is a semi-synthetic image.
- For the last hyperparameter $\tilde{\lambda}_c$, two values have been considered, 0 and 0.1, corresponding respectively to the cases without and with spatial regularization.
- It should be noted that all unmixing methods directly use the true endmember matrix M that was used to generate the data.
- Processing time is indeed higher for the proposed cofactorization method than for RF, FCLS and CBPDN.
- In terms of qualitative results, Figure 3 presents the classification maps which appear consistent with the quantitative results.

### V. CONCLUSION AND PERSPECTIVE

- This paper introduces a unified framework to jointly perform spectral unmixing and classification by means of a cofactorization problem.
- The overall cofactorization task is formulated as a nonconvex, nonsmooth optimization problem whose solution is approximated with a PALM algorithm that offers convergence guarantees.
### References (partial, as extracted)

- Geosci. Remote Sens., vol. 54, no. 10, pp. 6232–6251, 2016.
- [2] A. Villa, J. Chanussot et al., “Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution,” IEEE J. Sel. Top. Signal Process., vol. 5, no.
- J. Bolte, S. Sabach et al., “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” Mathematical Programming, vol.
