Matrix Cofactorization for Joint Unmixing and Classification of Hyperspectral Images
Summary (2 min read)
Introduction
- In particular, classification algorithms have received considerable attention from the scientific community.
- In the specific case of hyperspectral images (HSI), images capture a very rich signal since each pixel is a sampling of the reflectance spectrum of the corresponding area, typically in the visible and infrared spectral domains with hundreds of measurements.
- The core concept is to express the two problems of interest, namely spectral unmixing and classification, as factorization problems and then to introduce a coupling term to intertwine the two estimations.
- Finally, the method is tested and compared to other unmixing and classification methods in Section IV.
II. PROBLEM STATEMENT
- As presented in Sections II-A and II-B, spectral unmixing and supervised classification are commonly expressed as factorization problems.
- In the proposed model, the link is made between the abundance matrix and the feature matrix.
- More precisely, the coupling term is expressed as a clustering term over the abundance vectors where the attribution vectors to the clusters are also the feature vectors of the classification as detailed in Section II-C.
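The structure described above can be sketched numerically: the unmixing problem factorizes the observed pixel spectra as Y ≈ MA, and the coupling term ties the abundance matrix A to the feature matrix Z through cluster centroids B. The dimensions, variable names, and random data below are hypothetical, chosen only to illustrate the shape of the model, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical dimensions: L spectral bands, R endmembers, K clusters, P pixels.
rng = np.random.default_rng(0)
L, R, K, P = 50, 3, 4, 20
M = rng.random((L, R))    # endmember spectra
A = rng.random((R, P))    # abundance matrix (activations of the unmixing problem)
B = rng.random((R, K))    # cluster centroids in abundance space
Z = rng.random((K, P))    # attribution/feature matrix (activations of the classification)
Y = M @ A + 0.01 * rng.normal(size=(L, P))  # observed pixel spectra

# Unmixing data-fit term and the clustering term coupling A and Z.
unmixing_fit = 0.5 * np.linalg.norm(Y - M @ A, 'fro') ** 2
coupling = 0.5 * np.linalg.norm(A - B @ Z, 'fro') ** 2
```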
B. Classification
- Numerous decision rules have been proposed to carry out classification.
- The weighting coefficients dp adjust the cost function with respect to the sizes of the training and test sets, in particular in the case of unbalanced classes.
- Moreover, the nonlinear mapping φ(·) is chosen as a sigmoid, which makes the proposed classifier interpretable as a one-layer neural network.
- The second considered penalization is a spatial regularization enforced through a smoothed weighted vectorial total variation norm (vTV).
- They are computed beforehand using external data containing information on the spatial structures, e.g., a panchromatic image or a LIDAR image [11].
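The classifier of this section (a linear map followed by a vector-wise sigmoid, weighted by coefficients dp) can be sketched as follows. The shapes, the uniform choice of dp, the random data, and the squared-error surrogate cost are illustrative assumptions, not the paper's exact cost function.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical shapes: C classes, K features (attribution vectors), P pixels.
rng = np.random.default_rng(0)
C, K, P = 4, 6, 10
Q = rng.normal(size=(C, K))      # linear classifier
Z = rng.random(size=(K, P))      # feature vectors (one column per pixel)

# Vector-wise nonlinear mapping applied to the classifier output;
# with a sigmoid this reads as a one-layer neural network.
scores = sigmoid(Q @ Z)          # shape (C, P)
labels = scores.argmax(axis=0)   # predicted class per pixel

# Weighting coefficients d_p rebalance the cost between sets/classes
# (hypothetical uniform choice here), with a squared-error surrogate.
y = rng.integers(0, C, size=P)   # hypothetical ground-truth labels
C_onehot = np.eye(C)[:, y]       # one-hot encoding, shape (C, P)
d = np.full(P, 1.0 / P)
cost = np.sum(d * np.sum((scores - C_onehot) ** 2, axis=0))
```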
C. Clustering
- To define a global cofactorization problem, a relation is drawn between the activation matrices of the two factorization problems, namely the abundance matrix and the feature matrix.
- Abundance vectors are clustered and the resulting attribution vectors are then used as feature vectors for the classification.
- Thus, the resulting clustering method is a particular instance of k-means where the attribution vectors are relaxed and can be interpreted as the collection of probabilities of belonging to each of the clusters.
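A minimal sketch of the relaxed attribution step: each attribution vector zp is projected onto the probability simplex, so its entries can be read as probabilities of belonging to each cluster. The projection routine below is the standard sort-based Euclidean simplex projection; dimensions and data are hypothetical.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

# Hypothetical sizes: R endmembers, K clusters, P pixels.
rng = np.random.default_rng(1)
R, K, P = 3, 4, 8
A = rng.random(size=(R, P))   # abundance vectors (columns)
B = rng.random(size=(R, K))   # cluster centroids (mean abundance vectors)

# Relaxed attribution vectors: every column lies on the simplex.
Z = np.apply_along_axis(project_simplex, 0, rng.random(size=(K, P)))

# Relaxed k-means coupling term between abundances and attributions.
coupling = 0.5 * np.linalg.norm(A - B @ Z, 'fro') ** 2
```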
III. OPTIMIZATION SCHEME
- The proposed global optimization problem (8) is nonconvex and non-smooth.
- Such problems are usually very challenging to solve.
- The concept of this algorithm is to perform a proximal gradient descent alternately with respect to each variable.
- In the present case, the partial gradients are easily computed and are all globally Lipschitz.
- As for the proximal operators, they are well known [12], except for f0(·), which is obtained by composing the proximal operators associated with the nonnegativity constraint and the ℓ1-norm [13].
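A sketch of one PALM block update under these assumptions: a gradient step on the smooth part with step size 1/L (the block Lipschitz constant), followed by the proximal operator of the nonsmooth part. For a term combining a nonnegativity constraint with an ℓ1-norm (as for f0), the composed proximal operator reduces to a shifted nonnegative thresholding. The quadratic smooth term below is a toy stand-in, not the paper's objective.

```python
import numpy as np

def prox_l1_nonneg(x, lam):
    """Prox of lam*||x||_1 + indicator(x >= 0): composition of
    soft-thresholding and nonnegative projection, i.e. max(x - lam, 0)."""
    return np.maximum(x - lam, 0.0)

def palm_step(x, grad, lipschitz, prox, lam):
    """One PALM block update: proximal gradient step with step 1/L."""
    step = 1.0 / lipschitz
    return prox(x - step * grad(x), lam * step)

# Toy smooth term f(x) = 0.5*||x - y||^2 (gradient x - y, Lipschitz L = 1).
y = np.array([0.5, -1.0, 2.0])
x = np.zeros(3)
for _ in range(50):
    x = palm_step(x, lambda v: v - y, 1.0, prox_l1_nonneg, lam=0.1)
```

With this toy quadratic the iteration reaches its fixed point, the minimizer of 0.5‖x − y‖² + 0.1‖x‖₁ over the nonnegative orthant.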
IV. EXPERIMENTS
- Data generation – The HSI used to perform the experiments is a semi-synthetic image.
- For the last hyperparameter λ̃c, two values have been considered, 0 and 0.1, corresponding respectively to the cases without and with spatial regularization.
- It should be noted that all unmixing methods directly use the true endmember matrix M that was used to generate the data.
- Processing time is indeed higher for the proposed cofactorization method than for RF, FCLS and CBPDN.
- In terms of qualitative results, Figure 3 presents the classification maps which appear consistent with the quantitative results.
V. CONCLUSION AND PERSPECTIVE
- This paper introduces a unified framework to jointly perform spectral unmixing and classification by means of a cofactorization problem.
- The overall cofactorization task is formulated as a nonconvex, nonsmooth optimization problem whose solution is approximated with a PALM algorithm, which provides convergence guarantees.
REFERENCES
- A. Villa, J. Chanussot et al., "Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution," IEEE J. Sel. Top. Signal Process., vol. 5.
- J. Bolte, S. Sabach et al., "Proximal alternating linearized minimization for nonconvex and nonsmooth problems," Mathematical Programming.
Frequently Asked Questions (13)
Q3. What is the classification rule for a linear classifier?
Considering a linear classifier parametrized by the matrix Q ∈ R^(C×K), a vector-wise nonlinear mapping φ(·), such as a sigmoid or a softmax operator, is then applied to the output of the classifier.
Q4. What are the two conventional metrics used to evaluate the classification accuracy?
To evaluate the classification accuracy, two conventional metrics are used, namely Cohen’s kappa coefficient and the averaged F1-score over all classes [18].
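Both metrics can be computed directly from the label vectors; the implementation below is a plain NumPy sketch of the standard definitions (scikit-learn's `cohen_kappa_score` and macro-averaged `f1_score` offer equivalent, battle-tested versions).

```python
import numpy as np

def cohen_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa computed from the confusion matrix."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)

def averaged_f1(y_true, y_pred, n_classes):
    """Unweighted mean of per-class F1 scores (macro F1)."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return float(np.mean(f1s))
```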
Q5. What is the attribution term for the clustering term?
More precisely, the coupling term is expressed as a clustering term over the abundance vectors where the attribution vectors to the clusters are also the feature vectors of the classification as detailed in Section II-C.
Q6. What is the penalization of the k-means clustering problem?
Two constraints are considered in this k-means clustering problem: i) a positivity constraint on B, since centroids are expected to be interpretable as mean abundance vectors, and ii) the vectors zp (p ∈ P) are assumed to lie on the K-dimensional probability simplex S_K.
Q7. What is the definition of the unmixing method?
It should be noted that all unmixing methods directly use the true endmember matrix M that was used to generate the data.
Q8. What is the purpose of this paper?
This paper introduces a unified framework to jointly perform spectral unmixing and classification by means of a cofactorization problem.
Q9. What is the index subset of unlabeled pixel?
The index subset of labeled pixels is denoted hereafter L, while the index subset of unlabeled pixels is U (L ∩ U = ∅ and L ∪ U = P).
Q10. What is the RMSE for the unmixing?
To evaluate the unmixing results quantitatively, the reconstruction error (RE) and root global mean squared error (RMSE) are considered, i.e., RE = √( (1/(PL)) ‖Y − MÂ‖²_F ) and RMSE = √( (1/(PR)) ‖A_true − Â‖²_F ).
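Under these definitions, both metrics are normalized Frobenius norms; a NumPy sketch with hypothetical dimensions and synthetic data:

```python
import numpy as np

# Hypothetical dimensions: L spectral bands, R endmembers, P pixels.
rng = np.random.default_rng(2)
L, R, P = 100, 3, 50
M = rng.random(size=(L, R))          # endmember matrix
A_true = rng.random(size=(R, P))     # true abundances
A_hat = A_true + 0.01 * rng.normal(size=(R, P))  # noisy estimate
Y = M @ A_true                       # noiseless observations

# Reconstruction error and abundance RMSE.
RE = np.sqrt(np.linalg.norm(Y - M @ A_hat, 'fro') ** 2 / (P * L))
RMSE = np.sqrt(np.linalg.norm(A_true - A_hat, 'fro') ** 2 / (P * R))
```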
Q11. What is the definition of a global cofactorization problem?
To define a global cofactorization problem, a relation is drawn between the activation matrices of the two factorization problems, namely the abundance matrix and the feature matrix.
Q12. What is the cost function for a classifier?
The weighting coefficients dp adjust the cost function with respect to the sizes of the training and test sets, in particular in the case of unbalanced classes.
Q13. What are the results of the proposed cofactorization framework?
Results reported in the table show that the proposed cofactorization framework outperforms both RF and D-KSVD in terms of classification.