Multisource and Multitemporal Data Fusion in Remote Sensing: A Comprehensive Review of the State of the Art
Citations
A New Benchmark Based on Recent Advances in Multispectral Pansharpening: Revisiting Pansharpening With Classical and Emerging Pansharpening Methods
Classification of Hyperspectral and LiDAR Data Using Coupled CNNs
UAV & satellite synergies for optical remote sensing applications: A literature review
Deep learning-based remote and social sensing data fusion for urban region function recognition
References
Very Deep Convolutional Networks for Large-Scale Image Recognition
Fully Convolutional Networks for Semantic Segmentation
A Computer Movie Simulating Urban Growth in the Detroit Region
Spark: Cluster Computing with Working Sets
Digital Change Detection Techniques Using Remotely-Sensed Data
Related Papers (5)
Multisensor image fusion in remote sensing: Concepts, methods and applications
Frequently Asked Questions (16)
Q2. What future works have the authors mentioned in the paper "Multisource and multitemporal data fusion in remote sensing" ?
In this context, several vibrant fusion topics, including pansharpening and resolution enhancement, point cloud data fusion, hyperspectral and LiDAR data fusion, multitemporal data fusion, and big data and social media, were detailed, and their corresponding challenges and possible future research directions were outlined and discussed. As demonstrated through the challenges and possible future research of each section, although the field of remote sensing data fusion is mature, many doors are still left open for further investigation, from both the theoretical and application perspectives. The authors hope that this review opens up new possibilities for readers to further investigate the remaining challenges in developing sophisticated fusion approaches suitable for the applications at hand.
Q3. What are the contributions in "Multisource and multitemporal data fusion in remote sensing" ?
The final version of the paper can be found in IEEE Geoscience and Remote Sensing Magazine. This paper brings together the advances of multisource and multitemporal data fusion approaches with respect to different research communities and provides a thorough and discipline-specific starting point for researchers at different levels. More specifically, it provides a bird's-eye view of many important contributions specifically dedicated to the topics of pansharpening and resolution enhancement, point cloud data fusion, hyperspectral and LiDAR data fusion, multitemporal data fusion, and big data and social media. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand.
Q5. What is the main concept of MRA-based pansharpening methods?
The main concept of MRA-based pansharpening methods is to extract spatial details (or high-frequency components) from the PAN image and inject the details, multiplied by gain coefficients, into the multispectral data.
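As an illustration, the idea above can be sketched in a few lines of NumPy. The box low-pass filter, the scalar gain, and the function name `mra_pansharpen` are our own simplifications for this sketch; real MRA methods use wavelets or filters matched to the sensor's modulation transfer function, and band-dependent gains.

```python
import numpy as np

def mra_pansharpen(pan, ms_upsampled, gain=1.0, k=5):
    """Toy MRA-style pansharpening (hypothetical high-pass-filter variant).

    pan          : (H, W) panchromatic image
    ms_upsampled : (H, W, B) multispectral bands upsampled to PAN resolution
    gain         : injection gain applied to the extracted details
    k            : size of the box filter used as the low-pass step
    """
    # Low-pass the PAN image with a simple k x k box filter (the "MRA" step).
    kernel = np.ones((k, k)) / (k * k)
    pad = k // 2
    padded = np.pad(pan, pad, mode="edge")
    low = np.zeros_like(pan, dtype=float)
    for i in range(pan.shape[0]):
        for j in range(pan.shape[1]):
            low[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    # Spatial details = high-frequency components of the PAN image.
    details = pan - low
    # Inject the gain-weighted details into every multispectral band.
    return ms_upsampled + gain * details[..., None]
```

Note that a spatially constant PAN image carries no details, so the multispectral data pass through unchanged, which is a quick sanity check for any injection scheme.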
Q6. What is the main challenge of the point cloud model for fusion with other data sources?
The main challenge of the point cloud model for fusion with other data sources is the unstructured three-dimensional spatial nature of the point cloud P, and the fact that often no fixed spatial scale and accuracy exist across the dataset.
Q7. What is the main observation at the basis of these techniques?
The main observation at the basis of these techniques is that the available class labels can be propagated within the time-series to all the pixels that have not been changed between the considered acquisitions.
Q8. What is the definition of hyperspectral imaging?
Hyperspectral imaging often exhibits a nonlinear relation between the captured spectral information and the corresponding material.
Q9. What is the most important challenge of combining remote sensing and social media data?
To derive value from big data combining remote sensing and social media data, one of the most important challenges is how to process and analyze those data with novel methods or methodologies.
Q10. What were the proposed transfer learning approaches?
Transfer learning approaches were proposed in [186]– [188], where change detection-based techniques were defined for propagating the labels of available data for a given image to the training sets of other images in the time-series.
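A minimal sketch of this label-propagation idea follows; it is not the exact method of [186]–[188], and the change mask is assumed to come from some prior change-detection step. Labels are copied only to pixels detected as unchanged between the two acquisitions.

```python
import numpy as np

def propagate_labels(labels_t1, change_mask, unlabeled=-1):
    """Propagate per-pixel class labels from acquisition t1 to acquisition t2.

    labels_t1   : (H, W) integer class labels for the first acquisition
    change_mask : (H, W) boolean, True where the pixel changed between t1 and t2
    unlabeled   : value marking pixels whose t2 label remains unknown
    """
    labels_t2 = np.full_like(labels_t1, unlabeled)
    # Unchanged pixels keep their class; changed pixels stay unlabeled
    # and must be annotated or inferred separately.
    labels_t2[~change_mask] = labels_t1[~change_mask]
    return labels_t2
```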
Q11. Why is it important to organize benchmark datasets on a platform like the DASE website?
It is an urgent issue for the community to arrange benchmark datasets on a platform like the GRSS Data and Algorithm Standard Evaluation (DASE) website [102], so that the performance of algorithms can be compared fairly.
Q12. What are the characteristics of MRA-based pansharpening techniques?
MRA-based pan-sharpening techniques can be characterized by 1) the algorithm used for obtaining spatial details (e.g., spatial filtering or multiscale transform), and 2) the definition of the gain coefficients.
Q13. What is the main concept of MRA-based pansharpening?
Selva et al. (2015) proposed a general framework called hypersharpening that extends MRA-based pan-sharpening methods to multiband image fusion by creating a fine spatial resolution synthetic image for each coarse spatial resolution band as a linear combination of fine spatial resolution bands based on linear regression [74].
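The regression step behind hypersharpening can be illustrated as follows. The function name and the simple least-squares setup are our own sketch of the idea in [74], not Selva et al.'s implementation: the coarse band is regressed against the fine bands degraded to coarse resolution, and the fitted weights are then applied to the fine-resolution bands to synthesize a fine-resolution image for that band.

```python
import numpy as np

def hypersharpen_band(coarse_band, fine_bands_degraded, fine_bands):
    """Synthesize a fine-resolution image for one coarse band (sketch).

    coarse_band         : (h, w) one coarse-resolution band
    fine_bands_degraded : (h, w, B) fine bands degraded to coarse resolution
    fine_bands          : (H, W, B) fine-resolution bands
    """
    B = fine_bands.shape[-1]
    # Least-squares fit: coarse_band ~= sum_b w[b] * degraded fine band b + bias
    X = fine_bands_degraded.reshape(-1, B)
    X = np.column_stack([X, np.ones(X.shape[0])])  # add an intercept column
    y = coarse_band.ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Apply the same linear combination to the fine-resolution bands.
    synth = fine_bands.reshape(-1, B) @ w[:B] + w[B]
    return synth.reshape(fine_bands.shape[:2])
```

If the coarse band really is a linear combination of the (degraded) fine bands, the fit recovers the combination exactly and the synthetic image is that same combination at fine resolution.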
Q14. What is the cross-entropy loss of the CNN model?
Both the fully convolutional network (FCN) model [204] and the CNN model are constructed based on the pre-trained ImageNet VGG-16 network [205] with the cross-entropy loss.
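For reference, the per-pixel cross-entropy loss mentioned here can be written out in NumPy. This is an illustrative re-implementation of the standard loss, not code from [204] or [205].

```python
import numpy as np

def pixelwise_cross_entropy(logits, labels):
    """Mean per-pixel cross-entropy between class scores and ground truth.

    logits : (H, W, C) unnormalized class scores per pixel
    labels : (H, W) integer ground-truth class per pixel
    """
    # Numerically stable log-softmax over the class axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Pick each pixel's log-probability of its true class, then average.
    h, w = labels.shape
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -picked.mean()
```

With uniform scores over C classes the loss equals log C, and it approaches zero as the correct class dominates, which matches the usual behavior of this loss during training.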
Q15. How often can the Landsat sensor revisit the same location?
The Landsat sensor can acquire images at a much finer spatial resolution of 30 m, but has a limited revisit capability of 16 days.
Q16. How did Liu and his team compare the HSI and LiDAR methods?
HSI and airborne LiDAR data were used as complementary data sources for crown structure and physiological tree information by Liu et al. [127] to map 15 different urban tree species. The joint pixel- and object-based method increased the overall accuracy by 7.1%, to 94.7%.