
Showing papers on "Subpixel rendering published in 2017"


Posted ContentDOI
17 Feb 2017-bioRxiv
TL;DR: NoRMCorre, a fast algorithm for non-rigid motion correction based on template matching, is introduced; it can be run in an online mode, yielding motion registration on streaming data that is comparable to or even faster than real time.
Abstract: Motion correction is a challenging pre-processing problem that arises early in the analysis pipeline of calcium imaging data sequences. Here we introduce an algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching. NoRMCorre operates by splitting the field of view into overlapping spatial patches that are registered for rigid translation against a continuously updated template. The estimated alignments are subsequently up-sampled to create a smooth motion field for each frame that can efficiently approximate non-rigid motion in a piecewise-rigid manner. NoRMCorre allows for subpixel registration and can be run in an online mode, yielding motion registration on streaming data that is comparable to or even faster than real time. We evaluate the performance of the proposed method with simple yet intuitive metrics and compare against other non-rigid registration methods on two-photon calcium imaging datasets. Open source Matlab and Python code is also made available.
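The piecewise-rigid scheme described in this abstract can be sketched in a few lines of NumPy/SciPy. This is an illustrative toy, not the authors' NoRMCorre code: it uses non-overlapping patches (NoRMCorre uses overlapping ones), integer per-patch shifts, and bilinear upsampling of the shift field, with function names of my own choosing:

```python
import numpy as np
from scipy.ndimage import zoom

def rigid_shift(ref, img):
    """Integer (dy, dx) such that img is ref circularly shifted by (dy, dx),
    found as the peak of the FFT-based cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    dy = dy - c.shape[0] if dy > c.shape[0] // 2 else dy
    dx = dx - c.shape[1] if dx > c.shape[1] // 2 else dx
    return dy, dx

def piecewise_rigid_field(ref, img, grid=(2, 2)):
    """Register each patch rigidly, then upsample the per-patch shifts into a
    smooth per-pixel motion field (a piecewise-rigid approximation)."""
    gy, gx = grid
    H, W = ref.shape
    shifts = np.zeros((gy, gx, 2))
    for i in range(gy):
        for j in range(gx):
            ys = slice(i * H // gy, (i + 1) * H // gy)
            xs = slice(j * W // gx, (j + 1) * W // gx)
            shifts[i, j] = rigid_shift(ref[ys, xs], img[ys, xs])
    # bilinear upsampling of the coarse shift grid gives the motion field
    field = np.stack([zoom(shifts[..., k], (H / gy, W / gx), order=1)
                      for k in range(2)], axis=-1)
    return shifts, field
```

Applying the returned field with a warping routine would complete the correction; the sketch stops at motion estimation, which is the part the abstract describes.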

225 citations


Journal ArticleDOI
TL;DR: A new resolution enhancement method is presented for multispectral and multiresolution images, such as those provided by the Sentinel-2 satellites, where band-dependent information is separated from information that is common to all bands, to preserve the subpixel details.
Abstract: A new resolution enhancement method is presented for multispectral and multiresolution images, such as those provided by the Sentinel-2 satellites. Starting from the highest resolution bands, band-dependent information (reflectance) is separated from information that is common to all bands (geometry of scene elements). This model is then applied to unmix low-resolution bands, preserving their reflectance, while propagating band-independent information to preserve the subpixel details. A reference implementation is provided, with an application example for super-resolving Sentinel-2 data.

98 citations


Journal ArticleDOI
TL;DR: In this article, a simplified gradient-based optical flow method, optimized for subpixel harmonic displacements, is used to predict the resolution potential and the effect of noise in a synthetic experiment, which is followed by a real experiment.

84 citations


Journal ArticleDOI
TL;DR: This work proposes an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, limitations of typical lensfree microscopes and addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation.
Abstract: High-resolution wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbance during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm²) and achieve half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist–Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm) by a factor of 2.17. A full-FOV image of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.

72 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that BSNE-ICEM has advantages over support vector machine-based approaches in many aspects, such as easy implementation, fewer parameters, and better false-classification and precision rates.
Abstract: Hyperspectral image classification faces various levels of difficulty due to the use of different types of hyperspectral image data. Recently, spectral–spatial approaches have been developed that jointly take care of spectral and spatial information. This paper presents a completely different approach from a subpixel target detection viewpoint. It implements a four-stage process: a preprocessing stage, which uses band selection (BS) and nonlinear band expansion, referred to as BS-then-nonlinear expansion (BSNE); a detection stage, which implements constrained energy minimization (CEM) to produce subpixel target maps; an iterative stage, which develops an iterative CEM (ICEM) by applying Gaussian filters to capture spatial information and then feeding the Gaussian-filtered CEM detection maps back to the BSNE band images to reprocess CEM iteratively; and a final stage, in which Otsu's method converts the ICEM-detected real-valued maps to discrete values for classification. The entire process is called BSNE-ICEM. Experimental results demonstrate that BSNE-ICEM has advantages over support vector machine-based approaches in many aspects, such as easy implementation, fewer parameters, and better false-classification and precision rates.
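The CEM detector used in the detection stage has a well-known closed form: it minimizes the filter's output energy subject to a unit response to the target signature d. A minimal sketch of that filter alone (my own, with a small diagonal loading term added for invertibility; not the full BSNE-ICEM pipeline):

```python
import numpy as np

def cem_filter(X, d, eps=1e-8):
    """Constrained energy minimization: minimize w'Rw subject to w'd = 1,
    so the target signature passes with unit gain while background energy
    is suppressed. X: (bands, pixels); d: (bands,) target signature."""
    R = X @ X.T / X.shape[1]                      # sample correlation matrix
    Rinv = np.linalg.inv(R + eps * np.eye(len(d)))  # diagonal loading
    w = Rinv @ d / (d @ Rinv @ d)                 # closed-form CEM weights
    return w, w @ X                               # weights and detection map
```

Scores near 1 flag target-like pixels; the iterative ICEM stage of the paper would spatially filter this map and feed it back.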

71 citations


Journal ArticleDOI
TL;DR: A two-step image alignment approach with a novel utilization of existing image registration algorithms is introduced in this paper and is applied to a set of Mastcam stereo images, demonstrating that the fused images can improve pixel clustering and anomaly detection performance.
Abstract: The Mars Science Laboratory is a robotic rover mission to Mars launched by NASA on November 26, 2011, which successfully landed the Curiosity rover in Gale Crater on August 6, 2012. The Curiosity rover has two mast cameras (Mastcams) that acquire stereo images at a number of different wavelengths. Each camera has nine bands, six of which overlap between the two cameras. These acquired stereo band images at different wavelengths can be fused into a 12-band multispectral image cube, which could be helpful to guide the rover to interesting locations. Since the two Mastcams' fields of view differ by a factor of three, in order to fuse the left- and right-camera band images to form a multispectral image cube, there is a need for a precise image alignment of the stereo images with registration errors at the subpixel level. A two-step image alignment approach with a novel utilization of existing image registration algorithms is introduced in this paper and is applied to a set of Mastcam stereo images. Evaluation of the two-step alignment approach using more than 100 pairs of Mastcam images, selected from over 500000 images in NASA's Planetary Data System database, clearly demonstrated that the fused images can improve pixel clustering and anomaly detection performance. In particular, registration errors at the subpixel level are observed with the applied alignment approach. Moreover, pixel clustering and anomaly detection performance have been observed to be better when using fused images.

70 citations


Journal ArticleDOI
TL;DR: A novel fusion framework for HSI classification that combines subpixel-, pixel-, and superpixel-based complementary information is proposed; experiments demonstrate the effectiveness of the proposed fusion schemes in improving discrimination capability compared with classification based on each individual feature.
Abstract: Supervised classification of hyperspectral images (HSI) is a very challenging task due to the existence of noisy and mixed spectral characteristics. Recently, the widely developed spectral unmixing techniques offer the possibility to extract spectral mixture information at a subpixel level, which can contribute to the categorization of seriously mixed spectral pixels. Besides, it has been demonstrated that the discrimination between different materials will be improved by integrating the geometry and structure information, which can be derived from the variance between neighboring pixels. Furthermore, by incorporating the spatial context, the superpixel-based spectral–spatial similarity information can be used to smooth classification results in homogeneous regions. Therefore, a novel fusion framework for HSI classification that combines subpixel, pixel, and superpixel-based complementary information is proposed in this paper. Here, both feature fusion and decision fusion schemes are introduced. For the feature fusion scheme, the first step is to extract subpixel-level, pixel-level, and superpixel-level features from HSI, respectively. Then, the multiple feature-induced kernels are fused to form one composite kernel, which is incorporated with a support vector machine (SVM) classifier for label assignment. For the decision fusion scheme, class probabilities based on three different features are estimated by the probabilistic SVM classifier first. Then, the class probabilities are adaptively fused to form a probabilistic decision rule for classification. Experimental results on different real HSI images demonstrate the effectiveness of the proposed fusion schemes in improving discrimination capability, compared with classification results based on each individual feature.

68 citations


Journal ArticleDOI
TL;DR: This work reconciles total variation with Shannon interpolation and studies a Fourier-based estimate that behaves much better in terms of grid invariance, isotropy, artifact removal, and subpixel accuracy.
Abstract: Discretization schemes commonly used for total variation regularization lead to images that are difficult to interpolate, which is a real issue for applications requiring subpixel accuracy and aliasing control. In the present work, we reconcile total variation with Shannon interpolation and study a Fourier-based estimate that behaves much better in terms of grid invariance, isotropy, artifact removal and subpixel accuracy. We show that this new variant (called Shannon total variation) can be easily handled with classical primal–dual formulations and illustrate its efficiency on several image processing tasks, including deblurring, spectrum extrapolation and a new aliasing reduction algorithm.
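Shannon interpolation, on which the proposed Shannon total variation is built, can be computed exactly for a band-limited discrete signal by zero-padding its DFT. A one-dimensional sketch (my own illustration, not the paper's code):

```python
import numpy as np

def shannon_upsample(x, f):
    """Upsample a 1-D signal by integer factor f via spectral zero-padding,
    i.e., exact Shannon (sinc) interpolation of the underlying band-limited
    signal. Assumes no energy at the Nyquist bin."""
    N = len(x)
    X = np.fft.fftshift(np.fft.fft(x))       # centre the spectrum
    pad = (f - 1) * N // 2
    Xp = np.pad(X, pad)                      # extend the spectrum with zeros
    return f * np.fft.ifft(np.fft.ifftshift(Xp)).real
```

The fine grid contains the original samples unchanged, and intermediate values are the true band-limited interpolant; this grid invariance is exactly what discrete total-variation schemes lack.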

36 citations


Journal ArticleDOI
TL;DR: In this paper, the authors integrated high spatial resolution orthophotos and Landsat imagery to identify differences across a range of diverse urban subsets within the rapidly expanding Perth Metropolitan Region (PMR), Western Australia.
Abstract: Urban areas are Earth's fastest-growing land use, impacting hydrological and ecological systems and the surface energy balance. The identification and extraction of accurate spatial information relating to urban areas is essential for future sustainable city planning owing to its importance within global environmental change and human–environment interactions. However, monitoring urban expansion using medium resolution (30–250 m) imagery remains challenging due to the variety of surface materials that contribute to measured reflectance resulting in spectrally mixed pixels. This research integrates high spatial resolution orthophotos and Landsat imagery to identify differences across a range of diverse urban subsets within the rapidly expanding Perth Metropolitan Region (PMR), Western Australia. Results indicate that calibrating Landsat-derived subpixel land-cover estimates with correction values (calculated from spatially explicit comparisons of subpixel Landsat values to classified high-resolution…

36 citations


Patent
25 Sep 2017
TL;DR: In this paper, an organic light-emitting display panel, a driving method thereof, and an organic light-emitting display device are described.
Abstract: The present application discloses an organic light-emitting display panel and a driving method thereof, as well as an organic light-emitting display device. The display panel includes: an array arrangement including pixel units, wherein each pixel unit comprises a first, a second, a third and a fourth subpixel; a pixel circuit is formed in each subpixel; the first, second, third and fourth subpixels of an identical pixel unit are arranged along a column direction and are electrically connected with a given reference signal line; a color of the first subpixel, a color of the second subpixel, a color of the third subpixel and a color of the fourth subpixel differ from one another, and the color of the first subpixel, the color of the second subpixel and the color of the third subpixel are red, blue and green, respectively; and the color of the fourth subpixel is not white.

36 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed PSF estimation scheme could not only achieve higher accuracy for the blur angle and the blur length, but also produce more impressive restoration results.

Journal ArticleDOI
TL;DR: This paper simulates Landsat scenes to evaluate a subpixel registration process based on phase correlation and upsampling of the Fourier transform, and shows that image size affects the cross-correlation results, but for images equal to or larger than 100 × 100 pixels similar accuracies are expected.
Abstract: Multi-temporal analysis is one of the main applications of remote sensing, and Landsat imagery has been one of the main resources for many years. However, the moderate spatial resolution (30 m) restricts its use for high-precision applications. In this paper, we simulate Landsat scenes to evaluate, by means of an exhaustive number of tests, a subpixel registration process based on phase correlation and the upsampling of the Fourier transform. From a high resolution image (0.5 m), two sets of 121 synthetic images of fixed translations are created to simulate Landsat scenes (30 m). In this sense, the use of the point spread function (PSF) of the Landsat TM (Thematic Mapper) sensor in the downsampling process improves the results compared to those obtained by simple averaging. In the process of obtaining sub-pixel accuracy by upsampling the cross correlation matrix by a certain factor, the limit of improvement is achieved at 0.1 pixels. We show that image size affects the cross correlation results, but for images equal to or larger than 100 × 100 pixels similar accuracies are expected. The large dataset used in the tests allows us to describe the intra-pixel distribution of the errors obtained in the registration process and how they follow a waveform instead of random/stochastic behavior. The amplitude of this waveform, representing the highest expected error, is estimated at 1.88 m. Finally, a validation test is performed over a set of sub-pixel shorelines obtained from actual Landsat-5 TM, Landsat-7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat-8 OLI (Operation Land Imager) scenes. The evaluation of the shoreline accuracy with respect to permanent seawalls, before and after the registration, shows the importance of the registration process and serves as a non-synthetic validation test that reinforces previous results.
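The registration scheme evaluated here, phase correlation with upsampling of the Fourier transform, can be sketched as follows. This toy recovers integer shifts exactly and subpixel shifts to the chosen upsampling step of 1/up pixels; the implementation details are my own, not the paper's:

```python
import numpy as np

def phase_corr(ref, img, up=10):
    """Shift of img relative to ref via phase correlation, refining the peak
    by zero-padding (upsampling) the normalized cross-power spectrum."""
    P = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    P /= np.abs(P) + 1e-12                   # keep phase only
    h, w = P.shape
    ph, pw = (up - 1) * h // 2, (up - 1) * w // 2
    Pp = np.pad(np.fft.fftshift(P), ((ph, ph), (pw, pw)))
    c = np.fft.ifft2(np.fft.ifftshift(Pp)).real   # upsampled correlation
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    dy, dx = iy / up, ix / up
    dy = dy - h if dy > h / 2 else dy        # wrap to signed shifts
    dx = dx - w if dx > w / 2 else dx
    return dy, dx
```

In practice, upsampling only a neighborhood of the coarse peak (as in matrix-multiplication DFT refinements) is far cheaper than padding the full spectrum; the full-spectrum version above is kept for clarity.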

Journal ArticleDOI
TL;DR: The main result is that the a priori choices to numerically shift the reference image modify DIC results and may lead to wrong conclusions in terms of DIC error assessment.

Journal ArticleDOI
TL;DR: The proposed framework solves unmixing problems involving a set of multisensor time-series spectral images in order to understand dynamic changes of the surface at a subpixel scale and obtains robust and stable unmixing results.
Abstract: We present a new framework, called multisensor coupled spectral unmixing (MuCSUn), that solves unmixing problems involving a set of multisensor time-series spectral images in order to understand dynamic changes of the surface at a subpixel scale. The proposed methodology couples multiple unmixing problems based on regularization on graphs between the time-series data to obtain robust and stable unmixing solutions beyond data modalities due to different sensor characteristics and the effects of nonoptimal atmospheric correction. Atmospheric normalization and cross calibration of spectral response functions are integrated into the framework as a preprocessing step. The proposed methodology is quantitatively validated using a synthetic data set that includes seasonal and trend changes on the surface and the residuals of nonoptimal atmospheric correction. The experiments on the synthetic data set clearly demonstrate the efficacy of MuCSUn and the importance of the preprocessing step. We further apply our methodology to a real time-series data set composed of 11 Hyperion and 22 Landsat-8 images taken over Fukushima, Japan, from 2011 to 2015. The proposed methodology successfully obtains robust and stable unmixing results and clearly visualizes class-specific changes at a subpixel scale in the considered study area.

Patent
31 May 2017
TL;DR: In this article, a grayscale compensation method and system for an OLED display panel are presented. The method acquires the actual display brightness of each subpixel of the OLED display panel and derives the relationship between each subpixel's actual input grayscale and its actual display brightness.
Abstract: An embodiment of the invention discloses a grayscale compensation method and system for an OLED display panel. The method includes: acquiring the actual display brightness of each subpixel of the OLED display panel under an actual input grayscale, and calculating the corresponding relationship between the actual input grayscale of each subpixel and the actual display brightness of each subpixel; determining the target display brightness of each subpixel under a preset input grayscale according to the target brightness of each subpixel under the 255th grayscale and the target gamma value of the OLED display panel; determining the target input grayscale of each subpixel when each subpixel displays the target display brightness according to the corresponding relationship between the actual input grayscale of each subpixel and the actual display brightness of each subpixel; determining a grayscale compensation value according to the target input grayscale of each subpixel and the preset input grayscale. By the grayscale compensation method, light-emitting unevenness among the subpixels of the OLED display panel is reduced.
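The compensation pipeline in this patent abstract (a target curve from the 255-level brightness and gamma, inversion of the measured response, then an offset per grayscale) can be sketched as a lookup-table inversion. The function name, the monotone measured curve, and gamma = 2.2 are illustrative assumptions of mine, not the patent's specification:

```python
import numpy as np

def compensation_table(measured, L255, gamma=2.2):
    """measured: length-256 array of the subpixel's actual luminance at each
    input grayscale (must be monotonically increasing). Returns, for every
    preset grayscale, the signed grayscale compensation value that makes the
    measured luminance hit the target gamma curve."""
    g = np.arange(256, dtype=float)
    target = L255 * (g / 255.0) ** gamma       # target luminance per grayscale
    corrected = np.interp(target, measured, g)  # invert the measured curve
    return corrected - g                        # compensation offsets
```

One such table per subpixel, computed from per-subpixel measurements, evens out the panel; applying `gray + table[gray]` at drive time is the compensation step.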

Journal ArticleDOI
TL;DR: The concept is that intensity measured at a pixel with a fixed Eulerian coordinate in a digital video can be regarded as a virtual visual sensor, turning a video camera into a simple, computationally inexpensive, and accurate displacement sensor with notably low signal‐to‐noise ratio.
Abstract: Vibration measurements provide useful information about a structural system's dynamic characteristics and are used in many fields of science and engineering. Here, we present an alternative noncontact approach to measure dynamic displacements of structural systems using digital videos. The concept is that intensity measured at a pixel with a fixed (or Eulerian) coordinate in a digital video can be regarded as a virtual visual sensor. The pixels in the vicinity of the boundary of a vibrating structural element contain useful frequency information, which we have been able to demonstrate in earlier studies. Our ultimate goal, however, is to be able to compute dynamic displacements, i.e., actual displacement amplitudes in the time domain. In order to achieve that, we introduce the use of simple black-and-white targets that are mounted on locations of interest on the structure. By using these targets, intensity can be directly related to displacement, turning a video camera into a simple, computationally inexpensive, and accurate displacement sensor with notably low signal-to-noise ratio. We show that subpixel accuracy with levels comparable to computationally expensive block matching algorithms can be achieved using the proposed targets. Our methodology can be used for laboratory experiments, on real structures, and additionally, we see educational opportunities in K-12 classrooms. In this paper, we introduce the concept and theory of the proposed methodology, present and discuss a laboratory experiment to evaluate the accuracy of the proposed black-and-white targets, and discuss the results from a field test of an in-service bridge.
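The virtual-visual-sensor concept, a fixed pixel's intensity time series analyzed for its frequency content, reduces to a few lines once the video is in memory. The sampling rate and test signal below are illustrative choices of mine:

```python
import numpy as np

def dominant_frequency(intensity, fps):
    """Treat one pixel's intensity time series as a virtual visual sensor and
    return the dominant vibration frequency (Hz) from its spectrum, with the
    DC component removed by mean subtraction."""
    spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
    return np.argmax(spec) * fps / len(intensity)
```

The paper's further step, relating intensity to actual displacement amplitude, requires the black-and-white targets it describes; the frequency content alone is available from any boundary pixel.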

Patent
23 Feb 2017
TL;DR: In this paper, a hybrid light emitting diode (LED) display and fabrication method is presented, which consists of a stack of thin-film layers overlying a top surface of a substrate.
Abstract: A hybrid light emitting diode (LED) display and fabrication method are provided. The method forms a stack of thin-film layers overlying a top surface of a substrate. The stack includes an LED control matrix and a plurality of pixels. Each pixel is made up of a first subpixel enabled using an inorganic micro LED (uLED), a second subpixel enabled using an organic LED (OLED), and a third subpixel enabled using an OLED. The first subpixel emits a blue color light, the second subpixel emits a red color light, and the third subpixel emits a green color light. In one aspect, the stack includes a plurality of wells in a top surface of the stack, populated by the LEDs. The uLEDs may be configured vertical structures with top and bottom electrical contacts, or surface mount top surface contacts. The uLEDs may also include posts for fluidic assembly orientation.

Journal ArticleDOI
TL;DR: Tests showed that the accuracy of the proposed method for finding translational shifts is of the order of a few ten-thousandths of a pixel, which is a substantial improvement over other state-of-the-art methods.

Patent
09 Mar 2017
TL;DR: In this article, the backlight of a multiview display is coupled to a plate light guide configured with a plurality of multibeam diffraction gratings, each of which corresponds to a set of light valves.
Abstract: Multiview displays include a backlight and a screen used to form a plurality of multiview pixels. Each multiview pixel includes a plurality of sets of light valves. The backlight includes a light source optically coupled to a plate light guide configured with a plurality of multibeam diffraction gratings. Each multibeam diffraction grating corresponds to a set of light valves and is spatially offset with respect to a center of the set of light valves toward a center of the multiview pixel. The plurality of multibeam diffraction gratings is also configured to diffractively couple out light beams from the plate light guide with different diffraction angles and angular offsets such that at least a portion of the coupled-out light beams interleave and propagate in different view directions of the multiview display.

Journal ArticleDOI
TL;DR: Experimental results from both Landsat and MODIS imagery have proven that ISAM, when compared with other SAMs, can improve SPM accuracies and is a more efficient SPM technique than MSPSAM and MSAM.
Abstract: Subpixel mapping (SPM) is a technique that produces hard classification maps at a spatial resolution finer than that of the input images produced when handling mixed pixels. Existing spatial attraction model (SAM) techniques have been proven to be an effective SPM method. The techniques mostly differ in the way in which they compute the spatial attraction, for example, from the surrounding pixels in the subpixel/pixel spatial attraction model (SPSAM), from the subpixels within the surrounding pixels in the modified SPSAM (MSPSAM), or from the subpixels within the surrounding pixels and the touching subpixels within the central pixel in the mixed spatial attraction model (MSAM). However, they have a number of common defects, such as a lack of consideration of the attraction from subpixels within the central pixel and the unequal treatment of attraction from surrounding subpixels of the same distance. In order to overcome these defects, this study proposed an improved SAM (ISAM) for SPM. ISAM estimates the attraction value of the current subpixel at the center of a moving window from all subpixels within the window, and moves the window one subpixel per step. Experimental results from both Landsat and MODIS imagery have proven that ISAM, when compared with other SAMs, can improve SPM accuracies and is a more efficient SPM technique than MSPSAM and MSAM.
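The spatial attraction idea behind SPSAM, MSPSAM, MSAM, and ISAM can be illustrated with a toy one-class version: inverse-distance-weighted attraction from the surrounding coarse pixels decides which subpixels inside each coarse pixel receive the class label. This is a simplification of my own in the SPSAM spirit, not the ISAM algorithm of the paper:

```python
import numpy as np

def spsam_binary(frac, S):
    """Toy subpixel mapping for one class. frac: coarse fraction map in
    [0, 1]; S: zoom factor. Each coarse pixel allocates round(frac*S*S)
    subpixels, choosing those most attracted (fraction / distance) to the
    surrounding coarse pixels."""
    H, W = frac.shape
    out = np.zeros((H * S, W * S), dtype=int)
    for i in range(H):
        for j in range(W):
            n = int(round(frac[i, j] * S * S))
            if n == 0:
                continue
            attr = np.zeros((S, S))
            for a in range(S):
                for b in range(S):
                    # subpixel centre in coarse-pixel units
                    cy, cx = i + (a + 0.5) / S, j + (b + 0.5) / S
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            if (di or dj) and 0 <= i + di < H and 0 <= j + dj < W:
                                d = np.hypot(cy - (i + di + 0.5),
                                             cx - (j + dj + 0.5))
                                attr[a, b] += frac[i + di, j + dj] / d
            # assign the n most attracted subpixels to the class
            order = np.argsort(attr, axis=None)[::-1][:n]
            ys, xs = np.unravel_index(order, (S, S))
            out[i * S + ys, j * S + xs] = 1
    return out
```

The defects the paper lists are visible in this sketch: attraction from subpixels inside the central pixel is ignored, and equidistant neighbors are not treated with the subpixel granularity that ISAM's moving window provides.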

Patent
14 Dec 2017
TL;DR: In this article, a backlight and a screen used to form a plurality of multiview pixels are configured to couple out light from the plate light guide with different angles and angular offsets, such that at least a portion of the coupled-out light beams interleave and propagate in different view directions of the multi-view display.
Abstract: Multiview displays include a backlight and a screen used to form a plurality of multiview pixels Each multiview pixel includes a plurality of sets of light valves The backlight includes a light source optically coupled to a plate light guide configured with a plurality of multibeam elements Each multibeam element corresponds to a set of light valves and is spatially offset with respect to a center of the set of light valves toward a center of the multiview pixel The plurality of multibeam elements are also configured to couple out light from the plate light guide with different angles and angular offsets such that at least a portion of the coupled-out light beams interleave and propagate in different view directions of the multiview display

Journal ArticleDOI
TL;DR: Validation results demonstrate the high potential of the MSVR for subpixel mapping in the urban context, and MSVR outperforms SVR in terms of both accuracy and computational time.
Abstract: Hyperspectral remote sensing data offer the opportunity to map urban characteristics in detail. However, adequate algorithms must cope with increasing data dimensionality, high redundancy between individual bands, and often spectrally complex urban landscapes. The study focuses on subpixel quantification of urban land cover compositions using simulated environmental mapping and analysis program (EnMAP) data acquired over the city of Berlin, utilizing both machine learning regression and classification algorithms, i.e., multioutput support vector regression (MSVR), standard support vector regression (SVR), import vector machine classifier (IVM), and support vector classifier (SVC). The experimental setup incorporates a spectral library and a reference land cover fraction map used for validation purposes. The library spectra were synthetically mixed to derive quantitative training data for the classes vegetation, impervious surface, soil, and water. MSVR and SVR models were trained directly using the synthetic mixtures. For IVM and SVC, a modified hyperparameter selection approach is conducted to improve the description of urban land cover fractions by means of probability outputs. Validation results demonstrate the high potential of the MSVR for subpixel mapping in the urban context. MSVR outperforms SVR in terms of both accuracy and computational time. IVM and SVC work similarly well, yet with lower accuracies of subpixel fraction estimates compared to both regression approaches.
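The synthetic-mixing step described above, library spectra combined into quantitative training data for a multioutput regressor, can be sketched as follows. Plain multioutput ridge regression stands in for MSVR here, and all names, the Dirichlet fractions, and the noiseless linear mixing are illustrative assumptions of mine:

```python
import numpy as np

def synth_mixtures(E, n, rng):
    """Synthetically mix library endmember spectra E (bands x classes) with
    random fractions summing to one, yielding training spectra and targets."""
    F = rng.dirichlet(np.ones(E.shape[1]), size=n).T   # classes x n fractions
    return E @ F, F

def fit_multioutput(X, F, lam=1e-6):
    """Closed-form multioutput ridge regression from spectra to fractions
    (a simple stand-in for MSVR). Returns W (classes x bands) so that
    W @ x estimates all class fractions of a pixel jointly."""
    B = X.shape[0]
    return F @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(B))
```

A kernelized regressor such as MSVR would replace `fit_multioutput` while keeping the same synthetic training-set construction.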

Journal ArticleDOI
TL;DR: A new approach based on a back-propagation neural network with an HR map (BPNN_HRM), in which a supervised model is introduced into SLCCD for the first time, outperforms the other traditional methods in providing a more detailed map for change detection.
Abstract: Extracting subpixel land-cover change detection (SLCCD) information is important when multitemporal remotely sensed images with different resolutions are available. The general steps are as follows. First, soft classification is applied to a low-resolution (LR) image to generate the proportion of each class. Second, the proportion differences are produced by the use of another high-resolution (HR) image and used as the input of subpixel mapping. Finally, a subpixel sharpened difference map can be generated. However, the prior HR land-cover map is only used for comparison with the enhanced map of the LR image for change detection, which leads to a nonideal SLCCD result. In this letter, we present a new approach based on a back-propagation neural network (BPNN) with an HR map (BPNN_HRM), in which a supervised model is introduced into SLCCD for the first time. The known information of the HR land-cover map is adequately employed to train the BPNN, whether it predates or postdates the LR image, so that a subpixel change detection map can be effectively generated. In order to evaluate the performance of the proposed algorithm, it was compared with four state-of-the-art methods. The experimental results confirm that the BPNN_HRM method outperforms the other traditional methods in providing a more detailed map for change detection.

Journal ArticleDOI
TL;DR: This letter applies a phase correlation approach to detect subpixel shifts between B2, B3, and B4 Sentinel-2A/MSI images, and shows that shifts of more than 1.1 pixels can be observed for moving targets, such as airplanes and clouds, and can be used for cloud detection.
Abstract: This letter aims at analyzing subpixel misregistration between multispectral images acquired by the Multi Spectral Instrument (MSI) aboard Sentinel-2A remote sensing satellite, and exploring its potential for moving target and cloud detection. By virtue of its hardware design, MSI’s detectors exhibit a parallax angle that leads to subpixel shifts that are corrected with special preprocessing routines. However, these routines do not correct shifts for moving and/or high-altitude objects. In this letter, we apply a phase correlation approach to detect subpixel shifts between B2 (blue), B3 (green), and B4 (red) Sentinel-2A/MSI images. We show that shifts of more than 1.1 pixels can be observed for moving targets, such as airplanes and clouds, and can be used for cloud detection. We demonstrate that the proposed approach can detect clouds that are not identified in the built-in cloud mask provided within the Sentinel-2A Level-1C product.

Patent
10 Feb 2017
TL;DR: In this article, a display panel includes an array of subpixels in first, second, and third colors, which are alternately arranged in every three adjacent rows of the array.
Abstract: An apparatus includes a display panel. In one example, the display panel includes an array of subpixels in a first, a second, and a third color. Subpixels in the first, second, and third colors are alternately arranged in every three adjacent rows of the array of subpixels. Every two adjacent rows of the array of subpixels are staggered with each other. A first subpixel in one of the first, second, and third colors and a second subpixel in a same color as the first subpixel are offset by 3 units in the horizontal axis and 4 units in the vertical axis. The first and second subpixels have a minimum distance among subpixels in the same color.

Journal ArticleDOI
TL;DR: A novel method is proposed that derives higher-resolution MSIs with more spatial–spectral information (MSI-SS) from the same area to improve the accuracy of soft-then-hard subpixel mapping (STHSPM).
Abstract: Multiple subpixel-shifted images (MSIs) from the same area can be incorporated to improve the accuracy of soft-then-hard subpixel mapping (STHSPM). In this paper, a novel method that derives higher resolution MSIs with more spatial–spectral information (MSI-SS) is proposed. First, the coarse MSIs are processed simultaneously along two paths, each producing a set of high-resolution MSIs for every class. The spatial path produces high-resolution MSIs by soft classification followed by interpolation, while the spectral path derives them by interpolation followed by soft classification. The higher resolution MSIs with more spatial–spectral information for each class are then obtained by integrating these two kinds of high-resolution MSIs with an appropriate weight. Finally, the integrated higher resolution MSIs for each class are used to allocate hard class labels to subpixels. The proposed method is fast and exploits more of the spatial–spectral information in the original MSIs. Experiments on three real hyperspectral remote sensing images show that the proposed method produces higher SPM accuracy.
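The two-path fusion and final hard-allocation step can be sketched as follows. This is a minimal illustration under loud assumptions, not the authors' implementation: nearest-neighbour upsampling stands in for the interpolation step, and `weight` is a free fusion parameter:

```python
import numpy as np

def upsample(fractions, scale):
    # Nearest-neighbour upsampling stands in for interpolation here.
    # Works on a (classes, h, w) stack; output is (classes, h*scale, w*scale).
    return np.kron(fractions, np.ones((scale, scale)))

def fuse_and_allocate(spatial_maps, spectral_maps, weight=0.5):
    """Blend the two per-class high-resolution stacks with the given weight,
    then assign each subpixel the class with the largest fused value."""
    fused = weight * spatial_maps + (1.0 - weight) * spectral_maps
    return np.argmax(fused, axis=0)

# Hypothetical example: 2 classes over a 1x2 coarse grid, zoom factor 2.
coarse = np.array([[[0.8, 0.1]],
                   [[0.2, 0.9]]])
hi = upsample(coarse, 2)                  # shape (2, 2, 4)
labels = fuse_and_allocate(hi, hi)        # shape (2, 4)
```

A full STHSPM allocator would additionally constrain the number of subpixels per class inside each coarse pixel to match its fraction; the winner-take-all step above omits that constraint for brevity.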

Journal ArticleDOI
TL;DR: A detailed quantitative assessment of different aspects in linear spectral mixture analysis, such as the criteria used to determine the types of pixels, the abundance sum-to-one constraint in the unmixing, and the accuracy of the utilized abundance maps, is investigated.
Abstract: Subpixel mapping techniques have been widely utilized to determine the spatial distribution of the different land-cover classes within mixed pixels at a subpixel scale by converting low-resolution fractional abundance maps (estimated by a linear mixture model) into a finer classification map. Over the past decades, many subpixel mapping algorithms have been proposed to tackle this problem. It is clear that the utilized abundance map has a strong impact on the subsequent subpixel mapping procedure. However, limited attention has been given to the impact of the different aspects of the spectral unmixing model on subpixel mapping performance. In this paper, a detailed quantitative assessment of different aspects of linear spectral mixture analysis, such as the criteria used to determine the types of pixels, the abundance sum-to-one constraint in the unmixing, and the accuracy of the utilized abundance maps, is carried out. This is accomplished by designing an experimental procedure with replaceable components. A total of six hyperspectral images (four synthetic and two real) were utilized in our experiments. By investigating these critical issues, we can further improve the performance of subpixel mapping techniques.
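The abundance sum-to-one constraint discussed above can be enforced in least-squares unmixing with a standard trick: append a heavily weighted row of ones to the endmember matrix. A minimal sketch (nonnegativity is not enforced here, and `delta` controls how strictly the constraint is imposed):

```python
import numpy as np

def unmix_sum_to_one(E, x, delta=1e3):
    """Linear unmixing of pixel spectrum x against endmember matrix E
    (bands x endmembers). The sum-to-one constraint is enforced softly by
    appending delta * ones as an extra 'band' whose observation is delta."""
    Ea = np.vstack([E, delta * np.ones((1, E.shape[1]))])
    xa = np.append(x, delta)
    abundances, *_ = np.linalg.lstsq(Ea, xa, rcond=None)
    return abundances
```

Larger `delta` pulls the abundance sum closer to one at the cost of a slightly worse spectral fit; a fully constrained solver would also project onto the nonnegative orthant.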

Journal ArticleDOI
Huijie Zhao, Shaoguang Shi, Hongzhi Jiang, Ying Zhang, Zefu Xu
TL;DR: A multiplane model (MPM) is proposed that uses phase fringes to produce dense mark points and a back-propagation neural network to obtain subpixel calibration; experiments show that the MPM reduces the back-projection error efficiently compared with the pinhole model.
Abstract: A specifically designed imaging system based on an acousto-optic tunable filter (AOTF) can integrate hyperspectral imaging and 3D reconstruction. As a result of its complicated optical structure, the AOTF imaging system deviates from the traditional pinhole model and lens distortion form, making precise camera calibration difficult. The factors behind this deviation are discussed, and a multiplane model (MPM) is proposed that uses phase fringes to produce dense mark points and a back-propagation neural network to obtain subpixel calibration. Experiments show that the MPM reduces the back-projection error efficiently compared with the pinhole model. A 3D reconstruction based on the calibration result verifies the feasibility of the proposed method.

Journal ArticleDOI
01 Sep 2017-Water
TL;DR: Wang et al. proposed a modified linear spectral mixture analysis (LSMA) method for extracting high-precision water fraction maps, and applied it to the 18 October 2015 Landsat 8 OLI image of the Pearl River Delta.
Abstract: High-resolution water mapping with remotely sensed data is essential to the monitoring of rainstorm waterlogging and flood disasters. In this study, a modified linear spectral mixture analysis (LSMA) method is proposed to extract high-precision water fraction maps. In the modified LSMA, the pure water and mixed water–land pixels, which are extracted by the Otsu method and a morphological dilation operation, are used to improve the accuracy of the water fractions. The modified LSMA is applied to the 18 October 2015 Landsat 8 OLI image of the Pearl River Delta to extract water fractions. Based on the water fraction maps, a modified subpixel water mapping method (MSWM) built on a pixel-swapping algorithm is proposed to obtain the spatial distribution of water at the subpixel scale. The MSWM performs subpixel water mapping in two steps, considering both inter-subpixel/pixel and intra-subpixel/subpixel spatial attractions. The subpixel water map is first initialized using the inter-subpixel/pixel spatial attractions, which are estimated from the distance between a given subpixel and its surrounding pixels and from the water fractions of those pixels. Based on the initialized subpixel water maps, the final maps are determined by a modified pixel-swapping algorithm, in which the intra-subpixel/subpixel spatial attractions are estimated from the initialized maps and an inverse-distance weighted function relating the current subpixel, at the centre of a moving window, to the surrounding subpixels within the window. The subpixel water mapping performance of the MSWM is compared with that of subpixel mapping for linear objects (SPML) and of the subpixel/pixel spatial attraction model (SPSAM) using a GF-1 reference image from 20 October 2015.
The experimental results show that the MSWM outperforms SPML and SPSAM, achieving the largest overall accuracy values and Kappa coefficients while recovering more detail. Furthermore, the MSWM largely eliminates jagged edges, producing smooth, continuous boundaries.
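The pixel-swapping step can be illustrated with a minimal binary (water/land) sketch. The window radius, the inverse-distance weights, and the single-swap-per-coarse-pixel rule below are simplifying assumptions, not the MSWM's exact formulation:

```python
import numpy as np

def attractiveness(z, y, x, radius=2):
    """Inverse-distance-weighted attraction of subpixel (y, x) to the
    water subpixels inside a (2*radius+1)-wide square window."""
    h, w = z.shape
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and z[ny, nx] == 1:
                total += 1.0 / np.hypot(dy, dx)
    return total

def pixel_swap_iteration(z, scale):
    """One pass of pixel swapping: inside each coarse pixel, swap the least
    attractive water subpixel with the most attractive land subpixel when
    that increases spatial coherence. Per-coarse-pixel water counts (the
    water fractions) are preserved by construction. Modifies z in place."""
    h, w = z.shape
    for cy in range(0, h, scale):
        for cx in range(0, w, scale):
            cells = [(y, x) for y in range(cy, cy + scale)
                            for x in range(cx, cx + scale)]
            water = [(attractiveness(z, y, x), y, x) for y, x in cells if z[y, x] == 1]
            land  = [(attractiveness(z, y, x), y, x) for y, x in cells if z[y, x] == 0]
            if not water or not land:
                continue
            a_w, wy, wx = min(water)
            a_l, ly, lx = max(land)
            if a_l > a_w:
                z[wy, wx], z[ly, lx] = 0, 1
    return z
```

Running one iteration on a small grid pulls isolated water subpixels toward the main water body while each coarse pixel keeps its water count, which is the property that makes the result consistent with the input fraction map.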

Journal ArticleDOI
Shubo Zhou, Yan Yuan, Lijuan Su, Xiaomin Ding, Wang Jichao
TL;DR: In this article, a ray-tracing method is used to model the telecentric-based light field imaging process, and a regularized super-resolution method is applied to obtain super-resolution results with a magnification ratio of 8.