
Showing papers on "Image resolution published in 2015"


Journal ArticleDOI
TL;DR: In this article, the uncertainty of a PIV displacement field is estimated using a generic post-processing method based on statistical analysis of the correlation process using differences in the intensity pattern in the two images.
Abstract: The uncertainty of a PIV displacement field is estimated using a generic post-processing method based on statistical analysis of the correlation process, using differences in the intensity pattern in the two images. First, the second image is dewarped back onto the first one using the computed displacement field, which provides two almost perfectly matching images. Differences are analyzed regarding the effect of shifting the peak of the correlation function. A relationship is derived between the standard deviation of intensity differences in each interrogation window and the expected asymmetry of the correlation peak, which is then converted to the uncertainty of a displacement vector. This procedure is tested with synthetic data for various types of noise and experimental conditions (pixel noise, out-of-plane motion, seeding density, particle image size, etc.) and is shown to provide an accurate estimate of the true error.
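A minimal sketch of the dewarping and windowed-difference steps described above (Python/NumPy; the function names, the displacement convention and the 32-pixel window are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dewarp_second_image(img2, u, v):
    """Warp the second PIV frame back onto the first using the computed
    per-pixel displacement field (u = horizontal, v = vertical)."""
    rows, cols = np.indices(img2.shape, dtype=float)
    # Sample img2 at the displaced coordinates (bilinear interpolation).
    return map_coordinates(img2, [rows + v, cols + u], order=1, mode='nearest')

def window_difference_std(img1, img2_dewarped, win=32):
    """Std of intensity differences in each interrogation window -- the
    statistic the method relates to correlation-peak asymmetry."""
    diff = img1 - img2_dewarped
    h, w = diff.shape
    stds = np.empty((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            stds[i, j] = diff[i*win:(i+1)*win, j*win:(j+1)*win].std()
    return stds
```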

519 citations


Journal ArticleDOI
TL;DR: In this paper, a variational-based approach for fusing hyperspectral and multispectral images is proposed, which is formulated as an inverse problem whose solution is the target image assumed to live in a lower dimensional subspace.
Abstract: This paper presents a variational-based approach for fusing hyperspectral and multispectral images. The fusion problem is formulated as an inverse problem whose solution is the target image assumed to live in a lower dimensional subspace. A sparse regularization term is carefully designed, relying on a decomposition of the scene on a set of dictionaries. The dictionary atoms and the supports of the corresponding active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved via alternating optimization with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed algorithm when compared with state-of-the-art fusion methods.

474 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This work builds on the Patchmatch idea: starting from randomly generated 3D planes in scene space, the best-fitting planes are iteratively propagated and refined to obtain a 3D depth and normal field per view, such that a robust photo-consistency measure over all images is maximized.
Abstract: We present a new, massively parallel method for high-quality multiview matching. Our work builds on the Patchmatch idea: starting from randomly generated 3D planes in scene space, the best-fitting planes are iteratively propagated and refined to obtain a 3D depth and normal field per view, such that a robust photo-consistency measure over all images is maximized. Our main novelties are, on the one hand, to formulate Patchmatch in scene space, which makes it possible to aggregate image similarity across multiple views and obtain more accurate depth maps, and, on the other hand, a modified, diffusion-like propagation scheme that can be massively parallelized and delivers dense multiview correspondence over ten 1.9-Megapixel images in 3 seconds on a consumer-grade GPU. Our method uses a slanted support window and thus has no fronto-parallel bias; it is completely local and parallel, such that computation time scales linearly with image size and inversely with the number of parallel threads. Furthermore, it has a low memory footprint (four values per pixel, independent of the depth range). It therefore scales exceptionally well and can handle multiple large images at high depth resolution. Experiments on the DTU and Middlebury multiview datasets as well as oblique aerial images show that our method achieves very competitive results with high accuracy and completeness, across a range of different scenarios.

410 citations


Journal ArticleDOI
TL;DR: Results show that the DMAS beamformer outperforms DAS in both simulated and experimental trials and that the main improvement brought about by this new method is a significantly higher contrast resolution, which translates into an increased dynamic range and better quality of B-mode images.
Abstract: Most ultrasound medical imaging systems currently on the market implement standard Delay and Sum (DAS) beamforming to form B-mode images. However, the image resolution and contrast achievable with DAS are limited by the aperture size and by the operating frequency. For this reason, different beamformers have been presented in the literature, mainly based on adaptive algorithms, which achieve higher performance at the cost of increased computational complexity. In this paper, we propose the use of an alternative nonlinear beamforming algorithm for medical ultrasound imaging, called Delay Multiply and Sum (DMAS), which was originally conceived for a microwave radar system for breast cancer detection. We modify the DMAS beamformer and test its performance on both simulated and experimentally collected linear-scan data, by comparing the Point Spread Functions, beampatterns, synthetic phantom and in vivo carotid artery images obtained with standard DAS and with the proposed algorithm. Results show that the DMAS beamformer outperforms DAS in both simulated and experimental trials and that the main improvement brought about by this new method is a significantly higher contrast resolution (i.e., narrower main lobe and lower side lobes), which translates into an increased dynamic range and better quality of B-mode images.
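A minimal sketch of the core DMAS operation on pre-delayed channel data (Python/NumPy; a naive O(N²) pairwise loop for clarity, assuming the per-channel focusing delays have already been applied as in DAS):

```python
import numpy as np

def dmas_beamform(delayed):
    """Delay Multiply and Sum over time-aligned channel data.

    delayed: array of shape (n_channels, n_samples), already delayed
    for the focal point exactly as in standard DAS.
    """
    n_ch = delayed.shape[0]
    y = np.zeros(delayed.shape[1])
    for i in range(n_ch - 1):
        for j in range(i + 1, n_ch):
            prod = delayed[i] * delayed[j]
            # Signed square root keeps the output in the same
            # dimensionality as the input signals.
            y += np.sign(prod) * np.sqrt(np.abs(prod))
    return y
```

In published DMAS variants the pairwise products shift signal energy towards twice the transmit frequency, so the output is typically band-pass filtered around that band before envelope detection; that filtering step is omitted here.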

376 citations


Journal ArticleDOI
TL;DR: A novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring and applies the approach to data from a flight campaign in a barley experiment to demonstrate the feasibility of vegetation monitoring in the context of precision agriculture.
Abstract: This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way of quality assurance which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a resolution of approximately 1 cm for the spatial information and 21 cm for the hyperspectral information. The radiometric calibration yields good results, with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52–59) to demonstrate the feasibility of vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single-date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied to other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand.
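One of the indices used above, RDVI, is a one-line computation on the calibrated cube. A minimal sketch (Python/NumPy; the band-index arithmetic assumes evenly spaced bands over 450–950 nm, which is an assumption about the camera, not stated in the abstract):

```python
import numpy as np

def rdvi(nir, red):
    """Renormalized Difference Vegetation Index:
    RDVI = (NIR - Red) / sqrt(NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.sqrt(nir + red)

# For a (H, W, 125)-band cube covering 450-950 nm at ~4 nm spacing,
# picking ~670 nm (red) and ~800 nm (NIR) bands would look like:
# red = cube[:, :, (670 - 450) // 4]
# nir = cube[:, :, (800 - 450) // 4]
# index_map = rdvi(nir, red)
```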

376 citations


Journal ArticleDOI
TL;DR: This work demonstrates a single-photon imaging system based on a time-gated intensified camera from which the image of an object can be inferred from very few detected photons, and shows that a ghost-imaging configuration is a useful approach for obtaining images with high signal-to-noise ratios.
Abstract: Low-light-level imaging techniques have application in many diverse fields, ranging from biological sciences to security. A high-quality digital camera based on a multi-megapixel array will typically record an image by collecting of order 10^5 photons per pixel, but by how much could this photon flux be reduced? In this work we demonstrate a single-photon imaging system based on a time-gated intensified camera from which the image of an object can be inferred from very few detected photons. We show that a ghost-imaging configuration, where the image is obtained from photons that have never interacted with the object, is a useful approach for obtaining images with high signal-to-noise ratios. The use of heralded single photons ensures that the background counts can be virtually eliminated from the recorded images. By applying principles of image compression and associated image reconstruction, we obtain high-quality images of objects from raw data formed from an average of fewer than one detected photon per image pixel.

361 citations


Patent
10 Mar 2015
TL;DR: In this paper, an adaptive strobe illumination control process for use in a digital image capture and processing system is described, in which digital images are analyzed in real-time and, based on the results of this analysis, the exposure time (i.e. photonic integration time interval) is automatically adjusted during subsequent image frames (i.e. image acquisition cycles).
Abstract: An adaptive strobe illumination control process for use in a digital image capture and processing system. In general, the process involves: (i) illuminating an object in the field of view (FOV) with several different pulses of strobe (i.e. stroboscopic) illumination over a pair of consecutive video image frames; (ii) detecting digital images of the illuminated object over these consecutive image frames; and (iii) decode processing the digital images in an effort to read a code symbol graphically encoded therein. In a first illustrative embodiment, upon failure to read a code symbol graphically encoded in one of the first and second images, these digital images are analyzed in real-time, and based on the results of this real-time image analysis, the exposure time (i.e. photonic integration time interval) is automatically adjusted during subsequent image frames (i.e. image acquisition cycles) according to the principles of the present disclosure. In a second illustrative embodiment, upon failure to read a code symbol graphically encoded in one of the first and second images, these digital images are analyzed in real-time, and based on the results of this real-time image analysis, the energy level of the strobe illumination is automatically adjusted during subsequent image frames (i.e. image acquisition cycles) according to the principles of the present disclosure.

352 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: By working directly on the whole image, the proposed CSC-SR algorithm does not need to divide the image into overlapped patches, and can exploit the image global correlation to produce more robust reconstruction of image local structures.
Abstract: Most of the previous sparse coding (SC) based super resolution (SR) methods partition the image into overlapped patches, and process each patch separately. These methods, however, ignore the consistency of pixels in overlapped patches, which is a strong constraint for image reconstruction. In this paper, we propose a convolutional sparse coding (CSC) based SR (CSC-SR) method to address the consistency issue. Our CSC-SR involves three groups of parameters to be learned: (i) a set of filters to decompose the low resolution (LR) image into LR sparse feature maps, (ii) a mapping function to predict the high resolution (HR) feature maps from the LR ones, and (iii) a set of filters to reconstruct the HR images from the predicted HR feature maps via simple convolution operations. By working directly on the whole image, the proposed CSC-SR algorithm does not need to divide the image into overlapped patches, and can exploit the image global correlation to produce more robust reconstruction of image local structures. Experimental results clearly validate the advantages of CSC over patch-based SC in SR application. Compared with state-of-the-art SR methods, the proposed CSC-SR method achieves highly competitive PSNR results, while demonstrating better edge and texture preservation performance.
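Step (iii) is plain convolution, so the final reconstruction stage is short. A minimal sketch (Python/SciPy; shapes and names are illustrative, and the learned filters and predicted HR feature maps are assumed given):

```python
import numpy as np
from scipy.signal import fftconvolve

def reconstruct_hr(feature_maps, filters):
    """Reconstruct the HR image as a sum of convolutions of the predicted
    HR sparse feature maps with the learned reconstruction filters.

    feature_maps: (K, H, W) predicted HR sparse feature maps
    filters:      (K, f, f) learned reconstruction filters
    """
    img = np.zeros(feature_maps.shape[1:])
    for k in range(feature_maps.shape[0]):
        img += fftconvolve(feature_maps[k], filters[k], mode='same')
    return img
```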

346 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This paper proposes a method which performs hyperspectral super-resolution by jointly unmixing the two input images into the pure reflectance spectra of the observed materials and the associated mixing coefficients, with a number of useful constraints imposed by elementary physical properties of spectral mixing.
Abstract: Hyperspectral cameras capture images with many narrow spectral channels, which densely sample the electromagnetic spectrum. The detailed spectral resolution is useful for many image analysis problems, but it comes at the cost of much lower spatial resolution. Hyperspectral super-resolution addresses this problem, by fusing a low-resolution hyperspectral image and a conventional high-resolution image into a product of both high spatial and high spectral resolution. In this paper, we propose a method which performs hyperspectral super-resolution by jointly unmixing the two input images into the pure reflectance spectra of the observed materials and the associated mixing coefficients. The formulation leads to a coupled matrix factorisation problem, with a number of useful constraints imposed by elementary physical properties of spectral mixing. In experiments with two benchmark datasets we show that the proposed approach delivers improved hyperspectral super-resolution.
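The core building block of such methods is a nonnegative factorisation of an image matrix into endmember spectra and abundances. A minimal sketch of that block using standard multiplicative updates (Python/NumPy; this ignores the paper's coupling of the two inputs and its physical constraints such as sum-to-one abundances, and assumes nonnegative data):

```python
import numpy as np

def unmix_nmf(Y, n_endmembers, n_iter=200, eps=1e-9):
    """Nonnegative unmixing Y ~ E @ A via multiplicative updates.

    Y: (bands, pixels) data matrix with nonnegative entries.
    Returns endmember spectra E (bands, p) and abundances A (p, pixels).
    """
    rng = np.random.default_rng(0)
    E = rng.random((Y.shape[0], n_endmembers))
    A = rng.random((n_endmembers, Y.shape[1]))
    for _ in range(n_iter):
        A *= (E.T @ Y) / (E.T @ E @ A + eps)
        E *= (Y @ A.T) / (E @ A @ A.T + eps)
    return E, A
```

In the coupled setting, one factorisation models the low-resolution hyperspectral image and the other the high-resolution conventional image, with shared spectra and spatially downsampled abundances linking the two.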

333 citations


Journal ArticleDOI
07 Aug 2015
TL;DR: A taxonomical view of the field is provided and the current methodologies for multimodal classification of remote sensing images are reviewed, which highlight the most recent advances, which exploit synergies with machine learning and signal processing.
Abstract: Earth observation through remote sensing images allows the accurate characterization and identification of materials on the surface from space and airborne platforms. Multiple and heterogeneous image sources can be available for the same geographical region: multispectral, hyperspectral, radar, multitemporal, and multiangular images can today be acquired over a given scene. These sources can be combined/fused to improve classification of the materials on the surface. Even if these systems are generally accurate, the field is about to face new challenges: the upcoming constellations of satellite sensors will acquire large amounts of images of different spatial, spectral, angular, and temporal resolutions. In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems. In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images. We also highlight the most recent advances, which exploit synergies with machine learning and signal processing: sparse methods, kernel-based fusion, Markov modeling, and manifold alignment. Then, we illustrate the different approaches in seven challenging remote sensing applications: 1) multiresolution fusion for multispectral image classification; 2) image downscaling as a form of multitemporal image fusion and multidimensional interpolation among sensors of different spatial, spectral, and temporal resolutions; 3) multiangular image classification; 4) multisensor image fusion exploiting physically-based feature extractions; 5) multitemporal image classification of land covers in incomplete, inconsistent, and vague image sources; 6) spatiospectral multisensor fusion of optical and radar images for change detection; and 7) cross-sensor adaptation of classifiers. The adoption of these techniques in operational settings will help to monitor our planet from space in the very near future.

319 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work proposes a generic Bayesian sparse coding strategy to be used with Bayesian dictionaries learned with the Beta process, and theoretically analyzes the accuracy of the proposed strategy.
Abstract: Despite the proven efficacy of hyperspectral imaging in many computer vision tasks, its widespread use is hindered by its low spatial resolution, resulting from hardware limitations. We propose a hyperspectral image super resolution approach that fuses a high resolution image with the low resolution hyperspectral image using non-parametric Bayesian sparse representation. The proposed approach first infers probability distributions for the material spectra in the scene and their proportions. The distributions are then used to compute sparse codes of the high resolution image. To that end, we propose a generic Bayesian sparse coding strategy to be used with Bayesian dictionaries learned with the Beta process. We theoretically analyze the accuracy of the proposed strategy. The computed codes are used with the estimated scene spectra to construct the super resolution hyperspectral image. Exhaustive experiments on two public databases of ground-based hyperspectral images and a remotely sensed image show that the proposed approach outperforms the existing state of the art.

Proceedings ArticleDOI
Youngjin Yoon, Hae-Gon Jeon, Donggeun Yoo, Joon-Young Lee, In So Kweon
07 Dec 2015
TL;DR: A novel method for Light-Field image super-resolution (SR) via a deep convolutional neural network, using a data-driven learning method to simultaneously up-sample the angular resolution as well as the spatial resolution of a Light-Field image.
Abstract: Commercial Light-Field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this paper, we present a novel method for Light-Field image super-resolution (SR) via a deep convolutional neural network. Rather than the conventional optimization framework, we adopt a data-driven learning method to simultaneously up-sample the angular resolution as well as the spatial resolution of a Light-Field image. We first augment the spatial resolution of each sub-aperture image to enhance details by a spatial SR network. Then, novel views between the sub-aperture images are generated by an angular super-resolution network. These networks are trained independently but finally fine-tuned via end-to-end training. The proposed method shows the state-of-the-art performance on the HCI synthetic dataset, and is further evaluated by challenging real-world applications including refocusing and depth map estimation.
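As a hedged illustration of the spatial-SR stage, a minimal SRCNN-style network in PyTorch is sketched below; the layer widths and kernel sizes are illustrative, not the paper's architecture, and the input is assumed to be a bicubically upsampled sub-aperture image:

```python
import torch
import torch.nn as nn

class SpatialSRNet(nn.Module):
    """Three-layer convolutional network that maps an upsampled
    single-channel sub-aperture image to a detail-enhanced one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

# Usage: y = SpatialSRNet()(torch.randn(1, 1, 64, 64))
```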

Journal ArticleDOI
TL;DR: Experiments on MR images of both adult and pediatric subjects demonstrate that the proposed image SR method enhances the details in the recovered high-resolution images, and outperforms methods such as nearest-neighbor interpolation, cubic interpolation, iterative back projection (IBP), non-local means (NLM), and TV-based up-sampling.
Abstract: Image super-resolution (SR) aims to recover high-resolution images from their low-resolution counterparts for improving image analysis and visualization. Interpolation methods, widely used for this purpose, often result in images with blurred edges and blocking effects. More advanced methods such as total variation (TV) retain edge sharpness during image recovery. However, these methods only utilize information from local neighborhoods, neglecting useful information from remote voxels. In this paper, we propose a novel image SR method that integrates both local and global information for effective image recovery. This is achieved by, in addition to TV, low-rank regularization that enables utilization of information throughout the image. The optimization problem can be solved effectively via alternating direction method of multipliers (ADMM). Experiments on MR images of both adult and pediatric subjects demonstrate that the proposed method enhances the details in the recovered high-resolution images, and outperforms methods such as the nearest-neighbor interpolation, cubic interpolation, iterative back projection (IBP), non-local means (NLM), and TV-based up-sampling.
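Inside ADMM, the low-rank regularization reduces to a singular value thresholding (SVT) step, the proximal operator of the nuclear norm. A minimal sketch (Python/NumPy):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink the singular values of X by
    tau and reconstruct. This is the nuclear-norm proximal step that a
    low-rank-regularized ADMM iteration applies to its matrix variable."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

Within such an ADMM scheme, X would be the current image estimate arranged as a matrix, and tau the low-rank weight divided by the ADMM penalty parameter; the TV term contributes its own proximal update in alternation.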

Journal ArticleDOI
TL;DR: The concept of a ‘single-pixel camera’ is extended to provide continuous real-time video at 10 Hz, simultaneously in the visible and short-wave infrared, using an efficient computer algorithm, allowing for low-cost, non-visible imaging systems in applications such as night-vision, gas sensing and medical diagnostics.
Abstract: Conventional cameras rely upon a pixelated sensor to provide spatial resolution. An alternative approach replaces the sensor with a pixelated transmission mask encoded with a series of binary patterns. Combining knowledge of the series of patterns and the associated filtered intensities, measured by single-pixel detectors, allows an image to be deduced through data inversion. In this work we extend the concept of a ‘single-pixel camera’ to provide continuous real-time video at 10 Hz, simultaneously in the visible and short-wave infrared, using an efficient computer algorithm. We demonstrate our camera for imaging through smoke, through a tinted screen, whilst performing compressive sampling and recovering high-resolution detail by arbitrarily controlling the pixel-binning of the masks. We anticipate real-time single-pixel video cameras to have considerable importance where pixelated sensors are limited, allowing for low-cost, non-visible imaging systems in applications such as night-vision, gas sensing and medical diagnostics.
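A toy reconstruction with Hadamard mask patterns shows why the data inversion is cheap when the pattern set is orthogonal (Python/SciPy; real systems build differential ±1 measurements from binary 0/1 masks, and compressive variants use far fewer patterns than pixels):

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_reconstruct(measurements, n=32):
    """Recover an n x n image from single-pixel measurements taken with
    the rows of a Hadamard matrix as mask patterns (n*n must be a power
    of two for scipy.linalg.hadamard)."""
    N = n * n
    H = hadamard(N).astype(float)        # rows are +/-1 patterns
    # Hadamard matrices satisfy H @ H.T = N * I, so inversion is a
    # single matrix-vector product.
    return (H.T @ measurements / N).reshape(n, n)

# Simulated acquisition: one detector reading per displayed pattern.
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
m = hadamard(32 * 32).astype(float) @ scene.ravel()
assert np.allclose(single_pixel_reconstruct(m), scene)
```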

Journal ArticleDOI
TL;DR: The potential of image fusion is demonstrated through 'sharpening' of IMS images, which uses microscopy measurements to predict ion distributions at a spatial resolution that exceeds that of measured ion images by ten times or more, and prediction of ion distributions in tissue areas that were not measured by IMS.
Abstract: We describe a predictive imaging modality created by 'fusing' two distinct technologies: imaging mass spectrometry (IMS) and microscopy. IMS-generated molecular maps, rich in chemical information but having coarse spatial resolution, are combined with optical microscopy maps, which have relatively low chemical specificity but high spatial information. The resulting images combine the advantages of both technologies, enabling prediction of a molecular distribution both at high spatial resolution and with high chemical specificity. Multivariate regression is used to model variables in one technology, using variables from the other technology. We demonstrate the potential of image fusion through several applications: (i) 'sharpening' of IMS images, which uses microscopy measurements to predict ion distributions at a spatial resolution that exceeds that of measured ion images by ten times or more; (ii) prediction of ion distributions in tissue areas that were not measured by IMS; and (iii) enrichment of biological signals and attenuation of instrumental artifacts, revealing insights not easily extracted from either microscopy or IMS individually.
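The fusion reduces to multivariate regression across co-registered grids. A minimal sketch with ordinary least squares (Python/scikit-learn; the feature construction and image registration, which do the heavy lifting in practice, are assumed done, and the function name is illustrative):

```python
from sklearn.linear_model import LinearRegression

def sharpen_ion_image(micro_feats_lr, ion_lr, micro_feats_hr):
    """Cross-modality regression in the spirit of image fusion: fit ion
    intensities from microscopy features at the coarse IMS grid, then
    predict at the full microscopy grid.

    micro_feats_lr: (n_lr_pixels, n_features) microscopy features on IMS grid
    ion_lr:         (n_lr_pixels,) measured ion intensities
    micro_feats_hr: (n_hr_pixels, n_features) features on microscopy grid
    """
    model = LinearRegression().fit(micro_feats_lr, ion_lr)
    return model.predict(micro_feats_hr)   # predicted HR ion distribution
```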

Journal ArticleDOI
TL;DR: The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and two extended data fusion models (STAARCH and ESTARFM) that have been used to fuse MODIS and Landsat data are reviewed.
Abstract: Crop condition and natural vegetation monitoring require high resolution remote sensing imagery in both time and space - a requirement that cannot currently be satisfied by any single Earth observing sensor in isolation. The suite of available remote sensing instruments varies widely in terms of sensor characteristics, spatial resolution and acquisition frequency. For example, the Moderate-resolution Imaging Spectroradiometer (MODIS) provides daily global observations at 250 m to 1 km spatial resolution. While imagery from coarse resolution sensors such as MODIS is typically superior to finer resolution data in terms of revisit frequency, it lacks the spatial detail to capture surface features for many applications. The Landsat satellite series provides medium spatial resolution (30 m) imagery which is well suited to capturing surface details, but a long revisit cycle (16 days) has limited its use in describing daily surface changes. Data fusion approaches provide an alternative way to utilize observations from multiple sensors so that the fused results can provide higher value than an individual sensor alone. In this paper, we review the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and two extended data fusion models (STAARCH and ESTARFM) that have been used to fuse MODIS and Landsat data. The fused MODIS-Landsat results inherit the spatial detail of Landsat (30 m) and the temporal revisit frequency of MODIS (daily). The theoretical basis of the fusion approach is described and recent applications are presented. While these approaches can produce imagery with high spatiotemporal resolution, they still rely on the availability of actual satellite images and the quality of ingested remote sensing products. As a result, data fusion is useful for bridging gaps between medium resolution image acquisitions, but cannot replace actual satellite missions.

Journal ArticleDOI
TL;DR: IsoView microscopy effectively doubles the penetration depth and provides subsecond temporal resolution for specimens 400-fold larger than could previously be imaged, and improves spatial resolution at least sevenfold and decreases resolution anisotropy at least threefold.
Abstract: Imaging fast cellular dynamics across large specimens requires high resolution in all dimensions, high imaging speeds, good physical coverage and low photo-damage. To meet these requirements, we developed isotropic multiview (IsoView) light-sheet microscopy, which rapidly images large specimens via simultaneous light-sheet illumination and fluorescence detection along four orthogonal directions. Combining these four views by means of high-throughput multiview deconvolution yields images with high resolution in all three dimensions. We demonstrate whole-animal functional imaging of Drosophila larvae at a spatial resolution of 1.1-2.5 μm and temporal resolution of 2 Hz for several hours. We also present spatially isotropic whole-brain functional imaging in Danio rerio larvae and spatially isotropic multicolor imaging of fast cellular dynamics across gastrulating Drosophila embryos. Compared with conventional light-sheet microscopy, IsoView microscopy improves spatial resolution at least sevenfold and decreases resolution anisotropy at least threefold. Compared with existing high-resolution light-sheet techniques, IsoView microscopy effectively doubles the penetration depth and provides subsecond temporal resolution for specimens 400-fold larger than could previously be imaged.

Journal ArticleDOI
TL;DR: The GaLactic and Extragalactic All-sky MWA survey (GLEAM) as mentioned in this paper surveys the entire radio sky south of declination +25 deg at frequencies between 72 and 231 MHz, made with the Murchison Widefield Array (MWA) using a drift scan method that makes efficient use of the MWA's very large field-of-view.
Abstract: GLEAM, the GaLactic and Extragalactic All-sky MWA survey, is a survey of the entire radio sky south of declination +25 deg at frequencies between 72 and 231 MHz, made with the Murchison Widefield Array (MWA) using a drift scan method that makes efficient use of the MWA's very large field-of-view. We present the observation details, imaging strategies and theoretical sensitivity for GLEAM. The survey ran for two years, the first year using 40 kHz frequency resolution and 0.5 s time resolution; the second year using 10 kHz frequency resolution and 2 s time resolution. The resulting image resolution and sensitivity depend on observing frequency, sky pointing and image weighting scheme. At 154 MHz the image resolution is approximately 2.5 x 2.2/cos(DEC+26.7) arcmin with sensitivity to structures up to ~10 deg in angular size. We provide tables to calculate the expected thermal noise for GLEAM mosaics depending on pointing and frequency and discuss limitations to achieving theoretical noise in Stokes I images. We discuss challenges, and their solutions, that arise for GLEAM including ionospheric effects on source positions and linearly polarised emission, and the instrumental polarisation effects inherent to the MWA's primary beam.
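For a quick feel of the quoted beam size, a small helper can evaluate the stated 154 MHz resolution formula at other declinations (Python/NumPy; the 154/frequency scaling assumes the usual diffraction-limited ~1/frequency behaviour of a fixed array, which is an assumption, not stated in the abstract):

```python
import numpy as np

def gleam_resolution_arcmin(dec_deg, freq_mhz=154.0):
    """Evaluate the quoted GLEAM beam, ~2.5 x 2.2/cos(DEC + 26.7 deg)
    arcmin at 154 MHz, at a given declination and frequency."""
    scale = 154.0 / freq_mhz
    return 2.5 * scale, 2.2 * scale / np.cos(np.radians(dec_deg + 26.7))

print(gleam_resolution_arcmin(-26.7))   # near zenith: (2.5, 2.2) arcmin
```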

Journal ArticleDOI
30 Jul 2015 - PLOS ONE
TL;DR: This study acquired in vivo MR images at 7T using prospective motion correction during long acquisitions and presents images among the highest, if not the highest resolution of in vivo human brain MRI ever acquired.
Abstract: High field MRI systems, such as 7 Tesla (T) scanners, can deliver higher signal to noise ratio (SNR) than lower field scanners and thus allow for the acquisition of data with higher spatial resolution, which is often demanded by users in the fields of clinical and neuroscientific imaging. However, high resolution scans may require long acquisition times, which in turn increase the discomfort for the subject and the risk of subject motion. Even with a cooperative and trained subject, involuntary motion due to heartbeat, swallowing, respiration and changes in muscle tone can cause image artifacts that reduce the effective resolution. In addition, scanning with higher resolution leads to increased sensitivity to even very small movements. Prospective motion correction (PMC) at 3T and 7T has proven to increase image quality in case of subject motion. Although the application of prospective motion correction is becoming more popular, previous articles focused on proof of concept studies and technical descriptions, whereas this paper briefly describes the technical aspects of the optical tracking system, marker fixation and cross calibration and focuses on the application of PMC to very high resolution imaging without intentional motion. In this study we acquired in vivo MR images at 7T using prospective motion correction during long acquisitions. As a result, we present images among the highest, if not the highest resolution of in vivo human brain MRI ever acquired.

Journal ArticleDOI
TL;DR: Protein imaging mass spectrometry capabilities at sub-cellular spatial resolution and at high acquisition speed are achieved by integrating a transmission geometry ion source with time-of-flight mass spectrometry, yielding a 1-μm laser spot diameter on target.
Abstract: We have achieved protein imaging mass spectrometry capabilities at sub-cellular spatial resolution and at high acquisition speed by integrating a transmission geometry ion source with time-of-flight mass spectrometry. The transmission geometry principle allowed us to achieve a 1-μm laser spot diameter on target. The minimal raster step size of the instrument was 2.5 μm. Use of 2,5-dihydroxyacetophenone, robotically sprayed on top of a tissue sample as a matrix, together with additional sample preparation steps, resulted in single-pixel mass spectra from mouse cerebellum tissue sections having more than 20 peaks in the 3–22 kDa range. Mass spectrometry images were acquired in a standard step raster microprobe mode at 5 pixels/s and in a continuous raster mode at 40 pixels/s.

Journal ArticleDOI
TL;DR: A novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location, formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer.
Abstract: In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former is to preserve accurate spectral information of the Ms image, while the latter is to keep sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, achieving linear computational complexity in the size of the output image at each iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.

Journal ArticleDOI
TL;DR: This paper proposes a novel method for fusion of MS/HS and PAN images and of MS and HS images in the lower dimensional PC subspace, with substantially lower computational requirements and very high tolerance to noise in the observed data.
Abstract: In remote sensing, due to cost and complexity issues, multispectral (MS) and hyperspectral (HS) sensors have significantly lower spatial resolution than panchromatic (PAN) images. Recently, the problem of fusing coregistered MS and HS images has gained some attention. In this paper, we propose a novel method for fusion of MS/HS and PAN images and of MS and HS images. MS and, more so, HS images contain spectral redundancy, which makes the dimensionality reduction of the data via principal component (PC) analysis very effective. The fusion is performed in the lower dimensional PC subspace; thus, we only need to estimate the first few PCs, instead of every spectral reflectance band, and without compromising the spectral and spatial quality. The benefits of the approach are substantially lower computational requirements and very high tolerance to noise in the observed data. Examples are presented using WorldView 2 data and a simulated data set based on a real HS image, with and without added noise.
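A classical PC-substitution pansharpening sketch conveys the idea of fusing in a low-dimensional PC subspace, although the paper's method is more careful about estimating only the leading PCs while preserving spectral quality (Python/scikit-learn; illustrative only, and the histogram-matching substitution is the textbook variant, not the paper's):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_fuse(hs_upsampled, pan, n_pc=5):
    """PC-subspace fusion sketch: project the (upsampled) HS cube onto a
    few principal components, inject PAN spatial detail into the first
    PC by mean/std matching, and invert the projection.

    hs_upsampled: (H, W, bands) HS image interpolated to the PAN grid
    pan:          (H, W) panchromatic image
    """
    H, W, B = hs_upsampled.shape
    pca = PCA(n_components=n_pc)
    pcs = pca.fit_transform(hs_upsampled.reshape(-1, B))
    pc1, p = pcs[:, 0], pan.ravel().astype(float)
    # Match PAN statistics to the first PC before substituting it.
    pcs[:, 0] = (p - p.mean()) / p.std() * pc1.std() + pc1.mean()
    return pca.inverse_transform(pcs).reshape(H, W, B)
```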

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work develops an improved technique for local shape estimation from defocus and correspondence cues, and shows how shading can be used to further refine the depth, and proposes a new framework that uses angular coherence to optimize depth and shading.
Abstract: Light-field cameras are now used in consumer and industrial applications. Recent papers and products have demonstrated practical depth recovery algorithms from a passive single-shot capture. However, current light-field capture devices have narrow baselines and constrained spatial resolution; therefore, the accuracy of depth recovery is limited, requiring heavy regularization and producing planar depths that do not resemble the actual geometry. Using shading information is essential to improve the shape estimation. We develop an improved technique for local shape estimation from defocus and correspondence cues, and show how shading can be used to further refine the depth. Light-field cameras are able to capture both spatial and angular data, suitable for refocusing. By locally refocusing each spatial pixel to its respective estimated depth, we produce an all-in-focus image where all viewpoints converge onto a point in the scene. Therefore, the angular pixels have angular coherence, which exhibits three properties: photo consistency, depth consistency, and shading consistency. We propose a new framework that uses angular coherence to optimize depth and shading. The optimization framework estimates both general lighting in natural scenes and shading to improve depth regularization. Our method outperforms current state-of-the-art light-field depth estimation algorithms in multiple scenarios, including real images.

Journal ArticleDOI
TL;DR: Results show that a balance between spatial resolution and spectral discrimination is needed to optimize mission planning and image processing for each agronomic objective, so that users do not have to give up flying at low altitudes in order to cover the whole area of interest.
Abstract: This article describes the technical specifications and configuration of a multirotor unmanned aerial vehicle (UAV) to acquire remote images using a six-band multispectral sensor. Several flight missions were programmed as follows: three flight altitudes (60, 80 and 100 m), two flight modes (stop and cruising modes) and two ground control point (GCP) settings were considered to analyze the influence of these parameters on the spatial resolution and spectral discrimination of multispectral orthomosaicked images obtained using Pix4Dmapper. Moreover, the area to be covered and the flight duration must also be considered for any programmed flight mission. The effect of the combination of all these parameters on the spatial resolution and spectral discrimination of the orthomosaics is presented. Spectral discrimination was evaluated for a specific agronomic purpose: using the UAV remote images to detect bare soil and vegetation (crop and weeds) for in-season site-specific weed management. These results show that a balance between spatial resolution and spectral discrimination is needed to optimize mission planning and image processing for each agronomic objective. In this way, users do not have to give up flying at low altitudes in order to cover the whole area of interest.
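The altitude/resolution trade-off behind the 60/80/100 m missions follows from the standard ground sample distance relation GSD = altitude × pixel pitch / focal length. A worked example with a hypothetical camera (the 9 mm lens and 5 μm pixels are assumptions, not the sensor from the paper):

```python
def ground_sample_distance(altitude_m, focal_mm, pixel_pitch_um):
    """Ground sample distance (m/pixel) for a nadir-pointing frame camera:
    GSD = altitude * pixel_pitch / focal_length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

# Hypothetical 9 mm lens with 5 um pixels at the three flight altitudes:
for alt in (60, 80, 100):
    gsd_cm = ground_sample_distance(alt, 9.0, 5.0) * 100
    print(f"{alt} m -> {gsd_cm:.1f} cm/pixel")   # 3.3, 4.4, 5.6 cm/pixel
```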

Journal ArticleDOI
TL;DR: In this article, the authors review the current status of assessment parameters for spatial resolution and of published data regarding spatial resolution in CBCT images, and evaluate and analyse published measurement data.
Abstract: Spatial resolution is one of the most important parameters objectively defining image quality, particularly in dental imaging, where fine details often have to be depicted. Here, we review the current status on assessment parameters for spatial resolution and on published data regarding spatial resolution in CBCT images. The current concepts of visual [line-pair (lp) measurements] and automated [modulation transfer function (MTF)] assessment of spatial resolution in CBCT images are summarized and reviewed. Published measurement data on spatial resolution in CBCT are evaluated and analysed. The effective (i.e. actual) spatial resolution available in CBCT images is influenced by the two-dimensional detector, the three-dimensional reconstruction process, patient movement during the scan and various other parameters. In the literature, the values range between 0.6 and 2.8 lp/mm (visual assessment; median, 1.7 lp/mm) vs MTF (range, 0.5–2.3 cycles per mm; median, 2.1 lp/mm). Spatial resolution of CBCT i...
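A common automated route to the MTF, sketched below, differentiates an oversampled edge spread function into a line spread function and Fourier-transforms it (Python/NumPy; the windowing choice is illustrative):

```python
import numpy as np

def mtf_from_edge_profile(esf, pixel_mm):
    """MTF from a 1-D edge spread function (ESF) sampled across a sharp
    edge: differentiate to the line spread function (LSF), window it,
    and take the magnitude of its Fourier transform.

    Returns (frequencies in cycles/mm, MTF normalised to 1 at DC).
    """
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf *= np.hanning(len(lsf))           # suppress noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                         # normalise to zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)
    return freqs, mtf
```

The spatial frequency at which this curve drops to 10% is one conventional single-number summary comparable to the visual line-pair readings quoted above.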

Journal ArticleDOI
TL;DR: A newly devised space-saving image-readout optical system for multiple reflection-type SLMs to increase the size of three-dimensional (3D) images that are displayed using electronic holography.
Abstract: In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.
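The viewing-zone angle of such displays is limited by the SLM pixel pitch through the grating equation, θ = 2·arcsin(λ/2p). A quick check with a hypothetical pitch (the 532 nm wavelength and 5.4 μm pitch are assumptions, not values from the paper):

```python
import numpy as np

def viewing_zone_angle_deg(wavelength_nm, pixel_pitch_um):
    """Maximum viewing-zone angle set by the SLM pixel pitch:
    theta = 2 * arcsin(lambda / (2 * p))."""
    lam = wavelength_nm * 1e-9
    p = pixel_pitch_um * 1e-6
    return np.degrees(2 * np.arcsin(lam / (2 * p)))

# Green light on a hypothetical 5.4-um-pitch SLM:
print(viewing_zone_angle_deg(532, 5.4))   # ~5.6 degrees
```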

Journal ArticleDOI
TL;DR: A novel set of optics with the combination of anamorphic prism pair and cylindrical lens is designed, which greatly improves the uniformity of the planar beams, and hence improves the reconstruction fidelity of the sensor.
Abstract: This work aims to develop a fan-beam tomographic sensor using tunable diode lasers that can simultaneously image temperature and gas concentration with both high spatial and temporal resolution. The sensor features three key advantages. First, the sensor is based on a stationary fan-beam arrangement, by which a high spatial resolution is guaranteed because the distance between two neighboring detectors in a view is approximately reduced to the size of a photodiode. Second, fan-beam illumination from five views is generated simultaneously instead of rotating either the fanned beams or the target, which significantly enhances the temporal resolution. Third, a novel set of optics combining an anamorphic prism pair and a cylindrical lens is designed, which greatly improves the uniformity of the planar beams and hence the reconstruction fidelity. This paper reports the tomographic model, optics design, numerical simulation and experimental validation of the sensor. The sensor exhibits good applicability for flame monitoring and combustion diagnosis.

Journal ArticleDOI
TL;DR: A new property called stability of segmentation algorithms is defined and it is demonstrated that piece- or tile-wise computation of a stable segmentation algorithm can be achieved with identical results with respect to processing the whole image at once.
Abstract: Segmentation of real-world remote sensing images is challenging because of the large size of those data, particularly for very high resolution imagery. However, many high-level remote sensing methods rely on segmentation at some point and are therefore difficult to assess at full image scale for real remote sensing applications. In this paper, we define a new property called stability of segmentation algorithms and demonstrate that piece- or tile-wise computation of a stable segmentation algorithm can be achieved with results identical to processing the whole image at once. We also derive a technique to empirically estimate the stability of a given segmentation algorithm and apply it to four different algorithms. Among those algorithms, the mean-shift algorithm is found to be quite unstable. We propose a modified version of this algorithm enforcing its stability and thus allowing for tile-wise computation with identical results. Finally, we present results of this method and discuss the various trends and applications.

Journal ArticleDOI
Ke Gu, Min Liu, Guangtao Zhai, Xiaokang Yang, Wenjun Zhang
TL;DR: Experimental results show that the performance of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) can be substantially improved by applying these metrics to OSS model preprocessed images, superior to classical multi-scale-PSNR/SSIM and comparable to the state-of-the-art competitors.
Abstract: Viewing distance and image resolution have substantial influences on image quality assessment (IQA), but this issue has been largely overlooked in the literature so far. In this paper, we examine the problem of optimal resolution adjustment as a preprocessing step for IQA. In general, the sampling of visual information by the optics of the human eye is approximately a low-pass process. For a given visual scene, the amount of extractable information greatly depends on the viewing distance and image resolution. We first introduce a novel dedicated viewing distance-changed image database (VDID2014) with two groups of typical viewing distances and image resolutions to promote the IQA study of this issue. Then we design a new effective optimal scale selection (OSS) model in dual-transform domains, in which a cascade of adaptive high-frequency clipping in the discrete wavelet transform domain and adaptive resolution scaling in the spatial domain is used. Validation of our technique is conducted on five image databases (LIVE, IVC, Toyama, VDID2014, and TID2008). Experimental results show that the performance of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) can be substantially improved by applying these metrics to OSS model preprocessed images, superior to classical multi-scale-PSNR/SSIM and comparable to the state-of-the-art competitors.
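A crude stand-in for the OSS idea is to rescale both images to a viewing-distance-dependent scale before computing PSNR (Python/SciPy; the scale-selection rule itself, the heart of the paper, is omitted and the scale is taken as a given parameter):

```python
import numpy as np
from scipy.ndimage import zoom

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two same-sized images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def scaled_psnr(ref, test, scale):
    """PSNR after rescaling both images to a common 'optimal' scale
    (0 < scale <= 1); a larger viewing distance or smaller display
    would imply a smaller scale."""
    ref_s = zoom(ref.astype(float), scale, order=1)
    test_s = zoom(test.astype(float), scale, order=1)
    return psnr(ref_s, test_s)
```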

Journal ArticleDOI
TL;DR: The results of the DSM accuracy investigation demonstrate the quality of the UAV photogrammetry product when an appropriate number of GCPs is used, as well as the efficiency and accuracy of generating standard mapping products.