
Showing papers on "Image resolution published in 2013"


Journal ArticleDOI
01 Mar 2013
TL;DR: Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper and several techniques are investigated for combining both spatial and spectral information.
Abstract: Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.

1,225 citations


Journal ArticleDOI
TL;DR: This work introduces a measure based on Fourier ring correlation (FRC) that can be computed directly from an image and demonstrates its validity and benefits on two-dimensional (2D) and 3D localization microscopy images of tubulin and actin filaments.
Abstract: Resolution in optical nanoscopy (or super-resolution microscopy) depends on the localization uncertainty and density of single fluorescent labels and on the sample's spatial structure. Currently there is no integral, practical resolution measure that accounts for all factors. We introduce a measure based on Fourier ring correlation (FRC) that can be computed directly from an image. We demonstrate its validity and benefits on two-dimensional (2D) and 3D localization microscopy images of tubulin and actin filaments. Our FRC resolution method makes it possible to compare achieved resolutions in images taken with different nanoscopy methods, to optimize and rank different emitter localization and labeling strategies, to define a stopping criterion for data acquisition, to describe image anisotropy and heterogeneity, and even to estimate the average number of localizations per emitter. Our findings challenge the current focus on obtaining the best localization precision, showing instead how the best image resolution can be achieved as fast as possible.
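A minimal sketch of how such an FRC curve can be computed, assuming img1 and img2 are equally sized 2D images rendered from two independent random halves of the localization data (the function name and the 1/7 threshold noted in the comment are the conventional choices, not code from the paper):

```python
import numpy as np

def frc_curve(img1, img2):
    """Fourier ring correlation between two half-data images."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int).ravel()
    # correlate the two spectra ring by ring over spatial frequency
    num = np.bincount(r, weights=(f1 * np.conj(f2)).real.ravel())
    den1 = np.bincount(r, weights=(np.abs(f1) ** 2).ravel())
    den2 = np.bincount(r, weights=(np.abs(f2) ** 2).ravel())
    frc = num / np.sqrt(den1 * den2)
    # the FRC resolution is commonly read off where the curve first drops below 1/7
    return frc[: min(ny, nx) // 2]
```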

649 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work formulates a convex optimization problem using higher-order regularization for depth image upsampling and derives a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second.
Abstract: In this work we present a novel method for the challenging problem of depth image upsampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high-quality depth measurements but are limited in their lateral resolution. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth image upsampling. In this optimization, an anisotropic diffusion tensor, calculated from a high-resolution intensity image, is used to guide the upsampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel upsampling clearly outperforms state-of-the-art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth, which, for the first time, enable benchmarking of depth upsampling methods using real sensor data.
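As a hedged illustration of the guidance term, the sketch below builds a per-pixel anisotropic diffusion tensor from the intensity gradients of the guidance image; the exponential edge weighting and the beta/gamma values are common choices assumed here, not necessarily the paper's tuned parameters:

```python
import numpy as np

def diffusion_tensor(intensity, beta=9.0, gamma=0.85, eps=1e-6):
    """Per-pixel 2x2 tensor T = w * n n^T + n_perp n_perp^T from an intensity image."""
    gy, gx = np.gradient(intensity.astype(float))
    mag = np.hypot(gx, gy) + eps
    nx_, ny_ = gx / mag, gy / mag       # unit gradient direction n
    w = np.exp(-beta * mag ** gamma)    # edge-stopping weight
    txx = w * nx_ ** 2 + ny_ ** 2       # diffusion is damped across edges
    txy = (w - 1.0) * nx_ * ny_         # and unhindered along them
    tyy = w * ny_ ** 2 + nx_ ** 2
    return txx, txy, tyy                # symmetric tensor components
```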

538 citations


Journal ArticleDOI
TL;DR: An optical model for light field microscopy based on wave optics, instead of previously reported ray optics models is presented, and a 3-D deconvolution method is presented that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported.
Abstract: Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.
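The paper's solver is built around a wave-optics light field PSF; as a stand-in, the sketch below shows the generic Richardson-Lucy style multiplicative update for this kind of inverse problem, with forward and adjoint as placeholder projection operators supplied by the caller:

```python
import numpy as np

def richardson_lucy(measured, forward, adjoint, n_iter=30, eps=1e-12):
    """measured: recorded sensor image; forward(v): volume -> sensor;
    adjoint(s): sensor -> volume (back-projection)."""
    v = np.ones_like(adjoint(measured))      # flat initial volume estimate
    norm = adjoint(np.ones_like(measured))   # normalization (adjoint of ones)
    for _ in range(n_iter):
        ratio = measured / (forward(v) + eps)   # data over current prediction
        v *= adjoint(ratio) / (norm + eps)      # multiplicative update
    return v
```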

472 citations


Journal ArticleDOI
TL;DR: This paper proposes a new pan-sharpening method named SparseFI, based on the compressive sensing theory and explores the sparse representation of HR/LR multispectral image patches in the dictionary pairs cotrained from the panchromatic image and its downsampled LR version.
Abstract: Data provided by most optical Earth observation satellites such as IKONOS, QuickBird, and GeoEye are composed of a panchromatic channel of high spatial resolution (HR) and several multispectral channels at a lower spatial resolution (LR). The fusion of an HR panchromatic and the corresponding LR spectral channels is called “pan-sharpening.” It aims at obtaining an HR multispectral image. In this paper, we propose a new pan-sharpening method named Sparse Fusion of Images (SparseFI, pronounced as “sparsify”). SparseFI is based on the compressive sensing theory and explores the sparse representation of HR/LR multispectral image patches in the dictionary pairs cotrained from the panchromatic image and its downsampled LR version. Compared with conventional methods, it “learns” from, i.e., adapts itself to, the data and has generally better performance than existing methods. Due to the fact that the SparseFI method does not assume any spectral composition model of the panchromatic image and due to the super-resolution capability and robustness of sparse signal reconstruction algorithms, it gives higher spatial resolution and, in most cases, less spectral distortion compared with the conventional methods.
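A hedged sketch of the patch-level reconstruction step: the low-resolution patch is sparse-coded in the LR dictionary with orthogonal matching pursuit, and the same code is applied to the coupled HR dictionary. D_lr/D_hr (unit-norm atoms as columns) and the sparsity level are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruct_hr_patch(lr_patch, D_lr, D_hr, n_nonzero=8):
    # sparse code of the LR patch in the LR dictionary (unit-norm columns)
    alpha = orthogonal_mp(D_lr, lr_patch.ravel(), n_nonzero_coefs=n_nonzero)
    # the coupled HR dictionary shares the code, yielding the sharpened patch
    return D_hr @ alpha
```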

390 citations


Journal ArticleDOI
TL;DR: A new fully automatic image information extraction, generalization and mosaic workflow is presented that is based on multiscale textural and morphological image feature extraction, and a new systematic approach for quality control and validation allowing global spatial and thematic consistency checking is proposed and applied.
Abstract: A general framework for processing high and very-high resolution imagery in support of a Global Human Settlement Layer (GHSL) is presented, together with a discussion of the results of the first operational test of the production workflow. The test involved the mapping of 24.3 million km² of the Earth's surface spread over four continents, corresponding to an estimated population of 1.3 billion people in 2010. The resolution of the input image data ranges from 0.5 to 10 meters, collected by a heterogeneous set of platforms including the satellites SPOT (2 and 5), CBERS 2B, RapidEye (2 and 4), WorldView (1 and 2), GeoEye 1, QuickBird 2, Ikonos 2, and airborne sensors. Several imaging modes were tested, including panchromatic, multispectral and pan-sharpened images. A new fully automatic image information extraction, generalization and mosaic workflow is presented that is based on multiscale textural and morphological image feature extraction. New image feature compression and optimization techniques are introduced, together with new learning and classification techniques allowing for the processing of HR/VHR image data using low-resolution thematic layers as reference. A new systematic approach for quality control and validation allowing global spatial and thematic consistency checking is proposed and applied. The quality of the results is discussed by sensor, band, resolution, and eco-region. Critical points, lessons learned and next steps are highlighted.

385 citations


Journal ArticleDOI
21 Mar 2013-Nature
TL;DR: A multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre is introduced.
Abstract: Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.

353 citations


Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work proposes a fast regression model for practical single image super-resolution based on in-place examples, leveraging two fundamental super-resolution approaches: learning from an external database and learning from self-examples.
Abstract: We propose a fast regression model for practical single image super-resolution based on in-place examples, by leveraging two fundamental super-resolution approaches: learning from an external database and learning from self-examples. Our in-place self-similarity refines the recently proposed local self-similarity by proving that a patch in the upper-scale image has good matches around its original location in the lower-scale image. Based on the in-place examples, a first-order approximation of the nonlinear mapping function from low- to high-resolution image patches is learned. Extensive experiments on benchmark and real-world images demonstrate that our algorithm can produce natural-looking results with sharp edges and preserved fine details, while the current state-of-the-art algorithms are prone to visual artifacts. Furthermore, our model can easily extend to deal with noise by combining the regression results on multiple in-place examples for robust estimation. The algorithm runs fast and is particularly useful for practical applications, where the input images typically contain diverse textures and they are potentially contaminated by noise or compression artifacts.

349 citations


Journal ArticleDOI
TL;DR: A hybrid methodology combining backscatter thresholding, region growing, and change detection (CD) is introduced as an approach enabling the automated, objective, and reliable flood extent extraction from very high resolution urban SAR images.
Abstract: Very high resolution synthetic aperture radar (SAR) sensors represent an alternative to aerial photography for delineating floods in built-up environments where flood risk is highest. However, even with currently available SAR image resolutions of 3 m and higher, signal returns from man-made structures hamper the accurate mapping of flooded areas. Enhanced image processing algorithms and a better exploitation of image archives are required to facilitate the use of microwave remote-sensing data for monitoring flood dynamics in urban areas. In this paper, a hybrid methodology combining backscatter thresholding, region growing, and change detection (CD) is introduced as an approach enabling the automated, objective, and reliable flood extent extraction from very high resolution urban SAR images. The method is based on the calibration of a statistical distribution of “open water” backscatter values from images of floods. Images acquired during dry conditions enable the identification of areas that are not “visible” to the sensor (i.e., regions affected by “shadow”) and that systematically behave as specular reflectors (e.g., smooth tarmac, permanent water bodies). CD with respect to a reference image thereby reduces overdetection of inundated areas. A case study of the July 2007 Severn River flood (UK) observed by airborne photography and the very high resolution SAR sensor on board TerraSAR-X highlights advantages and limitations of the method. Even though the proposed fully automated SAR-based flood-mapping technique overcomes some limitations of previous methods, further technological and methodological improvements are necessary for SAR-based flood detection in urban areas to match the mapping capability of high-quality aerial photography.
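To make the pipeline concrete, here is a minimal sketch of the thresholding, region-growing and change-detection core; the dB thresholds and the 3 dB darkening criterion are placeholders, with the real values calibrated from the fitted open-water backscatter distribution as described above:

```python
import numpy as np
from scipy import ndimage

def flood_mask(sar, t_seed=-18.0, t_grow=-14.0, dry_ref=None):
    """sar, dry_ref: calibrated backscatter images in dB (dry_ref optional)."""
    seeds = sar < t_seed                    # confident open-water pixels
    grown = ndimage.binary_propagation(seeds, mask=(sar < t_grow))
    if dry_ref is not None:
        # change detection: keep only pixels that darkened vs. dry conditions,
        # suppressing shadow areas and permanent specular reflectors
        grown &= (dry_ref - sar) > 3.0
    return grown
```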

328 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: It is shown that the recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR) can also be used to estimate the optimal blur kernel, leading to significant improvement in SR results.
Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function 'PSF' of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for "blind" super resolution. In particular, we show that: (i) contrary to common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms; and (ii) the correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that the recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR) can also be used to estimate the optimal blur kernel. This leads to significant improvement in SR results.

291 citations


Journal ArticleDOI
06 Mar 2013-PLOS ONE
TL;DR: The results suggest that a compromise between spectral and spatial resolution is needed to optimise the flight mission for each agronomic objective, as dictated by the size of the smallest objects to be discriminated (weed plants or weed patches).
Abstract: A new aerial platform for image acquisition has recently emerged: the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a visible-light still camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that a compromise between spectral and spatial resolution is needed to optimise the flight mission for each agronomic objective, as dictated by the size of the smallest objects to be discriminated (weed plants or weed patches).
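For reference, the three vegetation indices named in the abstract have standard definitions, written out below for float band arrays of identical shape (the small epsilon only guards against division by zero and is not part of the definitions):

```python
import numpy as np

def excess_green(r, g, b):
    s = r + g + b + 1e-9                 # normalized chromatic coordinates
    return (2 * g - r - b) / s           # ExG = 2g - r - b

def ngrdi(r, g):
    return (g - r) / (g + r + 1e-9)      # Normalised Green-Red Difference Index

def ndvi(nir, r):
    return (nir - r) / (nir + r + 1e-9)  # Normalised Difference Vegetation Index
```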

Journal ArticleDOI
TL;DR: Simulated and real experimental results on QuickBird and IKONOS images, compared against six well-known methods using several universal quality-evaluation indexes (with and without reference images), demonstrate the superiority of the proposed remote sensing image fusion method.
Abstract: Remote sensing image fusion can integrate the spatial detail of a panchromatic (PAN) image and the spectral information of a low-resolution multispectral (MS) image to produce a fused MS image with high spatial resolution. In this paper, a remote sensing image fusion method is proposed with sparse representations over learned dictionaries. The dictionaries for the PAN image and the low-resolution MS image are learned from the source images adaptively. Furthermore, a novel strategy is designed to construct the dictionary for unknown high-resolution MS images without a training set, which makes our proposed method more practical. The sparse coefficients of the PAN image and the low-resolution MS image are sought by the orthogonal matching pursuit algorithm. Then, the fused high-resolution MS image is calculated by combining the obtained sparse coefficients and the dictionary for the high-resolution MS image. By comparing with six well-known methods in terms of several universal quality-evaluation indexes, both with and without reference images, the simulated and real experimental results on QuickBird and IKONOS images demonstrate the superiority of our method.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: A novel approximation algorithm is developed whose complexity grows linearly with the image size and achieves real-time performance; it is well suited for upsampling depth images using binary edge maps, an important sensor fusion application.
Abstract: We propose an algorithm utilizing geodesic distances to upsample a low resolution depth image using a registered high resolution color image. Specifically, it computes depth for each pixel in the high resolution image using geodesic paths to the pixels whose depths are known from the low resolution one. Though this is closely related to the all-pairs-shortest-path problem, which has O(n² log n) complexity, we develop a novel approximation algorithm whose complexity grows linearly with the image size and achieves real-time performance. We compare our algorithm with the state of the art on the benchmark dataset and show that our approach provides more accurate depth upsampling with fewer artifacts. In addition, we show that the proposed algorithm is well suited for upsampling depth images using binary edge maps, an important sensor fusion application.
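The paper's contribution is the linear-time approximation; the plain Dijkstra version below only illustrates the underlying objective, assigning each high-resolution pixel the depth of its geodesically nearest seed. Here color is assumed to be a float H x W x 3 image and lam an assumed per-step spatial cost:

```python
import heapq
import numpy as np

def geodesic_upsample(color, seed_depth, seed_mask, lam=0.1):
    h, w = seed_mask.shape
    dist = np.full((h, w), np.inf)
    depth = np.zeros((h, w))
    pq = []
    for y, x in zip(*np.nonzero(seed_mask)):   # seeds: pixels with known depth
        dist[y, x] = 0.0
        depth[y, x] = seed_depth[y, x]
        heapq.heappush(pq, (0.0, y, x))
    while pq:                                  # Dijkstra over the pixel grid
        d, y, x = heapq.heappop(pq)
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + lam + np.linalg.norm(color[y, x] - color[ny, nx])
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    depth[ny, nx] = depth[y, x]  # inherit nearest seed's depth
                    heapq.heappush(pq, (nd, ny, nx))
    return depth
```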

Journal ArticleDOI
TL;DR: A depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account and can reconstruct quite accurate and dense point clouds with high computational efficiency.
Abstract: In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate a depth map for each image with acceptable errors, followed by a depth-map refinement process to enforce consistency over neighboring views. Compared to state-of-the-art methods, the proposed method can reconstruct quite accurate and dense point clouds with high computational efficiency. In addition, the proposed method can easily be parallelized at the image level, i.e., each depth map is computed individually, which makes it suitable for large-scale scene reconstruction with high resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets.

Journal ArticleDOI
TL;DR: A robust morphological methodology using edge detection is devised to evaluate the physical properties of different speckle patterns at image resolutions from 23 to 705 pixels/mm, demonstrating that the pattern properties derived from the analysis can be used to indicate pattern quality and hence minimise DIC measurement errors.

Journal ArticleDOI
TL;DR: This method forms a unified framework for blending remote sensing images with temporal reflectance changes, whether phenology change or land-cover-type change, based on a two-layer spatiotemporal fusion strategy due to the large spatial resolution difference between HSLT and LSHT data.
Abstract: This paper proposes a novel spatiotemporal fusion model for generating images with high-spatial and high-temporal resolution (HSHT) through learning with only one pair of prior images. For this purpose, this method establishes correspondence between low-spatial-resolution but high-temporal-resolution (LSHT) data and high-spatial-resolution but low-temporal-resolution (HSLT) data through the superresolution of LSHT data and further fusion by using high-pass modulation. Specifically, this method is implemented in two stages. In the first stage, the spatial resolution of LSHT data on prior and prediction dates is improved simultaneously by means of sparse representation; in the second stage, the known HSLT and the superresolved LSHTs are fused via high-pass modulation to generate the HSHT data on the prediction date. Remarkably, this method forms a unified framework for blending remote sensing images with temporal reflectance changes, whether phenology change (e.g., seasonal change of vegetation) or land-cover-type change (e.g., conversion of farmland to built-up area) based on a two-layer spatiotemporal fusion strategy due to the large spatial resolution difference between HSLT and LSHT data. This method was tested on both a simulated data set and two actual data sets of Landsat Enhanced Thematic Mapper Plus-Moderate Resolution Imaging Spectroradiometer acquisitions. It was also compared with other well-known spatiotemporal fusion algorithms on two types of data: images primarily with phenology changes and images primarily with land-cover-type changes. Experimental results demonstrated that our method performed better in capturing surface reflectance changes on both types of images.
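As a rough illustration of the second-stage fusion, the sketch below applies one common form of high-pass modulation, multiplying the super-resolved prediction by the ratio of the known high-spatial image to its low-pass version; the Gaussian low-pass and this exact pairing of images are assumptions for illustration, not the paper's precise formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_modulate(sr_pred, hs_image, sigma=2.0, eps=1e-6):
    # the ratio of the high-spatial image to its smoothed version carries the
    # spatial detail missing from the super-resolved prediction
    detail = hs_image / (gaussian_filter(hs_image, sigma) + eps)
    return sr_pred * detail
```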

Proceedings ArticleDOI
01 Dec 2013
TL;DR: It is found that an accurate blur model is more important than a sophisticated image prior in reconstructing raw low-res images acquired by an actual camera, and that the default blur models of various SR algorithms may differ from the camera blur, typically leading to over-smoothed results.
Abstract: Over the past decade, single image Super-Resolution (SR) research has focused on developing sophisticated image priors, leading to significant advances. Estimating and incorporating the blur model, which relates the high-res and low-res images, has received much less attention, however. In particular, the reconstruction constraint, namely that the blurred and downsampled high-res output should approximately equal the low-res input image, has been either ignored or applied with default fixed blur models. In this work, we examine the relative importance of the image prior and the reconstruction constraint. First, we show that an accurate reconstruction constraint combined with a simple gradient regularization achieves SR results almost as good as those of state-of-the-art algorithms with sophisticated image priors. Second, we study both empirically and theoretically the sensitivity of SR algorithms to the blur model assumed in the reconstruction constraint. We find that an accurate blur model is more important than a sophisticated image prior. Finally, using real camera data, we demonstrate that the default blur models of various SR algorithms may differ from the camera blur, typically leading to over-smoothed results. Our findings highlight the importance of accurately estimating camera blur in reconstructing raw low-res images acquired by an actual camera.
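A minimal sketch of enforcing the reconstruction constraint with a simple gradient regularizer, minimizing ||(k * x) downsampled by s - y||^2 + lam * ||grad x||^2 by gradient descent; the step size, lam and the wrap-around Laplacian are illustrative choices, and k stands for the (estimated) camera blur:

```python
import numpy as np
from scipy.ndimage import convolve

def sr_reconstruct(y, k, s, n_iter=200, step=0.2, lam=0.05):
    x = np.kron(y, np.ones((s, s)))                # naive initial upsampling
    for _ in range(n_iter):
        r = convolve(x, k)[::s, ::s] - y           # reconstruction residual
        r_up = np.zeros_like(x)
        r_up[::s, ::s] = r                         # adjoint of the subsampling
        grad_data = convolve(r_up, k[::-1, ::-1])  # adjoint of the blur
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= step * (grad_data - lam * lap)        # data term + smoothness term
    return x
```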

Journal ArticleDOI
TL;DR: Performance comparison with the classical brute-force image registration method reveals that the proposed quantum algorithm can achieve a quartic speedup.
Abstract: The power of quantum mechanics has been extensively exploited to meet the high computational requirements of classical image processing. However, existing quantum image models can only represent images sampled in Cartesian coordinates. In this paper, the quantum log-polar image (QUALPI), a novel quantum image representation, is proposed for the storage and processing of images sampled in log-polar coordinates. All the pixels of a QUALPI are stored in a normalized superposition and can be operated on simultaneously. A QUALPI can be constructed from a classical image via a preparation whose complexity is approximately linear in the image size. Some common geometric transformations, such as symmetry transformation, rotation, etc., can be performed conveniently with QUALPI. Based on these geometric transformations, a fast rotation-invariant quantum image registration algorithm is designed for log-polar images. Performance comparison with the classical brute-force image registration method reveals that our quantum algorithm can achieve a quartic speedup.

Journal ArticleDOI
TL;DR: The results reveal that light sheets generated by pulsed near-infrared Bessel beams with two-photon excitation provide the best image resolution and contrast, with a minimum of artifacts and signal degradation along the propagation of the beam into the sample.
Abstract: In this study we show that it is possible to successfully combine the benefits of light-sheet microscopy, self-reconstructing Bessel beams and two-photon fluorescence excitation to improve imaging in large, scattering media such as cancer cell clusters. We achieved a nearly two-fold increase in axial image resolution and a 5–10 fold increase in contrast relative to linear excitation with Bessel beams. The light-sheet penetration depth could be increased by a factor of 3–5 relative to linear excitation with Gaussian beams. These findings arise from both experiments and computer simulations. In addition, we provide a theoretical description of how these results come about. We investigated the change of image quality along the propagation direction of the illumination beams both for clusters of spheres and for tumor multicellular spheroids. The results reveal that light sheets generated by pulsed near-infrared Bessel beams with two-photon excitation provide the best image resolution and contrast, with a minimum of artifacts and signal degradation along the propagation of the beam into the sample.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: Experimental results demonstrate that the proposed algorithm generates hallucinated face images with favorable quality and adaptability.
Abstract: The goal of face hallucination is to generate high-resolution images with fidelity from low-resolution ones. In contrast to existing methods based on patch similarity or holistic constraints in the image space, we propose to exploit local image structures for face hallucination. Each face image is represented in terms of facial components, contours and smooth regions. The image structure is maintained via matching gradients in the reconstructed high-resolution output. For facial components, we align input images to generate accurate exemplars and transfer the high-frequency details for preserving structural consistency. For contours, we learn statistical priors to generate salient structures in the high-resolution images. A patch matching method is utilized on the smooth regions where the image gradients are preserved. Experimental results demonstrate that the proposed algorithm generates hallucinated face images with favorable quality and adaptability.

Journal ArticleDOI
TL;DR: An adaptive self-interpolation algorithm is first proposed to estimate a sharp high-resolution gradient field directly from the input low-resolution image, which is then regarded as a gradient constraint or an edge-preserving constraint to reconstruct the high-resolution image.
Abstract: Super-resolution from a single image plays an important role in many computer vision systems. However, it is still a challenging task, especially in preserving local edge structures. To construct high-resolution images while preserving the sharp edges, an effective edge-directed super-resolution method is presented in this paper. An adaptive self-interpolation algorithm is first proposed to estimate a sharp high-resolution gradient field directly from the input low-resolution image. The obtained high-resolution gradient is then regarded as a gradient constraint or an edge-preserving constraint to reconstruct the high-resolution image. Extensive results have shown both qualitatively and quantitatively that the proposed method can produce convincing super-resolution images containing complex and sharp features, as compared with the other state-of-the-art super-resolution algorithms.

Journal ArticleDOI
TL;DR: From this analysis a piece of image reconstruction code has been developed that can restore the majority of the effects of these detrimental image distortions for atomic-resolution data.
Abstract: The aberration-corrected scanning transmission electron microscope has great sensitivity to environmental or instrumental disturbances such as acoustic, mechanical, or electromagnetic interference. This interference can introduce distortions to the images recorded and degrade both signal noise and resolution performance. In addition, sample or stage drift can cause the images to appear warped and leads to unreliable lattice parameters being exhibited. Here a detailed study of the sources, natures, and effects of imaging distortions is presented, and from this analysis a piece of image reconstruction code has been developed that can restore the majority of the effects of these detrimental image distortions for atomic-resolution data. Example data are presented, and the performance of the restored images is compared quantitatively against the as-recorded data. An improvement in apparent resolution of 16% and an improvement in signal-to-noise ratio of 30% were achieved, as well as correction of the drift up to the precision to which it can be measured.
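The authors' reconstruction code is not reproduced here; as a hedged sketch of the first step of any such correction, frame-to-frame drift in a fast-acquired image series can be measured by phase correlation, e.g. with scikit-image:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def frame_drifts(frames, upsample=50):
    """Cumulative drift of each frame relative to the first, in pixels."""
    drifts = [np.zeros(2)]
    for prev, cur in zip(frames[:-1], frames[1:]):
        shift, _, _ = phase_cross_correlation(prev, cur,
                                              upsample_factor=upsample)
        drifts.append(drifts[-1] + shift)   # accumulate pairwise shifts
    return np.array(drifts)
```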

Journal ArticleDOI
TL;DR: This method, which is based on cross-correlations, allows optimisation of pattern phase even when the pattern itself is too fine for detection, in which case most other methods inevitably fail.
Abstract: Structured illumination microscopy can achieve super-resolution in fluorescence imaging. The sample is illuminated with periodic light patterns, and a series of images are acquired for different pattern positions, also called phases. From these a super-resolution image can be computed. However, for an artefact-free reconstruction it is important that the pattern phases be known with very high precision. If the necessary precision cannot be guaranteed experimentally, the phase information has to be retrieved a posteriori from the acquired data. We present a fast and robust algorithm that iteratively determines these phases with a precision of typically below λ/100. Our method, which is based on cross-correlations, allows optimisation of pattern phase even when the pattern itself is too fine for detection, in which case most other methods inevitably fail. We analyse the performance of this method using simulated data from a synthetic 2D sample as well as experimental single-slice data from a 3D sample and compare it with another previously published approach.
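The sketch below is not the paper's iterative cross-correlation scheme but the simplest related estimator it improves upon: reading the pattern phase off the complex argument of the image spectrum at the known illumination frequency (integer cycles per image and a particular sign convention are assumed):

```python
import numpy as np

def pattern_phase(img, kx, ky):
    """Phase (radians) of the Fourier component at the pattern frequency (kx, ky)."""
    f = np.fft.fft2(img)
    return float(np.angle(f[ky % img.shape[0], kx % img.shape[1]]))
```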

Journal Article
TL;DR: By showing how place recognition along a route is feasible even with severely degraded image sequences, this paper hopes to provoke a re-examination of how to develop and test future localization and mapping systems.
Abstract: In this paper we use the algorithm SeqSLAM to address the question, how little and what quality of visual information is needed to localize along a familiar route? We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road by using noisy, long-exposure blurred images, and in the other, two single pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
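A minimal sketch of the sequence-matching idea at the heart of SeqSLAM, comparing tiny normalized thumbnails and scoring whole route segments rather than single images; the full algorithm adds local contrast enhancement and a search over velocities, omitted here:

```python
import numpy as np

def normalize(img):
    return (img - img.mean()) / (img.std() + 1e-9)

def sequence_score(query_seq, db_seq):
    # both arguments: equal-length lists of small 2D thumbnails (e.g. 32x16)
    return sum(np.abs(normalize(q) - normalize(d)).mean()
               for q, d in zip(query_seq, db_seq))

def localize(query_seq, database):
    n = len(query_seq)
    scores = [sequence_score(query_seq, database[i:i + n])
              for i in range(len(database) - n + 1)]
    return int(np.argmin(scores))   # start index of best-matching segment
```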

Journal ArticleDOI
TL;DR: In this article, a self-assembled gold nanoparticle surface patterning technique is presented that enables nanometer spatial resolution for digital image correlation (DIC) experiments in a scanning electron microscope.
Abstract: A self-assembled gold nanoparticle surface patterning technique is presented that enables nanometer spatial resolution for digital image correlation (DIC) experiments in a scanning electron microscope. This technique, originally developed for surface-enhanced Raman scattering substrates, results in the assembly of individual 15–136 nm diameter gold nanoparticles over the surface of the test sample. The resulting dense, randomly isotropic, and high contrast pattern enables DIC down to an unprecedented image resolution of approximately 4 nm/pixel. The technique is inexpensive, fast, results in even coverage over the entire surface of the test sample, and can be applied to metallic and non-metallic substrates as well as curved or delicate specimens. In addition, the pattern is appropriate for multi-scale experimental investigations through the utilization of nanoparticle aggregates that collect on the surface in combination with the pattern formed by individual nanoparticles.

Journal ArticleDOI
TL;DR: An image fusion approach based on multiresolution and multisensor regularized spatial unmixing is presented, which yields a composite image with the spatial resolution of the high spatial resolution image while retaining the spectral and temporal characteristics of the medium spatial resolution image.

Patent
29 Jul 2013
TL;DR: A method is presented for generating one or more new spatial and chromatic variation digital images from an original digitally-acquired image that includes a face or portions of a face.
Abstract: A method of generating one or more new spatial and chromatic variation digital images uses an original digitally-acquired image which includes a face or portions of a face. A group of pixels that correspond to a face within the original digitally-acquired image is identified. A portion of the original image is selected to include the group of pixels. Values of pixels of one or more new images based on the selected portion are automatically generated, or an option to generate them is provided, in a manner which always includes the face within the one or more new images. Such a method may be implemented to automatically establish the correct orientation and color balance of an image. It can be implemented as an automated or semi-automatic method to guide users in viewing, capturing or printing images.

Journal ArticleDOI
TL;DR: The proposed SR approach is based upon an observation that small patches in natural images tend to redundantly repeat themselves many times both within the same scale and across different scales and can produce compelling SR recovery both quantitatively and perceptually in comparison with other state-of-the-art baselines.
Abstract: Example learning-based image super-resolution (SR) is recognized as an effective way to produce a high-resolution (HR) image with the help of an external training set. The effectiveness of learning-based SR methods, however, depends highly upon the consistency between the supporting training set and the low-resolution (LR) images to be handled. To reduce the adverse effect brought by incompatible high-frequency details in the training set, we propose a single image SR approach by learning multiscale self-similarities from an LR image itself. The proposed SR approach is based upon the observation that small patches in natural images tend to redundantly repeat themselves many times, both within the same scale and across different scales. To synthesize the missing details, we establish HR-LR patch pairs using the initial LR input and its down-sampled version to capture the similarities across different scales, and utilize the neighbor embedding algorithm to estimate the relationship between the LR and HR image pairs. To fully exploit the similarities across various scales inside the input LR image, we accumulate the previous resultant images as training examples for the subsequent reconstruction processes and adopt a gradual magnification scheme to upscale the LR input to the desired size step by step. In addition, to preserve sharper edges and suppress aliasing artifacts, we further apply the nonlocal means method to learn the similarity within the same scale and formulate a nonlocal prior regularization term to make SR estimation well posed under a reconstruction-based SR framework. Experimental results demonstrate that the proposed method can produce compelling SR recovery both quantitatively and perceptually in comparison with other state-of-the-art baselines.


Proceedings ArticleDOI
TL;DR: The Lytro camera is considered a successful example of the miniaturization, aided by increasing computational power, that characterizes mobile computational photography; the camera is analyzed as a black box through an interpretation of the image data it saves.
Abstract: The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the camera as a black box, and uses our interpretation of the image data it saves. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.