
Showing papers on "Image resolution published in 2012"


Journal ArticleDOI
TL;DR: This paper conducts a comparative study on 12 selected image fusion metrics over six multiresolution image fusion algorithms for two different fusion schemes and input images with distortion and relates the results to an image quality measurement.
Abstract: Comparison of image processing techniques is critically important in deciding which algorithm, method, or metric to use for enhanced image assessment. Image fusion is a popular choice for various image enhancement applications such as overlay of two image products, refinement of image resolutions for alignment, and image combination for feature extraction and target recognition. Since image fusion is used in many geospatial and night vision applications, it is important to understand these techniques and provide a comparative study of the methods. In this paper, we conduct a comparative study on 12 selected image fusion metrics over six multiresolution image fusion algorithms for two different fusion schemes and input images with distortion. The analysis can be applied to different image combination algorithms, image processing methods, and over a different choice of metrics that are of use to an image processing expert. The paper relates the results to an image quality measurement based on power spectrum and correlation analysis and serves as a summary of many contemporary techniques for objective assessment of image fusion algorithms.

563 citations


Journal ArticleDOI
TL;DR: This paper proposes a maximum a posteriori probability framework for SR recovery; thorough experimental results suggest that the proposed SR method can reconstruct higher-quality results both quantitatively and perceptually.
Abstract: Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.
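The non-local prior above can be illustrated with a non-local-means weighting: pixels backed by similar patches receive exponentially larger weights. A minimal sketch (the patch layout, the smoothing parameter `h`, and the function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def nlm_weights(patches, idx, h=1.0):
    """Non-local-means weights for the patch at `idx`: patches similar to it
    get exponentially larger weights; `h` (assumed) controls the decay."""
    d2 = np.sum((patches - patches[idx]) ** 2, axis=1)  # squared patch distances
    w = np.exp(-d2 / h**2)
    w[idx] = 0.0                  # a patch does not vote for itself
    return w / (w.sum() + 1e-12)  # normalize so the estimate is a convex combination

# toy example: patch 1 is nearly identical to patch 0, patch 2 is far away
patches = np.array([[1.0, 1.0], [1.0, 1.1], [5.0, 5.0]])
w = nlm_weights(patches, 0)
```

The target pixel would then be estimated as the `w`-weighted average of the other patches' center pixels; the steering-kernel local prior would supply a second, complementary weighting.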

527 citations


Journal ArticleDOI
TL;DR: This work shows that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing, and incorporates Lambertian and texture-preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework.
Abstract: Portable light field (LF) cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing and 3D reconstruction. We show that they obtain a larger depth of field but maintain the ability to reconstruct detail at high resolution. In fact, all depths are approximately focused, except for a thin slab where blur size is bounded, i.e., their depth of field is essentially inverted compared to regular cameras. Crucial to their success is the way they sample the LF, trading off spatial versus angular resolution, and how aliasing affects the LF. We show that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing. We address these challenges using an explicit image formation model, and incorporate Lambertian and texture preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework, eliminating aliasing by fusing multiview information. We demonstrate the method on synthetic and real images captured with our LF camera, and show that it can outperform other computational camera systems.

434 citations


Journal ArticleDOI
22 Mar 2012-Nature
TL;DR: The experimental demonstration of a general electron tomography method that achieves atomic-scale resolution without initial assumptions about the sample structure is reported, and it is anticipated that this general method can be applied not only to determine the 3D structure of nanomaterials at atomic-scale resolution, but also to improve the spatial resolution and image quality in other tomography fields.
Abstract: Transmission electron microscopy is a powerful imaging tool that has found broad application in materials science, nanoscience and biology. With the introduction of aberration-corrected electron lenses, both the spatial resolution and the image quality in transmission electron microscopy have been significantly improved and resolution below 0.5 angstroms has been demonstrated. To reveal the three-dimensional (3D) structure of thin samples, electron tomography is the method of choice, with cubic-nanometre resolution currently achievable. Discrete tomography has recently been used to generate a 3D atomic reconstruction of a silver nanoparticle two to three nanometres in diameter, but this statistical method assumes prior knowledge of the particle's lattice structure and requires that the atoms fit rigidly on that lattice. Here we report the experimental demonstration of a general electron tomography method that achieves atomic-scale resolution without initial assumptions about the sample structure. By combining a novel projection alignment and tomographic reconstruction method with scanning transmission electron microscopy, we have determined the 3D structure of an approximately ten-nanometre gold nanoparticle at 2.4-angstrom resolution. Although we cannot definitively locate all of the atoms inside the nanoparticle, individual atoms are observed in some regions of the particle and several grains are identified in three dimensions. The 3D surface morphology and internal lattice structure revealed are consistent with a distorted icosahedral multiply twinned particle. We anticipate that this general method can be applied not only to determine the 3D structure of nanomaterials at atomic-scale resolution, but also to improve the spatial resolution and image quality in other tomography fields.

379 citations


Journal ArticleDOI
TL;DR: The model was compared with other related algorithms on two types of data, images primarily with phenology change and images primarily with land-cover type change, and experimental results demonstrate the superiority of SPSTFM in capturing surface reflectance changes on both categories of images.
Abstract: This paper presents a novel model for blending remote sensing data of high spatial resolution (HSR), taken at infrequent intervals, with those available frequently but at low spatial resolution (LSR) in the context of monitoring and predicting changes in land usage and phenology. Named “SParse-representation-based SpatioTemporal reflectance Fusion Model” (SPSTFM), the model has been developed for predicting HSR surface reflectances through data blending with LSR scenes. Remarkably, this model forms a unified framework for fusing remote sensing images with temporal reflectance changes, phenology change (e.g., seasonal change of vegetation), or type change (e.g., conversion of farmland to built-up area), by establishing correspondences between structures within HSR images of given areas and their corresponding LSR images. Such a correspondence is achieved by means of sparse representation, specifically by jointly training two dictionaries generated from HSR and LSR difference image patches and sparse coding at the reconstruction stage. SPSTFM was tested using both a simulated data set and an actual data set of Landsat Enhanced Thematic Mapper Plus-Moderate Resolution Imaging Spectroradiometer acquisitions. It was also compared with other related algorithms on two types of data: images primarily with phenology change and images primarily with land-cover type change. Experimental results demonstrate the superiority of SPSTFM in capturing surface reflectance changes on both categories of images.

311 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare and evaluate different image matching methods for glacier flow determination over large scales, and they consider CCF-O and COSI-Corr to be the two most robust matching methods.

254 citations


Journal ArticleDOI
TL;DR: A boosted Bayesian multiresolution (BBMR) system is presented to identify regions of CaP on digital biopsy slides, and the feature analysis reveals that different classes and types of image features become more relevant for discriminating between CaP and benign areas at different image resolutions.
Abstract: Diagnosis of prostate cancer (CaP) currently involves examining tissue samples for CaP presence and extent via a microscope, a time-consuming and subjective process. With the advent of digital pathology, computer-aided algorithms can now be applied to disease detection on digitized glass slides. The size of these digitized histology images (hundreds of millions of pixels) presents a formidable challenge for any computerized image analysis program. In this paper, we present a boosted Bayesian multiresolution (BBMR) system to identify regions of CaP on digital biopsy slides. Such a system would serve as an important preceding step to a Gleason grading algorithm, where the objective would be to score the invasiveness and severity of the disease. In the first step, our algorithm decomposes the whole-slide image into an image pyramid comprising multiple resolution levels. Regions identified as cancer via a Bayesian classifier at lower resolution levels are subsequently examined in greater detail at higher resolution levels, thereby allowing for rapid and efficient analysis of large images. At each resolution level, ten image features are chosen from a pool of over 900 first-order statistical, second-order co-occurrence, and Gabor filter features using an AdaBoost ensemble method. The BBMR scheme, operating on 100 images obtained from 58 patients, yielded: 1) areas under the receiver operating characteristic curve (AUC) of 0.84, 0.83, and 0.76, respectively, at the lowest, intermediate, and highest resolution levels and 2) an eightfold savings in terms of computational time compared to running the algorithm directly at full (highest) resolution. The BBMR model outperformed (in terms of AUC): 1) individual features (no ensemble) and 2) a random forest classifier ensemble obtained by bagging multiple decision tree classifiers. 
The apparent drop-off in AUC at higher image resolutions is due to lack of fine detail in the expert annotation of CaP and is not an artifact of the classifier. The implicit feature selection done via the AdaBoost component of the BBMR classifier reveals that different classes and types of image features become more relevant for discriminating between CaP and benign areas at different image resolutions.
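The coarse-to-fine pyramid strategy is the source of the reported eightfold speedup: whole regions are dismissed cheaply at low resolution, and only flagged blocks are examined in full. A simplified two-level sketch, where a thresholding lambda stands in for the paper's boosted Bayesian classifier:

```python
import numpy as np

def block_mean(img, f):
    """Downsample by averaging f-by-f blocks (one pyramid level)."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def coarse_to_fine(img, is_suspicious, f=4):
    """Run the (stand-in) classifier on the coarse level first; only blocks it
    flags are re-examined pixel-by-pixel at full resolution. Returns the fine
    mask and how many full-resolution pixels were actually evaluated."""
    flagged = is_suspicious(block_mean(img, f))
    fine = np.zeros(img.shape, dtype=bool)
    examined = 0
    for i, j in zip(*np.nonzero(flagged)):
        blk = img[i*f:(i+1)*f, j*f:(j+1)*f]
        fine[i*f:(i+1)*f, j*f:(j+1)*f] = is_suspicious(blk)
        examined += blk.size
    return fine, examined

# toy slide: one bright quadrant; only 16 of 64 pixels are examined in full
img = np.zeros((8, 8)); img[:4, :4] = 1.0
mask, examined = coarse_to_fine(img, lambda x: x > 0.5, f=4)
```

On a whole-slide image with hundreds of millions of pixels, the savings compound across each additional pyramid level.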

251 citations


Journal ArticleDOI
TL;DR: A new wavelet shrinkage approach allows the distributed vibration measurement of 20-Hz and 8-kHz events to be detected over 1-km sensing length with a 5-ns optical pulse, which is equivalent to 50-cm spatial resolution using the single-mode sensing fiber.
Abstract: This letter proposes and demonstrates a wavelet technique to reduce the time-domain noise and achieve submeter spatial resolution in a distributed vibration sensor based on phase optical time domain reflectometry. A new wavelet shrinkage approach allows distributed vibration measurements of 20-Hz and 8-kHz events to be detected over a 1-km sensing length with a 5-ns optical pulse, which is equivalent to 50-cm spatial resolution using the single-mode sensing fiber.

250 citations


Proceedings ArticleDOI
18 Apr 2012
TL;DR: In this study, the most commonly used methods, including GIHS, GIHSF, PCA, and Wavelet, are analyzed using image quality metrics such as SSIM, ERGAS, and SAM to identify the method that produces the fused image with the least spectral distortion.
Abstract: In the literature, several methods are available for combining low-spatial-resolution multispectral and low-spectral-resolution panchromatic images to obtain a high-resolution multispectral image. One of the most common problems encountered in these methods is the spectral distortion introduced during the merging process. Moreover, the spectral quality of the image is the most important factor affecting the accuracy of the results in many applications such as object recognition, object extraction, and image analysis. In this study, the most commonly used methods, including GIHS, GIHSF, PCA, and Wavelet, are analyzed using image quality metrics such as SSIM, ERGAS, and SAM. According to the results, the wavelet method produces the fused image with the least spectral distortion, while the spectral quality of the GIHS, GIHSF, and PCA methods is close; however, the spatial quality of the image fused with the wavelet method is lower than that of the others.
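Two of the metrics used in this study have compact standard definitions and are easy to reproduce. A sketch of SAM (mean spectral angle between reference and fused spectra) and ERGAS (relative dimensionless global error); function and variable names are my own:

```python
import numpy as np

def sam(ref, fused):
    """Mean Spectral Angle Mapper (radians) between two (H, W, bands) images."""
    dot = np.sum(ref * fused, axis=-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    return np.mean(np.arccos(np.clip(dot / (denom + 1e-12), -1.0, 1.0)))

def ergas(ref, fused, ratio):
    """ERGAS: 100 * (h/l) * sqrt(mean over bands of (RMSE_b / mean_b)^2),
    where `ratio` = h/l is the high/low resolution pixel-size ratio
    (e.g. 0.25 for 4:1 pan-sharpening)."""
    bands = ref.shape[-1]
    acc = 0.0
    for b in range(bands):
        rmse = np.sqrt(np.mean((ref[..., b] - fused[..., b]) ** 2))
        acc += (rmse / ref[..., b].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)

ref = np.random.default_rng(0).random((4, 4, 3)) + 0.5
noisy = ref + 0.1
```

Both metrics are zero for a perfect fusion, and lower values mean less spectral distortion.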

243 citations


Journal ArticleDOI
TL;DR: An accurate model-based inversion algorithm for 3-D optoacoustic image reconstruction is proposed and validated and superior performance versus commonly-used backprojection inversion algorithms is showcased by numerical simulations and phantom experiments.
Abstract: In many practical optoacoustic imaging implementations, dimensionality of the tomographic problem is commonly reduced into two dimensions or 1-D scanning geometries in order to simplify technical implementation, improve imaging speed or increase signal-to-noise ratio. However, this usually comes at a cost of significantly reduced quality of the tomographic data, out-of-plane image artifacts, and overall loss of image contrast and spatial resolution. Quantitative optoacoustic image reconstruction implies therefore collection of point 3-D (volumetric) data from as many locations around the object as possible. Here, we propose and validate an accurate model-based inversion algorithm for 3-D optoacoustic image reconstruction. Superior performance versus commonly-used backprojection inversion algorithms is showcased by numerical simulations and phantom experiments.

204 citations


Journal ArticleDOI
TL;DR: In this article, the spatial resolution of digital particle image velocimetry (DPIV) is analyzed as a function of the tracer particles and the imaging and recording system.
Abstract: This work analyzes the spatial resolution that can be achieved by digital particle image velocimetry (DPIV) as a function of the tracer particles and the imaging and recording system. As the in-plane resolution for window-correlation evaluation is determined by the interrogation window size, it was assumed in the past that single-pixel ensemble-correlation increases the spatial resolution up to the pixel limit. However, it is shown that the determining factor limiting the resolution of single-pixel ensemble-correlation is the size of the particle images, which depends on the size of the particles, the magnification, the f-number of the imaging system, and the optical aberrations. Furthermore, since the minimum detectable particle image size is determined by the pixel size of the camera sensor in DPIV, this quantity is also considered in this analysis. It is shown that the optimal magnification that results in the best possible spatial resolution can be estimated from the particle size, the lens properties, and the pixel size of the camera. Thus, the information provided in this paper allows for the optimization of the camera and objective lens choices as well as the working distance for a given setup. Furthermore, the possibility of increasing the spatial resolution by means of particle tracking velocimetry (PTV) is discussed in detail. It is shown that this technique makes it possible to increase the spatial resolution to the subpixel limit for averaged flow fields. In addition, PTV evaluation methods do not show the bias errors that are typical for correlation-based approaches. Therefore, this technique is best suited for the estimation of velocity profiles.
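The particle-image size that limits resolution here follows from the standard PIV estimate, in which the geometric image is combined in quadrature with the diffraction-limited spot. A sketch under that common approximation (all lengths in metres; 2.44 is the Airy-disk diameter factor):

```python
import numpy as np

def particle_image_diameter(d_p, M, f_number, wavelength):
    """Recorded particle-image diameter: geometric image M*d_p combined in
    quadrature with the diffraction spot (standard PIV estimate)."""
    d_diff = 2.44 * f_number * (M + 1.0) * wavelength
    return np.sqrt((M * d_p) ** 2 + d_diff ** 2)

# 1 um tracer, magnification 0.5, f/8, 532 nm laser -> diffraction dominates
d = particle_image_diameter(1e-6, 0.5, 8.0, 532e-9)
```

For these values the diffraction spot (about 15.6 µm) dwarfs the geometric image (0.5 µm), which is exactly why the particle-image size, not the pixel pitch, limits single-pixel ensemble-correlation.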

Journal ArticleDOI
TL;DR: This article introduces a new data fusion model for blending observations of high-temporal-resolution sensors (e.g., MODIS) with those of moderate-spatial-resolution satellites (e.g., Landsat) to produce synthetic imagery with both high spatial and high temporal resolution.
Abstract: Due to cloud coverage and obstruction, it is difficult to obtain useful images during the critical periods of monitoring vegetation using medium-spatial-resolution satellites such as Landsat and Satellite Pour l'Observation de la Terre (SPOT), especially in pluvial regions. Although high-temporal-resolution sensors, such as the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS), can provide high-frequency data, the coarse ground resolutions of these sensors make them unsuitable for quantifying vegetation growth processes at fine scales. This paper introduces a new data fusion model for blending observations of high-temporal-resolution sensors (e.g., MODIS) and moderate-spatial-resolution satellites (e.g., Landsat) to produce synthetic imagery with both high spatial and high temporal resolution. By detecting temporal change information from MODIS daily surface reflectance images, our algorithm produces synthetic Landsat data based on a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image at the beginning time (T1). The algorithm was tested over a 185×185 km² area located in East China. The results showed that the algorithm can produce synthetic Landsat data similar to the actual observations, with a high correlation coefficient (r) of 0.98 between the synthetic imagery and the actual observations.
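In its simplest form, the temporal blending idea reduces to adding the coarse sensor's observed reflectance change to the fine base image. The published algorithm additionally weights spectrally similar neighboring pixels, which this sketch deliberately omits:

```python
import numpy as np

def blend(fine_t1, coarse_t1, coarse_t2):
    """Predict the fine-resolution image at T2 from the fine image at T1 plus
    the temporal change seen by the coarse sensor (both coarse images assumed
    already resampled onto the fine grid)."""
    return fine_t1 + (coarse_t2 - coarse_t1)

landsat_t1 = np.full((2, 2), 0.20)   # fine-scale reflectance at T1
modis_t1   = np.full((2, 2), 0.25)   # coarse-scale reflectance at T1
modis_t2   = np.full((2, 2), 0.30)   # coarse-scale reflectance at T2
pred = blend(landsat_t1, modis_t1, modis_t2)
```

Here the coarse sensor saw reflectance rise by 0.05 between dates, so the prediction lifts the fine base image by the same amount.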

Journal ArticleDOI
TL;DR: In this article, the authors implemented and investigated two image reconstruction methods for use with a 3D OAT small animal imager: a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing the total variation norm penalty.
Abstract: Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By the use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications.
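The quadratic-smoothness variant of PLS has a closed-form solution via the normal equations, which makes the trade-off controlled by the regularization weight easy to see. A small dense sketch (a real 3D OAT system matrix is far too large for this and requires iterative solvers; names are my own):

```python
import numpy as np

def pls_quadratic(H, y, beta):
    """Minimize ||H x - y||^2 + beta * ||D x||^2, with D the
    first-difference operator (a quadratic smoothness penalty)."""
    n = H.shape[1]
    D = np.diff(np.eye(n), axis=0)  # first-difference operator
    return np.linalg.solve(H.T @ H + beta * D.T @ D, H.T @ y)

y = np.array([0.0, 3.0, 0.0])
x_fit = pls_quadratic(np.eye(3), y, 0.0)       # no penalty: exact fit
x_smooth = pls_quadratic(np.eye(3), y, 10.0)   # strong penalty: flattened
```

A total-variation penalty, by contrast, is not quadratic, so it has no such closed form but preserves edges that this smoothness penalty blurs.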

Journal ArticleDOI
TL;DR: This paper proposes a multiple-geometric-dictionaries-based clustered sparse coding scheme for SISR, and adds a self-similarity constraint on the recovered image in patch aggregation to reveal new features and details.
Abstract: Recently, single image super-resolution reconstruction (SISR) via sparse coding has attracted increasing interest. In this paper, we propose a multiple-geometric-dictionaries-based clustered sparse coding scheme for SISR. First, a large number of high-resolution (HR) image patches are randomly extracted from a set of example training images and clustered into several groups of “geometric patches,” from which the corresponding “geometric dictionaries” are learned to further sparsely code each local patch in a low-resolution image. A clustering aggregation is performed on the HR patches recovered by different dictionaries, followed by a subsequent patch aggregation to estimate the HR image. Considering that there are often many repetitive image structures in an image, we add a self-similarity constraint on the recovered image in patch aggregation to reveal new features and details. Finally, the HR residual image is estimated by the proposed recovery method and compensated to better preserve the subtle details of the images. Experiments on natural images show that the proposed method outperforms its counterparts in both visual fidelity and numerical measures.

Journal ArticleDOI
TL;DR: An algorithm based on image wavelet segmentation and single-particle centroid determination is presented, its performance is compared with the commonly used Gaussian fitting of the point spread function, and a simulation-based estimate of the resolution of an experimental single-molecule acquisition is proposed.
Abstract: Localization of single molecules in microscopy images is a key step in quantitative single-particle data analysis. In particular, single-molecule-based super-resolution optical microscopy techniques require high localization accuracy as well as computation over large data sets, on the order of 10^5 single-molecule detections, to reconstruct a single image. We hereby present an algorithm based on image wavelet segmentation and single-particle centroid determination, and compare its performance with the commonly used Gaussian fitting of the point spread function. We performed realistic simulations at different signal-to-noise ratios and particle densities and show that the calculation time using the wavelet approach can be more than one order of magnitude faster than that of Gaussian fitting without a significant degradation of the localization accuracy, from 1 nm to 4 nm in our range of study. We propose a simulation-based estimate of the resolution of an experimental single-molecule acquisition.
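The centroid step at the heart of the wavelet pipeline is essentially a one-liner, which is where the order-of-magnitude speedup over iterative Gaussian fitting comes from. A sketch that assumes the spot has already been segmented and background-subtracted:

```python
import numpy as np

def centroid(spot):
    """Intensity-weighted centroid (row, col) of a segmented spot."""
    yy, xx = np.indices(spot.shape)
    s = spot.sum()
    return (yy * spot).sum() / s, (xx * spot).sum() / s

spot = np.zeros((5, 5))
spot[2, 3] = 4.0
spot[2, 2] = 1.0   # asymmetric tail pulls the centroid left of column 3
cy, cx = centroid(spot)
```

Unlike Gaussian fitting, this needs no iteration and no initial guess, at the cost of being more sensitive to residual background.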

Journal ArticleDOI
TL;DR: DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames and enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame.

Journal ArticleDOI
TL;DR: The target application for this sensor is time-resolved imaging, in particular fluorescence lifetime imaging microscopy and 3D imaging, and the characterization shows the suitability of the proposed sensor technology for these applications.
Abstract: We report on the design and characterization of a novel time-resolved image sensor fabricated in a 130 nm CMOS process. Each pixel within the 32×32 pixel array contains a low-noise single-photon detector and a high-precision time-to-digital converter (TDC). The 10-bit TDC exhibits a timing resolution of 119 ps with a timing uniformity across the entire array of less than 2 LSBs. The differential non-linearity (DNL) and integral non-linearity (INL) were measured at ±0.4 and ±1.2 LSBs, respectively. The pixel array was fabricated with a pitch of 50 μm in both directions and with a total TDC area of less than 2000 μm². The target application for this sensor is time-resolved imaging, in particular fluorescence lifetime imaging microscopy and 3D imaging. The characterization shows the suitability of the proposed sensor technology for these applications.

Journal ArticleDOI
TL;DR: This work reports on an architecture that acquires the two-dimensional spatial Fourier transform of the target object and determines its image signature, resolution, and signal-to-noise ratio in the presence of practical constraints such as atmospheric turbulence, background radiation, and photodetector noise.
Abstract: Computational ghost imaging is a structured-illumination active imager coupled with a single-pixel detector that has potential applications in remote sensing. Here we report on an architecture that acquires the two-dimensional spatial Fourier transform of the target object (which can be inverted to obtain a conventional image). We determine its image signature, resolution, and signal-to-noise ratio in the presence of practical constraints such as atmospheric turbulence, background radiation, and photodetector noise. We consider a bistatic imaging geometry and quantify the resolution impact of nonuniform Kolmogorov-spectrum turbulence along the propagation paths. We show that, in some cases, short-exposure intensity averaging can mitigate atmospheric-turbulence-induced resolution loss. Our analysis reveals some key performance differences between computational ghost imaging and conventional active imaging, and identifies scenarios in which theory predicts that the former will perform better than the latter.

Journal ArticleDOI
TL;DR: The 2-D x-space signal equation, 2-D image equation, and the concept of signal fading and resolution loss for a projection MPI imager are introduced, and the theoretically predicted x-space spatial resolution is confirmed.
Abstract: Projection magnetic particle imaging (MPI) can improve imaging speed by over 100-fold over traditional 3-D MPI. In this work, we derive the 2-D x-space signal equation and 2-D image equation, and introduce the concept of signal fading and resolution loss for a projection MPI imager. We then describe the design and construction of an x-space projection MPI scanner with a field gradient of 2.35 T/m across a 10 cm magnet free bore. The system has an expected resolution of 3.5 × 8.0 mm using Resovist tracer, and an experimental resolution of 3.8 × 8.4 mm. The system images 2.5 cm × 5.0 cm partial fields of view (FOVs) at 10 frames/s, and acquires a full field of view of 10 cm × 5.0 cm in 4 s. We conclude by imaging a resolution phantom, a complex “Cal” phantom, and mice injected with Resovist tracer, and experimentally confirm the theoretically predicted x-space spatial resolution.

Journal ArticleDOI
16 Jul 2012
TL;DR: This work presents a method to analyze high-density super-resolution data in three dimensions, where the images of individual fluorophores not only overlap, but also have varying PSFs that depend on the z positions of the fluorophore.
Abstract: Stochastic optical reconstruction microscopy (STORM) and related methods achieve sub-diffraction-limit image resolution through sequential activation and localization of individual fluorophores. The analysis of image data from these methods has typically been confined to the sparse activation regime, where the density of activated fluorophores is sufficiently low that there is minimal overlap between the images of adjacent emitters. Recently, several methods have been reported for analyzing higher-density data, allowing partial overlap between adjacent emitters. However, these methods have so far been limited to two-dimensional imaging, in which the point spread function (PSF) of each emitter is assumed to be identical. In this work, we present a method to analyze high-density super-resolution data in three dimensions, where the images of individual fluorophores not only overlap, but also have varying PSFs that depend on the z positions of the fluorophores. This approach accurately analyzed data sets with an emitter density five times higher than previously possible with sparse emitter analysis algorithms. We applied this algorithm to the analysis of data sets taken from membrane-labeled retina and brain tissues, which contain a high density of labels, and obtained substantially improved super-resolution image quality.

Journal ArticleDOI
TL;DR: In this article, the resolution limits of popular sub-diffraction and sub-wavelength imaging schemes are examined using a unified approach that allows rapid comparison of the relative merits and shortcomings of each technique.
Abstract: The resolution limits of popular sub-diffraction and sub-wavelength imaging schemes are examined using a unified approach that allows rapid comparison of the relative merits and shortcomings of each technique. This is intended to clarify the often confusing and constantly growing array of super-resolution techniques. Specific techniques examined include centroid-based techniques like PALM (photo-activated localization microscopy) and STORM (stochastic optical reconstruction microscopy), structured illumination techniques like SSIM (spatially structured illumination microscopy), STED (stimulated emission depletion), and GSD (ground state depletion), coherent techniques like MRI (magnetic resonance imaging), Rabi gradients, and light shift gradients, as well as quantum-inspired multi-photon techniques. It is found that the ultimate resolution for all these techniques can be described using a simple ratio of an oscillation frequency to an effective decay rate, which can be physically interpreted as the number of oscillations that can be observed before decay (i.e., the quality factor Q of the imaging transition).

Journal ArticleDOI
TL;DR: A patch-based regularization for iterative image reconstruction that uses neighborhood patches instead of individual pixels in computing the nonquadratic penalty is presented, which can achieve higher contrast recovery for small objects without increasing background variation compared with the quadratic regularization.
Abstract: Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization that penalizes image intensity difference between neighboring pixels. The most commonly used quadratic penalty often oversmoothes edges and fine features in reconstructed images. Nonquadratic penalties can preserve edges but often introduce piece-wise constant blocky artifacts and the results are also sensitive to the hyper-parameter that controls the shape of the penalty function. This paper presents a patch-based regularization for iterative image reconstruction that uses neighborhood patches instead of individual pixels in computing the nonquadratic penalty. The new regularization is more robust than the conventional pixel-based regularization in differentiating sharp edges from random fluctuations due to noise. An optimization transfer algorithm is developed for the penalized maximum likelihood estimation. Each iteration of the algorithm can be implemented in three simple steps: an EM-like image update, an image smoothing and a pixel-by-pixel image fusion. Computer simulations show that the proposed patch-based regularization can achieve higher contrast recovery for small objects without increasing background variation compared with the quadratic regularization. The reconstruction is also more robust to the hyper-parameter than conventional pixel-based nonquadratic regularizations. The proposed regularization method has been applied to real 3-D PET data.
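The difference from a pixel-based penalty is that neighboring pixels are compared through the patches centered on them, so random noise is less likely to be mistaken for an edge. A sketch for horizontal neighbors only (the 3×3 patch size and the purely quadratic form are simplifications; the paper applies a nonquadratic function to these patch distances):

```python
import numpy as np

def patch_penalty(img, patch=3):
    """Sum over horizontal neighbor pairs of the squared distance between
    the patches centered on each pixel of the pair."""
    r = patch // 2
    pen = 0.0
    H, W = img.shape
    for i in range(r, H - r):
        for j in range(r, W - r - 1):
            p = img[i-r:i+r+1, j-r:j+r+1]      # patch centered on (i, j)
            q = img[i-r:i+r+1, j+1-r:j+2+r]    # patch centered on (i, j+1)
            pen += np.sum((p - q) ** 2)
    return pen
```

A flat image incurs zero penalty, while a step edge or noise incurs a positive one; the patch comparison is what lets the regularizer distinguish the two cases more robustly than single-pixel differences.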

Journal ArticleDOI
TL;DR: The essential role of SR for layover separation in urban infrastructure monitoring is indicated by geometric and statistical analysis and it is shown that double scatterers with small elevation distances are more frequent than those with large elevation distances.
Abstract: Tomographic synthetic aperture radar (SAR) inversion, including SAR tomography and differential SAR tomography, is essentially a spectral analysis problem. The resolution in the elevation direction depends on the elevation aperture size, i.e., on the spread of orbit tracks. Since the orbits of modern meter-resolution spaceborne SAR systems, such as TerraSAR-X, are tightly controlled, the tomographic elevation resolution is at least an order of magnitude lower than in range and azimuth. Hence, super-resolution (SR) reconstruction algorithms are desired. Considering the sparsity of the signal in elevation, a compressive sensing based super-resolving algorithm, named “Scale-down by L1 norm Minimization, Model selection, and Estimation Reconstruction” (SL1MMER, pronounced “slimmer”), was proposed by the authors in a previous paper. The ultimate bounds of the technique on localization accuracy and SR power were investigated. In this paper, the essential role of SR for layover separation in urban infrastructure monitoring is indicated by geometric and statistical analysis. It is shown that double scatterers with small elevation distances are more frequent than those with large elevation distances. Furthermore, the SR capability of SL1MMER is demonstrated using TerraSAR-X real data examples. For a high-rise building complex, the percentage of detected double scatterers is almost doubled compared to classical linear estimators. Among them, half of the detected double scatterer pairs have elevation distances smaller than the Rayleigh elevation resolution. This confirms the importance of SR for this type of application.

Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) based multi-frame super-resolution algorithm for hyperspectral images, in which principal component analysis (PCA) is utilized in both parts of the algorithm: motion estimation and image reconstruction.

Journal ArticleDOI
TL;DR: A new interpolation-based image super-resolution reconstruction method that uses multisurface fitting to take full advantage of spatial structure information, together with an extension of the method to a more general noise model.
Abstract: In this paper, we propose a new interpolation-based method of image super-resolution reconstruction. The idea is to use multisurface fitting to take full advantage of spatial structure information. Each site of low-resolution pixels is fitted with one surface, and the final estimation is made by fusing the multisampling values on these surfaces in the maximum a posteriori fashion. With this method, the reconstructed high-resolution images preserve image details effectively without any hypothesis on the image prior. Furthermore, we extend our method to a more general noise model. Experimental results on simulated and real-world data show the superiority of the proposed method in both quantitative and visual comparisons.
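The surface-fitting-and-fusion idea can be illustrated with planes: fit one plane per low-resolution site from its 3×3 neighborhood by least squares, sample every plane at the high-resolution target location, and fuse the samples — under Gaussian assumptions, MAP fusion reduces to an inverse-variance weighted average. The plane model, neighborhood size, and residual-based weighting below are illustrative simplifications of the paper's multisurface scheme:

```python
def fit_plane(img, i, j):
    """Least-squares plane z ~ a + b*dx + c*dy over the 3x3 neighborhood of
    (i, j). With the symmetric design dx, dy in {-1, 0, 1}, the LS solution
    is closed-form: sum(dx^2) = sum(dy^2) = 6 over the 9 samples."""
    nb = [(di, dj, img[i + di][j + dj]) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    a = sum(z for _, _, z in nb) / 9.0
    b = sum(di * z for di, _, z in nb) / 6.0
    c = sum(dj * z for _, dj, z in nb) / 6.0
    resid = sum((z - (a + b * di + c * dj)) ** 2 for di, dj, z in nb)
    return a, b, c, resid

def fuse_estimate(img, sites, ti, tj, eps=1e-8):
    """Sample each site's plane at the target (ti, tj); fuse with inverse
    residual weights, so surfaces that fit their data better count more."""
    num = den = 0.0
    for i, j in sites:
        a, b, c, r = fit_plane(img, i, j)
        w = 1.0 / (r + eps)
        num += w * (a + b * (ti - i) + c * (tj - j))
        den += w
    return num / den

# LR image sampled from the linear surface z = 2x + 3y: all plane fits are exact.
lr = [[2.0 * x + 3.0 * y for y in range(5)] for x in range(5)]
z = fuse_estimate(lr, [(1, 1), (1, 2), (2, 1), (2, 2)], 1.5, 1.5)
print(round(z, 6))  # 7.5 = 2*1.5 + 3*1.5
```

On this linear test surface every plane agrees, so the fusion is trivial; on real images the surfaces disagree near edges, and the residual weighting decides which local model to trust.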

Journal ArticleDOI
TL;DR: A practical performance evaluation of a Direct Detection Device (DDD) for biological cryo-EM at two different microscope voltages shows that the DDD is capable of recording usable signal for 3D reconstructions at about 4/5 of the Nyquist frequency, which is a vast improvement over the performance of conventional imaging media.

Journal ArticleDOI
M. van Noort
TL;DR: In this article, a data reduction method is developed that takes into account the image degradation effects present in the data and minimizes the resulting errors, while simultaneously requiring fewer free parameters than conventional approaches.
Abstract: Context. When inverting solar spectra, image degradation effects that are present in the data are usually approximated or not considered. Aims. We develop a data reduction method that takes these issues into account and minimizes the resulting errors. Methods. By accounting for the diffraction PSF of the telescope during the inversions, we can produce a self-consistent solution that best fits the observed data, while simultaneously requiring fewer free parameters than conventional approaches. Results. Simulations using realistic MHD data indicate that the method is stable for all resolutions, including those with pixel scales well beyond those that can be resolved with a 0.5 m telescope, such as the Hinode SOT. Application of the presented method to reduce full Stokes data from the Hinode spectro-polarimeter results in dramatically increased image contrast and an increase in the resolution of the data to the diffraction limit of the telescope in almost all Stokes and fit parameters. The resulting data allow for detecting and interpreting solar features that have so far only been observed with 1 m class ground-based telescopes. Conclusions. A new inversion method was developed that allows for accurate fitting of solar spectro-polarimetric imaging data over a large field of view, while simultaneously improving the noise statistics and spatial resolution of the results significantly.
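The key move — accounting for the diffraction PSF inside the fit rather than deconvolving afterwards — amounts to a forward model whose synthesized map is convolved with the PSF before being compared to the data. A schematic 1-D sketch; the PSF, the identity "synthesis" step, and the merit function are placeholders, not the actual spectro-polarimetric synthesis:

```python
def convolve_same(signal, psf):
    """Same-size convolution with a normalized, odd-length PSF (edges clamped)."""
    r, n = len(psf) // 2, len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(psf):
            j = min(max(i + k - r, 0), n - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

def chi2(params, data, psf):
    """Spatially coupled merit function: synthesize a map from the model
    parameters, degrade it with the PSF, then compare to the observed data."""
    synth = list(params)                 # placeholder synthesis: identity model
    model = convolve_same(synth, psf)
    return sum((m - d) ** 2 for m, d in zip(model, data))

psf = [0.25, 0.5, 0.25]
truth = [1.0, 1.0, 4.0, 1.0, 1.0]        # unresolved bright feature
data = convolve_same(truth, psf)         # what the telescope records
print(chi2(truth, data, psf))            # 0.0: true parameters fit exactly
print(chi2(data, data, psf) > 0)         # True: the blurred map itself misfits
```

The second print shows why the coupling matters: the PSF-degraded map is not a fixed point of the forward model, so a conventional pixel-by-pixel inversion (which effectively fits the blurred map) cannot reach the diffraction-limited solution.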

Journal ArticleDOI
TL;DR: A spatially weighted TV image SR algorithm is proposed, in which the spatial information distributed in different image regions is added to constrain the SR process, and a newly proposed and effective spatial information indicator called difference curvature is used to identify the spatial property of each pixel.
Abstract: Total variation (TV) has been used as a popular and effective image prior model in regularization-based image processing fields, such as denoising, deblurring, super-resolution (SR), and others, because of its ability to preserve edges. However, as the TV model favors a piecewise constant solution, its results in the flat regions of the image are poor, and it cannot automatically balance the processing strength between image regions with different spatial properties. In this paper, we propose a spatially weighted TV image SR algorithm, in which the spatial information distributed in different image regions is added to constrain the SR process. A newly proposed and effective spatial information indicator called difference curvature is used to identify the spatial property of each pixel, and a weighting parameter determined by the difference curvature information is added to control the strength of the TV regularization at each pixel. Meanwhile, a majorization-minimization algorithm is used to optimize the proposed spatially weighted TV SR model. Finally, extensive experiments on simulated and real data show that the proposed spatially weighted TV SR algorithm not only efficiently reduces the “artifacts” produced by a TV model in flat regions of the image, but also preserves edge information, and its reconstructions are less sensitive to the regularization parameters than those of the TV model, because of the spatial information constraint.
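Difference curvature compares the second derivative along the gradient direction (u_ηη) with the one across it (u_ξξ): an edge gives a large |u_ηη| but small |u_ξξ|, flat regions give small values of both, and isolated noise inflates both, so the difference stays small. A pure-Python sketch of one common definition, D = | |u_ηη| − |u_ξξ| |; the weighting form in tv_weight is an illustrative choice, not necessarily the paper's:

```python
def difference_curvature(u, i, j, eps=1e-12):
    """D = | |u_eta_eta| - |u_xi_xi| | at interior pixel (i, j),
    using central finite differences."""
    ux  = (u[i + 1][j] - u[i - 1][j]) / 2.0
    uy  = (u[i][j + 1] - u[i][j - 1]) / 2.0
    uxx = u[i + 1][j] - 2.0 * u[i][j] + u[i - 1][j]
    uyy = u[i][j + 1] - 2.0 * u[i][j] + u[i][j - 1]
    uxy = (u[i + 1][j + 1] - u[i + 1][j - 1]
           - u[i - 1][j + 1] + u[i - 1][j - 1]) / 4.0
    g2 = ux * ux + uy * uy
    if g2 < eps:                  # flat patch: both directional curvatures ~ 0
        return 0.0
    u_nn = (ux * ux * uxx + 2.0 * ux * uy * uxy + uy * uy * uyy) / g2
    u_tt = (uy * uy * uxx - 2.0 * ux * uy * uxy + ux * ux * uyy) / g2
    return abs(abs(u_nn) - abs(u_tt))

def tv_weight(u, i, j, alpha=1.0):
    """Spatial weight for the TV term: regularize flat regions more than edges."""
    return 1.0 / (1.0 + alpha * difference_curvature(u, i, j))

step = [[0.0] * 5 if x < 2 else [10.0] * 5 for x in range(5)]
flat = [[7.0] * 5 for _ in range(5)]
print(difference_curvature(step, 2, 2))  # 10.0 at the edge
print(difference_curvature(flat, 2, 2))  # 0.0 in the flat region
```

The weight is near 1 in flat regions (full TV smoothing, suppressing staircase artifacts there is impossible anyway, so smoothing is safe) and small at edges, which is the per-pixel balancing the abstract describes.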

Journal ArticleDOI
TL;DR: An adapted EPI sequence in conjunction with a combination of ZOOmed imaging and Partially Parallel Acquisition (ZOOPPA) is introduced and it is demonstrated that the method can produce high quality diffusion-weighted images with high spatial and angular resolution at 7 T.

Journal ArticleDOI
TL;DR: A new compressed sensing (CS)-based pan-sharpening method which views the image observation model as a measurement process in the CS theory and constructs a joint dictionary from LRM and HRP images in which the HRM is sparse is proposed.
Abstract: High-resolution multispectral (HRM) images are widely used in many remote sensing applications. Using the pan-sharpening technique, a low-resolution multispectral (LRM) image and a high-resolution panchromatic (HRP) image can be fused to an HRM image. This letter proposes a new compressed sensing (CS)-based pan-sharpening method which views the image observation model as a measurement process in the CS theory and constructs a joint dictionary from LRM and HRP images in which the HRM is sparse. The novel joint dictionary makes the method practical in fusing real remote sensing images, and a tradeoff parameter is added in the image observation model to improve the results. The proposed algorithm is tested on simulated and real IKONOS images, and it results in improved image quality compared to other well-known methods in terms of both objective measurements and visual evaluation.
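In the CS view, the observed low-resolution patch is a measurement of the unknown HRM patch, which is assumed sparse in the joint dictionary; recovery then becomes sparse coding against the measured (degraded) atoms. A toy 1-sparse matching-pursuit sketch — the pair-averaging "measurement" operator and the tiny hand-built dictionary are illustrative stand-ins, not the paper's joint LRM/HRP dictionary or IKONOS setup:

```python
def measure(x):
    """Toy observation model: average adjacent pairs, a stand-in for the
    blur-and-downsample mapping an HRM patch to its LRM observation."""
    return [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def recover_1sparse(y, dictionary):
    """One matching-pursuit step: pick the measured atom most correlated
    with the observation, then least-squares fit its coefficient."""
    best, best_score = None, -1.0
    for atom in dictionary:
        m = measure(atom)
        score = abs(dot(y, m)) / (dot(m, m) ** 0.5)
        if score > best_score:
            best, best_score = atom, score
    m = measure(best)
    coef = dot(y, m) / dot(m, m)
    return [coef * v for v in best]

D = [[1.0, 1.0, 0.0, 0.0],       # dictionary of high-resolution patches
     [0.0, 0.0, 1.0, 1.0],
     [1.0, 1.0, 1.0, 1.0]]
hrm = D[0]                        # unknown HRM patch, 1-sparse in D
y = measure(hrm)                  # what the sensor actually observes
print(recover_1sparse(y, D))      # [1.0, 1.0, 0.0, 0.0]: HRM patch recovered
```

The point the sketch makes is that the sparse code is found in the measurement domain but applied to the high-resolution atoms, which is how the fused HRM image can carry detail absent from the LRM observation.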