
Showing papers on "Image resolution published in 2007"


Proceedings ArticleDOI
29 Jul 2007
TL;DR: A simple modification to a conventional camera is proposed: inserting a patterned occluder within the aperture of the camera lens creates a coded aperture, and a criterion for depth discriminability is introduced and used to design the preferred aperture pattern.
Abstract: A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.

1,489 citations


Journal ArticleDOI
TL;DR: Enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies; clinical results further illustrate the capabilities of the algorithm on real patient data.
Abstract: Multislice helical computed tomography scanning offers the advantages of faster acquisition and wide organ coverage for routine clinical diagnostic purposes. However, image reconstruction is faced with the challenges of three-dimensional cone-beam geometry, data completeness issues, and low dosage. Of all available reconstruction methods, statistical iterative reconstruction (IR) techniques appear particularly promising since they provide the flexibility of accurate physical noise modeling and geometric system description. In this paper, we present the application of Bayesian iterative algorithms to real 3D multislice helical data to demonstrate significant image quality improvement over conventional techniques. We also introduce a novel prior distribution designed to provide flexibility in its parameters to fine-tune image quality. Specifically, enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies. Clinical results also illustrate the capabilities of the algorithm on real patient data. Although computational load remains a significant challenge for practical development, superior image quality combined with advancements in computing technology make IR techniques a legitimate candidate for future clinical applications.

987 citations


Proceedings ArticleDOI
29 Jul 2007
TL;DR: This paper shows how to produce a high quality image that cannot be obtained by simply denoising the noisy image or deblurring the blurred image alone, by combining information extracted from both blurred and noisy images.
Abstract: Taking satisfactory photos under dim lighting conditions using a hand-held camera is challenging. If the camera is set to a long exposure time, the image is blurred due to camera shake. On the other hand, the image is dark and noisy if it is taken with a short exposure time but with a high camera gain. By combining information extracted from both blurred and noisy images, however, we show in this paper how to produce a high quality image that cannot be obtained by simply denoising the noisy image, or deblurring the blurred image alone. Our approach is image deblurring with the help of the noisy image. First, both images are used to estimate an accurate blur kernel, which otherwise is difficult to obtain from a single blurred image. Second, and again using both images, a residual deconvolution is proposed to significantly reduce ringing artifacts inherent to image deconvolution. Third, the remaining ringing artifacts in smooth image regions are further suppressed by a gain-controlled deconvolution process. We demonstrate the effectiveness of our approach using a number of indoor and outdoor images taken by off-the-shelf hand-held cameras in poor lighting environments.

929 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: The proposed region-based active contour model can be used to segment images with intensity inhomogeneity, which overcomes the limitation of piecewise constant models and has promising application to image denoising.
Abstract: Local image information is crucial for accurate segmentation of images with intensity inhomogeneity. However, image information in local region is not embedded in popular region-based active contour models, such as the piecewise constant models. In this paper, we propose a region-based active contour model that is able to utilize image information in local regions. The major contribution of this paper is the introduction of a local binary fitting energy with a kernel function, which enables the extraction of accurate local image information. Therefore, our model can be used to segment images with intensity inhomogeneity, which overcomes the limitation of piecewise constant models. Comparisons with other major region-based models, such as the piece-wise smooth model, show the advantages of our method in terms of computational efficiency and accuracy. In addition, the proposed method has promising application to image denoising.

891 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: A new post-processing step is presented to enhance the resolution of range images: using one or two registered and potentially high-resolution color images as reference, the input low-resolution range image is iteratively refined in terms of both its spatial resolution and depth precision.
Abstract: We present a new post-processing step to enhance the resolution of range images. Using one or two registered and potentially high-resolution color images as reference, we iteratively refine the input low-resolution range image, in terms of both its spatial resolution and depth precision. Evaluation using the Middlebury benchmark shows across-the-board improvement for sub-pixel accuracy. We also demonstrate its effectiveness for spatial resolution enhancement up to 100 times with a single reference image.

834 citations


Journal ArticleDOI
05 Jul 2007-Neuron
TL;DR: A new imaging method, called array tomography, is described, which combines and extends superlative features of modern optical fluorescence and electron microscopy methods and can reveal important but previously unseen features of brain molecular architecture.

703 citations


Journal ArticleDOI
TL;DR: An introduction to wavelet transform theory and an overview of image fusion technique are given, and the results from a number of wavelet-based image fusion schemes are compared.
Abstract: Image fusion involves merging two or more images in such a way as to retain the most desirable characteristics of each. When a panchromatic image is fused with multispectral imagery, the desired result is an image with the spatial resolution and quality of the panchromatic imagery and the spectral resolution and quality of the multispectral imagery. Standard image fusion methods are often successful at injecting spatial detail into the multispectral imagery but distort the colour information in the process. Over the past decade, a significant amount of research has been conducted concerning the application of wavelet transforms in image fusion. In this paper, an introduction to wavelet transform theory and an overview of image fusion technique are given, and the results from a number of wavelet-based image fusion schemes are compared. It has been found that, in general, wavelet-based schemes perform better than standard schemes, particularly in terms of minimizing colour distortion. Schemes that combine standard methods with wavelet transforms produce superior results than either standard methods or simple wavelet-based methods alone. The results from wavelet-based methods can also be improved by applying more sophisticated models for injecting detail information; however, these schemes often have greater set-up requirements.

522 citations


Proceedings ArticleDOI
29 Jul 2007
TL;DR: A new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts, based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images.
Abstract: In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed.
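The intensity-conservation constraint described above (the output must reproduce the input when downsampled) can be illustrated as a simple projection step. The sketch below is a hypothetical helper for block-average downsampling, not the paper's constrained optimizer:

```python
def project_to_lr(hr, lr, s):
    """Enforce that block-averaging hr by factor s reproduces lr.

    hr: 2D list of floats (H*s x W*s), lr: 2D list (H x W), s: int scale.
    Adds a constant offset to each s x s block so its mean equals the
    corresponding low-resolution pixel (an illustrative projection onto
    the intensity-conservation constraint set).
    """
    H, W = len(lr), len(lr[0])
    out = [row[:] for row in hr]
    for i in range(H):
        for j in range(W):
            block = [hr[i*s + di][j*s + dj]
                     for di in range(s) for dj in range(s)]
            offset = lr[i][j] - sum(block) / (s * s)
            for di in range(s):
                for dj in range(s):
                    out[i*s + di][j*s + dj] += offset
    return out
```

After this projection, downsampling the output exactly recovers the low-resolution input, which is the constraint the paper imposes alongside its edge-dependency prior.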

480 citations


Journal ArticleDOI
TL;DR: This paper studies face hallucination, or synthesizing a high-resolution face image from an input low-resolution image, with the help of a large collection of other high-resolution face images to generate photorealistic face images.
Abstract: In this paper, we study face hallucination, or synthesizing a high-resolution face image from an input low-resolution image, with the help of a large collection of other high-resolution face images. Our theoretical contribution is a two-step statistical modeling approach that integrates both a global parametric model and a local nonparametric model. At the first step, we derive a global linear model to learn the relationship between the high-resolution face images and their smoothed and down-sampled lower resolution ones. At the second step, we model the residue between an original high-resolution image and the reconstructed high-resolution image after applying the learned linear model by a patch-based non-parametric Markov network to capture the high-frequency content. By integrating both global and local models, we can generate photorealistic face images. A practical contribution is a robust warping algorithm to align the low-resolution face images to obtain good hallucination results. The effectiveness of our approach is demonstrated by extensive experiments generating high-quality hallucinated face images from low-resolution input with no manual alignment.

450 citations


Proceedings ArticleDOI
29 Oct 2007
TL;DR: This work proposes a technique for fusing a bracketed exposure sequence into a high quality image, without converting to HDR first, which avoids camera response curve calibration and is computationally efficient.
Abstract: We propose a technique for fusing a bracketed exposure sequence into a high quality image, without converting to HDR first. Skipping the physically-based HDR assembly step simplifies the acquisition pipeline. This avoids camera response curve calibration and is computationally efficient. It also allows for including flash images in the sequence. Our technique blends multiple exposures, guided by simple quality measures like saturation and contrast. This is done in a multiresolution fashion to account for the brightness variation in the sequence. The resulting image quality is comparable to existing tone mapping operators.
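As a rough illustration of quality-guided blending, the following single-scale sketch weights each pixel by a contrast measure and a well-exposedness measure (the paper's method also uses a saturation measure and blends in a multiresolution pyramid; all names and parameter values here are illustrative assumptions):

```python
import math

def fuse_exposures(images, eps=1e-12):
    """Blend a bracketed sequence by per-pixel quality weights.

    images: list of 2D grayscale lists with values in [0, 1].
    Weight = local contrast (absolute Laplacian) times
    well-exposedness (closeness to mid-grey).  A single-scale
    sketch; the real method blends in a multiresolution pyramid.
    """
    H, W = len(images[0]), len(images[0][0])

    def weight(img, i, j):
        v = img[i][j]
        # well-exposedness: Gaussian centred at mid-grey 0.5
        wexp = math.exp(-((v - 0.5) ** 2) / (2 * 0.2 ** 2))
        # contrast: absolute discrete Laplacian (zero at borders)
        if 0 < i < H - 1 and 0 < j < W - 1:
            lap = abs(img[i-1][j] + img[i+1][j]
                      + img[i][j-1] + img[i][j+1] - 4 * v)
        else:
            lap = 0.0
        return (lap + eps) * wexp

    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            ws = [weight(img, i, j) for img in images]
            total = sum(ws)
            out[i][j] = sum(w * img[i][j]
                            for w, img in zip(ws, images)) / total
    return out
```

For a dark exposure and a well-exposed one, the well-exposedness term pulls the fused result toward the mid-grey exposure, which is the intended behaviour of the quality measures.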

447 citations


Journal ArticleDOI
L Goldman1
TL;DR: The most commonly used dose descriptor is CT dose index, which represents the dose to a location in a scanned volume from a complete series of slices, and this value is often displayed on the operator's console.
Abstract: This article discusses CT radiation dose, the measurement of CT dose, and CT image quality. The most commonly used dose descriptor is CT dose index, which represents the dose to a location (e.g., depth) in a scanned volume from a complete series of slices. A weighted average of the CT dose index measured at the center and periphery of dose phantoms provides a convenient single-number estimate of patient dose for a procedure, and this value (or a related indicator that includes the scanned length) is often displayed on the operator's console. CT image quality, as in most imaging, is described in terms of contrast, spatial resolution, image noise, and artifacts. A strength of CT is its ability to visualize structures of low contrast in a subject, a task that is limited primarily by noise and is therefore closely associated with radiation dose: The higher the dose contributing to the image, the less apparent is image noise and the easier it is to perceive low-contrast structures. Spatial resolution is ultimately limited by sampling, but both image noise and resolution are strongly affected by the reconstruction filter. As a result, diagnostically acceptable image quality at acceptable doses of radiation requires appropriately designed clinical protocols, including appropriate kilovolt peaks, amperages, slice thicknesses, and reconstruction filters.
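As a concrete example of the weighted-average dose descriptor mentioned above, the standard weighted CT dose index combines a centre and a periphery phantom measurement; the 1/3–2/3 weighting and the pitch-normalized volume CTDI below are the standard definitions, stated here as background rather than quoted from this article:

```python
def ctdi_weighted(center, periphery):
    """Weighted CT dose index (mGy): one-third of the centre
    measurement plus two-thirds of the periphery average."""
    return center / 3.0 + 2.0 * periphery / 3.0

def ctdi_vol(ctdi_w, pitch):
    """Volume CTDI for helical scanning: weighted CTDI divided by
    the helical pitch (scanned length per rotation / beam width)."""
    return ctdi_w / pitch
```

For example, centre and periphery readings of 12 and 15 mGy give a weighted CTDI of 14 mGy, and a pitch of 1.4 reduces the volume CTDI to 10 mGy.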

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an approach that combines techniques of photoelectron emission microscopy and attosecond streaking spectroscopy to provide direct, non-invasive access to the nanoplasmonic collective dynamics.
Abstract: We propose an approach that will provide direct, non-invasive access to the nanoplasmonic collective dynamics, with nanometer-scale spatial resolution and ~100 attoseconds temporal resolution. It combines techniques of photoelectron emission microscopy and attosecond streaking spectroscopy.

Patent
19 Jun 2007
TL;DR: In this paper, fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions, and the resolution is adjusted for sub-sampling a subsequent acquired image.
Abstract: An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for at least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image.
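The integral image the patent's pipeline computes is a standard summed-area table, which lets any rectangular pixel sum be read off in four lookups; a minimal sketch:

```python
def integral_image(img):
    """Summed-area table: ii[i][j] = sum of img[0..i-1][0..j-1].

    Padded with a zero row and column so no boundary cases are
    needed when querying box sums.
    """
    H, W = len(img), len(img[0])
    ii = [[0] * (W + 1) for _ in range(H + 1)]
    for i in range(H):
        row = 0
        for j in range(W):
            row += img[i][j]
            ii[i + 1][j + 1] = ii[i][j + 1] + row
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top..bottom-1][left..right-1] via four lookups,
    which is what makes fixed-size sliding-window detection cheap."""
    return ii[bottom][right] - ii[top][right] \
        - ii[bottom][left] + ii[top][left]
```

This constant-time box sum is the reason fixed size face detection can be run densely over the sub-sampled image.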

Journal ArticleDOI
TL;DR: Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images, and it is shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio.
Abstract: We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Renyi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio.

Proceedings ArticleDOI
Shengyang Dai1, Mei Han, Wei Xu, Ying Wu1, Yihong Gong 
17 Jun 2007
TL;DR: A novel combination of the soft edge smoothness prior and the alpha matting technique is proposed for color image super resolution; by normalizing edge segments with their alpha channel description, edges of different contrast and scale receive a unified treatment.
Abstract: Effective image prior is necessary for image super resolution, due to its severely under-determined nature. Although the edge smoothness prior can be effective, it is generally difficult to have analytical forms to evaluate the edge smoothness, especially for soft edges that exhibit gradual intensity transitions. This paper finds the connection between the soft edge smoothness and a soft cut metric on an image grid by generalizing the Geocuts method (Y. Boykov and V. Kolmogorov, 2003), and proves that the soft edge smoothness measure approximates the average length of all level lines in an intensity image. This new finding not only leads to an analytical characterization of the soft edge smoothness prior, but also gives an intuitive geometric explanation. Regularizing the super resolution problem by this new form of prior can simultaneously minimize the length of all level lines, thus resulting in visually appealing results. In addition, this paper presents a novel combination of this soft edge smoothness prior and the alpha matting technique for color image super resolution, by normalizing edge segments with their alpha channel description, to achieve a unified treatment of edges with different contrast and scale.

Journal ArticleDOI
TL;DR: A computationally simple super-resolution algorithm using a type of adaptive Wiener filter that produces an improved resolution image from a sequence of low-resolution video frames with overlapping field of view and lends itself to parallel implementation.
Abstract: A computationally simple super-resolution algorithm using a type of adaptive Wiener filter is proposed. The algorithm produces an improved resolution image from a sequence of low-resolution (LR) video frames with overlapping field of view. The algorithm uses subpixel registration to position each LR pixel value on a common spatial grid that is referenced to the average position of the input frames. The positions of the LR pixels are not quantized to a finite grid as with some previous techniques. The output high-resolution (HR) pixels are obtained using a weighted sum of LR pixels in a local moving window. Using a statistical model, the weights for each HR pixel are designed to minimize the mean squared error and they depend on the relative positions of the surrounding LR pixels. Thus, these weights adapt spatially and temporally to changing distributions of LR pixels due to varying motion. Both a global and spatially varying statistical model are considered here. Since the weights adapt to the distribution of LR pixels, the algorithm is quite robust and will not become unstable when an unfavorable distribution of LR pixels is observed. For translational motion, the algorithm has a low computational complexity and may be readily suitable for real-time and/or near real-time processing applications. With other motion models, the computational complexity goes up significantly. However, regardless of the motion model, the algorithm lends itself to parallel implementation. The efficacy of the proposed algorithm is demonstrated here in a number of experimental results using simulated and real video sequences. A computational analysis is also presented.
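The MMSE weight design can be sketched for a 1-D case: under an assumed parametric autocorrelation model r(d) = ρ^|d| (illustrative only; the paper derives its correlations from its own statistical image model), the weights for one HR pixel solve the normal equations R w = p:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def wiener_weights(lr_pos, hr_pos, rho=0.75):
    """MMSE weights for one HR pixel from nearby LR sample positions.

    R[i][j] = autocorrelation between LR samples i and j;
    p[i] = cross-correlation between sample i and the desired HR
    position.  Both use the assumed model r(d) = rho**|d|.
    """
    n = len(lr_pos)
    R = [[rho ** abs(lr_pos[i] - lr_pos[j]) for j in range(n)]
         for i in range(n)]
    p = [rho ** abs(lr_pos[i] - hr_pos) for i in range(n)]
    return solve(R, p)
```

Note how the weights depend only on relative sample positions, which is why they adapt automatically to the changing LR pixel distributions produced by varying motion.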

Journal ArticleDOI
TL;DR: A real-time fire-detector that combines foreground object information with color pixel statistics of fire and the use of a generic statistical model for refined fire-pixel classification is proposed.

Journal ArticleDOI
TL;DR: This paper presents a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects, built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution together.
Abstract: Super resolution image reconstruction allows the recovery of a high-resolution (HR) image from several low-resolution images that are noisy, blurred, and downsampled. In this paper, we present a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects. This formulation is built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution together. A cyclic coordinate descent optimization procedure is used to solve the MAP formulation, in which the motion fields, segmentation fields, and HR images are found in an alternating manner, each estimated while the other two are held fixed. Specifically, gradient-based methods are employed to solve for the HR image and motion fields, and an iterated conditional mode optimization method to obtain the segmentation fields. The proposed algorithm has been tested using a synthetic image sequence, the "Mobile and Calendar" sequence, and the original "Motorcycle and Car" sequence. The experiment results and error analyses verify the efficacy of this algorithm.

Proceedings ArticleDOI
17 Jun 2007
TL;DR: Unlike previous approaches to the same problem, intensity blending and image resampling are avoided at all stages of the process, which ensures that the resolution of the produced texture is essentially the same as that of the original views.
Abstract: Image-based object modeling has emerged as an important computer vision application. Typically, the process starts with the acquisition of the image views of an object. These views are registered within the global coordinate system using structure-and-motion techniques, while on the next step the geometric shape of an object is recovered using stereo and/or silhouette cues. This paper considers the final step, which creates the texture map for the recovered geometry model. The approach proposed in the paper naturally starts by backprojecting original views onto the obtained surface. A texture is then mosaiced from these back projections, whereas the quality of the mosaic is maximized within the process of Markov random field energy optimization. Finally, the residual seams between the mosaic components are removed via a seam levelling procedure, which is similar to gradient-domain stitching techniques recently proposed for image editing. Unlike previous approaches to the same problem, intensity blending as well as image resampling are avoided at all stages of the process, which ensures that the resolution of the produced texture is essentially the same as that of the original views. Importantly, due to restriction to non-greedy energy optimization techniques, good results are produced even in the presence of significant errors in the image registration and geometric estimation steps.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the effects of scale (both spatial and spectral resolution) when searching for a relation between spectral and spatial (related to plant species richness) heterogeneity, using satellite data with different spatial and spectral resolutions.

Patent
12 Jul 2007
TL;DR: In this paper, a method of producing a digital image with improved resolution during digital zooming, including simultaneously capturing a first low-resolution digital image of a scene and a second higher resolution digital image, was proposed.
Abstract: A method of producing a digital image with improved resolution during digital zooming, including simultaneously capturing a first low-resolution digital image of a scene and a second higher-resolution digital image of a portion of substantially the same scene. A composite image is then formed by combining the first low-resolution digital image and a corresponding portion of the higher-resolution digital image. Digital zooming of the composite image produces a zoomed image with high resolution throughout the zoom range and improved image quality.

Proceedings ArticleDOI
17 Jun 2007
TL;DR: Using a slight variant to PCA, this paper can decompose all cameras into comparable components and annotate images with respect to surface orientation, weather, and seasonal change.
Abstract: This paper details an empirical study of large image sets taken by static cameras. These images have consistent correlations over the entire image and over time scales of days to months. Simple second-order statistics of such image sets show vastly more structure than exists in generic natural images or video from moving cameras. Using a slight variant to PCA, we can decompose all cameras into comparable components and annotate images with respect to surface orientation, weather, and seasonal change. Experiments are based on a data set from 538 cameras across the United States which have collected more than 17 million images over the last 6 months.

Journal ArticleDOI
TL;DR: In this paper, a small radio-controlled motorized vehicle flying at low altitude was used to study both channel water depth and gravel bar geometry using very high resolution (VHR) images.
Abstract: The increasing availability of aerial photography and satellite imagery offers new possibilities for characterizing river morphology. The precision of new very high resolution (VHR) images allows smaller scale objects within river corridors to be studied. High survey frequencies provide increased opportunities for the monitoring of river restoration. Following this evolution in platform technology, a small radio-controlled motorized vehicle flying at low altitude was used to study both channel water depth and gravel bar geometry. The VHR imagery provided by this equipment allowed both channel bathymetry and a high accuracy photogrammetric digital elevation model (DEM) to be realized. Using case studies from the Ain and the Drome Rivers in France, the accuracy of the results is presented and the various challenges associated with the new platform are discussed. One significant issue is that due to the low elevation of the survey the coverage of a target area is usually based on several photographs, which leads to variations in illumination conditions that stem from atmospheric changes. The images were processed to minimize this source of error but a number of issues have yet to be resolved. Bathymetric models with R² values between 0.59 and 0.90 were created in spite of the lack of channel bed homogeneity at the various sites. The gravel bar was also effectively mapped, and the photogrammetrically predicted DEM provided a 5-10 cm pixel resolution with a vertical precision between 2 and 40 cm according to the position within the image. This paper shows that, despite unstable image acquisition, unmanned radio-controlled platforms provide significant advantages for the study of river processes, offering a flexible very high resolution data source for both channel bathymetry and gravel bar topography. Copyright ©2007 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: It could be shown that the imaging system in its present configuration is capable of producing three-dimensional images of objects with an overall size in the range of several millimeters to centimeters, and strategies are proposed for how the technique can be scaled for imaging of smaller objects with higher resolution.
Abstract: A three-dimensional photoacoustic imaging method is presented that uses a Mach-Zehnder interferometer for measurement of acoustic waves generated in an object by irradiation with short laser pulses. The signals acquired with the interferometer correspond to line integrals over the acoustic wave field. An algorithm for reconstruction of a three-dimensional image from such signals measured at multiple positions around the object is shown that is a combination of a frequency-domain technique and the inverse Radon transform. From images of a small source scanning across the interferometer beam it is estimated that the spatial resolution of the imaging system is in the range of 100 to about 300 μm, depending on the interferometer beam width and the size of the aperture formed by the scan length divided by the source-detector distance. By taking an image of a phantom it could be shown that the imaging system in its present configuration is capable of producing three-dimensional images of objects with an overall size in the range of several millimeters to centimeters. Strategies are proposed for how the technique can be scaled for imaging of smaller objects with higher resolution.

Journal ArticleDOI
TL;DR: In this paper, a combination of drift distortion removal and spatial distortion removal is performed to correct Scanning Electron Microscope (SEM) images at both ×200 and ×10,000 magnification.
Abstract: A combination of drift distortion removal and spatial distortion removal is performed to correct Scanning Electron Microscope (SEM) images at both ×200 and ×10,000 magnification. Using multiple, time-spaced images and in-plane rigid body motions to extract the relative displacement field throughout the imaging process, results from numerical simulations clearly demonstrate that the correction procedures successfully remove both drift and spatial distortions with errors on the order of ±0.02 pixels. A series of 2D translation and tensile loading experiments are performed in an SEM for magnifications at ×200 and ×10,000, where both the drift and spatial distortion removal methods described above are applied to correct the digital images and improve the accuracy of measurements obtained using 2D-DIC. Results from translation and loading experiments indicate that (a) the fully corrected displacement components have nearly random variability with standard deviation of 0.02 pixels (≈25 nm at ×200 and ≈0.5 nm at ×10,000) in each displacement component and (b) the measured strain fields are unbiased and in excellent agreement with expected results, with a spatial resolution of 43 pixels (≈54 μm at ×200 and ≈1.1 μm at ×10,000) and a standard deviation on the order of 6 × 10⁻⁵ for each component.

Proceedings ArticleDOI
10 Sep 2007
TL;DR: This paper presents a BVH-based GPU ray tracer with a parallel packet traversal algorithm using a shared stack, and a fast, CPU-based BVH construction algorithm which very accurately approximates the surface area heuristic using streamed binning while still being one order of magnitude faster than previously published results.
Abstract: Recent GPU ray tracers can already achieve performance competitive to that of their CPU counterparts. Nevertheless, these systems cannot yet fully exploit the capabilities of modern GPUs and can only handle medium-sized, static scenes. In this paper we present a BVH-based GPU ray tracer with a parallel packet traversal algorithm using a shared stack. We also present a fast, CPU-based BVH construction algorithm which very accurately approximates the surface area heuristic using streamed binning while still being one order of magnitude faster than previously published results. Furthermore, using a BVH allows us to push the size limit of supported scenes on the GPU: We can now ray trace the 12.7 million triangle Power Plant at 1024 × 1024 image resolution at 3 fps, including shading and shadows.
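The streamed-binning idea can be sketched in one dimension (a toy reduction; the paper bins AABB centroids per axis, and the primitive positions below are made up): primitives are accumulated into a fixed number of bins in a single pass, then candidate split planes between bins are swept with a surface-area-heuristic cost.

```python
def binned_sah_split(xs, nbins=8):
    """Pick a split plane by a binned surface-area-heuristic sweep.
    1-D toy: 'surface area' degenerates to interval length.
    Returns (plane_coordinate, cost)."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / nbins or 1.0
    count = [0] * nbins
    bmin = [float('inf')] * nbins
    bmax = [float('-inf')] * nbins
    for x in xs:                                  # streamed binning pass
        b = min(int((x - lo) / width), nbins - 1)
        count[b] += 1
        bmin[b] = min(bmin[b], x)
        bmax[b] = max(bmax[b], x)
    best = (float('inf'), None)
    for i in range(nbins - 1):                    # sweep candidate planes
        nl = sum(count[:i + 1])
        nr = len(xs) - nl
        if nl == 0 or nr == 0:
            continue
        cost = (nl * (max(bmax[:i + 1]) - min(bmin[:i + 1]))
                + nr * (max(bmax[i + 1:]) - min(bmin[i + 1:])))
        if cost < best[0]:
            best = (cost, lo + (i + 1) * width)
    return best[1], best[0]

plane, cost = binned_sah_split([0.0, 0.5, 1.0, 10.0, 10.5, 11.0])
```

A production builder would replace the per-plane slicing with prefix/suffix sweeps of counts and bounds so the whole evaluation stays linear in the number of bins.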

Journal ArticleDOI
TL;DR: This paper proposes a novel image functional whose minimization produces a perceptually inspired color enhanced version of the original, and shows that a numerical implementation of the gradient descent technique applied to this energy functional coincides with the equation of automatic color enhancement (ACE), a particular perceptual-based model of color enhancement.
Abstract: In this paper, we present a discussion about perceptual-based color correction of digital images in the framework of variational techniques. We propose a novel image functional whose minimization produces a perceptually inspired color enhanced version of the original. The variational formulation permits a more flexible local control of contrast adjustment and attachment to data. We show that a numerical implementation of the gradient descent technique applied to this energy functional coincides with the equation of automatic color enhancement (ACE), a particular perceptual-based model of color enhancement. Moreover, we prove that a numerical approximation of the Euler-Lagrange equation reduces the computational complexity of ACE from O(N²) to O(N log N), where N is the total number of pixels in the image.
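The complexity reduction comes from expressing the pairwise interaction sums as convolutions, which the FFT evaluates in O(N log N). The 1-D sketch below (NumPy assumed; a single linear monomial stands in for the polynomial approximation of ACE's slope function, and the distance weight is an illustrative choice) checks that the direct O(N²) sum and the FFT route agree:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
I = rng.random(N)                                  # toy 1-D "image"

# direct O(N^2): r_i = sum_j w(|i-j|) * (I_i - I_j), one monomial of the
# polynomial approximation of the slope function (diagonal term is zero)
W = 1.0 / np.maximum(np.abs(np.arange(N)[:, None] - np.arange(N)[None, :]), 1)
direct = (W * (I[:, None] - I[None, :])).sum(axis=1)

# O(N log N): the same sum rewritten as two convolutions with the kernel
kernel = 1.0 / np.maximum(np.abs(np.arange(-N + 1, N)), 1)
L = 1
while L < 3 * N - 2:                               # zero-pad to avoid wrap-around
    L *= 2
K = np.fft.rfft(kernel, L)

def conv(f):
    """Linear convolution with the weight kernel via FFT, cropped to N samples."""
    return np.fft.irfft(np.fft.rfft(f, L) * K, L)[N - 1:2 * N - 1]

fast = I * conv(np.ones(N)) - conv(I)
```

The identity r_i = I_i · (w ∗ 1)_i − (w ∗ I)_i is what turns the all-pairs sum into FFT-sized work; higher-order monomials of the slope approximation factor the same way.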

Journal ArticleDOI
TL;DR: A learning-based method to super-resolve face images using a kernel principal component analysis-based prior model and demonstrates with experiments that including higher-order correlations results in significant improvements.
Abstract: We present a learning-based method to super-resolve face images using a kernel principal component analysis-based prior model. A prior probability is formulated based on the energy lying outside the span of principal components identified in a higher-dimensional feature space. This is used to regularize the reconstruction of the high-resolution image. We demonstrate with experiments that including higher-order correlations results in significant improvements.
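The "energy lying outside the span of principal components" can be illustrated with plain linear PCA (a simplified stand-in for the paper's kernel PCA in feature space; the training set and dimensions below are made up): the prior penalizes the squared residual of a candidate image after projection onto the learned subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy training set: 50 "face" samples lying in a 3-D subspace of R^16
basis = rng.standard_normal((3, 16))
X = rng.standard_normal((50, 3)) @ basis
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:3]                                  # principal components spanning the data

def prior_energy(x):
    """Energy outside the span of the principal components -- a linear
    stand-in for the kernel-PCA prior.  Zero iff x - mu lies in the span."""
    c = x - mu
    r = c - P.T @ (P @ c)                   # residual after projection
    return float(r @ r)

in_span = mu + rng.standard_normal(3) @ basis
e = rng.standard_normal(16)
e_perp = e - P.T @ (P @ e)                  # component orthogonal to the span
```

Minimizing a reconstruction error plus this energy pulls the super-resolved image toward the learned face subspace; the kernel version applies the same idea after a nonlinear feature map, which is what captures the higher-order correlations.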

Journal ArticleDOI
TL;DR: The algorithm can efficiently embed a large amount of data, up to 75% of the image size, with high output quality, and is compared with previous steganography algorithms such as S-Tools.
Abstract: This study deals with constructing and implementing a new algorithm for hiding a large amount of data (image, audio, text) in a color BMP image. We use adaptive image filtering and adaptive image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept that defines main cases, each with sub-cases, for every byte in a pixel. This concept is based on both visual and statistical criteria. Following the design steps, we identify 16 main cases with their sub-cases, covering all aspects of embedding the input data into a color bitmap image. Three layers of security are proposed to make it difficult to break the encryption of the input data and to confuse steganalysis. Results against statistical and visual attacks are discussed and compared with previous steganography algorithms such as S-Tools. We show that the algorithm can efficiently embed a large amount of data, up to 75% of the image size, with high output quality.
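The random-rather-than-sequential bit replacement can be sketched minimally as follows (a seeded shuffle stands in for the paper's case-based pixel selection; the function names and toy cover data are made up):

```python
import random

def embed(cover, payload, seed=1234):
    """Hide payload bits in the least significant bits of pseudo-randomly
    ordered channel bytes; the seed acts as a shared stego key."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "payload too large for cover"
    order = list(range(len(cover)))
    random.Random(seed).shuffle(order)          # random, not sequential, positions
    stego = bytearray(cover)
    for pos, bit in zip(order, bits):
        stego[pos] = (stego[pos] & 0xFE) | bit  # replace only the LSB
    return stego

def extract(stego, nbytes, seed=1234):
    """Recover nbytes of payload by replaying the same shuffle order."""
    order = list(range(len(stego)))
    random.Random(seed).shuffle(order)
    bits = [stego[pos] & 1 for pos in order[:nbytes * 8]]
    return bytes(sum(bits[8 * k + i] << i for i in range(8))
                 for k in range(nbytes))

cover = bytearray(range(256)) * 3               # stand-in for BMP pixel data
stego = embed(cover, b"secret")
```

Each embedded bit changes a channel byte by at most one intensity level, which is why LSB replacement is visually hard to notice; the paper's adaptive filtering and segmentation further restrict which pixels are eligible.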

Patent
13 Jul 2007
TL;DR: In this article, a system, method and apparatus for eliminating image tearing effects and other visual artifacts perceived when scanning moving subject matter with a scanned beam imaging device is presented, which uses a motion detection means in conjunction with an image processor.
Abstract: A system, method and apparatus for eliminating image tearing effects and other visual artifacts perceived when scanning moving subject matter with a scanned beam imaging device. The system, method and apparatus uses a motion detection means in conjunction with an image processor to alter the native image to one without image tearing or other visual artifacts. The image processor monitors the motion detection means and reduces the image resolution or translates portions of the imaged subject matter in response to the detected motion.