
Showing papers on "Image scaling published in 2014"


Journal ArticleDOI
TL;DR: An efficient image hashing with a ring partition and a nonnegative matrix factorization (NMF) is designed, which has both rotation robustness and good discriminative capability.
Abstract: This paper designs an efficient image hashing with a ring partition and a nonnegative matrix factorization (NMF), which has both rotation robustness and good discriminative capability. The key contribution is a novel construction of a rotation-invariant secondary image, which is used for the first time in image hashing and helps to make the image hash resistant to rotation. In addition, NMF coefficients are approximately linearly changed by content-preserving manipulations, so hash similarity can be measured with the correlation coefficient. We conduct experiments on 346 images to illustrate the efficiency of the proposed hashing. Our experiments show that the proposed hashing is robust against content-preserving operations, such as image rotation, JPEG compression, watermark embedding, Gaussian low-pass filtering, gamma correction, brightness adjustment, contrast adjustment, and image scaling. Receiver operating characteristics (ROC) curve comparisons are also conducted with state-of-the-art algorithms and demonstrate that the proposed hashing is much better than all of these algorithms in classification performance with respect to robustness and discrimination.
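
As a rough illustration of the pipeline (not the authors' implementation), the sketch below builds a rotation-invariant secondary matrix by grouping pixels into concentric rings, factorizes it with NMF, and compares two hashes with the correlation coefficient. The ring count, NMF rank, and the use of sorted ring values are illustrative assumptions.

```python
# Rough sketch of ring-partition + NMF image hashing (illustrative, not the paper's exact method).
import numpy as np
from sklearn.decomposition import NMF

def ring_partition_matrix(img, n_rings=16, samples_per_ring=64):
    """Group pixels into concentric rings (rotation-invariant) and stack them as columns."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0, min(cy, cx), n_rings + 1)    # keep only the inscribed disc
    cols = []
    for i in range(n_rings):
        vals = np.sort(img[(r >= edges[i]) & (r < edges[i + 1])])
        idx = np.linspace(0, len(vals) - 1, samples_per_ring).astype(int)
        cols.append(vals[idx])                           # fixed-length summary of each ring
    return np.column_stack(cols).astype(float)

def nmf_hash(img, rank=2, n_rings=16):
    V = ring_partition_matrix(img, n_rings)
    model = NMF(n_components=rank, init='nndsvda', max_iter=500)
    model.fit_transform(V)
    return model.components_.ravel()                     # NMF coefficients used as the hash

def hash_similarity(h1, h2):
    return np.corrcoef(h1, h2)[0, 1]                     # correlation coefficient as similarity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (128, 128)).astype(float)
    noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255)   # content-preserving change
    print(hash_similarity(nmf_hash(img), nmf_hash(noisy)))
```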

181 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel image interpolation method, which combines these two forces-nonlocal self-similarities and sparse representation modeling, and the proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.
Abstract: Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling-in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, leading often times to improved results. In this paper, we propose a novel image interpolation method, which combines these two forces—nonlocal self-similarities and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

118 citations


Journal ArticleDOI
TL;DR: Experimental results indicated that the proposed approach outperforms existing methods in terms of objective criteria and subjective perception, improving the image resolution.
Abstract: This letter addresses the problem of generating a super-resolution (SR) image from a single low-resolution (LR) input image in the wavelet domain. To achieve a sharper image, an intermediate stage for estimating the high-frequency (HF) subbands has been proposed. This stage includes an edge preservation procedure and mutual interpolation between the input LR image and the HF subband images, as performed via the discrete wavelet transform (DWT). Sparse mixing weights are calculated over blocks of coefficients in an image, which provides a sparse signal representation in the LR image. All of the subband images are used to generate the new high-resolution image using the inverse DWT. Experimental results indicated that the proposed approach outperforms existing methods in terms of objective criteria and subjective perception, improving the image resolution.
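
A common simplified baseline for this family of methods (on which the letter's edge preservation and sparse mixing weights build) interpolates the high-frequency subbands of the LR image's DWT and uses the LR image itself as the low-frequency band before the inverse DWT. The sketch below shows only that baseline, with a Haar (db1) wavelet chosen for convenience; it is not the letter's algorithm.

```python
# Simplified wavelet-domain 2x upscaling baseline (illustrative; the paper adds edge
# preservation and sparse mixing weights on top of a scheme of this kind).
import numpy as np
import pywt
from scipy.ndimage import zoom

def dwt_upscale(lr, wavelet='db1'):
    """Upscale a grayscale image by 2x in the DWT domain."""
    lr = lr.astype(float)
    LL, (LH, HL, HH) = pywt.dwt2(lr, wavelet)                      # half-size subbands of the LR image
    LH2, HL2, HH2 = (zoom(b, 2.0, order=3) for b in (LH, HL, HH))  # interpolate the HF subbands
    # Use the LR image itself (Haar LL gain is 2) as the low-frequency band of the HR image.
    hr = pywt.idwt2((lr * 2.0, (LH2, HL2, HH2)), wavelet)
    return np.clip(hr, 0, 255)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lr = rng.integers(0, 256, (64, 64)).astype(float)
    print(dwt_upscale(lr).shape)   # -> (128, 128)
```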

99 citations


Journal ArticleDOI
TL;DR: In the improved model, a regularized factor is introduced to adjust the patch priority function and a modified sum of squared differences (SSD) and normalized cross correlation are combined to search for the best matching patch.

75 citations


Journal ArticleDOI
TL;DR: The aim is to help automate this often tedious task by optimizing the compatibility of corresponding warped image neighborhoods using an adaptation of structural similarity and improving the morphs by optimizing quadratic motion paths and by seamlessly extending content beyond the image boundaries.
Abstract: The main challenge in achieving good image morphs is to create a map that aligns corresponding image elements. Our aim is to help automate this often tedious task. We compute the map by optimizing the compatibility of corresponding warped image neighborhoods using an adaptation of structural similarity. The optimization is regularized by a thin-plate spline and may be guided by a few user-drawn points. We parameterize the map over a halfway domain and show that this representation offers many benefits. The map is able to treat the image pair symmetrically, model simple occlusions continuously, span partially overlapping images, and define extrapolated correspondences. Moreover, it enables direct evaluation of the morph in a pixel shader without mesh rasterization. We improve the morphs by optimizing quadratic motion paths and by seamlessly extending content beyond the image boundaries. We parallelize the algorithm on a GPU to achieve a responsive interface and demonstrate challenging morphs obtained with little effort.

68 citations


Journal ArticleDOI
TL;DR: A fast image upsampling method within a two-scale framework to ensure the sharp construction of upsampled image for both large-scale edges and small-scale structures that outperforms current state-of-the-art approaches based on quantitative and qualitative evaluations, as well as perceptual evaluation by a user study.
Abstract: In this paper, we present a fast image upsampling method within a two-scale framework to ensure the sharp construction of upsampled image for both large-scale edges and small-scale structures. In our approach, the low-frequency image is recovered via a novel sharpness preserving interpolation technique based on a well-constructed displacement field, which is estimated by a cross-resolution sharpness preserving model. Within this model, the distances of pixels on edges are preserved, which enables the recovery of sharp edges in the high-resolution result. Likewise, local high-frequency structures are reconstructed via a sharpness preserving reconstruction algorithm. Extensive experiments show that our method outperforms current state-of-the-art approaches, based on quantitative and qualitative evaluations, as well as perceptual evaluation by a user study. Moreover, our approach is very fast so as to be practical for real applications.

62 citations


Journal ArticleDOI
TL;DR: This work introduces a new algorithm involving nonlocal image self-similarity in order to reduce interpolation artifacts when local geometry is ambiguous and introduces a clear and intuitive manner of balancing how much channel-correlation must be taken advantage of.
Abstract: Most common cameras use a CCD sensor device measuring a single color per pixel. The other two color values of each pixel must be interpolated from the neighboring pixels in the so-called demosaicking process. State-of-the-art demosaicking algorithms take advantage of inter-channel correlation, locally selecting the best interpolation direction. These methods give impressive results except when the local geometry cannot be inferred from neighboring pixels or channel correlation is low. In these cases, they create interpolation artifacts. We introduce a new algorithm involving non-local image self-similarity in order to reduce interpolation artifacts when local geometry is ambiguous. The proposed algorithm introduces a clear and intuitive manner of balancing how much channel correlation should be taken advantage of. Comparisons show that the proposed algorithm achieves state-of-the-art results on several image databases.

60 citations


Journal ArticleDOI
TL;DR: Corrective actions for image scaling are suggested to manufacturers and the quantitative imaging community after images generated by one of the scanners were found to have additional intensity scaling that was not accounted for by the majority of the tested quantitative image analysis software (SW) tools.

52 citations


Journal ArticleDOI
TL;DR: This paper first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term will be minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones.
Abstract: In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the interpolation problem then becomes one of effectively propagating the label information from known points to unknown ones. This process can be posed as Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term is minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency over all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function that combines the global loss of the locally linear regression, the squared error of the prediction bias on the available LR samples, and the manifold regularization term. As a convex optimization problem, it can be solved in closed form. Experimental results demonstrate that the proposed method achieves performance competitive with state-of-the-art image interpolation algorithms.

50 citations


Journal ArticleDOI
TL;DR: A novel color image demosaicking algorithm using a voting-based edge direction detection method and a directional weighted interpolation method that provides superior performance in terms of both objective and subjective image qualities is presented.
Abstract: In this paper, we present a novel color image demosaicking algorithm using a voting-based edge direction detection method and a directional weighted interpolation method. By introducing the voting strategy, the interpolation direction of the center missing color component can be determined accurately. Along the determined interpolation direction, the center missing color component is interpolated using the gradient weighted interpolation method by exploring the intra-channel gradient correlation of the neighboring pixels. As compared with the latest demosaicking algorithms, experiments show that the proposed algorithm provides superior performance in terms of both objective and subjective image qualities.

50 citations


Proceedings ArticleDOI
27 Jul 2014
TL;DR: This course will survey and compare scattered interpolation algorithms and describe their applications in computer graphics and some of the underlying mathematical theory and briefly mention numerical considerations.
Abstract: The goal of scattered data interpolation techniques is to construct a (typically smooth) function from a set of unorganized samples. These techniques have a wide range of applications in computer graphics and computer vision. For instance they can be used to model a surface from a set of sparse samples, to reconstruct a BRDF from a set of measurements, or to interpolate motion capture data. This course will survey and compare scattered interpolation algorithms and describe their applications in computer graphics. Although the course is focused on applying these techniques, we will introduce some of the underlying mathematical theory and briefly mention numerical considerations.
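
A minimal example of one of the surveyed techniques: thin-plate-spline radial basis function interpolation of unorganized 2-D samples, here using SciPy's RBFInterpolator and a synthetic test function purely for illustration.

```python
# Thin-plate-spline RBF interpolation of unorganized 2-D samples (SciPy used for illustration).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (200, 2))                        # scattered sample locations
vals = np.sin(4 * pts[:, 0]) * np.cos(3 * pts[:, 1])     # sampled function values

interp = RBFInterpolator(pts, vals, kernel='thin_plate_spline', smoothing=0.0)

# Evaluate the reconstructed smooth function on a regular grid.
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
recon = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(64, 64)
print(recon.shape, float(np.abs(recon - np.sin(4 * gx) * np.cos(3 * gy)).mean()))
```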

Journal ArticleDOI
TL;DR: The comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeter are provided, which is most pronounced in cases of low signal-to-noise ratio (SNR).
Abstract: Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which are most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
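
The statistical interpolation idea can be sketched with an ordinary (naive O(N³)) Gaussian process regressor: known pixels of one polarization channel are the training points, a white-noise term models the sensor noise estimate, and the missing pixels are predicted. The grid-exploiting O(N^(3/2)) solver that makes this tractable for full images is the paper's contribution and is not reproduced here; the kernel and noise level below are illustrative assumptions.

```python
# Naive O(N^3) sketch of GP interpolation of missing pixels with a sensor-noise term;
# the paper's fast exact grid-based solver is not reproduced here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_interpolate(img, known_mask, noise_std=2.0, length_scale=2.0):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    known = known_mask.ravel()
    kernel = RBF(length_scale=length_scale) + WhiteKernel(noise_level=noise_std ** 2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(coords[known], img.ravel()[known])
    out = img.astype(float).ravel()
    out[~known] = gp.predict(coords[~known])               # fill the missing pixels
    return out.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.fromfunction(lambda y, x: 100 + 50 * np.sin(x / 4.0) * np.cos(y / 5.0), (24, 24))
    mask = np.zeros(truth.shape, dtype=bool)
    mask[::2, ::2] = True                                   # samples of one polarization orientation
    estimate = gp_interpolate(truth + rng.normal(0, 2.0, truth.shape), mask)
    print(float(np.abs(estimate - truth)[~mask].mean()))
```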

Journal ArticleDOI
TL;DR: The present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information, and depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role.
Abstract: Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to investigate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
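
As a small illustration of why the interpolator choice matters, the toy below resamples a smooth synthetic volume forward and back with the three interpolator families compared in the study and reports the round-trip error; it is not the study's registration pipeline, and typically nearest neighbour shows the largest error and cubic B-splines the smallest.

```python
# Round-trip resampling error of a smooth synthetic volume for three interpolator orders.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
volume = zoom(rng.normal(size=(8, 8, 8)), 4, order=3)    # smooth 32x32x32 stand-in volume

for name, order in [("nearest neighbour", 0), ("tri-linear", 1), ("cubic B-spline", 3)]:
    resampled = zoom(zoom(volume, 1.5, order=order), 1 / 1.5, order=order)
    err = np.abs(resampled - volume).mean() / np.abs(volume).mean()
    print(f"{name:18s} round-trip interpolation error = {100 * err:.1f}%")
```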

Proceedings ArticleDOI
01 Oct 2014
TL;DR: This paper proposes a low energy HEVC sub-pixel (half-pixel and quarter-pixel) interpolation hardware, which uses Hcub multiplierless constant multiplication algorithm, which has up to 48% less energy consumption than original HEVCsub-pixel interpolationHardware.
Abstract: Sub-pixel interpolation is one of the most computationally intensive parts of the High Efficiency Video Coding (HEVC) video encoder and decoder. Therefore, in this paper, a low-energy HEVC sub-pixel (half-pixel and quarter-pixel) interpolation hardware, which uses the Hcub multiplierless constant multiplication algorithm, is proposed. The proposed HEVC sub-pixel interpolation hardware can, in the worst case, process 30 quad full HD (3840×2160) video frames per second. It has up to 48% less energy consumption than the original HEVC sub-pixel interpolation hardware.
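
For reference, the HEVC luma half-sample filter has the fixed coefficients {-1, 4, -11, 40, 40, -11, 4, -1}/64, so each tap can be realized with shifts and additions instead of multipliers, which is the principle behind multiplierless designs such as this one. The sketch below shows one straightforward shift-add decomposition; the actual Hcub algorithm searches for an optimal sharing of intermediate terms across taps, which this illustration does not attempt.

```python
# Shift-add evaluation of the HEVC luma half-sample filter {-1, 4, -11, 40, 40, -11, 4, -1}/64.
# Illustrative only: Hcub would share intermediate terms across taps more aggressively.
def mul40(x): return (x << 5) + (x << 3)           # 40x = 32x + 8x
def mul11(x): return (x << 3) + (x << 1) + x       # 11x =  8x + 2x + x
def mul4(x):  return x << 2                        #  4x

def half_pel(samples):
    """samples: the 8 integer pixels surrounding the half-sample position."""
    a, b, c, d, e, f, g, h = samples
    acc = -a + mul4(b) - mul11(c) + mul40(d) + mul40(e) - mul11(f) + mul4(g) - h
    return (acc + 32) >> 6                         # round and divide by 64

if __name__ == "__main__":
    print(half_pel([10, 12, 15, 20, 22, 18, 14, 11]))   # value between the two centre pixels
```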

Journal ArticleDOI
TL;DR: Experimental results show that the proposed data hiding method can embed a large amount of secret data while keeping visual quality better than previous works.
Abstract: In this paper we propose a data hiding method that utilizes image interpolation and an edge detection algorithm. The image interpolation algorithm enlarges a cover image before hiding secret data, in order to embed a large amount of secret data. The edge detection algorithm is applied to improve the quality of the stego-image. Experimental results show that the proposed method can embed a large amount of secret data while keeping the visual quality better than previous works. We demonstrate that the average capacity is 391,115 bits, and that the PSNR and quality index are 44.71 dB and 0.9568, respectively, for gray images when the threshold value is 4 and the number of embedding bits is 2.
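
The general flow of interpolation-based data hiding can be sketched as follows: the cover is enlarged by interpolation, and secret bits are embedded only into the interpolated pixels, with an edge map deciding how many bits a pixel can carry. Everything below (neighbour-mean enlargement, Sobel edge test, LSB substitution, the threshold and bit counts) is a generic stand-in rather than the paper's scheme.

```python
# Generic sketch of interpolation-then-embed data hiding (not the paper's exact scheme).
import numpy as np
from scipy.ndimage import sobel

def enlarge(cover):
    """Neighbour-mean enlargement: original pixels land on even coordinates."""
    h, w = cover.shape
    big = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    big[::2, ::2] = cover
    big[::2, 1::2] = (cover[:, :-1] + cover[:, 1:]) / 2.0        # horizontal means
    big[1::2, ::2] = (cover[:-1, :] + cover[1:, :]) / 2.0        # vertical means
    big[1::2, 1::2] = (cover[:-1, :-1] + cover[1:, 1:]) / 2.0    # diagonal means
    return np.rint(big).astype(np.uint8)

def embed(cover, bits, base_bits=2, edge_bits=3, threshold=50):
    stego = enlarge(cover)
    grad = np.hypot(sobel(stego.astype(float), 0), sobel(stego.astype(float), 1))
    interp = np.ones_like(stego, dtype=bool)
    interp[::2, ::2] = False                      # keep the original (anchor) pixels intact
    out, k = stego.copy(), 0
    for y, x in np.argwhere(interp):
        if k >= len(bits):
            break
        n = edge_bits if grad[y, x] > threshold else base_bits   # edge pixels carry more bits
        chunk = bits[k:k + n]
        k += len(chunk)
        val = int("".join(map(str, chunk)), 2)
        out[y, x] = (int(out[y, x]) & ~((1 << len(chunk)) - 1)) | val   # replace the LSBs
    return out, k                                  # stego image and number of embedded bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    secret = rng.integers(0, 2, 5000).tolist()
    stego, embedded = embed(cover, secret)
    print(stego.shape, embedded)
```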

Journal ArticleDOI
TL;DR: In this letter, multiple subpixel shifted images (MSIs) were utilized to increase the accuracy of subpixel mapping (SPM), based on the fast bilinear and bicubic interpolation.
Abstract: In this letter, multiple subpixel shifted images (MSIs) were utilized to increase the accuracy of subpixel mapping (SPM), based on the fast bilinear and bicubic interpolation. First, each coarse spatial resolution image of MSI is soft classified to obtain class fraction images. Using bilinear or bicubic interpolation, all fraction images of MSI are upsampled to the desired fine spatial resolution. The multiple fine spatial resolution images for each class are then integrated. Finally, the integrated fine spatial resolution images are used to allocate hard class labels to subpixels. Experiments on two remote sensing images showed that, with MSI, both bilinear and bicubic interpolation-based SPMs are more accurate. The new methods are fast and do not need any prior spatial structure information.

Proceedings ArticleDOI
02 Jul 2014
TL;DR: This paper proposes to select appropriate graph Fourier transforms, adaptive to the unique signal structures of local pixel patches, for expansion hole filling in the GFT domain, and shows that the algorithm can outperform the in-painting procedure employed in VSRS 3.5 by up to 4.57 dB.
Abstract: Given texture and depth maps of one or more reference viewpoint(s), depth-image-based rendering (DIBR) can synthesize a novel viewpoint image by mapping texture pixels from the reference to the virtual view using geometric information provided by the corresponding depth pixels. If the virtual view camera is located closer to the 3D scene than the reference view camera, objects close to the camera will increase in size in the virtual view, and DIBR's simple pixel-to-pixel mapping will result in expansion holes that require proper filling. Leveraging recent advances in graph signal processing (GSP), in this paper we propose to select appropriate graph Fourier transforms (GFTs), adaptive to the unique signal structures of the local pixel patches, for expansion hole filling. Our algorithm consists of two steps. First, using the structure tensor we compute an adaptive kernel centered at a target empty pixel to identify suitable neighboring pixels for construction of a sparse graph. Second, given the constructed graph with carefully tuned edge weights, to complete the target pixel we formulate an iterative quadratic programming problem (with a closed-form solution in each iteration) using a smoothness prior in the GFT domain. Experimental results show that our algorithm can outperform the in-painting procedure employed in VSRS 3.5 by up to 4.57 dB.
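
The core completion step can be illustrated in isolation: fix the known pixels and choose the unknown ones to minimize the graph-Laplacian quadratic form x^T L x, which has a closed-form solution. The sketch below uses a plain 4-connected grid with uniform edge weights; the paper instead builds the graph and its edge weights from structure tensors and iterates with a GFT-domain smoothness prior, none of which is reproduced here.

```python
# Core completion step: minimize x^T L x over the unknown pixels with the known pixels fixed.
# Uniform edge weights on a 4-connected grid; the paper derives weights from structure tensors.
import numpy as np
from scipy.sparse import lil_matrix, diags
from scipy.sparse.linalg import spsolve

def fill_holes(patch, known_mask):
    h, w = patch.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    W = lil_matrix((n, n))
    for dy, dx in [(0, 1), (1, 0)]:                        # horizontal and vertical neighbours
        a = idx[:h - dy, :w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        W[a, b] = 1.0
        W[b, a] = 1.0
    W = W.tocsr()
    L = (diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()   # graph Laplacian L = D - W
    k = np.flatnonzero(known_mask.ravel())
    u = np.flatnonzero(~known_mask.ravel())
    x_k = patch.ravel()[k].astype(float)
    # Setting the gradient to zero gives the linear system  L_uu x_u = -L_uk x_k.
    x_u = spsolve(L[u][:, u].tocsc(), -(L[u][:, k] @ x_k))
    out = patch.astype(float).ravel()
    out[u] = x_u
    return out.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = np.tile(np.linspace(0, 255, 16), (16, 1))      # smooth test patch
    known = rng.random(patch.shape) > 0.3                  # roughly 30% of the pixels are holes
    filled = fill_holes(np.where(known, patch, 0), known)
    print(float(np.abs(filled - patch)[~known].mean()))
```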

Patent
Pengju Ren1, Geng Liu1, Jiang Yu1, Hongbin Sun1, Yuehu Liu1, Nanning Zheng1 
29 May 2014
TL;DR: In this article, a parallel synchronous scaling engine for multi-view 3D display and a method thereof are provided, wherein, selection and combination calculation are provided to an interpolation pixel window, then interpolation calculation is provided to a combined pixel window of a combined view field, calculation results are directly displayed on a display terminal.
Abstract: A parallel synchronous scaling engine for multi-view 3D display and a method thereof are provided, wherein selection and combination calculations are applied to an interpolation pixel window, interpolation is then performed on the combined interpolation pixel window of the combined view field, and the calculation results are directly displayed on a display terminal. That is, whereas interpolation is conventionally performed before stereoscopic pixel rearrangement, the improved method screens and combines pixel points before the interpolation calculation. According to the present invention, computation and memory resources are greatly saved. The method is suitable for hardware implementation, supports various numbers of viewpoints and interpolation algorithms, and is compatible with multi-view 3D displays with integer and floating-point pixel arrangements, wherein the computation resources do not need to increase with the number of viewpoints.

Journal ArticleDOI
TL;DR: It is shown that the robust regression can find acceptably accurate inlier sets using a much less burdensome 1D LMS robust regression (or ‘mode-finder’) and that one can produce good quality appearance interpolants, plus accurate surface properties using PTM before the additional RBF stage, provided one increases the dimensionality beyond 6D and still uses robust regression.
Abstract: Polynomial texture mapping (PTM) uses simple polynomial regression to interpolate and re-light image sets taken from a fixed camera but under different illumination directions. PTM is an extension of the classical photometric stereo (PST), replacing the simple Lambertian model employed by the latter with a polynomial one. The advantage and hence wide use of PTM is that it provides some effectiveness in interpolating appearance including more complex phenomena such as interreflections, specularities and shadowing. In addition, PTM provides estimates of surface properties, i.e., chromaticity, albedo and surface normals. The most accurate model to date utilizes multivariate Least Median of Squares (LMS) robust regression to generate a basic matte model, followed by radial basis function (RBF) interpolation to give accurate interpolants of appearance. However, robust multivariate modelling is slow. Here we show that the robust regression can find acceptably accurate inlier sets using a much less burdensome 1D LMS robust regression (or ‘mode-finder’). We also show that one can produce good quality appearance interpolants, plus accurate surface properties using PTM before the additional RBF stage, provided one increases the dimensionality beyond 6D and still uses robust regression. Moreover, we model luminance and chromaticity separately, with dimensions 16 and 9 respectively. It is this separation of colour channels that allows us to maintain a relatively low dimensionality for the modelling. Another observation we show here is that in contrast to current thinking, using the original idea of polynomial terms in the lighting direction outperforms the use of hemispherical harmonics (HSH) for matte appearance modelling. For the RBF stage, we use Tikhonov regularization, which makes a substantial difference in performance. The radial functions used here are Gaussians; however, to date the Gaussian dispersion width and the value of the Tikhonov parameter have been fixed. Here we show that one can extend a theorem from graphics that generates a very fast error measure for an otherwise difficult leave-one-out error analysis. Using our extension of the theorem, we can optimize on both the Gaussian width and the Tikhonov parameter.
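
The classical PTM matte model referred to here fits, at every pixel, a six-term polynomial in the projected light direction (lu, lv): L ≈ a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5. The sketch below shows that baseline fit by ordinary least squares for a single pixel; the paper's robust LMS mode-finder, higher-dimensional models, separate luminance/chromaticity treatment and RBF stage are not reproduced.

```python
# Baseline PTM fit at one pixel: six-term polynomial in the light direction (lu, lv),
# solved by ordinary least squares (the paper uses robust LMS regression and more terms).
import numpy as np

def ptm_design(lu, lv):
    return np.column_stack([lu**2, lv**2, lu*lv, lu, lv, np.ones_like(lu)])

def fit_ptm(lu, lv, intensities):
    coeffs, *_ = np.linalg.lstsq(ptm_design(lu, lv), intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    return ptm_design(np.atleast_1d(lu), np.atleast_1d(lv)) @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lu, lv = rng.uniform(-0.7, 0.7, 50), rng.uniform(-0.7, 0.7, 50)   # 50 light directions
    true = np.array([0.2, -0.1, 0.05, 0.4, 0.3, 100.0])
    observed = ptm_design(lu, lv) @ true + rng.normal(0, 0.5, 50)     # observed pixel values
    c = fit_ptm(lu, lv, observed)
    print(np.round(c, 2), float(relight(c, 0.1, 0.2)[0]))             # re-lit pixel value
```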

Journal ArticleDOI
TL;DR: The complete integrated framework provided more satisfactory shape reconstructions than the sequential approach and was more satisfactory in cases of large gaps, due to the method taking into account the global shape of the object.
Abstract: We address the two inherently related problems of segmentation and interpolation of 3D and 4D sparse data and propose a new method to integrate these stages in a level set framework. The interpolation process uses segmentation information rather than pixel intensities for increased robustness and accuracy. The method supports any spatial configurations of sets of 2D slices having arbitrary positions and orientations. We achieve this by introducing a new level set scheme based on the interpolation of the level set function by radial basis functions. The proposed method is validated quantitatively and/or subjectively on artificial data and MRI and CT scans and is compared against the traditional sequential approach, which interpolates the images first, using a state-of-the-art image interpolation method, and then segments the interpolated volume in 3D or 4D. In our experiments, the proposed framework yielded similar segmentation results to the sequential approach but provided a more robust and accurate interpolation. In particular, the interpolation was more satisfactory in cases of large gaps, due to the method taking into account the global shape of the object, and it recovered better topologies at the extremities of the shapes where the objects disappear from the image slices. As a result, the complete integrated framework provided more satisfactory shape reconstructions than the sequential approach.

Book ChapterDOI
02 Sep 2014
TL;DR: A simple and effective framework for multi-view image sequence interpolation in space and time is proposed and two novel filtering approaches for outlier elimination and a robust approach for match extrapolations at the image boundaries are introduced.
Abstract: We propose a simple and effective framework for multi-view image sequence interpolation in space and time. For spatial view point interpolation we present a robust feature-based matching algorithm that allows for wide-baseline camera configurations. To this end, we introduce two novel filtering approaches for outlier elimination and a robust approach for match extrapolations at the image boundaries. For small-baseline and temporal interpolations we rely on an established optical flow based approach. We perform a quantitative and qualitative evaluation of our framework and present applications and results. Our method has a low runtime and results can compete with state-of-the-art methods.

Patent
08 Jan 2014
TL;DR: In this paper, a margin-oriented self-adaptive image interpolation method and a VLSI implementation device are presented, in which the marginal information is obtained by comparing the gradient magnitude and a local selfadaptive threshold value, wherein the marginal direction is perpendicular to the gradient direction.
Abstract: The invention discloses a margin-oriented self-adaptive image interpolation method and a VLSI implementation device thereof. The method comprises the steps that the gradient magnitude and the gradient direction of a source image pixel are computed, and marginal information is obtained by comparing the gradient magnitude with a local self-adaptive threshold value, wherein the marginal direction is perpendicular to the gradient direction; the marginal direction is classified, filtering is conducted using the marginal information, and the image is divided into a regular marginal area and a non-marginal area; interpolation of the regular marginal area is conducted along the marginal direction, and an improved bicubic interpolation method, a slant bicubic interpolation method and a slant bilinear interpolation method based on local gradient information are adopted to conduct image interpolation according to the classification of the marginal information; image interpolation is conducted on the non-marginal area through the improved bicubic interpolation method based on the local gradient information. The VLSI implementation device comprises a marginal information extraction module, a self-adaptive interpolation module, an input line field synchronous control module and an after-scaling line field synchronous control module. The margin-oriented self-adaptive image interpolation method and its VLSI implementation device can effectively improve the quality of image interpolation under high-magnification scaling and lend themselves to integrated-circuit implementation.

Proceedings ArticleDOI
25 Oct 2014
TL;DR: This paper proposes a graphics processing unit (GPU) acceleration-based parallel bilinear interpolation algorithm, which mainly utilizes the independence among blocks in Wallis transformation and bilinear interpolation, well suited to the characteristics of the GPU parallel processing structure.
Abstract: The bilinear interpolation algorithm is broadly applied in digital image processing, but its calculation speed is very slow. In order to improve its performance, this paper proposes a graphics processing unit (GPU) acceleration-based parallel bilinear interpolation algorithm. It mainly utilizes the independence among blocks in Wallis transformation and bilinear interpolation, which is well suited to the characteristics of the GPU parallel processing structure. It maps the traditional serial bilinear interpolation algorithm to the CUDA parallel programming model and optimizes thread allocation, memory usage, hardware resource division, etc., to make full use of the GPU's computational capacity. The experimental results show that the parallel bilinear interpolation algorithm can greatly improve calculation speed as image resolution increases.
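
The per-pixel arithmetic being parallelized is the standard bilinear formula: each output pixel is a weighted average of its four nearest input pixels and is independent of all other output pixels, which is what makes a one-thread-per-pixel CUDA mapping natural. A serial NumPy reference (not the CUDA kernel) is sketched below.

```python
# Reference bilinear image resampling; every output pixel depends only on four input pixels,
# so the per-pixel computation maps directly onto one GPU thread per output pixel.
import numpy as np

def bilinear_resize(img, out_h, out_w):
    in_h, in_w = img.shape
    y = np.linspace(0, in_h - 1, out_h)[:, None]       # source coordinate of each output row
    x = np.linspace(0, in_w - 1, out_w)[None, :]        # ... and of each output column
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy, wx = y - y0, x - x0                              # fractional offsets
    top = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
    bot = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
    return (1 - wy) * top + wy * bot

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (480, 640)).astype(float)
    print(bilinear_resize(img, 1080, 1920).shape)
```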

Journal ArticleDOI
TL;DR: This paper presents an algorithm that blindly detects global rescaling operation and estimate the rescaling factor based on the autocovariance sequence of zero-crossings of second difference of the tampered image and shows the validity of the algorithm under different interpolation schemes.
Abstract: The availability of powerful image editing software and advances in digital cameras have given rise to large amounts of manipulated images without any traces of tampering, generating a great demand for automatic forgery detection algorithms that can determine their authenticity. When altering an image, for example by copy-paste or splicing to conceal traces of tampering, it is often necessary to resize the pasted portion of the image. The resampling operation is highly likely to disturb the underlying statistics of the pasted portion, which can be used to detect the forgery. In this paper, an algorithm is presented that blindly detects a global rescaling operation and estimates the rescaling factor based on the autocovariance sequence of zero-crossings of the second difference of the tampered image. Experimental results using the UCID and USC-SIPI databases show the validity of the algorithm under different interpolation schemes. The technique is robust and successfully detects the rescaling operation for images that have been subjected to various forms of attacks such as JPEG compression and arbitrary cropping. As expected, some degradation in detection accuracy is observed as the JPEG quality factor decreases.
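
The detection cue can be sketched directly: rescaling introduces periodic correlations that appear as peaks in the autocovariance of the zero-crossing sequence of the image's second difference. The code below only illustrates that signal chain on synthetic data; it is not the paper's full detector or its rescaling-factor estimator.

```python
# Minimal illustration of the detection cue: periodic structure in the autocovariance of
# zero-crossing counts of the second difference appears after global rescaling.
import numpy as np
from scipy.ndimage import zoom

def zero_crossing_signal(img):
    d2 = np.diff(img.astype(float), n=2, axis=1)               # second difference along rows
    zc = np.signbit(d2[:, :-1]) != np.signbit(d2[:, 1:])        # sign changes = zero crossings
    return zc.sum(axis=0).astype(float)                         # crossings per column position

def autocovariance(sig, max_lag=40):
    s = sig - sig.mean()
    return np.array([np.mean(s[:len(s) - k] * s[k:]) for k in range(1, max_lag + 1)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, (256, 256)).astype(float)
    rescaled = zoom(original, 1.25, order=1)                     # simulated global rescaling
    for name, im in [("original", original), ("rescaled", rescaled)]:
        ac = autocovariance(zero_crossing_signal(im))
        print(name, "strongest lag:", int(np.argmax(ac) + 1), "peak:", round(float(ac.max()), 2))
```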

Journal ArticleDOI
06 Jul 2014
TL;DR: The paper reviews these methods, with emphasis on their comparison and relationships, from the very first steps of transform image compression methods to adaptive and local adaptive filters for image restoration and up to “compressive sensing” methods that gained popularity in last few years.
Abstract: Transform image processing methods are methods that work in the domains of image transforms, such as the Discrete Fourier, Discrete Cosine, Wavelet, and similar transforms. They have proved to be very efficient in image compression, image restoration, image resampling, and geometrical transformations, and can be traced back to the early 1970s. The paper reviews these methods, with emphasis on their comparison and relationships, from the very first steps of transform image compression methods, through adaptive and locally adaptive filters for image restoration, and up to the "compressive sensing" methods that have gained popularity in the last few years. References are made both to the first publications of the corresponding results and to more recent and more easily available ones. The review has a tutorial character and purpose.

Journal ArticleDOI
TL;DR: A low-complexity color interpolation algorithm is proposed for the very-large-scale integration (VLSI) implementation in real-time applications that not only reduces gate counts or power consumption, but also improves the average color peak signal-to-noise ratio quality.
Abstract: In this paper, a low-complexity color interpolation algorithm is proposed for very-large-scale integration (VLSI) implementation in real-time applications. The proposed novel algorithm consists of an edge detector, an anisotropic weighting model, and a filter-based compensator. The anisotropic weighting model is designed to capture more information in the horizontal than in the vertical direction. The filter-based compensation methodology includes Laplacian and spatial sharpening filters, which are developed to improve the edge information and reduce the blurring effect. In addition, the hardware cost was successfully reduced by hardware sharing and reconfigurable design techniques. The VLSI architecture of the proposed design achieves 200 MHz with 5.2-K gate counts, and its core area is 64,236 μm², synthesized in a 0.18-μm CMOS process. Compared with previous low-complexity techniques, this design not only reduces gate count or power consumption by more than 8% or 91.7%, respectively, but also improves the average color peak signal-to-noise ratio by more than 1.6 dB.

Patent
24 Jan 2014
TL;DR: In this article, an image super-resolution reconstruction method is proposed, comprising the steps of: performing an edge detection on low-resolution images to obtain edge pixel frames, amplifying the edge pixels frames so that each amplified image is the double of the original edge pixel frame in size in both horizontal direction and vertical direction, without changing the detected edge pixel information, and compensating for the interpolated interpolation pixels according to different pixel edges to obtain a high resolution image.
Abstract: The present disclosure discloses an image super-resolution reconstruction method, comprising the steps of: performing edge detection on the low-resolution images to be processed to obtain edge pixel frames; amplifying the edge pixel frames so that each amplified image is twice the size of the original edge pixel frame in both the horizontal and vertical directions, without changing the detected edge pixel information; and compensating the interpolated pixels according to different pixel edges to obtain a high-resolution image. The method can ensure the definition and integrity of the edges and enhance the contrast without degrading image quality. Finally, the previously interpolated pixels are compensated according to optimized rules, during which the influences of edge pixels and surrounding pixels are comprehensively considered, so as to eliminate the sawtooth phenomenon in the output image.

Journal ArticleDOI
TL;DR: This study presents a method for resampling detection that is essential for constructing accurate targeted and blind steganalysis methods for heterogeneous images, raw single-sampled images, and images resampled at different scales.
Abstract: This study presents a method for resampling detection. By combining texture analysis with resampling detection, the task of resampling detection is considered as a texture classification problem. In other words, the influence of resampling operations on a raw single-sampled image is viewed as an alteration of the image texture in a fine scale. First, local linear transform is used to obtain textural detail sub-bands. A 36-D feature vector is then extracted from the normalized characteristic function moments of textural detail sub-bands to train a support vector machine classifier. Finally, experimental results are reported on three databases, with each having almost 10,000 images. Comparison with the previous study reveals that the proposed method is effective for resampling detection. In addition, extensive experiments on cover and stego bitmap images illustrate that the proposed method is essential for constructing accurate targeted and blind steganalysis methods for heterogeneous images, raw single-sampled images, and images resampled at different scales.

Proceedings ArticleDOI
06 Mar 2014
TL;DR: This work proposes a new image scaling algorithm combining Discrete Wavelet Transform based interpolation with bicubic interpolation, which achieves image quality more than 10 dB better than the existing bilinear interpolation method.
Abstract: Image scaling is an important technique used to scale down or scale up pictures or video frames to fit the application. This work proposes a new image scaling algorithm consisting of a Discrete Wavelet Transform (DWT) based interpolation and bicubic interpolation. To achieve higher visual quality, a simple Haar-wavelet-based DWT interpolation is first applied to the gray-scale values of the image, and then bicubic interpolation is performed. The DWT is based on subband coding, which divides the image into four frequency quadrants. To reduce artifacts, bicubic interpolation is performed on all the quadrants separately. This work achieves an image quality (PSNR) more than 10 dB higher than that of the existing bilinear interpolation method. The mean square error is lower and the average Peak Signal to Noise Ratio (PSNR) is higher with this method. Image artifacts such as blurring are greatly reduced in the proposed method; thus this approach is better than existing methods in visual quality. The simulation of the work is carried out in MATLAB R2013a.
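
A minimal version of the described pipeline, assuming a one-level Haar DWT, bicubic (spline order 3) interpolation applied to each frequency quadrant separately, and inverse-DWT reconstruction, is sketched below; the exact combination rule and the evaluation setup are illustrative, not the authors'.

```python
# Minimal sketch of the described pipeline: Haar DWT of the grayscale image, bicubic
# interpolation of each frequency quadrant, and inverse DWT for the scaled result.
import numpy as np
import pywt
from scipy.ndimage import zoom

def dwt_bicubic_scale(gray, factor=2.0):
    LL, (LH, HL, HH) = pywt.dwt2(gray.astype(float), 'haar')
    # Bicubic interpolation (spline order 3) applied to all four quadrants separately.
    LLs, LHs, HLs, HHs = (zoom(b, factor, order=3) for b in (LL, LH, HL, HH))
    return pywt.idwt2((LLs, (LHs, HLs, HHs)), 'haar')

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = zoom(rng.integers(0, 256, (32, 32)).astype(float), 4, order=3)  # smooth 128x128 "truth"
    lr = zoom(hr, 0.5, order=3)                                          # 64x64 input
    up = dwt_bicubic_scale(lr, 2.0)
    print(up.shape, round(psnr(np.clip(up, 0, 255), hr), 2))
```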

Journal ArticleDOI
TL;DR: The Gradient And Interpolation based deblender (GAIN) presented in this paper uses image intensity gradients and an image interpolation technique usually used to correct flawed terrestrial digital images.
Abstract: Deep optical images are often crowded with overlapping objects. This is especially true in the cores of galaxy clusters, where images of dozens of galaxies may lie atop one another. Accurate measurements of cluster properties require deblending algorithms designed to automatically extract a list of individual objects and decide what fraction of the light in each pixel comes from each object. We present new software called the Gradient And INterpolation based deblender (GAIN), a secondary deblender that improves the deblending of images of cluster cores. This software relies on image intensity gradients and an image interpolation technique usually used to correct flawed terrestrial digital images. We test this software on Dark Energy Survey coadd images. GAIN helps extract unbiased photometry measurements for blended sources. It also helps improve detection completeness while introducing only a modest number of spurious detections. For example, when applied to deep images simulated with a high level of deblending difficulty, this software improves detection completeness from 91% to 97% for sources above the 10σ limiting magnitude at 25.3 mag. We expect this software to be a useful tool for cluster population measurements.