
Showing papers on "Bicubic interpolation published in 2010"


Book ChapterDOI
24 Jun 2010
TL;DR: This paper deals with the single image scale-up problem using sparse-representation modeling, and assumes a local Sparse-Land model on image patches, serving as regularization, to recover an original image from its blurred and down-scaled noisy version.
Abstract: This paper deals with the single image scale-up problem using sparse-representation modeling. The goal is to recover an original image from its blurred and down-scaled noisy version. Since this problem is highly ill-posed, a prior is needed in order to regularize it. The literature offers various ways to address this problem, ranging from simple linear space-invariant interpolation schemes (e.g., bicubic interpolation), to spatially-adaptive and non-linear filters of various sorts. We build on a recently-proposed successful algorithm by Yang et al. [1,2], and similarly assume a local Sparse-Land model on image patches, serving as regularization. Several important modifications to the above-mentioned solution are introduced, and are shown to lead to improved results. These modifications include a major simplification of the overall process both in terms of the computational complexity and the algorithm architecture, using a different training approach for the dictionary-pair, and introducing the ability to operate without a training-set by boot-strapping the scale-up task from the given low-resolution image. We demonstrate the results on true images, showing both visual and PSNR improvements.
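The degradation model underlying this problem class (blur, then down-scale, then noise) can be sketched in a few lines of NumPy. The 3-tap blur kernel, scale factor, and noise level below are illustrative stand-ins, not the operators used in the paper:

```python
import numpy as np

def degrade(x, scale=2, noise_sigma=0.0, seed=0):
    """Forward model of the scale-up problem: blur, decimate, add noise.
    The separable 3-tap blur is an illustrative choice."""
    k = np.array([0.25, 0.5, 0.25])
    # separable blur: filter rows, then columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    low = blurred[::scale, ::scale]                 # down-sample
    rng = np.random.default_rng(seed)
    return low + noise_sigma * rng.standard_normal(low.shape)
```

Recovering `x` from the output of such a model is the ill-posed inverse problem that the sparse prior regularizes.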

2,667 citations


Journal ArticleDOI
TL;DR: The quantitative peak signal-to-noise ratio (PSNR) and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
Abstract: In this paper, we propose a new super-resolution technique based on interpolation of the high-frequency subband images obtained by discrete wavelet transform (DWT) and of the input image. The proposed technique uses DWT to decompose an image into different subband images. The high-frequency subband images and the input low-resolution image are then interpolated, and all of these images are combined to generate a new super-resolved image using inverse DWT. The proposed technique has been tested on Lena, Elaine, Pepper, and Baboon. The quantitative peak signal-to-noise ratio (PSNR) and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques. For the Lena image, the PSNR is 7.93 dB higher than with bicubic interpolation.
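The building block of this family of methods is the one-level 2-D DWT and its inverse. A minimal orthonormal Haar analysis/synthesis pair in NumPy is sketched below; the paper's full method would additionally interpolate the high-frequency subbands and substitute the input image before the inverse transform, which this sketch does not show:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT (orthonormal) of an even-sized image."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)           # rows: low-pass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)           # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)    # low-low subband
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Perfect-reconstruction inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x
```

The synthesis step exactly inverts the analysis step, which is what lets the method inject interpolated subbands and still obtain a coherent high-resolution image.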

229 citations


Journal ArticleDOI
TL;DR: The quantitative peak signal-to-noise ratio (PSNR) and visual results show the superiority of the proposed technique over the conventional bicubic interpolation, wavelet zero padding, and Irani and Peleg based image resolution enhancement techniques.
Abstract: In this letter, a satellite image resolution enhancement technique based on interpolation of the high-frequency subband images obtained by dual-tree complex wavelet transform (DT-CWT) is proposed. DT-CWT is used to decompose an input low-resolution satellite image into different subbands. Then, the high-frequency subband images and the input image are interpolated, followed by combining all these images to generate a new high-resolution image by using inverse DT-CWT. The resolution enhancement is achieved by using directional selectivity provided by the CWT, where the high-frequency subbands in six different directions contribute to the sharpness of the high-frequency details such as edges. The quantitative peak signal-to-noise ratio (PSNR) and visual results show the superiority of the proposed technique over the conventional bicubic interpolation, wavelet zero padding, and Irani and Peleg based image resolution enhancement techniques.

198 citations


Journal ArticleDOI
Jong-Woo Han, Junhyung Kim, Sung-Hyun Cheon, Jong-Ok Kim, Sung-Jea Ko
01 Feb 2010
TL;DR: A novel interpolation framework in which denoising and image sharpening methods are embedded; the proposed algorithm outperforms conventional methods while suppressing blurring and jagging.
Abstract: In general, since noise deteriorates interpolation performance in digital images, it is effective to apply denoising prior to interpolation. In this paper, we propose a novel interpolation framework in which denoising and image sharpening methods are embedded. In the proposed framework, the image is first decomposed using the bilateral filter into detail and base layers, which represent the small- and large-scale features, respectively. The detail layer is adaptively smoothed to suppress noise before interpolation, and an edge-preserving interpolation method is applied to both layers. Finally, the high-resolution image is obtained by combining the base and detail layers. Experimental results show that the proposed algorithm outperforms conventional methods while suppressing blurring and jagging.
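The base/detail split at the heart of this framework can be sketched as follows. A box blur stands in for the bilateral filter, and the fixed `detail_gain` stands in for the adaptive smoothing of the detail layer; both are simplifying assumptions, not the paper's operators:

```python
import numpy as np

def box_blur(x, r=1):
    """Mean filter with edge replication; a crude stand-in for the bilateral filter."""
    pad = np.pad(x, r, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy : r + dy + x.shape[0], r + dx : r + dx + x.shape[1]]
    return out / (2 * r + 1) ** 2

def decompose(x, r=1, detail_gain=0.5):
    """Base/detail decomposition: the detail layer is attenuated to suppress
    noise (each layer would then be interpolated separately)."""
    base = box_blur(x, r)
    detail = x - base
    return base, detail_gain * detail
```

Summing the base layer and the (un-attenuated) detail layer reconstructs the input exactly, which is the property that makes layer-wise processing lossless apart from the deliberate smoothing.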

74 citations


Proceedings ArticleDOI
29 Nov 2010
TL;DR: The experimental results show that the proposed edge-directed bicubic convolution interpolation method reduces common artifacts such as blurring, blocking, and ringing, and significantly outperforms some existing interpolation methods in terms of both subjective and objective measures.
Abstract: Image interpolation is a technique for producing a high-resolution image from its low-resolution counterpart, which is often required in many image processing tasks. In this paper, we propose an edge-directed bicubic convolution (BC) interpolation. The proposed method can adapt well to the varying edge structures of images. The experimental results show that it reduces common artifacts such as blurring, blocking, and ringing, and significantly outperforms some existing interpolation methods (including BC interpolation) in terms of both subjective and objective measures.
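For reference, the non-edge-directed BC baseline that this paper improves on is Keys' cubic convolution. A minimal 1-D NumPy version is shown below; `a = -0.5` is the usual parameter choice, not something specified in this abstract:

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys cubic convolution kernel, supported on [-2, 2]."""
    x = np.abs(x)
    w = np.zeros_like(x, dtype=float)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    w[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    w[m2] = a * (x[m2]**3 - 5 * x[m2]**2 + 8 * x[m2] - 4)
    return w

def cubic_interp1d(samples, t):
    """Interpolate uniformly spaced samples at fractional position t,
    using the four nearest neighbours (edge-replicated at the borders)."""
    i = int(np.floor(t))
    offsets = np.arange(i - 1, i + 3)
    idx = np.clip(offsets, 0, len(samples) - 1)
    w = keys_kernel(t - offsets.astype(float))
    return float(np.dot(samples[idx], w))
```

Because the kernel equals 1 at 0 and vanishes at nonzero integers, the interpolant passes exactly through the samples; the edge-directed variant of the paper steers this fixed kernel along local edge orientations.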

61 citations


Journal ArticleDOI
TL;DR: The interframe interpolation method presented consists of partially replacing, using structuring elements of growing size, the morphological skeleton decomposition subsets of a binary or grayscale input frame with those of a binary or grayscale output frame.
Abstract: The interframe interpolation method presented consists of partially replacing, using structuring elements of growing size, the morphological skeleton decomposition subsets of a binary or grayscale input frame with the morphological skeleton decomposition subsets of a binary or grayscale output frame. One of the interpolated frames obtained by this method is similar to the reference frame. Computer simulations illustrate the results.

58 citations


Journal ArticleDOI
28 Jun 2010
TL;DR: This work proposes an algorithm solving for all the necessary constraints between texel values, including through different magnification modes (nearest, bilinear, biquadratic and bicubic), and across facets using different texture resolutions.
Abstract: Surface materials are commonly described by attributes stored in textures (for instance, color, normal, or displacement). Interpolation during texture lookup provides a continuous value field everywhere on the surface, except at the chart boundaries where visible discontinuities appear. We propose a solution to make these seams invisible, while still outputting a standard texture atlas. Our method relies on recent advances in quad remeshing using global parameterization to produce a set of texture coordinates aligning texel grids across chart boundaries. This property makes it possible to ensure that the interpolated value fields on both sides of a chart boundary precisely match, making all seams invisible. However, this requirement on the uv coordinates needs to be complemented by a set of constraints on the colors stored in the texels. We propose an algorithm solving for all the necessary constraints between texel values, including through different magnification modes (nearest, bilinear, biquadratic and bicubic), and across facets using different texture resolutions. In the typical case of bilinear magnification and uniform resolution, none of the texels appearing on the surface are constrained. Our approach also ensures perfect continuity across several MIP-mapping levels.

53 citations


Journal ArticleDOI
TL;DR: In this article, different types of spatial interpolation for the material-point method are analyzed for the small-strain problem of a vibrating bar and the best results are obtained using quadratic elements.

52 citations


Journal ArticleDOI
TL;DR: The simulation results show that more accurate measurement and compensation results can be achieved using the fuzzy-error interpolation technique compared with its trilinear and cubic-spline counterparts.
Abstract: This paper provides a comparison between a novel technique for pose-error measurement and compensation of robots based on a fuzzy-error interpolation method and other popular interpolation methods. A traditional robot calibration implements either model-based or modeless methods. In a modeless method, pose-error measurement and compensation move the robot's end-effector to target poses in the robot workspace and estimate the target position and orientation errors by interpolating among the premeasured neighboring pose errors around the target pose. For measurement, a stereo camera or other measurement devices, such as a coordinate-measuring machine (CMM) or a laser-tracking system (LTS), can be used to measure the pose errors of the robot's end-effector at predefined grid points on a cubic lattice. By using the proposed fuzzy-error interpolation technique, the accuracy of the pose-error compensation can be improved in comparison with other interpolation methods, which is confirmed by the simulation results given in this paper. A comparison among the most popular interpolation methods used in modeless robot calibration, namely trilinear, cubic spline, and fuzzy-error interpolation, is also made and discussed via simulations. The simulation results show that more accurate measurement and compensation can be achieved using the fuzzy-error interpolation technique compared with its trilinear and cubic-spline counterparts.
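The trilinear baseline against which the fuzzy-error method is compared interpolates pre-measured errors stored at the eight corners of a lattice cell. A minimal NumPy sketch, assuming a unit-spaced lattice in index coordinates:

```python
import numpy as np

def trilinear(grid, p):
    """Trilinear interpolation of values stored on a unit cubic lattice.
    grid: (nx, ny, nz) array of pre-measured values; p: query point
    in index coordinates."""
    x, y, z = p
    i, j, k = (int(np.floor(v)) for v in p)
    # clamp so the 2x2x2 cell stays inside the grid
    i = min(max(i, 0), grid.shape[0] - 2)
    j = min(max(j, 0), grid.shape[1] - 2)
    k = min(max(k, 0), grid.shape[2] - 2)
    fx, fy, fz = x - i, y - j, z - k
    c = grid[i:i+2, j:j+2, k:k+2]
    # blend along each axis in turn
    c = c[0] * (1 - fx) + c[1] * fx
    c = c[0] * (1 - fy) + c[1] * fy
    return float(c[0] * (1 - fz) + c[1] * fz)
```

Trilinear interpolation is exact for multilinear fields but cannot capture the curvature of real pose-error surfaces, which is the gap the fuzzy-error technique targets.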

50 citations


Journal ArticleDOI
TL;DR: This letter proposes a new hybrid DCT-Wiener-based interpolation scheme for video intra frame up-sampling without referencing the original high-resolution video frames; it takes full advantage of interpolation in both the DCT domain and the spatial domain and seamlessly integrates the two approaches into an improved up-sampling filter.
Abstract: Video frame resizing has received increasing research attention as contemporary video distribution needs to deliver video to receiving devices with different display resolutions. It is now becoming necessary for video distribution systems to be able to generate higher-resolution video from a lower-resolution source for some end users. Current schemes for video up-sampling either focus on improving visual quality with little or no improvement in objective quality, or increase up-sampling accuracy by adaptively optimizing interpolation methods. Many such algorithms are quite complex and require substantial extra implementation effort. In this letter, we propose a new hybrid DCT-Wiener-based interpolation scheme for video intra frame up-sampling that does not reference the original high-resolution video frames. The scheme takes full advantage of interpolation in both the DCT domain and the spatial domain and seamlessly integrates the two approaches into an improved up-sampling filter. Experiments demonstrate that noticeable improvement in both objective and visual quality is obtained. With similar complexity, the proposed algorithm can achieve up to 4 dB gain in PSNR over fixed-parameter Wiener filter-based interpolation, 4 dB over popular bicubic interpolation, and 1 dB over an up-sampling scheme in the DCT domain.
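The DCT-domain half of such a hybrid scheme amounts to zero-padding the DCT spectrum and inverting at the larger size. A self-contained NumPy sketch using an explicit orthonormal DCT-II matrix (the `factor *` rescaling compensates for the size change under the orthonormal normalization; this is an illustrative building block, not the paper's full filter):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * k * (2 * m + 1) / (2 * n))
    c[0] *= 1 / np.sqrt(2)
    return c * np.sqrt(2.0 / n)

def dct_upsample(img, factor=2):
    """Upsample an image by zero-padding its 2-D DCT spectrum."""
    h, w = img.shape
    spec = dct_matrix(h) @ img @ dct_matrix(w).T
    big = np.zeros((h * factor, w * factor))
    big[:h, :w] = spec                      # keep the low-frequency block
    # inverse DCT at the larger size; rescale to preserve intensity
    return factor * (dct_matrix(h * factor).T @ big @ dct_matrix(w * factor))
```

Zero-padding in the DCT domain is equivalent to ideal band-limited interpolation under the DCT basis; the Wiener-filter half of the paper's scheme then refines this estimate in the spatial domain.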

48 citations


Journal ArticleDOI
TL;DR: A novel direct digital frequency synthesizer with an architecture based on the quasi-linear interpolation method and its VLSI implementation is introduced and the performance of the proposed implementations is compared to several state-of-the-art DDFS designs.
Abstract: This paper introduces a novel direct digital frequency synthesizer (DDFS) with an architecture based on the quasi-linear interpolation method (QLIP). The QLIP method is a hybrid polynomial interpolation in which the first quarter of a cosine function is approximated by two sets of linear and parabolic polynomials. The section of the cosine function that is closer to its peak is interpolated by parabolic polynomials, due to its resemblance to a parabola. The rest of the function, which is closer to where it approaches zero, is interpolated by linear polynomials. The paper describes the proposed interpolation method and its VLSI implementation. The performance of the proposed implementations is compared to several state-of-the-art DDFS designs.
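The quasi-linear idea can be sketched numerically: fit a parabola to cosine samples near the peak and a straight chord near the zero crossing. The segment boundary at pi/4 and the three-point/two-point fits below are illustrative choices, not the paper's actual QLIP segmentation or coefficients:

```python
import numpy as np

def qlip_segments(split=np.pi / 4):
    """Fit one parabolic and one linear segment to cos on [0, pi/2]."""
    # parabola through three cosine samples near the peak
    xs = np.array([0.0, split / 2, split])
    par = np.polyfit(xs, np.cos(xs), 2)
    # chord through the endpoints of the outer, nearly straight segment
    lin = np.polyfit([split, np.pi / 2], np.cos([split, np.pi / 2]), 1)
    return par, lin

def qlip_cos(theta, split=np.pi / 4):
    """Piecewise approximation of cos: parabolic near the peak,
    linear near the zero, continuous at the split point."""
    par, lin = qlip_segments(split)
    theta = np.asarray(theta, dtype=float)
    return np.where(theta < split, np.polyval(par, theta), np.polyval(lin, theta))
```

Because both segments interpolate cos at the split point, the approximation is continuous there; a hardware DDFS would store only the few polynomial coefficients instead of a full sine lookup table.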

Journal ArticleDOI
TL;DR: A novel high-order interpolation is proposed to adapt the interpolation to several edge directions in the current frame to improve the rate-distortion relation of compressed images and video sequences.
Abstract: This paper proposes a selective data pruning-based compression scheme to improve the rate-distortion relation of compressed images and video sequences. The original frames are pruned to a smaller size before compression. After decoding, they are interpolated back to their original size by an edge-directed interpolation method. The data pruning phase is optimized to obtain minimal distortion in the interpolation phase. Furthermore, a novel high-order interpolation is proposed to adapt the interpolation to several edge directions in the current frame. This high-order filtering uses more surrounding pixels in the frame than the fourth-order edge-directed method, making it more robust. The algorithm is also extended to multiframe-based interpolation by using spatio-temporally surrounding pixels from the previous frame. Simulation results are shown for both image interpolation and coding applications to validate the effectiveness of the proposed methods.

Proceedings ArticleDOI
16 Aug 2010
TL;DR: Experimental results show that multi-resolution wavelet analysis is more accurate than the traditional low-pass filters on this application.
Abstract: In this paper, an approach to detecting smoke columns in outdoor forest video sequences is proposed. The approach follows three basic steps. The first step is an image pre-processing block, which resizes the image using a bicubic interpolation algorithm, converts it to intensity values with a gray-scale transformation, and finally groups it into common areas by image indexation. The second step is a smoke detection algorithm, which performs a stationary wavelet transform (SWT) to remove high frequencies in the horizontal, vertical, and diagonal details; the inverse SWT is then applied, and the image is compared to a non-smoke scene to determine possible regions of interest (ROI). To reduce the number of false alarms, the final step consists of a smoke verification algorithm, which determines whether the ROI is increasing in area. These results are combined to reach a final decision on detecting a smoke column in a sequence of static images from an outdoor video. Experimental results show that multi-resolution wavelet analysis is more accurate than traditional low-pass filters in this application.

Proceedings ArticleDOI
14 Apr 2010
TL;DR: When fully formed speckle is considered and no compression of the data is done, it is shown that the interpolated final image can be modeled following a Gamma distribution, which is a good approximation for the weighted sum of Rayleigh variables.
Abstract: The influence of the Cartesian interpolation of ultrasound data on the statistical model of the final image is studied. When fully formed speckle is considered and no compression of the data is performed, we show that the interpolated final image can be modeled by a Gamma distribution, which is a good approximation for the weighted sum of Rayleigh variables. The importance of taking the interpolation stage into account when statistically modeling ultrasound images is pointed out. The interpolation model proposed here can be easily extended to more complex distributions.

Journal ArticleDOI
TL;DR: The proposed saliency-directed image interpolation approach, using a visual attention model and particle swarm optimization (PSO), outperforms four comparison methods.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: Experimental results show that the method is quantitatively more effective than prior work using bicubic interpolation or SVR methods, and the computation time is significantly less than that of existing SVR-based methods due to the use of sparse image representations.
Abstract: Learning-based approaches for super-resolution (SR) have been studied in the past few years. In this paper, a novel single-image SR framework based on the learning of sparse image representation with support vector regression (SVR) is presented. SVR is known to offer excellent generalization ability in predicting output class labels for input data. Given a low resolution image, we approach the SR problem as the estimation of pixel labels in its high resolution version. The feature considered in this work is the sparse representation of different types of image patches. Prior studies have shown that this feature is robust to noise and occlusions present in image data. Experimental results show that our method is quantitatively more effective than prior work using bicubic interpolation or SVR methods, and our computation time is significantly less than that of existing SVR-based methods due to the use of sparse image representations.

Journal ArticleDOI
TL;DR: A novel way of applying DWT and IDWT in a piecewise manner by non-uniform down- or up-sampling of the images is proposed, producing partially sampled versions that are aggregated into the final variable-scale interpolated images.
Abstract: This paper presents the discrete wavelet transform (DWT) and its inverse (IDWT) with Haar wavelets as tools to compute variable-size interpolated versions of an image at optimum computational load. As a human observer moves closer to or farther from a scene, the retinal image of the scene zooms in or out, respectively. This zooming in or out can be modeled using variable-scale interpolation. The paper proposes a novel way of applying DWT and IDWT in a piecewise manner by non-uniform down- or up-sampling of the images to obtain partially sampled versions of the images. The partially sampled versions are then aggregated to produce the final variable-scale interpolated images. The non-uniform down- or up-sampling here is a function of the required scale of interpolation. Appropriate zero padding is used to make the images suitable for the required non-uniform sampling and the subsequent interpolation to the required scale. The concept of zeroth-level DWT is introduced here, which serves as the basis for interpolating images to a size larger than the original. The main emphasis is on computing variable-size images at lower computational load without compromising image quality. The interpolated images at different sizes and the reconstructed images are benchmarked using statistical parameters and visual comparison. The proposed approach is found to perform better than bilinear and bicubic interpolation techniques.

Journal Article
TL;DR: In this paper, shape-preserving C^1 rational cubic interpolation is investigated and the range of the optimal error constant is determined; positive, constrained, and monotone data-preserving schemes are developed.
Abstract: This paper deals with shape-preserving C^1 rational cubic interpolation. The developed rational cubic interpolating function has only one free parameter. The approximation order of the rational cubic function is investigated and the range of the optimal error constant is determined. Moreover, positive, constrained, and monotone data-preserving schemes are developed.

Journal ArticleDOI
TL;DR: The simulation results indicate that the interpolation quality of the proposed architecture is mostly better than cubic convolution interpolations, and is able to process varying-ratio image scaling for high-definition television in real-time.
Abstract: This letter presents a high-performance architecture for a novel first-order polynomial convolution interpolation for digital image scaling. Better interpolation quality is usually achieved with a higher-order model that requires complex computations. The kernel of the proposed method is built up of first-order polynomials and approximates the ideal sinc function on the interval [-2, 2]. The proposed architecture reduces the computational complexity of generating weighting coefficients, provides a simple hardware design with low computation cost, and easily meets real-time requirements. The architecture is implemented on a Virtex-II FPGA, and the high-performance very-large-scale integration architecture has been successfully designed and implemented with the TSMC 0.13 μm standard cell library. The simulation results indicate that the interpolation quality of the proposed architecture is mostly better than that of cubic convolution interpolation, and the architecture is able to process varying-ratio image scaling for high-definition television in real time.

Proceedings ArticleDOI
29 Nov 2010
TL;DR: This paper implements the steps of extracting a tumor from 2D slices of MRI brain images using Otsu's thresholding technique and various morphological operations, and designs software for reconstructing a 3D image from a set of 2D tumor images.
Abstract: This paper presents a method for 3D image reconstruction, one of the most attractive avenues in digital image processing, especially due to its application in biomedical imaging. The diversity and complexity of tumor cells make it very challenging to visualize tumors present in magnetic resonance image (MRI) data. This paper implements the steps of extracting the tumor from the 2D slices of MRI brain images using Otsu's thresholding technique and various morphological operations, and describes software designed for reconstructing a 3D image from a set of 2D tumor images. A custom-made user interface provides ease of interaction and visualization of the reconstructed data. The volume of the tumor is also estimated from these images. Doctors and radiologists can prepare and image thousands of samples and save time using this automation.

Journal ArticleDOI
TL;DR: A tight bound on the complexity of smoothing quad meshes with bicubic tensor-product B-spline patches is proven to be sharp by suitably interpreting an existing surface construction.

Journal ArticleDOI
TL;DR: In this article, a method is proposed for correcting the systematic errors caused by photon noise and the pixelation effect in cosmic shear measurements by interpolating the logarithms of the pixel readouts with either the bicubic or the bicubic spline method, as long as the pixel size is less than about the scale size of the point spread function.
Abstract: We propose easy ways of correcting for the systematic errors caused by photon noise and the pixelation effect in cosmic shear measurements. Our treatment of noise can reliably remove the noise contamination to the cosmic shear even when the flux density of the noise is comparable to that of the sources. For pixelated images, we find that one can accurately reconstruct the corresponding continuous images by interpolating the logarithms of the pixel readouts with either the bicubic or the bicubic spline method, as long as the pixel size is less than about the scale size of the point spread function (PSF; including the pixel response function), a condition that is almost always satisfied in practice. Our methodology is well defined regardless of the morphologies of the galaxies and the PSF. Although our discussion is based on the shear measurement method of Zhang, our way of treating the noise can in principle be used in other methods, and the interpolation method that we introduce for reconstructing continuous images from pixelated ones is generally useful for digital image processing of all kinds.
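The log-domain idea can be illustrated in 1-D, with linear interpolation standing in for the paper's bicubic/bicubic-spline variants (this substitution keeps the sketch short; the principle of interpolating logarithms and exponentiating is the same):

```python
import numpy as np

def log_interp(pixels, t):
    """Interpolate the logarithms of positive pixel readouts at fractional
    position t, then exponentiate. Guarantees a positive result and is exact
    for exponentially varying profiles."""
    logs = np.log(pixels)
    i = int(np.floor(t))
    i = min(max(i, 0), len(pixels) - 2)   # clamp to a valid sample pair
    f = t - i
    return float(np.exp((1 - f) * logs[i] + f * logs[i + 1]))
```

Interpolating in the log domain respects the roughly exponential fall-off of PSF wings, which is why it reconstructs continuous images from pixelated ones more faithfully than interpolating raw readouts.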

Journal ArticleDOI
TL;DR: A novel interpolation method based on Radial Basis Functions (RBF) which recovers a continuous intensity function from discrete image data samples and is designed to easily deal with the local anisotropy in the data, such as edge-structures in the image.
Abstract: This paper investigates the image interpolation problem, where the objective is to improve the resolution of an image by dilating it according to a given enlargement factor. We present a novel interpolation method based on Radial Basis Functions (RBF) which recovers a continuous intensity function from discrete image data samples. The proposed anisotropic RBF interpolant is designed to easily deal with the local anisotropy in the data, such as edge-structures in the image. Considering the underlying geometry of the image, this algorithm allows us to remove the artifacts that may arise when performing interpolation, such as blocking and blurring. Computed examples demonstrate the effectiveness of the method proposed by visual comparisons and quantitative measures.
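The isotropic RBF machinery that the anisotropic interpolant builds on fits weights so the kernel expansion passes through every sample. A minimal 1-D Gaussian-RBF sketch (the shape parameter `eps` is an illustrative choice; the paper's contribution is making the kernel anisotropic along edges, which this sketch omits):

```python
import numpy as np

def rbf_fit(x, y, eps=3.0):
    """Solve for Gaussian RBF weights so the interpolant passes through (x, y)."""
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.solve(A, y)

def rbf_eval(x, w, xq, eps=3.0):
    """Evaluate the fitted RBF interpolant at query points xq."""
    A = np.exp(-(eps * (xq[:, None] - x[None, :])) ** 2)
    return A @ w
```

The interpolation matrix is symmetric positive definite for distinct nodes, so the solve is well posed; stretching the kernel's metric differently across and along an edge yields the paper's anisotropic variant.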

Proceedings ArticleDOI
01 Dec 2010
TL;DR: This work implemented generalized interpolation using a combination of IIR and FIR filters to provide a greater degree of freedom for selecting basis functions in video coding.
Abstract: Typical interpolation methods in video coding perform filtering of reference picture samples using FIR filters for motion-compensated prediction. This process can be viewed as a signal decomposition using basis functions which are restricted by the interpolating constraint. Using the concept of generalized interpolation provides a greater degree of freedom for selecting basis functions. We implemented generalized interpolation using a combination of IIR and FIR filters. The complexity of the proposed scheme is comparable to that of an 8-tap FIR filter. Bit rate savings up to 20% compared to the H.264/AVC 6-tap filter are shown.

01 Jan 2010
TL;DR: Numerical examples are presented to illustrate the differences between using E(3) splines and other interpolations that have been studied before.
Abstract: In this paper, we consider the interpolation of fuzzy data by fuzzy-valued E(3) splines. Numerical examples are presented to illustrate the differences between using E(3) splines and other interpolations that have been studied before. AMS Subject Classification: 94D05, 26E50

Proceedings ArticleDOI
15 Oct 2010
TL;DR: This article presents an improvement to the bicubic interpolation enlargement algorithm based on hardware parallel processing.
Abstract: This article presents an improvement to the bicubic interpolation enlargement algorithm based on hardware parallel processing. The lookup-table method used in this paper avoids massive cubic and floating-point multiply operations, greatly reducing the computational load. Convolution in two directions, horizontal and vertical, over the 4x4 pixel matrix is realized on an FPGA using LPM modules. The bicubic interpolation enlargement algorithm is implemented entirely through hardware parallelism, giving it an advantage in computation speed and resource usage.
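The lookup-table idea can be sketched in software: quantize the fractional phase into a fixed number of bins and precompute the four bicubic weights per bin, so the per-pixel work reduces to one table read and four multiply-accumulates. The bin count of 64 and the Keys kernel with `a = -0.5` are illustrative assumptions, not the article's exact table:

```python
import numpy as np

def build_weight_lut(bins=64, a=-0.5):
    """Precompute the four Keys bicubic weights for each fractional phase,
    avoiding per-pixel cubic evaluation (the search-table idea)."""
    def k(x):
        x = abs(x)
        if x <= 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        if x < 2:
            return a * (x**3 - 5 * x**2 + 8 * x - 4)
        return 0.0
    lut = np.empty((bins, 4))
    for b in range(bins):
        f = b / bins
        lut[b] = [k(f + 1), k(f), k(f - 1), k(f - 2)]
    return lut

def lut_interp1d(samples, t, lut):
    """Interpolate using the precomputed weight table."""
    i = int(np.floor(t))
    b = int((t - i) * len(lut)) % len(lut)          # nearest phase bin
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(samples) - 1)
    return float(samples[idx] @ lut[b])
```

In hardware the table rows map naturally to ROM entries, and the four products can be evaluated in parallel, which is the speed advantage the article exploits.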

Posted Content
TL;DR: In this article, a spline interpolation method is proposed to perform statistical analysis on time-indexed sequences of 2D or 3D shapes, which can be compared to linear interpolation for one dimensional data.
Abstract: This article presents a new mathematical framework for performing statistical analysis on time-indexed sequences of 2D or 3D shapes. At the core of this statistical analysis is the task of time interpolation of such data. Current models in use can be compared to linear interpolation for one-dimensional data. We develop a spline interpolation method that is directly related to cubic splines on a Riemannian manifold. Our strategy consists of introducing a control variable in the Hamiltonian equations of the geodesics. Motivated by statistical modeling of spatiotemporal data, we also design a stochastic model to deal with random shape evolutions. This model is closely related to the spline model, since the control variable previously introduced is set as a random force perturbing the evolution. Although we focus on the finite-dimensional case of landmarks, our models can be extended to infinite-dimensional shape spaces, and they provide a first step toward a nonparametric growth model for shapes, taking advantage of the widely developed framework of large deformations by diffeomorphisms.

Proceedings ArticleDOI
09 Nov 2010
TL;DR: A novel and robust fusion algorithm for the infrared and visible Power-Equipment image, which is invariant to large-scale changes and illumination changes in the real operating environment of Power Equipments.
Abstract: In this paper, we present a novel and robust fusion algorithm for infrared and visible power-equipment images that is invariant to the large-scale and illumination changes found in the real operating environment of power equipment. First, the Scale-Invariant Feature Transform (SIFT) algorithm is used to extract and describe feature points from the infrared and visible images. Second, the best match for each feature point of the visible image is found by identifying its nearest neighbor in the database of feature points from the infrared image. Third, the Random Sample Consensus (RANSAC) technique is used to select an appropriate geometric transformation model and estimate its parameters. Finally, the bicubic interpolation method is employed to perform grayscale interpolation and coordinate transformation, yielding the fused visible and infrared image. Extensive experimental results on the power-equipment dataset demonstrate that our method has high stability and excellent performance.

Journal Article
TL;DR: A new region-based bicubic interpolation is proposed that keeps image quality equal to the original algorithm while reducing the computational load by more than 10 percent.
Abstract: Bicubic interpolation is an effective way to obtain a high-quality high-resolution image, but it carries a high computational load. Commonly used interpolation methods are discussed, and a new region-based bicubic interpolation is proposed. Without segmenting the image, the mean value of the four points neighboring the interpolated point is calculated and used to divide the image into two regions: a flat region and a complex region with more detail. A different interpolation algorithm is chosen for each region. Experimental results show that the proposed algorithm keeps image quality equal to the original algorithm while reducing the computational load by more than 10 percent, which is useful in practical applications.
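The region-switching idea can be sketched in 1-D: use a cheap linear blend in flat stretches and the full four-point cubic only where local detail is present. The flatness threshold, the Catmull-Rom (Keys, a = -0.5) kernel, and the local-range test below are illustrative choices, not the paper's exact classifier:

```python
import numpy as np

def adaptive_upscale_1d(samples, factor=2, thresh=1e-3):
    """Upscale a 1-D signal by `factor`, choosing a cheap linear blend in
    flat regions and the four-point cubic in detailed regions."""
    def cubic(p0, p1, p2, p3, f):
        # Catmull-Rom cubic evaluated at fraction f
        return p1 + 0.5 * f * (p2 - p0 + f * (2 * p0 - 5 * p1 + 4 * p2 - p3
                               + f * (3 * (p1 - p2) + p3 - p0)))
    n = len(samples)
    out = []
    for j in range((n - 1) * factor + 1):
        t = j / factor
        i = int(np.floor(t))
        f = t - i
        p0 = samples[max(i - 1, 0)]
        p1 = samples[i]
        p2 = samples[min(i + 1, n - 1)]
        p3 = samples[min(i + 2, n - 1)]
        if np.ptp([p0, p1, p2, p3]) < thresh:   # flat region: linear is enough
            out.append((1 - f) * p1 + f * p2)
        else:                                   # detailed region: full cubic
            out.append(cubic(p0, p1, p2, p3, f))
    return np.array(out)
```

In flat regions the cubic and linear results coincide to within the threshold, so skipping the cubic there trades no visible quality for the computational saving the paper reports.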

Journal ArticleDOI
TL;DR: In this article, a unified approach to the derivation of sufficient conditions for the k-monotonicity of splines in interpolation of kmonotone data for k = 0, …, 4 was proposed.
Abstract: We consider the problem of shape-preserving interpolation by cubic splines. We propose a unified approach to the derivation of sufficient conditions for the k-monotonicity of splines (the preservation of the sign of any derivative) in interpolation of k-monotone data for k = 0, …, 4.