
Showing papers on "Bilinear interpolation" published in 2008


Journal ArticleDOI
TL;DR: A soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time, which preserves spatial coherence of interpolated images better than the existing methods and produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality.
Abstract: The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
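For orientation, here is a minimal numpy sketch of the piecewise-autoregressive idea behind this method, under simplifying assumptions: a 4-tap diagonal AR model is fitted to a local low-resolution window by least squares and then used to predict a missing pixel. The paper's soft-decision estimator works on blocks of pixels jointly; this sketch predicts a single pixel, and all names are illustrative.

```python
import numpy as np

def ar_interpolate_center(lr_window):
    """Fit a 4-tap diagonal autoregressive model to an odd-sized low-resolution
    window by least squares, then use it to predict the window's centre pixel
    from its four diagonal neighbours.  Only a sketch of the piecewise-AR idea;
    the paper's soft-decision estimator predicts whole blocks of pixels jointly."""
    h, w = lr_window.shape
    rows, rhs = [], []
    # Each interior pixel is modelled as a combination of its 4 diagonal neighbours.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            rows.append([lr_window[i - 1, j - 1], lr_window[i - 1, j + 1],
                         lr_window[i + 1, j - 1], lr_window[i + 1, j + 1]])
            rhs.append(lr_window[i, j])
    A = np.asarray(rows, dtype=float)
    b = np.asarray(rhs, dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # fitted AR parameters
    # In the actual method the model learnt on the low-resolution grid is applied
    # on the high-resolution grid; here we simply re-predict the centre pixel.
    ci, cj = h // 2, w // 2
    neighbours = np.array([lr_window[ci - 1, cj - 1], lr_window[ci - 1, cj + 1],
                           lr_window[ci + 1, cj - 1], lr_window[ci + 1, cj + 1]],
                          dtype=float)
    return float(coeffs @ neighbours)

# Toy usage on a synthetic diagonal ramp.
window = np.add.outer(np.arange(7), np.arange(7)).astype(float)
print(ar_interpolate_center(window))
```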

588 citations


Journal ArticleDOI
TL;DR: It is shown that interpolated signals and their derivatives contain specific detectable periodic properties, and a blind, efficient, and automatic method capable of finding traces of resampling and interpolation is proposed.
Abstract: In this paper, we analyze and analytically describe the specific statistical changes brought into the covariance structure of signal by the interpolation process. We show that interpolated signals and their derivatives contain specific detectable periodic properties. Based on this, we propose a blind, efficient, and automatic method capable of finding traces of resampling and interpolation. The proposed method can be very useful in many areas, especially in image security and authentication. For instance, when two or more images are spliced together, to create high quality and consistent image forgeries, almost always geometric transformations, such as scaling, rotation, or skewing are needed. These procedures are typically based on a resampling and interpolation step. By having a method capable of detecting the traces of resampling, we can significantly reduce the successful usage of such forgeries. Among other points, the presented method is also very useful in estimation of the geometric transformations factors.
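A rough illustration of the detection principle, not the authors' estimator: resampling leaves periodic correlations, so the residual of a simple linear predictor applied to an interpolated signal shows a pronounced peak in its Fourier spectrum, while the residual of the original signal does not. The resampling factor and the fixed 2-tap predictor below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.standard_normal(512)

# Upsample by a factor of 4/3 with linear interpolation (a typical resampling step).
n = original.size
new_grid = np.linspace(0, n - 1, int(n * 4 / 3))
resampled = np.interp(new_grid, np.arange(n), original)

def predictor_residual(x):
    # Residual of a fixed 2-tap linear predictor: e[k] = x[k] - (x[k-1] + x[k+1]) / 2.
    return x[1:-1] - 0.5 * (x[:-2] + x[2:])

for name, signal in [("original", original), ("resampled", resampled)]:
    e = np.abs(predictor_residual(signal))
    spectrum = np.abs(np.fft.rfft(e - e.mean()))
    # Periodic correlations introduced by interpolation show up as a dominant
    # spectral peak; the original noise signal has no such structure.
    ratio = spectrum[1:].max() / spectrum[1:].mean()
    print(f"{name}: peak-to-mean spectral ratio = {ratio:.1f}")
```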

304 citations


Journal ArticleDOI
TL;DR: In this article, Lagrangian interpolation is used to approximate general functions by finite sums of well chosen, pre-defined, linearly independent interpolating functions; it is much simpler to implement than determining the best fits with respect to some Banach (or even Hilbert) norm.
Abstract: Lagrangian interpolation is a classical way to approximate general functions by finite sums of well chosen, pre-defined, linearly independent interpolating functions; it is much simpler to implement than determining the best fits with respect to some Banach (or even Hilbert) norms. In addition, only partial knowledge is required (here values on some set of points). The problem of defining the best sample of points is nevertheless rather complex and is in general open. In this paper we propose a way to derive such sets of points. We do not claim that the points resulting from the construction explained here are optimal in any sense. Nevertheless, the resulting interpolation method is proven to work under certain hypotheses, the process is very general and simple to implement, and compared to situations where the best behavior is known, it is relatively competitive.
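As a small illustration of why the choice of sample points matters (the question the paper addresses), the sketch below interpolates the Runge function on equispaced versus Chebyshev points; the latter is one classical answer, not the construction proposed in the paper.

```python
import numpy as np

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolant through (nodes, values) at points x."""
    x = np.asarray(x, dtype=float)
    result = np.zeros_like(x)
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        basis = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        result += yi * basis
    return result

runge = lambda t: 1.0 / (1.0 + 25.0 * t ** 2)
m = 15
x_eval = np.linspace(-1, 1, 1001)

equispaced = np.linspace(-1, 1, m)
chebyshev = np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m))   # Chebyshev points

for name, nodes in [("equispaced", equispaced), ("Chebyshev", chebyshev)]:
    err = np.max(np.abs(lagrange_eval(nodes, runge(nodes), x_eval) - runge(x_eval)))
    print(f"{name:10s} max interpolation error: {err:.3f}")
```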

288 citations


Journal ArticleDOI
TL;DR: A methodology utilizing information from simulations to generate Lyapunov function candidates satisfying necessary conditions for bilinear constraints is proposed. Qualified candidates are used to compute invariant subsets of the region-of-attraction and to initialize various bilinear search strategies for further optimization.

232 citations


Journal ArticleDOI
TL;DR: The method recognizes and exploits the low-dimensional manifold structure of the parametrized functions to provide good approximation, and an a posteriori error estimator is introduced to quantify the approximation error at little additional cost.
Abstract: We present an interpolation method for efficient approximation of parametrized functions. The method recognizes and exploits the low-dimensional manifold structure of the parametrized functions to provide good approximation. Basic ingredients include a specific problem-dependent basis set defining a low-dimensional representation of the parametrized functions, and a set of ‘best interpolation points’ capturing the spatial-parameter variation of the parametrized functions. The best interpolation points are defined as solution of a least-squares minimization problem which can be solved efficiently using standard optimization algorithms. The approximation is then determined from the basis set and the best interpolation points through an inexpensive and stable interpolation procedure. In addition, an a posteriori error estimator is introduced to quantify the approximation error and requires little additional cost. Numerical results are presented to demonstrate the accuracy and efficiency of the method. Copyright © 2007 John Wiley & Sons, Ltd.
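The sketch below illustrates the flavour of the idea with a greedy, empirical-interpolation-style selection of points for a given basis; the paper instead defines its "best points" through a least-squares minimization, so this is only a related, simplified stand-in, with illustrative names and a toy basis.

```python
import numpy as np

def greedy_interpolation_points(basis):
    """Greedy selection of interpolation points for a basis matrix whose columns
    are basis functions sampled on a spatial grid (empirical-interpolation
    flavour; the paper obtains its 'best points' from a least-squares problem)."""
    n_space, n_basis = basis.shape
    points, selected = [], []
    for k in range(n_basis):
        if k == 0:
            residual = basis[:, 0]
        else:
            B = np.stack(selected, axis=1)                    # selected basis columns
            coeffs = np.linalg.solve(B[points, :], basis[points, k])
            residual = basis[:, k] - B @ coeffs               # interpolation error
        points.append(int(np.argmax(np.abs(residual))))       # point of largest error
        selected.append(basis[:, k])
    return points

# Toy basis: a few smooth functions sampled on a grid.
x = np.linspace(0, 1, 200)
basis = np.stack([np.sin((j + 1) * np.pi * x) for j in range(5)], axis=1)
print(greedy_interpolation_points(basis))
```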

203 citations


Journal ArticleDOI
TL;DR: Results of ES-PIMs are generally superconvergent and "ultra-accurate"; no additional degrees of freedom are introduced, the implementation of the method is straightforward, and the method can achieve much better efficiency than the FEM using the same set of triangular meshes.
Abstract: This paper formulates an edge-based smoothed point interpolation method (ES-PIM) for solid mechanics using three-node triangular meshes. In the ES-PIM, displacement fields are constructed using the point interpolation method (polynomial PIM or radial PIM), and hence the shape functions possess the Kronecker delta property, which facilitates the enforcement of Dirichlet boundary conditions. Strains are obtained through a smoothing operation over each smoothing domain associated with the edges of the triangular background cells. The generalized smoothed Galerkin weak form is then used to create the discretized system equations, and the formulation is a weakened-weak formulation. Four schemes of selecting nodes for interpolation using the PIM are introduced in detail, and ES-PIM models using these four schemes have been developed. Numerical studies have demonstrated that the ES-PIM possesses the following good properties: (1) the ES-PIM models have a close-to-exact stiffness, which is much softer than that of the overly stiff FEM model and much stiffer than that of the overly soft node-based smoothed point interpolation method (NS-PIM) model; (2) results of ES-PIMs are generally superconvergent and "ultra-accurate"; (3) no additional degrees of freedom are introduced, the implementation of the method is straightforward, and the method can achieve much better efficiency than the FEM using the same set of triangular meshes.

154 citations


Journal ArticleDOI
TL;DR: In this article, a bilinear immersed finite element (IFE) space for solving second-order elliptic boundary value problems with discontinuous coefficients is discussed, which is a nonconforming finite element space and its partition can be independent of the interface.
Abstract: This article discusses a bilinear immersed finite element (IFE) space for solving second-order elliptic boundary value problems with discontinuous coefficients (interface problem). This is a nonconforming finite element space and its partition can be independent of the interface. The error estimates for the interpolation of a Sobolev function indicate that this IFE space has the usual approximation capability expected from bilinear polynomials. Numerical examples of the related finite element method are provided. © 2008 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 2008

134 citations


Journal ArticleDOI
TL;DR: A new version of the Outer Approximation for Global Optimization Algorithm by Bergamini et al. is proposed, in order to speed up the convergence in nonconvex MINLP models that involve bilinear and concave terms.

115 citations


Journal ArticleDOI
TL;DR: A linearization algorithm is proposed that solves a succession of fast linear programs that converges in a few iterations to a local solution that is competitive with the considerably more complex integer programming and other formulations.
Abstract: The multiple instance classification problem (Dietterich et al., Artif. Intell. 89:31–71, [1998]; Auer, Proceedings of 14th International Conference on Machine Learning, pp. 21–29, Morgan Kaufmann, San Mateo, [1997]; Long et al., Mach. Learn. 30(1):7–22, [1998]) is formulated using a linear or nonlinear kernel as the minimization of a linear function in a finite-dimensional (noninteger) real space subject to linear and bilinear constraints. A linearization algorithm is proposed that solves a succession of fast linear programs that converges in a few iterations to a local solution. Computational results on a number of datasets indicate that the proposed algorithm is competitive with the considerably more complex integer programming and other formulations. A distinguishing aspect of our linear classifier not shared by other multiple instance classifiers is the sparse number of features it utilizes. In some tasks, the reduction amounts to less than one percent of the original features.

100 citations


Journal ArticleDOI
TL;DR: A new technique is presented for interpolating between grey-scale images in a medical data set with a modified control grid interpolation algorithm that selectively accepts displacement field updates in a manner optimized for performance.
Abstract: A new technique is presented for interpolating between grey-scale images in a medical data set. Registration between neighboring slices is achieved with a modified control grid interpolation algorithm that selectively accepts displacement field updates in a manner optimized for performance. A cubic interpolator is then applied to pixel intensities correlated by the displacement fields. Special considerations are made for efficiency, interpolation quality, and compression in the implementation of the algorithm. Experimental results show that the new method achieves good quality, while offering dramatic improvement in efficiency relative to the best competing method.

89 citations


Journal ArticleDOI
TL;DR: In this article, the authors extend the work developed in Conn et al. (2008b, Math. Program., 111, 141-172) for complete or determined interpolation models to the cases where the number of interpolation points is higher (regression models) or lower (underdetermined models) than the number of basis elements, and they show that the mechanisms and concepts which control the quality of the sample sets, and hence of the approximation error bounds, of the interpolation model can be extended to the over- and underdetermined cases.
Abstract: In recent years there has been a considerable amount of work on the development of numerical methods for derivative-free optimization problems. Some of this work relies on the management of the geometry of sets of sampling points for function evaluation and model building. In this paper we continue the work developed in Conn et al. (2008b, Math. Program., 111, 141-172) for complete or determined interpolation models (when the number of interpolation points equals the number of basis elements), considering now the cases where the number of points is higher (regression models) and lower (underdetermined models) than the number of basis components. We show that the regression and underdetermined models essentially have similar properties to the interpolation model in that the mechanisms and concepts which control the quality of the sample sets, and hence of the approximation error bounds, of the interpolation models can be extended to the over- and underdetermined cases. We also discuss the trade-offs between using a fully determined interpolation model and the over- or underdetermined ones.
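A compact illustration of the three regimes discussed here, assuming nothing beyond numpy: with a quadratic basis, three sample points give a determined interpolation model, more points give a regression (least-squares) model, and fewer give an underdetermined model for which numpy.linalg.lstsq returns the minimum-norm coefficients. None of the geometry-management machinery of the paper is reproduced.

```python
import numpy as np

f = lambda t: 1.0 + 2.0 * t - 0.5 * t ** 2                        # function being modelled
basis = lambda t: np.stack([np.ones_like(t), t, t ** 2], axis=1)  # 3 basis elements

def build_model(sample_points):
    """Fit the quadratic model to samples of f.  With 3 points the model is
    determined (interpolation); with more points it is a regression model; with
    fewer points it is underdetermined and lstsq returns the minimum-norm
    coefficients."""
    y = f(sample_points)
    coeffs, *_ = np.linalg.lstsq(basis(sample_points), y, rcond=None)
    return coeffs

for label, pts in [("determined (3 pts)     ", np.array([-1.0, 0.0, 1.0])),
                   ("regression (7 pts)     ", np.linspace(-1, 1, 7)),
                   ("underdetermined (2 pts)", np.array([-0.5, 0.5]))]:
    print(label, np.round(build_model(pts), 3))
```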

Proceedings ArticleDOI
01 Jan 2008
TL;DR: The full method provides interpolated images with a "natural" appearance that do not present the artifacts affecting linear and nonlinear methods.
Abstract: In this paper we describe a novel general-purpose image interpolation method based on the combination of two different procedures. First, an adaptive algorithm is applied, interpolating pixel values locally along the direction where the second-order image derivative is lower. Then the interpolated values are modified using an iterative refinement that minimizes differences in second-order image derivatives, maximizes second-order derivative values, and smooths isolevel curves. The first algorithm by itself provides edge-preserving images that are measurably better than those obtained with similarly fast methods presented in the literature. The full method provides interpolated images with a "natural" appearance that do not present the artifacts affecting linear and nonlinear methods. Objective and subjective tests on a wide series of natural images clearly show the advantages of the proposed technique over existing approaches.
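A minimal sketch of the first, adaptive stage only, under simplifying assumptions: when filling the "diagonal" pixels of a 2x upscaled grid, interpolate along the diagonal that looks smoother. A first-order difference is used here as a cheap stand-in for the second-order derivative test described above, and the iterative refinement stage is omitted entirely.

```python
import numpy as np

def directional_diagonal_pixels(img):
    """Fill the 'diagonal' pixels of a 2x upscaled image: each new pixel, centred
    between four low-resolution neighbours, is the average of the pair lying
    along the locally smoother diagonal.  The absolute difference |a - d| is a
    cheap proxy for the second-order derivative used in the paper, and the
    iterative refinement stage is not included."""
    a = img[:-1, :-1].astype(float)   # top-left neighbours
    b = img[:-1, 1:].astype(float)    # top-right
    c = img[1:, :-1].astype(float)    # bottom-left
    d = img[1:, 1:].astype(float)     # bottom-right
    diff_main = np.abs(a - d)         # variation along the main diagonal
    diff_anti = np.abs(b - c)         # variation along the anti-diagonal
    # Interpolate along the direction with the smaller variation.
    return np.where(diff_main <= diff_anti, 0.5 * (a + d), 0.5 * (b + c))

img = np.tile(np.arange(6), (6, 1))
print(directional_diagonal_pixels(img))
```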

Journal ArticleDOI
TL;DR: It is concluded that the proposed methodology based on ENO interpolation improves the detection of edges in images as compared to other fourth-order methods.

Journal ArticleDOI
TL;DR: This paper proposes a new approach to kriging (minimum mean squared error linear prediction) for spatial data sets with many observations by using a Gaussian Markov random field on a lattice as an approximation of a Gaussian field.

Journal ArticleDOI
TL;DR: Uniform B-spline interpolation is presented, completely contained on the graphics processing unit (GPU), which implies that the CPU does not need to compute any lookup tables or B-spline basis functions.
Abstract: This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and Hadwiger 05], which are hard-wired on the GPU and therefore very fast. Here it is demonstrated that the cubic B-spline basis function can be evaluated in a short piece of GPU code without any conditional statements. Source code is available online.
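A CPU-side numpy sketch of the decomposition the article relies on, in 1-D: the four cubic B-spline weights collapse into two linear interpolations taken at shifted positions, which on the GPU become two hardware-filtered texture fetches. The helper names are illustrative; the final print should show the two-lerp result matching the direct 4-tap sum.

```python
import numpy as np

def bspline_weights(x):
    """Cubic B-spline weights for a fractional offset x in [0, 1)."""
    w0 = (1 - x) ** 3 / 6.0
    w1 = (3 * x ** 3 - 6 * x ** 2 + 4) / 6.0
    w2 = (-3 * x ** 3 + 3 * x ** 2 + 3 * x + 1) / 6.0
    w3 = x ** 3 / 6.0
    return w0, w1, w2, w3

def linear_sample(data, pos):
    """Linear interpolation of a 1-D array at a fractional position
    (the operation a GPU texture unit performs in hardware)."""
    i = int(np.floor(pos))
    f = pos - i
    return (1 - f) * data[i] + f * data[i + 1]

def cubic_bspline_sample(data, pos):
    """Cubic B-spline interpolation expressed as two linear interpolations
    taken at shifted positions."""
    i = int(np.floor(pos))
    w0, w1, w2, w3 = bspline_weights(pos - i)
    g0, g1 = w0 + w1, w2 + w3
    p0 = (i - 1) + w1 / g0            # shifted position for the left sample pair
    p1 = (i + 1) + w3 / g1            # shifted position for the right sample pair
    return g0 * linear_sample(data, p0) + g1 * linear_sample(data, p1)

data = np.sin(np.linspace(0.0, 3.0, 16))
pos = 7.3
i = int(np.floor(pos))
weights = bspline_weights(pos - i)
direct = sum(w * data[i - 1 + k] for k, w in enumerate(weights))   # 4-tap reference
print(cubic_bspline_sample(data, pos), direct)                     # should agree
```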

01 Jan 2008
TL;DR: In this paper, the performance of a three-step procedure for fingerprint identification and enhancement, using CLAHE (contrast limited adaptive histogram equalization) together with "Clip Limit", standard deviation and sliding neighborhood as stages during processing of the fingerprint image, is investigated.
Abstract: The purpose of this paper is to investigate the performance of a three-step procedure for fingerprint identification and enhancement, using CLAHE (contrast limited adaptive histogram equalization) together with 'Clip Limit', standard deviation and sliding neighborhood as stages during processing of the fingerprint image. Firstly, CLAHE with clip limit is applied to enhance the contrast of the small tiles existing in the fingerprint image and to combine the neighboring tiles using a bilinear interpolation in order to eliminate the artificially induced boundaries. In a second step, the image is decomposed into an array of distinct blocks and the discrimination of the blocks is obtained by computing the standard deviation of the matrix elements to remove the image background and obtain the boundaries for the region of interest. Finally, by using sliding neighborhood processing, an enhancement of the image is obtained by clarifying the minutiae (endpoints and bifurcations) in each specific pixel, a process known as thinning. The paper presents the motivation for developing this method, its phases, and its possible advantages through a simulated investigation.
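A sketch of the first two stages, assuming scikit-image is available (its equalize_adapthist implements CLAHE with a clip limit and blends neighbouring tiles bilinearly); the block size, thresholds, and synthetic input are illustrative, and the thinning/minutiae stage is not shown.

```python
import numpy as np
from skimage import exposure   # assumed library choice, not part of the paper

def enhance_and_segment(fingerprint, block=16, std_threshold=0.08):
    """Stages 1-2 of the pipeline sketched above:
    1) CLAHE with a clip limit (scikit-image blends neighbouring tiles with
       bilinear interpolation, removing artificial tile boundaries);
    2) per-block standard deviation to mask out the low-contrast background.
    Thresholds and block size are illustrative; thinning is not shown."""
    img = fingerprint.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)        # scale to [0, 1]
    enhanced = exposure.equalize_adapthist(img, clip_limit=0.02)

    mask = np.zeros_like(enhanced, dtype=bool)
    h, w = enhanced.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = enhanced[i:i + block, j:j + block]
            mask[i:i + block, j:j + block] = tile.std() > std_threshold
    return enhanced, mask

# Synthetic ridge-like pattern standing in for a fingerprint image.
yy, xx = np.mgrid[0:128, 0:128]
fake = 0.5 + 0.5 * np.sin(0.4 * xx + 0.1 * yy)
enhanced, mask = enhance_and_segment(fake)
print(enhanced.shape, mask.mean())
```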

Journal ArticleDOI
TL;DR: The authors propose a general clusterwise bilinear spatial model that simultaneously estimates market segments, their composition, a brand space, and preference/utility vectors per market segment; that is, the model performs segmentation and positioning simultaneously.
Abstract: The segmentation–targeting–positioning conceptual framework has been the traditional foundation and genesis of marketing strategy formulation. The authors propose a general clusterwise bilinear spatial model that simultaneously estimates market segments, their composition, a brand space, and preference/utility vectors per market segment; that is, the model performs segmentation and positioning simultaneously. After a review of related methodological research in the marketing, psychometrics, and classification literature streams, the authors present the technical details of the proposed two-way clusterwise bilinear spatial model. They develop an efficient alternating least squares procedure that estimates conditional globally optimum estimates of the model parameters within each iteration through analytic closed-form expressions. The authors present various model options. They provide a conceptual and empirical comparison with latent-class multidimensional scaling. They use an illustration of the ...
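The computational core of such models is alternating least squares for a bilinear form. The sketch below fits a plain low-rank bilinear model X ≈ A B^T by ALS; the paper's clusterwise spatial model additionally updates segment memberships and preference vectors within each iteration, which is not reproduced here. All names are illustrative.

```python
import numpy as np

def bilinear_als(X, rank=2, iters=50, seed=0):
    """Fit X ≈ A @ B.T by alternating least squares: each step has a closed-form
    conditional solution.  Only the bilinear core of the estimation idea; the
    clusterwise model also updates segment memberships within each iteration."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    B = rng.standard_normal((m, rank))
    for _ in range(iters):
        A = np.linalg.lstsq(B, X.T, rcond=None)[0].T   # update A with B fixed
        B = np.linalg.lstsq(A, X, rcond=None)[0].T     # update B with A fixed
    return A, B

rng = np.random.default_rng(3)
true_A = rng.standard_normal((20, 2))
true_B = rng.standard_normal((8, 2))
X = true_A @ true_B.T + 0.01 * rng.standard_normal((20, 8))
A, B = bilinear_als(X, rank=2)
print("relative reconstruction error:", np.linalg.norm(X - A @ B.T) / np.linalg.norm(X))
```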

Journal ArticleDOI
TL;DR: In this paper, an isoparametric interpolation of total quaternion for geometrically consistent, strain-objective and path-independent finite element solutions of the geometrical exact beam was proposed.

Journal ArticleDOI
TL;DR: In this article, the restoring capability of bilinear hysteretic or frictional seismic isolation systems is investigated in some detail, and the results of the parametric analyses are processed statistically, and regression analysis relations are derived that show the dependence of the residual displacement after the earthquake and the cumulative build-up of displacements after a series of successive earthquakes on the governing parameters.
Abstract: The restoring capability (or re-centering capability) is identified by the current design codes as a fundamental feature of seismic isolation systems. In this paper, the restoring capability of bilinear hysteretic or frictional seismic isolation systems is investigated in some detail. Certain energy considerations are examined first in order to provide insight into and reveal governing parameters on individual aspects of the problem. The restoring capability is then investigated through an extensive parametric study of smooth bilinear single-degree-of-freedom hysteretic systems, with parameters covering a range of typical seismic isolation systems, subjected to a large group of recorded earthquakes. The results of the parametric analyses are processed statistically, and regression analysis relations are derived that show the dependence of the residual displacement after the earthquake and the cumulative build-up of displacements after a series of successive earthquakes on the governing parameters. Based on the analysis results, the features of the bilinear system that ensure sufficient restoring capability are identified.

Journal ArticleDOI
TL;DR: In this article, the authors give bounds on sup_t |u(x, t)| for solutions u of dispersive equations on the one-dimensional torus, obtained from some improvements on bilinear types of estimate.
Abstract: We give bounds on sup_t |u(x, t)| for solutions u of dispersive equations on the one-dimensional torus. They are obtained from some improvements on bilinear types of estimate.

Journal ArticleDOI
TL;DR: This work presents a real-time, GPU-based method for distance function and distance gradient interpolation which preserves discontinuity feature curves, represented by a set of quadratic Bezier curves, with minimal restrictions on their intersections.
Abstract: The standard bilinear interpolation on normal maps results in visual artifacts along sharp features, which are common for surfaces with creases, wrinkles, and dents. In many cases, spatially varying features, like the normals near discontinuity curves, are best represented as functions of the distance to the curve and the position along the curve. For high-quality interactive rendering at arbitrary magnifications, one needs to interpolate the distance field preserving discontinuity curves exactly. We present a real-time, GPU-based method for distance function and distance gradient interpolation which preserves discontinuity feature curves. The feature curves are represented by a set of quadratic Bezier curves, with minimal restrictions on their intersections. We demonstrate how this technique can be used for real-time rendering of complex feature patterns and blending normal maps with procedurally defined profiles near normal discontinuities.

Journal ArticleDOI
TL;DR: The role of morphological shape decomposition as an efficient image decomposition tool is extended to the interpolation of images by means of generalized morphological shape decomposition.
Abstract: One of the main image representations in mathematical morphology is the shape decomposition representation, useful for image compression and pattern recognition. The morphological shape decomposition representation can be generalized to extend the scope of its algebraic characteristics as much as possible. With these generalizations, the role of morphological shape decomposition (MSD) as an efficient image decomposition tool is extended to the interpolation of images. We address binary and grayscale interframe interpolation by means of generalized morphological shape decomposition. Computer simulations illustrate the results.

Patent
10 Jan 2008
TL;DR: In this article, an image encoding method and apparatus for generating an interpolation filter using an adjacent area of a current block and a corresponding adjacent areas of a reference picture and interpolating the reference picture using the generated interpolation filters is presented.
Abstract: Provided are an image encoding method and apparatus for generating an interpolation filter using an adjacent area of a current block and a corresponding adjacent area of a reference picture and interpolating the reference picture using the generated interpolation filter, and an image decoding method and apparatus therefor. By interpolating an adjacent area of a reference picture corresponding to an adjacent area of a current block according to fractional pixel resolution of a motion vector of the current block and determining interpolation filter coefficients to minimize a difference between an interpolated adjacent area of the reference picture and the adjacent area of the current block, an interpolation filter needed for motion compensation of the current block is adaptively generated using information on the adjacent area.
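A least-squares sketch of the central idea under simplifying assumptions: patches of the reference adjacent area are stacked into a design matrix, and the filter taps minimizing the squared difference to the current block's adjacent area are solved for. Filter size, patch layout, and names are illustrative; sub-pel alignment and codec integration are omitted.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fit_adaptive_filter(ref_area, cur_area, taps=3):
    """Estimate a taps x taps filter minimizing the squared difference between
    the filtered reference adjacent area and the current block's adjacent area.
    A simplified least-squares sketch; sub-pel alignment is omitted."""
    patches = sliding_window_view(ref_area, (taps, taps))     # (H', W', taps, taps)
    H, W = patches.shape[:2]
    A = patches.reshape(H * W, taps * taps)
    off = taps // 2
    b = cur_area[off:off + H, off:off + W].reshape(-1)        # co-located target samples
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(taps, taps)

rng = np.random.default_rng(5)
reference = rng.standard_normal((16, 16))
# Pretend the current adjacent area is a filtered version of the reference one.
true_filter = np.array([[0.0, 0.25, 0.0], [0.25, 0.0, 0.25], [0.0, 0.25, 0.0]])
padded = np.pad(reference, 1, mode="edge")
current = sum(true_filter[i, j] * padded[i:i + 16, j:j + 16]
              for i in range(3) for j in range(3))
print(np.round(fit_adaptive_filter(reference, current), 2))   # recovers true_filter
```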

Proceedings ArticleDOI
Steffen Wittmann, Thomas Wedi
12 Dec 2008
TL;DR: A separable adaptive interpolation filtering is proposed in this paper that achieves the same coding efficiency as the non-separable adaptive filter used in prior art.
Abstract: Motion-compensated prediction using fractional-pel motion vectors followed by transform coding of the resulting prediction error is used in hybrid video coding. In the case of fractional-pel motion, pixels at fractional-pel positions have to be determined by interpolation. For this purpose, fixed interpolation filters are applied in H.264/AVC. By using fixed interpolation filters, time-varying effects such as aliasing, quantization errors, errors from inaccurate motion estimation, camera noise, etc. cannot be considered accurately. Thus, the coding efficiency of the motion-compensated prediction is limited. The concept of adaptive interpolation filtering addresses these effects, resulting in an increased coding efficiency. Since a non-separable adaptive filter is used in prior art, it is associated with a significantly increased computational expense at encoder and decoder. In order to reduce the computational expense, a separable adaptive interpolation filtering is proposed in this paper that achieves the same coding efficiency as the non-separable adaptive filter. With this separable interpolation filter, the computational expense of the filtering is reduced by 24% in the case of 4×4 motion-compensated blocks, 36% in the case of 8×8 motion-compensated blocks, and 42% in the case of 16×16 motion-compensated blocks compared to a non-separable filter, where the computational expense is measured by the number of calculation operations.
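The property being exploited can be checked in a few lines, assuming scipy is available: a separable 2-D kernel (an outer product of two 1-D filters) applied as a vertical pass followed by a horizontal pass gives the same result as a single 2-D convolution, while the per-pixel cost drops from about k^2 to about 2k multiplications.

```python
import numpy as np
from scipy import ndimage   # assumed; any 2-D/1-D convolution routine works

rng = np.random.default_rng(7)
image = rng.standard_normal((64, 64))

# A separable interpolation-style kernel: outer product of two 1-D filters.
v = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
kernel_2d = np.outer(v, v)

# Non-separable application: one 2-D convolution (k*k multiplications per pixel).
full = ndimage.convolve(image, kernel_2d, mode="nearest")

# Separable application: a vertical pass then a horizontal pass
# (about 2*k multiplications per pixel) -- identical up to rounding.
separable = ndimage.convolve1d(ndimage.convolve1d(image, v, axis=0, mode="nearest"),
                               v, axis=1, mode="nearest")

print("max difference:", np.abs(full - separable).max())
```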

Proceedings ArticleDOI
09 Jun 2008
TL;DR: A new compact formulation of rigid shape interpolation in terms of normal equations is provided, and a way to improve mesh independence, making the interpolation result less influenced by variations in tessellation is proposed.
Abstract: In this paper we provide a new compact formulation of rigid shape interpolation in terms of normal equations, and propose several enhancements to previous techniques. Specifically, we propose 1) a way to improve mesh independence, making the interpolation result less influenced by variations in tessellation, 2) a faster way to make the interpolation symmetric, and 3) simple modifications to enable controllable interpolation. Finally we also identify 4) a failure mode related to large rotations that is easily triggered in practical use, and we present a solution for this as well.

Journal ArticleDOI
TL;DR: In this paper, a bilinear pole-shifting technique with the H∞ control method is proposed for the dynamic response control of large structures, which can make the structural systems attain a certain target damping ratio.

Posted Content
TL;DR: In this article, the authors studied the interpolation of couples of separable Hilbert spaces with a function parameter and proved the main properties of the classical interpolation, and some applications to the interpolation of isotropic Hörmander spaces over a closed manifold are given.
Abstract: The interpolation of couples of separable Hilbert spaces with a function parameter is studied. The main properties of the classical interpolation are proved. Some applications to the interpolation of isotropic Hörmander spaces over a closed manifold are given.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: This paper presents an extension to integral images that allows for application of a wide class of non-uniform filters, explains the theoretical basis of the approach, and instantiates two concrete examples: filtering with bilinear interpolation, and filtering with approximated Gaussian weighting.
Abstract: Integral images are commonly used in computer vision and computer graphics applications. Evaluation of box filters via integral images can be performed in constant time, regardless of the filter size. Although Heckbert (1986) extended the integral image approach for more complex filters, its usage has been very limited in practice. In this paper, we present an extension to integral images that allows for application of a wide class of non-uniform filters. Our approach is superior to Heckbert's in terms of precision requirements and suitability for parallelization. We explain the theoretical basis of the approach and instantiate two concrete examples: filtering with bilinear interpolation, and filtering with approximated Gaussian weighting. Our experiments show the significant speedups we achieve, and the higher accuracy of our approach compared to Heckbert's.
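For reference, the basic integral-image mechanism the paper builds on, as a short sketch: one cumulative-sum pass, after which any box sum costs four lookups regardless of box size. The paper's extension to bilinearly weighted and approximated Gaussian filters is not reproduced.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended so that box_sum needs
    no special cases at the image border."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=float)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four lookups, independent of box size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

rng = np.random.default_rng(11)
img = rng.random((32, 32))
ii = integral_image(img)
print(box_sum(ii, 4, 5, 20, 17), img[4:20, 5:17].sum())   # the two values should match
```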

Journal ArticleDOI
TL;DR: In this paper, the L^2 bilinear generalizations of the L^4 estimate of Strichartz for solutions of the homogeneous 3D wave equation were reviewed and a short proof based solely on an estimate for the volume of intersection of two thickened spheres was given.
Abstract: We first review the L^2 bilinear generalizations of the L^4 estimate of Strichartz for solutions of the homogeneous 3D wave equation and give a short proof based solely on an estimate for the volume of intersection of two thickened spheres. We then go on to prove a number of new results, the main theme being how additional anisotropic Fourier restrictions influence the estimates. Moreover we prove some refinements, which are able to simultaneously detect both concentrations and nonconcentrations in Fourier space.

Journal ArticleDOI
TL;DR: In this paper, the authors established the time-local well-posedness for large data of a solution to the two-dimensional drift-diffusion system in the critical Hardy space H^1(R^2).