
Showing papers on "Bilinear interpolation published in 1996"


Proceedings ArticleDOI
16 Sep 1996
TL;DR: A new method for digitally interpolating images to higher resolution is presented, whose rendering phase is based on bilinear interpolation modified to prevent interpolation across edges, as determined from an estimated high-resolution edge map.
Abstract: We present a new method for digitally interpolating images to higher resolution. It consists of two phases: rendering and correction. The rendering phase is edge-directed. From the low resolution image data, we generate a high resolution edge map by first filtering with a rectangular center-on-surround-off filter and then performing piecewise linear interpolation between the zero crossings in the filter output. The rendering phase is based on bilinear interpolation modified to prevent interpolation across edges, as determined from the estimated high resolution edge map. During the correction phase, we modify the mesh values on which the rendering is based to account for the disparity between the true low resolution data, and that predicted by a sensor model operating on the high resolution output of the rendering phase. The overall process is repeated iteratively. We show experimental results which demonstrate the efficacy of our interpolation method.
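The rendering phase starts from standard bilinear interpolation. As a point of reference, a minimal sketch of that primitive (the function name is ours; the edge-map modification described above is not included):

```python
def bilinear(img, x, y):
    """Sample image `img` (a list of rows) at real-valued (x, y) as a
    weighted average of the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
    bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
    return (1 - fy) * top + fy * bot
```

The paper's method suppresses exactly this four-neighbour averaging wherever the estimated edge map says the neighbours lie on opposite sides of an edge.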

455 citations


Journal ArticleDOI
TL;DR: In this paper, the inverse distance weighted (IDW) interpolation method has been expanded to allow users to define the expected degree of surface abruptness along thematic boundaries using a transition matrix.
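The paper's extension (user-defined surface abruptness along thematic boundaries via a transition matrix) is not reproduced here, but the classical IDW baseline it builds on can be sketched as follows; the names and the distance-power parameter are illustrative:

```python
def idw(samples, x, y, power=2):
    """Inverse distance weighted estimate at (x, y) from a list of
    (xi, yi, value) samples; `power` controls how quickly influence decays."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v  # query point coincides with a sample
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den
```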

401 citations


Journal ArticleDOI
TL;DR: In this paper, a compactly supported, continuous, orthogonal wavelet basis spanning L^2(R) was constructed using fractal interpolation functions; the wavelets share many properties normally associated with spline wavelets, in particular linear phase.
Abstract: Fractal interpolation functions are used to construct a compactly supported continuous, orthogonal wavelet basis spanning $L^2 (\mathbb{R})$. The wavelets share many of the properties normally associated with spline wavelets, in particular, they have linear phase.

317 citations


Journal ArticleDOI
TL;DR: Several evaluation studies involving patient computed tomography and magnetic resonance data, as well as mathematical phantoms, indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
Abstract: Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. Here, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n+1)-dimensional [(n+1)-D] space. The binary shape-based method is then applied to this image to create an (n+1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
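The binary core of shape-based interpolation can be sketched in one dimension; this is an illustrative reduction with hypothetical names, not the paper's (n+1)-D grey-level generalization:

```python
def signed_dist(row):
    """Signed distance map of a 1-D binary row: positive inside the
    object (1s), negative outside (0s), half-pixel offset at the border."""
    n = len(row)
    dists = []
    for i, v in enumerate(row):
        opposite = [j for j in range(n) if row[j] != v]
        d = min(abs(i - j) for j in opposite) if opposite else n
        dists.append(d - 0.5 if v else -(d - 0.5))
    return dists

def shape_interp(row_a, row_b, t):
    """Interpolate an intermediate binary row between row_a and row_b by
    linearly blending their signed distance maps, then thresholding at zero."""
    return [1 if (1 - t) * a + t * b > 0 else 0
            for a, b in zip(signed_dist(row_a), signed_dist(row_b))]
```

Because the blend operates on distances rather than grey levels, an object that shifts between the two rows slides smoothly to an intermediate position instead of fading in and out.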

231 citations


Proceedings ArticleDOI
11 Dec 1996
TL;DR: A new algorithm for nonlinear dynamic system identification with local linear models that does not underlie the "curse of dimensionality", it reveals irrelevant inputs, it detects inputs that influence the output mainly in a linear way, and it applies robust local linear estimation schemes.
Abstract: In this paper, a new algorithm (LOLIMOT) for nonlinear dynamic system identification with local linear models is proposed. The input space is partitioned by a tree-construction algorithm. The local models are interpolated by overlapping local basis functions. The resulting structure is equivalent to a Sugeno-Takagi fuzzy system and a local model network and can therefore be interpreted correspondingly. The LOLIMOT algorithm is very simple, easy to implement, and fast. Moreover, this approach has the following appealing properties: it does not underlie the "curse of dimensionality", it reveals irrelevant inputs, it detects inputs that influence the output mainly in a linear way, and it applies robust local linear estimation schemes. The drawbacks are that only orthogonal cuts are performed and that the local estimation approach may lead to interpolation errors.

180 citations


Journal ArticleDOI
TL;DR: The Galerkin method enriched with residual-free bubbles is considered for approximating the solution of the Helmholtz equation; two-dimensional tests demonstrate the improvement over the standard Galerkin method and the Galerkin-least-squares method using piecewise bilinear interpolations.
Abstract: The Galerkin method enriched with residual-free bubbles is considered for approximating the solution of the Helmholtz equation. Two-dimensional tests demonstrate the improvement over the standard Galerkin method and the Galerkin-least-squares method using piecewise bilinear interpolations.

165 citations


Proceedings ArticleDOI
16 Sep 1996
TL;DR: A novel scheme for edge-preserving image interpolation is introduced, based on a simple nonlinear filter which accurately reconstructs sharp edges, with superior performance with respect to other interpolation techniques.
Abstract: A novel scheme for edge-preserving image interpolation is introduced, which is based on the use of a simple nonlinear filter which accurately reconstructs sharp edges. Simulation results show the superior performance of the proposed approach with respect to other interpolation techniques.

161 citations


Book ChapterDOI
Andrew R. Conn1, Philippe L. Toint
01 Jan 1996
TL;DR: This paper explores the use of multivariate interpolation techniques in the context of methods for unconstrained optimization that do not require derivatives of the objective function, and proposes a new algorithm that uses quadratic models in a trust region framework.
Abstract: This paper explores the use of multivariate interpolation techniques in the context of methods for unconstrained optimization that do not require derivatives of the objective function. A new algorithm is proposed that uses quadratic models in a trust region framework. The algorithm is constructed to require few evaluations of the objective function and is designed to be relatively insensitive to noise in the objective function values. Its performance is analyzed on a set of 20 examples, both with and without noise.

128 citations


Journal ArticleDOI
TL;DR: Both qualitative and quantitative simulation results clearly show the superiority of the new adaptive algorithm for image interpolation with edge enhancement over standard low-pass interpolation algorithms such as bilinear, diamond-filter, or B-spline interpolation.
Abstract: A new adaptive algorithm for image interpolation with perceptual edge enhancement is proposed. Here, perceptual means that edges are enhanced and interpolated in a visually pleasing way. Each pixel neighborhood is classified into one of three categories (constant, oriented, or irregular). In each case, an optimal interpolation technique finds the missing pixels without generating unpleasant artifacts such as aliasing or ringing effects. Furthermore, a quadratic Volterra filter is employed to extract perceptually important details from the original image, which are then used to improve the overall sharpness and contrast. Both qualitative and quantitative simulation results clearly show the superiority of our method over standard low-pass interpolation algorithms such as bilinear, diamond-filter, or B-spline interpolation. © 1996 Society of Photo-Optical Instrumentation Engineers. Subject terms: image interpolation; zooming; quadratic Volterra filters; detail enhancement; image enhancement; edge enhancement.

88 citations


Journal ArticleDOI
TL;DR: A bilinear fault detection observer is proposed for a bilinear system with unknown inputs; the residual vector in the design of the observer is decoupled from the unknown inputs and is made sensitive to all the faults.

67 citations


Proceedings Article
03 Dec 1996
TL;DR: This paper studies interpolation techniques that can result in vast improvements in the online behavior of the resulting control systems: multilinear interpolation, and an interpolation algorithm based on an interesting regular triangulation of d-dimensional space.
Abstract: Dynamic Programming, Q-learning and other discrete Markov Decision Process solvers can be applied to continuous d-dimensional state-spaces by quantizing the state space into an array of boxes. This is often problematic above two dimensions: a coarse quantization can lead to poor policies, and fine quantization is too expensive. Possible solutions are variable-resolution discretization, or function approximation by neural nets. A third option, which has been little studied in the reinforcement learning literature, is interpolation on a coarse grid. In this paper we study interpolation techniques that can result in vast improvements in the online behavior of the resulting control systems: multilinear interpolation, and an interpolation algorithm based on an interesting regular triangulation of d-dimensional space. We adapt these interpolators under three reinforcement learning paradigms: (i) offline value iteration with a known model, (ii) Q-learning, and (iii) online value iteration with a previously unknown model learned from data. We describe empirical results, and the resulting implications for practical learning of continuous non-linear dynamic control.
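The first of the two interpolators studied, multilinear interpolation on a coarse grid, might be sketched as follows; the dictionary-of-corners representation is our simplification, and the triangulation-based interpolator is not shown:

```python
from itertools import product

def multilinear(values, point):
    """Multilinear interpolation of a function tabulated on an integer
    grid: `values` maps integer corner tuples to floats, `point` is a
    d-dimensional coordinate. Each of the 2^d surrounding grid corners
    contributes with a product of per-axis linear weights."""
    lo = tuple(int(c) for c in point)
    frac = [c - l for c, l in zip(point, lo)]
    est = 0.0
    for corner in product((0, 1), repeat=len(point)):
        w = 1.0
        for f, bit in zip(frac, corner):
            w *= f if bit else 1.0 - f
        est += w * values[tuple(l + b for l, b in zip(lo, corner))]
    return est
```

The 2^d corner visits are why the paper also studies a triangulation-based scheme, which touches only d+1 vertices per query and scales better with dimension.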

Journal ArticleDOI
TL;DR: In this paper, the bilinear equations of a discrete BKP hierarchy are reduced to an identity of pfaffians, and a difference analogue of the Sawada-Kotera equation is given from the discrete BkP hierarchy.
Abstract: A discrete BKP hierarchy in bilinear form is given. It is shown that the solutions of the bilinear forms are given in terms of pfaffians and the bilinear equations of a discrete BKP hierarchy are reduced to an identity of pfaffians. The N-soliton solution is explicitly constructed in terms of the pfaffian. As an example of nonlinear difference equations, a difference analogue of the Sawada-Kotera equation is given from the discrete BKP hierarchy.

Journal ArticleDOI
TL;DR: In this article, the authors considered Bertrands' bilinear affine time-frequency distributions from the point of view of their geometry in the timefrequency plane and established general construction rules for interference terms, with further interpretations in terms of localization properties, generalized means and symmetries.

Journal ArticleDOI
TL;DR: In this article, the authors answer a question posed by R. Aron, C. Finet and E. Werner, on the bilinear version of the Bishop-Phelps theorem, by exhibiting an example of a Banach space X such that the set of norm-attaining bilinear forms on X×X is not dense in the space of all continuous bilinear forms.
Abstract: We answer a question posed by R. Aron, C. Finet and E. Werner, on the bilinear version of the Bishop-Phelps theorem, by exhibiting an example of a Banach space X such that the set of norm-attaining bilinear forms on X×X is not dense in the space of all continuous bilinear forms.

Journal ArticleDOI
TL;DR: A two-step drift reduction method is proposed that substantially reduces drift, by 7.9 dB on average, compared to a reference method consisting of bilinear interpolation filtering in the first step and conventional block motion compensation in the second step.
Abstract: The prediction error instead of the original signal has to be scaled for efficient multiresolution hybrid coding. After motion compensation, the prediction error is scaled by low-pass filtering and subsequent subsampling. However, causal prediction becomes a problem when only the coded low frequency information is used to derive low resolution images at a base layer decoder. In general, a drift between the predictors of the multiresolution encoder and the base layer decoder occurs. This drift results in an additional reconstruction error of the low resolution images. It is due to aliasing which is introduced by subsampling and cross talk which is introduced by motion compensation in combination with low pass filtering. Correspondingly, a two step drift reduction method is proposed. In the first step, Wiener filtering reduces drift due to aliasing and in the second step, overlapped block motion compensation reduces drift due to cross talk. Experimental results confirm that the proposed method substantially reduces drift by 7.9 dB on the average compared to a reference method which consists of bilinear interpolation filtering in the first step and conventional block motion compensation in the second step. The drift reduction method can also be used to improve the prediction of the low frequencies at a drift free multiresolution encoder with several prediction loops. The prediction error of the low frequencies is reduced by 2.9 dB on the average for critical test signals compared to the reference method.

Journal ArticleDOI
TL;DR: The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection.
Abstract: Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage to the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume.
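The role of the interpolation kernel can be illustrated along a single ray; the sketch below (hypothetical names, not the paper's implementation) resamples one column of voxels with linear interpolation before taking the maximum:

```python
def mip_ray(column, step=0.25):
    """Maximum intensity projection along one ray: resample a column of
    voxel intensities at sub-voxel positions with linear interpolation,
    then keep the brightest sample found."""
    n = len(column)
    best = max(column)  # the voxel centres themselves
    t = 0.0
    while t <= n - 1:
        i = int(t)
        f = t - i
        if f > 0:
            v = (1 - f) * column[i] + f * column[i + 1]
            best = max(best, v)
        t += step
    return best
```

Since a linear interpolant is a convex combination of its two endpoints, the projected maximum never exceeds the brightest voxel on the ray; nearest-neighbor sampling, by contrast, replicates bright voxels across sub-voxel positions, which is consistent with the exaggerated small-vessel widths the abstract describes.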

Journal ArticleDOI
TL;DR: The technique was used to reduce unknown rigid motion artifact arising from the head movements of two volunteers, and a minimum energy method is proposed which utilizes the fact that planar rigid motion increases the measured energy of an ideal MR image outside the boundary of the imaging object.
Abstract: A post-processing technique has been developed to suppress the magnetic resonance imaging (MRI) artifact arising from object planar rigid motion. In two-dimensional Fourier transform (2-DFT) MRI, rotational and translational motions of the target during a magnetic resonance (MR) scan respectively impose nonuniform sampling and a phase error on the collected MRI signal. The artifact correction method introduced considers the following three conditions: (1) for planar rigid motion with known parameters, a reconstruction algorithm based on bilinear interpolation and the superposition method is employed to remove the MRI artifact, (2) for planar rigid motion with known rotation angle and unknown translational motion (including an unknown rotation center), first, a superposition bilinear interpolation algorithm is used to eliminate artifact due to rotation about the center of the imaging plane, following which a phase correction algorithm is applied to reduce the remaining phase error of the MRI signal, and (3) to estimate unknown parameters of a rigid motion, a minimum energy method is proposed which utilizes the fact that planar rigid motion increases the measured energy of an ideal MR image outside the boundary of the imaging object; by using this property all unknown parameters of a typical rigid motion are accurately estimated in the presence of noise. To confirm the feasibility of employing the proposed method in a clinical setting, the technique was used to reduce unknown rigid motion artifact arising from the head movements of two volunteers.
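Condition (1), rotation with known parameters, rests on bilinear resampling under an inverse mapping. A hedged sketch of that idea (names are ours; the paper's superposition method and phase correction are not shown):

```python
import math

def rotate_bilinear(img, angle, fill=0.0):
    """Rotate an image about its centre by `angle` radians using inverse
    mapping: each output pixel is rotated back into the source image and
    sampled there with bilinear interpolation."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    c, s = math.cos(angle), math.sin(angle)
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # inverse-rotate the target pixel back into source coordinates
            xs = c * (x - cx) + s * (y - cy) + cx
            ys = -s * (x - cx) + c * (y - cy) + cy
            if 0 <= xs <= w - 1 and 0 <= ys <= h - 1:
                x0, y0 = int(xs), int(ys)
                x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
                fx, fy = xs - x0, ys - y0
                top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
                bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
                out[y][x] = (1 - fy) * top + fy * bot
    return out
```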

Journal ArticleDOI
TL;DR: In this article, a bilinear fault detection observer is proposed for hydraulic systems with unknown input and a sufficient condition for the existence of the observer is given, and the residual generated by this observer is decoupled from the unknown input.
Abstract: A bilinear fault detection observer is proposed for bilinear systems with unknown input. A sufficient condition for the existence of the observer is given. The residual generated by this observer is decoupled from the unknown input. The method is applied to a hydraulic test rig to detect and isolate a large group of simulated faults. The effectiveness of the method in fault detection and isolation for the hydraulic system is demonstrated by real data simulation.

01 Feb 1996
TL;DR: Determining if the optimal value of a bilinear programming problem is bounded is shown to be strongly NP-complete and the proposed algorithm may be used to answer this question.
Abstract: The disjoint bilinear programming problem can be reformulated using two distinct linear maxmin programming problems. There is a simple bijection between the optimal solutions of the reformulations and the bilinear problem. Moreover, the number of local optima of the reformulations is less or equal to that of the bilinear problem. In that sense, the reformulations do not introduce additional complexity. Necessary optimality conditions (complementarity and monotonicity) of both reformulations are used to obtain a finitely convergent and exact branch and bound algorithm for the bilinear problem. Determining if the optimal value of a bilinear programming problem is bounded is shown to be strongly NP-complete. The proposed algorithm may be used to answer this question.

Journal ArticleDOI
TL;DR: In this paper, a spatially variant point-spread function (SV-PSF) is used for maximum likelihood restoration on both synthetic and real data sets from the Hubble Space Telescope.
Abstract: We present results of concurrent maximum-likelihood restoration implementations with a spatially variant point-spread function (SV-PSF) on both synthetic and real data sets from the Hubble Space Telescope. We demonstrate that SV-PSF restoration exhibits superior performance compared with restoration with a spatially invariant point-spread function. We realize concurrency on a network of Unix workstations and on a SV-PSF model from sparse point-spread function reference information by means of bilinear interpolation. We then use the interpolative point-spread function model to implement several different SV-PSF restoration methods. These restoration methods are tested on a standard synthetic Hubble Space Telescope test case, and the results are compared on a computational effort–restoration performance basis. These methods are further applied to actual Hubble Space Telescope data, including an application that corrects for motion blur, and the results are presented.

Book ChapterDOI
TL;DR: In this paper, a wave front construction (WFC) algorithm was designed for smooth media for application to prestack depth migration, and the highest priority was given to maximum computational speed to allow an extension of the techniques to 3D media.
Abstract: We implemented a wave front construction algorithm specifically designed for smooth media for application to prestack depth migration. The highest priority was given to maximum computational speed to allow an extension of the techniques to 3D media. A simple grid-based model representation in combination with fast bilinear interpolation is used. It is shown that this procedure has no distorting effects on the ray tracing results for smooth media. In our implementation, wave front construction (WFC) has proven to be as fast as some of the recently developed methods for travel time computations. WFC has advantages over these methods, since amplitudes and other ray theoretical quantities are available, and it is not restricted to the calculation of only first arrivals. Thus, it meets the requirements for migration in complex media. Furthermore, WFC allows for introduction of a perturbation scheme for computing travel times for slightly varying models simultaneously. This has applications for, e.g., prestack velocity estimation techniques. The importance of later arrivals for migration in complex media is demonstrated by prestack images of the Marmousi data set.

Journal ArticleDOI
TL;DR: It is demonstrated how parallel processing can be used to reduce computation times to levels that are suitable for interactive interpolation analyses of large spatial databases.
Abstract: Large spatial interpolation problems present significant computational challenges even for the fastest workstations. In this paper we demonstrate how parallel processing can be used to reduce computation times to levels that are suitable for interactive interpolation analyses of large spatial databases. Though the approach developed in this paper can be used with a wide variety of interpolation algorithms, we specifically contrast the results obtained from a global ‘brute force’ inverse–distance weighted interpolation algorithm with those obtained using a much more efficient local approach. The parallel versions of both implementations are superior to their sequential counterparts. However, the local version of the parallel algorithm provides the best overall performance.
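The "local approach" contrasted with the brute-force global algorithm can be sketched by restricting IDW to the k nearest samples; this illustrates the idea only, not the paper's parallel implementation:

```python
import heapq

def idw_local(samples, x, y, k=3, power=2):
    """Local inverse-distance-weighted estimate at (x, y): use only the
    k nearest of the (xi, yi, value) samples rather than all of them."""
    nearest = heapq.nsmallest(
        k, samples, key=lambda s: (x - s[0]) ** 2 + (y - s[1]) ** 2)
    num = den = 0.0
    for xi, yi, v in nearest:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v  # query point coincides with a sample
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den
```

Because each estimate touches only k samples instead of the full database, per-point cost no longer grows with the number of samples, which is why the local variant parallelizes so much better in the paper's experiments.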

Journal ArticleDOI
TL;DR: A new method for generating facial animation in which facial expression and shape can be changed simultaneously in real time, which enables a rapid change in facial expression with metamorphosis.
Abstract: This paper describes a new method for generating facial animation in which facial expression and shape can be changed simultaneously in real time. A 2D parameter space independent of facial shape is defined, on which facial expressions are superimposed so that the expressions can be applied to various facial shapes. A facial model is transformed by a bilinear interpolation, which enables a rapid change in facial expression with metamorphosis. The practical efficiency of this method has been demonstrated by a real-time animation system based on this method in live theater.
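Bilinear interpolation over the 2-D expression-parameter space amounts to blending four key shapes with bilinear weights; a sketch under that assumption (the function and data layout are hypothetical):

```python
def blend_expressions(corners, u, v):
    """Bilinearly blend four key shapes (flat vertex-coordinate lists)
    placed at the corners of a 2-D expression-parameter square,
    evaluated at parameters (u, v) in [0, 1] x [0, 1]."""
    e00, e10, e01, e11 = corners
    w = [(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v]
    return [sum(wk * shape[i] for wk, shape in zip(w, corners))
            for i in range(len(e00))]
```

Evaluating this per frame is just four multiply-adds per coordinate, which is consistent with the real-time performance the paper reports.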

Patent
Kaoru Imao1, Satoshi Ohuchi1
10 May 1996
TL;DR: In this article, a triangular prism is selected from plural unit triangular prisms in RGB space, based on x, y and z coordinates of the input signals, a gradient factor and an intercept factor of the selected prism are read out from a memory, and correction data corresponding to the input signal is calculated through interpolation using the gradient and intercept factors from the memory, so that the correction signals are generated based on the calculated correction data
Abstract: An interpolation method for color correction generates correction signals Y, M and C from input signals R, G and B through interpolation. In this interpolation method, a triangular prism is selected from plural unit triangular prisms in RGB space, based on x, y and z coordinates of the input signals; a gradient factor and an intercept factor of the selected prism are read out from a memory; and correction data corresponding to the input signals is calculated through interpolation using the gradient and intercept factors from the memory, so that the correction signals are generated based on the calculated correction data.
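Per prism, the patent's scheme reduces to an affine map applied to the input triple. A hedged sketch (the prism-selection rule and data layout here are simplified guesses, not the patent's):

```python
def prism_correct(rgb, luts):
    """Piecewise-affine colour correction: select a prism of the colour
    cube (here by a simplified comparison of the first two channels),
    then apply that prism's stored gradient matrix and intercept vector."""
    r, g, b = rgb
    prism = 0 if r >= g else 1  # hypothetical selection rule
    gradient, intercept = luts[prism]
    return tuple(sum(gradient[i][j] * rgb[j] for j in range(3)) + intercept[i]
                 for i in range(3))
```

Storing only a gradient and an intercept per prism keeps the per-pixel work to one small matrix-vector product, which is the point of the patent's memory layout.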

Journal ArticleDOI
TL;DR: In this article, a system of coupled integrable dispersionless (CID) equations is considered and the integrability properties through Painleve (P) analysis are discussed.
Abstract: Considering a system of coupled integrable dispersionless (CID) equations, we discuss the integrability properties through Painleve (P) analysis. Further, we use the bilinear transformations in which nonlinear coupled dispersionless equations are modified into bilinear forms through dependent variable transformations.

Proceedings ArticleDOI
David Gesbert1, Pierre Duhamel
24 Jun 1996
TL;DR: A new joint data/channel estimation method that relies on the minimization of a bilinear MSE cost function, where the variables to be adjusted are the channel coefficient matrix and a linear equalizer, leading to globally convergent identification/equalization schemes.
Abstract: In the context of digital radio communications, the signals are transmitted through propagation channels which introduce intersymbol interference (ISI). The channels can be represented as FIR filters which have to be identified and/or equalized for the transmitted symbols to be recovered. The problem of identifying/equalizing a digital communication channel based on its temporally or spatially oversampled output has gained much attention (single-input/multiple-output-SIMO-deconvolution). In this context, we propose a new joint data/channel estimation method. Our technique relies on the minimization of a bilinear MSE cost function, where the variables to be adjusted are the channel coefficient matrix and a linear equalizer. We show that this a priori choice of a linear equalization structure allows the derivation of a second-order unimodal criterion, leading to globally convergent identification/equalization schemes. The proposed method is completely blind in that (1) no assumption is required upon the transmitted sequence statistics or alphabet, and (2) it shows some robustness with respect to the channel order estimation problem (thus improving on most previous related works). It also allows the free choice of a delay in the equalizer so that output noise amplification can be optimized.

Journal ArticleDOI
TL;DR: This work clarifies the connections between two apparently unrelated approaches to bandlimited interpolation by showing that, in a certain sense, they are the dual of each other.
Abstract: We clarify the connections between two apparently unrelated approaches to bandlimited interpolation by showing that, in a certain sense, they are the dual of each other. The advantages of recognizing this duality are discussed.

Book ChapterDOI
Fernand Meyer1
01 Jan 1996
TL;DR: The present study introduces an interpolation technique for mosaic images that extends simpler techniques designed for binary images, which are presented first.
Abstract: A mosaic image is a partition of the plane. Each class of the partition has a label. Such partitions are produced, in particular, when using object-oriented image coding. The present study introduces an interpolation technique for mosaic images. The results obtained for mosaic images extend simpler techniques designed for binary images, that we shall present first.

Patent
09 Aug 1996
TL;DR: In this paper, an apparatus and method for modeling optimization problems providing variable specification of both input and output in enhanced graph theoretic form is presented, where problem elements including nodes and links may be defined, as well as constraints on nodes, links and groups of nodes, including proportional and required relationships between network elements.
Abstract: An apparatus and method for modeling optimization problems providing variable specification of both input and output in enhanced graph theoretic form. Problem elements including nodes and links may be defined, as may constraints on nodes and links and on groups of nodes and links including proportional and required relationships between network elements and groups of network elements that are connected and unconnected. Data received in enhanced graph theoretic format are transformed into the form of an objective function, possibly including linear, bilinear, and quadratic terms, and a system of constraints, which are then solved using network program, linear program or mixed integer linear program software.

Journal ArticleDOI
TL;DR: The authors propose a nonlinear-filter-based approach to gray-scale interpolation of 3-D images, referred to as column-fitting interpolation, which is reminiscent of the maximum-homogeneity filter used for image enhancement.
Abstract: Three-dimensional (3-D) images are now common in radiology. A 3-D image is formed by stacking a contiguous sequence of two-dimensional cross-sectional images, or slices. Typically, the spacing between known slices is greater than the spacing between known points on a slice. Many visualization and image-analysis tasks, however, require the 3-D image to have equal sample spacing in all directions. To meet this requirement, one applies an interpolation technique to the known 3-D image to generate a new uniformly sampled 3-D image. The authors propose a nonlinear-filter-based approach to gray-scale interpolation of 3-D images. The method, referred to as column-fitting interpolation, is reminiscent of the maximum-homogeneity filter used for image enhancement. The authors also draw upon the paradigm of relaxation labeling to devise an improved column-fitting interpolator. Both methods are typically more effective than traditional gray-scale interpolation techniques.