
Showing papers on "Piecewise linear function" published in 1996


Journal ArticleDOI
TL;DR: In this paper, a piecewise linear recursive convolution (PLRC) method is described that has greatly improved accuracy over the original RC approach but retains its speed and efficiency advantages.
Abstract: Electromagnetic propagation through linear dispersive media can be analyzed using the finite-difference time-domain (FDTD) method by employing the recursive convolution (RC) approach to evaluate the discrete time convolution of the electric field and the dielectric susceptibility function. The RC approach results in a fast and computationally efficient algorithm; however, the accuracy achieved is not generally as good as that obtained with other methods. A new piecewise linear recursive convolution (PLRC) method is described here that has greatly improved accuracy over the original RC approach but retains its speed and efficiency advantages.
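
The RC and PLRC accumulators can be compared on a single-pole Debye medium. The sketch below is illustrative, not the paper's FDTD code: the field history `E`, the Debye parameters, and the step size are invented test values, and the reference is a dense numerical convolution.

```python
import numpy as np

# Recursive convolution (RC) vs piecewise linear recursive convolution (PLRC)
# for a Debye susceptibility chi(t) = (d_eps / tau) * exp(-t / tau).
d_eps, tau = 2.0, 1.0
dt, n_steps = 0.1, 200
t = np.arange(n_steps) * dt
E = np.sin(2.0 * np.pi * 0.3 * t)            # stand-in field history

decay = np.exp(-dt / tau)
chi0 = d_eps * (1.0 - decay)                 # integral of chi over one step
xi0 = d_eps * (tau / dt - decay * (1.0 + tau / dt))  # first-moment term (PLRC)

psi_rc = np.zeros(n_steps)     # RC: E assumed constant over each step
psi_plrc = np.zeros(n_steps)   # PLRC: E assumed linear over each step
for n in range(1, n_steps):
    psi_rc[n] = chi0 * E[n] + decay * psi_rc[n - 1]
    psi_plrc[n] = (chi0 - xi0) * E[n] + xi0 * E[n - 1] + decay * psi_plrc[n - 1]

def psi_reference(tn, m=4000):
    """Dense trapezoidal evaluation of psi(t) = int_0^t E(t-s) chi(s) ds."""
    s = np.linspace(0.0, tn, m + 1)
    f = np.sin(2.0 * np.pi * 0.3 * (tn - s)) * (d_eps / tau) * np.exp(-s / tau)
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (s[1] - s[0])

sample = np.arange(20, n_steps, 20)
ref = np.array([psi_reference(tn) for tn in t[sample]])
err_rc = np.max(np.abs(psi_rc[sample] - ref))
err_plrc = np.max(np.abs(psi_plrc[sample] - ref))
```

Both recursions cost the same per step; only the piecewise linear treatment of E inside each interval changes, which is where the accuracy gain comes from.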

513 citations


Proceedings ArticleDOI
28 Oct 1996
TL;DR: Efficient methods for implementing general non-linear magnification transformations are presented and piecewise linear methods are introduced which allow greater efficiency and expressiveness than their continuous counterparts.
Abstract: This paper presents efficient methods for implementing general non-linear magnification transformations. Techniques are provided for: combining linear and non-linear magnifications, constraining the domain of magnifications, combining multiple transformations, and smoothly interpolating between magnified and normal views. In addition, piecewise linear methods are introduced which allow greater efficiency and expressiveness than their continuous counterparts.
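
A one-dimensional piecewise linear magnification illustrates the idea: a focus interval is stretched by a constant factor and the flanks are compressed so the endpoints stay fixed. The function name and all parameter values below are illustrative choices, not the paper's transformations.

```python
def pwl_magnify(x, center=0.5, radius=0.2, mag=2.0):
    """Piecewise linear 1D magnification of [0, 1]: the focus interval
    [center - radius, center + radius] is stretched by the factor `mag`
    and the two flanks are linearly compressed so 0 and 1 stay fixed."""
    lo, hi = center - radius, center + radius
    big = mag * radius                    # magnified half-width of the focus
    img_lo, img_hi = center - big, center + big
    assert 0.0 < img_lo and img_hi < 1.0, "focus image must fit inside [0, 1]"
    if x < lo:                            # left flank, compressed
        return x * (img_lo / lo)
    if x > hi:                            # right flank, compressed
        return 1.0 - (1.0 - x) * (1.0 - img_hi) / (1.0 - hi)
    return img_lo + (x - lo) * mag        # focus region, magnified
```

Because the map is piecewise linear, it is cheap to evaluate and invert, and the magnification factor is exact inside the focus region rather than only at its center.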

253 citations


Journal ArticleDOI
TL;DR: The proposed discretization uses convolution quadrature based on the first- and second-order backward difference methods in time, and piecewise linear finite elements in space to study the numerical approximation of an integro-differential equation.
Abstract: We study the numerical approximation of an integro-differential equation which is intermediate between the heat and wave equations. The proposed discretization uses convolution quadrature based on the first- and second-order backward difference methods in time, and piecewise linear finite elements in space. Optimal-order error bounds in terms of the initial data and the inhomogeneity are shown for positive times, without assumptions of spatial regularity of the data.

227 citations


Journal ArticleDOI
TL;DR: In this paper, the double-threshold ARCH (DTARCH) model is extended to handle the situation where both the conditional mean and the conditional variance specifications are piecewise linear given previous information.
Abstract: Tong's threshold models have been found useful in modelling nonlinearities in the conditional mean of a time series. The threshold model is extended to the so-called double-threshold ARCH (DTARCH) model, which can handle the situation where both the conditional mean and the conditional variance specifications are piecewise linear given previous information. Potential applications of such models include financial data with different (asymmetric) behaviour in a rising versus a falling market and business cycle modelling. Model identification, estimation and diagnostic checking techniques are developed. Maximum likelihood estimation can be achieved via an easy-to-use iteratively weighted least squares algorithm. Portmanteau-type statistics are also derived for checking model adequacy. An illustrative example demonstrates that asymmetric behaviour in the mean and the variance could be present in financial series and that the DTARCH model is capable of capturing these phenomena.
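
A minimal DTARCH simulation makes the two-sided switching concrete: both the AR mean and the ARCH variance change with the sign of the previous observation. All parameter values below are invented for the demonstration, not estimates from any data set.

```python
import numpy as np

# DTARCH sketch: regime 0 (falling, y[t-1] <= 0) has a larger ARCH
# intercept than regime 1 (rising), producing asymmetric volatility.
rng = np.random.default_rng(0)
n = 20000
phi = (0.3, -0.2)          # AR(1) coefficients per regime
a0 = (1.0, 0.2)            # ARCH intercepts: larger after a fall
a1 = (0.3, 0.3)            # ARCH slopes
y = np.zeros(n)
e = np.zeros(n)
h = np.zeros(n)
for t in range(1, n):
    regime = 0 if y[t - 1] <= 0.0 else 1
    h[t] = a0[regime] + a1[regime] * e[t - 1] ** 2   # conditional variance
    e[t] = np.sqrt(h[t]) * rng.standard_normal()     # innovation
    y[t] = phi[regime] * y[t - 1] + e[t]             # conditional mean

fall = y[:-1] <= 0.0                                 # regime at t-1
var_fall = np.var(e[1:][fall])
var_rise = np.var(e[1:][~fall])
```

The sample innovation variance after falls exceeds the variance after rises, the kind of asymmetry the DTARCH model is designed to capture.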

222 citations


Journal ArticleDOI
TL;DR: A reliable and efficient computational algorithm for restoring blurred and noisy images that can be used in an adaptive/interactive manner in situations when knowledge of the noise variance is either unavailable or unreliable is proposed.
Abstract: A reliable and efficient computational algorithm for restoring blurred and noisy images is proposed. The restoration process is based on the minimal total variation principle introduced by Rudin et al. For discrete images, the proposed algorithm minimizes a piecewise linear l1 function (a measure of total variation) subject to a single 2-norm inequality constraint (a measure of data fit). The algorithm starts by finding a feasible point for the inequality constraint using a (partial) conjugate gradient method. This corresponds to a deblurring process. Noise and other artifacts are removed by a subsequent total variation minimization process. The use of the linear l1 objective function for the total variation measurement leads to a simpler computational algorithm. Both the steepest descent and an affine scaling Newton method are considered to solve this constrained piecewise linear l1 minimization problem. The resulting algorithm, when viewed as an image restoration and enhancement process, has the feature that it can be used in an adaptive/interactive manner in situations when knowledge of the noise variance is either unavailable or unreliable. Numerical examples are presented to demonstrate the effectiveness of the proposed iterative image restoration and enhancement process.
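
The constrained problem can be sketched in one dimension with a deliberately simple solver: minimize the piecewise linear l1 total variation subject to a 2-norm data-fit ball. This is a projected-subgradient stand-in with no blur operator; it is not the paper's conjugate gradient / affine scaling Newton scheme, and the signal, noise level, and step sizes are invented.

```python
import numpy as np

# Minimize TV(u) = sum |u[i+1] - u[i]| subject to ||u - b||_2 <= sigma,
# via projected subgradient descent on a 1D test signal.
rng = np.random.default_rng(1)
true = np.repeat([0.0, 1.0, -0.5, 0.5], 50)       # piecewise constant signal
b = true + 0.1 * rng.standard_normal(true.size)   # noisy observation
sigma = 0.1 * np.sqrt(true.size)                  # noise-level constraint

def tv(u):
    return np.abs(np.diff(u)).sum()

u = b.copy()
for k in range(2000):
    s = np.sign(np.diff(u))
    g = np.zeros_like(u)                 # subgradient of TV at u
    g[:-1] -= s
    g[1:] += s
    u = u - 0.002 / np.sqrt(k + 1) * g   # diminishing step
    r = u - b
    nr = np.linalg.norm(r)
    if nr > sigma:                       # project back onto the data ball
        u = b + (sigma / nr) * r
```

The result removes much of the oscillatory noise (lower TV) while never leaving the data-fit ball, mirroring the deblur-then-denoise structure described in the abstract.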

198 citations


Journal ArticleDOI
TL;DR: A nonlinear orientation model for the representation of the orientation matrix of a fingerprint image shows a substantial decrease in the orientation error and is explored for efficient identity verification based on the ridge orientation.

146 citations


01 Sep 1996
TL;DR: In this article, the main principle of multigrid methods is to complement the local exchange of information in point-wise iterative methods by a global one utilizing several related systems, called coarse levels, with a smaller number of variables.
Abstract: Multigrid methods are very efficient iterative solvers for systems of algebraic equations arising from finite element and finite difference discretizations of elliptic boundary value problems. The main principle of multigrid methods is to complement the local exchange of information in point-wise iterative methods by a global one utilizing several related systems, called coarse levels, with a smaller number of variables. The coarse levels are often obtained as a hierarchy of discretizations with different characteristic meshsizes, but this requires that the discretization is controlled by the iterative method. To solve linear systems produced by existing finite element software, one needs to create an artificial hierarchy of coarse problems. The principal issue is then to obtain computational complexity and approximation properties similar to those for nested meshes, using only information in the matrix of the system and as little extra information as possible. Such an algebraic multigrid method that uses the system matrix only was developed by Ruge. The prolongations were based on the matrix of the system by partial solution from given values at selected coarse points. The coarse grid points were selected so that each point would be interpolated via so-called strong connections. Our approach is based on smoothed aggregation, introduced recently by Vanek. First the set of nodes is decomposed into small mutually disjoint subsets. A tentative interpolation (in the discrete sense) is then defined on those subsets: piecewise constant for second-order problems and piecewise linear for fourth-order problems. The prolongation operator is then obtained by smoothing the output of the tentative prolongation, and coarse level operators are defined variationally.
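
A minimal two-grid instance of smoothed aggregation can be written down for a 1D Poisson matrix. The aggregate width, damping factor, and problem size below are illustrative choices; the structure (tentative piecewise constant prolongation, Jacobi smoothing of it, variational coarse operator) follows the abstract.

```python
import numpy as np

# Two-grid smoothed aggregation sketch for the 1D Poisson matrix.
n, agg = 27, 3
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
nc = n // agg
P0 = np.zeros((n, nc))
for j in range(nc):                              # indicator of aggregate j
    P0[j * agg:(j + 1) * agg, j] = 1.0           # tentative piecewise constant
Dinv = np.diag(1.0 / np.diag(A))
P = (np.eye(n) - (2.0 / 3.0) * Dinv @ A) @ P0    # smoothed prolongation
Ac = P.T @ A @ P                                 # variational coarse operator

def jacobi(x, b, sweeps=1, w=2.0 / 3.0):
    for _ in range(sweeps):
        x = x + w * Dinv @ (b - A @ x)
    return x

def two_grid(x, b):
    x = jacobi(x, b)                             # pre-smoothing
    xc = np.linalg.solve(Ac, P.T @ (b - A @ x))  # coarse correction
    x = x + P @ xc
    return jacobi(x, b)                          # post-smoothing

rng = np.random.default_rng(2)
x = rng.standard_normal(n)
x0_norm = np.linalg.norm(x)
for _ in range(10):
    x = two_grid(x, np.zeros(n))
```

With the homogeneous system A x = 0, the iterate norm measures the error directly, and the two-grid cycle contracts it by a large factor per sweep.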

112 citations


Patent
03 Apr 1996
TL;DR: In this paper, a structural description of an integrated circuit is converted into a constraint graph, and the constraint graph is then expanded by replacing edges having piecewise linear cost functions with subgraphs constructed from the piecewise linear cost function.
Abstract: A system, method, and software product in a computer aided design apparatus for system design, to simultaneously optimize multiple performance criteria models of the system, where the performance criteria models are characterized by convex cost functions based on linear dimensional characteristics of the system being designed. One embodiment is provided in a computer aided design environment for integrated circuit design, and used to simultaneously optimize fabrication yield along with other performance criteria. Optimization is provided by converting a structural description of an integrated circuit into a constraint graph, compacting, and modifying the constraint graph to include convex cost functions for selected performance criteria to be optimized, such as yield cost functions. The cost functions are then transformed to piecewise linear cost functions. The constraint graph is then expanded by replacing edges having piecewise linear cost functions with subgraphs constructed from the piecewise linear cost functions. The expanded constraint graph is then minimized using a network flow algorithm. Once minimized, the constraint graph describes the positions of circuit elements that maximize yield (and other selected performance criteria) given the cost functions.

107 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a piecewise linear two-dimensional map to analyze the dynamics in higher-dimensional networks as exemplified by a four-dimensional network that displays chaotic behavior.

107 citations


Journal ArticleDOI
TL;DR: A procedure for creating a hierarchical basis of continuous piecewise linear polynomials on an arbitrary, unstructured, nonuniform triangular mesh is developed and it is shown that the generalized condition numbers for such iterative methods are of order $J^2$, where $J$ is the number of hierarchical basis levels.
Abstract: We develop and analyze a procedure for creating a hierarchical basis of continuous piecewise linear polynomials on an arbitrary, unstructured, nonuniform triangular mesh. Using these hierarchical basis functions, we are able to define and analyze corresponding iterative methods for solving the linear systems arising from finite element discretizations of elliptic partial differential equations. We show that such iterative methods perform as well as those developed for the usual case of structured, locally refined meshes. In particular, we show that the generalized condition numbers for such iterative methods are of order \(J^2\), where \(J\) is the number of hierarchical basis levels.

105 citations


Journal ArticleDOI
TL;DR: It is shown that for two-station systems the Lyapunov function approach is equivalent to the authors' and therefore characterizes stability exactly, and new sufficient conditions for the stability of multiclass queueing networks involving any number of stations are found.
Abstract: We introduce a new method to investigate stability of work-conserving policies in multiclass queueing networks. The method decomposes feasible trajectories and uses linear programming to test stability. We show that this linear program is a necessary and sufficient condition for the stability of all work-conserving policies for multiclass fluid queueing networks with two stations. Furthermore, we find new sufficient conditions for the stability of multiclass queueing networks involving any number of stations and conjecture that these conditions are also necessary. Previous research had identified sufficient conditions through the use of a particular class of (piecewise linear convex) Lyapunov functions. Using linear programming duality, we show that for two-station systems the Lyapunov function approach is equivalent to ours and therefore characterizes stability exactly.

Journal ArticleDOI
TL;DR: This new approach allows the implementation as a classical V-cycle and preserves the usual multigrid efficiency, and gives estimates for the asymptotic convergence rates.
Abstract: We derive globally convergent multigrid methods for discrete elliptic variational inequalities of the second kind as obtained from the approximation of related continuous problems by piecewise linear finite elements. The coarse grid corrections are computed from certain obstacle problems. The actual constraints are fixed by the preceding nonlinear fine grid smoothing. This new approach allows the implementation as a classical V-cycle and preserves the usual multigrid efficiency. We give $1-O(j^{-3})$ estimates for the asymptotic convergence rates. The numerical results indicate a significant improvement as compared with previous multigrid approaches.

Journal ArticleDOI
TL;DR: A simple and efficient polygonization algorithm that gives a practical way to construct adapted piecewise linear representations of implicit surfaces by separating structuring from sampling and reducing part of the full three-dimensional search to two dimensions.
Abstract: This paper describes a simple and efficient polygonization algorithm that gives a practical way to construct adapted piecewise linear representations of implicit surfaces. The method starts with a coarse uniform polygonal approximation of the surface and subdivides each polygon recursively according to local curvature. In this way, the inherent complexity of the problem is tamed by separating structuring from sampling and reducing part of the full three-dimensional search to two dimensions.
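
A curve analogue of the algorithm shows the coarse-then-subdivide structure: start with a coarse polygon inscribed in an implicit circle and recursively split any chord whose midpoint strays from the curve by more than a tolerance. The projection and error measure below are specific to the circle example and stand in for the paper's general curvature test.

```python
import math

# Adaptive piecewise linear approximation of the implicit circle
# f(x, y) = x^2 + y^2 - 1 = 0, starting from an inscribed square.

def project(p):
    """Push a point radially back onto the unit circle."""
    r = math.hypot(p[0], p[1])
    return (p[0] / r, p[1] / r)

def deviation(p, q):
    """Distance from the chord midpoint to the circle (a curvature proxy)."""
    mx, my = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
    return abs(math.hypot(mx, my) - 1.0)

def refine(p, q, tol):
    if deviation(p, q) <= tol:
        return [p]                         # accept chord (p, q), emit its start
    m = project(((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0))
    return refine(p, m, tol) + refine(m, q, tol)

def polygonize(tol=1e-3):
    corners = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
    verts = []
    for i in range(4):                     # refine each edge of the coarse square
        verts += refine(corners[i], corners[(i + 1) % 4], tol)
    return verts

verts = polygonize()
```

Flat regions of a general surface would stop subdividing early while highly curved regions refine further, which is the adaptivity the paper aims for.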

Book ChapterDOI
01 Jan 1996
TL;DR: In this paper, the smooth and piecewise linear manifolds within a given homotopy equivalence class are studied, and an obstruction theory is developed for deforming a homotopy equivalence between manifolds to a diffeomorphism or a piecewise linear homeomorphism.
Abstract: We will study the smooth and piecewise linear manifolds within a given homotopy equivalence class. In the first part we find an obstruction theory for deforming a homotopy equivalence between manifolds to a diffeomorphism or a piecewise linear homeomorphism. In the second part we analyze the piecewise linear case and characterize the obstructions in terms of a geometric property of the homotopy equivalence. In the third part we apply this analysis to the Hauptvermutung and complex projective space.

Journal ArticleDOI
TL;DR: Henon's method locates the switching point to a high degree of accuracy in one integration step while eliminating the need for a specified tolerance.
Abstract: Results show the importance of accurately locating the switching point between linear subdomains when numerically integrating a piecewise linear system of equations. Henon's method locates the switching point to a high degree of accuracy in one integration step while eliminating the need for a specified tolerance.
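
Henon's trick can be sketched on a piecewise linear oscillator: once a fixed-step march oversteps the switching surface x = 0, one integration step with x as the independent variable lands on the surface exactly, with no tolerance-driven bisection. The oscillator x'' = -x on x >= 0 is chosen because its crossing time (pi/2) and velocity (-1) are known; step sizes are illustrative.

```python
import math

def rk4(f, s, y, h):
    """One classical RK4 step for y' = f(s, y), independent variable s."""
    k1 = f(s, y)
    k2 = f(s + h / 2, [a + h / 2 * b for a, b in zip(y, k1)])
    k3 = f(s + h / 2, [a + h / 2 * b for a, b in zip(y, k2)])
    k4 = f(s + h, [a + h * b for a, b in zip(y, k3)])
    return [a + h / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

accel = lambda x: -x                      # the active linear subdomain

def f_time(t, y):                         # state y = (x, v), variable t
    return [y[1], accel(y[0])]

def f_space(x, z):                        # state z = (t, v), variable x (Henon)
    return [1.0 / z[1], accel(x) / z[1]]

dt = 0.01
t, y = 0.0, [1.0, 0.0]
while True:
    y_new = rk4(f_time, t, y, dt)
    if y_new[0] < 0.0:                    # overstepped the switching surface
        break
    t, y = t + dt, y_new

# One Henon step of size (0 - x) in the independent variable x:
t_cross, v_cross = rk4(f_space, y[0], [t, y[1]], -y[0])
```

The swap of dependent and independent variables turns "integrate until x = 0" into a single ordinary step of exactly the right size.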

Journal ArticleDOI
TL;DR: The proposed Binary Tree-Genetic Algorithm (BTGA) is demonstrated to produce a much lower cross-validation misclassification rate, and the multiple choices offered by the spectrum for the sensitivity-false alarm rate combination will provide the flexibility needed for pap smear slide classification.

Journal ArticleDOI
TL;DR: In this article, a method to construct the normal modes for a class of piecewise linear vibratory systems is developed, which utilizes the concepts of Poincare maps and invariant manifolds from the theory of dynamical systems.
Abstract: A method to construct the normal modes for a class of piecewise linear vibratory systems is developed in this study. The approach utilizes the concepts of Poincare maps and invariant manifolds from the theory of dynamical systems. In contrast to conventional methods for smooth systems, which expand normal modes in a series form around an equilibrium point of interest, the present method expands the normal modes in a series form of polar coordinates in a neighborhood of an invariant disk of the system. It is found that the normal modes, modal dynamics and frequency-amplitude dependence relationship are all of piecewise type. A two-degree-of-freedom example is used to demonstrate the method.

Journal ArticleDOI
TL;DR: A new method is proposed, the main features of which are to treat the optimal dimensioning problem as a bilevel programming problem, then to simplify the lower-level problem by conjugate duality theory and to handle the upper-level problem by its piecewise linear and convex nature.
Abstract: This paper is concerned with the optimal dimensioning problem in which the layout of a pipe network is given but several available diameters can be selected for each pipe. The purpose is to minimize total cost while satisfying certain restrictions. A new method is proposed, the main features of which are to treat the problem as a bilevel programming problem, then to simplify the lower level problem by conjugate duality theory and to handle the upper level problem by its piecewise linear and convex nature. Some numerical testing results are also presented.

Journal ArticleDOI
TL;DR: The MILP-model has been embedded in a prototype Decision Support System (DSS), and with respect to the proposed solution the DSS provides complete probability distributions for both costs and benefits.

Patent
03 Apr 1996
TL;DR: In this paper, a structural description of an integrated circuit is converted into a constraint graph and then modified to include convex cost functions for selected performance criteria to be optimized, such as yield cost functions.
Abstract: A system, method, and software product in a computer aided design apparatus for system design, to simultaneously optimize multiple performance criteria models of the system, where the performance criteria models are characterized by convex cost functions based on linear dimensional characteristics of the system being designed. One embodiment is provided in a computer aided design environment for integrated circuit design, and used to simultaneously optimize fabrication yield along with other performance criteria. Optimization is provided by converting a structural description of an integrated circuit into a constraint graph, compacting, and modifying the constraint graph to include convex cost functions for selected performance criteria to be optimized, such as yield cost functions. The cost functions are then transformed to piecewise linear cost functions. The constraint graph is directly minimized by an improved wire length minimizer that treats the piecewise linear cost function of each edge of the constraint graph as if it were a subgraph, without actually expanding the constraint graph with the subgraphs of the piecewise linear cost functions. Once minimized, the constraint graph describes the positions of circuit elements that maximize yield (and other selected performance criteria) given the cost functions.

Journal ArticleDOI
TL;DR: Dynamics of a four-parameter family of two-dimensional piecewise linear endomorphisms which consist of two linearly coupled one-dimensional maps are considered and it is shown that under analytically given conditions chaotic behavior in both maps can be synchronized.
Abstract: Dynamics of a four-parameter family of two-dimensional piecewise linear endomorphisms which consist of two linearly coupled one-dimensional maps is considered. We show that under analytically given conditions chaotic behavior in both maps can be synchronized. Depending on the coupling parameters, the chaotic attractor's synchronized state is characterized by different types of stability.
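
Two linearly coupled identical tent maps give a minimal instance of such a family. With the coupling form x' = (1-eps) f(x) + eps f(y) (and symmetrically for y), the difference obeys |x' - y'| <= |1 - 2*eps| * slope * |x - y|, so synchronization is expected when |1 - 2*eps| * slope < 1. This coupling form, the slope 1.98 (chosen to avoid the exact-arithmetic collapse of the slope-2 tent map), and all initial values are illustrative, not the paper's specific family.

```python
# Coupled piecewise linear (tent) maps: analytic synchronization threshold.
SLOPE = 1.98

def tent(x):
    return SLOPE * min(x, 1.0 - x)

def iterate(eps, n=400, x=0.3, y=0.31):
    gaps = []
    for _ in range(n):
        x, y = ((1 - eps) * tent(x) + eps * tent(y),
                (1 - eps) * tent(y) + eps * tent(x))
        gaps.append(abs(x - y))
    return gaps

sync_gap = iterate(eps=0.4)[-1]   # |1 - 0.8| * 1.98 = 0.396 < 1: synchronizes
free_gaps = iterate(eps=0.02)     # |1 - 0.04| * 1.98 ~ 1.90 > 1: stays apart
```

Above the threshold the gap contracts geometrically to zero; below it the transverse direction is expanding and the two chaotic orbits remain separated.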

Journal ArticleDOI
Gabriel Taubin1, Rémi Ronfard1
TL;DR: The implicit simplicial models that are introduced in this paper are implicit curves and surfaces defined by piecewise linear functions that allow for local deformations, control of the topological type, and prevention of self-intersections during deformations.
Abstract: Parametric deformable models have been extensively and very successfully used for reconstructing free-form curves and surfaces, and for tracking nonrigid deformations, but they require previous knowledge of the topological type of the data, and good initial curve or surface estimates. With deformable models, it is also computationally expensive to check for and to prevent self-intersections while tracking deformations. The implicit simplicial models that we introduce in this paper are implicit curves and surfaces defined by piecewise linear functions. This representation allows for local deformations, control of the topological type, and prevention of self-intersections during deformations. As a first application, we also describe an algorithm for 2D curve reconstruction from unorganized sets of data points. The topology, the number of connected components, and the geometry of the data are all estimated using an adaptive space subdivision approach. The four main components of the algorithm are topology estimation, curve fitting, adaptive space subdivision, and mesh relaxation.

Journal ArticleDOI
TL;DR: In this paper, an error bound is proved for a piecewise linear finite element approximation, using a backward-Euler time discretization, of a model for phase separation of a multi-component alloy.
Abstract: An error bound is proved for a fully practical piecewise linear finite element approximation, using a backward-Euler time discretization, of a model for phase separation of a multi-component alloy. Numerical experiments with three components in one and two space dimensions are also presented.

Journal ArticleDOI
TL;DR: Response functions are investigated that improve the performance of piecewise linear discriminants computed with Simplex optimization; these include the logarithmically scaled summation of the discriminant scores of the currently misclassified analyte-active interferograms.

Journal ArticleDOI
TL;DR: In this paper, the new concept of the baseline function is introduced: it is the mapping of the neuron state to the neuron output that is used to control the chaotic behavior of collective neurons.
Abstract: The design of a chaotic neuron model is proposed and implemented in a CMOS very large scale integration (VLSI) chip. The transfer function of the neuron is defined as a piecewise linear (PWL) N-shaped function. In this paper, the new concept of the baseline function is introduced. It is the mapping of the neuron state to the neuron output, and it is used to control the chaotic behavior of collective neurons. The chaotic behavior is analyzed and verified by Lyapunov exponents. An analog CMOS chip was designed to implement the theory and it was fabricated through the MOSIS program. Measurements of the fabricated chip are presented.

Journal ArticleDOI
TL;DR: A way to obtain the entire cost versus delay tradeoff curve of a combinational logic circuit in an efficient way is described, and every point on the resulting curve is the global optimum of the corresponding gate sizing problem.
Abstract: The gate sizing problem is the problem of finding load drive capabilities for all gates in a given Boolean network such that a given delay limit is kept, and the necessary cost in terms of active area usage and/or power consumption is minimal. This paper describes a way to obtain the entire cost versus delay tradeoff curve of a combinational logic circuit in an efficient way. Every point on the resulting curve is the global optimum of the corresponding gate sizing problem. The problem is solved by mapping it onto piecewise linear models in such a way that a piecewise linear (circuit) simulator can do the job. It is shown that this setup is very efficient, and can produce tradeoff curves for large circuits (thousands of gates) in a few minutes. Benchmark results for the entire set of MCNC '91 two-level examples are given.

Journal ArticleDOI
TL;DR: An efficient method of approximating a set of mutually nonintersecting simple composite planar and space Bezier curves within a prescribed tolerance using piecewise linear segments and ensuring the existence of a homeomorphism between the piecewise linear approximating segments and the actual nonlinear curves is presented.

01 Mar 1996
TL;DR: In this report, three methods of image-to-image registration using control points are evaluated: the polynomial method, the piecewise linear transformation, and the multiquadric method.
Abstract: (Authors: Fogel, David N.; Tinney, Larry R.) In this report, three methods of image-to-image registration using control points are evaluated. We assume that ephemeris sensor and platform data are unavailable. These techniques are the polynomial method, the piecewise linear transformation and the multiquadric method. The motivation for this research is the need for more accurate geometric correction of digital remote sensing data. This is especially important for airborne scanned imagery, which is characterized by greater distortions than satellite data. The polynomial and piecewise linear methods were developed for use with satellite imagery and have remained popular due to their relative simplicity in theory and implementation. With respect to airborne data, however, both of these methods have serious shortcomings. The polynomial method, a global model, is generally applied as a least-squares approximation to the control points. Mathematically it is unconstrained between points, leading to undesirable excursions in the warp. The piecewise linear method (or finite element method), a local procedure, produces a faceted irregular warp when the distortions between the control points are highly nonlinear. The multiquadric method is a radial basis function. Two radial basis functions show promise for image warping: the multiquadric and the thin plate spline. The multiquadric method is a global technique which captures local variations and interpolates, passing through the control points. It includes a tension-like parameter which can be used to adjust its behavior relative to local distortions. The principal shortcoming of the multiquadric method is that it is quite computationally intensive. Both the multiquadric method and thin plate splines have been evaluated extensively for scattered data interpolation. In a test application using badly warped aircraft imagery, the multiquadric method produced better results both visually, e.g. crooked lines were straightened, and quantitatively, with lower residual errors. The results for the multiquadric method are encouraging for improved environmental remote sensing and geographic information systems integration. The technique may be applied to satellite data as well as to airborne scanner data. The multiquadric method may be used for warping polygons and applied to mosaicking as well. Its present functional form is flexible and may be modified quite easily to further adapt to local distortions, a task not performed for this report. Advances in the rapid evaluation of radial basis functions will make both the multiquadric and thin plate spline techniques even more attractive in the future.
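
A toy multiquadric warp shows the interpolating property the report relies on: scattered control points carry known displacements, and the multiquadric basis phi(r) = sqrt(r^2 + c^2) interpolates them over the plane, with c playing the role of the tension-like parameter. All coordinates and displacements below are invented test data, not imagery.

```python
import numpy as np

# Multiquadric RBF warp from control-point displacements.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.4]])
dst = src + np.array([[0.02, 0.0], [-0.03, 0.01], [0.0, 0.04],
                      [0.01, -0.02], [0.05, 0.05]])
c = 0.5                                       # tension-like parameter

def phi(r):
    return np.sqrt(r * r + c * c)             # multiquadric basis

# Interpolation matrix and per-axis weights for the displacement field.
r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
w = np.linalg.solve(phi(r), dst - src)        # shape (n_points, 2)

def warp(points):
    d = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=2)
    return points + phi(d) @ w

warped = warp(src)
```

Unlike a least-squares polynomial fit, the warp passes exactly through every control point while still varying smoothly in between, which is the behavior the report credits for the straightened lines and lower residuals.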

Journal ArticleDOI
TL;DR: The optimum design of the class of PWL filters introduced in this paper can be postulated as a least squares problem whose variables separate into a linear and a nonlinear part.
Abstract: The continuous threshold decomposition is a segmentation operator used to split a signal into a set of multilevel components. This decomposition method can be used to represent continuous multivariate piecewise linear (PWL) functions and, therefore, can be employed to describe PWL systems defined over a rectangular lattice. The resulting filters are canonical and have a multichannel structure that can be exploited for the development of rapidly convergent algorithms. The optimum design of the class of PWL filters introduced in this paper can be postulated as a least squares problem whose variables separate into a linear and a nonlinear part. Based on this feature, parameter estimation algorithms are developed. First, a block data processing algorithm that combines linear least-squares with grid localization through recursive partitioning is introduced. Second, a time-adaptive method based on the combination of an RLS algorithm for coefficient updating and a signed gradient descent module for threshold adaptation is proposed and analyzed. A system identification problem for wave propagation through a nonlinear multilayer channel serves as a comparative example where the concepts introduced are tested against the linear, Volterra, and neural network alternatives.
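
The continuous threshold decomposition itself fits in a few lines: with unit-spaced levels 0, 1, ..., K, a sample in [0, K] splits into clipped components that sum back to the sample, and reweighting the components realizes a continuous piecewise linear function with breakpoints at the levels. The level spacing and weights below are illustrative, not from the paper's filters.

```python
import numpy as np

# Continuous threshold decomposition and a canonical PWL function built on it.
K = 3

def components(x):
    """Multilevel components c_k(x) = clip(x - k, 0, 1), k = 0..K-1."""
    return np.clip(x[..., None] - np.arange(K), 0.0, 1.0)

x = np.linspace(0.0, K, 61)
recon = components(x).sum(axis=-1)           # components sum back to x

w = np.array([1.0, 3.0, 0.5])                # per-segment slopes (filter weights)
F = components(x) @ w                        # piecewise linear function of x
```

Because each component enters linearly, fitting the weights w to data is the linear part of the least squares problem; only the threshold locations are nonlinear, which is exactly the separation the abstract exploits.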