
Showing papers on "Piecewise published in 1989"


Journal ArticleDOI
TL;DR: In this article, the authors introduce and study the most basic properties of three new variational problems suggested by applications to computer vision.
Abstract: This paper introduces and studies the most basic properties of three new variational problems which are suggested by applications to computer vision. In computer vision, a fundamental problem is to appropriately decompose the domain R of a function g(x,y) of two variables. This problem starts by describing the physical situation which produces images: assume that a three-dimensional world is observed by an eye or camera from some point P and that g1(ρ) represents the intensity of the light in this world approaching the point P from a direction ρ. If one has a lens at P focusing this light on a retina or a film (in both cases a plane domain R in which we may introduce coordinates x, y), then let g(x,y) be the strength of the light signal striking R at a point with coordinates (x,y); g(x,y) is essentially the same as g1(ρ), possibly after a simple transformation given by the geometry of the imaging system. The function g(x,y) defined on the plane domain R will be called an image. What sort of function is g? The light reflected off the surfaces Si of various solid objects Oi visible from P will strike the domain R in various open subsets Ri. When one object O1 is partially in front of another object O2 as seen from P, but some of object O2 appears as the background to the sides of O1, then the open sets R1 and R2 will have a common boundary (the 'edge' of object O1 in the image defined on R), and one usually expects the image g(x,y) to be discontinuous along this boundary.

5,516 citations
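The variational decomposition described above trades off data fidelity, smoothness within regions, and a penalty on the edge set. A discrete 1-D sketch of such an energy (toy parameter values and data; the paper treats the continuous 2-D functional) can be written as:

```python
# Toy 1-D version of a piecewise-smooth segmentation energy:
# data fidelity + smoothness (suspended across declared edges)
# + a fixed penalty per edge. Parameter values are illustrative.
def ms_energy(u, g, edges, lam=1.0, nu=0.5):
    fidelity = sum((ui - gi) ** 2 for ui, gi in zip(u, g))
    smoothness = sum((u[i + 1] - u[i]) ** 2
                     for i in range(len(u) - 1) if i not in edges)
    return fidelity + lam * smoothness + nu * len(edges)

g = [0.0, 0.1, 0.0, 1.0, 1.1, 1.0]        # a noisy step "image"
u = [0.05, 0.05, 0.05, 1.05, 1.05, 1.05]  # piecewise constant fit
# Declaring an edge between samples 2 and 3 suspends the large
# smoothness term there at the cost of one edge penalty nu.
print(ms_energy(u, g, edges={2}), ms_energy(u, g, edges=set()))
```

With these values, paying the edge penalty is cheaper than smoothing across the step, which is exactly the trade-off the variational problems formalize.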


Journal ArticleDOI
TL;DR: A second-order algorithm is presented for obtaining points on a steepest descent path from the transition state of the reactants and products; the points are optimized so that the segment of the reaction path between any two adjacent points is an arc of a circle and the gradient at each point is tangent to the path.
Abstract: A new algorithm is presented for obtaining points on a steepest descent path from the transition state of the reactants and products. In mass‐weighted coordinates, this path corresponds to the intrinsic reaction coordinate. Points on the reaction path are found by constrained optimizations involving all internal degrees of freedom of the molecule. The points are optimized so that the segment of the reaction path between any two adjacent points is given by an arc of a circle, and so that the gradient at each point is tangent to the path. Only the transition vector and the energy gradients are needed to construct the path. The resulting path is continuous, differentiable and piecewise quadratic. In the limit of small step size, the present algorithm is shown to take a step with the correct tangent vector and curvature vector; hence, it is a second order algorithm. The method has been tested on the following reactions: HCN→CNH, SiH2+H2→SiH4, CH4+H→CH3+H2, F−+CH3F→FCH3+F−, and C2H5F→C2H4+HF. Reaction paths calculated with a step size of 0.4 a.u. are almost identical to those computed with a step size of 0.1 a.u. or smaller.

5,487 citations
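In the small-step limit the path reduces to plain steepest descent. As a minimal sketch of that limiting behaviour only (normalized first-order Euler steps on a made-up quadratic potential, not the paper's second-order arc-of-circle construction):

```python
# Follow the steepest-descent path of V(x, y) = x**2 + 2*y**2 by
# fixed-length steps along the negative gradient. This is the
# first-order limit the paper improves on; its algorithm instead
# constrains each path segment to be a circular arc.
def grad(p):
    x, y = p
    return (2.0 * x, 4.0 * y)

def descend(p, step=0.01, max_steps=1000):
    path = [p]
    for _ in range(max_steps):
        gx, gy = grad(p)
        norm = (gx * gx + gy * gy) ** 0.5
        if norm < 1e-8:          # reached a stationary point
            break
        p = (p[0] - step * gx / norm, p[1] - step * gy / norm)
        path.append(p)
    return path

end = descend((1.0, 1.0))[-1]
print(end)   # ends close to the minimum at the origin
```

The paper's test that halving the step size leaves the path essentially unchanged is the practical check that the discretization is fine enough.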


Journal ArticleDOI
TL;DR: A segmentation algorithm based on sequential optimization which produces a hierarchical decomposition of the picture that can be viewed as a tree, where the nodes correspond to picture segments and where links between nodes indicate set inclusions.
Abstract: A segmentation algorithm based on sequential optimization which produces a hierarchical decomposition of the picture is presented. The decomposition is data driven with no restriction on segment shapes. It can be viewed as a tree, where the nodes correspond to picture segments and where links between nodes indicate set inclusions. Picture segmentation is first regarded as a problem of piecewise picture approximation, which consists of finding the partition with the minimum approximation error. Then, picture segmentation is presented as a hypothesis-testing process which merges only segments that belong to the same region. A hierarchical decomposition constraint is used in both cases, which results in the same stepwise optimization algorithm. At each iteration, the two most similar segments are merged by optimizing a stepwise criterion. The algorithm is used to segment a remote-sensing picture, illustrating its hierarchical structure.

305 citations
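The stepwise merge can be sketched in one dimension (a toy with a hypothetical squared-error criterion over adjacent segments; the paper works on 2-D picture segments with a statistically derived similarity measure):

```python
# Greedy stepwise merging: repeatedly merge the adjacent pair of
# segments whose replacement by a single mean value increases the
# total squared approximation error the least.
def sse(seg):
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def merge_once(segments):
    best, cost = None, float("inf")
    for i in range(len(segments) - 1):
        merged = segments[i] + segments[i + 1]
        delta = sse(merged) - sse(segments[i]) - sse(segments[i + 1])
        if delta < cost:
            best, cost = i, delta
    i = best
    return segments[:i] + [segments[i] + segments[i + 1]] + segments[i + 2:]

segs = [[1.0], [1.1], [5.0], [5.2]]
segs = merge_once(segs)   # the most similar pair merges first
segs = merge_once(segs)
print(segs)
```

Recording the merge order yields exactly the kind of tree described in the abstract: leaves are initial segments and each internal node is a merge.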


Book
01 Aug 1989
TL;DR: An edited collection on curves and surfaces in computer aided geometric design, including a simple, unified, algorithmic approach to change of basis procedures (the 'wonderful triangle' of R.N. Goldman).
Abstract: Symmetrizing multiaffine polynomials, P.J. Barry norm estimates for inverses of distance matrices, B.J.C. Baxter numerical treatment of surface-surface-intersection and contouring, K.-H. Brakhage modelling closed surfaces - a comparison of existing methods, P. Brunet and A. Vinacua a new characterization of plane elastica, G. Brunnett POLynomials, POLar forms, and interPOLation, P. de Casteljau pyramid patches provide potential polynomial paradigms, A.S. Cavaretta and C.A. Micchelli implicitizing rational surfaces with base points by applying perturbations and the factors of zero theorem, E.-W. Chionh and R.C. Goldman wavelets and multiscale interpolation, C.K. Chui and X. Shi a curve intersection algorithm with processing of singular cases - introduction of a clipping technique, M. Daniel best approximation of parametric curves by splines, W.L.F. Degen an approximately G1 cubic surface interpolant, T. DeRose and S. Mann on the G2 continuity of piecewise parametric surfaces, W. Du and F.J.M. Schmitt stationary and non-stationary binary subdivision schemes, N. Dyn and D. Levin MQ-curves are curves in tension, M. Eck offset approximation improvement by control point perturbation, G. Elber and E. Cohen curves and surfaces in geometrical optics, R.T. Farouki and J.-C.A. Chastang evaluation and properties of a derivative of a NURBS curve, M.S. Floater hybrid cubic Bezier triangle patches, T.A. Foley and K. Opitz modelling geological structures using splines, L.A. Froyland, et al wonderful triangle - a simple, unified, algorithmic approach to change of basis procedures in computer aided geometric design, R.N. Goldman an arbitrary mesh network scheme using rational splines, J.A. Gregory and P.K. Yuen Bezier curves and surface patches on quadrics, J. Hoschek monotonicity preserving interpolation using C2 rational cubic Bezier curves, M.K. Ismail on piecewise quadratic G2 approximation and interpolation, J. Kozak and M. Lokar non-affine blossoms and subdivision for Q-splines, R. Kulkarni. Part contents.

253 citations


Journal ArticleDOI
TL;DR: It is concluded that the deterministic algorithm (graduated nonconvexity) outstrips stochastic (simulated annealing) algorithms both in computational efficiency and in problem-solving power.
Abstract: Piecewise continuous reconstruction of real-valued data can be formulated in terms of nonconvex optimization problems. Both stochastic and deterministic algorithms have been devised to solve them. The simplest such reconstruction process is the weak string. Exact solutions can be obtained for it and are used to determine the success or failure of the algorithms under precisely controlled conditions. It is concluded that the deterministic algorithm (graduated nonconvexity) outstrips stochastic (simulated annealing) algorithms both in computational efficiency and in problem-solving power.

200 citations
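The weak-string energy itself is easy to write down. A discrete sketch (illustrative parameter values; lam2 and alpha here are assumptions, and minimizing this energy is the hard nonconvex part the paper is about):

```python
# Weak-string energy in the Blake & Zisserman form: a data term plus,
# per neighbour pair, the cheaper of an elastic penalty and a fixed
# "break" cost alpha. Breaking a link models a discontinuity (edge).
def weak_string_energy(u, d, lam2=1.0, alpha=0.25):
    data = sum((ui - di) ** 2 for ui, di in zip(u, d))
    links = sum(min(lam2 * (u[i + 1] - u[i]) ** 2, alpha)
                for i in range(len(u) - 1))
    return data + links

d = [0.0, 0.0, 1.0, 1.0]
flat = [0.5] * 4             # smooth everywhere, large data error
step = [0.0, 0.0, 1.0, 1.0]  # breaks one link, pays alpha once
print(weak_string_energy(step, d), weak_string_energy(flat, d))
```

Because of the min(), the energy is nonconvex in u; graduated nonconvexity replaces it with a family of progressively less convex approximations.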


Journal ArticleDOI
E. Vieth
TL;DR: An iterative algorithm for fitting piecewise linear functions to nonrectilinear responses of biological variables is presented, and an F test is proposed to determine whether one regression line is the optimal fitted function.
Abstract: An iterative approach was developed for fitting piecewise linear functions to nonrectilinear responses of biological variables. This algorithm is used to estimate the parameters of the two (or more) regression functions and the separation point(s) (thresholds, sensitivities) by statistical approximation. Although it is often unknown whether the response of a biological variable is adequately described by one rectilinear regression function or by piecewise linear regression function(s) with separation point(s), an F test is proposed to determine whether one regression line is the optimal fitted function. A FORTRAN-77 program has been developed for estimating the optimal parameters and the coordinates of the separation point(s). A few sets of data illustrating this kind of problem in the analysis of thermoregulation, osmoregulation, and neuronal responses are discussed.

178 citations
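The core estimation step can be sketched by brute force (the paper estimates the separation point by statistical approximation; this exhaustive search over candidate breakpoints, with made-up data, is only an illustration):

```python
# Two-segment piecewise linear fit: try each interior breakpoint,
# fit a line to each side by ordinary least squares, and keep the
# split with the smallest total residual sum of squares.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, rss

def two_segment(xs, ys):
    best = None
    for k in range(2, len(xs) - 1):   # each side needs >= 2 points
        _, _, r1 = fit_line(xs[:k], ys[:k])
        _, _, r2 = fit_line(xs[k:], ys[k:])
        if best is None or r1 + r2 < best[1]:
            best = (k, r1 + r2)
    return best

xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 0, 1, 2, 3]   # flat response, then slope 1 after x = 3
k, rss = two_segment(xs, ys)
print(xs[k], rss)
```

The proposed F test then compares the residual of the single-line fit against the residual of the best two-segment fit, accounting for the extra parameters.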


Journal ArticleDOI
TL;DR: Cubic and quintic Hermite interpolants can preserve local positivity, monotonicity, and convexity of discrete data if their derivatives are restricted to satisfy constraints at the data points.
Abstract: The Hermite polynomials are simple, effective interpolants of discrete data. These interpolants can preserve local positivity, monotonicity, and convexity of the data if we restrict their derivatives to satisfy constraints at the data points. This paper describes the conditions that must be satisfied for cubic and quintic Hermite interpolants to preserve these properties when they exist in the discrete data. We construct algorithms to ensure that these constraints are satisfied and give numerical examples to illustrate the effectiveness of the algorithms on locally smooth and rough data.

1. Introduction. Piecewise polynomial interpolants, especially those based on Hermite polynomials (polynomials determined by their values and values of one or more derivatives at both ends of an interval), have a number of desirable properties. They are easy to compute once the derivative values are chosen. If the derivative values are chosen locally (e.g., by finite difference methods), then the interpolant at a given point will depend only on the given data at nearby mesh points. If the derivatives are computed by spline methods, then the interpolant will have an extra degree of continuity at the mesh points. In either case, the interpolant is linear in the given function values and has excellent convergence properties as the mesh spacing decreases. These methods, however, do not necessarily preserve the shape of the given data. When the data arise from a physical experiment, it may be vital that the interpolant preserve nonnegativity (f(x) ≥ 0), nonpositivity (f(x) ≤ 0), monotonicity (f′(x) ≥ 0 or f′(x) ≤ 0), convexity (f″(x) ≥ 0), or concavity (f″(x) ≤ 0). In this and other cases, geometric considerations, such as preventing spurious behavior near rapid changes in the data, may be more important than the asymptotic accuracy of the interpolation method.
One can construct a shape-preserving interpolant by constraining the derivatives for the Hermite polynomials to meet conditions which imply the desired properties ([4], [5], [8], [11]-[15], [20]), by adding new mesh points

147 citations
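The flavor of such derivative constraints can be sketched as follows (a simple sufficient bound in the Fritsch-Carlson spirit, with made-up data; the paper's cubic and quintic conditions are more refined):

```python
# Monotone cubic Hermite sketch: start from averaged secant slopes,
# then clip each nodal derivative into a region that suffices for
# monotonicity on each piece (flatten at local extrema, and bound
# |d_i| by three times the smaller adjacent secant slope).
def limited_derivatives(x, y):
    n = len(x)
    delta = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
    d = [delta[0]] + [(delta[i - 1] + delta[i]) / 2
                      for i in range(1, n - 1)] + [delta[-1]]
    for i in range(n):
        left = delta[i - 1] if i > 0 else delta[0]
        right = delta[i] if i < n - 1 else delta[-1]
        if left * right <= 0:
            d[i] = 0.0               # sign change: flatten the node
        else:
            cap = 3 * min(abs(left), abs(right))
            d[i] = max(-cap, min(cap, d[i]))
    return d

x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 0.1, 0.9, 1.0]   # monotone data with a steep middle piece
print(limited_derivatives(x, y))
```

The interior derivatives get clipped from 0.45 down to the cap, preventing the overshoot an unconstrained cubic would produce next to the steep segment.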


Journal ArticleDOI
TL;DR: This paper rewrites some of the conditions on the Hermite derivatives that are sufficient for a piecewise bicubic function to be monotonic and presents a much simpler five-step algorithm for satisfying them that produces a visually pleasing monotone interpolant.
Abstract: This paper describes an algorithm for monotone interpolation to monotone data on a rectangular mesh by piecewise bicubic functions. In [SIAM J. Numer. Anal. 22 (1985), pp. 386–400] the authors developed conditions on the Hermite derivatives that are sufficient for such a function to be monotonic. The present paper rewrites some of these conditions and presents a much simpler five-step algorithm for satisfying them that produces a visually pleasing monotone interpolant. The result of the algorithm does not depend on the order of the independent variables nor on whether the inequalities are swept left-to-right or right-to-left.

111 citations


Journal ArticleDOI
05 Nov 1989
TL;DR: The SPECS simulator as discussed by the authors is a piecewise approximate, tree/link based, event driven, variable accuracy circuit simulation algorithm that uses table models for device evaluation; the models can be built at various levels of accuracy.
Abstract: Conventional circuit simulation methods are inflexible and slow, especially for large circuits. Piecewise approximate circuit simulation, an alternative that can be more efficient and allows variable accuracy in the simulation process, is discussed. Thus, the tradeoff between accuracy and CPU time is in the hands of the user. SPECS (simulation program for electronic circuits and systems) is the prototype implementation of a piecewise approximate, tree/link based, event driven, variable accuracy circuit simulation algorithm that uses table models for device evaluation. The models can be built at various levels of accuracy, and concomitant levels of precision are reflected in the simulation results. SPECS has been benchmarked on some large, industrial circuits and has proven to be an efficient and reliable simulator. However, it suffers a penalty in run time while simulating stiff circuits, or circuits with a wide range of time constants. The authors present enhanced algorithms used in SPECS to ensure efficient steady-state computation for stiff circuits.

96 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give conditions under which invariant measures for piecewise invertible dynamical systems can be lifted to Markov extensions, based on a theorem of Hofbauer.
Abstract: Generalizing a theorem of Hofbauer (1979), we give conditions under which invariant measures for piecewise invertible dynamical systems can be lifted to Markov extensions. Using these results we prove: (1) If T is an S-unimodal map with an attracting invariant Cantor set, then ∫ log|T′| dμ = 0 for the unique invariant measure μ on the Cantor set. (2) If T is piecewise invertible, if f is the Radon-Nikodym derivative of T with respect to a σ-finite measure m, if log f has bounded distortion under T, and if μ is an ergodic T-invariant measure satisfying a certain lower estimate for its entropy, then μ ≪ m iff h_μ(T) = ∫ log f dμ.

89 citations


Journal ArticleDOI
TL;DR: Spaces of piecewise polynomials are introduced for modeling space curves with geometric continuity, i.e. continuous curvatures and Frenet frame, and the basic theoretical properties of ordinary spline functions are shown to hold for these spaces.
Abstract: We say that a curve has geometric continuity if its curvatures and Frenet frame are continuous. In this paper we introduce spaces of piecewise polynomials which can be used to model space curves which have geometric continuity. We show that the basic theoretical properties of ordinary spline functions also hold for these spaces. These results extend and unify recent work on Beta-splines and Nu-splines which are used as a design tool in computer-aided geometric design of free form curves and surfaces.

Journal ArticleDOI
TL;DR: In this article, an improved theoretical model of the coaxial colinear (COCO) antenna is presented that takes into account different element lengths and power transfers between the antenna and the transmission lines.
Abstract: An improved theoretical model of the coaxial colinear (COCO) antenna is presented that takes into account different element lengths and power transfers between the antenna and the transmission lines. The antenna equation now contains an exact kernel instead of an approximate kernel and a piecewise constant basis function is used instead of a piecewise linear function, yielding faster results. More points are used for better accuracy and yet faster computations. The linear systems of equations of the theoretical model are solved using a preconditioned conjugate gradient method. Uniform and tapered current distributions are obtained experimentally and theoretically on end-fed coaxial colinear antennas. There is reasonable agreement between theory and measurements. The gains of a few COCO antennas relative to equivalent lengths of half-wave dipoles are given.

Journal ArticleDOI
TL;DR: An algorithm to recover three-dimensional shape, i.e., surface orientation and relative depth from a single segmented image, is presented and a variational formulation of line drawing and shading constraints in a common framework is developed.
Abstract: An algorithm to recover three-dimensional shape, i.e., surface orientation and relative depth from a single segmented image is presented. It is assumed that the scene is composed of opaque regular solid objects bounded by piecewise smooth surfaces with no markings or textures. It is also assumed that the reflectance map R(n) is known. For the canonical case of Lambertian surfaces illuminated by a point light source, this implies knowing the light-source direction. A variational formulation of line drawing and shading constraints in a common framework is developed. The global constraints are partitioned into constraint sets corresponding to the faces, edges and vertices in the scene. For a face, the constraints are given by Horn's image irradiance equation. A variational formulation of the constraints at an edge both from the known direction of the image curve corresponding to the edge and shading is developed. At a vertex, the constraints are modeled by a system of nonlinear equations. An algorithm is presented to solve this system of constraints.

Journal ArticleDOI
TL;DR: The construction of curvature continuous, planar curves (open or closed) that consist of conic segments, represented in the rational Bézier form, is discussed, and an iterative procedure to compute their offset curves is outlined.
Abstract: In this paper the construction of curvature continuous, planar curves (open or closed) that consist of conic segments, represented in the rational Bézier form, is discussed, and an iterative procedure to compute their offset curves is outlined.

Journal ArticleDOI
TL;DR: An algorithm is described that removes the noise from images without causing blurring or other distortions of edges, using a new minimization strategy called mean-field annealing (MFA).
Abstract: An algorithm is described that removes the noise from images without causing blurring or other distortions of edges. The problem of noise removal is posed as a restoration of an uncorrupted image, given additive noise. The restoration problem is solved by using a new minimization strategy called mean-field annealing (MFA). An a priori statistical model of the image is chosen that drives the minimization toward solutions that are locally homogeneous. The strategy for MFA is derived, and the resulting algorithm is discussed. Applications of the algorithm to both synthetic images and real images are presented.

Journal ArticleDOI
TL;DR: In this article, transfer operators and zeta functions of piecewise monotonic and of more general piecewise invertible dynamical systems are studied; a kind of Fredholm theory is developed for their Markov extensions and carried back to the original systems.
Abstract: Transfer operators and zeta functions of piecewise monotonic and of more general piecewise invertible dynamical systems are studied. To this end we construct Markov extensions of given systems, develop a kind of Fredholm theory for them, and carry the results back to the original systems. This yields e.g. bounds on the number of ergodic maximal measures or equilibrium states.

Journal ArticleDOI
TL;DR: In this paper, a simple model for tapering optical glass fibers is described and analyzed to show that unless the initial conditions have certain spatial singularities, the cross-sectional area cannot be made to vanish in finite time.
Abstract: A simple model for tapering optical glass fibers is described and analysed to show that unless the initial conditions have certain spatial singularities, the cross-sectional area cannot be made to vanish in finite time. The model is applied to an industrial process in which the fiber has piecewise uniform temperature.

Journal ArticleDOI
TL;DR: In this paper, a linear system of differential equations with the argument [t + 1/2] is studied, where [·] denotes the greatest-integer function; the oscillatory and periodic properties of the solutions depend on the eigenvalues of a certain matrix associated with the system.

Journal ArticleDOI
TL;DR: A numerical technique for computing optimal impulse controls for P.D.P.s under general conditions is presented, and it is shown that iteration of the single-jump-or-intervention operator generates a sequence of functions converging to the value function of the problem.
Abstract: In a recent paper we presented a numerical technique for solving the optimal stopping problem for a piecewise-deterministic process (P.D.P.) by discretization of the state space. In this paper we apply these results to the impulse control problem. In the first part of the paper we study the impulse control of P.D.P.s under general conditions. We show that iteration of the single-jump-or-intervention operator generates a sequence of functions converging to the value function of the problem. In the second part of the paper we present a numerical technique for computing optimal impulse controls for P.D.P.s. This technique reduces the problem to a sequence of one-dimensional minimizations. We conclude by presenting some numerical examples.

Proceedings ArticleDOI
08 May 1989
TL;DR: A piecewise affine projection of algorithms onto piecewise regular processor arrays is described that fits into a hierarchical design approach for a certain class of processor arrays.
Abstract: Consideration is given to the design of a certain class of processor arrays. In comparison to regular arrays, i.e. wavefront or systolic arrays, they may contain context-dependent switching functions, and they can be partitioned into regular subarrays (in time and/or space). The described piecewise affine projection of algorithms on piecewise regular processor arrays fits into the desired hierarchical design approach. Based on mathematical models for piecewise regular algorithms, dependence graphs and processor arrays, a parametric representation for a class of transformation matrices is derived such that the resulting array is guaranteed to have a limited number of essentially different interconnections. In order to optimize the final realization, the methods known for systolic/wavefront arrays can be applied.

Book ChapterDOI
01 Jun 1989
TL;DR: The construction of tangent plane continuous piecewise quadric surfaces is described that interpolate finite sets of essentially arbitrary points in IR^3 according to a given topology, which is specified in terms of a piecewise linear interpolant.
Abstract: This paper is concerned with the construction of tangent plane continuous piecewise quadric surfaces that interpolate finite sets of essentially arbitrary points in IR^3 according to a given 'topology' which is described in terms of a piecewise linear interpolant. Moreover, within certain ranges depending on the topology and the location of data points, given normal directions at the points are also matched by the interpolating piecewise quadric surface.

Proceedings ArticleDOI
27 Nov 1989
TL;DR: The authors present an algorithm for computing the aspect graph of a curved object and a study of new visual events for piecewise smooth objects.
Abstract: The authors present an algorithm for computing the aspect graph of a curved object. The approach used to partition the viewpoint space is to compute boundary viewpoints from the shape descriptions of an object in a CAD database. These computations are formulated from the understanding of visual events and the locations of corresponding viewpoints. Also presented is a study of new visual events for piecewise smooth objects.


Journal ArticleDOI
TL;DR: In this paper, the equality of the observation spaces defined by means of piecewise constant controls with those defined in terms of differentiable controls is established.

Journal ArticleDOI
TL;DR: Super spline spaces as discussed by the authors are spaces of multivariate splines consisting of piecewise polynomials defined on triangulations, where the degree of smoothness enforced at the vertices is larger than that enforced across the edges.
Abstract: Super splines are spaces of multivariate splines consisting of piecewise polynomials defined on triangulations, where the degree of smoothness enforced at the vertices is larger than the degree of smoothness enforced across the edges. Dimension formulae for super spline spaces are established, and the construction of explicit bases of locally supported splines is presented. In addition, it is shown that many classical finite-element spaces can be interpreted as super spline spaces, providing a link between spline theory and the highly applicable finite-element theory.

Journal ArticleDOI
TL;DR: In this article, the authors describe the development and preliminary testing of a numerical scheme designed to predict the global circulation which can also telescope into local subdomains of enhanced vertical and horizontal resolution.
Abstract: We describe the development and preliminary testing of a numerical scheme designed to predict the global circulation which can also telescope into local subdomains of enhanced vertical and horizontal resolution. The accuracy of the method appears intermediate to the accuracy of purely spectral and grid point models, but it is especially well suited to study certain practical predictability problems within limited area domains. The approach is based upon separate Galerkin approximations in longitude and latitude. The longitude variation is discretized in terms of truncated Fourier series, while the latitude structure of the Fourier amplitudes is depicted as sums of piecewise continuous linear functions (finite elements). The vertical structure is also described in terms of finite elements. The technique is especially well suited to fine resolution of polar caps within which a given wavenumber truncation ensures enhanced local resolution in longitude, and where a refined element size can also be im...

Journal ArticleDOI
TL;DR: In this article, the regularity of the eigenfunctions of eigenvalue problems with piecewise analytic data is discussed and a detailed and systematic numerical study of these approximations is presented, together with an analysis of the numerical results in light of theoretical results.
Abstract: This paper discusses the regularity of the eigenfunctions of eigenvalue problems with piecewise analytic data and the approximation of the eigenvalues and eigenfunctions of such problems. A detailed and systematic numerical study of these approximations is presented, together with an analysis of the numerical results in light of the theoretical results. The specific aim is to assess the reliability of the theoretical results, which are of an asymptotic nature, as a guide to practical computations, which may take place in the pre-asymptotic phase, and to look for characteristic features of the numerical results that are not completely explained by known theoretical results.


Journal ArticleDOI
TL;DR: The Lagrangian form of the semigeostrophic equations can be integrated forward in time for arbitrarily long periods without breaking down, to give a "slow manifold" of solutions as discussed by the authors.
Abstract: The Lagrangian form of the semigeostrophic equations has been shown to possess discontinuous solutions that have been exploited as a simple model of fronts and other mesoscale flows. In this paper, it is shown that these equations can be integrated forward in time for arbitrarily long periods without breaking down, to give a “slow manifold” of solutions. In the absence of moisture, orography and surface friction, these solutions conserve energy, despite the appearance of discontinuities. In previous work these solutions have been derived by making finite parcel approximations to the data. This paper shows that there is a unique solution to the equations with general piecewise smooth data, to which the finite parcel approximation converges. It is also shown that the time integration procedure is well defined, and that the solutions remain bounded for all finite times. Most previous results on the finite parcel solutions are restricted to the case of a Boussinesq atmosphere on an f-plane with rigid...

Journal ArticleDOI
K. C. Yeh, R. D. Small
TL;DR: A numerical integration method based on piecewise cubic polynomial for computing the area under the curve in pharmacokinetics is presented and has been found to produce stable and monotone interpolations irrespective of experimental error.
Abstract: A numerical integration method based on piecewise cubic polynomial for computing the area under the curve in pharmacokinetics is presented. The method has been found to produce stable and monotone interpolations irrespective of experimental error. Spurious oscillations occasionally associated with cubic splines are eliminated. Comparisons with the previously available methods suggest that more reliable and less biased areas under the plasma concentration curve, AUC, or areas under the first moment of plasma curve, AUMC, can be generated by the present method.
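For context, the standard method the paper improves on is the linear trapezoidal AUC. A sketch of that baseline only, on made-up concentration-time data (the paper's own method interpolates with piecewise cubics before integrating):

```python
# Linear trapezoidal rule for the area under a plasma
# concentration-time curve: sum the trapezoid areas between
# consecutive sampling times. Data values are illustrative.
def trapezoid_auc(t, c):
    return sum((c[i] + c[i + 1]) * (t[i + 1] - t[i]) / 2
               for i in range(len(t) - 1))

t = [0.0, 1.0, 2.0, 4.0, 8.0]    # sampling times (h)
c = [0.0, 4.0, 3.0, 1.5, 0.4]    # concentrations (mg/L)
print(trapezoid_auc(t, c))
```

Replacing the straight-line segments with monotone piecewise cubics, as the paper proposes, reduces the bias the trapezoidal rule introduces on the convex declining portion of the curve.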