
Showing papers on "Piecewise" published in 2006


Journal ArticleDOI
TL;DR: This paper proves, via an incremental constructive method, that for SLFNs to work as universal approximators, one may simply choose hidden nodes randomly and then adjust only the output weights linking the hidden layer and the output layer.
Abstract: According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters of the networks may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves, via an incremental constructive method, that for SLFNs to work as universal approximators, one may simply choose hidden nodes randomly and then adjust only the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions g: R → R, and the activation functions for RBF nodes can be any integrable piecewise continuous functions g: R → R with ∫_R g(x)dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared with other popular methods, such a network is fully automatic, and users need not intervene in the learning process by manually tuning control parameters.
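The core recipe in the abstract — fix random hidden-node parameters, then solve a linear problem for the output weights — can be sketched in a few lines. The sketch below is illustrative only: it assumes sigmoid additive nodes, a batch least-squares (Moore-Penrose) solve, and toy data; the paper's incremental node-by-node construction is not reproduced here.

```python
import numpy as np

def elm_fit(X, y, n_hidden=40, rng=None):
    """Batch ELM-style sketch: random additive sigmoid hidden nodes,
    output weights solved by least squares (Moore-Penrose)."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1, 1, n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # only the output weights are fitted
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression: approximate sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, rng=0)
err = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Despite never tuning the hidden layer, the least-squares output weights typically drive the training error very low on smooth targets, which is the behavior the universal-approximation result makes rigorous.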

2,413 citations


Journal ArticleDOI
TL;DR: The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance on benchmark problems drawn from the regression, classification and time series prediction areas.
Abstract: In this paper, we develop an online sequential learning algorithm for single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as the online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast with better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be chosen manually. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.

1,800 citations


Journal ArticleDOI
TL;DR: The development of the highly accurate ADER–DG approach for tetrahedral meshes provides a numerical technique to approach 3-D wave propagation problems in complex geometry with unforeseen accuracy.
Abstract: SUMMARY We present a new numerical method to solve the heterogeneous elastic wave equations formulated as a linear hyperbolic system using first-order derivatives with arbitrary high-order accuracy in space and time on 3-D unstructured tetrahedral meshes. The method combines the Discontinuous Galerkin (DG) Finite Element (FE) method with the ADER approach using Arbitrary high-order DERivatives for flux calculation. In the DG framework, in contrast to classical FE methods, the numerical solution is approximated by piecewise polynomials which allow for discontinuities at element interfaces. Therefore, the well-established theory of numerical fluxes across element interfaces obtained by the solution of Riemann-Problems can be applied as in the finite volume framework. To define a suitable flux over the element surfaces, we solve so-called Generalized Riemann-Problems (GRP) at the element interfaces. The GRP solution provides simultaneously a numerical flux function as well as a time-integration method. The main idea is a Taylor expansion in time in which all time-derivatives are replaced by space derivatives using the so-called Cauchy–Kovalewski or Lax–Wendroff procedure which makes extensive use of the governing PDE. The numerical solution can thus be advanced for one time step without intermediate stages as typical, for example, for classical Runge–Kutta time stepping schemes. Due to the ADER time-integration technique, the same approximation order in space and time is achieved automatically. Furthermore, the projection of the tetrahedral elements in physical space on to a canonical reference tetrahedron allows for an efficient implementation, as many computations of 3-D integrals can be carried out analytically beforehand. 
Based on a numerical convergence analysis, we demonstrate that the new schemes provide very high order accuracy even on unstructured tetrahedral meshes and computational cost and storage space for a desired accuracy can be reduced by higher-order schemes. Moreover, due to the choice of the basis functions for the piecewise polynomial approximation, the new ADER–DG method shows spectral convergence on tetrahedral meshes. An application of the new method to a well-acknowledged test case and comparisons with analytical and reference solutions, obtained by different well-established methods, confirm the performance of the proposed method. Therefore, the development of the highly accurate ADER–DG approach for tetrahedral meshes provides a numerical technique to approach 3-D wave propagation problems in complex geometry with unforeseen accuracy.
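The Cauchy-Kovalewski (Lax-Wendroff) step at the heart of ADER is easy to demonstrate on the simplest hyperbolic model problem. The sketch below uses the 1-D advection equation u_t + a u_x = 0 instead of the paper's 3-D elastic system: the identity d^k u/dt^k = (-a)^k d^k u/dx^k turns the Taylor expansion in time into one involving only spatial derivatives of the current data, so a local polynomial is advanced one time step with a single explicit step and no intermediate Runge-Kutta stages. All names and data are illustrative.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def ck_advance(coeffs, a, dt):
    """Advance a local polynomial solution of u_t + a*u_x = 0 one time step
    via the Cauchy-Kovalewski procedure. `coeffs` are polynomial coefficients
    in x (lowest degree first); for polynomial data the result is exact."""
    total = np.zeros_like(coeffs, dtype=float)
    term = np.array(coeffs, dtype=float)   # k-th spatial derivative, starting at k = 0
    fact = 1.0                             # k!
    for k in range(len(coeffs)):
        # Taylor term (dt^k / k!) * d^k u/dt^k, with the time derivative
        # replaced by the space derivative: (-a)^k * d^k u/dx^k.
        total[: len(term)] += ((-a) ** k) * (dt ** k) / fact * term
        term = P.polyder(term)             # next spatial derivative
        fact *= (k + 1)
    return total

# u(x, 0) = 1 + 2x + 3x^2; the exact solution is u(x, t) = u0(x - a*t)
u1 = ck_advance([1.0, 2.0, 3.0], a=0.5, dt=0.2)
```

Shifting 1 + 2x + 3x^2 by a*dt = 0.1 gives 0.83 + 1.4x + 3x^2, so the single Taylor step reproduces the exact evolution, mirroring how ADER attains equal order in space and time.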

433 citations


Journal ArticleDOI
TL;DR: This article considers the problem of modeling a class of nonstationary time series using piecewise autoregressive (AR) processes, and the minimum description length principle is applied to compare various segmented AR fits to the data.
Abstract: This article considers the problem of modeling a class of nonstationary time series using piecewise autoregressive (AR) processes. The number and locations of the piecewise AR segments, as well as the orders of the respective AR processes, are assumed unknown. The minimum description length principle is applied to compare various segmented AR fits to the data. The goal is to find the “best” combination of the number of segments, the lengths of the segments, and the orders of the piecewise AR processes. Such a “best” combination is implicitly defined as the optimizer of an objective function, and a genetic algorithm is implemented to solve this difficult optimization problem. Numerical results from simulation experiments and real data analyses show that the procedure has excellent empirical properties. The segmentation of multivariate time series is also considered. Assuming that the true underlying model is a segmented autoregression, this procedure is shown to be consistent for estimating the location of...
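To make the criterion concrete, here is a toy version of the idea with a single breakpoint and AR(1) pieces: each candidate segmentation is scored by a simplified two-part MDL-style cost (a Gaussian log-likelihood term per segment plus a log-n parameter penalty), and the minimizer is found by exhaustive search. The cost constants and the brute-force search are stand-ins for the paper's full MDL criterion and genetic algorithm.

```python
import numpy as np

def ar_rss(x, p=1):
    """Residual sum of squares of a least-squares AR(p) fit."""
    Y = x[p:]
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ coef
    return float(r @ r)

def best_breakpoint(x, p=1, min_len=20):
    """Exhaustive single-breakpoint search under a simplified MDL-style cost;
    returns None if the unsegmented model wins."""
    n = len(x)
    def seg_cost(seg):
        m = len(seg) - p                    # effective sample size
        return 0.5 * m * np.log(ar_rss(seg, p) / m)
    best_cost = seg_cost(x) + 0.5 * (p + 1) * np.log(n)   # no-break model
    best_b = None
    for b in range(min_len, n - min_len):
        c = seg_cost(x[:b]) + seg_cost(x[b:]) + (p + 2) * np.log(n)
        if c < best_cost:
            best_cost, best_b = c, b
    return best_b

# Series with a regime switch at t = 250: AR coefficient +0.9, then -0.9
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = (0.9 if t < 250 else -0.9) * x[t - 1] + rng.standard_normal()
```

With such a sharp regime change the penalized cost drops steeply near the true breakpoint; the genetic algorithm in the paper searches the same kind of objective over many segments and AR orders at once.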

418 citations


Journal ArticleDOI
TL;DR: In this article, an adaptive weighted sum (AWS) method for multiobjective optimization problems is presented, which extends the previously developed bi-objective AWS method to problems with more than two objective functions.
Abstract: This paper presents an adaptive weighted sum (AWS) method for multiobjective optimization problems. The method extends the previously developed biobjective AWS method to problems with more than two objective functions. In the first phase, the usual weighted sum method is performed to approximate the Pareto surface quickly, and a mesh of Pareto front patches is identified. Each Pareto front patch is then refined by imposing additional equality constraints that connect the pseudonadir point and the expected Pareto optimal solutions on a piecewise planar hypersurface in the \( {m} \)-dimensional objective space. It is demonstrated that the method produces a well-distributed Pareto front mesh for effective visualization, and that it finds solutions in nonconvex regions. Two numerical examples and a simple structural optimization problem are solved as case studies.
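The first (plain weighted-sum) phase is straightforward to illustrate on a convex bi-objective toy problem; the AWS refinement phase, which adds equality constraints through the pseudonadir point, is omitted here, and the objectives, grid and weight count are all invented for the example.

```python
import numpy as np

def weighted_sum_front(f1, f2, xs, n_weights=9):
    """Phase-one sketch: for each weight w, pick the design point on a grid
    minimizing the scalarized objective w*f1 + (1-w)*f2."""
    pts = []
    for w in np.linspace(0.0, 1.0, n_weights):
        i = int(np.argmin(w * f1(xs) + (1.0 - w) * f2(xs)))
        pts.append((float(f1(xs[i])), float(f2(xs[i]))))
    return sorted(set(pts))   # deduplicated, sorted Pareto-point approximation

# Toy convex bi-objective trade-off between x^2 and (x - 2)^2;
# the Pareto-optimal designs are exactly x in [0, 2].
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2
front = weighted_sum_front(f1, f2, np.linspace(-1.0, 3.0, 401))
```

On a convex front the scan already lands on nondominated points; the limitation the paper addresses is that the raw scan clusters points unevenly and misses nonconvex regions, which is what the patch-refinement phase corrects.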

416 citations


Journal ArticleDOI
TL;DR: A new deterministic spatial branch and contract algorithm is proposed for optimizing such systems, in which piecewise under- and over-estimators are used to approximate the non-convex terms in the original model to obtain a convex relaxation whose solution gives a lower bound on the global optimum.

404 citations


Journal ArticleDOI
TL;DR: A discontinuous Galerkin (DG) method combined with the ideas of the ADER time integration approach to solve the elastic wave equation in heterogeneous media in the presence of externally given source terms with arbitrary high-order accuracy in space and time on unstructured triangular meshes is presented.
Abstract: SUMMARY We present a new numerical approach to solve the elastic wave equation in heterogeneous media, in the presence of externally given source terms, with arbitrary high-order accuracy in space and time on unstructured triangular meshes. We combine a discontinuous Galerkin (DG) method with the ideas of the ADER time integration approach using Arbitrary high-order DERivatives. The time integration is performed via the so-called Cauchy-Kovalewski procedure, repeatedly using the governing partial differential equation itself. In contrast to classical finite element methods, we allow for discontinuities of the piecewise polynomial approximation of the solution at element interfaces. This way, we can use the well-established theory of fluxes across element interfaces based on the solution of Riemann problems as developed in the finite volume framework. In particular, we replace time derivatives in the Taylor expansion of the time integration procedure by space derivatives to obtain a numerical scheme of the same high order in space and time, using only a single explicit step to evolve the solution from one time level to another. The method is particularly suited for linear hyperbolic systems such as the heterogeneous elastic wave equations and allows an efficient implementation. We consider sources that are continuous in space and time, as well as point sources characterized by a Delta distribution in space and some continuous source time function. The method is thereby able to deal with point sources at any position in the computational domain, which does not necessarily need to coincide with a mesh point. Interpolation is performed automatically by evaluation of the test functions at the source locations. The convergence analysis demonstrates that very high accuracy is retained even on strongly irregular meshes, and that by increasing the order of the ADER-DG schemes, computational time and storage space can be reduced remarkably.
Applications of the proposed method to Lamb’s Problem, a problem of strong material heterogeneities and to an example of global seismic wave propagation finally confirm its accuracy, robustness and high flexibility.

397 citations


Journal ArticleDOI
TL;DR: A PDE-based level set method in which interfaces are represented by discontinuities of piecewise constant level set functions; the approach requires minimizing a smooth convex functional under a quadratic constraint, and numerical results are shown for segmentation of digital images.
Abstract: In this paper, we propose a PDE-based level set method. Traditionally, interfaces are represented by the zero level set of continuous level set functions. Instead, we let the interfaces be represented by discontinuities of piecewise constant level set functions. Each level set function can at convergence only take two values, i.e., it can only be 1 or -1; thus, our method is related to phase-field methods. Some of the properties of standard level set methods are preserved in the proposed method, while others are not. Using this new method for interface problems, we need to minimize a smooth convex functional under a quadratic constraint. The level set functions are discontinuous at convergence, but the minimization functional is smooth. We show numerical results using the method for segmentation of digital images.

382 citations


Journal ArticleDOI
John Paul Roop
TL;DR: This paper investigates the computational aspects of the Galerkin approximation using continuous piecewise polynomial basis functions on a regular triangulation of the domain, and demonstrates approximations to fractional advection dispersion equations (FADEs).

261 citations


Journal ArticleDOI
TL;DR: The minimization functional for the level set formulation for identifying curves separating regions into different phases is locally convex and differentiable and thus avoids some of the problems with the nondifferentiability of the Delta and Heaviside functions.
Abstract: In this paper we propose a variant of the level set formulation for identifying curves separating regions into different phases. In classical level set approaches, the signs of n level set functions are utilized to identify up to 2^n phases. The novelty of our approach is to introduce a piecewise constant level set function and use each constant value to represent a unique phase. If 2^n phases should be identified, the level set function must approach 2^n predetermined constants. We need just one level set function to represent 2^n unique phases, which gains in storage capacity. Further, the reinitialization procedure required in classical level set methods is superfluous in our approach. The minimization functional for our approach is locally convex and differentiable and thus avoids some of the problems with the nondifferentiability of the Delta and Heaviside functions. Numerical examples are given, and we also compare our method with related approaches.

240 citations


Journal ArticleDOI
TL;DR: An exact and parameter-free algorithm to build scale-sets image descriptions whose sections constitute a monotone sequence of upward global minima of a multi-scale energy, which is called the “scale climbing” algorithm is introduced.
Abstract: This paper introduces a multi-scale theory of piecewise image modelling, called the scale-sets theory, which can be regarded as a region-oriented scale-space theory. The first part of the paper studies the general structure of a geometrically unbiased region-oriented multi-scale image description and introduces the scale-sets representation, a representation which allows one to handle such a description exactly. The second part of the paper deals with the way scale-sets image analyses can be built according to an energy minimization principle. We consider a rather general formulation of the partitioning problem which involves minimizing a two-term-based energy of the form λC + D, where D is a goodness-of-fit term and C is a regularization term. We describe the way such energies arise from basic principles of approximate modelling and we relate them to operational rate/distortion problems involved in lossy compression. We then show that an important subset of these energies constitutes a class of multi-scale energies, in that the minimal cut of a hierarchy gets coarser and coarser as the parameter λ increases. This allows us to devise a fast dynamic-programming procedure to find the complete scale-sets representation of this family of minimal cuts. Considering then the construction of the hierarchy from which the minimal cuts are extracted, we end up with an exact and parameter-free algorithm to build scale-sets image descriptions whose sections constitute a monotone sequence of upward global minima of a multi-scale energy, which is called the "scale climbing" algorithm. This algorithm can be viewed as a continuation method along the scale dimension or as a minimum pursuit along the operational rate/distortion curve. Furthermore, the solution verifies a linear scale invariance property which allows the tuning of the scale parameter to be completely postponed to a subsequent stage. For computational reasons, the scale climbing algorithm is approximated by a pair-wise region merging scheme; however, the principal properties of the solutions are kept. Some results obtained with Mumford-Shah's piecewise constant model and a variant are provided, and different applications of the proposed multi-scale analyses are finally sketched.

Journal ArticleDOI
TL;DR: A priori sufficient conditions for Lyapunov asymptotic stability and exponential stability are derived in the terminal cost and constraint set fashion, while allowing for discontinuous system dynamics and discontinuous MPC value functions.
Abstract: In this note, we investigate the stability of hybrid systems in closed loop with model predictive controllers (MPC). A priori sufficient conditions for Lyapunov asymptotic stability and exponential stability are derived in the terminal cost and constraint set fashion, while allowing for discontinuous system dynamics and discontinuous MPC value functions. For constrained piecewise affine (PWA) systems as prediction models, we present novel techniques for computing a terminal cost and a terminal constraint set that satisfy the developed stabilization conditions. For quadratic MPC costs, these conditions translate into a linear matrix inequality, while for MPC costs based on 1- or ∞-norms, they are obtained as norm inequalities. New ways for calculating low-complexity piecewise polyhedral positively invariant sets for PWA systems are also presented. An example illustrates the developed theory.

Journal ArticleDOI
TL;DR: A novel method for the construction of discrete conformal mappings from surface meshes of arbitrary topology to the plane based on circle patterns, that is, arrangements of circles---one for each face---with prescribed intersection angles, which supports very flexible boundary conditions ranging from free boundaries to control of the boundary shape via prescribed curvatures.
Abstract: We introduce a novel method for the construction of discrete conformal mappings from surface meshes of arbitrary topology to the plane. Our approach is based on circle patterns, that is, arrangements of circles---one for each face---with prescribed intersection angles. Given these angles, the circle radii follow as the unique minimizer of a convex energy. The method supports very flexible boundary conditions ranging from free boundaries to control of the boundary shape via prescribed curvatures. Closed meshes of genus zero can be parameterized over the sphere. To parameterize higher genus meshes, we introduce cone singularities at designated vertices. The parameter domain is then a piecewise Euclidean surface. Cone singularities can also help to reduce the often very large area distortion of global conformal maps to moderate levels. Our method involves two optimization problems: a quadratic program and the unconstrained minimization of the circle pattern energy. The latter is a convex function of logarithmic radius variables with simple explicit expressions for gradient and Hessian. We demonstrate the versatility and performance of our algorithm with a variety of examples.

Journal ArticleDOI
TL;DR: An efficient algorithm for minimizing the piecewise constant Mumford-Shah functional of image segmentation is proposed based on the threshold dynamics of Merriman, Bence, and Osher for evolving an interface by its mean curvature.

Proceedings ArticleDOI
14 May 2006
TL;DR: A new TV-based algorithm for image deconvolution, under the assumptions of linear observations and additive white Gaussian noise is proposed, which has O(N) computational complexity, for finite support convolutional kernels.
Abstract: The total variation regularizer is well suited to piecewise smooth images. If we add the fact that these regularizers are convex, we have, perhaps, the reason for the resurgence of interest in TV-based approaches to inverse problems. This paper proposes a new TV-based algorithm for image deconvolution, under the assumptions of linear observations and additive white Gaussian noise. To compute the TV estimate, we propose a majorization-minimization approach, which consists in replacing a difficult optimization problem by a sequence of simpler ones, by relying on convexity arguments. The resulting algorithm has O(N) computational complexity for finite-support convolutional kernels. In a comparison with state-of-the-art methods, the proposed algorithm either outperforms or equals them, with similar computational complexity.
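The majorization-minimization idea can be shown on the simplest special case, 1-D TV denoising (an identity observation operator instead of a convolution): the nonsmooth |·| terms are replaced by quadratic majorizers at the current iterate, so each iteration reduces to solving a linear system. The parameter values and the eps smoothing below are illustrative choices, not the paper's.

```python
import numpy as np

def tv_denoise_mm(y, lam=1.0, n_iter=50, eps=1e-8):
    """Majorization-minimization sketch for 1-D TV denoising: each |d| term in
    the TV penalty is majorized by a quadratic at the current iterate, turning
    the nonsmooth problem into a sequence of weighted least-squares solves."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # first-difference operator, (n-1) x n
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)     # majorizer weights at the current x
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)           # minimize the current majorizer
    return x
```

By convexity, each solve cannot increase the original TV objective, which is the argument the abstract alludes to; the paper additionally handles a general convolution operator in the data term.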

DOI
01 Jan 2006
TL;DR: This thesis considers the stabilization and the robust stabilization of certain classes of hybrid systems using model predictive control, and builds a theoretical framework on stability and input-to-state stability that allows for discontinuous and nonlinear system dynamics.
Abstract: This thesis considers the stabilization and the robust stabilization of certain classes of hybrid systems using model predictive control. Hybrid systems represent a broad class of dynamical systems in which discrete behavior (usually described by a finite state machine) and continuous behavior (usually described by differential or difference equations) interact. Examples of hybrid dynamics can be found in many application domains and disciplines, such as embedded systems, process control, automated traffic-management systems, electrical circuits, mechanical and bio-mechanical systems, biological and bio-medical systems and economics. These systems are inherently nonlinear, discontinuous and multi-modal. As such, methodologies for stability analysis and (robust) stabilizing controller synthesis developed for linear or continuous nonlinear systems do not apply. This motivates the need for a new controller design methodology that is able to cope with discontinuous and multi-modal system dynamics, especially considering its wide practical applicability. Model predictive control (MPC) (also referred to as receding horizon control) is a control strategy that offers attractive solutions, already successfully implemented in industry, for the regulation of constrained linear or nonlinear systems. In this thesis, the MPC controller design methodology will be employed for the regulation of constrained hybrid systems. One of the reasons for the success of MPC algorithms is their ability to handle hard constraints on states/outputs and inputs. Stability and robustness are probably the most studied properties of MPC controllers, as they are indispensable to practical implementation. A complete theory on (robust) stability of MPC has been developed for linear and continuous nonlinear systems. However, these results do not carry over to hybrid systems easily. These challenges will be taken up in this thesis. 
As a starting point, in Chapter 2 of this thesis we build a theoretical framework on stability and input-to-state stability that allows for discontinuous and nonlinear system dynamics. These results act as the theoretical foundation of the thesis, enabling us to establish stability and robust stability results for hybrid systems in closed-loop with various model predictive control schemes. The (nominal) stability problem of hybrid systems in closed-loop with MPC controllers is solved in its full generality in Chapter 3. The focus is on a particular class of hybrid systems, namely piecewise affine (PWA) systems. This class of hybrid systems is very appealing as it provides a simple mathematical description on one hand, and a very high modeling power on the other hand. For particular choices of MPC cost functions and constrained PWA systems as prediction models, novel algorithms for computing a terminal cost and a local state-feedback controller that satisfy the developed stabilization conditions are presented. Algorithms for calculating low complexity piecewise polyhedral invariant sets for PWA systems are also developed. These positively invariant sets are either polyhedral, or consist of a union of a number of polyhedra that is equal to the number of affine subsystems of the PWA system. This is a significant reduction in complexity, compared to piecewise polyhedral invariant sets for PWA systems obtained via other existing algorithms. Hence, besides the study of the fundamental property of stability, the aim is to create control algorithms of low complexity to enable their on-line implementation. Before addressing the robust stabilization of PWA systems using MPC in Chapter 5, two interesting examples are presented in Chapter 4. These examples feature two discontinuous PWA systems that both admit a discontinuous piecewise quadratic Lyapunov function and are exponentially stable. 
However, one of the PWA systems is non-robust to arbitrarily small perturbations, while the other one is globally input-to-state stable (ISS) with respect to disturbance inputs. This indicates that one should be careful in inferring robustness from nominal stability. Moreover, for the example that is robust, the input-to-state stability property cannot be proven via a continuous piecewise quadratic (PWQ) Lyapunov function. However, as ISS can be established via a discontinuous PWQ Lyapunov function, the conservatism of continuous PWQ Lyapunov functions is shown in this setting. Therefore, this thesis provides a theoretical framework that can be used to establish robustness in terms of ISS of discontinuous PWA systems via discontinuous ISS Lyapunov functions. The sufficient conditions for ISS of PWA systems are formulated as linear matrix inequalities, which can be solved efficiently via semi-definite programming. These sufficient conditions also serve as a tool for establishing robustness of nominally stable hybrid MPC controllers a posteriori, after the MPC control law has been calculated explicitly as a PWA state-feedback. Furthermore, we also present a technique based on linear matrix inequalities for synthesizing input-to-state stabilizing state-feedback controllers for PWA systems. In Chapter 5, the problem of robust stabilization of PWA systems using MPC is considered. Previous solutions to this problem rely without exceptions on the assumption that the PWA system dynamics is a continuous function of the state. Clearly, this requirement is quite restrictive and artificial, as a continuous PWA system is in fact a Lipschitz continuous system. In Chapter 5 we present an input-to-state stabilizing MPC scheme for PWA systems based on tightened constraints that allows for discontinuous system dynamics and discontinuous MPC value functions. 
The advantage of this new approach, besides being the first robust stabilizing MPC scheme applicable to discontinuous PWA systems, is that the resulting MPC optimization problem can still be formulated as a mixed integer linear programming problem, which is a standard optimization problem in hybrid MPC. A min-max approach to the robust stabilization of perturbed nonlinear systems using MPC is presented in Chapter 6. Min-max MPC, although computationally more demanding, can provide feedback to the disturbance, resulting in better performance when the controlled system is affected by perturbations. We show that only input-to-state practical stability can be ensured in general for perturbed nonlinear systems in closed-loop with min-max MPC schemes. However, new sufficient conditions that guarantee input-to-state stability of the min-max MPC closed-loop system are derived, via a dual-mode approach. These conditions are formulated in terms of properties that the terminal cost and a local state-feedback controller must satisfy. New techniques for calculating the terminal cost and the local controller for perturbed linear and PWA systems are also presented in Chapter 6. The final part of the thesis focuses on the design of robustly stabilizing, but computationally friendly, sub-optimal MPC algorithms for perturbed nonlinear systems and hybrid systems. This goal is achieved via new, simpler stabilizing constraints that can be implemented as a finite number of linear inequalities. These algorithms are attractive for real-life implementation, where solvers usually provide a sub-optimal control action, rather than a globally optimal one. The potential for practical applications is illustrated via a case study on the control of DC-DC converters. Preliminary real-time computational results are encouraging, as the MPC control action is always computed within the allowed sampling interval, which is well below one millisecond for the considered Buck-Boost DC-DC converter.
In conclusion, this thesis contains a complete framework on the synthesis of model predictive controllers for hybrid systems that guarantees stable and robust closed-loop systems. The latter properties are indispensable for any application of these control algorithms in practice. In the set-ups of the MPC algorithms, a clear focus was also on keeping the on-line computational burden low via simpler stabilizing constraints. The example on the control of DC-DC converters showed that the application to (very) fast systems comes within reach. This opens up a completely new range of applications, next to the traditional process control for typically slow systems. Therefore, the developed theory represents a fertile ground for future practical applications and it opens many roads for future research in model predictive control and stability of hybrid systems as well.

Proceedings ArticleDOI
17 Jul 2006
TL;DR: This work proposes training log-linear combinations of models for dependency parsing and for machine translation, and describes techniques for optimizing nonlinear functions such as precision or the BLEU metric.
Abstract: When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural language problems is piecewise constant and riddled with local minima, many systems instead optimize log-likelihood, which is conveniently differentiable and convex. We propose training instead to minimize the expected loss, or risk. We define this expectation using a probability distribution over hypotheses that we gradually sharpen (anneal) to focus on the 1-best hypothesis. Besides the linear loss functions used in previous work, we also describe techniques for optimizing nonlinear functions such as precision or the BLEU metric. We present experiments training log-linear combinations of models for dependency parsing and for machine translation. In machine translation, annealed minimum risk training achieves significant improvements in BLEU over standard minimum error training. We also show improvements in labeled dependency parsing.
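The annealing schedule is simple to picture: define a distribution over hypotheses p(h) ∝ exp(γ·score(h)), compute the expected loss (risk) under it, and raise γ so the distribution sharpens toward the 1-best hypothesis. The toy scores and losses below are invented; in the paper the loss would be, e.g., 1 − BLEU.

```python
import numpy as np

def expected_loss(scores, losses, gamma):
    """Risk under an annealed log-linear distribution p(h) ∝ exp(gamma * score(h)).
    As gamma grows, the expectation sharpens toward the 1-best hypothesis's loss."""
    s = gamma * scores
    p = np.exp(s - s.max())   # softmax with max-subtraction for numerical stability
    p /= p.sum()
    return float(p @ losses)

scores = np.array([1.0, 0.5, -0.2])   # model scores for 3 hypotheses (toy values)
losses = np.array([0.1, 0.4, 0.9])    # task loss of each hypothesis (toy values)
risks = [expected_loss(scores, losses, g) for g in (0.0, 1.0, 10.0, 100.0)]
```

At γ = 0 the risk is the uniform average loss; as γ increases it interpolates smoothly toward the loss of the highest-scoring hypothesis, giving a differentiable surrogate for the piecewise constant 1-best error.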

Journal ArticleDOI
TL;DR: In this article, a comparison of pre- and post-calibration parameter covariance matrices is made, showing that the latter often possess a much smaller spectral bandwidth than the former.

Proceedings ArticleDOI
01 Sep 2006
TL;DR: It is demonstrated that high-quality SAH based acceleration structures can be constructed quickly enough to make them a viable option for interactive ray tracing of dynamic scenes, and the resulting trees are almost as good as those produced by a sorting-based SAH builder as measured by ray tracing time.
Abstract: Construction of effective acceleration structures for ray tracing is a well-studied problem. The highest quality acceleration structures are generally agreed to be those built using greedy cost optimization based on a surface area heuristic (SAH). This technique is most often applied to the construction of kd-trees, as in this work, but is equally applicable to the construction of other hierarchical acceleration structures. Unfortunately, SAH-optimized data structure construction has previously been too slow to allow per-frame rebuilding for interactive ray tracing of dynamic scenes, leading to the use of lower-quality acceleration structures for this application. The goal of this paper is to demonstrate that high-quality SAH-based acceleration structures can be constructed quickly enough to make them a viable option for interactive ray tracing of dynamic scenes. We present a scanning-based algorithm for choosing kd-tree split planes that are close to optimal with respect to the SAH criteria. Our approach approximates the SAH cost function across the spatial domain with a piecewise quadratic function with bounded error and picks minima from this approximation. This algorithm takes full advantage of SIMD operations (e.g., SSE) and has favorable memory access patterns. In practice this algorithm is faster than sorting-based SAH build algorithms with the same asymptotic time complexity, and is competitive with non-SAH build algorithms which produce lower-quality trees. The resulting trees are almost as good as those produced by a sorting-based SAH builder as measured by ray tracing time. For a test scene with 180k polygons our system builds a high-quality kd-tree in 0.26 seconds that only degrades ray tracing time by 3.6% compared to a full quality tree.
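The SAH cost being scanned can be written down directly. Below is a deliberately tiny 1-D analogue in which "surface area" becomes interval length and primitives are intervals: each candidate plane is scored by a traversal cost plus hit-probability-weighted intersection costs. The constants and the uniform candidate scan are illustrative; the paper's contribution is to approximate this cost with a bounded-error piecewise quadratic so that far fewer candidates need evaluating.

```python
def sah_cost(split, bounds, prims, c_trav=1.0, c_isect=1.5):
    """SAH cost of one candidate split (1-D sketch: 'surface area' is interval
    length, primitives are (lo, hi) intervals). Constants are illustrative."""
    lo, hi = bounds
    n_left = sum(1 for (a, b) in prims if a < split)    # straddlers count on both sides
    n_right = sum(1 for (a, b) in prims if b > split)
    p_left = (split - lo) / (hi - lo)                   # geometric hit probabilities
    p_right = (hi - split) / (hi - lo)
    return c_trav + c_isect * (p_left * n_left + p_right * n_right)

def best_split(bounds, prims, n_candidates=64):
    """Scan uniformly spaced candidate planes and keep the SAH minimum; the
    paper instead fits a piecewise quadratic to this cost and picks its minima."""
    lo, hi = bounds
    cands = [lo + (hi - lo) * (i + 1) / (n_candidates + 1) for i in range(n_candidates)]
    return min(cands, key=lambda s: sah_cost(s, bounds, prims))

# Two clusters of primitives with empty space between them: the SAH minimum
# falls inside the gap, where a split cuts off empty space cheaply.
prims = [(0.0, 1.0), (0.2, 1.2), (8.0, 9.0), (8.5, 9.5)]
split = best_split((0.0, 10.0), prims)
```

The same cost function, with box surface areas in place of interval lengths, drives the greedy top-down kd-tree build the abstract describes.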

Journal ArticleDOI
TL;DR: In this article, the adaptive finite-element (FE) method using unstructured grids is proposed; to ensure numerical accuracy, adaptive refinement using an a posteriori error estimator is performed iteratively to improve the accuracy of the grid.
Abstract: Existing numerical modeling techniques commonly used for electromagnetic (EM) exploration are bound by the limitations of approximating complex structures using a rectangular grid. A more flexible tool is the adaptive finite-element (FE) method using unstructured grids. Composed of irregular triangles, an unstructured grid can readily conform to complicated structural boundaries. To ensure numerical accuracy, adaptive refinement using an a posteriori error estimator is performed iteratively to refine the grid where solution accuracy is insufficient. Two recently developed asymptotically exact a posteriori error estimators are based on a superconvergent gradient recovery operator. The first relies solely on the normed difference between the recovered gradients and the piecewise constant FE gradients and is effective for lowering the global error in the FE solution. For many problems, an accurate solution is required only in a few discrete regions, and a more efficient error estimator is possible by considering the local influence of errors from coarse elements elsewhere in the grid. The second error estimator accomplishes this by using weights determined from the solution to an appropriate dual problem to modify the first error estimator. Application of these methods for 2D magnetotelluric (MT) modeling reveals, as expected, that the dual weighted error estimator is far more efficient in achieving accurate MT responses. Refining about 15% of elements per iteration gives the fastest convergence rate. For a given refined grid, the solution error at higher frequencies varies in proportion to the skin depth, requiring refinement about every
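The first estimator described above (the normed difference between recovered and piecewise-constant FE gradients) can be sketched in 1D, where gradient recovery reduces to averaging neighbouring element gradients. The sketch below refines the worst 15% of elements per pass, matching the rate the abstract reports as fastest; the function and variable names are illustrative, and this is not the paper's 2D unstructured-triangle implementation.

```python
import numpy as np

def refine(nodes, u_exact, frac=0.15):
    """One adaptive pass on a 1D mesh: estimate the error of the
    piecewise-linear interpolant on each element by comparing its
    piecewise-constant gradient with a recovered (node-averaged)
    gradient, then bisect the worst `frac` of elements."""
    u = u_exact(nodes)
    grads = np.diff(u) / np.diff(nodes)        # piecewise-constant FE gradients
    rec = 0.5 * (grads[:-1] + grads[1:])       # recovered gradient at interior nodes
    eta = np.zeros(len(grads))                 # per-element error indicator
    eta[:-1] += (grads[:-1] - rec) ** 2
    eta[1:] += (grads[1:] - rec) ** 2
    k = max(1, int(frac * len(grads)))
    worst = np.argsort(eta)[-k:]               # elements with the largest indicator
    mids = 0.5 * (nodes[worst] + nodes[worst + 1])
    return np.sort(np.concatenate([nodes, mids]))
```

Applied repeatedly to a function with a sharp internal layer, the refinement concentrates elements where the gradient changes fastest and leaves the smooth regions coarse.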

Journal ArticleDOI
Charles Loop1, James F. Blinn1
01 Jul 2006
TL;DR: This work considers the problem of real-time GPU rendering of algebraic surfaces defined by Bezier tetrahedra and computes univariate polynomial coefficients in Bernstein form to maximize the stability of root finding and to provide shader instances with an early exit test based on the sign of these coefficients.
Abstract: We consider the problem of real-time GPU rendering of algebraic surfaces defined by Bezier tetrahedra. These surfaces are rendered directly in terms of their polynomial representations, as opposed to a collection of approximating triangles, thereby eliminating tessellation artifacts and reducing memory usage. A key step in such algorithms is the computation of univariate polynomial coefficients at each pixel; real roots of this polynomial correspond to possibly visible points on the surface. Our approach leverages the strengths of GPU computation and is highly efficient. Furthermore, we compute these coefficients in Bernstein form to maximize the stability of root finding, and to provide shader instances with an early exit test based on the sign of these coefficients. Solving for roots is done using analytic techniques that map well to a SIMD architecture, but limits us to fourth order algebraic surfaces. The general framework could be extended to higher order with numerical root finding.
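The Bernstein-form coefficients and the sign-based early exit can be sketched on the CPU (the paper computes them per pixel in a shader). The monomial-to-Bernstein conversion below is the standard change-of-basis formula; the function names are illustrative.

```python
from math import comb

def power_to_bernstein(a):
    """Convert coefficients a[i] of p(t) = sum a[i] * t**i (monomial
    basis) to Bernstein coefficients b[j] on the interval [0, 1]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

def may_have_root(b):
    """Early-exit test: if all Bernstein coefficients share one strict
    sign, the convex-hull property rules out any root in [0, 1]."""
    return not (all(c > 0 for c in b) or all(c < 0 for c in b))
```

Note that b[0] = p(0) and b[-1] = p(1), so the Bernstein coefficients interpolate the endpoint values, and a single sign across all coefficients lets a shader instance exit before any root finding, exactly the kind of test the abstract describes.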

Proceedings ArticleDOI
Spiros Papadimitriou1, Philip S. Yu1
27 Jun 2006
TL;DR: This work introduces a method to discover optimal local patterns, which concisely describe the main trends in a time series and introduces a criterion to select the best window sizes, which most concisely capture the key oscillatory as well as aperiodic trends.
Abstract: We introduce a method to discover optimal local patterns, which concisely describe the main trends in a time series. Our approach examines the time series at multiple time scales (i.e., window sizes) and efficiently discovers the key patterns in each. We also introduce a criterion to select the best window sizes, which most concisely capture the key oscillatory as well as aperiodic trends. Our key insight lies in learning an optimal orthonormal transform from the data itself, as opposed to using a predetermined basis or approximating function (such as piecewise constant, short-window Fourier or wavelets), which essentially restricts us to a particular family of trends. We go one step further, lifting even that limitation. Furthermore, our method lends itself to fast, incremental estimation in a streaming setting. Experimental evaluation shows that our method can capture meaningful patterns in a variety of settings. Our streaming approach requires order of magnitude less time and space, while still producing concise and informative patterns.
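The core idea, learning an orthonormal transform from the data itself instead of fixing a basis in advance, can be sketched with an SVD over stacked windows. This is an assumed simplification for illustration; the paper's window-size selection criterion and incremental streaming estimation are not shown, and the names are illustrative.

```python
import numpy as np

def local_patterns(x, w, k):
    """Learn k orthonormal local patterns of length w from series x by
    stacking non-overlapping windows as rows and taking the top right
    singular vectors: a data-driven basis, unlike fixed wavelet or
    short-window Fourier bases."""
    n = len(x) // w
    X = np.asarray(x[:n * w]).reshape(n, w)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k]                      # shape (k, w), rows orthonormal
```

Projecting each window onto the learned patterns and back gives a concise reconstruction; for a series dominated by one oscillatory trend, a single pattern already captures it.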

Journal ArticleDOI
TL;DR: In this article, the convergence of the finite-volume scheme for a homogeneous Dirichlet problem with full diffusion matrix is proven and an error estimate is provided, and numerical tests show the actual accuracy of the method.
Abstract: Finite-volume methods for problems involving second-order operators with full diffusion matrix can be used thanks to the definition of a discrete gradient for piecewise constant functions on unstructured meshes satisfying an orthogonality condition. This discrete gradient is shown to satisfy a strong convergence property for the interpolation of regular functions, and a weak one for functions bounded in a discrete H 1 -norm. To highlight the importance of both properties, the convergence of the finite-volume scheme for a homogeneous Dirichlet problem with full diffusion matrix is proven, and an error estimate is provided. Numerical tests show the actual accuracy of the method.
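In 1D the orthogonality condition is trivially satisfied and the discrete gradient between neighbouring piecewise constant cell values reduces to a two-point difference over the distance between cell centres. The sketch below applies that idea to -u'' = f with Dirichlet data; it is a minimal illustration, not the paper's general unstructured-mesh, full-diffusion-matrix setting.

```python
import numpy as np

def fv_poisson_1d(n, f, ua=0.0, ub=0.0):
    """Two-point-flux finite-volume solve of -u'' = f on (0, 1) with
    Dirichlet data ua, ub, on a uniform mesh of n cells.  The flux
    across each face is the difference of the piecewise-constant cell
    values divided by the distance between cell centres."""
    h = 1.0 / n
    xc = (np.arange(n) + 0.5) * h              # cell centres
    A = np.zeros((n, n))
    b = h * f(xc)                              # midpoint quadrature of the source
    for i in range(n):
        if i > 0:                              # interior face: distance h
            A[i, i] += 1.0 / h; A[i, i - 1] -= 1.0 / h
        else:                                  # boundary face: distance h/2
            A[i, i] += 2.0 / h; b[i] += 2.0 / h * ua
        if i < n - 1:
            A[i, i] += 1.0 / h; A[i, i + 1] -= 1.0 / h
        else:
            A[i, i] += 2.0 / h; b[i] += 2.0 / h * ub
    return xc, np.linalg.solve(A, b)
```

Against the exact solution u = sin(pi x) for f = pi^2 sin(pi x), halving the mesh size reduces the error, consistent with the convergence behaviour the abstract proves in the general setting.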

Journal ArticleDOI
TL;DR: This paper focuses on either finding the proper weight of the fidelity term in the energy minimization formulation or on determining the optimal stopping time of a nonlinear diffusion process, and provides two practical alternatives for estimating this condition, based on the covariance of the noise and the residual part.
Abstract: This paper is concerned with finding the best partial differential equation-based denoising process, out of a set of possible ones. We focus on either finding the proper weight of the fidelity term in the energy minimization formulation or on determining the optimal stopping time of a nonlinear diffusion process. A necessary condition for achieving maximal SNR is stated, based on the covariance of the noise and the residual part. We provide two practical alternatives for estimating this condition by observing that the filtering of the image and the noise can be approximated by a decoupling technique, with respect to the weight or time parameters. Our automatic algorithm obtains quite accurate results on a variety of synthetic and natural images, including piecewise smooth and textured ones. We assume that the statistics of the noise were previously estimated. No a priori knowledge regarding the characteristics of the clean image is required. A theoretical analysis is carried out, where several SNR performance bounds are established for the optimal strategy and for a widely used method, wherein the variance of the residual part equals the variance of the noise.
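The "widely used method" the abstract benchmarks against, stopping the diffusion when the variance of the residual reaches the (previously estimated) noise variance, can be sketched for a 1D signal under explicit heat diffusion. This is only that baseline rule, not the paper's covariance-based optimal condition, and the names are illustrative.

```python
import numpy as np

def diffuse_until(noisy, sigma2, dt=0.2, max_steps=5000):
    """Explicit heat diffusion on a 1D signal, stopped at the first
    step where the variance of the residual (noisy - filtered) reaches
    the estimated noise variance sigma2."""
    u = np.asarray(noisy, dtype=float).copy()
    for step in range(max_steps):
        lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)   # periodic Laplacian
        u += dt * lap
        if np.var(noisy - u) >= sigma2:
            return u, step + 1
    return u, max_steps
```

Because diffusion removes the high-frequency content first, by the time the residual variance matches the noise variance most of what has been removed is noise, and the stopped signal is much closer to the clean one than the noisy input was.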

Journal ArticleDOI
TL;DR: In this article, a new numerical scheme based on the method of fundamental solutions is proposed for the numerical solution of some inverse boundary value problems associated with the Helmholtz equation, including the Cauchy problem.

Journal ArticleDOI
TL;DR: This paper develops a prototype image coder that has near-optimal asymptotic R-D performance D(R) ≲ (log R)²/R² for piecewise smooth C²/C² images.
Abstract: The wavelet transform provides a sparse representation for smooth images, enabling efficient approximation and compression using techniques such as zerotrees. Unfortunately, this sparsity does not extend to piecewise smooth images, where edge discontinuities separating smooth regions persist along smooth contours. This lack of sparsity hampers the efficiency of wavelet-based approximation and compression. On the class of images containing smooth C² regions separated by edges along smooth C² contours, for example, the asymptotic rate-distortion (R-D) performance of zerotree-based wavelet coding is limited to D(R) ≲ 1/R, well below the optimal rate of 1/R². In this paper, we develop a geometric modeling framework for wavelets that addresses this shortcoming. The framework can be interpreted either as 1) an extension to the "zerotree model" for wavelet coefficients that explicitly accounts for edge structure at fine scales, or as 2) a new atomic representation that synthesizes images using a sparse combination of wavelets and wedgeprints, anisotropic atoms that are adapted to edge singularities. Our approach enables a new type of quadtree pruning for piecewise smooth images, using zerotrees in uniformly smooth regions and wedgeprints in regions containing geometry. Using this framework, we develop a prototype image coder that has near-optimal asymptotic R-D performance D(R) ≲ (log R)²/R² for piecewise smooth C²/C² images. In addition, we extend the algorithm to compress natural images, exploring the practical problems that arise and attaining promising results in terms of mean-square error and visual quality.

Journal ArticleDOI
TL;DR: This paper proposes a strategy for the classification of codimension-two discontinuity-induced bifurcations of limit cycles in piecewise smooth systems of ordinary differential equations, and suggests three distinct types: either the grazing point is degenerate, the grazing cycle is itself degenerate, or two grazing events occur simultaneously.
Abstract: This paper proposes a strategy for the classification of codimension-two discontinuity-induced bifurcations of limit cycles in piecewise smooth systems of ordinary differential equations. Such nonsmooth transitions (also known as C-bifurcations) occur when the cycle interacts with a discontinuity boundary of phase space in a nongeneric way, such as grazing contact. Several such codimension-one events have recently been identified, causing for example, period-adding or sudden onset of chaos. Here, the focus is on codimension-two grazings that are local in the sense that the dynamics can be fully described by an appropriate Poincare map from a neighborhood of the grazing point (or points) of the critical cycle to itself. It is proposed that codimension-two grazing bifurcations can be divided into three distinct types: either the grazing point is degenerate, or the grazing cycle is itself degenerate (e.g. nonhyperbolic) or we have the simultaneous occurrence of two grazing events. A careful distinction is drawn between their occurrence in systems with discontinuous states, discontinuous vector fields, or that with discontinuity in some derivative of the vector field. Examples of each kind of bifurcation are presented, mostly derived from mechanical applications. For each example, where possible, principal bifurcation curves characteristic to the codimension-two scenario are presented and general features of the dynamics discussed. Many avenues for future research are opened.

Journal ArticleDOI
TL;DR: A nonlinear multiresolution scheme within Harten's framework is presented, based on a new nonlinear, centered piecewise polynomial interpolation technique, which shows promising results in terms of convergence, smoothness, and stability.
Abstract: A nonlinear multiresolution scheme within Harten's framework is presented, based on a new nonlinear, centered piecewise polynomial interpolation technique. Analytical properties of the resulting subdivision scheme, such as convergence, smoothness, and stability, are studied. The stability and the compression properties of the associated multiresolution transform are demonstrated on several numerical experiments on images.
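A scheme in this spirit can be sketched by taking the centred 4-point interpolatory subdivision rule and replacing the arithmetic mean of the two second differences with a sign-restricted harmonic mean (the PPH-type idea); whether this is exactly the paper's construction is an assumption, and the names are illustrative. The harmonic mean agrees with the arithmetic mean on smooth data of one convexity, but vanishes when the second differences disagree in sign, so the correction switches off near a jump instead of producing oscillations.

```python
def hmean(a, b):
    """Sign-restricted harmonic mean: 0 when a and b differ in sign."""
    return 2 * a * b / (a + b) if a * b > 0 else 0.0

def subdivide(f):
    """One step of a centred 4-point interpolatory subdivision with the
    arithmetic mean of second differences replaced by hmean."""
    n = len(f)
    out = []
    for j in range(n - 1):
        out.append(f[j])
        if 1 <= j <= n - 3:
            d0 = f[j - 1] - 2 * f[j] + f[j + 1]   # second differences
            d1 = f[j] - 2 * f[j + 1] + f[j + 2]
            mid = 0.5 * (f[j] + f[j + 1]) - hmean(d0, d1) / 8
        else:                                     # too few neighbours: linear
            mid = 0.5 * (f[j] + f[j + 1])
        out.append(mid)
    out.append(f[-1])
    return out
```

On convex smooth data (for example samples of x²) the harmonic and arithmetic means coincide, so the nonlinear rule reproduces the linear 4-point result; across a jump the inserted values stay within the data range, with no Gibbs-like overshoot.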

Journal ArticleDOI
TL;DR: In this article, the authors consider the recovery of smooth 3D region boundaries with piecewise constant coefficients in optical tomography, based on a parametrization of the closed boundaries of the regions by spherical harmonic coefficients, and a Newton type optimization process.
Abstract: We consider the recovery of smooth 3D region boundaries with piecewise constant coefficients in optical tomography. The method is based on a parametrization of the closed boundaries of the regions by spherical harmonic coefficients, and a Newton type optimization process. A boundary integral formulation is used for the forward modelling. The calculation of the Jacobian is based on an adjoint scheme for calculating the corresponding shape derivatives. We show reconstructions for 3D situations. In addition we show the extension of the method for cases where the constant optical coefficients are also unknown. An advantage of the proposed method is the implicit regularization effect arising from the reduced dimensionality of the inverse problem.

Proceedings ArticleDOI
14 Jun 2006
TL;DR: For continuous-time systems, this article showed that it is impossible to use pure state feedback to achieve robust global asymptotic stabilization of a disconnected set of points or robust global regulation to a target while avoiding an obstacle.
Abstract: We give an elementary proof of the fact that, for continuous-time systems, it is impossible to use (even discontinuous) pure state feedback to achieve robust global asymptotic stabilization of a disconnected set of points or robust global regulation to a target while avoiding an obstacle. Indeed, we show that arbitrarily small, piecewise constant measurement noise can keep the trajectories away from the target. We give a constructive, Lyapunov-based hybrid state feedback that achieves robust regulation in the above mentioned settings.