
Showing papers on "Piecewise published in 2002"


Journal ArticleDOI
TL;DR: A new multiphase level set framework for image segmentation using the Mumford and Shah model, for piecewise constant and piecewise smooth optimal approximations, is proposed and validated by numerical results for signal and image denoising and segmentation.
Abstract: We propose a new multiphase level set framework for image segmentation using the Mumford and Shah model, for piecewise constant and piecewise smooth optimal approximations. The proposed method is also a generalization of the two-phase active contour model without edges developed by the authors earlier in T. Chan and L. Vese (1999. In Scale-Space'99, M. Nielsen et al. (Eds.), LNCS, vol. 1682, pp. 141–151) and T. Chan and L. Vese (2001. IEEE-IP, 10(2):266–277). The multiphase level set formulation is new and of interest on its own: by construction, it automatically avoids the problems of vacuum and overlap; it needs only log n level set functions for n phases in the piecewise constant case; it can represent boundaries with complex topologies, including triple junctions; and in the piecewise smooth case, only two level set functions formally suffice to represent any partition, based on the Four-Color Theorem. Finally, we validate the proposed models by numerical results for signal and image denoising and segmentation, implemented using the Osher and Sethian level set method.
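For concreteness, in the four-phase piecewise constant case two level set functions partition the image domain into four regions. A sketch of the corresponding energy, written in the usual Heaviside-function notation (our notation, not copied verbatim from the paper), is:

```latex
% Four-phase piecewise constant energy with two level set functions \phi_1, \phi_2;
% H is the Heaviside function, u_0 the observed image, c_{ij} the mean intensity
% of each phase, and \nu the weight of the boundary-length penalty.
F(c,\Phi) =
    \int_\Omega (u_0 - c_{11})^2 \, H(\phi_1)\,H(\phi_2)\,dx
  + \int_\Omega (u_0 - c_{10})^2 \, H(\phi_1)\,\bigl(1 - H(\phi_2)\bigr)\,dx
  + \int_\Omega (u_0 - c_{01})^2 \, \bigl(1 - H(\phi_1)\bigr)\,H(\phi_2)\,dx
  + \int_\Omega (u_0 - c_{00})^2 \, \bigl(1 - H(\phi_1)\bigr)\,\bigl(1 - H(\phi_2)\bigr)\,dx
  + \nu \int_\Omega \lvert\nabla H(\phi_1)\rvert
  + \nu \int_\Omega \lvert\nabla H(\phi_2)\rvert .
```

Minimizing alternately over the constants c_{ij} and the level set functions gives the segmentation; with log n functions the same construction yields n phases.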

2,649 citations


Journal ArticleDOI
TL;DR: This work proves sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case and shows how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels.
Abstract: We consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems.
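As a rough illustration of the annihilating-filter step (a hedged sketch: the Prony-style recovery and all variable names below are generic assumptions, not the paper's code), the locations of K Diracs in a tau-periodic stream can be recovered from 2K+1 consecutive Fourier coefficients:

```python
import numpy as np

def dirac_locations(X, K, tau):
    """Sketch: recover K Dirac locations from the Fourier coefficients X[0..2K]
    of a tau-periodic stream of Diracs via an annihilating (Prony) filter."""
    # Build the (K+1) x (K+1) Toeplitz system A @ h = 0.
    A = np.array([[X[K + m - i] for i in range(K + 1)] for m in range(K + 1)])
    # The annihilating filter h spans the (numerical) null space of A.
    h = np.linalg.svd(A)[2][-1].conj()
    # The roots of the filter polynomial are u_k = exp(-2j*pi*t_k/tau).
    u = np.roots(h)
    return np.sort(np.mod(-np.angle(u) * tau / (2 * np.pi), tau))

# Usage sketch: X[m] = (1/tau) * sum_k a_k * exp(-2j*pi*m*t_k/tau), m = 0..2K.
tau = 1.0
t_true = np.array([0.2, 0.55, 0.8])
a_true = np.array([1.0, 2.0, -1.5])
K = len(t_true)
m = np.arange(2 * K + 1)
X = (a_true * np.exp(-2j * np.pi * m[:, None] * t_true / tau)).sum(axis=1) / tau
print(dirac_locations(X, K, tau))   # approximately [0.2, 0.55, 0.8]
```

Once the locations are known, the weights a_k follow from a linear system, which is the sense in which the innovative part of the signal is identified.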

1,206 citations


01 Jan 2002
TL;DR: The proposed algorithm, GUIDE, is specifically designed to eliminate variable selection bias, a problem that can undermine the reliability of inferences from a tree structure, and allows fast computation speed, natural extension to data sets with categorical variables, and direct detection of local two-variable interactions.
Abstract: We propose an algorithm for regression tree construction called GUIDE. It is specifically designed to eliminate variable selection bias, a problem that can undermine the reliability of inferences from a tree structure. GUIDE controls bias by employing chi-square analysis of residuals and bootstrap calibration of significance probabilities. This approach allows fast computation speed, natural extension to data sets with categorical variables, and direct detection of local two-variable interactions. Previous algorithms are not unbiased and are insensitive to local interactions during split selection. The speed of GUIDE enables two further enhancements: complex modeling at the terminal nodes, such as polynomial or best simple linear models, and bagging. In an experiment with real data sets, the prediction mean square error of the piecewise constant GUIDE model is within ±20% of that of CART®. Piecewise linear GUIDE models are more accurate; with bagging they can outperform the spline-based MARS® method.
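As a hedged sketch of the kind of residual-based, selection-bias-free variable choice described above (the constant node model, quantile binning, and helper names are illustrative assumptions, not GUIDE's actual implementation), one can cross-tabulate residual signs against binned predictor values and pick the variable with the most significant chi-square statistic:

```python
import numpy as np
from scipy.stats import chi2_contingency

def select_split_variable(X, y, n_bins=4):
    """Sketch: pick the predictor whose association with the sign of the node
    residuals is most significant under a chi-square test of independence."""
    residuals = y - y.mean()              # residuals of a constant node model
    signs = (residuals > 0).astype(int)   # two residual classes: + and -
    best_var, best_p = None, np.inf
    for j in range(X.shape[1]):
        cuts = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        groups = np.digitize(X[:, j], cuts)          # bin the j-th predictor
        table = np.zeros((2, n_bins))
        for s, g in zip(signs, groups):
            table[s, g] += 1
        table = table[:, table.sum(axis=0) > 0]      # drop empty bins
        p_value = chi2_contingency(table)[1]
        if p_value < best_p:
            best_var, best_p = j, p_value
    return best_var
```

Because every candidate variable is judged by the same test rather than by the best split it can offer, variables with many potential split points gain no automatic advantage, which is the intuition behind the bias correction.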

484 citations


Proceedings ArticleDOI
27 Oct 2002
TL;DR: This paper presents a simple and fast method for computing parameterizations with strictly bounded distortion, and is the first method to compute the mesh partitioning and the parameterization simultaneously and entirely automatically, while providing guaranteed distortion bounds.
Abstract: Many computer graphics operations, such as texture mapping, 3D painting, remeshing, mesh compression, and digital geometry processing, require finding a low-distortion parameterization for irregular connectivity triangulations of arbitrary genus 2-manifolds. This paper presents a simple and fast method for computing parameterizations with strictly bounded distortion. The new method operates by flattening the mesh onto a region of the 2D plane. To comply with the distortion bound, the mesh is automatically cut and partitioned on-the-fly. The method guarantees avoiding global and local self-intersections, while attempting to minimize the total length of the introduced seams. To our knowledge, this is the first method to compute the mesh partitioning and the parameterization simultaneously and entirely automatically, while providing guaranteed distortion bounds. Our results on a variety of objects demonstrate that the method is fast enough to work with large complex irregular meshes in interactive applications.
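A small sketch of the per-triangle distortion measure such methods typically bound (the singular values of the flattening's Jacobian); the function name and frame construction are ours, and the exact bound and cutting strategy of the paper are not reproduced here:

```python
import numpy as np

def flattening_singular_values(tri3d, tri2d):
    """Sketch: stretch factors (singular values) of the affine map taking a
    3D mesh triangle (rows of tri3d) to its flattened 2D image (rows of tri2d)."""
    e1, e2 = tri3d[1] - tri3d[0], tri3d[2] - tri3d[0]
    # Orthonormal frame (u1, u2) spanning the triangle's plane.
    u1 = e1 / np.linalg.norm(e1)
    n = np.cross(e1, e2)
    u2 = np.cross(n, u1)
    u2 /= np.linalg.norm(u2)
    # Edge matrices in local 2D coordinates (source) and in the parameter plane (image).
    Q = np.column_stack([[e1 @ u1, e1 @ u2], [e2 @ u1, e2 @ u2]])
    P = np.column_stack([tri2d[1] - tri2d[0], tri2d[2] - tri2d[0]])
    J = P @ np.linalg.inv(Q)                  # Jacobian of the flattening
    return np.linalg.svd(J, compute_uv=False)

# Usage sketch: an isometric flattening has both singular values equal to 1.
tri3d = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tri2d = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(flattening_singular_values(tri3d, tri2d))   # -> [1. 1.]
```

Keeping both stretch factors inside a prescribed band on every triangle is what a strict distortion bound amounts to in practice; when a triangle cannot comply, methods of this kind cut or repartition the mesh.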

250 citations


Journal ArticleDOI
TL;DR: Convergence of local and global discretization errors to the Radau polynomial of degree p+1 holds for smooth solutions as p → ∞ and is used to construct asymptotically correct a posteriori estimates of spatial discretization errors that are effective for linear and nonlinear conservation laws in regions where solutions are smooth.

226 citations


Journal ArticleDOI
TL;DR: It is shown that the stability of the system can be established if a piecewise Lyapunov function can be constructed and the function can be obtained by solving a set of linear matrix inequalities (LMIs) that is numerically feasible with commercially available software.
Abstract: Presents a stability analysis method for piecewise discrete-time linear systems based on a piecewise smooth Lyapunov function. It is shown that the stability of the system can be established if a piecewise Lyapunov function can be constructed and, moreover, the function can be obtained by solving a set of linear matrix inequalities (LMIs) that is numerically feasible with commercially available software.
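A minimal sketch of the resulting LMI feasibility problem, assuming discrete-time region dynamics x_{k+1} = A_i x_k and dropping the S-procedure terms that restrict each inequality to its own region (so this is a conservative simplification, not the paper's exact conditions):

```python
import cvxpy as cp
import numpy as np

# Region dynamics x_{k+1} = A_i x_k (illustrative matrices, not from the paper).
A = [np.array([[0.6, 0.3], [-0.2, 0.8]]),
     np.array([[0.7, -0.4], [0.1, 0.5]])]
n = 2
eps = 1e-6

# One quadratic Lyapunov piece V_i(x) = x' P_i x per region.
P = [cp.Variable((n, n), symmetric=True) for _ in A]
constraints = []
for i, Ai in enumerate(A):
    constraints.append(P[i] >> eps * np.eye(n))
    # Decrease condition, including trajectories that switch from region i to j.
    for j in range(len(A)):
        constraints.append(Ai.T @ P[j] @ Ai - P[i] << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("piecewise quadratic Lyapunov function found:", prob.status == cp.OPTIMAL)
```

Region information (where each A_i is active and which switches are actually possible) enters the real conditions through additional S-procedure multipliers, which is what makes the piecewise analysis less conservative than a single common quadratic Lyapunov function.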

224 citations


Proceedings ArticleDOI
18 Apr 2002
TL;DR: It is shown that unobserved time-points can be reconstructed using the method with 10-15% less error when compared to previous best methods, and the algorithm produces stable low-error alignments on real expression data and shows a specific application to yeast knockout data that produces biologically meaningful results.
Abstract: We present algorithms for time-series gene expression analysis that permit the principled estimation of unobserved time-points, clustering, and dataset alignment. Each expression profile is modeled as a cubic spline (piecewise polynomial) that is estimated from the observed data and every time point influences the overall smooth expression curve. We constrain the spline coefficients of genes in the same class to have similar expression patterns, while also allowing for gene specific parameters. We show that unobserved time-points can be reconstructed using our method with 10-15% less error when compared to previous best methods. Our clustering algorithm operates directly on the continuous representations of gene expression profiles, and we demonstrate that this is particularly effective when applied to non-uniformly sampled data. Our continuous alignment algorithm also avoids difficulties encountered by discrete approaches. In particular, our method allows for control of the number of degrees of freedom of the warp through the specification of parameterized functions, which helps to avoid overfitting. We demonstrate that our algorithm produces stable low-error alignments on real expression data and further show a specific application to yeast knockout data that produces biologically meaningful results.
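A toy sketch of the underlying representation, fitting one profile with a cubic spline (piecewise polynomial) and reading off unobserved time points; the class-constrained, shared-coefficient model of the paper is not reproduced here, and the numbers are illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Non-uniformly sampled time points and one gene's expression values.
t_obs = np.array([0.0, 1.0, 2.5, 4.0, 7.0, 10.0])
y_obs = np.array([0.1, 0.8, 1.2, 0.9, 0.3, 0.05])

spline = CubicSpline(t_obs, y_obs)        # continuous piecewise cubic curve

# Estimate expression at unobserved time points from the continuous curve.
t_missing = np.array([3.0, 5.5, 8.0])
print(spline(t_missing))
```

Working with the continuous curve rather than the raw samples is also what lets the clustering and alignment steps handle non-uniformly sampled data and parameterized time warps.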

215 citations


Journal ArticleDOI
TL;DR: In this article, a discrete analogue of continuous time ratio-dependent predator-prey system is proposed, which is governed by nonautonomous difference equations, modeling the dynamics of the prey and the predator having nonoverlapping generations.

184 citations


Book
24 Oct 2002
TL;DR: This book presents a computational approach to the analysis of nonlinear and uncertain systems and describes numerical procedures for assessing stability, computing induced gains, and solving optimal control problems for piecewise linear systems.
Abstract: This book presents a computational approach to the analysis of nonlinear and uncertain systems. The main focus is systems with piecewise linear dynamics. The class of piecewise linear systems examined has nonlinear, possibly discontinuous dynamics, and allows switching rules that incorporate memory and logic. These systems may exhibit astonishingly complex behaviors. Some aspects of the successful theory of linear systems and quadratic criteria are extended here to piecewise linear systems and piecewise quadratic criteria. The book also describes numerical procedures for assessing stability, computing induced gains, and solving optimal control problems for piecewise linear systems. These developments enable researchers to analyze a large and practically important class of control systems that are not easily dealt with when using other techniques.

175 citations


Journal ArticleDOI
TL;DR: Finite termination of a Newton method at the unique global solution, starting from any point in R^n, is shown; if the function is well conditioned no stepsize is required, and if not, an Armijo stepsize is used.
Abstract: A fundamental classification problem of data mining and machine learning is that of minimizing a strongly convex, piecewise quadratic function on the n-dimensional real space R^n. We show finite termination of a Newton method at the unique global solution starting from any point in R^n. If the function is well conditioned, then no stepsize is required from the start, and if not, an Armijo stepsize is used. In either case, the algorithm finds the unique global minimum solution in a finite number of iterations.
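A hedged sketch of a generalized Newton iteration with an Armijo backtracking step for an objective in this class (the regularized squared-hinge objective below is only an illustrative instance of a strongly convex piecewise quadratic, not the paper's formulation):

```python
import numpy as np

def newton_armijo(A, b, C=10.0, tol=1e-10, max_iter=50):
    """Sketch: generalized Newton method with Armijo line search for
    f(x) = 0.5*||x||^2 + 0.5*C*||max(Ax - b, 0)||^2 (strongly convex,
    piecewise quadratic)."""
    n = A.shape[1]
    x = np.zeros(n)

    def f(x):
        r = np.maximum(A @ x - b, 0.0)
        return 0.5 * x @ x + 0.5 * C * r @ r

    for _ in range(max_iter):
        r = np.maximum(A @ x - b, 0.0)
        g = x + C * A.T @ r                        # gradient
        if np.linalg.norm(g) < tol:
            break
        active = (A @ x - b > 0).astype(float)     # active pieces
        H = np.eye(n) + C * (A.T * active) @ A     # generalized Hessian
        d = np.linalg.solve(H, -g)                 # Newton direction
        t, sigma, beta = 1.0, 1e-4, 0.5
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta                              # Armijo backtracking
        x = x + t * d
    return x

# Usage sketch on random data.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
print(newton_armijo(A, b))
```

Strong convexity keeps the generalized Hessian uniformly positive definite, so the Newton direction is always well defined.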

170 citations


Journal ArticleDOI
TL;DR: It is shown here how a certain class of augmented NN, capable of approximating piecewise continuous functions, can be used for friction compensation.
Abstract: One of the most important properties of neural nets (NNs) for control purposes is the universal approximation property. Unfortunately, this property is generally proven for continuous functions. In most real industrial control systems there are nonsmooth functions (e.g., piecewise continuous) for which approximation results in the literature are sparse. Examples include friction, deadzone, backlash, and so on. It is found that attempts to approximate piecewise continuous functions using smooth activation functions require many NN nodes and many training iterations, and still do not yield very good results. Therefore, a novel neural-network structure is given for approximation of piecewise continuous functions of the sort that appear in friction, deadzone, backlash, and other motion control actuator nonlinearities. The novel NN consists of neurons having standard sigmoid activation functions, plus some additional neurons having a special class of nonsmooth activation functions termed "jump approximation basis function." Two types of nonsmooth jump approximation basis functions are determined: a polynomial-like basis and a sigmoid-like basis. This modified NN with additional neurons having "jump approximation" activation functions can approximate any piecewise continuous function with discontinuities at a finite number of known points. Applications of the new NN structure are made to rigid-link robotic systems with friction nonlinearities. Friction is a nonlinear effect that can limit the performance of industrial control systems; it occurs in all mechanical systems and therefore is unavoidable in control systems. It can cause tracking errors, limit cycles, and other undesirable effects. Often, inexact friction compensation is used with standard adaptive techniques that require models that are linear in the unknown parameters. It is shown here how a certain class of augmented NN, capable of approximating piecewise continuous functions, can be used for friction compensation.

Journal ArticleDOI
TL;DR: It is proved that small oscillation relative to the best error with piecewise linears implies the saturation assumption, and it is shown that this condition is necessary, and asymptotically valid provided f is in L^2.
Abstract: The saturation assumption asserts that the best approximation error in \(H^1_0\) with piecewise quadratic finite elements is strictly smaller than that of piecewise linear finite elements. We establish a link between this assumption and the oscillation of \(f=-\Delta u\), and prove that small oscillation relative to the best error with piecewise linears implies the saturation assumption. We also show that this condition is necessary, and asymptotically valid provided \(f\in L^2\).
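In symbols, with V_1 and V_2 the piecewise linear and piecewise quadratic finite element spaces on the same mesh (a standard way to write the assumption; the constant name is ours):

```latex
% Saturation assumption: the best piecewise quadratic error is a strict
% fraction of the best piecewise linear error in the energy norm.
\inf_{v \in V_2} \| u - v \|_{H^1_0}
  \;\le\; \beta \, \inf_{w \in V_1} \| u - w \|_{H^1_0},
  \qquad \beta < 1 .
```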

Journal ArticleDOI
TL;DR: It is proved that the numerical error is uniformly bounded in time for such prepared (i.e., piecewise constant) initial data, and a conjecture of non-diffusion at infinite time is stated, based on some local over-compressivity of the scheme, for general initial data.
Abstract: We present a non-diffusive and contact discontinuity capturing scheme for linear advection and the compressible Euler system. In the case of advection, this scheme is equivalent to the Ultra-Bee limiter of [24], [29]. We prove for the Ultra-Bee scheme a property of exact advection for a large set of piecewise constant functions. We prove that the numerical error is uniformly bounded in time for such prepared (i.e., piecewise constant) initial data, and state a conjecture of non-diffusion at infinite time, based on some local over-compressivity of the scheme, for general initial data. We generalize the scheme to compressible gas dynamics and present some numerical results.

Journal ArticleDOI
TL;DR: A result is derived that allows us to precisely enforce piecewise constant and piecewise trigonometric polynomial masks in a finite and convex manner via linear matrix inequalities.
Abstract: The design of a finite impulse response (FIR) filter often involves a spectral "mask" that the magnitude spectrum must satisfy. The mask specifies upper and lower bounds at each frequency and, hence, yields an infinite number of constraints. In current practice, spectral masks are often approximated by discretization, but in this paper, we derive a result that allows us to precisely enforce piecewise constant and piecewise trigonometric polynomial masks in a finite and convex manner via linear matrix inequalities. While this result is theoretically satisfying in that it allows us to avoid the heuristic approximations involved in discretization techniques, it is also of practical interest because it generates competitive design algorithms (based on interior point methods) for a diverse class of FIR filtering and narrowband beamforming problems. The examples we provide include the design of standard linear and nonlinear phase FIR filters, robust "chip" waveforms for wireless communications, and narrowband beamformers for linear antenna arrays. Our main result also provides a contribution to system theory, as it is an extension of the well-known positive-real and bounded-real lemmas.

Journal ArticleDOI
TL;DR: In this article, a piecewise linear map is proposed to study the non-linear effects in a single-phase H-bridge inverter, where the PWM control is related to a current feedback control.
Abstract: In this article, we are studying the non-linear effects in a single-phase H-bridge inverter. The PWM control is related to a current feedback control. We are proposing an analytical model, which is a piecewise linear map. The distinctive feature of this study lies in the investigation of the map's properties. This investigation allows for the analytical determination of the fixed points, their domains of stability, and of the bifurcation points. More precisely, we will show that some of these bifurcations are discontinuous. The analysis is performed while keeping in mind the current controller's tuning. In this particular setting, we will show that all the bifurcations are of a certain type: border collision bifurcations. Although we are treating the appearance of chaos in a converter, the work presented stays close to the preoccupations of the engineer, because the particularities of the digital control are shown as an advantage. Moreover, we have strived to comment on the different modes observed, perio...
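The bifurcation type mentioned at the end has a standard one-dimensional normal form, which the following sketch iterates; the map below is the textbook piecewise linear normal form, not the inverter model derived in the article:

```python
import numpy as np

# 1D border-collision normal form: x -> a*x + mu for x < 0, b*x + mu for x >= 0.
def pwl_map(x, mu, a=0.5, b=-1.8):
    return a * x + mu if x < 0 else b * x + mu

# Sweep the parameter mu through the border and print the sampled attractor.
for mu in np.linspace(-0.4, 0.4, 9):
    x = 0.1
    for _ in range(500):             # discard the transient
        x = pwl_map(x, mu)
    orbit = []
    for _ in range(8):               # sample a few post-transient iterates
        x = pwl_map(x, mu)
        orbit.append(round(x, 3))
    print(f"mu = {mu:+.2f}: {orbit}")
```

As the fixed point crosses the border x = 0, the orbit can jump directly from a fixed point to a periodic or chaotic attractor, which is the discontinuous kind of transition referred to above.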

Journal ArticleDOI
TL;DR: In this paper, the authors consider general, anisotropic elastic media and derive a set of pseudodifferential operators that annihilate the singular part of seismic data, and construct a Fourier integral operator and a reflectivity function such that the data can be represented by this operator acting on the reflectivity functions.
Abstract: Seismic data is modeled in the high-frequency approximation, using the techniques of microlocal analysis. We consider general, anisotropic elastic media. Our methods are designed to allow for the formation of caustics. The data is modeled in two ways. First, we give a microlocal treatment of the Kirchhoff approximation, where the medium is assumed to be piecewise smooth, and reflection and transmission occur at interfaces. Second, we give a refined view on the Born approximation based upon a linearization of the scattering process in the medium parameters around a smooth background medium. The joint formulation of Born and Kirchhoff scattering allows us to take into account general scatterers as well as the nonlinear dependence of reflection coefficients on the medium parameters. The latter allows the treatment of scattering up to grazing angles. The outcome of the analysis is a characterization of the singular part of seismic data. We obtain a set of pseudodifferential operators that annihilate the data. In the process we construct a Fourier integral operator and a reflectivity function such that the data can be represented by this operator acting on the reflectivity function. In our construction this Fourier integral operator becomes invertible. We give the conditions for invertibility for general acquisition geometry. The result is also of interest for inverse scattering in acoustic media. © 2002 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: A key feature of the approach is a novel hierarchical transformation technique for accelerating convergence on a non-uniform, piecewise continuous grid that enables high-quality reconstructions of free-form curved surfaces with arbitrary reflectance properties.
Abstract: This paper presents a novel approach for reconstructing free-form, texture-mapped, 3D scene models from a single painting or photograph. Given a sparse set of user-specified constraints on the local shape of the scene, a smooth 3D surface that satisfies the constraints is generated. This problem is formulated as a constrained variational optimization problem. In contrast to previous work in single-view reconstruction, our technique enables high-quality reconstructions of free-form curved surfaces with arbitrary reflectance properties. A key feature of the approach is a novel hierarchical transformation technique for accelerating convergence on a non-uniform, piecewise continuous grid. The technique is interactive and updates the model in real time as constraints are added, allowing fast reconstruction of photorealistic scene models. The approach is shown to yield high-quality results on a large variety of images. Copyright © 2002 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: These models are shown to be capable of automatically finding not only the optimum model, but also the appropriate order for specific financial data, and are more acceptable to financial experts than classical (closed box) neural networks.
Abstract: Real-world financial data is often nonlinear, comprises high-frequency multipolynomial components, and is discontinuous (piecewise continuous). Not surprisingly, it is hard to model such data. Classical neural networks are unable to automatically determine the optimum model and appropriate order for financial data approximation. We address this problem by developing neuron-adaptive higher order neural-network (NAHONN) models. After introducing one-dimensional (1-D), two-dimensional (2-D), and n-dimensional NAHONN models, we present an appropriate learning algorithm. Network convergence and the universal approximation capability of NAHONNs are also established. NAHONN Group models (NAHONGs) are also introduced. Both NAHONNs and NAHONGs are shown to be "open box" and as such are more acceptable to financial experts than classical (closed box) neural networks. These models are further shown to be capable of automatically finding not only the optimum model, but also the appropriate order for specific financial data.

Journal ArticleDOI
TL;DR: In this paper, a line-by-line radiative transfer code is proposed that computes absorption coefficients to a specified percentage error tolerance in a near minimal number of calculations using a pre-computed lookup table that predicts where it is appropriate to reduce the resolution of a particular line without exceeding the required error tolerance.
Abstract: Current line-by-line radiative transfer codes accelerate calculations by interpolating the line function where it varies slowly. This can increase calculation performance by a factor of 10 or more but causes a reduction in calculation accuracy. We present a new line-by-line algorithm that computes absorption coefficients to a specified percentage-error tolerance in a near minimal number of calculations. The algorithm employs a novel binary division of a calculation's spectral interval, coupled with a pre-computed lookup table that predicts where it is appropriate to reduce the resolution of a particular line without exceeding the required error tolerance. Line contributions are computed piecewise across a cascaded series of grids which are then interpolated and summed to derive the absorption coefficient. The algorithm is coded in MATLAB as part of a toolbox of radiative transfer functions for the analysis of planetary atmospheres and laboratory experiments.

Journal ArticleDOI
TL;DR: Refinement and coarsening indicators, which are easy to compute from the gradient of the least squares misfit function, are introduced to construct iteratively the zonation and to prevent overparametrization.
Abstract: When estimating hydraulic transmissivity the question of parametrization is of great importance. The transmissivity is assumed to be a piecewise constant space-dependent function and the unknowns are both the transmissivity values and the zonation, the partition of the domain whose parts correspond to the zones where the transmissivity is constant. Refinement and coarsening indicators, which are easy to compute from the gradient of the least squares misfit function, are introduced to construct iteratively the zonation and to prevent overparametrization.

Journal ArticleDOI
TL;DR: This work develops the SLEX model parallel to the Dahlhaus (1997, Ann. Statist., 25, 1–37) model of local stationarity, and it is shown that the two models are asymptotically mean square equivalent.
Abstract: We propose a new model for non-stationary random processes to represent time series with a time-varying spectral structure. Our SLEX model can be considered as a discrete time-dependent Cramer spectral representation. It is based on the so-called Smooth Localized complex EXponential basis functions which are orthogonal and localized in both time and frequency domains. Our model delivers a finite sample size representation of a SLEX process having a SLEX spectrum which is piecewise constant over time segments. In addition, we embed it into a sequence of models with a limit spectrum, a smoothly in time varying “evolutionary” spectrum. Hence, we develop the SLEX model parallel to the Dahlhaus (1997, Ann. Statist., 25, 1–37) model of local stationarity, and we show that the two models are asymptotically mean square equivalent. Moreover, to define both the growing complexity of our model sequence and the regularity of the SLEX spectrum we use a wavelet expansion of the spectrum over time. Finally, we develop theory on how to estimate the spectral quantities, and we briefly discuss how to form inference based on resampling (bootstrapping) made possible by the special structure of the SLEX model which allows for simple synthesis of non-stationary processes.

Posted Content
TL;DR: In this article, it was shown that a piecewise linear function on a convex domain in R^d can be represented as a boolean polynomial in terms of its linear components.
Abstract: It is shown that a piecewise linear function on a convex domain in R^d can be represented as a boolean polynomial in terms of its linear components.
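Concretely, "boolean polynomial" here means a lattice (max-min) expression in the linear components; a sketch of the representation, in our notation, is:

```latex
% If g_1, ..., g_n are the linear components of the piecewise linear function f
% on a convex domain D in R^d, then for suitable index sets S_1, ..., S_m:
f(x) \;=\; \max_{1 \le i \le m} \; \min_{j \in S_i} \, g_j(x),
  \qquad x \in D .
```

For example, f(x) = |x| on R has linear components x and -x and the representation max(x, -x).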

Journal Article
TL;DR: Adaptive Finite Element Methods (AFEM) are numerical procedures that approximate the solution to a partial differential equation (PDE) by piecewise polynomials on adaptively generated triangulations, as mentioned in this paper.
Abstract: Adaptive Finite Element Methods (AFEM) are numerical procedures that approximate the solution to a partial differential equation (PDE) by piecewise polynomials on adaptively generated triangulations. Only recently has any analysis of the convergence of these methods [10, 13] or their rates of convergence [2] become available. In the latter paper it is shown that a certain AFEM for solving Laplace's equation on a polygonal domain Ω ⊂ R^2 based on newest vertex bisection has an optimal rate of convergence in the following sense. If, for some s > 0 and for each n = 1, 2, . . ., the solution u can be approximated in the energy norm to order O(n^{-s}) by piecewise linear functions on a partition P obtained from n newest vertex bisections, then the adaptively generated solution will also use O(n) subdivisions (and floating point computations) and have the same rate of convergence. The question arises whether the class of functions A^s with this approximation rate can be described by classical measures of smoothness. The purpose of the present paper is to describe such approximation classes A^s by Besov smoothness.

Journal ArticleDOI
TL;DR: In this article, the initial-boundary value problem for linearized gravitational theory in harmonic coordinates is investigated, and the results are used to formulate computational algorithms for Cauchy evolution in 3D bounded domains.
Abstract: We investigate the initial-boundary value problem for linearized gravitational theory in harmonic coordinates. Rigorous techniques for hyperbolic systems are applied to establish well-posedness for various reductions of the system into a set of six wave equations. The results are used to formulate computational algorithms for Cauchy evolution in a 3-dimensional bounded domain. Numerical codes based upon these algorithms are shown to satisfy tests of robust stability for random constraint violating initial data and random boundary data; and shown to give excellent performance for the evolution of typical physical data. The results are obtained for plane boundaries as well as piecewise cubic spherical boundaries cut out of a Cartesian grid.

Journal ArticleDOI
TL;DR: The compactly supported radial basis functions (CSRBFs) are presented in solving a system of shallow water hydrodynamics equations and the resulting banded matrix has shown improvement in both ill-conditioning and computational efficiency.

Proceedings ArticleDOI
10 Dec 2002
TL;DR: In this article, an observer design procedure for a class of bi-modal piecewise affine systems is proposed, where the observer does not require information on the currently active dynamics of the piecewise linear system.
Abstract: In this paper we propose an observer design procedure for a class of bi-modal piecewise affine systems. The designed observers have the characteristic feature that they do not require information on the currently active dynamics of the piecewise linear system. A design procedure which guarantees global asymptotic stability of the estimation error is presented. It is shown that the applicability of the presented procedure is limited to continuous piecewise affine systems. Therefore, we present an observer design procedure, applicable also to discontinuous systems, which guarantees that the estimation error is bounded, with respect to the state bounds, asymptotically. Sliding motions in the observed system and the observer are discussed. The presented theory is illustrated with an example.

Posted Content
TL;DR: The minimization problem of a spectral measure is shown to be equivalent to the minimization of a suitable function which contains additional parameters, but displays analytical properties which allow for efficient minimization procedures.
Abstract: We study Spectral Measures of Risk from the perspective of portfolio optimization. We derive exact results which extend to general Spectral Measures M_φ the Pflug-Rockafellar-Uryasev methodology for the minimization of alpha-Expected Shortfall. The minimization problem of a spectral measure is shown to be equivalent to the minimization of a suitable function which contains additional parameters, but displays analytical properties (piecewise linearity and convexity in all arguments, absence of sorting subroutines) which allow for efficient minimization procedures. In doing so we also reveal a new picture where the classical risk-reward problem à la Markowitz (minimizing risks with constrained returns or maximizing returns with constrained risks) is shown to coincide with the unconstrained optimization of a single suitable spectral measure. In other words, minimizing a spectral measure turns out to be already an optimization process itself, where risk minimization and returns maximization cannot be disentangled from each other.
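For the Expected Shortfall special case, the device referred to above replaces the risk measure by a jointly convex, piecewise linear auxiliary function of the portfolio and one extra scalar; under one common sign convention (X the portfolio value, α the tail probability; our notation, not the paper's) it reads:

```latex
% Expected Shortfall as an unconstrained minimization over the auxiliary
% scalar \zeta, which plays the role of the Value at Risk at the optimum:
\mathrm{ES}_\alpha(X) \;=\; \min_{\zeta \in \mathbb{R}}
  \left\{ \zeta + \frac{1}{\alpha} \, \mathbb{E}\bigl[(-X - \zeta)_+\bigr] \right\} .
```

The paper's extension keeps this piecewise linear, convex structure for a general spectral measure M_φ, which is what makes the joint minimization over portfolio weights and auxiliary parameters tractable.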

Journal ArticleDOI
TL;DR: A nonlinear fuzzy H/sub /spl infin// guidance law based on a fuzzy model is proposed for tactical missiles pursuing maneuvering targets in three-dimensional (3-D) space to eliminate the effects of approximation error and external disturbances.
Abstract: A nonlinear H/sub /spl infin// guidance law based on a fuzzy model is proposed for tactical missiles pursuing maneuvering targets in three-dimensional (3-D) space. In the proposed guidance scheme, the relative motion equations between the missile and target are first interpolated piecewise by Takagi-Sugeno linear fuzzy models. Then, a nonlinear fuzzy H/sub /spl infin// guidance law is designed to eliminate the effects of approximation error and external disturbances to achieve the desired goal. The linear matrix inequality (LMI) technique is then employed to treat this H/sub /spl infin// optimal guidance design in consideration of control constraints. Finally, the problem is further transformed into a standard eigenvalue problem so that it can be efficiently solved via a convex optimization algorithm, which is available from a numerical computation software.

Journal ArticleDOI
TL;DR: Different mixed variational methods are proposed and studied in order to approximate with finite elements the unilateral problems arising in contact mechanics.
Abstract: In this paper, we propose and study different mixed variational methods in order to approximate with finite elements the unilateral problems arising in contact mechanics. The discretized unilateral conditions at the candidate contact interface are expressed by using either continuous piecewise linear or piecewise constant Lagrange multipliers in the saddle-point formulation. A priori error estimates are established and several numerical studies corresponding to the different choices of the discretized unilateral conditions are achieved.

Journal ArticleDOI
TL;DR: Using the theory of Large Deviations, it is shown that the sample size needed to calculate the optimal solution of stochastic programming problems where the objective function is given as an expected value of a convex piecewise linear random function is approximately proportional to the condition number.
Abstract: In this paper we consider stochastic programming problems where the objective function is given as an expected value of a convex piecewise linear random function. With an optimal solution of such a problem we associate a condition number which characterizes well or ill conditioning of the problem. Using the theory of Large Deviations, we show that the sample size needed to calculate the optimal solution of such a problem with a given probability is approximately proportional to the condition number.