
Showing papers on "Piecewise" published in 2011


Journal ArticleDOI
TL;DR: In this paper, passivity analysis is conducted for discrete-time stochastic neural networks with both Markovian jumping parameters and mixed time delays by introducing a Lyapunov functional that accounts for the mixed time delays.
Abstract: In this paper, passivity analysis is conducted for discrete-time stochastic neural networks with both Markovian jumping parameters and mixed time delays. The mixed time delays consist of both discrete and distributed delays. The Markov chain in the underlying neural networks is finite piecewise homogeneous. By introducing a Lyapunov functional that accounts for the mixed time delays, a delay-dependent passivity condition is derived in terms of the linear matrix inequality approach. The case of Markov chain with partially unknown transition probabilities is also considered. All the results presented depend upon not only discrete delay but also distributed delay. A numerical example is included to demonstrate the effectiveness of the proposed methods.

355 citations


Journal ArticleDOI
TL;DR: Sufficient conditions are given for the stability of linear switched systems with dwell time and with polytopic type parameter uncertainty; a Lyapunov function, in quadratic form, which is non-increasing at the switching instants, is assigned to each subsystem.
Abstract: Sufficient conditions are given for the stability of linear switched systems with dwell time and with polytopic type parameter uncertainty. A Lyapunov function, in quadratic form, which is non-increasing at the switching instants is assigned to each subsystem. During the dwell time, this function varies piecewise linearly in time after switching occurs. It becomes time invariant afterwards. This function leads to asymptotic stability conditions for the nominal set of subsystems that can be readily extended to the case where these subsystems suffer from polytopic type parameter uncertainties. The method proposed is then applied to stabilization via state-feedback both for the nominal and the uncertain cases.

333 citations


Journal ArticleDOI
TL;DR: A dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories is proposed and the Discrete Cosine Transform (DCT) is used as the object independent basis and it is demonstrated that it approaches Principal Component Analysis (PCA) for natural motions.
Abstract: Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes. These bases are object dependent and therefore have to be estimated anew for each video sequence. In contrast, we propose a dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories. We describe the dual relationship between the two approaches, showing that they both have equal power for representing 3D structure. We further show that the temporal smoothness in 3D trajectories alone can be used for recovering nonrigid structure from a moving camera. The principal advantage of expressing deforming 3D structure in trajectory space is that we can define an object independent basis. This results in a significant reduction in unknowns and corresponding stability in estimation. We propose the use of the Discrete Cosine Transform (DCT) as the object independent basis and empirically demonstrate that it approaches Principal Component Analysis (PCA) for natural motions. We report the performance of the proposed method, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions, including piecewise rigid motion, partially nonrigid motion (such as facial expressions), and highly nonrigid motion (such as a person walking or dancing).

262 citations
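The trajectory-basis idea above is easy to illustrate in isolation. The sketch below is not the authors' code; the trajectory, frame count, and basis size are invented for illustration. It projects a smooth 1D point track onto a small number of DCT basis trajectories and reconstructs it, showing the energy compaction the paper exploits:

```python
import numpy as np

def dct_basis(F, K):
    """First K orthonormal DCT-II basis trajectories of length F (as columns)."""
    t = np.arange(F)
    B = np.cos(np.pi * (2 * t[:, None] + 1) * np.arange(K)[None, :] / (2 * F))
    B[:, 0] *= np.sqrt(1.0 / F)
    B[:, 1:] *= np.sqrt(2.0 / F)
    return B  # shape (F, K)

# A smooth 1D trajectory over F frames (stand-in for one coordinate
# of a tracked 3D point).
F, K = 100, 10
t = np.linspace(0, 1, F)
traj = 0.5 * t + 0.2 * np.sin(2 * np.pi * t)

B = dct_basis(F, K)
coeffs = B.T @ traj   # project onto the K basis trajectories
recon = B @ coeffs    # reconstruct from only K coefficients

err = np.max(np.abs(recon - traj))  # small: the basis is object independent
```

Ten coefficients instead of one hundred samples, with small reconstruction error, is exactly the reduction in unknowns the abstract refers to.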


Journal ArticleDOI
TL;DR: A novel unifying algorithmic framework, DYNAMO (dynamic optimization platform), is introduced, designed to provide the quantum-technology community with a convenient MATLAB-based toolset for optimal control; it also gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art.
Abstract: For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and Krotov-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient matlab-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.

237 citations
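The notion of piecewise constant control amplitudes is concrete enough to sketch. The toy below is a hedged illustration, not DYNAMO's API; the qubit Hamiltonian, slice count, and pulse shape are assumptions. It composes the total propagator slice by slice and checks that a pi-pulse split into constant segments transfers |0> to |1>:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices for a single qubit (illustrative two-level system).
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def propagate(u, dt, H0, Hc):
    """Total propagator for piecewise constant control amplitudes u."""
    U = np.eye(2, dtype=complex)
    for uk in u:
        # The Hamiltonian is constant on each time slice, so each slice
        # contributes one matrix exponential.
        U = expm(-1j * (H0 + uk * Hc) * dt) @ U
    return U

# A pi-pulse split into N equal slices: total area sum(u_k) * dt = pi,
# driving Hc = sx/2 with H0 = 0 takes |0> to |1> up to a global phase.
N, dt = 20, 0.1
u = np.full(N, np.pi / (N * dt))
U = propagate(u, dt, H0=np.zeros((2, 2)), Hc=sx / 2)
p = abs(U[1, 0]) ** 2  # transfer probability |<1|U|0>|^2
```

GRAPE-style methods differentiate a fidelity built from such a product with respect to all u_k at once; Krotov-type methods update the slices sequentially.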


Journal ArticleDOI
TL;DR: The purpose of this study is to investigate multiregion graph cut image partitioning via kernel mapping of the image data; the method affords an effective alternative to complex modeling of the original image data while taking advantage of the computational benefits of graph cuts.
Abstract: The purpose of this study is to investigate multiregion graph cut image partitioning via kernel mapping of the image data. The image data is transformed implicitly by a kernel function so that the piecewise constant model of the graph cut formulation becomes applicable. The objective function contains an original data term to evaluate the deviation of the transformed data, within each segmentation region, from the piecewise constant model, and a smoothness, boundary preserving regularization term. The method affords an effective alternative to complex modeling of the original image data while taking advantage of the computational benefits of graph cuts. Using a common kernel function, energy minimization typically consists of iterating image partitioning by graph cut iterations and evaluations of region parameters via fixed point computation. A quantitative and comparative performance assessment is carried out over a large number of experiments using synthetic grey level data as well as natural images from the Berkeley database. The effectiveness of the method is also demonstrated through a set of experiments with real images of a variety of types such as medical, synthetic aperture radar, and motion maps.

219 citations


Journal ArticleDOI
TL;DR: A total variation model for Retinex is presented, which assumes spatial smoothness of the illumination and piecewise continuity of the reflection, with the total variation term employed in the model.
Abstract: Human vision has the ability to recognize color under varying illumination conditions. Retinex theory is introduced to explain how the human visual system perceives color. The main aim of this paper is to present a total variation model for Retinex. Different from the existing methods, we consider and study two important elements which include illumination and reflection. We assume spatial smoothness of the illumination and piecewise continuity of the reflection, where the total variation term is employed in the model. The existence of the solution of the model is shown in the paper. We employ a fast computation method to solve the proposed minimization problem. Numerical examples are presented to illustrate the effectiveness of the proposed model.

215 citations


Journal ArticleDOI
TL;DR: This paper uses higher-order piecewise interpolation polynomials to approximate the fractional integral and fractional derivatives, and uses the Simpson method to design a higher-order algorithm for fractional differential equations.

209 citations


Journal ArticleDOI
TL;DR: A new semiparametric multivariate joint model is proposed that relates multiple longitudinal outcomes to a time-to-event; key components of the model are modelled nonparametrically to allow for greater flexibility.
Abstract: Motivated by a real data example on renal graft failure, we propose a new semiparametric multivariate joint model that relates multiple longitudinal outcomes to a time-to-event. To allow for greater flexibility, key components of the model are modelled nonparametrically. In particular, for the subject-specific longitudinal evolutions we use a spline-based approach, the baseline risk function is assumed piecewise constant, and the distribution of the latent terms is modelled using a Dirichlet Process prior formulation. Additionally, we discuss the choice of a suitable parameterization, from a practitioner's point of view, to relate the longitudinal process to the survival outcome. Specifically, we present three main families of parameterizations, discuss their features, and present tools to choose between them.

200 citations


Journal ArticleDOI
TL;DR: This work presents an efficient and robust method for extracting curvature information, sharp features, and normal directions of a piecewise smooth surface from its point cloud sampling in a unified framework and describes a Monte-Carlo version of the method, which is applicable in any dimension.
Abstract: We present an efficient and robust method for extracting curvature information, sharp features, and normal directions of a piecewise smooth surface from its point cloud sampling in a unified framework. Our method is integral in nature and uses convolved covariance matrices of Voronoi cells of the point cloud which makes it provably robust in the presence of noise. We show that these matrices contain information related to curvature in the smooth parts of the surface, and information about the directions and angles of sharp edges around the features of a piecewise-smooth surface. Our method is applicable in both two and three dimensions, and can be easily parallelized, making it possible to process arbitrarily large point clouds, which was a challenge for Voronoi-based methods. In addition, we describe a Monte-Carlo version of our method, which is applicable in any dimension. We illustrate the correctness of both principal curvature information and feature extraction in the presence of varying levels of noise and sampling density on a variety of models. As a sample application, we use our feature detection method to segment point cloud samplings of piecewise-smooth surfaces.

175 citations


Posted Content
TL;DR: By applying diffusion to level set evolution (LSE), the resulting RD-LSE model remains stable under the simple finite difference method and is very easy to implement.
Abstract: This paper presents a novel reaction-diffusion (RD) method for implicit active contours, which is completely free of the costly re-initialization procedure in level set evolution (LSE). A diffusion term is introduced into LSE, resulting in an RD-LSE equation, from which a piecewise constant solution can be derived. In order to have a stable numerical solution of the RD-based LSE, we propose a two-step splitting method (TSSM) to iteratively solve the RD-LSE equation: first iterating the LSE equation, and then solving the diffusion equation. The second step regularizes the level set function obtained in the first step to ensure stability, and thus the complex and costly re-initialization procedure is completely eliminated from LSE. By applying diffusion to LSE, the RD-LSE model remains stable under the simple finite difference method, which is very easy to implement. The proposed RD method can be generalized to solve the LSE for both the variational and the PDE-based level set methods. The RD-LSE method shows very good performance on boundary anti-leakage, and it can be readily extended to high-dimensional level set methods. The extensive and promising experimental results on synthetic and real images validate the effectiveness of the proposed RD-LSE approach.

174 citations


Journal ArticleDOI
TL;DR: A new adaptive method for analyzing nonlinear and nonstationary data inspired by the empirical mode decomposition (EMD) method and the recently developed compressed sensing theory that is less sensitive to noise perturbation and the end effect compared with the original EMD method.
Abstract: We introduce a new adaptive method for analyzing nonlinear and nonstationary data. This method is inspired by the empirical mode decomposition (EMD) method and the recently developed compressed sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form {a(t)cos(θ(t))}, where a ≥ 0 is assumed to be smoother than cos(θ(t)) and θ is a piecewise smooth increasing function. We formulate this problem as a nonlinear L1 optimization problem. Further, we propose an iterative algorithm to solve this nonlinear optimization problem recursively. We also introduce an adaptive filter method to decompose data with noise. Numerical examples are given to demonstrate the robustness of our method and comparison is made with the EMD method. One advantage of performing such a decomposition is to preserve some intrinsic physical property of the signal, such as trend and instantaneous frequency. Our method shares many important properties of the original EMD method. Because our method is based on a solid mathematical formulation, its performance does not depend on numerical parameters such as the number of sifting iterations or the stopping criterion, which seem to have a major effect on the original EMD method. Our method is also less sensitive to noise perturbation and the end effect compared with the original EMD method.

Book ChapterDOI
18 Sep 2011
TL;DR: A generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration, which allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values.
Abstract: Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.

Journal ArticleDOI
TL;DR: This paper presents a new method with no phase errors for one-dimensional (1D) time-harmonic wave propagation problems using new ideas that hold promise for the multidimensional case.

Journal ArticleDOI
TL;DR: Inspired by recent work, a formulation for the piecewise linear relaxation of bilinear functions with a logarithmic number of binary variables is introduced, and the performance of this new formulation is computationally compared with the best-performing piecewise relaxations with a linear number of binary variables.

Journal ArticleDOI
TL;DR: A quasi-optimal a priori error estimate is established for interface problems whose solutions are only $H^{1+\alpha}$ smooth with $\alpha\in(0,1)$ and, hence, fill a theoretical gap of the DG method for elliptic problems with low regularity.
Abstract: Discontinuous Galerkin (DG) finite element methods were studied by many researchers for second-order elliptic partial differential equations, and a priori error estimates were established when the solution of the underlying problem is piecewise $H^{3/2+\epsilon}$ smooth with $\epsilon>0$. However, elliptic interface problems with intersecting interfaces do not possess such a smoothness. In this paper, we establish a quasi-optimal a priori error estimate for interface problems whose solutions are only $H^{1+\alpha}$ smooth with $\alpha\in(0,1)$ and, hence, fill a theoretical gap of the DG method for elliptic problems with low regularity. The second part of the paper deals with the design and analysis of robust residual- and recovery-based a posteriori error estimators. Theoretically, we show that the residual and recovery estimators studied in this paper are robust with respect to the DG norm, i.e., their reliability and efficiency bounds do not depend on the jump, provided that the distribution of coefficients is locally quasi-monotone.

Journal ArticleDOI
TL;DR: Novel PWC denoising methods are introduced and compared on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
Abstract: Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods, and comparisons between these methods performed on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
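Of the solver families listed, iterated running medians is the simplest to sketch. The following is a minimal illustration (the test signal, window half-width, and noise level are invented; this is not the paper's reference code):

```python
import numpy as np

def running_median(x, half_width):
    """One pass of a running (moving) median with reflected boundaries."""
    pad = np.r_[x[half_width:0:-1], x, x[-2:-half_width - 2:-1]]
    w = 2 * half_width + 1
    windows = np.lib.stride_tricks.sliding_window_view(pad, w)
    return np.median(windows, axis=1)

def iterated_running_median(x, half_width=5, n_iter=10):
    """Iterate the running median toward a fixed point -- one of the
    classic PWC denoising schemes surveyed in the paper."""
    for _ in range(n_iter):
        x = running_median(x, half_width)
    return x

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0, 1.0], 100)              # piecewise constant signal
noisy = truth + 0.3 * rng.standard_normal(truth.size)
denoised = iterated_running_median(noisy)
```

Unlike a linear moving average, the median suppresses noise within each constant level without smearing the jumps, which is why conventional linear filtering is called fundamentally unsuited above.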

Journal ArticleDOI
Lei Wu
TL;DR: In this article, a rigorous segment partition method was proposed to obtain a set of optimal segment points by minimizing the difference between chord and arc lengths, in order to derive a tighter piecewise linear approximation of QCCs and in turn a better UC solution as compared to the equipartition method.
Abstract: This letter provides a tighter piecewise linear approximation of generating units' quadratic cost curves (QCCs) for unit commitment (UC) problems. In order to facilitate the UC optimization process with efficient mixed-integer linear programing (MILP) solvers, QCCs are piecewise linearized for converting the original mixed-integer quadratic programming (MIQP) problem into an MILP problem. Traditionally, QCCs are piecewise linearized by evenly dividing the entire real power region into segments. This letter discusses a rigorous segment partition method for obtaining a set of optimal segment points by minimizing the difference between chord and arc lengths, in order to derive a tighter piecewise linear approximation of QCCs and, in turn, a better UC solution as compared to the equipartition method. Numerical test results show the effectiveness of the proposed method on a tighter piecewise linear approximation for better UC solutions.
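As a baseline for the comparison the letter draws, here is a minimal sketch of the traditional equipartition piecewise linearization of a quadratic cost curve (coefficients and power range are invented; the letter's chord-versus-arc-length breakpoint optimization is not reproduced). For a convex quadratic, the chord over a segment of width h overestimates the curve by at most a·h²/4, attained at the segment midpoint:

```python
import numpy as np

def pwl_equipartition(f, lo, hi, n_seg):
    """Piecewise linear interpolant of f with evenly spaced breakpoints."""
    xs = np.linspace(lo, hi, n_seg + 1)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

# Quadratic cost curve C(P) = a*P^2 + b*P + c (illustrative coefficients).
a, b, c = 0.004, 8.0, 50.0
cost = lambda P: a * P**2 + b * P + c

lo, hi, n_seg = 100.0, 500.0, 8
approx = pwl_equipartition(cost, lo, hi, n_seg)

P = np.linspace(lo, hi, 4001)
max_err = np.max(approx(P) - cost(P))  # chords overestimate a convex curve
h = (hi - lo) / n_seg                  # theory: worst-case error = a*h^2 / 4
```

The letter's point is that equal-width segments are not optimal: moving breakpoints to where the curve bends most (by equalizing the chord-arc length gap) tightens the approximation for the same number of segments.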

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new space of functions of fractional-order bounded variation, called the BV^α space, by using the Grünwald-Letnikov definition of the fractional-order derivative, which can improve the peak signal-to-noise ratio of the image, preserve textures and eliminate the staircase effect.

Journal ArticleDOI
TL;DR: This paper presents a novel divergence dubbed the Total Bregman divergence (TBD), which is intrinsically robust to outliers, a very desirable property in many applications, and derives the piecewise smooth active contour model for segmentation of DT-MRI using the TBD and presents several comparative results on real data.
Abstract: Divergence measures provide a means to measure the pairwise dissimilarity between “objects,” e.g., vectors and probability density functions (pdfs). Kullback-Leibler (KL) divergence and the square loss (SL) function are two examples of commonly used dissimilarity measures which along with others belong to the family of Bregman divergences (BD). In this paper, we present a novel divergence dubbed the Total Bregman divergence (TBD), which is intrinsically robust to outliers, a very desirable property in many applications. Further, we derive the TBD center, called the t-center (using the l1-norm), for a population of positive definite matrices in closed form and show that it is invariant to transformation from the special linear group. This t-center, which is also robust to outliers, is then used in tensor interpolation as well as in an active contour based piecewise constant segmentation of a diffusion tensor magnetic resonance image (DT-MRI). Additionally, we derive the piecewise smooth active contour model for segmentation of DT-MRI using the TBD and present several comparative results on real data.

Book
08 May 2011
TL;DR: In this book, the reduction principle for linear and quasi-linear systems with piecewise constant argument is studied, and differential equations with a small parameter and with piecewise constant argument are analyzed.
Abstract: 1. Introduction.- 2. Linear and quasi-linear systems with piecewise constant argument.- 3. The reduction principle for systems with piecewise constant argument.- 4. The small parameter and differential equations with piecewise constant argument.- 5. Stability.- 6. The state-dependent piecewise constant argument.- 7. Almost periodic solutions.- 8. Stability of neural networks.- 9. The blood pressure distribution.- 10. Integrate-and-fire biological oscillators.
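A differential equation with piecewise constant argument such as x'(t) = a x(t) + b x([t]) can be solved in closed form on each unit interval, since the greatest-integer argument [t] freezes there. The sketch below (parameters invented for illustration; not from the book) compares the closed-form recursion at integer times against brute-force Euler integration:

```python
import numpy as np

def epca_exact(a, b, x0, n_steps):
    """Exact solution of x'(t) = a*x(t) + b*x([t]) at integer times.

    On [n, n+1) the argument [t] = n is constant, so the equation is linear
    with constant forcing b*x(n); integrating it in closed form gives
    x(n+1) = (e^a + (b/a)(e^a - 1)) * x(n).
    """
    m = np.exp(a) + (b / a) * (np.exp(a) - 1.0)
    return x0 * m ** np.arange(n_steps + 1)

def epca_euler(a, b, x0, n_steps, h=1e-4):
    """Brute-force Euler integration of the same equation, for comparison."""
    per = int(round(1.0 / h))
    x, vals = x0, [x0]
    for _ in range(n_steps):
        xn = x  # x([t]) frozen over the whole unit interval
        for _ in range(per):
            x += h * (a * x + b * xn)
        vals.append(x)
    return np.array(vals)

a, b, x0 = -1.0, 0.5, 1.0
exact = epca_exact(a, b, x0, 5)
euler = epca_euler(a, b, x0, 5)
```

Stability questions like those treated in the book reduce to whether the multiplier m = e^a + (b/a)(e^a - 1) has modulus below one.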

Journal ArticleDOI
TL;DR: OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime, for head models with piecewise constant conductivity, consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem.
Abstract: To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime, for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip is operational since release 2.0.

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper reformulates the 3D reconstruction of deformable surfaces from monocular video sequences as a labeling problem in which a set of labels, instead of a single one, is assigned to each variable, proposes a mathematical formulation of this new model, and shows how it can be efficiently optimized with a variant of α-expansion.
Abstract: In this paper we reformulate the 3D reconstruction of deformable surfaces from monocular video sequences as a labeling problem. We solve simultaneously for the assignment of feature points to multiple local deformation models and the fitting of models to points to minimize a geometric cost, subject to a spatial constraint that neighboring points should also belong to the same model. Piecewise reconstruction methods rely on features shared between models to enforce global consistency on the 3D surface. To account for this overlap between regions, we consider a super-set of the classic labeling problem in which a set of labels, instead of a single one, is assigned to each variable. We propose a mathematical formulation of this new model and show how it can be efficiently optimized with a variant of α-expansion. We demonstrate how this framework can be applied to Non-Rigid Structure from Motion and leads to simpler explanations of the same data. Compared to existing methods run on the same data, our approach has up to half the reconstruction error, and is more robust to over-fitting and outliers.

Journal ArticleDOI
01 Sep 2011-Calcolo
TL;DR: A new divergence-free finite element on 3D Powell–Sabin grids is constructed for Stokes equations, where the velocity is approximated by continuous piecewise quadratic polynomials while the pressure is approximated by discontinuous piecewise linear polynomials on the same grid.
Abstract: Given a tetrahedral grid in 3D, a Powell–Sabin grid can be constructed by refining each original tetrahedron into 12 subtetrahedra. A new divergence-free finite element on 3D Powell–Sabin grids is constructed for Stokes equations, where the velocity is approximated by continuous piecewise quadratic polynomials while the pressure is approximated by discontinuous piecewise linear polynomials on the same grid. To be precise, the finite element space for the pressure is exactly the divergence of the corresponding space for the velocity. Therefore, the resulting finite element solution for the velocity is pointwise divergence-free, including the inter-element boundary. By establishing the inf-sup condition, the finite element is stable and of the optimal order. Numerical tests are provided.

Journal ArticleDOI
TL;DR: In this paper, a dynamic piecewise linear model is proposed to represent dc transmission losses in optimal scheduling problems, where the linear cuts to approximate quadratic losses in each transmission line are adjusted iteratively as the optimization problem is solved.
Abstract: This paper proposes a dynamic piecewise linear model to represent dc transmission losses in optimal scheduling problems. An iterative procedure is proposed, where the linear cuts to approximate quadratic losses in each transmission line are adjusted iteratively as the optimization problem is solved. Applications of this approach to the network constrained short-term hydrothermal scheduling problem and to static dc optimal power flow problems yield a higher accuracy in representing line transmission losses as compared to other approaches, such as a priori iterative estimation, static piecewise linear model and successive linear programming. Study cases for a large-scale system also show reasonable results regarding CPU times.

Posted Content
TL;DR: In this paper, the existence of global-in-time unique solutions for the Navier-Stokes equations with piecewise constant initial densities has been shown under some smallness assumption on the data.
Abstract: Here we investigate the Cauchy problem for the inhomogeneous Navier-Stokes equations in the whole $n$-dimensional space. Under some smallness assumption on the data, we show the existence of global-in-time unique solutions in a critical functional framework. The initial density is required to belong to the multiplier space of $\dot B^{n/p-1}_{p,1}(\mathbb{R}^n)$. In particular, piecewise constant initial densities are admissible data provided the jump at the interface is small enough, and generate global unique solutions with piecewise constant densities. Using Lagrangian coordinates is the key to our results as it enables us to solve the system by means of the basic contraction mapping theorem. As a consequence, conditions for uniqueness are the same as for existence.

Journal ArticleDOI
TL;DR: The elements are based on Bernstein polynomials and are the first to achieve optimal complexity for the standard finite element spaces on simplicial elements.
Abstract: Algorithms are presented that enable the element matrices for the standard finite element space, consisting of continuous piecewise polynomials of degree $n$ on simplicial elements in $\mathbb{R}^d$, to be computed in optimal complexity $\mathcal{O}(n^{2d})$. The algorithms (i) take into account numerical quadrature; (ii) are applicable to nonlinear problems; and (iii) do not rely on precomputed arrays containing values of one-dimensional basis functions at quadrature points (although these can be used if desired). The elements are based on Bernstein polynomials and are the first to achieve optimal complexity for the standard finite element spaces on simplicial elements.
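The optimal-complexity assembly algorithms are beyond a short sketch, but the Bernstein form itself is easy to illustrate: a polynomial given by its Bernstein coefficients can be evaluated stably by de Casteljau's algorithm. This toy (not the paper's algorithm) evaluates f(t) = t², whose degree-2 Bernstein coefficients are (0, 0, 1):

```python
import numpy as np

def de_casteljau(coeffs, t):
    """Evaluate a polynomial in Bernstein form at t via de Casteljau's
    algorithm: repeated convex combinations of adjacent coefficients
    (numerically stable, O(n^2) per point)."""
    beta = np.array(coeffs, dtype=float)
    for _ in range(len(beta) - 1):
        beta = (1.0 - t) * beta[:-1] + t * beta[1:]
    return beta[0]

# f(t) = t^2 equals the single Bernstein basis function B_{2,2}(t),
# so its degree-2 Bernstein coefficients are (0, 0, 1).
vals = [de_casteljau([0.0, 0.0, 1.0], t) for t in (0.0, 0.25, 0.5, 1.0)]
```

The structural properties of this basis (recursive convex combinations, sparsity under differentiation) are what the paper exploits to reach $\mathcal{O}(n^{2d})$ element assembly.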

Journal ArticleDOI
TL;DR: The local dynamics of all possible two-folds in three dimensions are derived, including nonlinear effects around certain bifurcations, finding that they admit a flow exhibiting chaotic but nondeterministic dynamics.
Abstract: A vector field is piecewise smooth if its value jumps across a hypersurface, and a two-fold singularity is a point where the flow is tangent to the hypersurface from both sides. Two-folds are generic in piecewise smooth systems of three or more dimensions. We derive the local dynamics of all possible two-folds in three dimensions, including nonlinear effects around certain bifurcations, finding that they admit a flow exhibiting chaotic but nondeterministic dynamics. In cases where the flow passes through the two-fold, upon reaching the singularity it is unique in neither forward nor backward time, meaning the causal link between inward and outward dynamics is severed. In one scenario this occurs recurrently. The resulting flow makes repeated, but nonperiodic, excursions from the singularity, whose path and amplitude is not determined by previous excursions. We show that this behavior is robust and has many of the properties associated with chaos. Local geometry reveals that the chaotic behavior can be eliminated by varying a single parameter: the angular jump of the vector field across the two-fold.

Book ChapterDOI
TL;DR: Quasi-optimality is shown for a fixed number of degrees of freedom per wavelength if the mesh size and the approximation order are selected such that $kh/p$ is sufficiently small and $p = O(\log k)$, and, additionally, appropriate mesh refinement is used near the vertices.
Abstract: We review the stability properties of several discretizations of the Helmholtz equation at large wavenumbers. For a model problem in a polygon, a complete $k$-explicit stability (including $k$-explicit stability of the continuous problem) and convergence theory for high order finite element methods is developed. In particular, quasi-optimality is shown for a fixed number of degrees of freedom per wavelength if the mesh size $h$ and the approximation order $p$ are selected such that $kh/p$ is sufficiently small and $p = O(\log k)$, and, additionally, appropriate mesh refinement is used near the vertices. We also review the stability properties of two classes of numerical schemes that use piecewise solutions of the homogeneous Helmholtz equation, namely, Least Squares methods and Discontinuous Galerkin (DG) methods. The latter includes the Ultra Weak Variational Formulation.

Journal ArticleDOI
TL;DR: A numerical method for a generalized Black-Scholes equation, used for option pricing, based on a central difference spatial discretization on a piecewise uniform mesh and an implicit time stepping technique that efficiently treats the singularities of the non-smooth payoff function.
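A piecewise uniform mesh of the kind mentioned concentrates grid points in a band around the strike, where the payoff has a kink. The sketch below is a generic construction under invented parameters (the paper's specific transition-point choice is not reproduced):

```python
import numpy as np

def piecewise_uniform_mesh(x_min, x_max, center, width, n):
    """Piecewise uniform mesh with n intervals: half of them packed into
    [center - width, center + width], where the payoff kink sits, and the
    rest spread uniformly over the two outer regions."""
    n_in = n // 2
    n_left = (n - n_in) // 2
    n_right = n - n_in - n_left
    left = np.linspace(x_min, center - width, n_left + 1)
    mid = np.linspace(center - width, center + width, n_in + 1)
    right = np.linspace(center + width, x_max, n_right + 1)
    # unique() drops the duplicated junction points.
    return np.unique(np.concatenate([left, mid, right]))

# Illustrative values: asset price range [0, 300], strike at 100.
mesh = piecewise_uniform_mesh(0.0, 300.0, center=100.0, width=20.0, n=40)
h = np.diff(mesh)  # fine spacing inside the band, coarse outside
```

Coupled with implicit time stepping, such a mesh resolves the non-smooth payoff near the strike without spending grid points where the solution is smooth.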

Journal ArticleDOI
TL;DR: In this article, it was shown that if γ is piecewise constant with a bounded known number of unknown values, then Lipschitz continuity of γ from Λγ holds.
Abstract: In this article we investigate the boundary value problem where γ is a complex-valued L^∞ coefficient satisfying a strong ellipticity condition. In electrical impedance tomography, γ represents the admittance of a conducting body. An interesting issue is the one of determining γ uniquely and in a stable way from the knowledge of the Dirichlet-to-Neumann map Λγ. Under the above general assumptions this problem is an open issue. In this article we prove that, if we assume a priori that γ is piecewise constant with a bounded known number of unknown values, then Lipschitz continuity of γ from Λγ holds.