
Showing papers on "Iterative method published in 2003"


Book
01 Apr 2003
TL;DR: This book presents iterative methods for solving sparse linear systems, progressing from basic iterative and projection methods through Krylov subspace methods to preconditioning techniques, parallel implementations, multigrid, and domain decomposition methods.
Abstract: Preface 1. Background in linear algebra 2. Discretization of partial differential equations 3. Sparse matrices 4. Basic iterative methods 5. Projection methods 6. Krylov subspace methods Part I 7. Krylov subspace methods Part II 8. Methods related to the normal equations 9. Preconditioned iterations 10. Preconditioning techniques 11. Parallel implementations 12. Parallel preconditioners 13. Multigrid methods 14. Domain decomposition methods Bibliography Index.
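
The book's early chapters build up from classical splitting methods; as a minimal illustration of the "basic iterative methods" family it covers, here is a Jacobi iteration sketch (our example, not code from the book):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b with the Jacobi splitting A = D + (A - D)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)                      # diagonal part of A
    R = A - np.diagflat(D)              # off-diagonal remainder
    for _ in range(max_iter):
        x_new = (b - R @ x) / D         # x_{k+1} = D^{-1} (b - R x_k)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Example: a diagonally dominant system, for which Jacobi converges
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b))  # close to np.linalg.solve(A, b)
```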

13,484 citations


Posted Content
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted l^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such l^p-penalized problems will have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
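
The iteration described amounts to a gradient (Landweber) step followed by coordinate-wise shrinkage. A minimal sketch for the p = 1 case (soft thresholding), with a toy operator scaled so that ||A|| < 1 as the convergence analysis assumes:

```python
import numpy as np

def soft_threshold(x, tau):
    """Component-wise soft thresholding: the shrinkage for the l^1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def thresholded_landweber(A, b, tau, n_iter=500):
    """Landweber iteration with shrinkage at each step (p = 1 case).

    Minimizes ||Ax - b||^2 + 2*tau*||x||_1, assuming the operator is
    scaled so that ||A|| < 1.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (b - A @ x), tau)
    return x

# Toy example with a random operator scaled to norm < 1 (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
A /= 1.1 * np.linalg.svd(A, compute_uv=False)[0]   # enforce ||A|| < 1
x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = thresholded_landweber(A, b, tau=0.01)      # sparse reconstruction
```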

3,640 citations


Journal ArticleDOI
TL;DR: It is proved that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP.
Abstract: The concave-convex procedure (CCCP) is a way to construct discrete-time iterative dynamical systems that are guaranteed to monotonically decrease global optimization/energy functions. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP. We show that many existing neural network and mean-field theory algorithms are also examples of CCCP. The generalized iterative scaling algorithm and Sinkhorn's algorithm can also be expressed as CCCP by changing variables. CCCP can be used both as a new way to understand, and prove the convergence of, existing optimization algorithms and as a procedure for generating new algorithms.
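
As a toy illustration of the procedure (our example, not from the paper): split E(x) = E_vex(x) + E_cave(x) into convex and concave parts and obtain the next iterate by solving grad E_vex(x_{t+1}) = -grad E_cave(x_t), which monotonically decreases E:

```python
import numpy as np

# CCCP on E(x) = x^4/4 - x^2, split as E_vex = x^4/4 (convex)
# and E_cave = -x^2 (concave). The CCCP update solves
#   grad E_vex(x_{t+1}) = -grad E_cave(x_t)  =>  x_{t+1}^3 = 2 x_t.
x = 3.0                      # arbitrary positive starting point
for t in range(30):
    x = np.cbrt(2.0 * x)     # closed-form convex subproblem
print(x)                     # converges to sqrt(2), a minimizer of E
```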

1,253 citations


Journal ArticleDOI
TL;DR: In this paper, an iterative method for potential inversion from distribution functions developed for simple liquid systems can be generalized to polymer systems, using the differences in the potentials of mean force between the distribution functions generated from a guessed potential and the true distribution functions to improve the effective potential successively.
Abstract: We demonstrate how an iterative method for potential inversion from distribution functions developed for simple liquid systems can be generalized to polymer systems. It uses the differences in the potentials of mean force between the distribution functions generated from a guessed potential and the true distribution functions to improve the effective potential successively. The optimization algorithm is very powerful: convergence is reached for every trial function in a few iterations. As an extensive test case we coarse-grained an atomistic all-atom model of polyisoprene (PI) using a 13:1 reduction of the degrees of freedom. This procedure was performed for PI solutions as well as for a PI melt. Comparisons of the obtained force fields are drawn. They prove that it is not possible to use a single force field for different concentration regimes. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1624–1636, 2003
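
In sketch form, the update described (our notation; commonly written V <- V + kT ln(g/g_target)) corrects the tabulated pair potential by the difference of potentials of mean force:

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, damping=1.0, eps=1e-12):
    """One iteration of potential inversion from distribution functions.

    Corrects a guessed pair potential V(r), tabulated on a grid, by the
    difference of potentials of mean force between the distribution
    g_current generated by V and the target distribution:
        V <- V + damping * kT * ln(g_current / g_target)
    """
    correction = kT * np.log((g_current + eps) / (g_target + eps))
    return V + damping * correction

# Usage (schematic): inside a loop, run a simulation with V to measure
# g_current, then call ibi_update until g_current matches g_target.
```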

1,125 citations


Journal ArticleDOI
TL;DR: In this article, a review of existing image reconstruction algorithms for electrical capacitance tomography (ECT) is presented, including linear back-projection, singular value decomposition, Tikhonov regularization, Newton-Raphson, steepest descent method, Landweber iteration, conjugate gradient method, algebraic reconstruction techniques, simultaneous iterative reconstruction techniques and model-based reconstruction.
Abstract: Electrical capacitance tomography (ECT) is used to image cross-sections of industrial processes containing dielectric material. This technique has been under development for more than a decade. The task of image reconstruction for ECT is to determine the permittivity distribution and hence material distribution over the cross-section from capacitance measurements. There are three principal difficulties with image reconstruction for ECT: (1) the relationship between the permittivity distribution and capacitance is non-linear and the electric field is distorted by the material present, the so-called 'soft-field' effect; (2) the number of independent measurements is limited, leading to an under-determined problem and (3) the inverse problem is ill posed and ill conditioned, making the solution sensitive to measurement errors and noise. Regularization methods are needed to treat this ill-posedness. This paper reviews existing image reconstruction algorithms for ECT, including linear back-projection, singular value decomposition, Tikhonov regularization, Newton–Raphson, iterative Tikhonov, the steepest descent method, Landweber iteration, the conjugate gradient method, algebraic reconstruction techniques, simultaneous iterative reconstruction techniques and model-based reconstruction. Some of these algorithms are examined by simulation and experiment for typical permittivity distributions. Future developments in image reconstruction for ECT are discussed.

1,082 citations


Proceedings ArticleDOI
Stella X. Yu, Jianbo Shi
13 Oct 2003
TL;DR: This work proposes a principled account of multiclass spectral clustering, solving a relaxed continuous optimization problem by eigen-decomposition and clarifying the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms.
Abstract: We propose a principled account of multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigen-decomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported.

1,028 citations


Proceedings ArticleDOI
09 Dec 2003
TL;DR: Considers the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., one that asymptotically computes the average of the initial values given at the nodes.
Abstract: We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear iteration can be cast as a semidefinite program, and therefore efficiently and globally solved. These optimal linear iterations are often substantially faster than several simple heuristics that are based on the Laplacian matrix of the associated graph.
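
The iteration in question is simply x(t+1) = W x(t) for a weight matrix W compatible with the graph. A sketch using fixed Laplacian-based weights, one of the simple heuristics the SDP-optimized W is compared against:

```python
import numpy as np

def average_consensus(adj, x0, alpha, n_steps=200):
    """Linear averaging iteration x(t+1) = W x(t) on an undirected graph.

    W = I - alpha*L (L the graph Laplacian) is symmetric with rows
    summing to one, so x(t) converges to the average of x0 for small
    enough alpha. The paper instead optimizes W itself via an SDP.
    """
    L = np.diag(adj.sum(axis=1)) - adj     # graph Laplacian
    W = np.eye(len(x0)) - alpha * L
    x = x0.astype(float)
    for _ in range(n_steps):
        x = W @ x
    return x

# Path graph on 4 nodes; all entries converge to mean(x0) = 2.5
adj = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
print(average_consensus(adj, np.array([1.0, 2.0, 3.0, 4.0]), alpha=0.3))
```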

927 citations


Journal ArticleDOI
TL;DR: Numerical results demonstrate that the proposed low complexity algorithms offer comparable performance with an existing iterative algorithm.
Abstract: The paper studies the problem of finding an optimal subcarrier and power allocation strategy for downlink communication to multiple users in an orthogonal-frequency-division multiplexing-based wireless system. The problem of minimizing total power consumption with constraints on bit-error rate and transmission rate for users requiring different classes of service is formulated and simple algorithms with good performance are derived. The problem of joint allocation is divided into two steps. In the first step, the number of subcarriers that each user gets is determined based on the users' average signal-to-noise ratio. The algorithm is shown to find the distribution of subcarriers that minimizes the total power required when every user experiences a flat-fading channel. In the second stage of the algorithm, it finds the best assignment of subcarriers to users. Two different approaches are presented, the rate-craving greedy algorithm and the amplitude-craving greedy algorithm. A single cell with one base station and many mobile stations is considered. Numerical results demonstrate that the proposed low complexity algorithms offer comparable performance with an existing iterative algorithm.
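
A toy sketch in the spirit of the second-stage "amplitude-craving greedy" assignment (our simplification; the paper's algorithms differ in detail): each subcarrier goes to the user with the best gain among users whose subcarrier quota from step one is not yet filled.

```python
import numpy as np

def amplitude_craving_greedy(gains, quota):
    """Assign each subcarrier to the user with the largest channel gain
    among users that still have room in their quota (the per-user
    subcarrier counts decided in step one from average SNR).

    gains: (n_users, n_subcarriers) channel gain magnitudes.
    quota: number of subcarriers each user should receive.
    """
    n_users, n_sub = gains.shape
    remaining = quota.copy()
    assignment = -np.ones(n_sub, dtype=int)
    # Visit subcarriers in order of their best gain so strong channels
    # are claimed first (a greedy, "amplitude-craving" order).
    order = np.argsort(-gains.max(axis=0))
    for s in order:
        for u in np.argsort(-gains[:, s]):
            if remaining[u] > 0:
                assignment[s] = u
                remaining[u] -= 1
                break
    return assignment

gains = np.random.default_rng(1).rayleigh(size=(3, 8))
print(amplitude_craving_greedy(gains, quota=np.array([3, 3, 2])))
```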

709 citations


Book
11 Aug 2003
TL;DR: In this book, the authors present the most convincing case ever made for iterative development, along with a concise, information-packed summary of the key ideas that drive all agile and iterative processes and details of four noteworthy iterative methods: Scrum, XP, RUP, and Evo.
Abstract: Agile/iterative methods: from business case to successful implementation. This is the definitive guide for managers and students to agile and iterative development methods: what they are, how they work, how to implement them, and why you should. Using statistically significant research and large-scale case studies, noted methods expert Craig Larman presents the most convincing case ever made for iterative development. Larman offers a concise, information-packed summary of the key ideas that drive all agile and iterative processes, with the details of four noteworthy iterative methods: Scrum, XP, RUP, and Evo. Coverage includes: compelling evidence that iterative methods reduce project risk; frequently asked questions; agile and iterative values and practices; dozens of useful iterative and agile practice tips; new management skills for agile/iterative project leaders; and key practices of Scrum, XP, RUP, and Evo. Whether you're an IT executive, project manager, student of software engineering, or developer, Craig Larman will help you understand the promise of agile/iterative development, sell it throughout your organization, and transform the promise into reality.

696 citations


Proceedings ArticleDOI
24 Apr 2003
TL;DR: The Sigma method is introduced as a new method for finding best local guides for each particle of the population from a set of Pareto-optimal solutions and the results are compared with the results of a multi-objective evolutionary algorithm (MOEA).
Abstract: In multi-objective particle swarm optimization (MOPSO) methods, selecting the best local guide (the global best particle) for each particle of the population from a set of Pareto-optimal solutions has a great impact on the convergence and diversity of solutions, especially when optimizing problems with a high number of objectives. This paper introduces the Sigma method as a new method for finding the best local guides for each particle of the population. The Sigma method is implemented and compared with another method, which uses the strategy of an existing MOPSO method for finding the local guides. These methods are examined on different test functions, and the results are compared with those of a multi-objective evolutionary algorithm (MOEA).

679 citations


Book
09 Jun 2003
TL;DR: This book presents basic iteration methods and Krylov subspace methods for linear systems, including conjugate gradients, GMRES, MINRES, Bi-CG and BI-CGSTAB, together with preconditioning, the solution of singular systems, and the solution of f(A)x = b with Krylov subspace information.
Abstract: Preface 1. Introduction 2. Mathematical preliminaries 3. Basic iteration methods 4. Construction of approximate solutions 5. The conjugate gradients method 6. GMRES and MINRES 7. Bi-conjugate gradients 8. How serious is irregular convergence? 9. BI-CGSTAB 10. Solution of singular systems 11. Solution of f(A)x = b with Krylov subspace information 12. Miscellaneous 13. Preconditioning References Index.

Journal ArticleDOI
TL;DR: In this paper, an iterative algorithm is proposed that generates a sequence (x_n) from an arbitrary initial x_0 ∈ H, which converges in norm to the unique solution of the quadratic minimization problem.
Abstract: Assume that C_1, . . . , C_N are N closed convex subsets of a real Hilbert space H having a nonempty intersection C. Assume also that each C_i is the fixed point set of a nonexpansive mapping T_i of H. We devise an iterative algorithm which generates a sequence (x_n) from an arbitrary initial x_0 ∈ H. The sequence (x_n) is shown to converge in norm to the unique solution of the quadratic minimization problem min_{x∈C} (1/2)⟨Ax, x⟩ − ⟨x, u⟩, where A is a bounded linear strongly positive operator on H and u is a given point in H. Quadratic–quadratic minimization problems are also discussed.
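
One common form of such an iteration (a hedged reconstruction from the abstract, not a verbatim statement of the paper's scheme) cycles through the mappings T_i while damping toward the minimizer of the quadratic:

```latex
x_{n+1} = \bigl(I - \alpha_{n+1} A\bigr)\, T_{[n+1]}\, x_n + \alpha_{n+1} u,
\qquad T_{[n]} := T_{n \bmod N},
```

where the step sizes satisfy conditions such as α_n → 0 and Σ_n α_n = ∞.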

Proceedings ArticleDOI
04 Jun 2003
TL;DR: A heuristic is presented for minimizing the rank of a positive semidefinite matrix over a convex set, using the logarithm of the determinant as a smooth approximation for rank; it is readily extended to handle general matrices.
Abstract: We present a heuristic for minimizing the rank of a positive semidefinite matrix over a convex set. We use the logarithm of the determinant as a smooth approximation for rank, and locally minimize this function to obtain a sequence of trace minimization problems. We then present a lemma that relates the rank of any general matrix to that of a corresponding positive semidefinite one. Using this, we readily extend the proposed heuristic to handle general matrices. We examine the vector case as a special case, where the heuristic reduces to an iterative l1-norm minimization technique. As practical applications of the rank minimization problem and our heuristic, we consider two examples: minimum-order system realization with time-domain constraints, and finding lowest-dimension embedding of points in a Euclidean space from noisy distance data.
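
The heuristic works by linearizing log det(X + δI) at the current iterate, which yields a sequence of weighted trace minimizations. A sketch using cvxpy, with a hypothetical constraint set standing in for the application-specific convex set:

```python
import numpy as np
import cvxpy as cp

def logdet_rank_heuristic(n, constraints_fn, delta=1e-4, n_iter=10):
    """Iterated linearization of log det(X + delta*I).

    Each step solves  minimize trace(W_k @ X)  over the convex set,
    with W_k = (X_k + delta*I)^{-1}: the local linearization of the
    smooth log-det surrogate for rank.
    """
    X_k = np.eye(n)
    for _ in range(n_iter):
        W = np.linalg.inv(X_k + delta * np.eye(n))
        X = cp.Variable((n, n), PSD=True)
        prob = cp.Problem(cp.Minimize(cp.trace(W @ X)), constraints_fn(X))
        prob.solve()
        X_k = X.value
    return X_k

# Illustrative (hypothetical) constraint set: fix a few entries of X
X_low = logdet_rank_heuristic(
    4, lambda X: [X[0, 0] == 1, X[1, 1] == 1, X[0, 1] == 0.9])
print(np.linalg.eigvalsh(X_low).round(4))   # spectrum tends toward low rank
```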

Book
28 Jul 2003
TL;DR: This book covers higher-order finite element discretization, from hierarchic master elements of arbitrary order and projection-based interpolation to higher-order numerical quadrature, the numerical solution of finite element equations, and automatic goal-oriented hp-adaptivity.
Abstract: INTRODUCTION Finite Elements Orthogonal Polynomials A One-Dimensional Example HIERARCHIC MASTER ELEMENTS OF ARBITRARY ORDER De Rham Diagram H^1-Conforming Approximations H(curl)-Conforming Approximations H(div)-Conforming Approximations L^2-Conforming Approximations HIGHER-ORDER FINITE ELEMENT DISCRETIZATION Projection-Based Interpolation on Reference Domains Transfinite Interpolation Revisited Construction of Reference Maps Projection-Based Interpolation on Physical Mesh Elements Technology of Discretization in Two and Three Dimensions Constrained Approximation Selected Software-Technical Aspects HIGHER-ORDER NUMERICAL QUADRATURE One-Dimensional Reference Domain K(a) Reference Quadrilateral K(q) Reference Triangle K(t) Reference Brick K(B) Reference Tetrahedron K(T) Reference Prism K(P) NUMERICAL SOLUTION OF FINITE ELEMENT EQUATIONS Direct Methods for Linear Algebraic Equations Iterative Methods for Linear Algebraic Equations Choice of the Method Solving Initial Value Problems for ordinary Differential Equations MESH OPTIMIZATION, REFERENCE SOLUTIONS, AND hp-ADAPTIVITY Automatic Mesh Optimization in One Dimension Adaptive Strategies Based on Automatic Mesh Optimization Goal-Oriented Adaptivity Automatic Goal-Oriented h-, p-, and hp-Adaptivity Automatic Goal-Oriented hp-Adaptivity in Two Dimensions

Journal ArticleDOI
Alan A. Coelho1
TL;DR: Comparison with three of the most popular indexing programs, namely ITO, DICVOL91 and TREOR90, has shown that the present method as implemented in the program TOPAS is more successful at indexing simulated data.
Abstract: A fast method for indexing powder diffraction patterns has been developed for large and small lattices of all symmetries. The method is relatively insensitive to impurity peaks and missing high d-spacings: on simulated data, little effect in terms of successful indexing has been observed when one in three d-spacings is randomly removed. Comparison with three of the most popular indexing programs, namely ITO, DICVOL91 and TREOR90, has shown that the present method as implemented in the program TOPAS is more successful at indexing simulated data. Also significant is that the present method performs well on typically noisy data with large diffractometer zero errors. Critical to its success, the present method uses singular value decomposition in an iterative manner for solving linear equations relating hkl values to d-spacings.
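
The linear systems in question relate hkl triples to observed d-spacings through the reciprocal-cell parameters; for an orthorhombic cell, for example, 1/d² = h²/a² + k²/b² + l²/c². A sketch of one such SVD solve (our illustration, far simpler than the full indexing method):

```python
import numpy as np

# For an orthorhombic lattice, 1/d^2 = h^2*A + k^2*B + l^2*C with
# A = 1/a^2, B = 1/b^2, C = 1/c^2: linear in (A, B, C) once hkl are
# hypothesized. SVD gives a noise-tolerant least-squares solution.
hkl = np.array([[1,0,0],[0,1,0],[0,0,1],[1,1,0],[1,0,1],[2,1,0]])
a, b, c = 4.0, 5.0, 6.0                      # "true" cell (illustrative)
q_obs = hkl[:,0]**2/a**2 + hkl[:,1]**2/b**2 + hkl[:,2]**2/c**2
q_obs += np.random.default_rng(0).normal(0, 1e-4, len(q_obs))  # noise

M = hkl.astype(float) ** 2                   # design matrix (h^2, k^2, l^2)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
params = Vt.T @ ((U.T @ q_obs) / s)          # pseudo-inverse solve
print(1 / np.sqrt(params))                   # recovered a, b, c
```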

Journal ArticleDOI
TL;DR: In this paper, an asymptotic iteration method for solving second-order homogeneous linear differential equations of the form y'' = λ_0(x)y' + s_0(x)y is introduced.
Abstract: An asymptotic iteration method for solving second-order homogeneous linear differential equations of the form y'' = λ_0(x)y' + s_0(x)y is introduced, where λ_0(x) ≠ 0 and s_0(x) are C^∞ functions. Applications to Schrödinger-type problems, including some with highly singular potentials, are presented.
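
The method's coefficient recursion takes the following standard form (a sketch; see the paper for the precise termination conditions):

```latex
\lambda_n(x) = \lambda_{n-1}'(x) + s_{n-1}(x) + \lambda_0(x)\,\lambda_{n-1}(x),
\qquad
s_n(x) = s_{n-1}'(x) + s_0(x)\,\lambda_{n-1}(x),
```

with the iteration stopped when the "asymptotic" condition s_n/λ_n = s_{n-1}/λ_{n-1} ≡ α(x) holds, after which the solution is obtained from α(x).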

Journal ArticleDOI
TL;DR: In this paper, the authors present a technique for producing iterative maximum likelihood estimation algorithms for a wide class of generalizations of the Bradley-Terry model for paired comparisons, which is a simple and much-studied means to describe the probabilities of the possible outcomes when individuals are judged against one another in pairs.
Abstract: The Bradley-Terry model for paired comparisons is a simple and much-studied means to describe the probabilities of the possible outcomes when individuals are judged against one another in pairs. Among the many studies of the model in the past 75 years, numerous authors have generalized it in several directions, sometimes providing iterative algorithms for obtaining maximum likelihood estimates for the generalizations. Building on a theory of algorithms known by the initials MM, for minorization-maximization, this paper presents a powerful technique for producing iterative maximum likelihood estimation algorithms for a wide class of generalizations of the Bradley-Terry model. While algorithms for problems of this type have tended to be custom-built in the literature, the techniques in this paper enable their mass production. Simple conditions are stated that guarantee that each algorithm described will produce a sequence that converges to the unique maximum likelihood estimator. Several of the algorithms and convergence results herein are new.
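
For the basic Bradley-Terry model itself, the MM update takes a simple closed form (the classical iteration that the paper's framework generalizes): each skill parameter is set to the player's win count divided by a sum of pairing terms.

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=200):
    """MM iteration for basic Bradley-Terry skill parameters.

    wins[i, j] = number of times i beat j. The minorization step gives
        gamma_i <- W_i / sum_{j != i} n_ij / (gamma_i + gamma_j),
    where W_i is i's total wins and n_ij the games between i and j.
    """
    n = wins.shape[0]
    n_games = wins + wins.T
    W = wins.sum(axis=1)
    gamma = np.ones(n)
    for _ in range(n_iter):
        denom = n_games / (gamma[:, None] + gamma[None, :])
        np.fill_diagonal(denom, 0.0)
        gamma = W / denom.sum(axis=1)
        gamma /= gamma.sum()           # fix the scale (identifiability)
    return gamma

wins = np.array([[0, 3, 2],
                 [1, 0, 4],
                 [2, 1, 0]])
print(bradley_terry_mm(wins))          # estimated skill parameters
```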

Journal ArticleDOI
TL;DR: A preconditioner for substructuring based on constrained energy minimization concepts is presented and offers a straightforward approach for the iterative solution of second- and fourth-order structural mechanics problems.
Abstract: A preconditioner for substructuring based on constrained energy minimization concepts is presented. The preconditioner is applicable to both structured and unstructured meshes and offers a straightforward approach for the iterative solution of second- and fourth-order structural mechanics problems. The approach involves constraints associated with disjoint sets of nodes on substructure boundaries. These constraints provide the means for preconditioning at both the substructure and global levels. Numerical examples are presented that demonstrate the good performance of the method in terms of iterations, compute time, and condition numbers of the preconditioned equations.

Journal ArticleDOI
TL;DR: A tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation that uses a min-max formulation to derive the temporal interpolator.
Abstract: In magnetic resonance imaging, magnetic field inhomogeneities cause distortions in images that are reconstructed by conventional fast Fourier transform (FFT) methods. Several noniterative image reconstruction methods are used currently to compensate for field inhomogeneities, but these methods assume that the field map that characterizes the off-resonance frequencies is spatially smooth. Recently, iterative methods have been proposed that can circumvent this assumption and provide improved compensation for off-resonance effects. However, straightforward implementations of such iterative methods suffer from inconveniently long computation times. This paper describes a tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation. We use a min-max formulation to derive the temporal interpolator. Speedups of around 60 were achieved by combining this temporal interpolator with a nonuniform fast Fourier transform, with normalized root mean squared approximation errors of 0.07%. The proposed method provides fast, accurate, field-corrected image reconstruction even when the field map is not smooth.

Proceedings ArticleDOI
27 Oct 2003
TL;DR: A method for detecting uncertainty in pose is described, and a point selection strategy for ICP is proposed that minimizes this uncertainty by choosing samples that constrain potentially unstable transformations.
Abstract: The iterative closest point (ICP) algorithm is a widely used method for aligning three-dimensional point sets. The quality of alignment obtained by this algorithm depends heavily on choosing good pairs of corresponding points in the two datasets. If too many points are chosen from featureless regions of the data, the algorithm converges slowly, finds the wrong pose, or even diverges, especially in the presence of noise or miscalibration in the input data. We describe a method for detecting uncertainty in pose, and we propose a point selection strategy for ICP that minimizes this uncertainty by choosing samples that constrain potentially unstable transformations.
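
In sketch form (our simplification of the stability idea, not the paper's exact procedure): each candidate point p with normal n contributes a constraint vector [p × n; n] to the 6x6 point-to-plane covariance, and points are selected to keep that matrix well conditioned.

```python
import numpy as np

def stability_aware_sample(points, normals, k):
    """Greedy sketch of covariance-balancing point selection for ICP.

    Each point contributes a 6-vector [p x n; n] to the 6x6 covariance
    of the point-to-plane error. Points are chosen one at a time to
    maximize the smallest eigenvalue, i.e. to constrain the transform
    directions that are currently least constrained.
    """
    c = np.hstack([np.cross(points, normals), normals])  # (N, 6)
    chosen, C = [], np.zeros((6, 6))
    available = set(range(len(points)))
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in available:
            # smallest eigenvalue if point i were added
            val = np.linalg.eigvalsh(C + np.outer(c[i], c[i]))[0]
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        available.remove(best)
        C += np.outer(c[best], c[best])
    return chosen   # O(N*k) eigen-decompositions: a sketch, not production
```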

Journal ArticleDOI
TL;DR: In this paper, an iterative algorithm for multiuser detection in code division multiple access (CDMA) systems is developed on the basis of Pearl's belief propagation (BP), which exhibits nearly optimal performance by utilizing the central limit theorem and self-averaging property appropriately.
Abstract: An iterative algorithm for the multiuser detection problem that arises in code division multiple access (CDMA) systems is developed on the basis of Pearl's belief propagation (BP). We show that the BP-based algorithm exhibits nearly optimal performance in a practical time scale by utilizing the central limit theorem and self-averaging property appropriately, whereas direct application of BP to the detection problem is computationally difficult and far from practical. We further present close relationships of the proposed algorithm to the Thouless–Anderson–Palmer approach and replica analysis known in spin-glass research.

Journal ArticleDOI
TL;DR: In this article, a high resolution scheme with improved iterative convergence properties was devised by incorporating total-variation diminishing constraints, appropriate for unsteady problems, into an implicit time-marching method used for steady flow problems.
Abstract: A high resolution scheme with improved iterative convergence properties was devised by incorporating total-variation diminishing constraints, appropriate for unsteady problems, into an implicit time-marching method used for steady flow problems. The new scheme, referred to as Convergent and Universally Bounded Interpolation Scheme for the Treatment of Advection (CUBISTA), has similar accuracy to the well-known SMART scheme, both being formally third-order accurate on uniform meshes for smooth flows. Three demonstration problems are considered: (1) advection of three scalar profiles, a step, a sine-squared, and a semi-ellipse; (2) Newtonian flow over a backward-facing step; and (3) viscoelastic flow through a planar contraction and around a cylinder. For the case of the viscoelastic flows, in which the high resolution schemes are also used to represent the advective terms in the constitutive equation, it is shown that only the new scheme is able to provide a converged solution to the prescribed tolerance. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators and can be understood in terms of the singular vectors of the weighted transform.
Abstract: The Radon transform (RT) suffers from the typical problems of loss of resolution and aliasing that arise as a consequence of incomplete information, including limited aperture and discretization. Sparseness in the Radon domain is a valid and useful criterion for supplying this missing information, equivalent somehow to assuming smooth amplitude variation in the transition between known and unknown (missing) data. Applying this constraint while honoring the data can become a serious challenge for routine seismic processing because of the very limited processing time available, in general, per common midpoint. To develop methods that are robust, easy to use and flexible to adapt to different problems we have to pay attention to a variety of algorithms, operator design, and estimation of the hyperparameters that are responsible for the regularization of the solution. In this paper, we discuss fast implementations for several varieties of RT in the time and frequency domains. An iterative conjugate gradient algorithm with fast Fourier transform multiplication is used in all cases. To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators. This turns out to be of particular importance, and it can be understood in terms of the singular vectors of the weighted transform. The iterative algorithm is stopped according to a general cross validation criterion for subspaces. We apply this idea to several known implementations and compare results in order to better understand differences between, and merits of, these algorithms.

Journal ArticleDOI
TL;DR: This work model the system as a noncooperative game and perform iterative water-filling to find the Nash equilibrium distributively and proposes several numerical approaches to decide the covariance matrices of the transmitted signals and compare their performance in terms of the system mutual information.
Abstract: The system mutual information of a multiple-input multiple-output (MIMO) system with multiple users which mutually interfere is considered. Perfect channel state information is assumed to be known to both transmitters and receivers. Asymptotic performance analysis shows that the system mutual information changes behavior as the interference becomes sufficiently strong. In particular, beamforming is the optimum signaling for all users when the interference is large. We propose several numerical approaches to decide the covariance matrices of the transmitted signals and compare their performance in terms of the system mutual information. We model the system as a noncooperative game and perform iterative water-filling to find the Nash equilibrium distributively. A centralized global approach and a distributed iterative approach based on the gradient projection method are also proposed. Numerical results show that all proposed approaches give better performance than the standard signaling, which is optimum for the case without interference. Both the global and the iterative gradient projection methods are shown to outperform the Nash equilibrium significantly.
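
A scalar-subchannel sketch of the iterative water-filling step (our illustration; the paper works with full MIMO covariance matrices): each user in turn water-fills its power budget against the interference-plus-noise it currently sees.

```python
import numpy as np

def waterfill(gains, P):
    """Water-filling: maximize sum log(1 + g_i p_i) s.t. sum p_i = P."""
    mu_lo, mu_hi = 0.0, P + 1.0 / gains.min()
    for _ in range(100):                      # bisection on water level
        mu = 0.5 * (mu_lo + mu_hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        mu_lo, mu_hi = (mu, mu_hi) if p.sum() < P else (mu_lo, mu)
    return p

def iterative_waterfilling(H, P, noise=1.0, n_rounds=50):
    """H[u, v, k]: gain from transmitter v to receiver u on subchannel k.
    Each user water-fills treating the others' current powers as noise,
    then the process repeats (a best-response / Nash iteration)."""
    n_users, _, n_sub = H.shape
    p = np.full((n_users, n_sub), P / n_sub)
    for _ in range(n_rounds):
        for u in range(n_users):
            interf = noise + sum(H[u, v] * p[v]
                                 for v in range(n_users) if v != u)
            p[u] = waterfill(H[u, u] / interf, P)
    return p

rng = np.random.default_rng(0)
H = rng.uniform(0.1, 1.0, size=(2, 2, 4))   # 2 users, 4 subchannels
print(iterative_waterfilling(H, P=1.0).round(3))
```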

Journal ArticleDOI
TL;DR: A Bayesian framework is used to account for noise in the data, and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure that is a regularized version of the focal underdetermined system solver (FOCUSS) algorithm and is shown to be superior to OMP in noisy environments.
Abstract: We develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure which is a regularized version of the focal underdetermined system solver (FOCUSS) algorithm. The convergence of the regularized FOCUSS algorithm is established and it is shown that the stable fixed points of the algorithm are sparse. We investigate three different criteria for choosing the regularization parameter: quality of fit; sparsity criterion; L-curve. The L-curve method, as applied to the problem of subset selection, is found not to be robust, and we propose a novel modified L-curve procedure that solves this problem. Each of the regularized FOCUSS algorithms is evaluated through simulation of a detection problem, and the results are compared with those obtained using a sequential forward selection algorithm termed orthogonal matching pursuit (OMP). In each case, the regularized FOCUSS algorithm is shown to be superior to the OMP in noisy environments.
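
A sketch of a regularized FOCUSS iteration for the p-diversity measure (a hedged reconstruction of the standard form; see the paper for the exact regularized update and parameter-selection rules): reweight by the current solution, then take a Tikhonov-damped least-squares step.

```python
import numpy as np

def regularized_focuss(A, y, p=1.0, lam=1e-3, n_iter=30):
    """Regularized FOCUSS sketch for sparse solutions of y = Ax.

    Reweighting: W_k = diag(|x_k|^(1 - p/2)); the damped step
        x_{k+1} = W_k^2 A^T (A W_k^2 A^T + lam*I)^(-1) y
    drives small coefficients toward zero, so fixed points are sparse.
    """
    m, n = A.shape
    x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)  # init
    for _ in range(n_iter):
        w2 = np.abs(x) ** (2.0 - p)            # entries of W_k^2
        AW2 = A * w2                           # A @ diag(w2)
        x = w2 * (A.T @ np.linalg.solve(AW2 @ A.T + lam * np.eye(m), y))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[5, 17]] = [1.0, -1.0]
y = A @ x_true + 0.01 * rng.standard_normal(20)
print(np.flatnonzero(np.abs(regularized_focuss(A, y)) > 0.1))  # support
```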

Journal ArticleDOI
TL;DR: This work addresses the problem of finding the most suitable index assignments to arbitrary, high order signal constellations and proposes a new method based on the binary switching algorithm that finds optimized mappings outperforming previously known ones.
Abstract: We investigate bit-interleaved coded modulation with iterative decoding (BICM-ID) for bandwidth efficient transmission, where the bit error rate is reduced through iterations between a multilevel demapper and a simple channel decoder. In order to achieve a significant turbo-gain, the assignment strategy of the binary indices to signal points is crucial. We address the problem of finding the most suitable index assignments to arbitrary, high order signal constellations. A new method based on the binary switching algorithm is proposed that finds optimized mappings outperforming previously known ones.
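
The binary switching algorithm is a local search over index assignments. The sketch below is a plain best-improvement variant with a generic cost function (the original algorithm orders candidate switches by each symbol's cost contribution, and the paper's cost measures suitability for iterative decoding; both are assumptions here):

```python
import itertools

def binary_switching(mapping, cost):
    """Local search over index assignments (binary switching sketch).

    mapping: list where mapping[i] is the bit label of signal point i.
    cost:    function scoring a complete mapping (lower is better).
    Repeatedly keeps any pairwise label swap that reduces the cost;
    stops at a local optimum where no single swap helps.
    """
    mapping = list(mapping)
    best = cost(mapping)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(mapping)), 2):
            mapping[i], mapping[j] = mapping[j], mapping[i]
            c = cost(mapping)
            if c < best:
                best, improved = c, True
            else:                                  # undo the swap
                mapping[i], mapping[j] = mapping[j], mapping[i]
    return mapping, best
```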

Journal ArticleDOI
TL;DR: The computations demonstrate that the proposed moving mesh algorithms are efficient for solving problems with shock discontinuities, obtaining the same resolution with a much smaller number of grid points than the uniform mesh approach.
Abstract: We develop efficient moving mesh algorithms for one- and two-dimensional hyperbolic systems of conservation laws. The algorithms are formed by two independent parts: PDE evolution and mesh-redistribution. The first part can be any appropriate high-resolution scheme, and the second part is based on an iterative procedure. In each iteration, meshes are first redistributed by an equidistribution principle, and then on the resulting new grids the underlying numerical solutions are updated by a conservative-interpolation formula proposed in this work. The iteration for the mesh-redistribution at a given time step is complete when the meshes governed by a nonlinear equation reach the equilibrium state. The main idea of the proposed method is to keep the mass-conservation of the underlying numerical solution at each redistribution step. In one dimension, we can show that the underlying numerical approximation obtained in the mesh-redistribution part satisfies the desired TVD property, which guarantees that the numerical solution at any time level is TVD, provided that the PDE solver in the first part satisfies such a property. Several test problems in one and two dimensions are computed using the proposed moving mesh algorithm. The computations demonstrate that our methods are efficient for solving problems with shock discontinuities, obtaining the same resolution with a much smaller number of grid points than the uniform mesh approach.
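
The redistribution step in one dimension enforces the equidistribution principle: nodes are moved so the monitor function has equal integral over each cell. A de Boor-style sketch of one pass (our illustration; the arclength monitor is an assumption):

```python
import numpy as np

def equidistribute(x, w):
    """One 1-D mesh-redistribution pass (equidistribution principle).

    x: current grid nodes, w: monitor function values at the nodes.
    New nodes are placed so each cell carries an equal share of the
    integral of w.
    """
    cell_w = 0.5 * (w[:-1] + w[1:]) * np.diff(x)   # trapezoid weights
    W = np.concatenate([[0.0], np.cumsum(cell_w)]) # cumulative integral
    targets = np.linspace(0.0, W[-1], len(x))      # equal shares
    return np.interp(targets, W, x)                # invert the map

# In the full algorithm this pass alternates with conservative
# interpolation of the solution onto the new mesh, iterating to a
# fixed point at each time step.
x = np.linspace(0, 1, 21)
u = np.tanh(50 * (x - 0.5))                        # sharp front
w = np.sqrt(1 + np.gradient(u, x) ** 2)            # arclength monitor
x_new = equidistribute(x, w)                       # nodes cluster at front
```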

Journal ArticleDOI
TL;DR: In this paper, the geometrical interpretation of several iterative methods to solve a nonlinear scalar equation is presented, and the extension to general Banach spaces and some computational aspects of these methods are discussed.

Book ChapterDOI
26 Jun 2003
TL;DR: Two extensions of LLE to supervised feature extraction were independently proposed by the authors of this paper and are unified in a common framework and applied to a number of benchmark data sets.
Abstract: Locally linear embedding (LLE) is a recently proposed method for unsupervised nonlinear dimensionality reduction. It has a number of attractive features: it does not require an iterative algorithm, and just a few parameters need to be set. Two extensions of LLE to supervised feature extraction were independently proposed by the authors of this paper. Here, both methods are unified in a common framework and applied to a number of benchmark data sets. Results show that they perform very well on high-dimensional data which exhibits a manifold structure.

Journal ArticleDOI
TL;DR: A new iterative state-feedback controller design procedure is proposed, based on a new bounded real lemma derived from an inequality recently proposed by Moon (2001), which solves both the instantaneous and delayed feedback problems in a unified framework.
Abstract: This paper presents some comments and further results concerning the descriptor system approach to H∞ control of linear time-delay systems. Building on the system model of the paper by Fridman and Shaked (2001), we propose a new iterative state-feedback controller design procedure, which is based on a new bounded real lemma derived from an inequality recently proposed by Moon (2001). The proposed design solves both the instantaneous and delayed feedback problems in a unified framework, and a numerical example illustrates that it is much less conservative than the above paper and other relevant references.