
Showing papers on "Rate of convergence published in 1999"


Journal ArticleDOI
TL;DR: In this article, the Navier-Stokes equations are modified by the addition of the continuum forcing C∇φ, where C is the composition variable and φ is C's chemical potential.

1,263 citations


Journal ArticleDOI
TL;DR: The investigations demonstrate that the SAGE algorithm is a powerful high-resolution tool that can be successfully applied for parameter extraction from extensive channel measurement data, especially for the purpose of channel modeling.
Abstract: This study investigates the application potential of the SAGE (space-alternating generalized expectation-maximization) algorithm to jointly estimate the relative delay, incidence azimuth, Doppler frequency, and complex amplitude of impinging waves in mobile radio environments. The performance, i.e., high-resolution ability, accuracy, and convergence rate of the scheme, is assessed in synthetic and real macro- and pico-cellular channels. The results indicate that the scheme overcomes the resolution limitation inherent to classical techniques like the Fourier or beam-forming methods. In particular, it is shown that waves which exhibit an arbitrarily small difference in azimuth can be easily separated as long as their delays or Doppler frequencies differ by a fraction of the intrinsic resolution of the measurement equipment. Two waves are claimed to be separated when the mean-squared estimation errors (MSEEs) of the estimates of their parameters are close to the corresponding Cramer-Rao lower bounds (CRLBs) derived in a scenario where only a single wave is impinging. The adverb easily means that the MSEEs rapidly approach the CRLBs, i.e., within less than 20 iteration cycles. Convergence of the log-likelihood sequence is achieved after approximately ten iteration cycles when the scheme is applied in real channels. In this use, the estimated dominant waves can be related to a scatterer/reflector in the propagation environment. The investigations demonstrate that the SAGE algorithm is a powerful high-resolution tool that can be successfully applied for parameter extraction from extensive channel measurement data, especially for the purpose of channel modeling.
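For orientation, joint estimators of this kind are usually posed over a superposition of plane waves; the display below is a generic illustrative model (the symbols and normalization are assumptions, not quoted from the paper). A SAGE-type scheme then updates the parameters of one wave at a time, treating that wave's contribution as the complete data, which is what yields the fast per-iteration convergence reported above.

```latex
% L impinging waves, each with complex amplitude \alpha_\ell, delay \tau_\ell,
% azimuth \phi_\ell and Doppler frequency \nu_\ell; c(\phi) is the array steering
% vector and u(t) the known sounding signal, n(t) the noise.
y(t) \;=\; \sum_{\ell=1}^{L} \alpha_\ell\, c(\phi_\ell)\, e^{\,j 2\pi \nu_\ell t}\, u(t - \tau_\ell) \;+\; n(t)
```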

1,043 citations


Journal ArticleDOI
TL;DR: A new method for solving total variation (TV) minimization problems in image restoration by introducing an additional variable for the flux quantity appearing in the gradient of the objective function, which can be interpreted as the normal vector to the level sets of the image u.
Abstract: We present a new method for solving total variation (TV) minimization problems in image restoration. The main idea is to remove some of the singularity caused by the nondifferentiability of the quantity $|\nabla u|$ in the definition of the TV-norm before we apply a linearization technique such as Newton's method. This is accomplished by introducing an additional variable for the flux quantity appearing in the gradient of the objective function, which can be interpreted as the normal vector to the level sets of the image u. Our method can be viewed as a primal-dual method as proposed by Conn and Overton [A Primal-Dual Interior Point Method for Minimizing a Sum of Euclidean Norms, preprint, 1994] and Andersen [Ph.D. thesis, Odense University, Denmark, 1995] for the minimization of a sum of Euclidean norms. In addition to possessing local quadratic convergence, experimental results show that the new method seems to be globally convergent.
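As a rough sketch of the setting described above (the fidelity term and the weight λ are generic assumptions, not the paper's exact formulation), the TV-regularized restoration problem and the auxiliary flux variable can be written as:

```latex
% TV-regularized restoration: u is the image, f the observed data, \lambda > 0 a fidelity weight.
\min_{u}\; \int_{\Omega} |\nabla u|\,dx \;+\; \frac{\lambda}{2}\,\|u - f\|_{2}^{2},
\qquad
w \;=\; \frac{\nabla u}{|\nabla u|}, \quad |w| \le 1 .
% The pair (u, w) is then updated by a Newton-type (primal-dual) linearization,
% which avoids differentiating |\nabla u| directly where \nabla u vanishes.
```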

894 citations


Journal ArticleDOI
TL;DR: It is shown that such a one-step method cannot be optimal when different coefficient functions admit different degrees of smoothness, and this drawback can be repaired by using the proposed two-step estimation procedure.
Abstract: Varying coefficient models are a useful extension of classical linear models. They arise naturally when one wishes to examine how regression coefficients change over different groups characterized by certain covariates such as age. The appeal of these models is that the coefficient functions can easily be estimated via a simple local regression. This yields a simple one-step estimation procedure. We show that such a one-step method cannot be optimal when different coefficient functions admit different degrees of smoothness. This drawback can be repaired by using our proposed two-step estimation procedure. The asymptotic mean-squared error for the two-step procedure is obtained and is shown to achieve the optimal rate of convergence. A few simulation studies show that the gain by the two-step procedure can be quite substantial. The methodology is illustrated by an application to an environmental data set.
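A compact way to state the setting (the notation is a generic rendering, not copied from the paper): the regression coefficients are allowed to be smooth functions of a covariate U,

```latex
% Varying coefficient model: the a_j are unknown smooth functions of the covariate U.
Y \;=\; \sum_{j=1}^{p} a_j(U)\, X_j \;+\; \varepsilon .
% One-step: fit all a_j locally around each u with a single bandwidth.
% Two-step (as described above): obtain preliminary estimates of the coefficients first,
% then re-estimate the coefficient of interest with its own, better-suited bandwidth.
```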

643 citations


Journal ArticleDOI
TL;DR: In this article, a simple two-step nonparametric estimator for a triangular simultaneous equation model is presented, which employs series approximations that exploit the additive structure of the model.
Abstract: This paper presents a simple two-step nonparametric estimator for a triangular simultaneous equation model. Our approach employs series approximations that exploit the additive structure of the model. The first step comprises the nonparametric estimation of the reduced form and the corresponding residuals. The second step is the estimation of the primary equation via nonparametric regression with the reduced form residuals included as a regressor. We derive consistency and asymptotic normality results for our estimator, including optimal convergence rates. Finally we present an empirical example, based on the relationship between the hourly wage rate and annual hours worked, which illustrates the utility of our approach.
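A minimal sketch of such a two-step series procedure on simulated data, assuming polynomial sieve bases and a control-function second step; the variable names and the data-generating process are illustrative, not taken from the paper:

```python
import numpy as np

def poly_basis(x, order):
    """Simple power-series basis [1, x, x^2, ...] used as the sieve approximation."""
    return np.vander(x, N=order + 1, increasing=True)

rng = np.random.default_rng(0)
n = 2000
z = rng.uniform(-2, 2, n)                      # instrument
u = rng.normal(size=n)                          # first-stage error
x = np.sin(z) + 0.5 * z + u                     # endogenous regressor (reduced form)
eps = 0.7 * u + rng.normal(scale=0.5, size=n)   # structural error, correlated with u
y = np.exp(-x**2) + x + eps                     # outcome (structural equation)

# Step 1: nonparametric reduced form x = h(z) + v via series regression on z.
B1 = poly_basis(z, order=5)
vhat = x - B1 @ np.linalg.lstsq(B1, x, rcond=None)[0]   # reduced-form residuals

# Step 2: regress y on series terms in x AND in the residual vhat,
# exploiting the additive structure g(x) + lambda(v).
B2 = np.column_stack([poly_basis(x, order=5), poly_basis(vhat, order=3)[:, 1:]])
beta = np.linalg.lstsq(B2, y, rcond=None)[0]
ghat = poly_basis(x, order=5) @ beta[:6]        # estimate of the structural function g(x), up to a constant
print("fitted g at x = 0 and x = 1:",
      np.interp([0.0, 1.0], np.sort(x), ghat[np.argsort(x)]))
```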

522 citations


Journal ArticleDOI
Jon P. Webb
TL;DR: Application of the new vector finite elements to the solution of a parallel-plate waveguide problem demonstrates the expected convergence rate of the phase of the reflection coefficient, but further tests reveal that the optimum balance of the gradient and rotational components is problem-dependent.
Abstract: New vector finite elements are proposed for electromagnetics. The new elements are triangular or tetrahedral edge elements (tangential vector elements) of arbitrary polynomial order. They are hierarchal, so that different orders can be used together in the same mesh and p-adaption is possible. They provide separate representation of the gradient and rotational parts of the vector field. Explicit formulas are presented for generating the basis functions to arbitrary order. The basis functions can be used directly or after a further stage of partial orthogonalization to improve the matrix conditioning. Matrix assembly for the frequency-domain curl-curl equation is conveniently carried out by means of universal matrices. Application of the new elements to the solution of a parallel-plate waveguide problem demonstrates the expected convergence rate of the phase of the reflection coefficient, for tetrahedral elements to order 4. In particular, the full-order elements have only the same asymptotic convergence rate as elements with a reduced gradient space (such as the Whitney element). However, further tests reveal that the optimum balance of the gradient and rotational components is problem-dependent.

455 citations


Journal ArticleDOI
TL;DR: A parameter expanded data augmentation (PX-DA) algorithm is rigorously defined and a new theory for iterative conditional sampling under the tra… to understand the role of the expansion parameter.
Abstract: Viewing the observed data of a statistical model as incomplete and augmenting its missing parts are useful for clarifying concepts and central to the invention of two well-known statistical algorithms: expectation-maximization (EM) and data augmentation. Recently, Liu, Rubin, and Wu demonstrated that expanding the parameter space along with augmenting the missing data is useful for accelerating iterative computation in an EM algorithm. The main purpose of this article is to rigorously define a parameter expanded data augmentation (PX-DA) algorithm and to study its theoretical properties. The PX-DA is a special way of using auxiliary variables to accelerate Gibbs sampling algorithms and is closely related to reparameterization techniques. We obtain theoretical results concerning the convergence rate of the PX-DA algorithm and the choice of prior for the expansion parameter. To understand the role of the expansion parameter, we establish a new theory for iterative conditional sampling under the tra...

373 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive wavelet estimator for nonparametric regression is proposed and the optimality of the procedure is investigated, based on an oracle inequality and motivated by the data compression and localization properties of wavelets.
Abstract: We study wavelet function estimation via the approach of block thresholding and ideal adaptation with oracle. Oracle inequalities are derived and serve as guides for the selection of smoothing parameters. Based on an oracle inequality and motivated by the data compression and localization properties of wavelets, an adaptive wavelet estimator for nonparametric regression is proposed and the optimality of the procedure is investigated. We show that the estimator achieves simultaneously three objectives: adaptivity, spatial adaptivity and computational efficiency. Specifically, it is proved that the estimator attains the exact optimal rates of convergence over a range of Besov classes and the estimator achieves adaptive local minimax rate for estimating functions at a point. The estimator is easy to implement, at the computational cost of O(n). Simulation shows that the estimator has excellent numerical performance relative to more traditional wavelet estimators. 1. Introduction. Wavelet methods have demonstrated considerable success in nonparametric function estimation in terms of spatial adaptivity, computational efficiency and asymptotic optimality. In contrast to the traditional linear procedures, wavelet methods achieve (near) optimal convergence rates over large function classes such as Besov classes and enjoy excellent mean squared error properties when used to estimate functions that are spatially inhomogeneous. For example, as shown by Donoho and Johnstone (1998), wavelet methods can outperform optimal linear methods, even at the level of convergence rate, over certain Besov classes. Standard wavelet methods achieve adaptivity through term-by-term thresholding of the empirical wavelet coefficients. There, each individual empirical wavelet coefficient is compared with a predetermined threshold. A wavelet coefficient is retained if its magnitude is above the threshold level and is discarded otherwise. A well-known example of term-by-term thresholding is Donoho and Johnstone's VisuShrink (Donoho and Johnstone (1994)). VisuShrink is spatially adaptive and the estimator is within a logarithmic factor of the optimal convergence rate over a wide range of Besov classes. VisuShrink achieves a degree of tradeoff between variance and bias contributions to the mean squared error. However, the tradeoff is not optimal. VisuShrink reconstruction is often over-smoothed. Hall, Kerkyacharian and Picard (1999) considered block thresholding for wavelet function estimation which thresholds empirical wavelet coefficients in
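A toy illustration of block thresholding on noisy data, assuming the PyWavelets package; the block length, threshold constant, shrinkage rule, and test signal are illustrative choices, not the estimator studied in the paper:

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
n = 1024
t = np.linspace(0, 1, n)
signal = np.piecewise(t, [t < 0.4, t >= 0.4], [lambda s: np.sin(8 * np.pi * s), 1.0])
noisy = signal + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level from the finest scale
L = 8                                                 # block length (assumed)
lam = 4.5                                             # block threshold constant (assumed)

denoised_coeffs = [coeffs[0]]                         # keep the coarse coefficients
for d in coeffs[1:]:
    d = d.copy()
    for start in range(0, len(d), L):
        block = d[start:start + L]
        energy = np.sum(block**2) / len(block)
        # shrink or kill the whole block at once (James-Stein-style factor)
        shrink = max(0.0, 1.0 - lam * sigma**2 / energy) if energy > 0 else 0.0
        d[start:start + L] = shrink * block
    denoised_coeffs.append(d)

estimate = pywt.waverec(denoised_coeffs, "db4")[:n]
print("RMSE noisy vs. denoised:",
      np.sqrt(np.mean((noisy - signal)**2)),
      np.sqrt(np.mean((estimate - signal)**2)))
```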

366 citations


Journal ArticleDOI
Hakan Erdogan, Jeffrey A. Fessler
TL;DR: The new algorithms are based on paraboloidal surrogate functions for the log likelihood, which lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences.
Abstract: We present a framework for designing fast and monotonic algorithms for transmission tomography penalized-likelihood image reconstruction. The new algorithms are based on paraboloidal surrogate functions for the log likelihood. Due to the form of the log-likelihood function it is possible to find low curvature surrogate functions that guarantee monotonicity. Unlike previous methods, the proposed surrogate functions lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences. The gradient and the curvature of the likelihood terms are evaluated only once per iteration. Since the problem is simplified at each iteration, the CPU time is less than that of current algorithms which directly minimize the objective, yet the convergence rate is comparable. The simplicity, monotonicity, and speed of the new algorithms are quite attractive. The convergence rates of the algorithms are demonstrated using real and simulated PET transmission scans.
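The surrogate idea can be sketched as follows (generic notation, assumed here rather than quoted from the paper): at each iteration, every marginal negative log-likelihood term is replaced by a parabola that lies above it and touches it at the current line integral, so that decreasing the surrogate is guaranteed to decrease the penalized-likelihood objective.

```latex
% Paraboloidal surrogate for one term of the (negative) log likelihood:
% l_i^{(n)} = [A\mu^{(n)}]_i is the current line integral and c_i^{(n)} a curvature
% chosen large enough that q_i majorizes h_i; the surrogate is then minimized instead.
q_i\bigl(l;\, l_i^{(n)}\bigr) \;=\; h_i\bigl(l_i^{(n)}\bigr)
 \;+\; \dot h_i\bigl(l_i^{(n)}\bigr)\bigl(l - l_i^{(n)}\bigr)
 \;+\; \tfrac{1}{2}\, c_i^{(n)} \bigl(l - l_i^{(n)}\bigr)^2 ,
\qquad q_i \;\ge\; h_i .
```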

351 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an approach based on maximizing the determinant of the Fisher information matrix (FIM) subject to state constraints imposed on the observer trajectory (e.g., by the target defense system).
Abstract: The problem of bearings-only target localization is to estimate the location of a fixed target from a sequence of noisy bearing measurements. Although, in theory, this process is observable even without an observer maneuver, estimation performance (i.e., accuracy, stability and convergence rate) can be greatly enhanced by properly exploiting observer motion to increase observability. This work addresses the optimization of observer trajectories for bearings-only fixed-target localization. The approach presented herein is based on maximizing the determinant of the Fisher information matrix (FIM), subject to state constraints imposed on the observer trajectory (e.g., by the target defense system). Direct optimal control numerical schemes, including the recently introduced differential inclusion (DI) method, are used to solve the resulting optimal control problem. Computer simulations, utilizing the familiar Stansfield and maximum likelihood (ML) estimators, demonstrate the enhancement to target position estimability using the optimal observer trajectories.

347 citations


Journal ArticleDOI
TL;DR: In this article, an asymptotic theory for stochastic processes generated from nonlinear transformations of nonstationary integrated time series is developed, and the convergence rate depends not only on the size of the sample but also on the realized sample path.
Abstract: An asymptotic theory for stochastic processes generated from nonlinear transformations of nonstationary integrated time series is developed. Various nonlinear functions of integrated series such as ARIMA time series are studied, and the asymptotic distributions of sample moments of such functions are obtained and analyzed. The transformations considered in the paper include a variety of functions that are used in practical nonlinear statistical analysis. It is shown that their asymptotic theory is quite different from that of integrated processes and stationary time series. When the transformation function is exponentially explosive, for instance, the convergence rate of sample functions is path dependent. In particular, the convergence rate depends not only on the size of the sample but also on the realized sample path. Some brief applications of these asymptotics are given to illustrate the effects of nonlinearly transformed integrated processes on regression. The methods developed in the paper are useful in a project of greater scope concerned with the development of a general theory of nonlinear regression for nonstationary time series. Nonstationary time series arising from autoregressive models with roots on the unit circle have been an intensive subject of recent research. The asymptotic behavior of regression statistics based on integrated time series (those for which one or more of the autoregressive roots are unity) has received the most attention, and a fairly complete theory is now available for linear time series regressions. The resulting limit theory forms the basis of much ongoing empirical econometric work, especially on the subject of unit root testing and cointegration model

Journal ArticleDOI
TL;DR: It is demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework and allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems.
Abstract: We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework.
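For reference, the two classical building blocks that the hybrid scheme combines can be written as follows; this is the textbook form, shown only for orientation, while the paper's inexact, enlargement-based steps are more general.

```latex
% Proximal point step for a maximal monotone operator T with stepsize c_k > 0:
x^{k+1} \;=\; (I + c_k T)^{-1} x^k .
% Extragradient step for a single-valued monotone map F:
y^{k}   \;=\; x^k - c_k F(x^k), \qquad
x^{k+1} \;=\; x^k - c_k F(y^k).
```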

Journal ArticleDOI
TL;DR: New preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems are described and lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration.
Abstract: Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
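As a small illustration of the circulant baseline the paper improves on, a shift-invariant preconditioner can be applied with FFTs; the Hessian stencil and sizes below are assumptions made for the sketch, not the paper's shift-variant construction:

```python
import numpy as np

n = 256
# Assume a shift-invariant approximation of the Hessian, given by its first column h
# (e.g., a blur-like kernel plus a regularization contribution).
h = np.zeros(n)
h[0], h[1], h[-1] = 2.5, -1.0, -1.0            # illustrative symmetric stencil
H_eig = np.fft.fft(h)                           # eigenvalues of the circulant approximation

def apply_circulant_preconditioner(r):
    """Apply M^{-1} r, where M is the circulant approximation of the Hessian."""
    return np.real(np.fft.ifft(np.fft.fft(r) / H_eig))

r = np.random.default_rng(2).normal(size=n)     # a sample CG residual vector
z = apply_circulant_preconditioner(r)
print("norm of residual / preconditioned residual:", np.linalg.norm(r), np.linalg.norm(z))
```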

Journal ArticleDOI
TL;DR: In this article, a Hopfield type feedback neural network is proposed for real-time monitoring and analysis of harmonic variations in the power system, where the supply-frequency variation is handled separately from the amplitude/phase variations, thus ensuring high computational speed and high convergence rate.
Abstract: With increasing harmonic pollution in the power system, real-time monitoring and analysis of harmonic variations have become important. Because of limitations associated with conventional algorithms, particularly under supply-frequency drift and transient situations, a new approach based on nonlinear least-squares parameter estimation has been proposed as an alternative solution for high-accuracy evaluation. However, the computational demand of the algorithm is very high and it is more appropriate to use Hopfield type feedback neural networks for real-time harmonic evaluation. The proposed neural network implementation determines simultaneously the supply-frequency variation, the fundamental-amplitude/phase variation as well as the harmonics-amplitude/phase variation. The distinctive feature is that the supply-frequency variation is handled separately from the amplitude/phase variations, thus ensuring high computational speed and high convergence rate. Examples by computer simulation are used to demonstrate the effectiveness of the implementation. A set of data taken on site was used as a real application of the system.

Journal ArticleDOI
TL;DR: In this article, a class of cell centered finite volume schemes for a linear convection-diusion problem is studied, where the convection and the diusion are respectively approximated by means of an upwind scheme and the so-called diamond cell method.
Abstract: In this paper, a class of cell centered finite volume schemes, on general unstructured meshes, for a linear convection-diffusion problem, is studied. The convection and the diffusion are respectively approximated by means of an upwind scheme and the so-called diamond cell method (4). Our main result is an error estimate of order h, assuming only the W^{2,p} (for p > 2) regularity of the continuous solution, on a mesh of quadrangles. The proof is based on an extension of the ideas developed in (12). Some new difficulties arise here, due to the weak regularity of the solution, and the necessity to approximate the entire gradient, and not only its normal component, as in (12).

Journal ArticleDOI
TL;DR: In this paper, the authors present convergence results for a minimum contrast estimator in a problem of change-points estimation, where the changes affect the marginal distribution of a sequence of random variables.

Journal ArticleDOI
TL;DR: In this article, a computational algorithm based on the multiquadric, which is a continuously differentiable radial basis function, is devised to solve the shallow water equations, which does not require the generation of a grid as in the finite element method and allows easy editing and refinement of the numerical model.
Abstract: A computational algorithm based on the multiquadric, which is a continuously differentiable radial basis function, is devised to solve the shallow water equations. The numerical solutions are evaluated at scattered collocation points and the spatial partial derivatives are formed directly from partial derivatives of the radial basis function, not by any difference scheme. The method does not require the generation of a grid as in the finite-element method and allows easy editing and refinement of the numerical model. To increase confidence in the multiquadric solution, a sensitivity and convergence analysis is performed using numerical models of a rectangular channel. Applications of the algorithm are made to compute the sea surface elevations and currents in Tolo Harbour, Hong Kong, during a typhoon attack. The numerical solution is shown to be robust and stable. The computed results are compared with measured data and good agreement is indicated.
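A minimal sketch of the multiquadric collocation idea on scattered 1-D points; the shape parameter, test function, and node layout are illustrative assumptions, not the shallow-water setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 40))            # scattered collocation points
f = np.sin(2 * np.pi * x)                     # field sampled at the nodes
c = 0.1                                       # multiquadric shape parameter (assumed)

# Multiquadric basis phi(r) = sqrt(r^2 + c^2) and its x-derivative at the nodes.
R = x[:, None] - x[None, :]
A = np.sqrt(R**2 + c**2)                      # interpolation (collocation) matrix
Ax = R / np.sqrt(R**2 + c**2)                 # derivative of the basis w.r.t. the evaluation point

lam = np.linalg.solve(A, f)                   # expansion coefficients
dfdx = Ax @ lam                               # spatial derivative formed directly from the RBF

print("max error in d/dx of sin(2*pi*x):",
      np.max(np.abs(dfdx - 2 * np.pi * np.cos(2 * np.pi * x))))
```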

Journal ArticleDOI
TL;DR: In this article, the authors proposed a criterion to establish whether a finite element scheme is well suited to approximate the eigensolutions and, in the positive case, they estimate the rate of convergence of the eigenvalues.
Abstract: The purpose of this paper is to address some difficulties which arise in computing the eigenvalues of Maxwell's system by a finite element method. Depending on the method used, the spectrum may be polluted by spurious modes which are difficult to pick out among the approximations of the physically correct eigenvalues. Here we propose a criterion to establish whether or not a finite element scheme is well suited to approximate the eigensolutions and, in the positive case, we estimate the rate of convergence of the eigensolutions. This criterion involves some properties of the finite element space and of a suitable Fortin operator. The lowest-order edge elements, under some regularity assumptions, give an example of space satisfying the required conditions. The construction of such a Fortin operator in very general geometries and for any order edge elements is still an open problem. Moreover, we give some justification for the spectral pollution which occurs when nodal elements are used. Results of numerical experiments confirming the theory are also reported.

Journal ArticleDOI
TL;DR: This paper presents a proof of the global and linear convergence using the framework introduced in [H. Voss and U. Eckhardt, Computing, 25 (1980), pp. 243--251] and gives a bound for the convergence rate of the fixed point iteration that agrees with the experimental results.
Abstract: In this paper we show that the lagged diffusivity fixed point algorithm introduced by Vogel and Oman in [SIAM J. Sci. Comput., 17 (1996), pp. 227--238] to solve the problem of total variation denoising, proposed by Rudin, Osher, and Fatemi in [Phys. D, 60 (1992), pp. 259--268], is a particular instance of a class of algorithms introduced by Voss and Eckhardt in [Computing, 25 (1980), pp. 243--251], whose origins can be traced back to Weiszfeld's original work for minimizing a sum of Euclidean lengths [Tohoku Math. J., 43 (1937), pp. 355--386]. There have recently appeared several proofs for the convergence of this algorithm [G. Aubert et al., Technical report 94-01, Informatique, Signaux et Systemes de Sophia Antipolis, 1994], [A. Chambolle and P.-L. Lions, Technical report 9509, CEREMADE, 1995], and [D. C. Dobson and C. R. Vogel, SIAM J. Numer. Anal., 34 (1997), pp. 1779--1791]. Here we present a proof of the global and linear convergence using the framework introduced in [H. Voss and U. Eckhardt, Computing, 25 (1980), pp. 243--251] and give a bound for the convergence rate of the fixed point iteration that agrees with our experimental results. These results are also valid for suitable generalizations of the fixed point algorithm.
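A compact sketch of the lagged diffusivity fixed-point iteration for a 1-D total variation denoising problem; the smoothing parameter beta, the regularization weight, and the stopping rule are assumptions made for the illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
truth = np.where(np.arange(n) < n // 2, 0.0, 1.0)       # piecewise-constant signal
f = truth + 0.1 * rng.normal(size=n)                     # noisy observation

alpha, beta = 0.5, 1e-6                                  # TV weight and smoothing (assumed)
D = np.diff(np.eye(n), axis=0)                           # forward difference operator

u = f.copy()
for k in range(30):
    w = 1.0 / np.sqrt((D @ u) ** 2 + beta)               # lagged diffusivity weights
    L = D.T @ (w[:, None] * D)                           # weighted diffusion operator
    u_new = np.linalg.solve(np.eye(n) + alpha * L, f)    # one linear solve per fixed-point step
    if np.linalg.norm(u_new - u) < 1e-8 * np.linalg.norm(u):
        u = u_new
        break
    u = u_new

print("iterations used:", k + 1, " distance to truth:", np.linalg.norm(u - truth))
```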

Journal ArticleDOI
TL;DR: In this paper, the authors further unify the theory for FETI and Neumann-Neumann domain decomposition algorithms and introduce a new family of algorithms for elliptic partial differential equations with heterogeneous coefficients.
Abstract: The FETI and Neumann-Neumann families of algorithms are among the best known and most severely tested domain decomposition methods for elliptic partial differential equations. They are iterative substructuring methods and have many algorithmic components in common, but there are also differences. The purpose of this paper is to further unify the theory for these two families of methods and to introduce a new family of FETI algorithms. Bounds on the rate of convergence, which are uniform with respect to the coefficients of a family of elliptic problems with heterogeneous coefficients, are established for these new algorithms. The theory for a variant of the Neumann-Neumann algorithm is also redeveloped, stressing similarities to that for the FETI methods.

Journal ArticleDOI
TL;DR: This paper formulates an inexact preconditioned conjugate gradient algorithm for a symmetric positive definite system and analyzes its convergence property, establishing a linear convergence result using a local relation of residual norms and showing that the algorithm may have the superlinear convergence property when the inner iteration is solved to high accuracy.
Abstract: An important variation of preconditioned conjugate gradient algorithms is inexact preconditioner implemented with inner-outer iterations [G. H. Golub and M. L. Overton, Numerical Analysis, Lecture Notes in Math. 912, Springer, Berlin, New York, 1982], where the preconditioner is solved by an inner iteration to a prescribed precision. In this paper, we formulate an inexact preconditioned conjugate gradient algorithm for a symmetric positive definite system and analyze its convergence property. We establish a linear convergence result using a local relation of residual norms. We also analyze the algorithm using a global equation and show that the algorithm may have the superlinear convergence property when the inner iteration is solved to high accuracy. The analysis is in agreement with observed numerical behavior of the algorithm. In particular, it suggests a heuristic choice of the stopping threshold for the inner iteration. Numerical examples are given to show the effectiveness of this choice and to compare the convergence bound.
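A sketch of the inner-outer structure on a small SPD system; the test matrix, the preconditioner, the flexible beta update, and the inner tolerance are illustrative assumptions. The preconditioner solve in each outer PCG step is itself carried out by a truncated inner CG iteration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
A = Q @ np.diag(np.linspace(1, 500, n)) @ Q.T                 # SPD test matrix (assumed)
M = np.diag(np.diag(A)) + 0.1 * (A - np.diag(np.diag(A)))     # preconditioner, solved inexactly
b = rng.normal(size=n)

def inner_solve(M, r, tol=1e-2, max_iter=50):
    """Approximate M z = r by a few plain CG steps (the inner iteration)."""
    z = np.zeros_like(r); res = r.copy(); p = res.copy()
    for _ in range(max_iter):
        Mp = M @ p
        a = res @ res / (p @ Mp)
        z += a * p
        res_new = res - a * Mp
        if np.linalg.norm(res_new) < tol * np.linalg.norm(r):
            break
        p = res_new + (res_new @ res_new) / (res @ res) * p
        res = res_new
    return z

# Outer (inexact) preconditioned CG on A x = b.
x = np.zeros(n); r = b - A @ x; z = inner_solve(M, r); p = z.copy()
for it in range(200):
    Ap = A @ p
    a = r @ z / (p @ Ap)
    x += a * p
    r_new = r - a * Ap
    if np.linalg.norm(r_new) < 1e-8 * np.linalg.norm(b):
        break
    z_new = inner_solve(M, r_new)
    # local (flexible) beta update, commonly used when the preconditioner varies
    beta = z_new @ (r_new - r) / (z @ r)
    p = z_new + beta * p
    r, z = r_new, z_new

print("outer iterations:", it + 1, " final residual:", np.linalg.norm(b - A @ x))
```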

Journal ArticleDOI
TL;DR: In this article, the authors consider the question of existence and uniqueness of solutions to the spatially homogeneous Boltzmann equation and show that to any initial data with finite mass and energy, there exists a unique solution for which the same two quantities are conserved.
Abstract: We consider the question of existence and uniqueness of solutions to the spatially homogeneous Boltzmann equation. The main result is that to any initial data with finite mass and energy, there exists a unique solution for which the same two quantities are conserved. We also prove that any solution which satisfies certain bounds on moments of order s A second part of the paper is devoted to the time discretization of the Boltzmann equation, the main results being estimates of the rate of convergence for the explicit and implicit Euler schemes. Two auxiliary results are of independent interest: a sharpened form of the so-called Povzner inequality, and a regularity result for an iterated gain term.

Journal ArticleDOI
TL;DR: This paper proposes a new structure and a new formulation for adapting the filter coefficients based on polyphase decomposition of the filter to be adapted and is independent of the type of filter banks used in the subband decomposition.
Abstract: Subband adaptive filtering has attracted much attention lately. In this paper, we propose a new structure and a new formulation for adapting the filter coefficients. This structure is based on polyphase decomposition of the filter to be adapted and is independent of the type of filter banks used in the subband decomposition. The new formulation yields improved convergence rate when the LMS algorithm is used for coefficient adaptation. As we increase the number of bands in the filter, the convergence rate increases and approaches the rate that can be obtained with a flat input spectrum. The computational complexity of the proposed scheme is nearly the same as that of the fullband approach. Simulation results are included to demonstrate the efficacy of the new approach.
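For readers unfamiliar with the underlying adaptation rule, a plain fullband LMS update is sketched below; the step size, filter length, and signals are illustrative assumptions, and the paper's contribution is the polyphase/subband structure wrapped around this kind of update.

```python
import numpy as np

rng = np.random.default_rng(6)
N, L, mu = 5000, 16, 0.01                       # samples, filter taps, step size (assumed)
h_true = rng.normal(size=L) / np.sqrt(L)        # unknown system to identify
x = rng.normal(size=N)                          # input signal
d = np.convolve(x, h_true)[:N] + 0.01 * rng.normal(size=N)   # desired signal + noise

w = np.zeros(L)
err = np.zeros(N)
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]                # most recent L input samples, newest first
    y = w @ u                                   # filter output
    err[n] = d[n] - y
    w += mu * err[n] * u                        # LMS coefficient update

print("final coefficient error ||w - h_true|| =", np.linalg.norm(w - h_true))
```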

Journal ArticleDOI
TL;DR: Another tree reconstruction method, the witness-antiwitness method (WAM), is presented, which is faster than DCM, especially on random trees, and converges to the true tree topology at the same rate as DCM.

Journal ArticleDOI
TL;DR: This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptivelearning rate for each weight and apply the Goldstein/Armijo line search.
Abstract: This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
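A minimal sketch of a gradient step governed by an Armijo (sufficient-decrease) line search, on a toy quadratic rather than a neural network; the backtracking constants and the objective are assumptions made for illustration:

```python
import numpy as np

A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])

def f(w):        # toy convex objective standing in for the training error
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

def armijo_step(w, lr0=1.0, c=1e-4, shrink=0.5):
    """Backtrack from lr0 until the Armijo sufficient-decrease condition holds."""
    g = grad(w)
    lr = lr0
    while f(w - lr * g) > f(w) - c * lr * (g @ g):
        lr *= shrink
    return w - lr * g, lr

w = np.array([5.0, 5.0])
for epoch in range(50):
    w, lr = armijo_step(w)
    if np.linalg.norm(grad(w)) < 1e-8:
        break
print("solution:", w, " analytic:", np.linalg.solve(A, b), " last accepted step:", lr)
```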

Proceedings ArticleDOI
07 Dec 1999
TL;DR: The convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic are established, which appear very promising and effective for important classes of large problems.
Abstract: We propose a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. We establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems.
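A small sketch of the incremental subgradient idea for a sum of convex component functions, here a sum of absolute-value terms; the step-size rule and the data are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(size=(50, 3))                       # component data: f_i(x) = |a_i.x - b_i|
b = a @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=50)

def subgrad_component(x, i):
    """A subgradient of the single component f_i(x) = |a_i.x - b_i|."""
    r = a[i] @ x - b[i]
    return np.sign(r) * a[i]

x = np.zeros(3)
for k in range(1, 201):                            # outer passes over the components
    step = 1.0 / k                                 # diminishing step size (assumed rule)
    for i in rng.permutation(len(b)):              # process components one at a time
        x = x - step * subgrad_component(x, i)     # intermediate adjustment after each one

print("objective after incremental passes:", np.sum(np.abs(a @ x - b)), " x =", x)
```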


Proceedings ArticleDOI
01 Jan 1999
TL;DR: In this article, a multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized form of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows.
Abstract: A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized form of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space marching algorithms developed to improve convergence rate and or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximate storage, full multi-grid scheme is also described which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high aspect ratio grids.

Journal ArticleDOI
TL;DR: In this paper, a hybrid finite-volume (FV)/particle method was developed for the solution of the PDF equations for statistically stationary turbulent reactive flows, where the conservation equations for mean mass, momentum, and energy are solved by an FV method while a particle algorithm is employed to solve the fluctuating velocity-turbulence frequency-compositions joint PDF transport equation.

Journal ArticleDOI
TL;DR: An iteration-by-subdomain procedure is proven to converge, showing that the preconditioner implicitly defined by the iterative procedure is optimal, and proving a regularity theorem for Dirichlet and Neumann harmonic fields.
Abstract: The time-harmonic Maxwell equations are considered in the low-frequency case. A finite element domain decomposition approach is proposed for the numerical approximation of the exact solution. This leads to an iteration-by-subdomain procedure, which is proven to converge. The rate of convergence turns out to be independent of the mesh size, showing that the preconditioner implicitly defined by the iterative procedure is optimal. For obtaining this convergence result it has been necessary to prove a regularity theorem for Dirichlet and Neumann harmonic fields.