
Showing papers on "Conjugate gradient method published in 2015"


Proceedings Article
07 Dec 2015
TL;DR: This work formulates and derives a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods.
Abstract: Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space. Often, additional information about the variables is known, and it is reasonable to assume that incorporating this information will lead to better predictions. We tackle the problem of matrix completion when pairwise relationships among variables are known, via a graph. We formulate and derive a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On the theoretical front, we show that such methods generalize weighted nuclear norm formulations, and derive statistical consistency guarantees. We validate our results on both real and synthetic datasets.

256 citations
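
To make the alternating-minimization idea concrete, here is a minimal, hypothetical sketch of one half-step: with the factor U held fixed, the graph-regularized subproblem for V is a positive-definite linear system that can be handed to CG. The masking, the Laplacian regularizer on V only, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: one alternating-minimization half-step solved with CG.
# Objective for fixed U:  0.5*||P_Omega(M - U V^T)||_F^2 + 0.5*lam*tr(V^T L V)
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, n, r, lam = 60, 40, 5, 0.1
U = rng.standard_normal((m, r))                 # fixed factor
M = rng.standard_normal((m, n))                 # data matrix
mask = rng.random((m, n)) < 0.3                 # observed entries P_Omega
W = rng.random((n, n)); W = (W + W.T) / 2       # toy similarity graph over columns
L = np.diag(W.sum(1)) - W                       # graph Laplacian

def normal_op(v_flat):
    """Apply the symmetric positive (semi)definite normal-equation operator to V."""
    V = v_flat.reshape(n, r)
    R = (U @ V.T) * mask                        # P_Omega(U V^T)
    return (R.T @ U + lam * L @ V).ravel()

A = LinearOperator((n * r, n * r), matvec=normal_op)
b = ((M * mask).T @ U).ravel()                  # right-hand side P_Omega(M)^T U
v_flat, info = cg(A, b, maxiter=500)
V = v_flat.reshape(n, r)
print("CG converged:", info == 0)
```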


Proceedings Article
06 Jul 2015
TL;DR: The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method, and its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions are analyzed.
Abstract: We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and the regularization parameter scales as 1/√n, we show that the proposed algorithm is communication efficient: the required number of communication rounds does not increase with the sample size n, and only grows slowly with the number of machines.

212 citations
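
A single-machine analogue may help illustrate the inexact damped Newton idea (the distributed averaging and preconditioning that define the paper's method are omitted, and all function names here are illustrative): each Newton step for L2-regularized logistic regression is computed by CG from Hessian-vector products alone, and the step is damped by the approximate Newton decrement as in self-concordant analysis.

```python
# Hypothetical single-machine sketch of inexact damped Newton with CG steps.
import numpy as np

def logistic_grad(w, X, y, lam):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y) + lam * w

def hessian_vec(w, X, lam, v):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    d = p * (1 - p)
    return X.T @ (d * (X @ v)) / len(p) + lam * v

def cg_solve(hvp, b, iters=50, tol=1e-8):
    """Plain CG on H x = b using only Hessian-vector products."""
    x, r = np.zeros_like(b), b.copy()
    p, rs = r.copy(), r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def damped_newton(X, y, lam=1e-3, iters=20):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        g = logistic_grad(w, X, y, lam)
        v = cg_solve(lambda u: hessian_vec(w, X, lam, u), g)
        delta = np.sqrt(v @ hessian_vec(w, X, lam, v))  # approximate Newton decrement
        w -= v / (1.0 + delta)                          # damped (self-concordant) step
        if np.linalg.norm(g) < 1e-8:
            break
    return w
```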


Journal ArticleDOI
TL;DR: Simulated Annealing (SA) is proposed as an alternative approach for optimal deep learning (DL) using a modern optimization technique, i.e., a metaheuristic algorithm, to improve the performance of a Convolutional Neural Network (CNN).

121 citations


Journal ArticleDOI
TL;DR: This paper introduces a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition.
Abstract: We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in ${N}_{f}=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.

120 citations
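
The structure of the estimator can be sketched generically (the callables below are placeholders, not lattice-QCD code): a handful of exact solves correct the bias of many cheap approximate solves, where the approximation is, for example, CG stopped at a relaxed residual.

```python
# Schematic sketch of a bias-corrected averaging estimator of this kind:
# a few exact solves correct the bias of many cheap "sloppy" solves.
import numpy as np

def ama_estimate(observable, solve_exact, solve_sloppy, sources_exact, sources_all):
    """O_imp = mean_exact[O(exact) - O(sloppy)] + mean_all[O(sloppy)]."""
    correction = np.mean([observable(solve_exact(s)) - observable(solve_sloppy(s))
                          for s in sources_exact])
    cheap = np.mean([observable(solve_sloppy(s)) for s in sources_all])
    return correction + cheap
```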


Journal ArticleDOI
TL;DR: This work addresses the numerical problem of recovering large low-rank matrices when most of the entries are unknown. It exploits the geometry of the low-rank constraint to recast the task as an unconstrained optimization problem on a single Grassmann manifold, and applies second-order Riemannian trust-region methods and preconditioned methods to solve it.

116 citations


Journal ArticleDOI
TL;DR: In this paper, the notion of a scaled vector transport is introduced to improve the conjugate gradient method so that the generated sequences have a global convergence property under a relaxed assumption; the global convergence of the proposed algorithm is proved theoretically and observed numerically with examples.
Abstract: This article deals with the conjugate gradient method on a Riemannian manifold with interest in global convergence analysis. The existing conjugate gradient algorithms on a manifold endowed with a vector transport need the assumption that the vector transport does not increase the norm of tangent vectors, in order to confirm that generated sequences have a global convergence property. In this article, the notion of a scaled vector transport is introduced to improve the algorithm so that the generated sequences may have a global convergence property under a relaxed assumption. In the proposed algorithm, the transported vector is rescaled in case its norm has increased during the transport. The global convergence is theoretically proved and numerically observed with examples. In fact, numerical experiments show that there exist minimization problems for which the existing algorithm generates divergent sequences, but the proposed algorithm generates convergent sequences.

106 citations
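
A toy illustration of the rescaling safeguard, assuming the unit sphere with the projection-based vector transport (for this particular transport the norm can only shrink, so the safeguard is shown purely for form; it matters for transports such as the differentiated retraction):

```python
# Sketch of a "scaled" vector transport on the unit sphere: transport by
# orthogonal projection, then rescale if the norm of the vector increased.
import numpy as np

def project_tangent(y, v):
    """Project v onto the tangent space of the unit sphere at y."""
    return v - (y @ v) * y

def scaled_transport(x, y, xi):
    """Transport tangent vector xi from x to y, never increasing its norm."""
    t = project_tangent(y, xi)
    nt, nxi = np.linalg.norm(t), np.linalg.norm(xi)
    if nt > nxi and nt > 0:
        t *= nxi / nt        # rescale so the norm does not grow during transport
    return t
```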


Journal ArticleDOI
TL;DR: Both the search direction and the line search technique are derivative-free, so large-scale nonlinear equations can be solved successfully, and the global convergence of the given algorithm is established under suitable conditions.

94 citations


Journal ArticleDOI
TL;DR: The Conjugate Gradient Iterative Hard Thresholding (CGIHT) family of algorithms for the efficient solution of constrained underdetermined linear systems of equations arising in compressed sensing, row sparse approximation, and matrix completion is introduced.
Abstract: We introduce the Conjugate Gradient Iterative Hard Thresholding (CGIHT) family of algorithms for the efficient solution of constrained underdetermined linear systems of equations arising in compressed sensing, row sparse approximation, and matrix completion. CGIHT is designed to balance the low per iteration complexity of simple hard thresholding algorithms with the fast asymptotic convergence rate of employing the conjugate gradient method. We establish provable recovery guarantees and stability to noise for variants of CGIHT with sufficient conditions in terms of the restricted isometry constants of the sensing operators. Extensive empirical performance comparisons establish significant computational advantages for CGIHT both in terms of the size of problems which can be accurately approximated and in terms of overall computation time.

91 citations
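
A simplified sketch in the spirit of the restarted CGIHT variant for compressed sensing (this is an illustration, not the paper's exact pseudocode or parameter choices): hard thresholding is combined with a conjugate search direction restricted to the current support, and the direction is restarted whenever the support changes.

```python
# Simplified sketch: iterative hard thresholding with a restarted conjugate
# search direction restricted to the current support.
import numpy as np

def hard_threshold(x, s):
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def cgiht_sketch(A, y, s, iters=200):
    n = A.shape[1]
    x = np.zeros(n)
    p = np.zeros(n)
    prev_support = None
    for _ in range(iters):
        r = A.T @ (y - A @ x)                      # residual correlation
        support = (hard_threshold(r, s) if not x.any() else x) != 0
        if prev_support is None or not np.array_equal(support, prev_support):
            p = r.copy()                           # restart: steepest-descent direction
        else:
            Apt_prev = A @ (p * support)           # restricted conjugacy condition
            beta = -((A @ (r * support)) @ Apt_prev) / (Apt_prev @ Apt_prev)
            p = r + beta * p
        Apt = A @ (p * support)
        alpha = ((r * support) @ (p * support)) / (Apt @ Apt)
        x = hard_threshold(x + alpha * p, s)
        prev_support = support
    return x

# toy usage on a Gaussian sensing matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
x_hat = cgiht_sketch(A, A @ x_true, s=10)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```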


Journal ArticleDOI
TL;DR: It is proved that the proposed method converges globally if the equations are monotone and Lipschitz continuous, with no differentiability requirement on the equations, which makes it possible to solve some nonsmooth equations.

86 citations


Journal ArticleDOI
TL;DR: This work presents low-complexity, quickly converging robust adaptive beamformers for beamforming large arrays in snapshot-deficient scenarios, derived by combining data-dependent Krylov-subspace-based dimensionality reduction, using the Powers-of-R or conjugate gradient techniques, with ellipsoidal-uncertainty-set-based robust Capon beamformer methods.
Abstract: We present low-complexity, quickly converging robust adaptive beamformers, for beamforming large arrays in snapshot deficient scenarios. The proposed algorithms are derived by combining data-dependent Krylov-subspace-based dimensionality reduction, using the Powers-of-R or conjugate gradient (CG) techniques, with ellipsoidal uncertainty set based robust Capon beamformer methods. Further, we provide a detailed computational complexity analysis and consider the efficient implementation of automatic, online dimension-selection rules. We illustrate the benefits of the proposed approaches using simulated data.

81 citations


Proceedings Article
07 Dec 2015
TL;DR: This work advances Riemannian manifold optimization (on the manifold of positive definite matrices) as a potential replacement for Expectation Maximization (EM) and develops a well-tuned Riemannian LBFGS method that proves superior to known competing methods (e.g., Riemannian conjugate gradient).
Abstract: We take a new look at parameter estimation for Gaussian Mixture Models (GMMs). Specifically, we advance Riemannian manifold optimization (on the manifold of positive definite matrices) as a potential replacement for Expectation Maximization (EM), which has been the de facto standard for decades. An out-of-the-box invocation of Riemannian optimization, however, fails spectacularly: it obtains the same solution as EM, but vastly slower. Building on intuition from geometric convexity, we propose a simple reformulation that has remarkable consequences: it makes Riemannian optimization not only match EM (a nontrivial result on its own, given the poor record nonlinear programming has had against EM), but also outperform it in many settings. To bring our ideas to fruition, we develop a well-tuned Riemannian LBFGS method that proves superior to known competing methods (e.g., Riemannian conjugate gradient). We hope that our results encourage a wider consideration of manifold optimization in machine learning and statistics.

Journal ArticleDOI
TL;DR: This method can be viewed as an extension of the CG_DESCENT method, which is one of the most effective conjugate gradient methods for solving unconstrained optimization problems, and can be used to solve large-scale nonsmooth monotone nonlinear equations.
Abstract: In this paper, we present a projection method to solve monotone nonlinear equations with convex constraints. This method can be viewed as an extension of the CG_DESCENT method, which is one of the most effective conjugate gradient methods for solving unconstrained optimization problems. Because it is derivative-free and requires low storage, the proposed method can be used to solve large-scale nonsmooth monotone nonlinear equations. Its global convergence is established under some appropriate conditions. Preliminary numerical results show that the proposed method is effective and promising. Moreover, we also successfully use the proposed method to solve the sparse signal reconstruction problem in compressive sensing.
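
A generic sketch of the derivative-free projection framework such methods build on may be useful; the paper's specific CG_DESCENT-type search direction is not reproduced here, and the simple steepest-descent-like direction below is only a placeholder.

```python
# Generic hyperplane-projection scheme for monotone F(x) = 0 over a convex set.
import numpy as np

def projection_method(F, project, x0, iters=500, rho=0.5, sigma=1e-4, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx          # placeholder direction; the paper uses a CG_DESCENT-type d
        # derivative-free backtracking: find t with -F(x + t*d)^T d >= sigma*t*||d||^2
        t = 1.0
        while -(F(x + t * d) @ d) < sigma * t * (d @ d) and t > 1e-12:
            t *= rho
        z = x + t * d
        Fz = F(z)
        # project x onto the hyperplane {u : F(z)^T (u - z) = 0}, then onto the set
        x = project(x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz)
    return x

# toy usage: monotone equations F(x) = x + sin(x) over the nonnegative orthant
x_star = projection_method(lambda x: x + np.sin(x),
                           lambda u: np.maximum(u, 0.0),
                           x0=np.ones(5))
```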

Journal ArticleDOI
TL;DR: In this paper, a preconditioned conjugate gradient method has been implemented to solve the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of an ordinary Poisson equation solver.
Abstract: The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson equation and the Poisson-Boltzmann equation for neutral and ionic solutions, respectively. In the present work solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of an ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency, and allow for the treatment of different boundary conditions, as for example surface systems. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
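
The building block here is a preconditioned CG iteration; a minimal generic sketch follows, using a Jacobi preconditioner on a 1D finite-difference Poisson matrix (the paper's actual preconditioner for the generalized Poisson operator is more elaborate).

```python
# Minimal generic preconditioned conjugate gradient (PCG) sketch.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson stencil
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))                 # Jacobi preconditioner
print("residual:", np.linalg.norm(b - A @ x))
```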

Journal ArticleDOI
TL;DR: A probabilistic framework is proposed for algorithms that iteratively solve unconstrained linear problems $Bx = b$ with positive definite $B$, replacing point estimates of $x$ with a Gaussian posterior belief over the elements of the inverse of $B$, which can be used to estimate errors.
Abstract: This paper proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems $Bx = b$ with positive definite $B$ for $x$. The goal is to replace the point estimates returned by existing methods with a Gaussian posterior belief over the elements of the inverse of $B$, which can be used to estimate errors. Recent probabilistic interpretations of the secant family of quasi-Newton optimization algorithms are extended. Combined with properties of the conjugate gradient algorithm, this leads to uncertainty-calibrated methods with very limited cost overhead over conjugate gradients, a self-contained novel interpretation of the quasi-Newton and conjugate gradient algorithms, and a foundation for new nonlinear optimization methods.

Book ChapterDOI
TL;DR: A communication-efficient distributed algorithm to minimize the overall empirical loss, which is the average of the local empirical losses of the distributed computing system, based on an inexact damped Newton method.
Abstract: We consider distributed convex optimization problems originating from sample average approximation of stochastic optimization, or empirical risk minimization in machine learning. We assume that each machine in the distributed computing system has access to a local empirical loss function, constructed with i.i.d. data sampled from a common distribution. We propose a communication-efficient distributed algorithm to minimize the overall empirical loss, which is the average of the local empirical losses. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning where the condition number of the problem grows with square root of the sample size, the required number of communication rounds of the algorithm does not increase with the sample size, and only grows slowly with the number of machines.

Journal ArticleDOI
26 Oct 2015
TL;DR: The algebraic multigrid method known as smoothed aggregation, agnostic to the underlying tessellation, which can even vary over time, is applied to cloth simulation, and it only requires the user to provide a fine-level mesh.
Abstract: Existing multigrid methods for cloth simulation are based on geometric multigrid. While good results have been reported, geometric methods are problematic for unstructured grids, widely varying material properties, and varying anisotropies, and they often have difficulty handling constraints arising from collisions. This paper applies the algebraic multigrid method known as smoothed aggregation to cloth simulation. This method is agnostic to the underlying tessellation, which can even vary over time, and it only requires the user to provide a fine-level mesh. To handle contact constraints efficiently, a prefiltered preconditioned conjugate gradient method is introduced. For highly efficient preconditioners, like the ones proposed here, prefiltering is essential, but, even for simple preconditioners, prefiltering provides significant benefits in the presence of many constraints. Numerical tests of the new approach on a range of examples confirm 6--8x speedups on a fully dressed character with 371k vertices, and even larger speedups on synthetic examples.
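
The role of the filter can be sketched with the classical filtered PCG used in constrained cloth solvers; the prefiltering of the right-hand side and preconditioner described in the paper goes further, so treat this as background rather than the paper's method. The filter S simply zeroes constrained degrees of freedom in the residual and search direction, so the solve never moves them.

```python
# Schematic filtered PCG: a filter S zeroes constrained degrees of freedom.
import numpy as np

def filtered_pcg(A, b, free_mask, M_inv=None, tol=1e-8, maxiter=500):
    if M_inv is None:
        M_inv = lambda r: r
    S = lambda v: v * free_mask           # filter: zero out constrained DOFs
    x = np.zeros_like(b)
    r = S(b - A @ x)
    z = S(M_inv(r))
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = S(A @ p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = S(M_inv(r))
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```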

Journal ArticleDOI
TL;DR: In this article, the authors develop an optimization framework for problems whose solutions are well-approximated by Hierarchical Tucker tensors, an efficient structured tensor format based on recursive subspace factorizations.

Journal ArticleDOI
TL;DR: The results show that using PHPSO to solve the one-dimensional heat conduction equation can outperform two parallel algorithms as well as HPSO itself, and the method is shown to have strong robustness and high speedup.

Journal ArticleDOI
TL;DR: For 2D and 3D frequency-domain elastic wave modeling, a parallel iterative solver based on a conjugate gradient acceleration of the symmetric Kaczmarz row-projection method, named the conjugates-gradient-accelerated component-averaged row projections (CARP-CG) method, shows interesting convergence properties.
Abstract: Full-waveform inversion and reverse time migration rely on an efficient forward-modeling approach. Current 3D large-scale frequency-domain implementations of these techniques mostly extract the desired frequency component from the time-domain wavefields through discrete Fourier transform. However, instead of conducting the time-marching steps for each seismic source, in which the time step is limited by the stability condition, performing the wave modeling directly in the frequency domain using an iterative linear solver may reduce the entire computational complexity. For 2D and 3D frequency-domain elastic wave modeling, a parallel iterative solver based on a conjugate gradient acceleration of the symmetric Kaczmarz row-projection method, named the conjugate-gradient-accelerated component-averaged row projections (CARP-CG) method, shows interesting convergence properties. The parallelization is realized through row-block division and component averaging operations. Convergence is achieved systemat...

Journal ArticleDOI
TL;DR: This paper extends the primal-dual Newton conjugate gradient method (pdNCG) of T. F. Chan, G. H. Golub, and P. Mulet [SIAM J. Sci. Comput., 20 (1999), pp. 1964-1977] to compressed sensing problems where the signals to be recovered are sparse in coherent and redundant dictionaries.
Abstract: In this paper we are concerned with the solution of compressed sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. We extend the primal-dual Newton Conjugate Gradient method (pdNCG) in [T. F. Chan, G. H. Golub, and P. Mulet, SIAM J. Sci. Comput., 20 (1999), pp. 1964--1977] to CS problems. We provide an inexpensive and provably effective preconditioning technique for linear systems using pdNCG. Numerical results are presented on CS problems which demonstrate the performance of pdNCG with the proposed preconditioner compared to state-of-the-art existing solvers.

Proceedings ArticleDOI
24 May 2015
TL;DR: This work proposes an FPGA design for soft-output data detection in orthogonal frequency-division multiplexing (OFDM)-based large-scale (multi-user) MIMO systems that uses a modified version of the conjugate gradient least square (CGLS) algorithm.
Abstract: We propose an FPGA design for soft-output data detection in orthogonal frequency-division multiplexing (OFDM)-based large-scale (multi-user) MIMO systems. To reduce the high computational complexity of data detection, our design uses a modified version of the conjugate gradient least square (CGLS) algorithm. In contrast to existing linear detection algorithms for massive MIMO systems, our method avoids two of the most complex tasks, namely Gram-matrix computation and matrix inversion, while still being able to compute soft-outputs. Our architecture uses an array of reconfigurable processing elements to compute the CGLS algorithm in a hardware-efficient manner. Implementation results on Xilinx Virtex-7 FPGA for a 128 antenna, 8 user large-scale MIMO system show that our design only uses 70% of the area-delay product of the competitive method, while exhibiting superior error-rate performance.
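
For reference, textbook CGLS solves min_x ||Hx - y||_2 by running CG on the normal equations without ever forming H^H H; the paper's hardware design uses a modified variant, so the plain version and the toy dimensions below are only illustrative.

```python
# Textbook CGLS for least squares, applied to a toy tall complex system.
import numpy as np

def cgls(A, y, iters=20):
    x = np.zeros(A.shape[1], dtype=A.dtype)
    r = y.copy()
    s = A.conj().T @ r
    p = s.copy()
    gamma = np.vdot(s, s).real
    for _ in range(iters):
        q = A @ p
        alpha = gamma / np.vdot(q, q).real
        x = x + alpha * p
        r = r - alpha * q
        s = A.conj().T @ r
        gamma_new = np.vdot(s, s).real
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# toy usage: many more "antennas" (rows) than "users" (columns)
rng = np.random.default_rng(0)
H = (rng.standard_normal((128, 8)) + 1j * rng.standard_normal((128, 8))) / np.sqrt(2)
x_true = rng.choice([-1, 1], 8) + 1j * rng.choice([-1, 1], 8)
print(np.round(cgls(H, H @ x_true, iters=8), 2)[:4])
```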

Journal ArticleDOI
TL;DR: Algorithms that impose non-negative constraints in model-based optoacoustic inversion are investigated, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach.
Abstract: The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues or imperfectness of the forward model. These parameters result in ambiguities on the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be higher or equal than zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positive restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency.

Journal ArticleDOI
TL;DR: The numerical experiments for the testing problems from the Constrained and Unconstrained Test Environment collection demonstrate that the modified SSML-BFGS method yields a desirable improvement over CGOPT and the original SSML-BFGS method.
Abstract: The introduction of quasi-Newton and nonlinear conjugate gradient methods revolutionized the field of nonlinear optimization. The self-scaling memoryless Broyden-Fletcher-Goldfarb-Shanno (SSML-BFGS) method by Perry (Discussion Paper 269, 1977) and Shanno (SIAM J Numer Anal, 15, 1247-1257, 1978) provided a good understanding about the relationship between the two classes of methods. Based on the SSML-BFGS method, new conjugate gradient algorithms, called CG_DESCENT and CGOPT, have been proposed by Hager and Zhang (SIAM J Optim, 16, 170-192, 2005) and Dai and Kou (SIAM J Optim, 23, 296-320, 2013), respectively. It is somewhat surprising that the two conjugate gradient methods perform more efficiently than the SSML-BFGS method. In this paper, we aim at proposing some suitable modifications of the SSML-BFGS method such that the sufficient descent condition holds. Convergence analysis of the modified method is made for convex and nonconvex functions, respectively. The numerical experiments for the testing problems from the Constrained and Unconstrained Test Environment collection demonstrate that the modified SSML-BFGS method yields a desirable improvement over CGOPT and the original SSML-BFGS method.
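
To make the CG_DESCENT connection concrete, the Hager-Zhang direction update is sketched below as a standalone helper (step-size and line-search machinery omitted); the truncation by eta_k is the safeguard CG_DESCENT uses for general functions.

```python
# Hager-Zhang (CG_DESCENT) conjugate gradient direction update.
import numpy as np

def cg_descent_direction(g_new, g_old, d_old, eta=0.01):
    """Return d_new = -g_new + beta_bar * d_old with the Hager-Zhang beta."""
    y = g_new - g_old
    dy = d_old @ y
    beta = (y - 2.0 * d_old * (y @ y) / dy) @ g_new / dy
    # lower-bound truncation that guarantees global convergence in general
    eta_k = -1.0 / (np.linalg.norm(d_old) * min(eta, np.linalg.norm(g_old)))
    return -g_new + max(beta, eta_k) * d_old
```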

Journal ArticleDOI
TL;DR: The construction of the LINCOA software of the author, which is designed for linearly constrained optimization without derivatives when there are hundreds of variables, is considered, and general linear constraints on the variables that have to hold at $x_k$ and at $x_{k+1}$ are allowed.
Abstract: Quadratic models $Q_k(x)$, $x \in \mathbb{R}^n$, of the objective function $F(x)$, $x \in \mathbb{R}^n$, are used by many successful iterative algorithms for minimization, where $k$ is the iteration number. Given the vector of variables $x_k \in \mathbb{R}^n$, a new vector $x_{k+1}$ may be calculated that satisfies $Q_k(x_{k+1}) < Q_k(x_k)$, in the hope that it provides the reduction $F(x_{k+1}) < F(x_k)$. Trust region methods include a bound of the form $\|x_{k+1} - x_k\| \le \Delta_k$. Also we allow general linear constraints on the variables that have to hold at $x_k$ and at $x_{k+1}$. We consider the construction of $x_{k+1}$, using only of magnitude $n^2$ operations on a typical iteration when $n$ is large. The linear constraints are treated by active sets, which may be updated during an iteration, and which decrease the number of degrees of freedom in the variables temporarily, by restricting $x$ to an affine subset of $\mathbb{R}^n$. Conjugate gradient and Krylov subspace methods are addressed for adjusting the reduced variables, but the resultant steps are expressed in terms of the original variables. Termination conditions are given that are intended to combine suitable reductions in $Q_k(\cdot)$ with a sufficiently small number of steps. The reason for our work is that $x_{k+1}$ is required in the LINCOA software of the author, which is designed for linearly constrained optimization without derivatives when there are hundreds of variables. Our studies go beyond the final version of LINCOA, however, which employs conjugate gradients with termination at the trust region boundary. In particular, we find that, if an active set is updated at a point that is not the trust region centre, then the Krylov method may terminate too early due to a degeneracy. An extension to the conjugate gradient method for searching round the trust region
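
For orientation, the standard truncated (Steihaug-Toint) conjugate gradient iteration for the trust-region subproblem is sketched below; Powell's active-set handling and his extension for searching round the trust-region boundary are not reproduced here.

```python
# Standard truncated CG for  min_d  g^T d + 0.5 d^T H d  subject to ||d|| <= Delta.
import numpy as np

def truncated_cg(H, g, Delta, tol=1e-10, maxiter=None):
    n = len(g)
    maxiter = maxiter or n
    d = np.zeros(n)
    r = g.copy()               # gradient of the model at d = 0
    p = -r
    for _ in range(maxiter):
        Hp = H @ p
        pHp = p @ Hp
        if pHp <= 0:           # negative curvature: step to the boundary
            return d + _to_boundary(d, p, Delta) * p
        alpha = (r @ r) / pHp
        d_next = d + alpha * p
        if np.linalg.norm(d_next) >= Delta:
            return d + _to_boundary(d, p, Delta) * p
        r_next = r + alpha * Hp
        if np.linalg.norm(r_next) < tol:
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        p = -r_next + beta * p
        d, r = d_next, r_next
    return d

def _to_boundary(d, p, Delta):
    """Positive tau with ||d + tau*p|| = Delta."""
    a, b, c = p @ p, 2 * d @ p, d @ d - Delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```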

Book
03 Nov 2015
TL;DR: Variational Methods for the Numerical Solution of Nonlinear Elliptic Problems addresses computational methods that have proven efficient for the solution of a large variety of nonlinear elliptic problems, offering useful insights suitable for advanced graduate students, faculty, and researchers in applied and computational mathematics.
Abstract: Variational Methods for the Numerical Solution of Nonlinear Elliptic Problems addresses computational methods that have proven efficient for the solution of a large variety of nonlinear elliptic problems. These methods can be applied to many problems in science and engineering, but this book focuses on their application to problems in continuum mechanics and physics. This book differs from others on the topic by presenting examples of the power and versatility of operator-splitting methods; providing a detailed introduction to alternating direction methods of multipliers and their applicability to the solution of nonlinear (possibly nonsmooth) problems from science and engineering; and showing that nonlinear least-squares methods, combined with operator-splitting and conjugate gradient algorithms, provide efficient tools for the solution of highly nonlinear problems. Audience: The book provides useful insights suitable for advanced graduate students, faculty, and researchers in applied and computational mathematics as well as research engineers, mathematical physicists, and systems engineers.

Journal ArticleDOI
Hideaki Iiduka
TL;DR: This paper proposes an algorithm which not only minimizes the objective function quickly but also converges in the fixed point set much faster than the existing algorithms and proves that the algorithm with diminishing step-size sequences strongly converges to the solution to the convex minimization problem.
Abstract: The existing algorithms for solving the convex minimization problem over the fixed point set of a nonexpansive mapping on a Hilbert space are based on algorithmic methods, such as the steepest descent method and conjugate gradient methods, for finding a minimizer of the objective function over the whole space, and attach importance to minimizing the objective function as quickly as possible. Meanwhile, it is of practical importance to devise algorithms which converge in the fixed point set quickly because the fixed point set is the set with the constraint conditions that must be satisfied in the problem. This paper proposes an algorithm which not only minimizes the objective function quickly but also converges in the fixed point set much faster than the existing algorithms and proves that the algorithm with diminishing step-size sequences strongly converges to the solution to the convex minimization problem. We also analyze the proposed algorithm with each of the Fletcher-Reeves, Polak-Ribiere-Polyak, Hestenes-Stiefel, and Dai-Yuan formulas used in the conventional conjugate gradient methods, and show that there is an inconvenient possibility that their algorithms may not converge to the solution to the convex minimization problem. We numerically compare the proposed algorithm with the existing algorithms and show its effectiveness and fast convergence.

Journal ArticleDOI
TL;DR: A method to reduce the adverse effect of unreliable local estimations is introduced, which helps to get rid of errors in specular areas and edges where depth values are discontinuous.
Abstract: In this paper, we investigate how the recently emerged photography technology—the light field—can benefit depth map estimation, a challenging computer vision problem. A novel framework is proposed to reconstruct continuous depth maps from light field data. Unlike many traditional methods for the stereo matching problem, the proposed method does not need to quantize the depth range. By making use of the structure information amongst the densely sampled views in light field data, we can obtain dense and relatively reliable local estimations. Starting from initial estimations, we go on to propose an optimization method based on solving a sparse linear system iteratively with a conjugate gradient method. Two different affinity matrices for the linear system are employed to balance the efficiency and quality of the optimization. Then, a depth-assisted segmentation method is introduced so that different segments can employ different affinity matrices. Experiment results on both synthetic and real light fields demonstrate that our continuous results are more accurate, efficient, and able to preserve more details compared with discrete approaches.
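
The optimization step can be illustrated with a generic sketch: noisy per-element estimates are refined by solving a sparse, affinity-weighted linear system with CG. The Laplacian-plus-identity system and the parameter lam below are illustrative assumptions, not the paper's affinity matrices.

```python
# Generic refinement of initial estimates via a sparse affinity-weighted system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def refine(initial, weights, lam=1.0):
    """Solve (L + lam*I) d = lam*initial, where L is the graph Laplacian of the
    (symmetric) affinity matrix `weights`; larger lam trusts the initial values more."""
    W = sp.csr_matrix(weights)
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    A = L + lam * sp.identity(W.shape[0])
    d, info = cg(A, lam * initial.ravel(), maxiter=1000)
    return d.reshape(initial.shape)
```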

Journal ArticleDOI
TL;DR: This paper shows that the average case performance of CGIHT is robust to additive noise well beyond its theoretical worst case guarantees and, in this setting, is typically the fastest iterative hard thresholding algorithm for sparse approximation.
Abstract: Conjugate gradient iterative hard thresholding (CGIHT) for compressed sensing combines the low per iteration computational cost of simple line search iterative hard thresholding algorithms with the improved convergence rates of more sophisticated sparse approximation algorithms. This paper shows that the average case performance of CGIHT is robust to additive noise well beyond its theoretical worst case guarantees and, in this setting, is typically the fastest iterative hard thresholding algorithm for sparse approximation. Moreover, CGIHT is observed to benefit more than other iterative hard thresholding algorithms when jointly considering multiple sparse vectors whose sparsity patterns coincide.

01 Jan 2015
TL;DR: A 3-layer perceptron feedforward neural network is employed for a comparison of three different training algorithms, i.e., the Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG), and Bayesian Regularization (BR) backpropagation algorithms, in view of their ability to perform 12-step-ahead monthly wind speed forecasting.
Abstract: Wind speed forecasting is critical for wind energy conversion systems since it greatly influences the issues such as scheduling of the power systems, and dynamic control of the wind turbines. Also, ...

Journal ArticleDOI
TL;DR: This paper considers the mathematical model of thermo- and photo-acoustic tomography for the recovery of the initial condition of a wave field from knowledge of its boundary values and derives a solvable equation for the unknown initial condition.
Abstract: In this paper we consider the mathematical model of thermo- and photo-acoustic tomography for the recovery of the initial condition of a wave field from knowledge of its boundary values. Unlike the free-space setting, we consider the wave problem in a region enclosed by a surface where an impedance boundary condition is imposed. This condition models the presence of physical boundaries such as interfaces or acoustic mirrors which reflect some of the wave energy back into the enclosed domain. By recognizing that the inverse problem is equivalent to a statement of boundary observability, we use control operators to prove the unique and stable recovery of the initial wave profile from knowledge of boundary measurements. Since our proof is constructive, we explicitly derive a solvable equation for the unknown initial condition. This equation can be solved numerically using the conjugate gradient method. We also propose an alternative approach based on the stabilization of waves. This leads to an exponentially and uniformly convergent Neumann series reconstruction when the impedance coefficient is not identically zero. In both cases, if well-known geometrical conditions are satisfied, our approaches are naturally suited for variable wave speed and for measurements on a subset of the boundary.