
Showing papers on "Iterative method published in 2014"


Journal ArticleDOI
TL;DR: This paper establishes the linear convergence rate of the ADMM for the decentralized consensus optimization problem with strongly convex local objective functions, in terms of the network topology, the properties of local objective functions, and the algorithm parameter.
Abstract: In decentralized consensus optimization, a connected network of agents collaboratively minimize the sum of their local objective functions over a common decision variable, where their information exchange is restricted between the neighbors. To this end, one can first obtain a problem reformulation and then apply the alternating direction method of multipliers (ADMM). The method applies iterative computation at the individual agents and information exchange between the neighbors. This approach has been observed to converge quickly and deemed powerful. This paper establishes its linear convergence rate for the decentralized consensus optimization problem with strongly convex local objective functions. The theoretical convergence rate is explicitly given in terms of the network topology, the properties of local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline toward accelerating the ADMM convergence.
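The per-agent computation described above can be sketched for the simplest case. This is a toy decentralized consensus ADMM with quadratic local objectives f_i(x) = (x - a_i)^2/2, whose network optimum is the average of the a_i; the graph, penalty parameter c, and objectives are illustrative assumptions, not the paper's exact setup.

```python
# Toy decentralized consensus ADMM: each agent i holds a_i, exchanges its
# current estimate only with graph neighbors, and all estimates converge
# to the network-wide average of the a_i.

def consensus_admm(a, neighbors, c=1.0, iters=2000):
    n = len(a)
    x = [0.0] * n          # local primal estimates
    alpha = [0.0] * n      # accumulated edge duals per agent (init 0)
    deg = [len(neighbors[i]) for i in range(n)]
    for _ in range(iters):
        s = [sum(x[j] for j in neighbors[i]) for i in range(n)]
        # primal update: solve (x - a_i) + alpha_i + c*deg_i*x
        #                       - (c/2)*(deg_i*x_i + s_i) = 0
        x_new = [(a[i] - alpha[i] + 0.5 * c * (deg[i] * x[i] + s[i]))
                 / (1.0 + c * deg[i]) for i in range(n)]
        s_new = [sum(x_new[j] for j in neighbors[i]) for i in range(n)]
        # dual update along each incident edge
        alpha = [alpha[i] + 0.5 * c * (deg[i] * x_new[i] - s_new[i])
                 for i in range(n)]
        x = x_new
    return x

# 3-agent path graph 0 -- 1 -- 2; consensus value is mean([1, 2, 6]) = 3
estimates = consensus_admm([1.0, 2.0, 6.0], {0: [1], 1: [0, 2], 2: [1]})
```

The dual variables stay sum-zero across the network, which is what forces the common fixed point to be the average rather than an arbitrary consensus value.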

836 citations


Journal ArticleDOI
TL;DR: This paper considers a MIMO multicell system where multiple mobile users ask for computation offloading to a common cloud server, and proposes an iterative algorithm, based on a novel successive convex approximation technique, that converges to a local optimal solution of the original nonconvex problem.
Abstract: Migrating computationally intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources (the transmit precoding matrices of the MUs) and the computational resources (the CPU cycles/second assigned by the cloud to each MU) in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to express the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. Then, we reformulate the algorithm in a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.

632 citations


Journal ArticleDOI
TL;DR: Two online schemes for an integrated design of fault-tolerant control (FTC) systems with application to Tennessee Eastman (TE) benchmark are proposed.
Abstract: In this paper, two online schemes for an integrated design of fault-tolerant control (FTC) systems, with application to the Tennessee Eastman (TE) benchmark, are proposed. Based on the data-driven design of the proposed fault-tolerant architecture, whose core is an observer/residual-generator-based realization of the Youla parameterization of all stabilizing controllers, FTC is achieved by an adaptive residual generator for the online identification of the fault-diagnosis-relevant vectors, and an iterative optimization method for system performance enhancement. The performance and effectiveness of the proposed schemes are demonstrated on the TE benchmark model.

586 citations


Journal ArticleDOI
TL;DR: It is shown that the iterative performance index function converges nonincreasingly to the optimal solution of the Hamilton-Jacobi-Bellman equation, and it is proven that any of the iterative control laws can stabilize the nonlinear systems.
Abstract: This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law, which optimizes the iterative performance index function. The main contribution of this paper is to analyze the convergence and stability properties of the policy iteration method for discrete-time nonlinear systems for the first time. It is shown that the iterative performance index function converges nonincreasingly to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear systems. Two neural networks are used to approximate the performance index function and to compute the optimal control law, respectively, facilitating the implementation of the iterative ADP algorithm; the convergence of the weight matrices is analyzed. Finally, numerical results and analysis are presented to illustrate the performance of the developed method.
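The evaluate-then-improve loop described above has an exactly solvable special case that makes the mechanics concrete: scalar discrete-time LQR, where policy evaluation is a closed-form Lyapunov solve instead of a neural approximation. The system x+ = a*x + b*u, the cost weights, and the numbers below are illustrative assumptions, not the paper's nonlinear benchmark.

```python
# Policy iteration for scalar discrete-time LQR: starting from a
# stabilizing gain, alternate exact policy evaluation and policy
# improvement; P converges to the algebraic Riccati equation solution.

def policy_iteration(a, b, q, r, K0=0.0, iters=50):
    K = K0  # u = -K*x; requires |a - b*K0| < 1 (stabilizing initial policy)
    P = 0.0
    for _ in range(iters):
        acl = a - b * K
        # policy evaluation: P = q + r*K^2 + acl^2 * P  (Lyapunov equation)
        P = (q + r * K * K) / (1.0 - acl * acl)
        # policy improvement
        K = a * b * P / (r + b * b * P)
    return P, K

P, K = policy_iteration(a=0.9, b=1.0, q=1.0, r=1.0)
# At convergence P satisfies the ARE: P = q + a^2*P - (a*b*P)^2/(r + b^2*P)
```

The first evaluated cost (here with K0 = 0) upper-bounds all later ones, mirroring the nonincreasing performance index established in the paper.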

535 citations


Journal ArticleDOI
TL;DR: A novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner, and the optimal control input is obtained by only solving an augmented ARE.

397 citations


Book
09 Oct 2014
TL;DR: In this article, a unified approach to constructing iterative methods for solving irregular operator equations is presented, together with rigorous theoretical analysis for several classes of these methods. The main distinguishing feature of the approach is that it requires no structural conditions on the equations under consideration beyond standard smoothness conditions.
Abstract: This volume presents a unified approach to constructing iterative methods for solving irregular operator equations and provides rigorous theoretical analysis for several classes of these methods. The analysis of methods includes convergence theorems as well as necessary and sufficient conditions for their convergence at a given rate. The principal groups of methods studied in the book are iterative processes based on the technique of universal linear approximations, stable gradient-type processes, and methods of stable continuous approximations. Compared to existing monographs and textbooks on ill-posed problems, the main distinguishing feature of the presented approach is that it doesn't require any structural conditions on the equations under consideration, except for standard smoothness conditions. This allows one to obtain, in a uniform style, stable iterative methods applicable to wide classes of nonlinear inverse problems. The practical efficiency of the suggested algorithms is illustrated in application to inverse problems of potential theory and acoustic scattering. The volume can be read by anyone with a basic knowledge of functional analysis. The book will be of interest to applied mathematicians and specialists in mathematical modeling and inverse problems.
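A standard member of the stable gradient-type family mentioned above is Landweber iteration, x_{n+1} = x_n - mu * F'(x_n) * (F(x_n) - y), i.e. gradient descent on 0.5*|F(x) - y|^2. The scalar equation, step size, and starting point below are illustrative assumptions.

```python
# Landweber iteration sketch for a nonlinear operator equation F(x) = y,
# here the scalar toy problem x^3 = 8 with solution x = 2.

def landweber(F, dF, y, x0, mu, iters=200):
    x = x0
    for _ in range(iters):
        # gradient step on the residual functional 0.5*(F(x) - y)^2
        x = x - mu * dF(x) * (F(x) - y)
    return x

root = landweber(F=lambda x: x ** 3, dF=lambda x: 3 * x ** 2,
                 y=8.0, x0=1.0, mu=0.01)
```

For genuinely ill-posed problems the iteration is stopped early against the noise level (the discrepancy principle); the toy problem above is well-posed, so it simply converges.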

329 citations


Journal ArticleDOI
TL;DR: A half-quadratic (HQ) framework to solve the robust sparse representation problem is developed, and it is shown that the ℓ1-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation.
Abstract: Robust sparse representation has shown significant potential in solving challenging problems in computer vision such as biometrics and visual surveillance. Although several robust sparse models have been proposed and promising results have been obtained, they are either for error correction or for error detection, and learning a general framework that systematically unifies these two aspects and explores their relation is still an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework is applicable to performing both error correction and error detection. More specifically, by using the additive form of HQ, we propose an l1-regularized error correction method by iteratively recovering corrupted data from errors incurred by noises and outliers; by using the multiplicative form of HQ, we propose an l1-regularized error detection method by learning from uncorrupted data iteratively. We also show that the l1-regularization solved by soft-thresholding function has a dual relationship to Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
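The dual relationship stated above can be checked numerically in one dimension: the Moreau envelope min_x 0.5*(x - z)^2 + lam*|x| equals the Huber function of z, and its minimizer is the soft-thresholding operator. The threshold and test points are illustrative.

```python
# One-dimensional check that soft-thresholding and the Huber M-estimator
# are two faces of the same minimization problem.

def soft_threshold(z, lam):
    """Minimizer of 0.5*(x - z)^2 + lam*|x|."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def huber(z, lam):
    """Huber loss with threshold lam."""
    return 0.5 * z * z if abs(z) <= lam else lam * abs(z) - 0.5 * lam * lam

def envelope(z, lam):
    """Value of min_x 0.5*(x - z)^2 + lam*|x| at the soft-threshold point."""
    x = soft_threshold(z, lam)
    return 0.5 * (x - z) ** 2 + lam * abs(x)
```

Small residuals are treated quadratically and large ones linearly, which is exactly why the ℓ1-regularized formulation is robust to gross errors.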

257 citations


Journal ArticleDOI
TL;DR: The authors introduce an iterative algorithm, called the matching demodulation transform (MDT), to generate a time-frequency (TF) representation with satisfactory energy concentration; an MDT-based synchrosqueezing algorithm is also described to further enhance the concentration and reduce the diffusion of the curved IF profile in the TF representation of the original synchrosqueezing transform.
Abstract: The authors introduce an iterative algorithm, called the matching demodulation transform (MDT), to generate a time-frequency (TF) representation with satisfactory energy concentration. As opposed to conventional TF analysis methods, this algorithm does not have to devise an ad hoc parametric TF dictionary. Assuming the FM law of a signal can be well characterized by a determined mathematical model with reasonable accuracy, the MDT algorithm can adopt a partial demodulation and stepwise refinement strategy for investigating the TF properties of the signal. The practical implementation of the MDT involves an iterative procedure that gradually matches the true instantaneous frequency (IF) of the signal. Theoretical analysis of the MDT's performance is provided, including quantitative analysis of the IF estimation error and the convergence condition. Moreover, the MDT-based synchrosqueezing algorithm is described to further enhance the concentration and reduce the diffusion of the curved IF profile in the TF representation of the original synchrosqueezing transform. The validity and practical utility of the proposed method are demonstrated on simulated as well as real signals.

235 citations


Journal ArticleDOI
TL;DR: An iterative algorithm is presented that enables the application of DL for the reconstruction of cardiac cine data with Cartesian undersampling, and is compared to and shown to systematically outperform k-t FOCUSS, a successful CS method that uses a fixed basis transform.
Abstract: The reconstruction of dynamic magnetic resonance data from an undersampled k-space has been shown to have a huge potential in accelerating the acquisition process of this imaging modality. With the introduction of compressed sensing (CS) theory, solutions for undersampled data have arisen which reconstruct images consistent with the acquired samples and compliant with a sparsity model in some transform domain. Fixed basis transforms have been extensively used as sparsifying transforms in the past, but recent developments in dictionary learning (DL) have been shown to outperform them by training an overcomplete basis that is optimal for a particular dataset. We present here an iterative algorithm that enables the application of DL for the reconstruction of cardiac cine data with Cartesian undersampling. This is achieved with local processing of spatio-temporal 3D patches and by independent treatment of the real and imaginary parts of the dataset. The enforcement of temporal gradients is also proposed as an additional constraint that can greatly accelerate the convergence rate and improve the reconstruction for high acceleration rates. The method is compared to and shown to systematically outperform k-t FOCUSS, a successful CS method that uses a fixed basis transform.

228 citations


Journal ArticleDOI
TL;DR: Some anomalies are found in the Jarratt family of fourth-order iterative methods by applying its members to quadratic polynomials as a means of studying the family's dynamical behavior.

222 citations


Journal ArticleDOI
TL;DR: A set of efficient closed-form AOA-based self-localization algorithms using auxiliary variables is developed; the algorithms achieve much higher localization accuracy than the triangulation method and avoid the local minima and divergence of iterative ML estimators.
Abstract: Node self-localization is a key research topic for wireless sensor networks (WSNs). There are two main algorithms, the triangulation method and the maximum likelihood (ML) estimator, for angle of arrival (AOA) based self-localization. The ML estimator requires a good initialization close to the true location to avoid divergence, while the triangulation method cannot obtain the closed-form solution with high efficiency. In this paper, we develop a set of efficient closed-form AOA based self-localization algorithms using auxiliary variables based methods. First, we formulate the self-localization problem as a linear least squares problem using auxiliary variables. Based on its closed-form solution, a new auxiliary variables based pseudo-linear estimator (AVPLE) is developed. By analyzing its estimation error, we present a bias compensated AVPLE (BCAVPLE) to reduce the estimation error. Then we develop a novel BCAVPLE based weighted instrumental variable (BCAVPLE-WIV) estimator to achieve asymptotically unbiased estimation of locations and orientations of unknown nodes based on prior knowledge of the AOA noise variance. In the case that the AOA noise variance is unknown, a new AVPLE based WIV (AVPLE-WIV) estimator is developed to localize the unknown nodes. Also, we develop an autonomous coordinate rotation (ACR) method to overcome the tangent instability of the proposed algorithms when the orientation of the unknown node is near π/2. We also derive the Cramer-Rao lower bound (CRLB) of the ML estimator. Extensive simulations demonstrate that the new algorithms achieve much higher localization accuracy than the triangulation method and avoid local minima and divergence in iterative ML estimators.
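The pseudo-linear formulation at the heart of the estimators above can be shown in miniature: each bearing theta_i measured at anchor q_i gives a linear constraint sin(t)*x - cos(t)*y = sin(t)*qx - cos(t)*qy on the node position, which is then recovered by linear least squares. This toy fix omits the paper's bias compensation and instrumental variables; anchor and node positions are illustrative assumptions.

```python
# Pseudo-linear least-squares position fix from noiseless bearings.
import math

def aoa_fix(anchors, bearings):
    # Accumulate the 2x2 normal equations (A^T A) p = A^T b.
    h11 = h12 = h22 = g1 = g2 = 0.0
    for (qx, qy), t in zip(anchors, bearings):
        a1, a2 = math.sin(t), -math.cos(t)   # line normal for this bearing
        b = a1 * qx + a2 * qy
        h11 += a1 * a1; h12 += a1 * a2; h22 += a2 * a2
        g1 += a1 * b; g2 += a2 * b
    det = h11 * h22 - h12 * h12              # nonzero for non-parallel bearings
    return ((h22 * g1 - h12 * g2) / det, (h11 * g2 - h12 * g1) / det)

anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 6.0)]
true_pos = (2.0, 3.0)
bearings = [math.atan2(true_pos[1] - qy, true_pos[0] - qx)
            for qx, qy in anchors]
x, y = aoa_fix(anchors, bearings)
```

With noisy bearings this estimator is biased (the noise enters the regressor), which is precisely what the paper's BCAVPLE and WIV machinery corrects.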

Journal ArticleDOI
TL;DR: This paper focuses on the filter design for nonuniformly sampled nonlinear systems which can be approximated by Takagi-Sugeno (T-S) fuzzy systems and derives the linear-matrix-inequality-based sufficient conditions by studying the stochastic stability and the energy-to-peak performance of the estimation error system.
Abstract: This paper focuses on the filter design for nonuniformly sampled nonlinear systems that can be approximated by Takagi-Sugeno (T-S) fuzzy systems. The sampling periods of the measurements are time varying, and the nonuniform observations of the outputs are modeled by a homogeneous Markov chain. A mode-dependent estimator with a fast sampling frequency is proposed such that the estimate can track the signal to be estimated with the nonuniformly sampled outputs. The nonlinear systems are discretized with the fast sampling period. By using an augmentation technique, the corresponding stochastic estimation error system is obtained. By studying the stochastic stability and the energy-to-peak performance of the estimation error system, we derive linear-matrix-inequality-based sufficient conditions. The parameters of the mode-dependent estimator can be calculated by using the proposed iterative algorithm. Two examples are used to demonstrate the design procedure and the efficacy of the proposed design method.

Book
14 Mar 2014
TL;DR: In this book, the authors present an approach for using fast and efficient iterative methods to approximate solutions of nonlinear equations, and provide a large number of exercises complementing the theory.
Abstract: The book is designed for researchers, students, and practitioners interested in using fast and efficient iterative methods to approximate solutions of nonlinear equations. Four major problems are addressed: problem 1 shows that the iterates are well defined; problem 2 concerns the convergence of the sequences generated by a process and whether the limit points are, in fact, solutions of the equation; problem 3 concerns the economy of the entire operations; and problem 4 concerns how best to choose a method, algorithm, or software program to solve a specific type of problem, together with a description of when a given algorithm succeeds or fails. The book contains applications in several areas of applied sciences, including mathematical programming and mathematical economics. There is also a large number of exercises complementing the theory. The book contains the latest convergence results for iterative methods: iterative methods with the least computational cost; iterative methods with the weakest convergence conditions; and open problems on iterative methods.

Journal ArticleDOI
TL;DR: A graphical tool that allows one to study, in an easy and compact way, the real dynamics of iterative methods whose iterations depend on one parameter is presented, together with an example of the dynamics of the damped Newton's method applied to a cubic polynomial.

Proceedings ArticleDOI
31 May 2014
TL;DR: This work shows that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods, and obtains a two-pass algorithm for constructing optimal embeddings in snowflake spaces that runs in O(m log log n) time.
Abstract: We show an algorithm for solving symmetric diagonally dominant (SDD) linear systems with m non-zero entries to a relative error of ε in O(m log^(1/2) n log^c n log(1/ε)) time. Our approach follows the recursive preconditioning framework, which aims to reduce graphs to trees using iterative methods. We improve two key components of this framework: random sampling and tree embeddings. Both of these components are used in a variety of other algorithms, and our approach also extends to the dual problem of computing electrical flows. We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. In the graph setting, this leads to ultra-sparsifiers that have optimal behavior in expectation. The improved running time makes previous low-stretch embedding algorithms the running time bottleneck in this framework. In our analysis, we relax the requirement of these embeddings to snowflake spaces. We then obtain a two-pass algorithm for constructing optimal embeddings in snowflake spaces that runs in O(m log log n) time. This algorithm is also readily parallelizable.

Journal ArticleDOI
TL;DR: A new method is proposed, termed LDA-L1, that maximizes the ratio of the between-class dispersion to the within-class dispersion using the L1-norm rather than the L2-norm; it is robust to outliers and is solved by a proposed iterative algorithm.
Abstract: Fisher linear discriminant analysis (LDA) is a classical subspace learning technique for extracting discriminative features for pattern recognition problems. The formulation of the Fisher criterion is based on the L2-norm, which makes LDA prone to being affected by the presence of outliers. In this paper, we propose a new method, termed LDA-L1, that maximizes the ratio of the between-class dispersion to the within-class dispersion using the L1-norm rather than the L2-norm. LDA-L1 is robust to outliers and is solved by a proposed iterative algorithm. The algorithm is easy to implement and is theoretically shown to arrive at a locally maximal point. LDA-L1 does not suffer from the small-sample-size and rank-limit problems that exist in conventional LDA. Experimental results on image recognition confirm the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: A monotonically error-bound improving technique (MERIT) is proposed to obtain the global optimum, or a local optimum, of UQP with good sub-optimality guarantees at reasonable computational cost.
Abstract: The NP-hard problem of optimizing a quadratic form over the unimodular vector set arises in radar code design scenarios as well as other active sensing and communication applications. To tackle this problem (which we call unimodular quadratic program (UQP)), several computational approaches are devised and studied. Power method-like iterations are introduced for local optimization of UQP. Furthermore, a monotonically error-bound improving technique (MERIT) is proposed to obtain the global optimum or a local optimum of UQP with good sub-optimality guarantees. The provided sub-optimality guarantees are case-dependent and may outperform the π/4 approximation guarantee of semi-definite relaxation. Several numerical examples are presented to illustrate the performance of the proposed method. The examples show that for several cases, including rank-deficient matrices, the proposed methods can solve UQPs efficiently in the sense of sub-optimality guarantee and computational time.
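The power method-like iteration mentioned above admits a compact sketch: for maximizing s^H A s over unimodular s (|s_n| = 1), replace s elementwise by the phase of A s; for a Hermitian positive semidefinite A the objective is provably nondecreasing. This is the plain local iteration, not MERIT, and the matrix below (built as M^H M from an arbitrary complex M) is an illustrative assumption.

```python
# Power method-like iteration for the unimodular quadratic program.

def matvec(A, s):
    return [sum(A[i][j] * s[j] for j in range(len(s))) for i in range(len(A))]

def uqp_iterate(A, s, iters=20):
    vals = []
    for _ in range(iters):
        As = matvec(A, s)
        # objective s^H A s (real-valued for Hermitian A)
        vals.append(sum((s[i].conjugate() * As[i]).real for i in range(len(s))))
        # unimodular projection: keep only the phase of each entry
        s = [As[i] / abs(As[i]) if abs(As[i]) > 1e-12 else s[i]
             for i in range(len(s))]
    return s, vals

M = [[1 + 0j, 2j, -1 + 0j], [0.5 + 0j, 1 + 0j, 1 + 1j]]
A = [[sum(M[k][i].conjugate() * M[k][j] for k in range(2)) for j in range(3)]
     for i in range(3)]                      # A = M^H M, Hermitian PSD
s0 = [1 + 0j, 1 + 0j, 1 + 0j]
s, vals = uqp_iterate(A, s0)
```

If A is indefinite, diagonal loading (adding a multiple of the identity) restores positive semidefiniteness without changing the maximizer, since s^H I s = n is constant on the unimodular set.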

Journal ArticleDOI
TL;DR: A novel strategy is established to design the robust controller for a class of continuous-time nonlinear systems with uncertainties based on the online policy iteration algorithm to solve the Hamilton-Jacobi-Bellman (HJB) equation by constructing a critic neural network.
Abstract: In this paper, a novel strategy is established to design the robust controller for a class of continuous-time nonlinear systems with uncertainties based on the online policy iteration algorithm. The robust control problem is transformed into the optimal control problem by properly choosing a cost function that reflects the uncertainties, regulation, and control. An online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman (HJB) equation by constructing a critic neural network. The approximate expression of the optimal control policy can be derived directly. The closed-loop system is proved to possess the uniform ultimate boundedness. The equivalence of the neural-network-based HJB solution of the optimal control problem and the solution of the robust control problem is established as well. Two simulation examples are provided to verify the effectiveness of the present robust control scheme.

Journal ArticleDOI
TL;DR: An ILC scheme with an iteration-average operator is introduced for tracking tasks with non-uniform trial lengths, which thus mitigates the requirement on classic ILC that all trial lengths must be identical.
Abstract: This technical note addresses an iterative learning control (ILC) design problem for discrete-time linear systems where the trial lengths could be randomly varying in the iteration domain. An ILC scheme with an iteration-average operator is introduced for tracking tasks with non-uniform trial lengths, which thus mitigates the requirement on classic ILC that all trial lengths must be identical. In addition, the identical initialization condition can be absolutely removed. The learning convergence condition of ILC in mathematical expectation is derived through rigorous analysis. As a result, the proposed ILC scheme is applicable to more practical systems. In the end, two illustrative examples are presented to demonstrate the performance and the effectiveness of the averaging ILC scheme for both time-invariant and time-varying linear systems.
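The classic fixed-trial-length baseline that the note generalizes can be sketched directly. This is a plain P-type ILC on a scalar plant x+ = 0.6*x + u with y = x, learning law u_{k+1}(t) = u_k(t) + gamma * e_k(t+1); the plant, reference, and gain gamma = 1 are illustrative assumptions (the paper's randomly varying trial lengths and iteration averaging are omitted).

```python
# P-type iterative learning control: repeat the same finite-horizon task,
# updating the input profile from the previous trial's tracking error.
import math

def run_trial(u, T):
    x, y = 0.0, [0.0] * (T + 1)
    for t in range(T):
        x = 0.6 * x + u[t]
        y[t + 1] = x
    return y

T = 8
ref = [0.0] + [math.sin(0.5 * t) for t in range(1, T + 1)]  # reference r(t)
u = [0.0] * T
for trial in range(12):
    y = run_trial(u, T)
    e = [ref[t] - y[t] for t in range(T + 1)]
    u = [u[t] + 1.0 * e[t + 1] for t in range(T)]  # ILC update, gamma = 1

final_error = max(abs(ref[t] - run_trial(u, T)[t]) for t in range(T + 1))
```

With gamma = 1 and unit input-to-output gain, the lifted error iteration matrix is strictly lower triangular (nilpotent), so the tracking error vanishes after at most T trials; the paper's contribution is keeping such guarantees, in expectation, when each trial may stop at a random length.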

Journal ArticleDOI
TL;DR: A simple iterative mean shift algorithm based on the cosine distance is proposed to perform speaker clustering under speaker diarization conditions; state-of-the-art results are reported, as measured by the Diarization Error Rate and the Number of Detected Speakers, on the LDC CallHome telephone corpus.
Abstract: Speaker clustering is a crucial step for speaker diarization. The short duration of speech segments in telephone speech dialogue and the absence of prior information on the number of clusters dramatically increase the difficulty of this problem in diarizing spontaneous telephone speech conversations. We propose a simple iterative mean shift algorithm based on the cosine distance to perform speaker clustering under these conditions. Two variants of the cosine distance mean shift are compared in an exhaustive practical study. We report state-of-the-art results, as measured by the Diarization Error Rate and the Number of Detected Speakers, on the LDC CallHome telephone corpus.
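The appeal of cosine-distance mean shift here is that the number of clusters emerges from the data: points drift to density modes, and the number of distinct modes estimates the number of speakers. Below is a toy version on unit vectors in 2D (speaker embeddings are unit-norm in the real system); the data and similarity threshold are illustrative assumptions.

```python
# Toy cosine-similarity mean shift on the unit circle: each point moves to
# the normalized mean of all points within a cosine-similarity threshold.
import math

def mean_shift_cosine(points, cos_thresh=0.88, iters=20):
    pts = list(points)
    for _ in range(iters):
        new_pts = []
        for p in pts:
            nb = [q for q in pts if p[0] * q[0] + p[1] * q[1] >= cos_thresh]
            mx = sum(q[0] for q in nb) / len(nb)
            my = sum(q[1] for q in nb) / len(nb)
            norm = math.hypot(mx, my)          # renormalize onto the sphere
            new_pts.append((mx / norm, my / norm))
        pts = new_pts
    return pts

angles = [0.0, 0.1, 0.2, 1.5, 1.6, 1.7]        # two angular clusters
data = [(math.cos(a), math.sin(a)) for a in angles]
modes = {(round(x, 3), round(y, 3)) for x, y in mean_shift_cosine(data)}
```

No cluster count is supplied anywhere: the two modes (and hence two "speakers") are discovered by the iteration itself.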

Journal ArticleDOI
TL;DR: This paper investigates energy efficient multicell multiuser precoding design and considers a new criterion of weighted sum energy efficiency, which is defined as the weighted sum of the energy efficiencies of multiple cells.
Abstract: Energy efficiency optimization of wireless systems has become urgently important due to its impact on the global carbon footprint. In this paper, we investigate energy-efficient multicell multiuser precoding design and consider a new criterion of weighted sum energy efficiency, which is defined as the weighted sum of the energy efficiencies of multiple cells. This objective is more general than the existing methods and can satisfy heterogeneous requirements from different kinds of cells, but it is hard to tackle due to its sum-of-ratio form. In order to address this nonconvex problem, the user rate is first formulated as a polynomial optimization problem with the test conditional probabilities to be optimized. Based on that, the sum-of-ratio form of the energy-efficient precoding problem is transformed into a parameterized polynomial-form optimization problem, through which a closed-form solution is achieved via a two-layer optimization. We also show that the proposed iterative algorithm is guaranteed to converge. Numerical results are finally provided to confirm the effectiveness of our energy-efficient beamforming algorithm. It is observed that in the low signal-to-noise ratio (SNR) region, the optimal energy efficiency and the optimal sum rate are simultaneously achieved by our algorithm, while in the middle-to-high SNR region, a certain loss in sum rate is incurred to guarantee the weighted sum energy efficiency.

Journal Article
TL;DR: The minimizer and its subspace are interpreted as robust versions of the empirical inverse covariance and the PCA subspace respectively and compared with many other algorithms for robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy.
Abstract: We study the basic problem of robust subspace recovery. That is, we assume a data set in which some points are sampled around a fixed subspace and the rest are spread in the whole ambient space, and we aim to recover the fixed underlying subspace. We first estimate a "robust inverse sample covariance" by solving a convex minimization procedure; we then recover the subspace via the bottom eigenvectors of this matrix (their number corresponds to the number of eigenvalues close to 0). We guarantee exact subspace recovery under some conditions on the underlying data. Furthermore, we propose a fast iterative algorithm, which linearly converges to the matrix minimizing the convex problem. We also quantify the effect of noise and regularization and discuss many other practical and theoretical issues for improving subspace recovery in various settings. When replacing the sum of terms in the convex energy function (that we minimize) with the sum of squares of terms, we find that the new minimizer is a scaled version of the inverse sample covariance (when it exists). We thus interpret our minimizer and its subspace (spanned by its bottom eigenvectors) as robust versions of the empirical inverse covariance and the PCA subspace, respectively. We compare our method with many other algorithms for robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy.

Journal ArticleDOI
TL;DR: In this paper, an extension to the standard iterative Boltzmann inversion (IBI) method was proposed to derive coarse-grained potentials, which is better suited to simulate systems over a range of thermodynamic states than the standard IBI method.
Abstract: In this work, an extension is proposed to the standard iterative Boltzmann inversion (IBI) method used to derive coarse-grained potentials. It is shown that the inclusion of target data from multiple states yields a less state-dependent potential, and is thus better suited to simulate systems over a range of thermodynamic states than the standard IBI method. The inclusion of target data from multiple states forces the algorithm to sample regions of potential phase space that match the radial distribution function at multiple state points, thus producing a derived potential that is more representative of the underlying interactions. It is shown that the algorithm is able to converge to the true potential for a system where the underlying potential is known. It is also shown that potentials derived via the proposed method better predict the behavior of n-alkane chains than those derived via the standard IBI method. Additionally, through the examination of alkane monolayers, it is shown that the relative weight given to each state in the fitting procedure can impact bulk system properties, allowing the potentials to be further tuned in order to match the properties of reference atomistic and/or experimental systems.
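The core IBI update is V_{n+1}(r) = V_n(r) + kB*T*ln(g_n(r)/g_target(r)), and the multistate extension above amounts to a weighted combination of per-state corrections. The sketch below shows only the update arithmetic on a tabulated potential; the grid, RDF values, and weights are illustrative assumptions, and no simulation is run.

```python
# Iterative Boltzmann inversion update step on a tabulated pair potential.
import math

def ibi_update(V, g_current, g_target, kBT=1.0):
    # Where the simulated RDF exceeds the target, make the potential more
    # repulsive at that separation; where it falls short, more attractive.
    return [v + kBT * math.log(gc / gt)
            for v, gc, gt in zip(V, g_current, g_target)]

def multistate_ibi_update(V, g_current_by_state, g_target_by_state,
                          weights, kBT_by_state):
    # Multistate variant: weighted sum of per-state corrections.
    corr = [sum(w * kBT * math.log(gc[i] / gt[i])
                for w, kBT, gc, gt in zip(weights, kBT_by_state,
                                          g_current_by_state,
                                          g_target_by_state))
            for i in range(len(V))]
    return [v + c for v, c in zip(V, corr)]

V = [1.0, 0.2, -0.1]                 # potential on a 3-point r grid
g_tgt = [0.1, 0.9, 1.2]              # target RDF values
g_cur = [0.2, 0.9, 1.0]              # RDF from the current potential
V1 = ibi_update(V, g_cur, g_tgt)
V2 = multistate_ibi_update(V, [g_cur, g_cur], [g_tgt, g_tgt],
                           weights=[0.5, 0.5], kBT_by_state=[1.0, 1.0])
```

With two identical states and weights summing to one, the multistate update reduces to the single-state one; unequal weights are what lets the fit be tuned toward particular thermodynamic states, as the abstract describes.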

Journal ArticleDOI
TL;DR: A new energy function based on an active contour model is proposed to segment water and land and is minimized with an iterative global optimization method; detections are then unified in a binary linear programming problem by utilizing context information.
Abstract: In this letter, we present a new method to detect inshore ships using shape and context information. We first propose a new energy function, based on an active contour model, to segment water and land, and minimize it with an iterative global optimization method. The proposed energy performs well on the different intensity distributions between water and land and produces a result well suited to shape and context analyses. In the segmented image, ships are detected with successive shape analysis, including shape analysis for localizing the ship head and region growing for computing the width and length of the ship. Finally, to locate ships accurately and remove false alarms, we unify the detections in a binary linear programming problem by utilizing the context information. Experiments on QuickBird images show the robustness and precision of our method.

Journal ArticleDOI
TL;DR: This paper describes lower bounds on communication in linear algebra, including bounds for Strassen-like algorithms and for iterative methods, in particular Krylov subspace methods applied to sparse matrices.
Abstract: The traditional metric for the efficiency of a numerical algorithm has been the number of arithmetic operations it performs. Technological trends have long been reducing the time to perform an arithmetic operation, so it is no longer the bottleneck in many algorithms; rather, communication, or moving data, is the bottleneck. This motivates us to seek algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. In this paper we summarize recent progress in three aspects of this problem. First we describe lower bounds on communication. Some of these generalize known lower bounds for dense classical (O(n3)) matrix multiplication to all direct methods of linear algebra, to sequential and parallel algorithms, and to dense and sparse matrices. We also present lower bounds for Strassen-like algorithms, and for iterative methods, in particular Krylov subspace methods applied to sparse matrices. Second, we compare these lower bounds to widely used versions of these algorithms, and note that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identify or invent new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrate large speed-ups in theory and practice.

Journal ArticleDOI
TL;DR: Three accelerated FFT-based schemes are compared, showing that two are special cases of the third for particular parameter choices; the scheme minimizing an upper bound on the spectral radius is exhibited, and the choice of the convergence test used in the schemes is discussed.
Abstract: Since Moulinec & Suquet (1994, 1998) introduced an iterative method based on Fourier transforms to compute the mechanical properties of heterogeneous materials, improved algorithms have been proposed to increase the convergence rate of the scheme. This paper is devoted to the comparison of the accelerated schemes proposed by Eyre & Milton (1999), by Michel et al (2000), and by Monchiet & Bonnet (2012). It shows that the algorithms of Eyre-Milton and of Michel et al are particular cases of the Monchiet-Bonnet algorithm, corresponding to particular choices of the method's parameters. An upper bound of the spectral radius of the schemes is determined, which makes it possible to state sufficient conditions for convergence. Conditions are found for minimizing this upper bound, and the scheme that minimizes it turns out to be that of Eyre & Milton. The paper also discusses the choice of the convergence test used in the schemes.
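The convergence analysis hinges on the spectral radius of the iteration operator. The same logic can be seen in a generic linear fixed-point (Richardson) iteration with a residual-based convergence test — a simplified stand-in for the FFT-based schemes, not the schemes themselves:

```python
import numpy as np

def richardson(A, b, omega, tol=1e-10, max_iter=10000):
    """Fixed-point iteration x <- x + omega * (b - A x).

    The iteration operator is (I - omega * A); the scheme converges iff
    its spectral radius is below 1, and the rate improves as that radius
    shrinks. Iteration stops on a relative-residual convergence test.
    """
    x = np.zeros_like(b)
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x = x + omega * r
    return x, max_iter
```

For a symmetric positive definite A, the choice omega = 2 / (lambda_min + lambda_max) minimizes the spectral radius, mirroring how the accelerated schemes above select parameters to minimize their spectral-radius bound.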

Journal ArticleDOI
TL;DR: An iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images, which shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
Abstract: Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated as least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition; these edge pixels are given small weights in the calculation of the regularization term. Distinct from existing denoising algorithms applied to the images before or after decomposition, the method uses an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method's performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom.
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
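The core update — penalized weighted least squares solved by conjugate gradient — can be sketched generically. The matrices below are toy stand-ins (W playing the role of the inverse variance-covariance penalty weight, D of the neighboring-pixel difference operator), not the authors' DECT system model:

```python
import numpy as np

def cg_solve(apply_H, g, n_iter=50, tol=1e-12):
    """Conjugate gradient for H x = g, with H supplied as a linear operator."""
    x = np.zeros_like(g)
    r = g - apply_H(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Penalized weighted least squares: min_x ||A x - y||_W^2 + lam * ||D x||^2,
# solved through its normal equations (A^T W A + lam D^T D) x = A^T W y.
A = np.array([[1.0, 0.3], [0.2, 1.0], [0.5, 0.5]])   # toy decomposition matrix
W = np.diag([1.0, 2.0, 1.5])                          # toy inverse-covariance weights
D = np.array([[1.0, -1.0]])                           # toy smoothness (difference) operator
lam = 0.1
y = np.array([1.0, 2.0, 1.5])
H = A.T @ W @ A + lam * D.T @ D
g = A.T @ W @ y
x_hat = cg_solve(lambda v: H @ v, g)
```

In the actual method the weight matrix is re-estimated each iteration from the decomposed images, and edge pixels receive reduced weight in the D term, which is what preserves boundary sharpness.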

Proceedings ArticleDOI
14 Dec 2014
TL;DR: A Robust Spectral learning framework for unsupervised Feature Selection (RSFS) is proposed, which jointly improves the robustness of graph embedding and sparse spectral regression and is closely connected to the robust Huber M-estimator.
Abstract: In this paper, we consider the problem of unsupervised feature selection. Recently, spectral feature selection algorithms, which leverage both the graph Laplacian and spectral regression, have received increasing attention. However, existing spectral feature selection algorithms suffer from two major problems: 1) since the graph Laplacian is constructed from the original feature space, noisy and irrelevant features may have an adverse effect on the estimated graph Laplacian and hence degrade the quality of the induced graph embedding; 2) since the cluster labels are discrete in nature, relaxing and approximating these labels into a continuous embedding inevitably introduces noise into the estimated cluster labels. Without accounting for this noise in the cluster labels, the feature selection process may be misguided. In this paper, we propose a Robust Spectral learning framework for unsupervised Feature Selection (RSFS), which jointly improves the robustness of graph embedding and sparse spectral regression. Compared with existing methods, which are sensitive to noisy features, our proposed method utilizes a robust local learning method to construct the graph Laplacian and a robust spectral regression method to handle the noise on the learned cluster labels. To solve the proposed optimization problem, an efficient iterative algorithm is proposed. We also show the close connection between the proposed robust spectral regression and the robust Huber M-estimator. Experimental results on different datasets show the superiority of RSFS.
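The connection to the Huber M-estimator can be illustrated with a generic robust regression solved by iteratively reweighted least squares (IRLS). This is an illustration of the estimator itself, not the RSFS algorithm:

```python
import numpy as np

def huber_weights(r, delta):
    """IRLS weights for the Huber loss: quadratic near 0, linear in the tails,
    so large residuals (outliers) are downweighted by delta / |r|."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def huber_regression(X, y, delta=1.0, n_iter=50):
    """Robust linear regression via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS warm start
    for _ in range(n_iter):
        w = huber_weights(y - X @ beta, delta)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta
```

The same reweighting principle is what lets a robust spectral regression tolerate noise in the relaxed cluster labels: samples whose residuals are large contribute less to the fit.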

Journal ArticleDOI
TL;DR: A method for joint dual-energy MBIR (JDE-MBIR), which simplifies the forward model while still accounting for the complete statistical dependency in the material-decomposed sinogram components and produces images that compare favorably in quality to previous decomposition-based methods.
Abstract: Dual-energy X-ray CT (DECT) has the potential to improve contrast and reduce artifacts as compared to traditional CT. Moreover, by applying model-based iterative reconstruction (MBIR) to dual-energy data, one might also expect to reduce noise and improve resolution. However, the direct implementation of dual-energy MBIR requires the use of a nonlinear forward model, which increases both complexity and computation. Alternatively, simplified forward models have been used which treat the material-decomposed channels separately, but these approaches do not fully account for the statistical dependencies in the channels. In this paper, we present a method for joint dual-energy MBIR (JDE-MBIR), which simplifies the forward model while still accounting for the complete statistical dependency in the material-decomposed sinogram components. The JDE-MBIR approach works by using a quadratic approximation to the polychromatic log-likelihood and a simple but exact nonnegativity constraint in the image domain. We demonstrate that our method is particularly effective when the DECT system uses fast kVp switching, since in this case the model accounts for the inaccuracy of interpolated sinogram entries. Both phantom and clinical results show that the proposed model produces images that compare favorably in quality to previous decomposition-based methods, including FBP and other statistical iterative approaches.
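The combination of a quadratic surrogate for the log-likelihood and an exact nonnegativity constraint reduces each update to a nonnegative quadratic program. A minimal projected-gradient sketch of such a subproblem (a generic solver, not the authors' optimizer):

```python
import numpy as np

def projected_gradient_nn(H, g, step, n_iter=500):
    """Minimize 0.5 x^T H x - g^T x subject to x >= 0 by projected gradient.

    H is symmetric positive definite (the quadratic surrogate's Hessian);
    each step moves against the gradient, then projects onto x >= 0.
    """
    x = np.zeros_like(g)
    for _ in range(n_iter):
        x = np.maximum(x - step * (H @ x - g), 0.0)
    return x
```

The projection max(., 0) is exactly the image-domain nonnegativity constraint; for convergence the step size should be below 2 / lambda_max(H).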

Journal ArticleDOI
TL;DR: In this paper, a two-tier precoding strategy for multi-cell massive MIMO interference networks is considered, with an outer precoder for inter-cell/inter-cluster interference cancellation and an inner precoder for intra-cell multiplexing.
Abstract: Massive MIMO is a promising technology for future wireless communication networks. However, it raises many implementation challenges, for example, the heavy pilot-symbol and feedback overhead, the requirement of real-time global CSI, the large number of RF chains needed, and high computational complexity. We consider a two-tier precoding strategy for multi-cell massive MIMO interference networks, with an outer precoder for inter-cell/inter-cluster interference cancellation and an inner precoder for intra-cell multiplexing. In particular, to address the computational complexity of the outer precoding, we propose a low complexity online iterative algorithm to track the outer precoder under time-varying channels. We formulate the problem as an optimization on the Grassmann manifold and develop a low complexity iterative algorithm that converges to the global optimal solution under static channels. For time-varying channels, we propose a compensation technique to offset the variation of the time-varying optimal solution. We show theoretically that, under some mild conditions, perfect tracking of the target outer precoder using the proposed algorithm is possible. Numerical results demonstrate that two-tier precoding with the proposed iterative compensation algorithm achieves good performance with a significant complexity reduction compared with conventional two-tier precoding techniques in the literature.
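Gradient iteration on the Grassmann manifold — project the Euclidean gradient onto the horizontal space, take a step, and retract back to orthonormal columns — can be sketched on the toy problem of finding a dominant subspace. This illustrates the manifold machinery only, not the paper's precoder update:

```python
import numpy as np

def grassmann_ascent(A, X, step=0.1, n_iter=500):
    """Riemannian gradient ascent on the Grassmann manifold.

    Maximizes trace(X^T A X) over n-by-k orthonormal X (A symmetric);
    the maximizer spans the dominant k-dimensional invariant subspace.
    """
    for _ in range(n_iter):
        G = A @ X                          # Euclidean gradient (up to a factor)
        G = G - X @ (X.T @ G)              # project onto the horizontal space
        X, _ = np.linalg.qr(X + step * G)  # retract back to the manifold (QR)
    return X
```

In an online tracking setting, the same projected step is applied per channel update, with a compensation term added to follow the drift of the time-varying optimum.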