
Showing papers on "Iterative method published in 2007"


Journal ArticleDOI
TL;DR: This paper introduces two-step IST (TwIST) algorithms, exhibiting much faster convergence than IST for ill-conditioned problems, and introduces a monotonic version of TwIST (MTwIST); although the convergence proof does not apply, the effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
Abstract: Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
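
A minimal sketch of the two-step update described above, assuming an ℓ1 regularizer with soft-thresholding as the shrinkage step; the two-step parameters alpha and beta, which the paper derives from spectral bounds on the observation operator, are left as caller-supplied inputs here.

import numpy as np

def soft(x, tau):
    # Soft-thresholding: proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def twist(y, A, tau, alpha, beta, n_iter=200):
    # Two-step IST sketch for min 0.5*||y - A x||^2 + tau*||x||_1.
    x_prev = np.zeros(A.shape[1])
    x = soft(A.T @ y, tau)                               # one plain IST step to initialize
    for _ in range(n_iter):
        ist_step = soft(x + A.T @ (y - A @ x), tau)      # ordinary IST update
        x_new = (1 - alpha) * x_prev + (alpha - beta) * x + beta * ist_step
        x_prev, x = x, x_new
    return x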

1,870 citations


Journal ArticleDOI
TL;DR: It is shown that the leading methods for estimating the inter-study variance are special cases of a general method-of-moments estimate of the inter-study variance, and two new two-step methods are suggested.

1,787 citations


Journal ArticleDOI
01 Nov 2007
TL;DR: The iterative learning control (ILC) literature published between 1998 and 2004 is categorized and discussed, extending the earlier reviews presented by two of the authors.
Abstract: In this paper, the iterative learning control (ILC) literature published between 1998 and 2004 is categorized and discussed, extending the earlier reviews presented by two of the authors. The paper includes a general introduction to ILC and a technical description of the methodology. The selected results are reviewed, and the ILC literature is categorized into subcategories within the broader division of application-focused and theory-focused results.

1,417 citations


Journal ArticleDOI
TL;DR: An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented; it enables better separation quality than the previous algorithms.
Abstract: An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an input signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain. Each sound source, in turn, is modeled as a sum of one or more components. The parameters of the components are estimated by minimizing the reconstruction error between the input spectrogram and the model, while restricting the component spectrograms to be nonnegative and favoring components whose gains are slowly varying and sparse. Temporal continuity is favored by using a cost term which is the sum of squared differences between the gains in adjacent frames, and sparseness is favored by penalizing nonzero gains. The proposed iterative estimation algorithm is initialized with random values, and the gains and the spectra are then alternatively updated using multiplicative update rules until the values converge. Simulation experiments were carried out using generated mixtures of pitched musical instrument samples and drum sounds. The performance of the proposed method was compared with independent subspace analysis and basic nonnegative matrix factorization, which are based on the same linear model. According to these simulations, the proposed method enables a better separation quality than the previous algorithms. In particular, the temporal continuity criterion improved the detection of pitched musical sounds. The sparseness criterion did not produce significant improvements.
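
As a bare-bones illustration of the estimation loop described above, here is a sketch using the standard multiplicative updates for a Euclidean reconstruction cost; the temporal-continuity and sparseness penalties on the gains, which are central to the paper, are omitted for brevity.

import numpy as np

def nmf_spectrogram(X, n_components, n_iter=200, eps=1e-12, seed=0):
    # Factorize a magnitude spectrogram X (freq x frames) as X ~= B @ G,
    # B holding fixed spectra and G the time-varying gains.
    rng = np.random.default_rng(seed)
    B = rng.random((X.shape[0], n_components))
    G = rng.random((n_components, X.shape[1]))
    for _ in range(n_iter):
        G *= (B.T @ X) / (B.T @ B @ G + eps)   # multiplicative update of the gains
        B *= (X @ G.T) / (B @ G @ G.T + eps)   # multiplicative update of the spectra
    return B, G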

1,096 citations


Proceedings ArticleDOI
20 Jun 2007
TL;DR: A simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines that alternates between stochastic gradient descent steps and projection steps that can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function.
Abstract: We describe and analyze a simple and effective iterative algorithm for solving the optimization problem cast by Support Vector Machines (SVM). Our method alternates between stochastic gradient descent steps and projection steps. We prove that the number of iterations required to obtain a solution of accuracy ε is O(1/ε). In contrast, previous analyses of stochastic gradient descent methods require Ω(1/ε²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is O(d/(λε)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach can seamlessly be adapted to employ non-linear kernels while working solely on the primal objective function. We demonstrate the efficiency and applicability of our approach by conducting experiments on large text classification problems, comparing our solver to existing state-of-the-art SVM solvers. For example, it takes less than 5 seconds for our solver to converge when solving a text classification problem from Reuters Corpus Volume 1 (RCV1) with 800,000 training examples.
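
A hedged sketch of the stochastic gradient descent / projection scheme described above, for the linear-kernel case with one example per step and no bias term.

import numpy as np

def pegasos_style_svm(X, y, lam, n_iter=10000, seed=0):
    # X: (n, d) examples, y: labels in {-1, +1}, lam: SVM regularization parameter.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)                      # step size 1/(lambda * t)
        if y[i] * (X[i] @ w) < 1.0:                # hinge loss is active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w
        radius = 1.0 / np.sqrt(lam)                # projection onto the ball of radius 1/sqrt(lambda)
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm
    return w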

985 citations


Journal ArticleDOI
TL;DR: This communication describes version 4.0 of Regularization Tools, a Matlab package for analysis and solution of discrete ill-posed problems, which is expanded with several new iterative methods, as well as new test problems and new parameter-choice methods.
Abstract: This communication describes version 4.0 of Regularization Tools, a Matlab package for analysis and solution of discrete ill-posed problems. The new version allows for under-determined problems, and it is expanded with several new iterative methods, as well as new test problems and new parameter-choice methods.

734 citations


Journal ArticleDOI
TL;DR: New extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery and three regularization schemes are described.
Abstract: This paper describes new extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery. Much like boosting methods often applied in data mining work, the iteratively reweighted (IR) MAD method in a series of iterations places increasing focus on "difficult" observations, here observations whose change status over time is uncertain. The MAD method is based on the established technique of canonical correlation analysis: for the multivariate data acquired at two points in time and covering the same geographical region, we calculate the canonical variates and subtract them from each other. These orthogonal differences contain maximum information on joint change in all variables (spectral bands). The change detected in this fashion is invariant to separate linear (affine) transformations in the originally measured variables at the two points in time, such as 1) changes in gain and offset in the measuring device used to acquire the data, 2) data normalization or calibration schemes that are linear (affine) in the gray values of the original variables, or 3) orthogonal or other affine transformations, such as principal component (PC) or maximum autocorrelation factor (MAF) transformations. The IR-MAD method first calculates ordinary canonical and original MAD variates. In the following iterations we apply different weights to the observations, large weights being assigned to observations that show little change, i.e., for which the sum of squared, standardized MAD variates is small, and small weights being assigned to observations for which the sum is large. Like the original MAD method, the iterative extension is invariant to linear (affine) transformations of the original variables. To stabilize solutions to the (IR-)MAD problem, some form of regularization may be needed. This is especially useful for work on hyperspectral data. This paper describes ordinary two-set canonical correlation analysis, the MAD transformation, the iterative extension, and three regularization schemes. A simple case with real Landsat Thematic Mapper (TM) data at one point in time and (partly) constructed data at the other point in time that demonstrates the superiority of the iterative scheme over the original MAD method is shown. Also, examples with SPOT High Resolution Visible data from an agricultural region in Kenya, and hyperspectral airborne HyMap data from a small rural area in southeastern Germany are given. The latter case demonstrates the need for regularization

595 citations


Journal ArticleDOI
TL;DR: Combettes and Hirstoaga as mentioned in this paper introduced an iterative scheme by the viscosity approximation method for finding a common element of the set of solutions of an equilibrium problem and the set of fixed points of a nonexpansive mapping in a Hilbert space.

594 citations


Proceedings ArticleDOI
10 Jun 2007
TL;DR: This paper proposes iterative context-bounding, a new search algorithm that systematically explores the executions of a multithreaded program in an order that prioritizes executions with fewer context switches, and shows both theoretically and empirically that context-bounded search is an effective method for exploring the behaviors of multith readed programs.
Abstract: Multithreaded programs are difficult to get right because of unexpected interaction between concurrently executing threads. Traditional testing methods are inadequate for catching subtle concurrency errors which manifest themselves late in the development cycle and post-deployment. Model checking or systematic exploration of program behavior is a promising alternative to traditional testing methods. However, it is difficult to perform systematic search on large programs as the number of possible program behaviors grows exponentially with the program size. Confronted with this state-explosion problem, traditional model checkers perform iterative depth-bounded search. Although effective for message-passing software, iterative depth-bounding is inadequate for multithreaded software. This paper proposes iterative context-bounding, a new search algorithm that systematically explores the executions of a multithreaded program in an order that prioritizes executions with fewer context switches. We distinguish between preempting and nonpreempting context switches, and show that bounding the number of preempting context switches to a small number significantly alleviates the state explosion, without limiting the depth of explored executions. We show both theoretically and empirically that context-bounded search is an effective method for exploring the behaviors of multithreaded programs. We have implemented our algorithm in two model checkers and applied it to a number of real-world multithreaded programs. Our implementation uncovered 9 previously unknown bugs in our benchmarks, each of which was exposed by an execution with at most 2 preempting context switches. Our initial experience with the technique is encouraging and demonstrates that iterative context-bounding is a significant improvement over existing techniques for testing multithreaded programs.

489 citations


Journal ArticleDOI
TL;DR: In this paper, iterative projection algorithms are successfully used as a substitute for lenses to recombine, numerically rather than optically, light scattered by illuminated objects, allowing aberration-free diffraction-limited imaging and the possibility of using radiation for which no lenses exist.
Abstract: Iterative projection algorithms are successfully being used as a substitute for lenses to recombine, numerically rather than optically, light scattered by illuminated objects. Images obtained computationally allow aberration-free diffraction-limited imaging and the possibility of using radiation for which no lenses exist. The challenge of this imaging technique is transferred from the lenses to the algorithms. We evaluate these new computational "instruments" developed for the phase-retrieval problem, and discuss acceleration strategies.
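
As a concrete illustration of such iterative projection algorithms, here is a sketch of the simplest error-reduction variant, alternating a Fourier-magnitude projection with a support/positivity projection; the hybrid input-output variants and the acceleration strategies discussed in the paper are not shown.

import numpy as np

def error_reduction(measured_magnitude, support, n_iter=500, seed=0):
    # measured_magnitude: measured Fourier modulus (2D array);
    # support: boolean mask of the object support in real space.
    rng = np.random.default_rng(seed)
    x = rng.random(measured_magnitude.shape) * support
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        F = measured_magnitude * np.exp(1j * np.angle(F))   # impose measured Fourier magnitudes
        x = np.real(np.fft.ifft2(F))
        x = np.where(support & (x > 0), x, 0.0)             # impose support and positivity
    return x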

479 citations


Posted Content
TL;DR: A randomized version of the Kaczmarz method for consistent, overdetermined linear systems and it is proved that it converges with expected exponential rate and is the first solver whose rate does not depend on the number of equations in the system.
Abstract: The Kaczmarz method for solving linear systems of equations is an iterative algorithm that has found many applications ranging from computer tomography to digital signal processing. Despite the popularity of this method, useful theoretical estimates for its rate of convergence are still scarce. We introduce a randomized version of the Kaczmarz method for consistent, overdetermined linear systems and we prove that it converges with expected exponential rate. Furthermore, this is the first solver whose rate does not depend on the number of equations in the system. The solver does not even need to know the whole system, but only a small random part of it. It thus outperforms all previously known methods on general extremely overdetermined systems. Even for moderately overdetermined systems, numerical simulations as well as theoretical analysis reveal that our algorithm can converge faster than the celebrated conjugate gradient algorithm. Furthermore, our theory and numerical simulations confirm a prediction of Feichtinger et al. in the context of reconstructing bandlimited functions from nonuniform sampling.
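
A short sketch of the randomized update described above, sampling rows with probability proportional to their squared norms.

import numpy as np

def randomized_kaczmarz(A, b, n_iter=10000, seed=0):
    # Solve a consistent, overdetermined system A x = b.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)                           # row picked with prob ~ ||a_i||^2
        x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]   # project onto the i-th hyperplane
    return x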

Journal ArticleDOI
TL;DR: This work compares the performance of eight optimization methods: gradient descent (with two different step size selection algorithms), quasi-Newton, nonlinear conjugate gradient, Kiefer-Wolfowitz, simultaneous perturbation, Robbins-Monro, and evolution strategy, and shows that the Robbins-Monro method is the best choice in most applications.
Abstract: A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic B-splines. The coordinate mapping that relates the two images is found using an iterative optimization procedure. This work compares the performance of eight optimization methods: gradient descent (with two different step size selection algorithms), quasi-Newton, nonlinear conjugate gradient, Kiefer-Wolfowitz, simultaneous perturbation, Robbins-Monro, and evolution strategy. Special attention is paid to computation time reduction by using fewer voxels to calculate the cost function and its derivatives. The optimization methods are tested on manually deformed CT images of the heart, on follow-up CT chest scans, and on MR scans of the prostate acquired using a BFFE, T1, and T2 protocol. Registration accuracy is assessed by computing the overlap of segmented edges. Precision and convergence properties are studied by comparing deformation fields. The results show that the Robbins-Monro method is the best choice in most applications. With this approach, the computation time per iteration can be lowered approximately 500 times without affecting the rate of convergence by using a small subset of the image, randomly selected in every iteration, to compute the derivative of the mutual information. From the other methods, the quasi-Newton and the nonlinear conjugate gradient method achieve a slightly higher precision, at the price of larger computation times.
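
A generic sketch of a Robbins-Monro style descent with a decaying gain and a fresh random voxel subset in every iteration; grad_on_subset and the gain-sequence constants are placeholders, not the paper's exact registration cost or settings.

import numpy as np

def robbins_monro(grad_on_subset, mu0, n_voxels, n_iter=500,
                  a=1.0, A=20.0, alpha=0.602, subset_size=2000, seed=0):
    # grad_on_subset(mu, idx): stochastic derivative of the cost (e.g. negative
    # mutual information) w.r.t. the transform parameters mu, evaluated on the
    # voxel subset idx (user supplied).
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu0, dtype=float)
    for k in range(n_iter):
        gain = a / (k + 1 + A) ** alpha                          # decaying step size
        idx = rng.choice(n_voxels, size=subset_size, replace=False)
        mu = mu - gain * grad_on_subset(mu, idx)                 # stochastic descent step
    return mu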

Journal ArticleDOI
TL;DR: An algorithmic process to help retailers compute the best assortment for each store and establishes new structural properties that relate the products included in the assortment and their inventory levels to product characteristics such as gross margin, case-pack sizes, and demand variability.
Abstract: Assortment planning at a retailer entails both selecting the set of products to be carried and setting inventory levels for each product. We study an assortment planning model in which consumers might accept substitutes when their favorite product is unavailable. We develop an algorithmic process to help retailers compute the best assortment for each store. First, we present a procedure for estimating the parameters of substitution behavior and demand for products in each store, including the products that have not been previously carried in that store. Second, we propose an iterative optimization heuristic for solving the assortment planning problem. In a computational study, we find that its solutions, on average, are within 0.5% of the optimal solution. Third, we establish new structural properties (based on the heuristic solution) that relate the products included in the assortment and their inventory levels to product characteristics such as gross margin, case-pack sizes, and demand variability. We applied our method at Albert Heijn, a supermarket chain in The Netherlands. Comparing the recommendations of our system with the existing assortments suggests a more than 50% increase in profits.

Journal ArticleDOI
TL;DR: In this paper, the problem of finding the layer resistivities and thicknesses that best fit the observed data has been studied in the context of geophysical inverse problems, in which the partial derivatives of the (predicted) data with respect to the (unknown) model parameters can be calculated.
Abstract: Interpretation of earth electrical measurements can often be assisted by inversion, which is a non-linear model-fitting problem in these cases. Iterative methods are normally used, and the solution is defined by 'best fit' in the sense of generalized least-squares. The inverse problems we describe are ill-posed. That is, small changes in the data can lead to large changes in both the solution and in the iterative process that finds the solution. Through an analysis of the problem, based on local linearization, we define a class of methods that stabilize the iteration, and provide a robust solution. These methods are seen as generalizations of the well-known Singular Value Truncation and Marquardt Methods of iterative inversion. Here, and in a companion paper, we give examples illustrating the successful application of the method to ill-posed problems relating to the resistivity of the Earth. In this paper we present an analysis of the solution to a number of geophysical inverse problems. We also provide a reference for the companion paper (Joint Inversion of Geophysical Data, Vozoff & Jupp 1975), where the results are applied to some specific examples. Solutions to geophysical inverse problems are generally non-unique (Backus & Gilbert 1967, 1968, 1970), and it is usual to reduce the non-uniqueness by restricting the complexity of the Earth models. The mathematical problem that arises is commonly ill-posed (unstable) in the sense that small changes in the data lead to large changes in the solution. The solution methods must take careful account of this inherent problem. In the companion paper, and in the example given in Section 3, we have data in the form of apparent resistivity measurements for both magnetotelluric (MT) and Direct Current (DC) survey methods. The restricted class of earth models consists of horizontally layered, isotropic media, with constant resistivity in each layer. The simplified inverse problem is, in this case, to find the layer resistivities and thicknesses that best fit the observed data. The analysis of the problem is not, however, restricted to layered models, but applies to any geophysical inverse problem in which the partial derivatives of the (predicted) data with respect to the (unknown) model parameters can be calculated.
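
A sketch of a damped (Marquardt-style) iteration of the kind analysed above; a fixed damping factor stands in for the singular-value truncation/damping developed in the paper, and forward/jacobian are user-supplied model routines.

import numpy as np

def damped_least_squares(forward, jacobian, d_obs, m0, damping=1e-2, n_iter=20):
    # forward(m): predicted data for model m; jacobian(m): partial derivatives
    # of the predicted data with respect to the model parameters.
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - forward(m)                     # data residual
        J = jacobian(m)
        H = J.T @ J + damping * np.eye(m.size)     # damped normal equations
        m = m + np.linalg.solve(H, J.T @ r)
    return m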

Journal ArticleDOI
TL;DR: A new iterative algorithm for obtaining optimal holograms targeted to the generation of arbitrary three-dimensional structures of optical traps is proposed, leading to unprecedented efficiency and uniformity in trap light distributions.
Abstract: We propose a new iterative algorithm for obtaining optimal holograms targeted to the generation of arbitrary three-dimensional structures of optical traps. The algorithm's basic idea and performance are discussed in comparison with other available algorithms. We show that all algorithms lead to a phase distribution maximizing a specific performance quantifier, expressed as a function of the trap intensities. In this scheme we go a step further by introducing a new quantifier and the associated algorithm leading to unprecedented efficiency and uniformity in trap light distributions. The algorithms' performances are investigated both numerically and experimentally.

Journal ArticleDOI
TL;DR: Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed.
Abstract: Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters. Copyright © 2006 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Shaping regularization as discussed by the authors is a general method for imposing constraints by explicit mapping of the estimated model to the space of admissible models, which is integrated in a conjugate-gradient algorithm for iterative least-squares estimation.
Abstract: Regularization is a required component of geophysical-estimation problems that operate with insufficient data. The goal of regularization is to impose additional constraints on the estimated model. I introduce shaping regularization, a general method for imposing constraints by explicit mapping of the estimated model to the space of admissible models. Shaping regularization is integrated in a conjugate-gradient algorithm for iterative least-squares estimation. It provides the advantage of better control on the estimated model in comparison with traditional regularization methods and, in some cases, leads to a faster iterative convergence. Simple data interpolation and seismic-velocity estimation examples illustrate the concept.

Journal ArticleDOI
Yijun Sun
TL;DR: This paper proposes an iterative RELIEF (I-RELIEF) algorithm to alleviate the deficiencies of RELIEF by exploring the framework of the expectation-maximization algorithm.
Abstract: RELIEF is considered one of the most successful algorithms for assessing the quality of features. In this paper, we propose a set of new feature weighting algorithms that perform significantly better than RELIEF, without introducing a large increase in computational complexity. Our work starts from a mathematical interpretation of the seemingly heuristic RELIEF algorithm as an online method solving a convex optimization problem with a margin-based objective function. This interpretation explains the success of RELIEF in real applications and enables us to identify and address its following weaknesses: RELIEF makes an implicit assumption that the nearest neighbors found in the original feature space are the ones in the weighted space, and RELIEF lacks a mechanism to deal with outlier data. We propose an iterative RELIEF (I-RELIEF) algorithm to alleviate the deficiencies of RELIEF by exploring the framework of the expectation-maximization algorithm. We extend I-RELIEF to multiclass settings by using a new multiclass margin definition. To reduce computational costs, an online learning algorithm is also developed. Convergence analysis of the proposed algorithms is presented. The results of large-scale experiments on the UCI and microarray data sets are reported, which demonstrate the effectiveness of the proposed algorithms and verify the presented theoretical results.
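
A simplified sketch of an I-RELIEF-style reweighting loop; the exponential kernel and the normalisation used here are common choices and may differ in detail from the paper's formulation.

import numpy as np

def i_relief(X, y, sigma=1.0, n_iter=30):
    # X: (n, d) data, y: class labels. Nearest hits/misses are treated
    # probabilistically under the current weighted distance, and the weights
    # are the positive part of the averaged margin vector, renormalised.
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n, d = X.shape
    w = np.ones(d) / np.sqrt(d)
    for _ in range(n_iter):
        margin = np.zeros(d)
        for i in range(n):
            diff = np.abs(X - X[i])                  # feature-wise distances to all samples
            dist = diff @ w                          # weighted (l1-style) distances
            hits = y == y[i]
            hits[i] = False
            miss = y != y[i]
            p_hit = np.exp(-dist[hits] / sigma)
            p_hit /= p_hit.sum() + 1e-12
            p_miss = np.exp(-dist[miss] / sigma)
            p_miss /= p_miss.sum() + 1e-12
            margin += p_miss @ diff[miss] - p_hit @ diff[hits]
        w = np.maximum(margin, 0.0)
        w /= np.linalg.norm(w) + 1e-12
    return w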

Journal ArticleDOI
TL;DR: A new projection method to solve large-scale continuous-time Lyapunov matrix equations, which projects the problem onto a space generated as a combination of Krylov subspaces in $A$ and $A^{-1}$ and solves the reduced problem by a direct scheme based on matrix factorizations.
Abstract: In this paper we propose a new projection method to solve large-scale continuous-time Lyapunov matrix equations. The new approach projects the problem onto a much smaller approximation space, generated as a combination of Krylov subspaces in $A$ and $A^{-1}$. The reduced problem is then solved by means of a direct Lyapunov scheme based on matrix factorizations. The reported numerical results show the competitiveness of the new method, compared to a state-of-the-art approach based on the factorized alternating direction implicit iteration.

Proceedings ArticleDOI
17 Jun 2007
TL;DR: An efficient iterative procedure is presented to directly solve the Trace Ratio problem; convergence of the projection matrix $W$, as well as the global optimum of the trace ratio value $\lambda$, are proven based on point-to-set map theories.
Abstract: A large family of algorithms for dimensionality reduction ends with solving a Trace Ratio problem of the form $\arg\max_W \mathrm{Tr}(W^T S_P W)/\mathrm{Tr}(W^T S_I W)$, which is generally transformed into the corresponding Ratio Trace form $\arg\max_W \mathrm{Tr}[(W^T S_I W)^{-1}(W^T S_P W)]$ for obtaining a closed-form but inexact solution. In this work, an efficient iterative procedure is presented to directly solve the Trace Ratio problem. In each step, a Trace Difference problem $\arg\max_W \mathrm{Tr}[W^T (S_P - \lambda S_I) W]$ is solved with $\lambda$ being the trace ratio value computed from the previous step. Convergence of the projection matrix $W$, as well as the global optimum of the trace ratio value $\lambda$, are proven based on point-to-set map theories. In addition, this procedure is further extended for solving trace ratio problems with the more general constraint $W^T C W = I$ and for providing exact solutions to kernel-based subspace learning problems. Extensive experiments on face and UCI data demonstrate the high convergence speed of the proposed solution, as well as its superiority in classification capability over corresponding solutions to the ratio trace problem.
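
A compact sketch of the iterative procedure described above, assuming symmetric $S_P$, $S_I$ and an orthonormality constraint on $W$.

import numpy as np

def trace_ratio(Sp, Si, dim, n_iter=50):
    # Maximize Tr(W^T Sp W) / Tr(W^T Si W) over orthonormal W (d x dim).
    d = Sp.shape[0]
    W = np.eye(d)[:, :dim]                                # any orthonormal start
    for _ in range(n_iter):
        lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Si @ W)
        vals, vecs = np.linalg.eigh(Sp - lam * Si)        # trace-difference subproblem
        W = vecs[:, np.argsort(vals)[::-1][:dim]]         # top-'dim' eigenvectors
    lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Si @ W)
    return W, lam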

Journal ArticleDOI
TL;DR: These methods involve two iteration parameters whose special choices can recover the known preconditioned HSS iteration methods, as well as yield new ones, and show that the new methods converge unconditionally to the unique solution of the saddle-point problem.
Abstract: We establish a class of accelerated Hermitian and skew-Hermitian splitting (AHSS) iteration methods for large sparse saddle-point problems by making use of the Hermitian and skew-Hermitian splitting (HSS) iteration technique. These methods involve two iteration parameters whose special choices can recover the known preconditioned HSS iteration methods, as well as yield new ones. Theoretical analyses show that the new methods converge unconditionally to the unique solution of the saddle-point problem. Moreover, the optimal choices of the iteration parameters involved and the corresponding asymptotic convergence rates of the new methods are computed exactly. In addition, theoretical properties of the preconditioned Krylov subspace methods such as GMRES are investigated in detail when the AHSS iterations are employed as their preconditioners. Numerical experiments confirm the correctness of the theory and the effectiveness of the methods.
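
For reference, a sketch of the basic one-parameter HSS iteration that these accelerated (two-parameter, saddle-point specific) methods build on.

import numpy as np

def hss_iteration(A, b, alpha, n_iter=100):
    # Split A into its Hermitian part H and skew-Hermitian part S and
    # alternate the two half-step solves of the HSS scheme.
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)
    S = 0.5 * (A - A.conj().T)
    I = np.eye(n)
    x = np.zeros(n, dtype=A.dtype)
    for _ in range(n_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
    return x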

Journal ArticleDOI
TL;DR: In this paper, higher-order Non-Uniform Rational B-Splines (NURBS) are used for non-linear elasticity and plasticity analysis. But they are not suitable for the case of large deformation.

Journal ArticleDOI
TL;DR: In this article, two approaches for solving the corresponding nonlinear eigenvalue problem are proposed, one based on an asymptotic expansion of the solution, the baseline being the acoustic modes and frequencies for a steady (or passive) flame and appropriate boundary conditions.
Abstract: Two approaches for solving the corresponding nonlinear eigenvalue problem are proposed. The first one is based on an asymptotic expansion of the solution, the baseline being the acoustic modes and frequencies for a steady (or passive) flame and appropriate boundary conditions. This method allows a quick assessment of any acoustic mode stability but is valid only for cases where the coupling between the flame and the acoustic waves is small in amplitude. The second approach is based on an iterative algorithm where a quadratic eigenvalue problem is solved at each subiteration. It is more central processing unit demanding but remains valid even in cases where the response of the flame to acoustic perturbations is large. Frequency-dependent boundary impedances are accounted for in both cases. A parallel implementation of the Arnoldi iterative method is used to solve the large eigenvalue problem that arises from the space discretization of the Helmholtz equation. Several academic and industrial test cases are considered to illustrate the potential of the method.

Proceedings ArticleDOI
20 Jun 2007
TL;DR: This paper proposes an algorithm for solving the MKL problem through an adaptive 2-norm regularization formulation and provides a new insight on MKL algorithms based on block 1-norm regularization by showing that the two approaches are equivalent.
Abstract: An efficient and general multiple kernel learning (MKL) algorithm has been recently proposed by Sonnenburg et al. (2006). This approach has opened new perspectives since it makes the MKL approach tractable for large-scale problems, by iteratively using existing support vector machine code. However, it turns out that this iterative algorithm needs several iterations before converging towards a reasonable solution. In this paper, we address the MKL problem through an adaptive 2-norm regularization formulation. Weights on each kernel matrix are included in the standard SVM empirical risk minimization problem with an ℓ1 constraint to encourage sparsity. We propose an algorithm for solving this problem and provide a new insight on MKL algorithms based on block 1-norm regularization by showing that the two approaches are equivalent. Experimental results show that the resulting algorithm converges rapidly and its efficiency compares favorably to other MKL algorithms.

Journal ArticleDOI
TL;DR: In this paper, an iterative waterfilling power allocation algorithm for Gaussian interference channels is investigated; the system is formulated as a non-cooperative game in which, based on the measured interference powers, users maximize their own throughput by iteratively adjusting their power allocations.
Abstract: Iterative waterfilling power allocation for Gaussian interference channels is investigated. The system is formulated as a non-cooperative game. Based on the measured interference powers, the users maximize their own throughput by iteratively adjusting their power allocations. The Nash equilibrium in this game is a fixed point of such an iterative algorithm. Both synchronous and asynchronous power updates are considered. Some sufficient conditions under which the algorithm converges to the Nash equilibrium are derived.
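
A small sketch of the synchronous variant described above: each user in turn waterfills its power budget against the interference-plus-noise it currently measures. The arrays G, sigma2 and P are hypothetical inputs encoding channel gains, noise powers and power budgets.

import numpy as np

def waterfill(noise, P, tol=1e-9):
    # Single-user waterfilling: p_n = max(0, mu - noise_n) with sum(p) = P,
    # the water level mu found by bisection.
    lo, hi = noise.min(), noise.max() + P
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

def iterative_waterfilling(G, sigma2, P, n_rounds=50):
    # G[k, j, n]: power gain from transmitter j to receiver k on subchannel n;
    # sigma2[k, n]: noise power; P[k]: power budget of user k.
    K, _, N = G.shape
    p = np.zeros((K, N))
    for _ in range(n_rounds):
        for k in range(K):
            interference = sigma2[k] + sum(G[k, j] * p[j] for j in range(K) if j != k)
            p[k] = waterfill(interference / G[k, k], P[k])
    # a fixed point of these updates is a Nash equilibrium of the game
    return p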

Journal ArticleDOI
TL;DR: In this paper, the authors introduce two iterative sequences for finding a common element of the set of fixed points of a nonexpansive mapping and a set of solutions of an equilibrium problem in a Hilbert space.
Abstract: In this paper, we introduce two iterative sequences for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of an equilibrium problem in a Hilbert space. Then, we show that one of the sequences converges strongly and the other converges weakly.

Journal ArticleDOI
TL;DR: This paper discusses the modeling and control of a class of networked control systems (NCSs) with packet dropouts, and sufficient conditions for the exponential stability of the closed-loop NCS are presented in terms of nonlinear matrix inequalities.
Abstract: In this paper, we discuss the modeling and control of a class of networked control systems (NCSs) with packet dropouts. For the cases that there may be packet dropouts in both the backward and the forward channels in the communication network, and that the network-induced delays are shorter than one sampling period, the closed-loop NCS is modeled as a discrete-time switched system with four subsystems. By using the asynchronous dynamical systems approach and the average dwell-time method, sufficient conditions for the exponential stability of the closed-loop NCS are presented in terms of nonlinear matrix inequalities, and the relation between the packet dropout rate and the stability of the closed-loop NCS is explicitly established. A procedure involving an iterative algorithm is proposed to design the observer-based output feedback controllers. Lastly, an illustrative example is given to demonstrate the effectiveness of the proposed results.

Journal ArticleDOI
TL;DR: IFISS is a graphical Matlab package for the interactive numerical study of incompressible flow problems that includes algorithms for discretization by mixed finite element methods and a posteriori error estimation of the computed solutions.
Abstract: IFISS is a graphical Matlab package for the interactive numerical study of incompressible flow problems. It includes algorithms for discretization by mixed finite element methods and a posteriori error estimation of the computed solutions. The package can also be used as a computational laboratory for experimenting with state-of-the-art preconditioned iterative solvers for the discrete linear equation systems that arise in incompressible flow modelling. A unique feature of the package is its comprehensive nature; for each problem addressed, it enables the study of both discretization and iterative solution algorithms as well as the interaction between the two and the resulting effect on overall efficiency.

Journal ArticleDOI
Anthony Nouy
TL;DR: A new robust technique for solving stochastic partial differential equations that generalizes the classical spectral decomposition, namely the Karhunen-Loeve expansion, and enables the construction of a relevant reduced basis of deterministic functions which can be efficiently reused for subsequent resolutions.

Book ChapterDOI
01 Jan 2007
TL;DR: In this chapter, the authors classify three-dimensional reconstruction methods into three groups: Fourier reconstruction methods, modified back-projection methods, and iterative direct-space methods, where the second group includes convolution back-projection as well as weighted back-projection.
Abstract: Traditionally, three-dimensional reconstruction methods have been classified into two major groups, Fourier reconstruction methods and direct methods (e.g., Crowther et al., 1970; Gilbert, 1972). Fourier methods are defined as algorithms that restore the Fourier transform of the object from the Fourier transforms of the projections and then obtain the real-space distribution of the object by inverse Fourier transformation. Included in this group are also equivalent reconstruction schemes that use expansions of object and projections into orthogonal function systems (e.g., Cormack, 1963, 1964; Smith et al., 1973; Zeitler, Chapter 4). In contrast, direct methods are defined as those that carry out all calculations in real space. These include the convolution back-projection algorithms (Bracewell and Riddle, 1967; Ramachandran and Lakshminarayanan, 1971; Gilbert, 1972) and iterative algorithms (Gordon et al., 1970; Colsher, 1977). Weighted back-projection methods are difficult to classify in this scheme, since they are equivalent to convolution back-projection algorithms, but work on the real-space data as well as the Fourier transform data of either the object or the projections. Both convolution back-projection and weighted back-projection algorithms are based on the same theory as Fourier reconstruction methods, whereas iterative methods normally do not take into account the Fourier relations between object transform and projection transforms. Thus, it seems justified to classify the reconstruction algorithms into three groups: Fourier reconstruction methods, modified back-projection methods, and iterative direct space methods, where the second group includes convolution backprojection as well as weighted back-projection methods.