
Showing papers on "Iterative method published in 2001"


Journal ArticleDOI
Abstract: We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.

6,200 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: An implementation is demonstrated that is able to align two range images in a few tens of milliseconds, assuming a good initial guess, and has potential application to real-time 3D model acquisition and model-based tracking.
Abstract: The ICP (Iterative Closest Point) algorithm is widely used for geometric alignment of three-dimensional models when an initial estimate of the relative pose is known. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. We enumerate and classify many of these variants, and evaluate their effect on the speed with which the correct alignment is reached. In order to improve convergence for nearly-flat meshes with small features, such as inscribed surfaces, we introduce a new variant based on uniform sampling of the space of normals. We conclude by proposing a combination of ICP variants optimized for high speed. We demonstrate an implementation that is able to align two range images in a few tens of milliseconds, assuming a good initial guess. This capability has potential application to real-time 3D model acquisition and model-based tracking.
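The point-to-point core shared by all these ICP variants can be sketched in a few lines. The toy version below (2-D points, brute-force nearest-neighbor matching, SVD-based rigid transform) is an illustration of the basic loop, not the paper's optimized sampling/matching pipeline:

```python
import numpy as np

def icp(src, dst, iters=25):
    """Minimal point-to-point ICP in 2-D: brute-force nearest-neighbor
    matching, then the closed-form (SVD/Kabsch) rigid transform for the
    current correspondences, repeated until the alignment settles."""
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # nearest-neighbor matching (O(N*M); k-d trees are used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # best rigid transform for the current correspondences
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_c).T @ (matched - mu_m))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_c
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# usage: recover a small known rigid motion, starting from the identity
# (i.e. a good initial guess, as the paper assumes)
rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(40, 2))
theta, t_true = 0.03, np.array([0.05, -0.05])
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
R_est, t_est = icp(pts, pts @ R_true.T + t_true)
```

The speed of the variants surveyed in the paper comes precisely from replacing each of these phases (sampling, matching, error metric, minimization) with faster alternatives.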

4,059 citations


Journal ArticleDOI
29 Jun 2001
TL;DR: An efficient numerical algorithm to compute the optimal input distribution that maximizes the sum capacity of a Gaussian multiple-access channel with vector inputs and a vector output is proposed.
Abstract: This paper proposes an efficient numerical algorithm to compute the optimal input distribution that maximizes the sum capacity of a Gaussian multiple-access channel with vector inputs and a vector output. The numerical algorithm has an iterative water-filling interpretation. The algorithm converges from any starting point, and it reaches within 1/2 nats per user per output dimension from the sum capacity after just one iteration. The characterization of sum capacity also allows an upper bound and a lower bound for the entire capacity region to be derived.
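The single-user water-filling step at the heart of such an algorithm, and a toy multi-user loop around it, can be sketched as follows. This uses scalar parallel channels rather than the paper's vector channels, and the function names are illustrative:

```python
import numpy as np

def waterfill(noise, power, tol=1e-12):
    """Water-filling power allocation over parallel channels:
    p_i = max(mu - noise_i, 0) with sum(p) = power, the water level mu
    found by bisection."""
    lo, hi = noise.min(), noise.max() + power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

def iterative_waterfill(noise, gains, powers, iters=50):
    """Each user in turn water-fills treating the other users' current
    allocations as additional noise (toy scalar-channel version)."""
    K, N = gains.shape
    p = np.zeros((K, N))
    for _ in range(iters):
        for k in range(K):
            interference = (gains * p).sum(0) - gains[k] * p[k]
            p[k] = waterfill((noise + interference) / gains[k], powers[k])
    return p

# usage: two users, three channels; each row of p sums to that user's budget
noise = np.array([1.0, 1.0, 1.0])
gains = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 1.0]])
p = iterative_waterfill(noise, gains, [1.0, 1.0])
```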

1,128 citations


Journal ArticleDOI
TL;DR: This paper presents a dual–primal formulation of the FETI-2 concept that eliminates the need for that second set of Lagrange multipliers, and unifies all previously developed one-level and two-level FETI algorithms into a single dual–primal FETI-DP method.
Abstract: The FETI method and its two-level extension (FETI-2) are two numerically scalable domain decomposition methods with Lagrange multipliers for the iterative solution of second-order solid mechanics and fourth-order beam, plate and shell structural problems, respectively.The FETI-2 method distinguishes itself from the basic or one-level FETI method by a second set of Lagrange multipliers that are introduced at the subdomain cross-points to enforce at each iteration the exact continuity of a subset of the displacement field at these specific locations. In this paper, we present a dual–primal formulation of the FETI-2 concept that eliminates the need for that second set of Lagrange multipliers, and unifies all previously developed one-level and two-level FETI algorithms into a single dual–primal FETI-DP method. We show that this new FETI-DP method is numerically scalable for both second-order and fourth-order problems. We also show that it is more robust and more computationally efficient than existing FETI solvers, particularly when the number of subdomains and/or processors is very large. Copyright © 2001 John Wiley & Sons, Ltd.

628 citations


Journal ArticleDOI
TL;DR: In this paper, the eXtended Finite Element Method (X-FEM) is used to discretize the equations, allowing for the modeling of cracks whose geometry is independent of the finite element mesh.

546 citations


Journal ArticleDOI
TL;DR: A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography prevents beam hardening artifacts by incorporating a polychromatic acquisition model and preliminary results indicate that metal artifact reduction is a very promising application.
Abstract: A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography is presented. The algorithm prevents beam hardening artifacts by incorporating a polychromatic acquisition model. The continuous spectrum of the X-ray tube is modeled as a number of discrete energies. The energy dependence of the attenuation is taken into account by decomposing the linear attenuation coefficient into a photoelectric component and a Compton scatter component. The relative weight of these components is constrained based on prior material assumptions. Excellent results are obtained for simulations and for phantom measurements. Beam-hardening artifacts are effectively eliminated. The relation with existing algorithms is discussed. The results confirm that improving the acquisition model assumed by the reconstruction algorithm results in reduced artifacts. Preliminary results indicate that metal artifact reduction is a very promising application for this new algorithm.

478 citations


Journal ArticleDOI
TL;DR: A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model and yields an efficient source location estimator without assuming a priori knowledge of noise distribution.
Abstract: A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model. The method, which can be easily implemented in a real-time system with moderate computational complexity, yields an efficient source location estimator without assuming a priori knowledge of noise distribution. Alternative existing estimators, including likelihood-based, spherical intersection, spherical interpolation, and quadratic-correction least-squares estimators, are reviewed and comparisons of their complexity, estimation consistency and efficiency against the Cramer-Rao lower bound are made. Numerical studies demonstrate that the proposed estimator performs better under many practical situations.

461 citations


Journal ArticleDOI
TL;DR: In this paper, an accelerated Stokesian Dynamics (ASD) algorithm was proposed to solve all hydrodynamic interactions in a viscous fluid at low particle Reynolds number with a significantly lower computational cost of O(N ln N).
Abstract: A new implementation of the conventional Stokesian Dynamics (SD) algorithm, called accelerated Stokesian Dynamics (ASD), is presented. The equations governing the motion of N particles suspended in a viscous fluid at low particle Reynolds number are solved accurately and efficiently, including all hydrodynamic interactions, but with a significantly lower computational cost of O(N ln N). The main differences from the conventional SD method lie in the calculation of the many-body long-range interactions, where the Ewald-summed wave-space contribution is calculated as a Fourier transform sum and in the iterative inversion of the now sparse resistance matrix. The new method is applied to problems in the rheology of both structured and random suspensions, and accurate results are obtained with much larger numbers of particles. With access to larger N, the high-frequency dynamic viscosities and short-time self-diffusivities of random suspensions for volume fractions above the freezing point are now studied. The ASD method opens up an entire new class of suspension problems that can be investigated, including particles of non-spherical shape and a distribution of sizes, and the method can readily be extended to other low-Reynolds-number-flow problems.

456 citations


Journal ArticleDOI
TL;DR: This work proposes efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method and extends to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters.
Abstract: Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward looking infrared (FLIR) camera image sequence.

442 citations


Journal ArticleDOI
TL;DR: A simple modification of iterative methods arising in numerical mathematics and optimization that makes them strongly convergent without additional assumptions is presented.
Abstract: We consider a wide class of iterative methods arising in numerical mathematics and optimization that are known to converge only weakly. Exploiting an idea originally proposed by Haugazeau, we present a simple modification of these methods that makes them strongly convergent without additional assumptions. Several applications are discussed.

428 citations


Journal ArticleDOI
TL;DR: The proposed source decoder utilizes iterative schemes, and performs well even when the correlation between the sources is not known in the decoder, since it can be estimated jointly with the iterative decoding process.
Abstract: We propose the use of punctured turbo codes for compression of correlated binary sources. Compression is achieved because of puncturing. The resulting performance is close to the theoretical limit provided by the Slepian-Wolf (1973) theorem. No information about the correlation between sources is required in the encoding process. The proposed source decoder utilizes iterative schemes, and performs well even when the correlation between the sources is not known in the decoder, since it can be estimated jointly with the iterative decoding process.

Journal ArticleDOI
TL;DR: Three different methods of phase retrieval from series of image measurements obtained at different defocus values are developed and compared, with an approximate solution to the transport of intensity equation (TIE) based on Fourier transforms using multigrid methods.

Journal ArticleDOI
TL;DR: It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.
Abstract: Recently the problem of determining the best, in the least-squares sense, rank-1 approximation to a higher-order tensor was studied and an iterative method that extends the well-known power method for matrices was proposed for its solution. This higher-order power method is also proposed for the special but important class of supersymmetric tensors, with no change. A simplified version, adapted to the special structure of the supersymmetric problem, is deemed unreliable, as its convergence is not guaranteed. The aim of this paper is to show that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications. The use of this version entails significant savings in computational complexity as compared to the unconstrained higher-order power method. Furthermore, a novel method for initializing the iterative process is developed which has been observed to yield an estimate that lies closer to the global optimum than the initialization suggested before. Moreover, its proximity to the global optimum is a priori quantifiable. In the course of the analysis, some important properties that the supersymmetry of a tensor implies for its square matrix unfolding are also studied.
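The symmetric higher-order power iteration the paper analyzes can be sketched as below for a supersymmetric 3-tensor; this is the plain iteration, without the convexity checks or the improved initialization the paper develops:

```python
import numpy as np

def shopm(T, x0, iters=100):
    """Symmetric higher-order power iteration for a supersymmetric
    3-tensor: x <- T(., x, x) normalized; the Rayleigh-type value
    lambda = T(x, x, x) gives the best rank-1 weight."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)
        x = y / np.linalg.norm(y)
    return np.einsum('ijk,i,j,k', T, x, x, x), x

# usage: a supersymmetric rank-1 tensor a (outer) a (outer) a should be
# recovered exactly, with lambda = ||a||^3
a = np.array([3.0, 2.0, 1.0])
T = np.einsum('i,j,k->ijk', a, a, a)
lam, x = shopm(T, np.random.default_rng(1).normal(size=3))
```

Note that for a 3-tensor the update involves the square of the inner product, so the iteration is drawn toward the dominant rank-1 component regardless of the starting sign.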

Journal ArticleDOI
01 Dec 2001-Calcolo
TL;DR: A variant of the classical weighted least-squares stabilization for the Stokes equations has improved accuracy properties, especially near boundaries, and is based on local projections of the residual terms which are used in order to achieve consistency of the method.
Abstract: We present a variant of the classical weighted least-squares stabilization for the Stokes equations. Compared to the original formulation, the new method has improved accuracy properties, especially near boundaries. Furthermore, no modification of the right-hand side is needed, and implementation is simplified, especially for generalizations to more complicated equations. The approach is based on local projections of the residual terms which are used in order to achieve consistency of the method, avoiding local evaluation of the strong form of the differential operator. We prove stability and give a priori and a posteriori error estimates. We show convergence of an iterative method which uses a simplified stabilized discretization as preconditioner. Numerical experiments indicate that the approach presented is at least as accurate as the original method, but offers some algorithmic advantages. The ideas presented here also apply to the Navier–Stokes equations. This is the topic of forthcoming work.

Journal ArticleDOI
TL;DR: The convergence of a penalty method for solving the discrete regularized American option valuation problem is studied and it is observed that an implicit treatment of the American constraint does not converge quadratically if constant timesteps are used.
Abstract: The convergence of a penalty method for solving the discrete regularized American option valuation problem is studied. Sufficient conditions are derived which both guarantee convergence of the nonlinear penalty iteration and ensure that the iterates converge monotonically to the solution. These conditions also ensure that the solution of the penalty problem is an approximate solution to the discrete linear complementarity problem. The efficiency and quality of solutions obtained using the implicit penalty method are compared with those produced with the commonly used technique of handling the American constraint explicitly. Convergence rates are studied as the timestep and mesh size tend to zero. It is observed that an implicit treatment of the American constraint does not converge quadratically (as the timestep is reduced) if constant timesteps are used. A timestep selector is suggested which restores quadratic convergence.
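The penalty iteration for a linear complementarity problem of the kind that arises here can be sketched on a toy obstacle problem. This is the generic scheme min(Mx - b, x - g) = 0, not the paper's specific American-option discretization or timestep selector:

```python
import numpy as np

def penalty_lcp(M, b, g, rho=1e6, max_iter=100):
    """Penalty iteration for the LCP  min(Mx - b, x - g) = 0:
    nodes currently violating x >= g get a large penalty rho pushing
    them back to the obstacle g, and the linear system is re-solved
    until the active set stops changing."""
    x = np.linalg.solve(M, b)                  # unconstrained start
    for _ in range(max_iter):
        active = (x < g).astype(float)
        x_new = np.linalg.solve(M + rho * np.diag(active),
                                b + rho * active * g)
        if np.array_equal(x_new < g, x < g):   # active set settled
            return x_new
        x = x_new
    return x

# usage: with M = I, b = 0 and obstacle g = (1, -1), the solution is
# x = max(0, g) componentwise
x = penalty_lcp(np.eye(2), np.zeros(2), np.array([1.0, -1.0]))
```

The penalty parameter rho controls how closely the constrained nodes track the obstacle: the solution sits within O(1/rho) of g on the active set, which is the sense in which the penalty solution approximates the LCP solution.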

Journal ArticleDOI
01 Mar 2001
TL;DR: The requirements for computational hardware and memory are analyzed, and suggestions for reduced-complexity decoding and reduced control logic are provided.
Abstract: VLSI implementation complexities of soft-input soft-output (SISO) decoders are discussed. These decoders are used in iterative algorithms based on Turbo codes or Low Density Parity Check (LDPC) codes, and promise significant bit error performance advantage over conventionally used partial-response maximum likelihood (PRML) systems, at the expense of increased complexity. This paper analyzes the requirements for computational hardware and memory, and provides suggestions for reduced-complexity decoding and reduced control logic. Serial concatenation of interleaved codes, using an outer block code with a partial response channel acting as an inner encoder, is of special interest for magnetic storage applications.

Journal ArticleDOI
TL;DR: In this paper, an iterative method for the extraction of velocity and angular distributions from two-dimensional (2D) ion/photoelectron imaging experiments is presented, which is based on the close relationship which exists between the initial 3D angular and velocity distribution and the measured 2d angular and radial distributions.
Abstract: We present an iterative method for the extraction of velocity and angular distributions from two-dimensional (2D) ion/photoelectron imaging experiments. This method is based on the close relationship which exists between the initial 3D angular and velocity distribution and the measured 2D angular and radial distributions, and gives significantly better results than other inversion procedures which are commonly used today. Particularly, the procedure gets rid of the center-line noise which is one of the main artifacts in many current ion/photoelectron imaging experiments.

Journal ArticleDOI
TL;DR: This work considers the application of the conjugate gradient method to the solution of large equality constrained quadratic programs arising in nonlinear optimization, and proposes iterative refinement techniques as well as an adaptive reformulation of thequadratic problem that can greatly reduce these errors without incurring high computational overheads.
Abstract: We consider the application of the conjugate gradient method to the solution of large equality constrained quadratic programs arising in nonlinear optimization. Our approach is based implicitly on a reduced linear system and generates iterates in the null space of the constraints. Instead of computing a basis for this null space, we choose to work directly with the matrix of constraint gradients, computing projections into the null space by either a normal equations or an augmented system approach. Unfortunately, in practice such projections can result in significant rounding errors. We propose iterative refinement techniques, as well as an adaptive reformulation of the quadratic problem, that can greatly reduce these errors without incurring high computational overheads. Numerical results illustrating the efficacy of the proposed approaches are presented.

Proceedings ArticleDOI
Sem Borst1, Philip Whiting
28 Feb 2001
TL;DR: It is shown that the 'best' user may be identified as the maximum-rate user when the feasible rates are weighed with some appropriately determined coefficients, and the optimal strategy may be viewed as a revenue-based policy.
Abstract: The relative delay tolerance of data applications, together with the bursty traffic characteristics, opens up the possibility for scheduling transmissions so as to optimize throughput. A particularly attractive approach, in fading environments, is to exploit the variations in the channel conditions, and transmit to the user with the currently 'best' channel. We show that the 'best' user may be identified as the maximum-rate user when the feasible rates are weighed with some appropriately determined coefficients. Interpreting the coefficients as shadow prices, or reward values, the optimal strategy may thus be viewed as a revenue-based policy. Calculating the optimal revenue vector directly is a formidable task, requiring detailed information on the channel statistics. Instead, we present adaptive algorithms for determining the optimal revenue vector on-line in an iterative fashion, without the need for explicit knowledge of the channel behavior. Starting from an arbitrary initial vector, the algorithms iteratively adjust the reward values to compensate for observed deviations from the target throughput ratios. The algorithms are validated through extensive numerical experiments. Besides verifying long-run convergence, we also examine the transient performance, in particular the rate of convergence to the optimal revenue vector. The results show that the target throughput ratios are tightly maintained, and that the algorithms are well able to track changes in the channel conditions or throughput targets.

Journal ArticleDOI
TL;DR: It is the early, nonasymptotic elements of the generated sequence of estimators that offer favorable bias covariance balance and are seen to outperform in mean-square estimation error, constraint-LMS, RLS-type, orthogonal multistage decomposition, as well as plain and diagonally loaded SMI estimates.
Abstract: Statistical conditional optimization criteria lead to the development of an iterative algorithm that starts from the matched filter (or constraint vector) and generates a sequence of filters that converges to the minimum-variance-distortionless-response (MVDR) solution for any positive definite input autocorrelation matrix. Computationally, the algorithm is a simple, noninvasive, recursive procedure that avoids any form of explicit autocorrelation matrix inversion, decomposition, or diagonalization. Theoretical analysis reveals basic properties of the algorithm and establishes formal convergence. When the input autocorrelation matrix is replaced by a conventional sample-average (positive definite) estimate, the algorithm effectively generates a sequence of MVDR filter estimators; the bias converges rapidly to zero and the covariance trace rises slowly and asymptotically to the covariance trace of the familiar sample-matrix-inversion (SMI) estimator. In fact, formal convergence of the estimator sequence to the SMI estimate is established. However, for short data records, it is the early, nonasymptotic elements of the generated sequence of estimators that offer favorable bias covariance balance and are seen to outperform in mean-square estimation error, constraint-LMS, RLS-type, orthogonal multistage decomposition, as well as plain and diagonally loaded SMI estimates. An illustrative interference suppression example is followed throughout this presentation.

Journal ArticleDOI
TL;DR: In this paper, a greedy randomized adaptive search procedure (GRASP) is applied to solve the transmission network expansion problem, and the best solution over all GRASP iterations is chosen as the result.
Abstract: A greedy randomized adaptive search procedure (GRASP) is a heuristic method that has been shown to be very powerful in solving combinatorial problems. In this paper we apply GRASP to the transmission network expansion problem. The procedure is an iterative sampling technique with two phases per iteration. The first, the construction phase, builds a feasible solution for the problem. The second, a local search, seeks improvements on the construction-phase solution. The best solution over all GRASP iterations is chosen as the result.
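The two-phase structure of GRASP can be sketched on a toy 0/1 knapsack problem; the encoding and moves below are illustrative, not the paper's network-expansion formulation:

```python
import random

def grasp_knapsack(values, weights, capacity, iters=20, alpha=0.3, seed=0):
    """GRASP skeleton: phase 1 builds a feasible solution with a
    randomized greedy over a restricted candidate list (RCL) of the
    best alpha-fraction by value/weight; phase 2 improves it by 1-for-1
    swaps; the best solution over all iterations is returned."""
    rng = random.Random(seed)
    n = len(values)
    best, best_val = set(), -1
    for _ in range(iters):
        # construction phase: randomized greedy by value/weight ratio
        chosen, cap = set(), capacity
        cand = [i for i in range(n) if weights[i] <= cap]
        while cand:
            cand.sort(key=lambda i: values[i] / weights[i], reverse=True)
            pick = rng.choice(cand[:max(1, int(alpha * len(cand)))])
            chosen.add(pick)
            cap -= weights[pick]
            cand = [j for j in range(n)
                    if j not in chosen and weights[j] <= cap]
        # local search phase: replace one chosen item with a better one
        improved = True
        while improved:
            improved = False
            slack = capacity - sum(weights[k] for k in chosen)
            for i in list(chosen):
                for j in range(n):
                    if (j not in chosen and values[j] > values[i]
                            and weights[j] - weights[i] <= slack):
                        chosen.remove(i)
                        chosen.add(j)
                        improved = True
                        break
                if improved:
                    break
        val = sum(values[k] for k in chosen)
        if val > best_val:
            best, best_val = set(chosen), val
    return best, best_val

# usage: items (value, weight) = (6,1), (10,2), (12,3), capacity 5;
# the optimum picks items 1 and 2 for a value of 22
best, best_val = grasp_knapsack([6, 10, 12], [1, 2, 3], 5)
```

The parameter alpha trades off greediness against diversification: alpha near 0 makes the construction purely greedy, while larger values widen the RCL and let the restarts explore more of the solution space.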

Proceedings ArticleDOI
22 Jun 2001
TL;DR: It is proved that a circuit with inductors can be simplified from MNA to NA format, and the matrix becomes an s.p.d matrix, which makes it suitable for the conjugate gradient with incomplete Cholesky decomposition as the preconditioner, which is faster than other direct and iterative methods.
Abstract: In this paper, we propose preconditioned Krylov-subspace iterative methods to perform efficient DC and transient simulations for large-scale linear circuits with an emphasis on power delivery circuits. We also prove that a circuit with inductors can be simplified from MNA to NA format, and the matrix becomes an s.p.d. matrix. This property makes it suitable for the conjugate gradient method with incomplete Cholesky decomposition as the preconditioner, which is faster than other direct and iterative methods. Extensive experimental results on large-scale industrial power grid circuits show that our method is over 200 times faster for DC analysis and around 10 times faster for transient simulation compared to SPICE3. Furthermore, our algorithm reduces memory usage by over 75% compared to SPICE3 without compromising accuracy.

Journal ArticleDOI
TL;DR: The main result of the paper is to prove that this iterative algorithm provides a controller which quadratically stabilizes the uncertain system with probability one in a finite number of steps.

Journal ArticleDOI
Di He1, Chen He1, Lingge Jiang1, Hong-Wen Zhu1, Guang-Rui Hu1 
TL;DR: In this article, a one-dimensional iterative chaotic map with infinite collapses within a symmetrical region was proposed, and the stability of fixed points and that around the singular point were analyzed.
Abstract: A one-dimensional iterative chaotic map with infinite collapses within the symmetrical region [-1, 0) ∪ (0, +1] is proposed. The stability of the fixed points and the behavior around the singular point are analyzed. The higher Lyapunov exponents of the proposed map indicate stronger chaotic characteristics than those of several iterative and continuous chaotic models in common use. Inverse bifurcation phenomena and special main periodic windows appear at certain positions in the bifurcation diagram, which helps explain the generation mechanism of the chaos. A chaotic model with good properties can be generated by choosing the parameter of the map properly. Strong inner pseudorandom characteristics are also observed through a χ² test under the hypothesis of a uniform distribution. This chaotic model may have many advantages in practical use.
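A map of this "infinite collapses" type can be iterated in a few lines. The form x ← sin(a/x) used below is a commonly cited representative of this family and is assumed here for illustration; the paper's exact map and parameterization are not reproduced:

```python
import math

def icmic_orbit(x0, a=2.0, n=1000):
    """Iterates x_{k+1} = sin(a / x_k), a map with infinite collapses:
    the singularity at x = 0 packs infinitely many oscillations into
    any neighborhood of the origin, and every iterate lands in
    [-1, 0) U (0, +1]."""
    xs = [x0]
    for _ in range(n):
        xs.append(math.sin(a / xs[-1]))
    return xs

# usage: a long orbit stays inside the symmetric region and wanders
# across both signs rather than settling down
orbit = icmic_orbit(0.3, a=2.0, n=2000)
```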

Journal ArticleDOI
TL;DR: In this paper, a self-consistent hybrid method is proposed for accurately simulating time-dependent quantum dynamics in complex systems, which is based on an iterative convergence procedure for a dynamical hybrid approach.
Abstract: An efficient method, the self-consistent hybrid method, is proposed for accurately simulating time-dependent quantum dynamics in complex systems. The method is based on an iterative convergence procedure for a dynamical hybrid approach. In this approach, the overall system is first partitioned into a “core” and a “reservoir” (an initial guess). The former is treated via an accurate quantum mechanical method, namely, the time-dependent multiconfiguration self-consistent field or multiconfiguration time-dependent Hartree approach, and the latter is treated via a more approximate method, e.g., classical mechanics, semiclassical initial value representations, quantum perturbation theories, etc. Next, the number of “core” degrees of freedom, as well as other variational parameters, is systematically increased to achieve numerical convergence for the overall quantum dynamics. The method is applied to two examples of quantum dissipative dynamics in the condensed phase: the spin-boson problem and the electronic resonance decay in the presence of a vibrational bath. It is demonstrated that the method provides a practical way of obtaining accurate quantum dynamical results for complex systems.

Journal ArticleDOI
TL;DR: An efficient inverse-scattering algorithm is developed to reconstruct both the permittivity and conductivity profiles of two-dimensional dielectric objects buried in a lossy earth using the distorted Born iterative method.
Abstract: An efficient inverse-scattering algorithm is developed to reconstruct both the permittivity and conductivity profiles of two-dimensional (2D) dielectric objects buried in a lossy earth using the distorted Born iterative method (DBIM). In this algorithm, the measurement data are collected on (or over) the air-earth interface for multiple transmitter and receiver locations at single frequency. The nonlinearity due to the multiple scattering of pixels to pixels, and pixels to the air-earth interface has been taken into account in the iterative minimization scheme. At each iteration, a conjugate gradient (CG) method is chosen to solve the linearized problem, which takes the calling number of the forward solver to a minimum. To reduce the CPU time, the forward solver for buried dielectric objects is implemented by the CG method and fast Fourier transform (FFT). Numerous numerical examples are given to show the convergence, stability, and error tolerance of the algorithm.

Journal ArticleDOI
TL;DR: Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method, however, numerical results also indicate that it is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties.
Abstract: Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c₀) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.

Journal ArticleDOI
TL;DR: An iterative Bayesian reconstruction algorithm for limited view angle tomography, or ectomography, based on the three-dimensional total variation (TV) norm has been developed and has been shown to improve the perceived image quality.
Abstract: An iterative Bayesian reconstruction algorithm for limited view angle tomography, or ectomography, based on the three-dimensional total variation (TV) norm has been developed. The TV norm has been described in the literature as a method for reducing noise in two-dimensional images while preserving edges, without introducing ringing or edge artefacts. It has also been proposed as a 2D regularization function in Bayesian reconstruction, implemented in an expectation maximization algorithm (TV-EM). The TV-EM was developed for 2D single photon emission computed tomography imaging, and the algorithm is capable of smoothing noise while maintaining edges without introducing artefacts. The TV norm was extended from 2D to 3D and incorporated into an ordered subsets expectation maximization algorithm for limited view angle geometry. The algorithm, called TV3D-EM, was evaluated using a modelled point spread function and digital phantoms. Reconstructed images were compared with those reconstructed with the 2D filtered backprojection algorithm currently used in ectomography. Results show a substantial reduction in artefacts related to the limited view angle geometry, and noise levels were also improved. Perhaps most important, depth resolution was improved by at least 45%. In conclusion, the proposed algorithm has been shown to improve the perceived image quality.

Journal ArticleDOI
TL;DR: An acceleration for Newton's method is given and a new third order method is obtained for solving non-linear equations in Banach spaces, establishing conditions on convergence, existence and uniqueness of solution, as well as error estimates.
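The TL;DR does not reproduce the paper's specific Banach-space scheme; as an illustration of how a Newton acceleration reaches third order, the standard two-step (Potra-Ptak-type) variant reuses the derivative at the intermediate point:

```python
def newton_two_step(f, fprime, x0, tol=1e-14, max_iter=50):
    """Two-step third-order Newton variant: y = x - f(x)/f'(x), then
    x+ = y - f(y)/f'(x). Reusing f'(x) gives cubic convergence for one
    extra function evaluation per step (illustrative scheme, not the
    paper's specific method)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = fprime(x)
        y = x - fx / d
        x = y - f(y) / d
    return x

# usage: sqrt(2) as the root of x^2 - 2, starting from x0 = 1
root = newton_two_step(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```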

Journal ArticleDOI
Liu Qihou1
TL;DR: Ghosh and Debnath as mentioned in this paper proved sufficient and necessary conditions for Ishikawa iterative sequences of asymptotically quasi-nonexpansive mappings to converge to fixed points.