
Showing papers on "Robustness (computer science) published in 2000"


Journal ArticleDOI
27 Jul 2000-Nature
TL;DR: It is found that scale-free networks, which include the World-Wide Web, the Internet, social networks and cells, display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates.
Abstract: Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3,4,5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
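As a rough illustration of the error/attack contrast described above, here is a minimal simulation sketch, assuming Python with networkx and using the Barabási-Albert generator as a stand-in for a scale-free topology: nodes are removed either uniformly at random or in decreasing order of degree, and the relative size of the largest connected component is tracked.

```python
# Sketch: random failures vs. targeted attacks on a scale-free graph.
# The BA model stands in for the scale-free networks discussed above.
import random
import networkx as nx

def giant_fraction(G):
    """Fraction of surviving nodes in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def remove_and_measure(G, fraction, targeted=False):
    G = G.copy()
    k = int(fraction * G.number_of_nodes())
    if targeted:   # attack: delete the highest-degree hubs first
        victims = [n for n, _ in sorted(G.degree, key=lambda nd: nd[1],
                                        reverse=True)[:k]]
    else:          # error: delete uniformly random nodes
        victims = random.sample(list(G.nodes), k)
    G.remove_nodes_from(victims)
    return giant_fraction(G)

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=0)
for f in (0.01, 0.05, 0.20):
    print(f"remove {f:.0%}: random -> {remove_and_measure(G, f):.3f}, "
          f"attack -> {remove_and_measure(G, f, targeted=True):.3f}")
```

Random removals barely dent the giant component, while degree-targeted removals of the same size fragment it quickly.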

7,697 citations


Journal ArticleDOI
TL;DR: An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described, which applies sequential quadratic programming techniques to a sequence of barrier problems; the paper focuses on the primal version of the new algorithm.
Abstract: An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented.
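As a hedged illustration of the barrier idea only (not the paper's trust-region SQP method), the following sketch solves a toy inequality-constrained problem by minimizing a sequence of log-barrier subproblems with a decreasing barrier parameter; the objective and constraints are invented for the example.

```python
# Sketch of the barrier strategy: solve a sequence of subproblems
# min f(x) - mu * sum(log c_i(x)) with mu -> 0.
# Toy problem: min (x0-2)^2 + (x1-1)^2  s.t.  x0 + x1 <= 2, x0 >= 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def constraints(x):            # c_i(x) > 0 on the interior
    return np.array([2.0 - x[0] - x[1], x[0]])

def barrier(x, mu):
    c = constraints(x)
    if np.any(c <= 0):         # outside the interior: reject
        return np.inf
    return f(x) - mu * np.sum(np.log(c))

x = np.array([0.5, 0.5])       # strictly feasible start
for mu in (1.0, 0.1, 0.01, 0.001):
    x = minimize(lambda z: barrier(z, mu), x, method="Nelder-Mead").x
print(x)   # approaches the constrained minimizer (1.5, 0.5) on x0 + x1 = 2
```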

1,514 citations


Journal ArticleDOI
TL;DR: It is shown that cumulant-based classification is particularly effective when used in a hierarchical scheme, enabling separation into subclasses at low signal-to-noise ratio with small sample size.
Abstract: A simple method, based on elementary fourth-order cumulants, is proposed for the classification of digital modulation schemes. These statistics are natural in this setting as they characterize the shape of the distribution of the noisy baseband I and Q samples. It is shown that cumulant-based classification is particularly effective when used in a hierarchical scheme, enabling separation into subclasses at low signal-to-noise ratio with small sample size. Thus, the method can be used as a preliminary classifier if desired. Computational complexity is order N, where N is the number of complex baseband data samples. This method is robust in the presence of carrier phase and frequency offsets and can be implemented recursively. Theoretical arguments are verified via extensive simulations and comparisons with existing approaches.
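A minimal sketch of the sample statistics involved, assuming the standard cumulant estimators C20 = E[y^2], C21 = E[|y|^2], C42 = E[|y|^4] - |C20|^2 - 2*C21^2 (the hierarchical decision tree and its thresholds are omitted):

```python
# Sketch: elementary fourth-order cumulant feature of complex baseband data.
# The normalized C42 is scale-invariant and costs O(N) to compute.
import numpy as np

def c42_normalized(y):
    c20 = np.mean(y ** 2)
    c21 = np.mean(np.abs(y) ** 2)
    c42 = np.mean(np.abs(y) ** 4) - np.abs(c20) ** 2 - 2 * c21 ** 2
    return c42 / c21 ** 2

rng = np.random.default_rng(0)
n = 4096
bpsk = rng.choice([-1.0, 1.0], n) + 0j
qpsk = rng.choice([1, -1, 1j, -1j], n)
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
snr = 2.0   # linear signal-to-noise power ratio
for name, s in (("BPSK", bpsk), ("QPSK", qpsk)):
    y = np.sqrt(snr) * s + noise
    print(name, round(float(c42_normalized(y).real), 3))
```

In the noise-free, unit-power case the normalized C42 is -2 for BPSK and -1 for QPSK, which is what makes the feature discriminative; Gaussian noise has zero fourth-order cumulant, so it only shrinks the normalization rather than shifting the feature.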

974 citations


Journal ArticleDOI
TL;DR: A new algorithm based on polar maps is detailed for the accurate and efficient recovery of the template in an image which has undergone a general affine transformation and results are presented which demonstrate the robustness of the method against some common image processing operations.
Abstract: Digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. This paper describes a method for the secure and robust copyright protection of digital images. We present an approach for embedding a digital watermark into an image using the Fourier transform. To this watermark is added a template in the Fourier transform domain to render the method robust against general linear transformations. We detail a new algorithm based on polar maps for the accurate and efficient recovery of the template in an image which has undergone a general affine transformation. We also present results which demonstrate the robustness of the method against some common image processing operations such as compression, rotation, scaling, and aspect ratio changes.
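The full scheme (template embedding plus polar-map recovery of affine parameters) is too long to reproduce, but the core Fourier magnitude-domain embedding step can be sketched as follows; the frequency band and strength are illustrative values, not the paper's.

```python
# Sketch: embed a pseudo-random watermark in mid-frequency FFT magnitudes.
# Illustrative only: the template mechanism and log-polar recovery of
# affine transformations described above are not shown.
import numpy as np

def embed(img, key=42, strength=2.0, r_lo=0.25, r_hi=0.40):
    F = np.fft.fftshift(np.fft.fft2(img))
    mag, phase = np.abs(F), np.angle(F)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    ring = (r > r_lo) & (r < r_hi)                  # mid-frequency band
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=mag.shape)
    mag[ring] *= 1.0 + strength * 0.05 * mark[ring]  # multiplicative mark
    F_marked = mag * np.exp(1j * phase)
    # the asymmetric mark leaves a small imaginary residue; keep the real part
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_marked)))

img = np.random.default_rng(1).random((256, 256))
marked = embed(img)
print(np.max(np.abs(marked - img)))   # small perturbation
```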

585 citations


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that error tolerance is not shared by all redundant systems, but it is displayed only by a class of inhomogeneously wired networks, called scale-free networks.
Abstract: Many complex systems, such as communication networks, display a surprising degree of robustness: while key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. In this paper we demonstrate that error tolerance is not shared by all redundant systems, but it is displayed only by a class of inhomogeneously wired networks, called scale-free networks. We find that scale-free networks, describing a number of systems, such as the World Wide Web, Internet, social networks or a cell, display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected by even unrealistically high failure rates. However, error tolerance comes at a high price: these networks are extremely vulnerable to attacks, i.e. to the selection and removal of a few nodes that play the most important role in assuring the network's connectivity.

483 citations


Journal ArticleDOI
TL;DR: In this article, problems of optimizing observer-based fault detection (FD) systems in the sense of increasing the robustness to the unknown inputs and simultaneously enhancing the sensitivity to the faults are studied.
Abstract: In this paper, problems of optimizing observer-based fault detection (FD) systems in the sense of increasing the robustness to the unknown inputs and simultaneously enhancing the sensitivity to the faults are studied. The core of the study is the development of an approach that simultaneously solves four optimization problems. Different algorithms are derived for the application of this approach to the optimal selection of post-filters as well as optimization of fault detection filters, and to the systems with and without structure constraints. The achieved results also reveal some interesting relationships among the optimization problems considered.

423 citations


Journal ArticleDOI
TL;DR: In this paper, a sparse formulation for the solution of unbalanced three-phase power systems using the Newton-Raphson method is presented, where the Jacobian matrix is composed of 6×6 block matrices and retains the same structure as the nodal admittance matrix.
Abstract: This paper presents a new sparse formulation for the solution of unbalanced three-phase power systems using the Newton-Raphson method. The three-phase current injection equations are written in rectangular coordinates resulting in an order 6n system of equations. The Jacobian matrix is composed of 6×6 block matrices and retains the same structure as the nodal admittance matrix. Practical distribution systems were used to test the method and to compare its robustness with that of the backward/forward sweep method.
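The numerical core of such a formulation is a Newton-Raphson iteration whose sparse Jacobian is factorized at each step. A generic sketch with scipy.sparse follows; the toy mismatch function F below is a stand-in, not the order-6n rectangular-coordinate current-injection equations.

```python
# Generic sketch of a sparse Newton-Raphson solve, the numerical core of
# the current-injection method (the toy F stands in for the mismatch
# equations; its Jacobian is sparse, like the 6x6-block Jacobian above).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def newton_sparse(F, J, x0, tol=1e-10, max_iter=20):
    x = x0.copy()
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f, np.inf) < tol:
            break
        x -= spsolve(J(x).tocsc(), f)     # sparse LU solve of J dx = F(x)
    return x

# Toy nonlinear system F_i = x_i^3 - x_{i-1} - 1 with a sparse Jacobian.
def F(x):
    return x ** 3 - np.roll(x, 1) - 1.0

def J(x):
    n = len(x)
    diag = sp.diags(3 * x ** 2)
    sub = sp.csr_matrix(np.roll(np.eye(n), 1, axis=0))  # entries at (i, i-1)
    return (diag - sub).tocsr()

x = newton_sparse(F, J, np.ones(100))
print(np.linalg.norm(F(x)))   # ~0: quadratic convergence near the solution
```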

411 citations


Journal ArticleDOI
TL;DR: An approach for detecting vehicles in urban traffic scenes by means of rule-based reasoning on visual data and the synergy between the artificial intelligence techniques of the high-level and the low-level image analysis techniques provides the system with flexibility and robustness.
Abstract: The paper presents an approach for detecting vehicles in urban traffic scenes by means of rule-based reasoning on visual data. The strength of the approach is its formal separation between the low-level image processing modules and the high-level module, which provides a general-purpose knowledge-based framework for tracking vehicles in the scene. The image-processing modules extract visual data from the scene by spatio-temporal analysis during daytime, and by morphological analysis of headlights at night. The high-level module is designed as a forward chaining production rule system, working on symbolic data, i.e., vehicles and their attributes (area, pattern, direction, and others) and exploiting a set of heuristic rules tuned to urban traffic conditions. The synergy between the artificial intelligence techniques of the high-level and the low-level image analysis techniques provides the system with flexibility and robustness.
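A toy sketch of forward chaining over symbolic vehicle data (the facts and the two rules below are invented for illustration, not the paper's rule base): rules fire repeatedly until no rule adds a new conclusion.

```python
# Toy forward-chaining production system over symbolic object candidates.
# Facts are attribute dictionaries; rules fire until a fixpoint is reached.
candidates = [
    {"id": 1, "area": 900, "direction": "north", "pattern": "headlights"},
    {"id": 2, "area": 40,  "direction": "south", "pattern": "blob"},
]

RULES = [  # (name, condition, conclusions) -- illustrative only
    ("large moving region -> vehicle",
     lambda f: f["area"] > 500, {"label": "vehicle"}),
    ("headlight pair at night -> vehicle",
     lambda f: f["pattern"] == "headlights", {"label": "vehicle", "night": True}),
]

def forward_chain(facts, rules):
    changed = True
    while changed:                 # iterate to a fixpoint
        changed = False
        for fact in facts:
            for name, cond, conclusions in rules:
                if cond(fact) and not all(fact.get(k) == v
                                          for k, v in conclusions.items()):
                    fact.update(conclusions)
                    changed = True
    return facts

for f in forward_chain(candidates, RULES):
    print(f)
```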

396 citations


Proceedings ArticleDOI
01 Jan 2000
TL;DR: A rigorous optimization formulation of the learning process is presented, and explicit solutions, which are both optimal and fast to compute, are derived by using Lagrange multipliers.
Abstract: Combining learning with vision techniques in interactive image retrieval has been an active research topic during the past few years. However, existing learning techniques either are based on heuristics or fail to analyze the working conditions. Furthermore, there is almost no in-depth study on how to effectively learn from the users when there are multiple visual features in the retrieval system. To address these limitations, in this paper we present a rigorous optimization formulation of the learning process and solve the problem in a principled way. By using Lagrange multipliers, we have derived explicit solutions, which are both optimal and fast to compute. Extensive comparisons against state-of-the-art techniques have been performed. Experiments were carried out on a large-size heterogeneous image collection consisting of 17,000 images. Retrieval performance was tested under a wide range of conditions. Various evaluation criteria, including precision-recall curve and rank measure, have demonstrated the effectiveness and robustness of the proposed technique.
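The closed-form flavor of such solutions can be sketched as follows, using the well-known result that under a weighted quadratic distance the optimal query is the relevance-weighted mean of the positive examples and the optimal feature weights vary inversely with the weighted variances. This is a stand-in under those assumptions, not necessarily the exact solution derived in the paper.

```python
# Sketch: closed-form relevance-feedback update under a weighted quadratic
# distance (a stand-in for the paper's Lagrange-multiplier solution).
import numpy as np

def update_query(X, pi):
    """Optimal query = relevance-weighted mean of the positive examples."""
    pi = np.asarray(pi, float)
    return (pi[:, None] * X).sum(axis=0) / pi.sum()

def update_weights(X, pi, q):
    """Feature weights inversely proportional to the weighted variance."""
    pi = np.asarray(pi, float)
    var = (pi[:, None] * (X - q) ** 2).sum(axis=0) / pi.sum()
    w = 1.0 / np.maximum(var, 1e-12)
    return w / w.sum()                  # normalize the weights

X = np.array([[0.9, 0.1], [1.1, 0.3], [1.0, 0.9]])  # positive examples
pi = [1.0, 1.0, 0.5]                                 # user relevance scores
q = update_query(X, pi)
w = update_weights(X, pi, q)
print(q, w)   # feature 0 is consistent across examples -> larger weight
```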

395 citations


Journal ArticleDOI
TL;DR: From the experimental results, it is concluded that global motion estimation provides significant performance gains for video material with camera zoom and/or pan and that the robust error criterion can introduce additional performance gains without increasing computational complexity.
Abstract: In this paper, we propose an efficient, robust, and fast method for the estimation of global motion from image sequences. The method is generic in that it can accommodate various global motion models, from a simple translation to an eight-parameter perspective model. The algorithm is hierarchical and consists of three stages. In the first stage, a low-pass image pyramid is built. Then, an initial translation is estimated with full-pixel precision at the top of the pyramid using a modified n-step search matching. In the third stage, a gradient descent is executed at each level of the pyramid starting from the initial translation at the coarsest level. Due to the coarse initial estimation and the hierarchical implementation, the method is very fast. To increase robustness to outliers, we replace the usual formulation based on a quadratic error criterion with a truncated quadratic function. We have applied the algorithm to various test sequences within an MPEG-4 coding system. From the experimental results we conclude that global motion estimation provides significant performance gains for video material with camera zoom and/or pan. The gains result from a reduced prediction error and a more compact representation of motion. We also conclude that the robust error criterion can introduce additional performance gains without increasing computational complexity.
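A stripped-down sketch of the robust criterion (translation-only, integer full search; the pyramid, the gradient descent stage, and the higher-order motion models are omitted): the truncated quadratic caps each pixel's contribution, so foreground objects moving against the global motion cannot dominate the estimate.

```python
# Sketch: translation-only global motion estimation with a truncated
# quadratic error criterion.
import numpy as np

def robust_error(a, b, t2=400.0):
    e2 = (a.astype(float) - b.astype(float)) ** 2
    return np.minimum(e2, t2).mean()      # truncated quadratic

def estimate_translation(prev, curr, radius=4):
    best = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
            err = robust_error(prev, shifted)
            if err < best[1]:
                best = ((dy, dx), err)
    return best[0]

rng = np.random.default_rng(0)
frame = rng.random((64, 64)) * 255
moved = np.roll(np.roll(frame, 2, axis=0), 3, axis=1)   # global motion (2, 3)
moved[20:30, 20:30] = 255                                # outlier foreground
print(estimate_translation(frame, moved))                # -> (2, 3)
```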

388 citations


Journal ArticleDOI
TL;DR: In this paper, a stereo-based segmentation and neural network-based pedestrian detection algorithm is proposed for detecting pedestrians in a cluttered scene from a pair of moving cameras, which includes three steps.
Abstract: Pedestrian detection is essential to avoid dangerous traffic situations. We present a fast and robust algorithm for detecting pedestrians in a cluttered scene from a pair of moving cameras. This is achieved through stereo-based segmentation and neural network-based recognition. The algorithm includes three steps. First, we segment the image into sub-image object candidates using disparity discontinuities. Second, we merge and split the sub-image object candidates into sub-images that satisfy pedestrian size and shape constraints. Third, we use intensity gradients of the candidate sub-images as input to a trained neural network for pedestrian recognition. The experiments on a large number of urban street scenes demonstrate that the proposed algorithm: (1) can detect pedestrians in various poses, shapes, sizes, clothing, and occlusion status; (2) runs in real-time; and (3) is robust to illumination and background changes.

Book ChapterDOI
11 Oct 2000
TL;DR: This approach, besides its simplicity, provides a robust and efficient way to rigidly register images in various situations, and can easily be implemented on a parallel architecture, which opens possibilities for real-time applications using a large number of processors.
Abstract: In order to improve the robustness of rigid registration algorithms in various medical imaging problems, we propose in this article a general framework built on block matching strategies. This framework combines two stages in a multi-scale hierarchy. The first stage consists in finding, for each block (or subregion) of the first image, the most similar subregion in the other image, using a similarity criterion which depends on the nature of the images. The second stage consists in finding the global rigid transformation which best explains most of these local correspondences. This is done with a robust procedure which allows up to 50% of false matches. We show that this approach, besides its simplicity, provides a robust and efficient way to rigidly register images in various situations. This includes for instance the alignment of 2D histological sections for the 3D reconstructions of trimmed organs and tissues, the automatic computation of the mid-sagittal plane in multimodal 3D images of the brain, and the multimodal registration of 3D CT and MR images of the brain. A quantitative evaluation of the results is provided for this last example, as well as a comparison with the classical approaches involving the minimization of a global measure of similarity based on Mutual Information or the Correlation Ratio. This shows a significant improvement of the robustness, for a comparable final accuracy. Although slightly more expensive in terms of computational requirements, the proposed approach can easily be implemented on a parallel architecture, which opens possibilities for real-time applications using a large number of processors.
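A compact sketch of the second stage on 2D point sets (assuming the block correspondences from stage one are given; the trimmed least-squares loop below is one simple robust procedure in the spirit of the paper's, with the rigid fit done by the standard SVD/Procrustes solution).

```python
# Sketch: robust rigid fit from block-matching correspondences, tolerating
# a large fraction of false matches by iterative trimming.
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # keep a proper rotation
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cq - cp @ R.T

def trimmed_rigid_fit(P, Q, keep=0.6, iters=15):
    idx = np.arange(len(P))
    for _ in range(iters):        # least-trimmed-squares flavor
        R, t = rigid_fit(P[idx], Q[idx])
        res = np.linalg.norm(P @ R.T + t - Q, axis=1)
        idx = np.argsort(res)[: max(3, int(keep * len(P)))]
    return R, t

rng = np.random.default_rng(0)
P = rng.random((100, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([5.0, -2.0])
Q[:30] = rng.random((30, 2)) * 8          # 30% false matches
R, t = trimmed_rigid_fit(P, Q)
print(np.round(R, 3), np.round(t, 3))     # typically recovers R_true, (5, -2)
```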

Journal ArticleDOI
TL;DR: In this paper, the influence functions and the corresponding asymptotic variances for these robust estimators of eigenvalues and eigenvectors are investigated by a simulation study, and it turns out that the theoretical results and simulations favor the use of S-estimators since they combine a high efficiency with appealing robustness properties.
Abstract: A robust principal component analysis can be easily performed by computing the eigenvalues and eigenvectors of a robust estimator of the covariance or correlation matrix. In this paper we derive the influence functions and the corresponding asymptotic variances for these robust estimators of eigenvalues and eigenvectors. The behaviour of several of these estimators is investigated by a simulation study. It turns out that the theoretical results and simulations favour the use of S-estimators, since they combine a high efficiency with appealing robustness properties.
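An illustrative sketch using a different, readily available robust covariance estimator, scikit-learn's minimum covariance determinant, as a stand-in for the S-estimators the paper favours; the robust principal components are simply the eigenvectors of the robust covariance estimate.

```python
# Sketch: robust PCA = eigendecomposition of a robust covariance estimate.
# MinCovDet (MCD) is a readily available stand-in for an S-estimator.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[4.0, 1.9], [1.9, 1.0]], size=300)
X[:20] = rng.normal(10, 0.5, size=(20, 2))      # a clump of outliers

classical = np.cov(X, rowvar=False)
robust = MinCovDet(random_state=0).fit(X).covariance_

for name, C in (("classical", classical), ("robust", robust)):
    vals, vecs = np.linalg.eigh(C)
    print(name, "first PC:", vecs[:, -1].round(3))  # outliers tilt the classical PC
```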

Journal ArticleDOI
TL;DR: The objective is to design a robust nonlinear state and output feedback law that simultaneously solves the global exponential regulation problem for all plants in the class; the efficiency and robustness features of the proposed method are demonstrated.

Journal ArticleDOI
TL;DR: This work studies the impact of incorporating increasing levels of design and finds that even small amounts of design lead to HOT states in percolation.
Abstract: Highly optimized tolerance (HOT) is a mechanism that relates evolving structure to power laws in interconnected systems. HOT systems arise where design and evolution create complex systems sharing common features, including (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. We study the impact of incorporating increasing levels of design and find that even small amounts of design lead to HOT states in percolation.

Journal ArticleDOI
TL;DR: In this article, a critical comparison of estimators minimizing Wahba's loss function is presented for the QUaternion ESTimator (QUEST) and Estimators of the Optimal Quaternion (ESOQ) to avoid the computational burden of sequential rotations in these algorithms.
Abstract: This paper contains a critical comparison of estimators minimizing Wahba's loss function. Some new results are presented for the QUaternion ESTimator (QUEST) and Estimators of the Optimal Quaternion (ESOQ and ESOQ2) to avoid the computational burden of sequential rotations in these algorithms. None of these methods is as robust in principle as Davenport's q method or the Singular Value Decomposition (SVD) method, which are significantly slower. Robustness is only an issue for measurements with widely differing accuracies, so the fastest estimators, the modified ESOQ and ESOQ2, are well suited to sensors that track multiple stars with comparable accuracies. More robust forms of ESOQ and ESOQ2 are developed that are intermediate in speed.
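For reference, the slow-but-robust baseline mentioned above, Davenport's q method, fits in a few lines (assuming unit vectors, non-negative weights a_i, and a quaternion stored vector-part-first):

```python
# Sketch: Davenport's q method for Wahba's problem. Inputs: unit reference
# vectors v_i, body-frame observations w_i, weights a_i. The optimal
# quaternion is the eigenvector of K for its largest eigenvalue.
import numpy as np

def q_method(w, v, a):
    B = sum(ai * np.outer(wi, vi) for ai, wi, vi in zip(a, w, v))
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, -1]                  # eigenvector of the largest eigenvalue

def quat_to_matrix(q):
    x, y, z, s = q                      # vector part first, scalar last
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y + z*s),     2*(x*z - y*s)],
        [2*(x*y - z*s),     1 - 2*(x*x + z*z), 2*(y*z + x*s)],
        [2*(x*z + y*s),     2*(y*z - x*s),     1 - 2*(x*x + y*y)],
    ])

rng = np.random.default_rng(0)
v = rng.standard_normal((5, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
angle = 0.7
A_true = np.array([[np.cos(angle), np.sin(angle), 0],
                   [-np.sin(angle), np.cos(angle), 0],
                   [0, 0, 1]])
w = v @ A_true.T                        # noise-free observations w = A v
q = q_method(w, v, a=np.ones(5))
print(np.allclose(quat_to_matrix(q), A_true, atol=1e-8))   # True
```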

Journal ArticleDOI
TL;DR: A matrix formalism of the propagation operator is introduced to compare the time-reversal and inverse filter techniques and experiments investigated in various media are presented to illustrate this comparison.
Abstract: To focus ultrasonic waves in an unknown inhomogeneous medium using a phased array, one has to calculate the optimal set of signals to be applied on the transducers of the array. In the case of time-reversal mirrors, one assumes that a source is available at the focus, providing the Green’s function of this point. In this paper, the robustness of this time-reversal method is investigated when loss of information breaks the time-reversal invariance. It arises in dissipative media or when the field radiated by the source is not entirely measured by the limited aperture of a time-reversal mirror. However, in both cases, linearity and reciprocity relations ensure time reversal to achieve a spatiotemporal matched filtering. Nevertheless, though it provides robustness to this method, no constraints are imposed on the field out of the focus and sidelobes may appear. Another approach consists of measuring the Green’s functions associated to the focus but also to neighboring points. Thus, the whole information characterizing the medium is known and the inverse source problem can be solved. A matrix formalism of the propagation operator is introduced to compare the time-reversal and inverse filter techniques. Moreover, experiments investigated in various media are presented to illustrate this comparison.
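In the matrix formalism, with H the propagation operator from array elements to control points, time reversal applies the conjugate transpose H^H (a spatiotemporal matched filter), while the inverse filter applies the pseudo-inverse of H. A toy comparison, with a random complex H standing in for a measured operator:

```python
# Sketch: time reversal (matched filter, H^H) vs. inverse filter (pinv(H)).
# H maps array-element excitations to field values at the control points.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 64                                 # control points, array elements
H = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

target = np.zeros(m)
target[m // 2] = 1.0                          # focus at one control point

e_tr = H.conj().T @ target                    # time reversal
e_if = np.linalg.pinv(H) @ target             # inverse filter

for name, e in (("time reversal", e_tr), ("inverse filter", e_if)):
    field = H @ e
    field /= np.abs(field[m // 2])            # normalize the focal amplitude
    side = np.abs(np.delete(field, m // 2)).max()
    print(f"{name}: worst sidelobe = {side:.3f}")
```

The matched filter maximizes the focal amplitude and is robust to loss of information, but leaves the field at the other control points unconstrained; the inverse filter controls the whole field at the cost of inverting, and thus being more sensitive to, the operator.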

MonographDOI
01 Nov 2000
TL;DR: An introduction to H-infinity control and loop-shaping is given, covering complexity and robustness, the best possible H-infinity robustness results, and state-space formulae and proofs.
Abstract: Contents: an introduction to H-infinity control; H-infinity loop-shaping; the v-gap metric; more H-infinity loop-shaping; complexity and robustness; design examples; topologies, metrics and operator theory; approximation in the graph topology; the best possible H-infinity robustness results; state-space formulae and proofs; singular value inequalities.

Journal ArticleDOI
TL;DR: The eigenvalue analysis and the nonlinear simulation results show the effectiveness of the proposed SAPSSs in damping out the local as well as the interarea modes and in greatly enhancing the system stability over a wide range of loading conditions and system configurations.
Abstract: Robust design of multimachine power system stabilizers (PSSs) using simulated annealing (SA) optimization technique is presented in this paper. The proposed approach employs SA to search for optimal parameter settings of a widely used conventional fixed-structure lead-lag PSS (CPSS). The parameters of the proposed simulated annealing based power system stabilizer (SAPSS) are optimized in order to shift the system electromechanical modes at different loading conditions and system configurations simultaneously to the left in the s-plane. Incorporation of SA as a derivative-free optimization technique in PSS design significantly reduces the computational burden. One of the main advantages of the proposed approach is its robustness to the initial parameter settings. In addition, the quality of the optimal solution does not rely on the initial guess. The performance of the proposed SAPSS under different disturbances and loading conditions is investigated for two multimachine power systems. The eigenvalue analysis and the nonlinear simulation results show the effectiveness of the proposed SAPSSs in damping out the local as well as the interarea modes and in greatly enhancing the system stability over a wide range of loading conditions and system configurations.
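A bare-bones sketch of the derivative-free search involved; the annealing loop below is generic, and its toy objective (pushing the eigenvalues of a small parameterized matrix leftwards) is an invented stand-in for the paper's multi-condition eigenvalue-shifting objective.

```python
# Bare-bones simulated annealing for derivative-free parameter tuning.
# Toy objective: minimize the largest real part of the eigenvalues of a
# parameterized 2nd-order system (a stand-in for shifting modes left).
import numpy as np

rng = np.random.default_rng(0)

def objective(p):
    A = np.array([[0.0, 1.0], [-p[0], -p[1]]])
    return np.max(np.linalg.eigvals(A).real)

def anneal(f, x0, T0=1.0, cooling=0.95, steps=2000, step_size=0.3):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        cand = np.clip(x + step_size * rng.standard_normal(len(x)), 0.1, 50.0)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / T):
            x, fx = cand, fc                 # accept (possibly uphill) move
            if fc < fbest:
                best, fbest = cand, fc
        T = max(T * cooling, 1e-3)           # cool, with a floor
    return best, fbest

p, val = anneal(objective, np.array([1.0, 1.0]))
print(p, val)   # modes pushed left; no derivatives of the objective needed
```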

Journal ArticleDOI
TL;DR: This paper presents how the 2 1/2 D visual servoing scheme, recently developed, can be used with unknown objects characterized by a set of points, based on the estimation of the camera displacement from two views, given by the current and desired images.
Abstract: Classical visual servoing techniques need a strong a priori knowledge of the shape and the dimensions of the observed objects. In this paper, we present how the 2 1/2 D visual servoing scheme we have recently developed can be used with unknown objects characterized by a set of points. Our scheme is based on the estimation of the camera displacement from two views, given by the current and desired images. Since vision-based robotics tasks generally need to be performed at video rate, we focus only on linear algorithms. Classical linear methods are based on the computation of the essential matrix. In this paper, we propose a different method, based on the estimation of the homography matrix related to a virtual plane attached to the object. We show that our method provides a more stable estimation when the epipolar geometry degenerates. This is particularly important in visual servoing to obtain a stable control law, especially near the convergence of the system. Finally, experimental results confirm the improvement in the stability, robustness, and behaviour of our scheme with respect to classical methods.
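For context, the linear estimation step can be sketched with the basic direct linear transform (DLT) for a homography from point correspondences (Hartley normalization and the paper's virtual-plane construction are omitted):

```python
# Sketch: direct linear transform (DLT) estimate of the homography H
# relating two views of a plane: x2 ~ H x1 in homogeneous coordinates.
import numpy as np

def homography_dlt(p1, p2):
    """p1, p2: (N, 2) arrays of matched points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(p1, p2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    return Vt[-1].reshape(3, 3)        # null vector -> H (up to scale)

H_true = np.array([[1.1, 0.05, 4.0], [-0.02, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
p1 = np.random.default_rng(0).random((8, 2)) * 100
h = np.column_stack([p1, np.ones(len(p1))]) @ H_true.T
p2 = h[:, :2] / h[:, 2:]               # project back to pixel coordinates
H = homography_dlt(p1, p2)
print(np.round(H / H[2, 2], 4))        # matches H_true up to scale
```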

Journal ArticleDOI
TL;DR: It is argued that more robustness can be achieved if watermarks are embedded in dc components, since dc components have much larger perceptual capacity than any ac components; accordingly, a new embedding strategy for watermarking is proposed, based on a quantitative analysis of the magnitudes of DCT components of host images.
Abstract: Both watermark structure and embedding strategy affect robustness of image watermarks. Where should watermarks be embedded in the discrete cosine transform (DCT) domain in order for the invisible image watermarks to be robust? Though many papers in the literature agree that watermarks should be embedded in perceptually significant components, dc components are explicitly excluded from watermark embedding. In this letter, a new embedding strategy for watermarking is proposed based on a quantitative analysis on the magnitudes of DCT components of host images. We argue that more robustness can be achieved if watermarks are embedded in dc components since dc components have much larger perceptual capacity than any ac components. Based on this idea, an adaptive watermarking algorithm is presented. We incorporate the feature of texture masking and luminance masking of the human visual system into watermarking. Experimental results demonstrate that the invisible watermarks embedded with the proposed watermark algorithm are very robust.
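A minimal sketch of dc-component embedding on 8×8 DCT blocks, assuming scipy's dct; the adaptive texture- and luminance-masking of the proposed algorithm is reduced to a fixed strength here.

```python
# Sketch: embed a +/-1 watermark in the dc coefficient of each 8x8 DCT
# block (fixed strength instead of the adaptive perceptual masking).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_dc(img, key=7, strength=4.0):
    out = img.astype(float).copy()
    rng = np.random.default_rng(key)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            block = dct2(out[i:i+8, j:j+8])
            block[0, 0] += strength * rng.choice([-1.0, 1.0])  # mark the dc term
            out[i:i+8, j:j+8] = idct2(block)
    return out

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
marked = embed_dc(img)
print(np.abs(marked - img).max())   # small, spread evenly over each block
```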

Journal ArticleDOI
TL;DR: It is shown that the information available in an ROLS algorithm after network training can be used to sequentially select centers to minimize the network output error and provide efficient methods for network reduction to achieve smaller architectures with acceptable accuracy and without retraining.
Abstract: Recursive orthogonal least squares (ROLS) is a numerically robust method for solving for the output layer weights of a radial basis function (RBF) network, and requires less computer memory than the batch alternative. In the paper, the use of ROLS is extended to selecting the centers of an RBF network. It is shown that the information available in an ROLS algorithm after network training can be used to sequentially select centers to minimize the network output error. This provides efficient methods for network reduction to achieve smaller architectures with acceptable accuracy and without retraining. Two selection methods are developed, forward and backward. The methods are illustrated in applications of RBF networks to modeling a nonlinear time series and a real multi-input multi-output chemical process. The final network models obtained achieve acceptable accuracy with significant reductions in the number of required centers.
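A plain least-squares sketch of forward center selection (greedy error reduction with a full refit at each step); the point of ROLS is to obtain this kind of selection recursively and orthogonally, without refitting from scratch, but the greedy structure is the same.

```python
# Sketch: greedy forward selection of RBF centers by residual reduction.
# Each step refits with lstsq; ROLS avoids the refit via orthogonalization.
import numpy as np

def rbf_design(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def forward_select(X, y, n_centers=8):
    chosen, remaining = [], list(range(len(X)))
    for _ in range(n_centers):
        best_err, best_c = np.inf, None
        for c in remaining:            # try adding each candidate center
            Phi = rbf_design(X, X[chosen + [c]])
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            err = np.mean((Phi @ w - y) ** 2)
            if err < best_err:
                best_err, best_c = err, c
        chosen.append(best_c)
        remaining.remove(best_c)
    return chosen, best_err

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)
centers, mse = forward_select(X, y)
print(len(centers), round(float(mse), 5))   # few centers reach small error
```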

Book ChapterDOI
TL;DR: An overview of robust Bayesian analysis is provided, with emphasis on foundational, decision-oriented, and computational approaches, and common types of robustness analyses are described, including global and local sensitivity analysis and loss and likelihood robustness.
Abstract: We provide an overview of robust Bayesian analysis with emphasis on foundational, decision-oriented, and computational approaches. Common types of robustness analyses are described, including global and local sensitivity analysis and loss and likelihood robustness.

01 Jan 2000
TL;DR: The present book intends to describe the current state of this approach to H∞ control: the so-called time domain or state space methods, which were developed in the late 1980s.
Abstract: DURING THE LAST DECADE, much attention has been drawn to H∞ control theory, especially as an approach to robust compensator design. In the past years a huge number of scientific publications, and among these several monographs, were published on this and related subjects. In the late 1980s there was a breakthrough in H∞ control theory, the so-called time domain or state space approach, which gave very elegant results leading to simple design techniques. There has since been a demand for a thorough textbook to describe these new methods in detail. H∞ control theory originated in the early 1980s, where the control community had been aware for some time of the poor robustness properties of classical observer-based controller methods and LQG design. This led to the formulation of the robust stability problem, which was intensely studied in the following years. There were several approaches which led to solutions of this problem. These were based on frequency domain methods and transfer function descriptions as presented in Francis (1987). Later on, the significance of H∞ control theory to a wide variety of control problems such as, for example, loop shaping became apparent, since the H∞ methods are well suited to treat a rather general class of design problems with frequency domain specifications. However, the widespread popularity that H∞ has attained today is mainly due to a more recent development, namely the time domain or state space methods which were developed in the late 1980s. In this line of research it became evident that solvability of the so-called H∞ standard problem (which comprises the robust stability problem and several other problems as special cases) is equivalent to solvability of two algebraic Riccati equations and a coupling condition. Moreover, a complete characterization of the whole class of solutions to the H∞ control problem was obtained in closed form. The present book intends to describe the current state of this approach to H∞ control. In the introductory part of the book the deficiencies of classical control with respect to robustness issues are pointed out, and the H∞ control problem is introduced. It is shown, however, by means of an example that a solution to the H∞ control problem does not necessarily have good stability margins. Hence, it is emphasized that the formulation of the control problem itself does not guarantee robustness. Robustness is obtained only if it is designed for! The robustness issue is further addressed as stabilization of uncertain systems and as graph topology convergence, and the mixed sensitivity problem is introduced as an approach to the nominal performance/robust stability problem. The exposition given in the book requires a number of mathematical prerequisites, which are collected in a separate chapter. Among these are properties of linear continuous or discrete time systems and theory for rational matrices.
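For reference, the two algebraic Riccati equations alluded to, stated here as an assumption in the standard simplified (DGKF) setting for a plant x' = Ax + B1w + B2u, z = C1x + D12u, y = C2x + D21w with the usual normalization conditions (sign and scaling conventions vary between texts):

```latex
% H-infinity suboptimality conditions (simplified DGKF setting; stated
% from memory as an illustration, not quoted from the book under review).
\[
A^{\top} X_{\infty} + X_{\infty} A + C_1^{\top} C_1
  + X_{\infty}\bigl(\gamma^{-2} B_1 B_1^{\top} - B_2 B_2^{\top}\bigr) X_{\infty} = 0
\]
\[
A Y_{\infty} + Y_{\infty} A^{\top} + B_1 B_1^{\top}
  + Y_{\infty}\bigl(\gamma^{-2} C_1^{\top} C_1 - C_2^{\top} C_2\bigr) Y_{\infty} = 0
\]
\[
X_{\infty} \succeq 0, \qquad Y_{\infty} \succeq 0, \qquad
\rho\left(X_{\infty} Y_{\infty}\right) < \gamma^{2}
\]
```

The third line is the coupling condition, with ρ denoting the spectral radius.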

Journal ArticleDOI
TL;DR: A new multiparadigm intelligent system approach is presented for the solution of the incident detection problem, employing advanced signal processing, pattern recognition, and classification techniques and produced excellent incident detection rates with no false alarms when tested using both real and simulated data.
Abstract: Traffic incidents are nonrecurrent and pseudorandom events that disrupt the normal flow of traffic and create a bottleneck in the road network. The probability of incidents is higher during peak flow rates when the systemwide effect of incidents is most severe. Model-based solutions to the incident detection problem have not produced practical, useful results primarily because the complexity of the problem does not lend itself to accurate mathematical and knowledge-based representations. A new multiparadigm intelligent system approach is presented for the solution of the problem, employing advanced signal processing, pattern recognition, and classification techniques. The methodology effectively integrates fuzzy, wavelet, and neural computing techniques to improve reliability and robustness. A wavelet-based denoising technique is employed to eliminate undesirable fluctuations in observed data from traffic sensors. Fuzzy c-means clustering is used to extract significant information from the observed data and to reduce its dimensionality. A radial basis function neural network (RBFNN) is developed to classify the denoised and clustered observed data. The new model produced excellent incident detection rates with no false alarms when tested using both real and simulated data.
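The denoising stage can be sketched with PyWavelets (soft thresholding of the detail coefficients; the wavelet family and threshold rule below are common illustrative choices, and the fuzzy c-means and RBFNN stages are omitted):

```python
# Sketch: wavelet-based denoising of a traffic-sensor signal via soft
# thresholding. The clustering and RBFNN classification stages are omitted.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

t = np.linspace(0, 10, 1024)
occupancy = 0.3 + 0.2 * (t > 6)                        # step = "incident"
noisy = occupancy + 0.05 * np.random.default_rng(0).standard_normal(t.size)
clean = wavelet_denoise(noisy)
print(np.abs(clean - occupancy).mean())   # fluctuations largely removed
```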

Journal ArticleDOI
TL;DR: A novel optimisation-based approach is introduced for testing model structural identifiability and distinguishability, involving semi-infinite programming and max-min problems, and methods are presented to provide experiment design robustness, accounting for parameter uncertainty.

Book
08 Jun 2000
TL;DR: Taguchi Methods for Robust Design teaches how to prevent quality problems in the early stages of product development/design, how to use the dynamic signal-to-noise (SN) ratio as the performance index for the robustness of product functions, and how to evaluate methods of data collection and analyze case studies of parameter design.
Abstract: Through the theories, strategies, and broad range of case studies in Taguchi Methods for Robust Design, you gain exposure to the entire spectrum of robust design and master its application. You also learn to prevent quality problems in the early stages of product development/design, to use the dynamic signal-to-noise (SN) ratio as the performance index for the robustness of product functions, and to evaluate methods of data collection and analyze case studies of parameter design. Contents include: types of SN ratios; basic dynamic-type SN ratios for continuous variables; various cases; non-dynamic SN ratios; classified attributes; SN ratio with complex numbers; layout and analysis of Youden square; incomplete data; robust technology development; case studies.
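As a small worked example of the dynamic SN ratio idea, using the common form η = 10·log10(β²/σ²) for an ideal function y = βM (the book treats many variants case by case, so this is one representative form):

```python
# Sketch: dynamic signal-to-noise ratio for an ideal function y = beta * M.
# eta = 10*log10(beta^2 / sigma^2); a larger eta indicates a more robust
# product function across the noise runs.
import numpy as np

def dynamic_sn_ratio(M, y):
    beta = np.sum(M * y) / np.sum(M * M)     # least-squares slope through 0
    resid = y - beta * M
    sigma2 = np.sum(resid ** 2) / (len(y) - 1)
    return beta, 10 * np.log10(beta ** 2 / sigma2)

M = np.tile([1.0, 2.0, 3.0], 4)                    # 3 signal levels x 4 runs
rng = np.random.default_rng(0)
y = 2.0 * M + 0.1 * rng.standard_normal(M.size)    # near-ideal system
beta, eta = dynamic_sn_ratio(M, y)
print(round(float(beta), 3), round(float(eta), 1), "dB")
```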

Journal ArticleDOI
TL;DR: In this paper, a new control algorithm based on discrete-time variable structure systems theory is proposed that reaches the sliding manifold in finite time without chattering; the robustness of the algorithm with respect to parameter uncertainties as well as external disturbances is considered.

Journal ArticleDOI
TL;DR: A comprehensive performance comparison is conducted both analytically and via Monte Carlo simulation which clearly demonstrates the superior theoretical compression performance of signal-dependent rank-reduction, its broader region-of-convergence, and its inherent robustness to subspace leakage.
Abstract: This paper is concerned with issues and techniques associated with the development of both optimal and adaptive (data dependent) reduced-rank signal processing architectures. Adaptive algorithms for 1D beamforming, 2D space-time adaptive processing (STAP), and 3D STAP for joint hot and cold clutter mitigation are surveyed. The following concepts are then introduced for the first time (other than workshop and conference records) and evaluated in a signal-dependent versus signal independent context: (1) the adaptive processing "region-of-convergence" as a function of sample support and rank, (2) a new variant of the cross-spectral metric (CSM) that retains dominant mode estimation in the direct-form processor (DFP) structure, and (3) the robustness of the proposed methods to the subspace "leakage" problem arising in many real-world applications. A comprehensive performance comparison is conducted both analytically and via Monte Carlo simulation which clearly demonstrates the superior theoretical compression performance of signal-dependent rank-reduction, its broader region-of-convergence, and its inherent robustness to subspace leakage.
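A toy comparison of signal-independent versus signal-dependent rank selection for a reduced-rank adaptive beamformer: principal components (PC) keep the largest eigenvalues, while the cross-spectral metric (CSM) ranks eigenvectors by |u^H s|²/λ, i.e., by their contribution toward the look direction. The scenario below is invented for illustration.

```python
# Sketch: PC vs. CSM rank reduction for a reduced-rank beamformer
# w = U_r diag(1/lam_r) U_r^H s (direct-form processor structure).
import numpy as np

def reduced_rank_weights(R, s, r, metric):
    lam, U = np.linalg.eigh(R)
    lam, U = lam[::-1], U[:, ::-1]                 # descending eigenvalues
    if metric == "pc":
        keep = np.arange(r)                        # largest eigenvalues
    else:                                          # CSM: largest |u^H s|^2/lam
        keep = np.argsort(np.abs(U.conj().T @ s) ** 2 / lam)[::-1][:r]
    Ur, lr = U[:, keep], lam[keep]
    return Ur @ ((Ur.conj().T @ s) / lr)

n = 16
steer = lambda u: np.exp(1j * np.pi * u * np.arange(n)) / np.sqrt(n)
s = steer(0.0)                                     # look direction
R = np.eye(n, dtype=complex)                       # white noise floor
for u, p in ((0.5, 100.0), (-0.3, 30.0), (0.05, 5.0)):
    a = steer(u)
    R += p * np.outer(a, a.conj())                 # interference terms

for metric in ("pc", "csm"):
    w = reduced_rank_weights(R, s, r=2, metric=metric)
    quality = np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w)
    print(metric, f"output quality ratio: {quality:.4f}")  # CSM is higher
```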

Journal ArticleDOI
TL;DR: In this article, the authors consider the iterative learning control problem from an adaptive control viewpoint and show that some standard Lyapunov adaptive designs can be modified in a straightforward manner to give a solution to either the feedback or feedforward ILC problem.
Abstract: We consider the iterative learning control problem from an adaptive control viewpoint. It is shown that some standard Lyapunov adaptive designs can be modified in a straightforward manner to give a solution to either the feedback or feedforward ILC problem. Some of the common assumptions of non-linear iterative learning control are relaxed: e.g. we relax the common linear growth assumption on the non-linearities and handle systems of arbitrary relative degree. It is shown that generally a linear rate of convergence of the MSE can be achieved, and a simple robustness analysis is given. For linear plants we show that a linear rate of MSE convergence can be achieved for non-minimum phase plants.
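The flavor of the convergence claim can be seen with the simplest P-type ILC update u_{k+1}(t) = u_k(t) + γ e_k(t+1) on a toy first-order plant; this generic update is shown for orientation only and is not the paper's Lyapunov-based adaptive design.

```python
# Sketch: P-type iterative learning control on a toy first-order plant.
# With |1 - gamma*b| < 1 the tracking error contracts from trial to trial.
import numpy as np

T, a, b, gamma = 50, 0.8, 1.0, 0.5
y_ref = np.sin(np.linspace(0, 2 * np.pi, T))    # desired trajectory

def run_plant(u):
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]          # x_{t+1} = a x_t + b u_t
    return y

u = np.zeros(T)
for k in range(10):
    e = y_ref - run_plant(u)
    print(f"trial {k}: MSE = {np.mean(e ** 2):.2e}")
    u = u + gamma * np.roll(e, -1)              # correct u(t) with e(t+1)
```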