
Showing papers on "Robustness (computer science) published in 1997"


Proceedings ArticleDOI
12 Oct 1997
TL;DR: The paper reports a reworking of the particle swarm algorithm to operate on discrete binary variables, where trajectories are changes in the probability that a coordinate will take on a zero or one value.
Abstract: The particle swarm algorithm adjusts the trajectories of a population of "particles" through a problem space on the basis of information about each particle's previous best performance and the best previous performance of its neighbors. Previous versions of the particle swarm have operated in continuous space, where trajectories are defined as changes in position on some number of dimensions. The paper reports a reworking of the algorithm to operate on discrete binary variables. In the binary version, trajectories are changes in the probability that a coordinate will take on a zero or one value. Examples, applications, and issues are discussed.
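The probability-based update described above can be sketched in a few lines. The sigmoid mapping from velocity to bit probability and the clamped velocities follow the standard discrete-PSO recipe, but the constants and the `binary_pso` interface below are illustrative, not taken from the paper:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=50, vmax=4.0):
    """Minimal binary particle swarm sketch: velocities stay real-valued,
    and a sigmoid turns each velocity into the probability that the
    corresponding bit takes the value 1."""
    X = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g_i = max(range(n_particles), key=lambda i: pbest_f[i])
    g, g_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                V[i][d] += 2.0 * r1 * (pbest[i][d] - X[i][d]) + 2.0 * r2 * (g[d] - X[i][d])
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # keep probabilities off 0 and 1
                X[i][d] = 1 if random.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            f = fitness(X[i])
            if f > pbest_f[i]:
                pbest_f[i], pbest[i] = f, X[i][:]
                if f > g_f:
                    g_f, g = f, X[i][:]
    return g, g_f

# "one-max" toy problem: fitness is simply the number of 1 bits
best, best_f = binary_pso(sum, n_bits=16)
```

Clamping the velocity plays the role of vmax in the continuous algorithm: it keeps the bit-flip probability away from 0 and 1 so the swarm can keep exploring.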

4,478 citations


Book
05 Oct 1997
TL;DR: This book covers linear algebra, linear systems, H2 and H∞ spaces, balanced model reduction, uncertainty and robustness, algebraic Riccati equations, H2 and H∞ optimal control, controller reduction, and H∞ loop shaping.
Abstract: 1. Introduction. 2. Linear Algebra. 3. Linear Systems. 4. H2 and H∞ Spaces. 5. Internal Stability. 6. Performance Specifications and Limitations. 7. Balanced Model Reduction. 8. Uncertainty and Robustness. 9. Linear Fractional Transformation. 10. μ and μ-Synthesis. 11. Controller Parameterization. 12. Algebraic Riccati Equations. 13. H2 Optimal Control. 14. H∞ Control. 15. Controller Reduction. 16. H∞ Loop Shaping. 17. Gap Metric and ν-Gap Metric. 18. Miscellaneous Topics. Bibliography. Index.

3,471 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of retrieving images from large image databases with a method based on local grayvalue invariants computed at automatically detected interest points; indexing allows efficient retrieval from a database of more than 1,000 images.
Abstract: This paper addresses the problem of retrieving images from large image databases. The method is based on local grayvalue invariants which are computed at automatically detected interest points. A voting algorithm and semilocal constraints make retrieval possible. Indexing allows for efficient retrieval from a database of more than 1,000 images. Experimental results show correct retrieval in the case of partial visibility, similarity transformations, extraneous features, and small perspective deformations.

1,756 citations


Journal ArticleDOI
TL;DR: In this paper, the authors outline recent advances of the theory of observer-based fault diagnosis in dynamic systems towards the design of robust techniques of residual generation and residual evaluation, including H∞ theory, nonlinear unknown input observer theory, adaptive observer theory and artificial intelligence including fuzzy logic, knowledge-based techniques and the natural intelligence of the human operator.

1,277 citations


Journal ArticleDOI
TL;DR: This work considers least-squares problems where the coefficient matrix A and vector b are unknown but bounded, and minimizes the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A.
Abstract: We consider least-squares problems where the coefficient matrix A and the vector b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A and b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation.
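For a scalar unknown the worst-case residual has a simple closed form, which makes the Tikhonov-like shrinkage visible. The toy data and the 1-D grid search below are illustrative (the paper solves the matrix case via SOCP):

```python
import math

def wc_residual(a, b, x, rho):
    """Worst-case residual for a scalar unknown x under a joint norm bound
    rho on the perturbation of [a, b]: ||a*x - b|| + rho * sqrt(x^2 + 1)
    (the closed form the SOCP minimizes in the scalar case)."""
    nominal = math.sqrt(sum((ai * x - bi) ** 2 for ai, bi in zip(a, b)))
    return nominal + rho * math.sqrt(x * x + 1.0)

a = [1.0, 2.0, 3.0]
b = [1.1, 1.9, 3.2]
ols = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)
# a crude grid search stands in for the paper's SOCP solver
grid = [i / 1000.0 for i in range(-2000, 2001)]
robust = min(grid, key=lambda x: wc_residual(a, b, x, rho=0.5))
```

The robust minimizer is pulled below the ordinary least-squares solution, exactly the regularizing effect the abstract describes.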

1,164 citations


Journal ArticleDOI
TL;DR: This tutorial presents what are probably the most commonly used techniques for parameter estimation, including linear least-squares (pseudo-inverse and eigen analysis); orthogonal least-squares; gradient-weighted least-squares; bias-corrected renormalization; Kalman filtering; and robust techniques (clustering, regression diagnostics, M-estimators, least median of squares).
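Of the robust techniques listed, least median of squares is easy to sketch for line fitting. The random-pair sampling loop below is a standard LMedS implementation, not code from the tutorial:

```python
import random

def lmeds_line(points, n_trials=200, seed=0):
    """Least-median-of-squares line fit: fit a line to random pairs of
    points and keep the fit whose squared residuals have the smallest
    median. Tolerates up to roughly half the data being outliers."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        res = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = res[len(res) // 2]
        if med < best_med:
            best_med, best = med, (a, b)
    return best

# inliers on y = 2x + 1 plus one gross outlier
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(5.0, 100.0)]
a, b = lmeds_line(pts)
```

Because the criterion is the *median* residual rather than the sum, the gross outlier cannot drag the fitted line away from the inliers.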

1,015 citations


Journal ArticleDOI
01 Mar 1997
TL;DR: A comparative analysis of four popular and efficient algorithms, each of which computes the translational and rotational components of the transform in closed form, as the solution to a least squares formulation of the problem, indicates that under “ideal” data conditions certain distinctions in accuracy and stability can be seen.
Abstract: A common need in machine vision is to compute the 3-D rigid body transformation that aligns two sets of points for which correspondence is known. A comparative analysis is presented here of four popular and efficient algorithms, each of which computes the translational and rotational components of the transform in closed form, as the solution to a least squares formulation of the problem. They differ in terms of the transformation representation used and the mathematical derivation of the solution, using respectively singular value decomposition or eigensystem computation based on the standard $[ \vec{R}, \vec{T} ]$ representation, and the eigensystem analysis of matrices derived from unit and dual quaternion forms of the transform. This comparison presents both qualitative and quantitative results of several experiments designed to determine (1) the accuracy and robustness of each algorithm in the presence of different levels of noise, (2) the stability with respect to degenerate data sets, and (3) relative computation time of each approach under different conditions. The results indicate that under “ideal” data conditions (no noise) certain distinctions in accuracy and stability can be seen. But for “typical, real-world” noise levels, there is no difference in the robustness of the final solutions (contrary to certain previously published results). Efficiency, in terms of execution time, is found to be highly dependent on the computer system setup.
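The closed-form least-squares structure the paper compares is easiest to see in the plane, where the rotation reduces to a single `atan2` and no SVD or quaternion machinery is needed. The 2-D analogue below is illustrative and is not one of the four algorithms compared:

```python
import math

def rigid_align_2d(P, Q):
    """Closed-form least-squares rigid alignment in 2-D (planar analogue of
    the SVD/quaternion closed forms the paper compares; not taken from it):
    returns theta, (tx, ty) with R(theta) @ p + t ~= q."""
    n = float(len(P))
    cpx = sum(p[0] for p in P) / n
    cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n
    cqy = sum(q[1] for q in Q) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cpx, py - cpy   # centered source point
        bx, by = qx - cqx, qy - cqy   # centered target point
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cqx - (c * cpx - s * cpy), cqy - (s * cpx + c * cpy))

# recover a known 30-degree rotation plus a translation of (2, -1)
theta0 = math.radians(30.0)
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
Q = [(math.cos(theta0) * x - math.sin(theta0) * y + 2.0,
      math.sin(theta0) * x + math.cos(theta0) * y - 1.0) for x, y in P]
theta, (tx, ty) = rigid_align_2d(P, Q)
```

As in the 3-D algorithms, the translation falls out for free once the rotation is known: it is the difference between the target centroid and the rotated source centroid.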

857 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes and shows how the invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.
Abstract: This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes. This problem poses several challenges. People are highly non-rigid objects with a high degree of variability in size, shape, color, and texture. Unlike previous approaches, this system learns from examples and does not rely on any a priori (hand-crafted) models or on motion. The detection technique is based on the novel idea of the wavelet template that defines the shape of an object in terms of a subset of the wavelet coefficients of the image. It is invariant to changes in color and texture and can be used to robustly define a rich and complex class of objects such as people. We show how the invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.

811 citations


Journal ArticleDOI
TL;DR: An analytically robust, globally convergent approach to managing the use of approximation models of varying fidelity in optimization is presented; based on the trust-region idea from nonlinear programming, it is provably convergent to a solution of the original high-fidelity problem.
Abstract: This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of various fidelity in optimization. By robust global behavior we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary initial iterate, will converge to a stationary point or local optimizer for the original problem. The approach we present is based on the trust region idea from nonlinear programming and is shown to be provably convergent to a solution of the original high-fidelity problem. The proposed method for managing approximations in engineering optimization suggests ways to decide when the fidelity, and thus the cost, of the approximations might be fruitfully increased or decreased in the course of the optimization iterations. The approach is quite general. We make no assumptions on the structure of the original problem, in particular, no assumptions of convexity and separability, and place only mild requirements on the approximations. The approximations used in the framework can be of any nature appropriate to an application; for instance, they can be represented by analyses, simulations, or simple algebraic models. This paper introduces the approach and outlines the convergence analysis.
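The ratio test at the heart of the framework, trusting the cheap model only inside a radius that grows when its predictions match the expensive function, can be sketched in one dimension. The thresholds (0.25, 0.75, the radius cap) are conventional trust-region choices, not the paper's:

```python
import math

def trust_region_1d(f, grad, hess, x0, delta=1.0, iters=30):
    """1-D sketch of model management: trust the cheap quadratic model only
    within a radius that grows when it predicts the true function well and
    shrinks when it does not (thresholds are conventional, not the paper's)."""
    x = x0
    for _ in range(iters):
        g, h = grad(x), hess(x)
        # minimize the model m(s) = f(x) + g*s + 0.5*h*s^2 over |s| <= delta
        s = -g / h if h > 0 else (-delta if g > 0 else delta)
        s = max(-delta, min(delta, s))
        pred = -(g * s + 0.5 * h * s * s)   # reduction the model predicts
        if pred <= 0:
            break
        rho = (f(x) - f(x + s)) / pred      # actual vs. predicted reduction
        if rho > 0.75:
            delta = min(2.0 * delta, 10.0)  # good agreement: expand
        elif rho < 0.25:
            delta *= 0.5                    # poor agreement: shrink
        if rho > 0.1:
            x += s                          # accept the step
    return x

# minimize f(x) = (x - 3)^2 + cos(x); exact derivatives play the "model"
f = lambda x: (x - 3.0) ** 2 + math.cos(x)
df = lambda x: 2.0 * (x - 3.0) - math.sin(x)
d2f = lambda x: 2.0 - math.cos(x)
xstar = trust_region_1d(f, df, d2f, 0.0)
```

In the paper's setting `f` would be the expensive high-fidelity analysis and the quadratic model an arbitrary cheap surrogate; the ratio test is what supplies the convergence guarantee.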

651 citations


Proceedings ArticleDOI
05 May 1997
TL;DR: Several methods of achieving software diversity are discussed based on randomizations that respect the specified behavior of the program, which could potentially increase the robustness of software systems with minimal impact on convenience, usability, and efficiency.
Abstract: Diversity is an important source of robustness in biological systems. Computers, by contrast, are notable for their lack of diversity. Although homogeneous systems have many advantages, the beneficial effects of diversity in computing systems have been overlooked, specifically in the area of computer security. Several methods of achieving software diversity are discussed based on randomizations that respect the specified behavior of the program. Such randomization could potentially increase the robustness of software systems with minimal impact on convenience, usability, and efficiency. Randomization of the amount of memory allocated on a stack frame is shown to disrupt a simple buffer overflow attack.
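A toy simulation shows why randomizing the padding between a buffer and the return address disrupts a fixed-offset overflow. The frame layout and the 16-way padding below are illustrative, not the paper's actual memory layout:

```python
import random

def vulnerable_frame(rng, pad_max=16):
    """Toy stack frame: an 8-slot buffer, a random amount of padding,
    then the return-address slot (layout is illustrative)."""
    pad = rng.randrange(pad_max)
    return [0] * 8 + [0] * pad + ["RET"]

def fixed_offset_attack(frame, assumed_offset=8):
    # the exploit blindly overwrites a hard-coded offset past the buffer
    return assumed_offset < len(frame) and frame[assumed_offset] == "RET"

hits = sum(fixed_offset_attack(vulnerable_frame(random.Random(i))) for i in range(1000))
# with 16 possible paddings the fixed exploit only lands about 1 time in 16
```

Each process instance gets its own layout, so an exploit crafted against one copy succeeds against only a small fraction of the diverse population.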

575 citations


Journal ArticleDOI
TL;DR: In this article, the observer-based fault detection and isolation problem with an emphasis on robustness and applications is studied, and a summary of the basic ideas behind the use of observers in generating diagnostic residual signals is provided.

Journal ArticleDOI
TL;DR: Methods for robust stability analysis and robust stabilization are developed dependent on the size of the delay and are given in terms of linear matrix inequalities.
Abstract: This paper considers the problems of robust stability analysis and robust control design for a class of uncertain linear systems with a constant time-delay. The uncertainty is assumed to be norm-bounded and appears in all the matrices of the state-space model. We develop methods for robust stability analysis and robust stabilization. The proposed methods are dependent on the size of the delay and are given in terms of linear matrix inequalities.

Journal ArticleDOI
TL;DR: Some schemes extending the well-known diagnosis methods for linear systems to the nonlinear case are considered and the robustness of these schemes in presence of unknown inputs is discussed.

Proceedings Article
01 Jan 1997
TL;DR: This paper identifies a spectrum of middle-agents, characterizes the behavior of three different types, and reports on initial experiments that focus on evaluating performance tradeoffs between matchmaking and brokering middle-agents, according to criteria such as load balancing, robustness, dynamic preferences or capabilities, and privacy.
Abstract: Like middle-men in physical commerce, middle-agents support the flow of information in electronic commerce, assisting in locating and connecting the ultimate information provider with the ultimate information requester. Many different types of middle-agents will be useful in realistic, large, distributed, open multi-agent problem solving systems. These include matchmakers or yellow page agents that process advertisements, blackboard agents that collect requests, and brokers that process both. The behaviors of each type of middle-agent have certain performance characteristics—privacy, robustness, and adaptiveness qualities—that are related to characteristics of the external environment and of the agents themselves. For example, while brokered systems are more vulnerable to certain failures, they are also able to cope more quickly with a rapidly fluctuating agent workforce and meet certain privacy considerations. This paper identifies a spectrum of middle-agents, characterizes the behavior of three different types, and reports on initial experiments that focus on evaluating performance tradeoffs between matchmaking and brokering middle-agents, according to criteria such as load balancing, robustness, dynamic preferences or capabilities, and privacy.
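The matchmaking/brokering distinction can be sketched with two toy classes: a matchmaker hands the provider back to the requester, while a broker relays the request itself. Class and method names are illustrative, not from the paper:

```python
class Matchmaker:
    """Toy middle-agent: a matchmaker returns the provider so the
    requester can talk to it directly."""
    def __init__(self):
        self.ads = {}
    def advertise(self, capability, provider):
        self.ads[capability] = provider
    def match(self, capability):
        return self.ads.get(capability)

class Broker(Matchmaker):
    """A broker relays the request itself, hiding requester and provider
    from each other (the privacy tradeoff noted in the abstract)."""
    def request(self, capability, query):
        provider = self.match(capability)
        return provider(query) if provider else None

broker = Broker()
broker.advertise("square", lambda x: x * x)
direct = broker.match("square")       # matchmaking: requester gets the provider
answer = broker.request("square", 7)  # brokering: middle-agent relays the call
```

The broker's extra hop is exactly what makes it both a single point of failure and a natural place to balance load or enforce privacy.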

Journal ArticleDOI
TL;DR: In this paper, the authors provide a tutorial and caution for prospective model users, with the specific purpose of illustrating that, in spite of advanced physical-process parameterizations and high resolutions permitted by faster computers, and modern mesoscale data for initial conditions, there is still a basic limitation to predictability with a LAM, i.e., lateral boundary conditions (LBC).
Abstract: Limited-area models (LAMs) are presently used for a wide variety of research and operational forecasting applications, and such use will likely expand greatly as the rapid increase in the performance/price ratio of computers and workstations makes LAMs more accessible to novice users. The robustness of these well-tested and documented models will make it tempting for many to consider them as turn-key systems that can be used without any experience or formal training in numerical weather prediction. This paper is intended as a tutorial and caution for such prospective model users, with the specific purpose of illustrating that, in spite of advanced physical-process parameterizations and high resolutions permitted by faster computers, and modern mesoscale data for initial conditions, there is still a basic limitation to predictability with a LAM: lateral boundary conditions (LBCs). Illustrations are provided of previous work that show the serious negative effects of LBCs, and guidelines are provided for helping ...

Journal ArticleDOI
TL;DR: The main contribution of this paper is to show that the robust H∞ filtering problem can be solved using linear matrix inequality (LMI) techniques, which are numerically efficient owing to recent advances in convex optimization.
Abstract: We consider the robust H∞ filtering problem for a general class of uncertain linear systems described by the so-called integral quadratic constraints (IQCs). This problem is important in many signal processing applications where noise, nonlinearity, quantization errors, time delays, and unmodeled dynamics can be naturally described by IQCs. The main contribution of this paper is to show that the robust H∞ filtering problem can be solved using linear matrix inequality (LMI) techniques, which are numerically efficient owing to recent advances in convex optimization. The paper deals with both continuous and discrete-time uncertain linear systems.

Proceedings ArticleDOI
21 Apr 1997
TL;DR: An alternative approach is detailed which reformulates the problem as a linear regression of phase data and then estimates the time-delay through minimization of a robust statistical error measure and is shown to be less susceptible to room reverberation effects.
Abstract: Conventional time-delay estimators exhibit dramatic performance degradations in the presence of multipath signals. This limits their application in reverberant enclosures, particularly when the signal of interest is speech and it may not be possible to estimate and compensate for channel effects prior to time-delay estimation. This paper details an alternative approach which reformulates the problem as a linear regression of phase data and then estimates the time-delay through minimization of a robust statistical error measure. The technique is shown to be less susceptible to room reverberation effects. Simulations are performed across a range of source placements and room conditions to illustrate the utility of the proposed time-delay estimation method relative to conventional methods.
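The phase-regression idea rests on the identity that a pure delay tau makes the cross-spectrum phase a line of slope -tau in angular frequency. The sketch below uses the median of pairwise slopes (Theil-Sen) as a robust stand-in for the paper's statistical error measure, with illustrative frequencies and corruption:

```python
import math
import random
import statistics

def delay_from_phase(freqs, phases):
    """Estimate the delay as minus the slope of cross-spectrum phase versus
    angular frequency. The median of pairwise slopes (Theil-Sen) stands in
    for the paper's robust statistical error measure."""
    slopes = [(phases[j] - phases[i]) / (freqs[j] - freqs[i])
              for i in range(len(freqs)) for j in range(i + 1, len(freqs))]
    return -statistics.median(slopes)

tau = 0.003  # true delay: 3 ms
freqs = [2.0 * math.pi * f for f in range(100, 1200, 100)]  # 11 frequency bins
phases = [-w * tau for w in freqs]
rng = random.Random(0)
for k in rng.sample(range(len(phases)), 3):  # corrupt a few bins, as reverberation would
    phases[k] += rng.uniform(-2.0, 2.0)
tau_hat = delay_from_phase(freqs, phases)
```

An ordinary least-squares fit of the same phase data would be pulled off by the corrupted bins; the median-based slope ignores them, which is the robustness the abstract claims.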

Journal ArticleDOI
TL;DR: In this article, a new interior point nonlinear programming algorithm for optimal power flow problems (OPF) is presented, based on the perturbed KKT conditions of the primal problem. Numerical results on systems ranging from 14 to 1047 buses show the method is promising for large-scale application due to its robustness and fast execution time.
Abstract: This paper presents a new interior point nonlinear programming algorithm for optimal power flow problems (OPF) based on the perturbed KKT conditions of the primal problem. Through the concept of the centering direction, the authors extend this algorithm to classical power flow (PF) and approximate OPF problems. For the latter, CPU time can be reduced substantially. To efficiently handle functional inequality constraints, a reduced correction equation is derived, the size of which depends on that of equality constraints. A novel data structure is proposed which has been realized by rearranging the correction equation. Compared with the conventional data structure of Newton OPF, the number of fill-ins of the proposed scheme is roughly halved and CPU time is reduced by about 15% for large scale systems. The proposed algorithm includes four kinds of objective functions and two different data structures. Extensive numerical simulations on test systems that range in size from 14 to 1047 buses, have shown that the proposed method is very promising for large scale application due to its robustness and fast execution time.

Journal ArticleDOI
A.S. Morse1
TL;DR: This paper proves that without any further modification, the same supervisor described in Part I can also perform this function in the face of norm-bounded unmodeled dynamics, and moreover that none of the signals within the overall system can grow without bound in response to bounded disturbance and noise inputs, whether they are constant or not.
Abstract: A simply structured high-level controller, called a "supervisor", has recently been proposed in part I of this article (ibid., vol.41, 1996) for the purpose of orchestrating the switching of a sequence of candidate set-point controllers into feedback with an imprecisely modeled SISO process so as to cause the output of the process to approach and track a constant reference input. The process is assumed to be modeled by an SISO linear system whose transfer function is in the union of a number of subclasses, each subclass being small enough so that one of the candidate controllers would solve the set-point tracking problem, if the process transfer function was to be one of the subclass members. This paper proves that without any further modification, the same supervisor described in Part I can also perform this function in the face of norm-bounded unmodeled dynamics, and moreover that none of the signals within the overall system can grow without bound in response to bounded disturbance and noise inputs, whether they are constant or not.

Proceedings ArticleDOI
17 Jun 1997
TL;DR: A new, efficient stereo algorithm addresses robust disparity estimation in the presence of occlusions via an adaptive, multi-window scheme that uses left-right consistency to compute disparity and its associated uncertainty.
Abstract: We present a new, efficient stereo algorithm addressing robust disparity estimation in the presence of occlusions. The algorithm is an adaptive, multi-window scheme using left-right consistency to compute disparity and its associated uncertainty. We demonstrate and discuss performances with both synthetic and real stereo pairs, and show how our results improve on those of closely related techniques for both robustness and efficiency.
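A single-scanline toy version conveys the left-right consistency idea: compute winner-take-all disparities in both directions and keep only mutually agreeing matches, which is how occluded pixels get rejected. The adaptive multi-window scheme and the uncertainty estimate are omitted, and the array values and window size are illustrative:

```python
def disparities(ref, tgt, max_d, w=1, sign=1):
    """Winner-take-all SSD disparities along one scanline (toy matching
    step). sign gives the search direction in the target image."""
    out = []
    for x in range(len(ref)):
        best_d, best_c = 0, float("inf")
        for d in range(max_d + 1):
            cost, ok = 0.0, True
            for k in range(-w, w + 1):
                xr, xt = x + k, x + sign * d + k
                if not (0 <= xr < len(ref) and 0 <= xt < len(tgt)):
                    ok = False
                    break
                cost += (ref[xr] - tgt[xt]) ** 2
            if ok and cost < best_c:
                best_c, best_d = cost, d
        out.append(best_d)
    return out

def left_right_check(left, right, max_d=3):
    dl = disparities(left, right, max_d, sign=-1)  # left x matches right x - d
    dr = disparities(right, left, max_d, sign=+1)
    # keep a disparity only if matching back from the right image agrees
    return [d if 0 <= x - d < len(right) and dr[x - d] == d else None
            for x, d in enumerate(dl)]

left = [0, 0, 5, 9, 5, 0, 0, 0]
right = [0, 5, 9, 5, 0, 0, 0, 0]  # left shifted one pixel: true disparity 1
filtered = left_right_check(left, right)
```

Pixels that fail the two-way check come back as `None`; in the paper's terms these are the points flagged as occluded or unreliable.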

Proceedings ArticleDOI
26 Oct 1997
TL;DR: An approach for still image watermarking is presented in which the watermark embedding process employs multiresolution fusion techniques and incorporates a model of the human visual system (HVS); the original unmarked image is required to extract the watermark.
Abstract: We present an approach for still image watermarking in which the watermark embedding process employs multiresolution fusion techniques and incorporates a model of the human visual system (HVS). The original unmarked image is required to extract the watermark. Simulation results demonstrate the high robustness of the algorithm to such image degradations as JPEG compression, additive noise and linear filtering.

Journal ArticleDOI
TL;DR: Two simple on-line schemes for force tracking within the impedance-control framework are presented, demonstrating that the adaptive schemes are able to compensate for uncertainties in both the environmental stiffness and location.
Abstract: This article presents two simple on-line schemes for force tracking within the impedance-control framework. The force-tracking capability of impedance control is particularly important for providing robustness in the presence of large uncertainties or variations in environmental parameters. The two proposed schemes generate the reference position trajectory required to produce a desired contact force despite lack of knowledge of the environmental stiffness and location. The first scheme uses direct adaptive control to generate the reference position on-line as a function of the force-tracking error. Alternatively, the second scheme utilizes an indirect adaptive strategy in which the environmental parameters are estimated on-line, and the required reference position is computed based on these estimates. In both schemes, adaptation allows automatic gain adjustment to provide a uniform performance despite variations in the environmental parameters. Simulation studies are presented for a 7-DOF Robotics R...
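The indirect adaptive scheme can be sketched as a tiny simulation: estimate the environment stiffness from the measured force, then place the reference position to realize the desired contact force. The constants, the simple first-order update law, and the assumption of an ideal inner position loop are all illustrative, not the article's:

```python
def indirect_force_tracking(f_des=10.0, k_true=500.0, x_env=0.02,
                            k_hat=100.0, steps=50, alpha=0.5):
    """Sketch of the second (indirect adaptive) scheme: estimate the
    environment stiffness from the measured force, then place the
    reference position to realize the desired contact force. The inner
    position loop is assumed ideal and all constants are illustrative."""
    for _ in range(steps):
        x_ref = x_env + f_des / k_hat               # reference from current estimate
        x = x_ref                                   # ideal tracking of the reference
        f = k_true * (x - x_env)                    # measured contact force
        k_hat += alpha * (f / (x - x_env) - k_hat)  # update stiffness estimate
    return f, k_hat

f, k_hat = indirect_force_tracking()
```

Even starting from a stiffness estimate that is off by a factor of five, the estimate converges and the contact force settles at the desired value, the "uniform performance despite variations" the abstract describes.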

Journal ArticleDOI
TL;DR: This paper introduces a second-order blind identification technique based on a linear prediction approach and shows that the linear prediction error method is "robust" to order overdetermination.
Abstract: Blind channel identification based on the oversampled channel output is a problem of current theoretical and practical interest. In this paper, we introduce a second-order blind identification technique based on a linear prediction approach. In contrast to eigenstructure-based methods, it will be shown that the linear prediction error method is "robust" to order overdetermination. An asymptotic performance analysis of the proposed estimation method is carried out; consistency and asymptotic normality of the estimates are established. A closed-form expression for the asymptotic covariance of the estimates is given. Numerical simulations and investigations are finally presented to demonstrate the potential and the "robustness" of the proposed method.

Patent
09 Sep 1997
TL;DR: In this paper, an external bio-layer is applied to an ultrasonic crystal to increase its bandwidth and its robustness, and a frequency sweep procedure is performed when the crystal is at site temperature and is transparent to the operator.
Abstract: An energy delivery system and method control the frequency of the power driving an ultrasonic device (24) to achieve more efficient power delivery. During operation of the ultrasonic device to deliver power to a patient site (16), the system and method automatically sweep the drive power through a frequency range, locate the series and parallel resonance frequencies, calculate the average of those frequencies and lock the power generator at that average frequency to drive the crystal. This frequency sweep procedure occurs automatically when the ultrasonic crystal is located at the patient site and the power generator operator presses the power-on switch to apply power. The method of tuning the power generator thus occurs when the crystal is at the site temperature and is transparent to the operator. The application of an external bio-layer to the crystal increases its bandwidth and its robustness. Mounting a temperature sensor (28) or sensors at the crystal permits monitoring of the crystal temperature and allows drive level control over the power generator to control the temperature at the crystal.
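The tuning rule in the patent reduces to a simple sweep: find the impedance minimum (series resonance) and maximum (parallel resonance), and lock the generator at their average. The toy impedance curve below is illustrative, not a crystal model from the patent:

```python
def tune_frequency(impedance, f_lo, f_hi, step):
    """Sketch of the patent's tuning rule: sweep the drive frequency, take
    the impedance minimum as series resonance and the maximum as parallel
    resonance, and lock the generator at their average."""
    n = int(round((f_hi - f_lo) / step))
    freqs = [f_lo + i * step for i in range(n + 1)]
    series = min(freqs, key=impedance)    # impedance minimum
    parallel = max(freqs, key=impedance)  # impedance maximum
    return (series + parallel) / 2.0

# toy impedance curve with resonances near 55 kHz (series) and 57 kHz (parallel)
z = lambda f: abs(f - 55e3) / (abs(f - 57e3) + 1.0) + 1e-9 * f
f_drive = tune_frequency(z, 50e3, 60e3, 10.0)
```

Because the sweep runs at the patient site, the chosen frequency already reflects the crystal's behavior at site temperature, which is why the procedure can stay transparent to the operator.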

Proceedings ArticleDOI
17 Jun 1997
TL;DR: Visual processes detect and track faces for video compression and transmission; the system is based on an architecture in which a supervisor selects and activates visual processes in a cyclic manner, and provides robust and precise tracking.
Abstract: This paper describes visual processes to detect and track faces for video compression and transmission. The system is based on an architecture in which a supervisor selects and activates visual processes in a cyclic manner. Control of visual processes is made possible by a confidence factor which accompanies each observation. Fusion of results into a unified estimation for tracking is made possible by estimating a covariance matrix with each observation. Visual processes for face tracking are described using blink detection, normalised color histogram matching, and cross correlation (SSD and NCC). Ensembles of visual processes are organised into processing states so as to provide robust tracking. Transitions between states are determined by events detected by processes. The result of face detection is fed into a recursive estimator (Kalman filter). The output from the estimator drives a PD controller for a pan/tilt/zoom camera. The resulting system provides robust and precise tracking which operates continuously at approximately 20 images per second on a 150 megahertz computer workstation.
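The recursive estimator that feeds the pan/tilt controller can be sketched as a scalar Kalman filter with a random-walk state model. The system's actual state vector and covariances are richer; the noise constants and measurement sequence below are illustrative:

```python
def kalman_1d(zs, q=1e-3, r=0.25):
    """Minimal scalar Kalman filter standing in for the recursive estimator
    that feeds the pan/tilt controller (random-walk state model; the
    system's actual state and covariances are richer)."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p += q            # predict: the state may have drifted
        k = p / (p + r)   # Kalman gain
        x += k * (z - x)  # correct with the new measurement
        p *= 1.0 - k
        out.append(x)
    return out

# a face-position track with one spurious detection (the 4.0)
smoothed = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0, 4.0, 1.0, 1.1])
```

The filter damps the spurious detection rather than slewing the camera toward it; in the full system, the per-observation covariance supplied by each visual process would set `r` adaptively.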

Proceedings ArticleDOI
Bin Yao1
10 Dec 1997
TL;DR: In this article, a general framework is proposed for the design of high-performance robust controllers based on the recently proposed adaptive robust control (ARC); robust filter structures attenuate the effect of model uncertainties as much as possible, while learning mechanisms such as parameter adaptation reduce the model uncertainties.
Abstract: A general framework is proposed for the design of a new class of high-performance robust controllers based on the recently proposed adaptive robust control (ARC). Robust filter structures are used to attenuate the effect of model uncertainties as much as possible while learning mechanisms such as parameter adaptation are used to reduce the model uncertainties. Under the proposed general framework, a simple new ARC controller is also constructed for a class of nonlinear systems transformable to a semi-strict feedback form. The new design utilizes the popular discontinuous projection method in solving the conflicts between the deterministic robust control design and the adaptive control design. The controller achieves a guaranteed transient performance and a prescribed final tracking accuracy in the presence of both parametric uncertainties and uncertain nonlinearities while achieving asymptotic stability in the presence of parametric uncertainties without using a discontinuous control law or infinite-gain feedback.

Book ChapterDOI
01 Jan 1997
TL;DR: The efficiency and ease of application of the proposed method are demonstrated by solving four mechanical component design problems borrowed from the optimization literature and the solutions obtained are better than those obtained with the traditional methods.
Abstract: A robust optimal design algorithm for solving nonlinear engineering design optimization problems is presented. The algorithm works according to the principles of genetic algorithms (GAs). Since most engineering problems involve mixed variables (zero-one, discrete, continuous), a combination of binary GAs and real-coded GAs is used to allow a natural way of handling these mixed variables. The combined approach is called GeneAS to abbreviate Genetic Adaptive Search. The robustness and flexibility of the algorithm come from its restricted search to the permissible values of the variables. This also makes the search efficient by requiring a reduced search effort in converging to the optimum solution. The efficiency and ease of application of the proposed method are demonstrated by solving four mechanical component design problems borrowed from the optimization literature. The proposed technique is compared with traditional optimization methods. In all cases, the solutions obtained using GeneAS are better than those obtained with the traditional methods. These results show how GeneAS can be effectively used in other mechanical component design problems.
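The mixed-variable handling at the core of GeneAS can be sketched as a single crossover operator: binary genes use uniform crossover, continuous genes blend, and discrete genes are snapped back to their permissible values, which is the "restricted search" the abstract credits for efficiency. The operators below are illustrative, not the paper's exact ones:

```python
import random

def mixed_crossover(p1, p2, spec, rng):
    """Toy mixed-variable crossover in the spirit of GeneAS: binary genes
    use uniform crossover, continuous genes blend, and discrete genes are
    snapped back to their permissible values."""
    child = []
    for a, b, kind in zip(p1, p2, spec):
        if kind == "binary":
            child.append(a if rng.random() < 0.5 else b)
        elif kind == "continuous":
            w = rng.random()
            child.append(w * a + (1.0 - w) * b)
        else:  # kind is the list of permissible discrete values
            x = (a + b) / 2.0
            child.append(min(kind, key=lambda v: abs(v - x)))
    return child

# one binary gene, one continuous gene, one discrete gene with allowed values
spec = ["binary", "continuous", [1.0, 1.5, 2.0, 3.0]]
child = mixed_crossover([1, 0.2, 1.5], [0, 0.8, 3.0], spec, random.Random(3))
```

Snapping the discrete gene to its nearest permissible value keeps every offspring feasible by construction, so the search never wastes effort on impermissible designs.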

Journal ArticleDOI
TL;DR: In this article, a probabilistic approach for robustness analysis of control systems affected by bounded uncertainty is presented, with a focus on the number of samples required to estimate the probability that a certain level of robustness is attained.
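The sample-complexity question has a standard answer via the additive Hoeffding/Chernoff bound, which may differ from the paper's exact estimate but shows the flavor: the required number of samples depends only on the accuracy and confidence, not on the number of uncertain parameters. The first-order example system is illustrative:

```python
import math
import random

def sample_size(eps, delta):
    """Additive Hoeffding/Chernoff bound: this many i.i.d. samples put the
    empirical robustness probability within eps of the true one with
    confidence 1 - delta (a standard bound, not necessarily the paper's)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

N = sample_size(eps=0.01, delta=0.01)
# toy robustness test: 1/(s + 1 + q) is stable iff q > -1, with q ~ U(-2, 2)
rng = random.Random(0)
stable = sum(rng.uniform(-2.0, 2.0) > -1.0 for _ in range(N)) / N
```

Here the true probability of stability is 0.75, and the Monte Carlo estimate over `N` samples lands within the prescribed accuracy, regardless of how high-dimensional the uncertainty might be.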

Journal ArticleDOI
01 Jun 1997
TL;DR: It is shown that by installing a number of redundant sensors on the Stewart platform, the system is able to perform self-calibration and the approach provides a tool for rapid and autonomous calibration of the parallel mechanism.
Abstract: Self-calibration has the potential of: 1) removing the dependence on any external pose sensing information; 2) producing high accuracy measurement data over the entire workspace of the system with an extremely fast measurement rate; 3) being automated and completely noninvasive; 4) facilitating on-line accuracy compensation; and 5) being cost effective. A general framework is introduced in this paper for the self-calibration of parallel manipulators. The concept of creating forward and inverse measurement residuals by exploring conflicting information provided with redundant sensing is proposed. Some of these ideas have been widely used for robot calibration when robot end-effector poses are available. By this treatment, many existing kinematic parameter estimation techniques can be applied for the self-calibration of parallel mechanisms. It is illustrated through a case study, i.e. calibration of the Stewart platform, that with this framework the design of a suitable self-calibration system and the formulation of the relevant mathematical model become more systematic. A few principles important to the system self-calibration are also demonstrated through the case study. It is shown that by installing a number of redundant sensors on the Stewart platform, the system is able to perform self-calibration. The approach provides a tool for rapid and autonomous calibration of the parallel mechanism.

Journal ArticleDOI
TL;DR: A notion of model uncertainty is pursued that is based on the closeness of input-output trajectories and is not tied to a particular uncertainty representation, such as additive, parametric, or structured.
Abstract: This paper presents an approach to robustness analysis for nonlinear feedback systems. We pursue a notion of model uncertainty based on the closeness of input-output trajectories which is not tied to a particular uncertainty representation, such as additive, parametric, structured, etc. The basic viewpoint is to regard systems as operators on signal spaces. We present two versions of a global theory where stability is captured by induced norms or by gain functions. We also develop local approaches (over bounded signal sets) and give a treatment for systems with potential for finite-time escape. We compute the relevant stability margin for several examples and demonstrate robustness of stability for some specific perturbations, e.g., small-time delays. We also present examples of nonlinear control systems which have zero robustness margin and are destabilized by arbitrarily small gap perturbations. The paper considers the case where uncertainty is present in the controller as well as the plant and the generalization of the approach to the case where uncertainty occurs in several subsystems in an arbitrary interconnection.