
Showing papers on "Robustness (computer science)" published in 1998


Book
01 Jan 1998
TL;DR: In this article, a fault detection and diagnosis framework for discrete linear systems is presented, covering residual generation via parity equations, design for structured and directional residuals, robustness in residual generation, and the diagnosis of additive and multiplicative faults by parameter estimation.
Abstract: Contents: introduction to fault detection and diagnosis; discrete linear systems; random variables; parameter estimation fundamentals; analytical redundancy concepts; parity equation implementation of residual generators; design for structured residuals; design for directional residuals; residual generation for parametric faults; robustness in residual generation; statistical testing of residuals; model identification for the diagnosis of additive faults; diagnosing multiplicative faults by parameter estimation.
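
A minimal sketch of the analytical-redundancy idea the book develops (our toy construction: the system matrices, input, and fault are invented for illustration). A model is run in parallel with the plant, and the residual (the gap between measured and predicted output) exposes an additive sensor fault:

```python
import numpy as np

# Discrete linear plant x(k+1) = A x(k) + B u(k), y(k) = C x(k).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

x = np.zeros(2)
x_hat = np.zeros(2)              # model run in parallel with the plant
residuals = []
for k in range(100):
    u = np.array([np.sin(0.1 * k)])
    x = A @ x + B @ u
    y = C @ x + (0.5 if k >= 60 else 0.0)     # additive sensor fault at k = 60
    residuals.append((y - C @ x_hat).item())  # residual = measured - predicted
    x_hat = A @ x_hat + B @ u

# A crude statistical test: alarm when the residual leaves a tolerance band.
print([k for k, r in enumerate(residuals) if abs(r) > 0.1][:3])
```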

2,188 citations


Journal ArticleDOI
TL;DR: In this paper, a differentiator was proposed whose maximal differentiation error is of the order of the square root of the product of the maximal deviation of the measured input signal from the base signal and the Lipschitz constant of the base signal's derivative.
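
In symbols, the claim can be reconstructed as follows (our notation: ε is the maximal deviation of the measured signal from the base signal, C the Lipschitz constant of the base signal's derivative):

```latex
% If f(t) = f_0(t) + n(t) with |n(t)| <= \varepsilon and \dot f_0
% Lipschitz with constant C, the worst-case differentiation error obeys
\[
  \sup_t \bigl|\hat{\dot f}(t) - \dot f_0(t)\bigr|
  \;=\; O\!\left(\sqrt{C\,\varepsilon}\right).
\]
```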

1,958 citations


Reference BookDOI
01 Sep 1998
TL;DR: This graduate text provides an authoritative account of neural network (NN) controllers for robotics and nonlinear systems and gives the first textbook treatment of a general and streamlined design procedure for NN controllers.
Abstract: From the Publisher: This graduate text provides an authoritative account of neural network (NN) controllers for robotics and nonlinear systems and gives the first textbook treatment of a general and streamlined design procedure for NN controllers. Stability proofs and performance guarantees are provided which illustrate the superior efficiency of the NN controllers over other design techniques when the system is unknown. New NN properties, such as robustness and passivity, are introduced, and new weight tuning algorithms are presented. Neural Network Control of Robot Manipulators and Nonlinear Systems provides a welcome introduction for graduate students and an invaluable reference for professional engineers and researchers in control systems.

1,337 citations


Journal ArticleDOI
TL;DR: Two alternative design techniques for constructing gain-scheduled controllers for uncertain linear parameter-varying systems are discussed and are amenable to linear matrix inequality problems via a gridding of the parameter space and a selection of basis functions.
Abstract: This paper is concerned with the design of gain-scheduled controllers for uncertain linear parameter-varying systems. Two alternative design techniques for constructing such controllers are discussed. Both techniques are amenable to linear matrix inequality problems via a gridding of the parameter space and a selection of basis functions. These problems are then readily solvable using available tools in convex semidefinite programming. When used together, these techniques provide complementary advantages of reduced computational burden and ease of controller implementation. The problem of synthesis for robust performance is then addressed by a new scaling approach for gain-scheduled control. The validity of the theoretical results is demonstrated through a two-link flexible manipulator design example. This is a challenging problem that requires scheduling of the controller in the manipulator geometry and robustness in the face of uncertainty in the high-frequency range.
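
Schematically, gridding turns a parameter-dependent condition into finitely many LMIs. A stability-only simplification (our illustration; the paper's synthesis conditions are more involved) with basis functions f_j and decision matrices P_j reads:

```latex
% Enforce a parameter-dependent Lyapunov inequality only at grid points:
\[
  P(\theta) = \sum_{j} f_j(\theta)\, P_j \succ 0, \qquad
  A(\theta)^{\top} P(\theta) + P(\theta) A(\theta) \prec 0,
  \quad \theta \in \{\theta_1, \dots, \theta_N\}.
\]
```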

887 citations


Book ChapterDOI
02 Jun 1998
TL;DR: A new technique to combine low- and high-level information in a consistent probabilistic framework is presented, using the statistical technique of importance sampling combined with the Condensation algorithm, and a hand tracker is demonstrated which combines colour blob-tracking with a contour model.
Abstract: Tracking research has diverged into two camps: low-level approaches, which are typically fast and robust but provide little fine-scale information, and high-level approaches, which track complex deformations in high-dimensional spaces but must trade off speed against robustness. Real-time high-level systems perform poorly in clutter, and initialisation for most high-level systems is either performed manually or by a separate module. This paper presents a new technique to combine low- and high-level information in a consistent probabilistic framework, using the statistical technique of importance sampling combined with the Condensation algorithm. The general framework, which we term Icondensation, is described, and a hand tracker is demonstrated which combines colour blob-tracking with a contour model. The resulting tracker is robust to rapid motion, heavy clutter and hand-coloured distractors, and re-initialises automatically. The system runs comfortably in real time on an entry-level desktop workstation.
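
A minimal 1-D sketch of the importance-sampling idea (our toy construction, not the paper's tracker): part of the particle set is drawn from a low-level detector density g(x) rather than the dynamical prior p(x), and those particles carry a corrective factor p(x)/g(x) so the weighted set still targets the posterior:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_imp = 500, 250               # half the particles use importance sampling
prior_mu, prior_sd = 0.0, 2.0     # dynamical prediction p(x)
detect_mu, detect_sd = 1.5, 0.5   # low-level blob detector g(x)

def gauss(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

x = np.concatenate([rng.normal(detect_mu, detect_sd, n_imp),     # from g(x)
                    rng.normal(prior_mu, prior_sd, N - n_imp)])  # from p(x)

corr = np.ones(N)                 # corrective factor p(x)/g(x)
corr[:n_imp] = (gauss(x[:n_imp], prior_mu, prior_sd)
                / gauss(x[:n_imp], detect_mu, detect_sd))

w = corr * gauss(x, 1.2, 0.3)     # reweight by the observation model
w /= w.sum()
print(float(np.sum(w * x)))       # posterior mean estimate (~1.2)
```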

675 citations


Journal ArticleDOI
TL;DR: The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study, as mentioned in this paper; characteristics of such studies can serve as explanatory variables in a meta-analysis concerning the behavior of parameter estimators, standard error estimators, and goodness-of-fit statistics.
Abstract: In covariance structure modeling, several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the conclusions frequently seem to contradict each other. An overview of robustness studies in covariance structure analysis is given, and an attempt is made to generalize findings. Robustness studies are described and distinguished from each other systematically by means of certain characteristics. These characteristics serve as explanatory variables in a meta-analysis concerning the behavior of parameter estimators, standard error estimators, and goodness-of-fit statistics when the model is correctly specified.

668 citations


Journal ArticleDOI
TL;DR: A new definition describing the quasi-sliding mode as a motion of the system, such that its state always remains in a certain band around the sliding hyperplane, is introduced and two novel reaching laws satisfying conditions of the definition are proposed and applied to the design of appropriate linear control strategies.
Abstract: In this paper, discrete-time quasi-sliding-mode control systems are considered. A new definition describing the quasi-sliding mode as a motion of the system, such that its state always remains in a certain band around the sliding hyperplane, is introduced. Then, two novel reaching laws satisfying conditions of the definition are proposed and applied to the design of appropriate linear control strategies which drive the state of the controlled system to a band around the sliding hyperplane. Consequently, the undesirable chattering and high-frequency switching between different values of the control signal are avoided. The strategies, when compared with previously published results, guarantee better robustness, faster error convergence, and improved steady-state accuracy of the system. Furthermore, better performance of the system is achieved using essentially reduced control effort.
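
For context, a sketch of the classical Gao-style discrete reaching law that this line of work builds on (the paper's two novel reaching laws are not reproduced here; parameters are invented):

```python
import numpy as np

# s(k+1) = (1 - q*T) s(k) - eps*T*sign(s(k)), with 0 < q*T < 1:
# the state is driven toward the sliding hyperplane and then confined
# to a quasi-sliding band of width on the order of eps*T around it.
q, eps, T = 5.0, 0.2, 0.01
s, history = 1.0, []
for k in range(2000):
    s = (1 - q * T) * s - eps * T * np.sign(s)
    history.append(s)
print(max(abs(v) for v in history[-100:]))   # stays within the band
```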

560 citations


Journal ArticleDOI
TL;DR: A fundamental open problem in computer vision, determining pose and correspondence between two sets of points in space, is solved with a novel, fast, robust and easily implementable algorithm using a combination of optimization techniques.

532 citations


Book ChapterDOI
01 Jan 1998
TL;DR: This chapter gives a summary of the reasons for using multilevel models and provides examples of why these reasons are indeed valid; recent (simulation) research on the robustness and power of the usual estimation procedures with varying sample sizes is also reviewed.
Abstract: Multilevel models have become popular for the analysis of a variety of problems. This chapter gives a summary of the reasons for using multilevel models, and provides examples of why these reasons are indeed valid. Next, recent (simulation) research on the robustness and power of the usual estimation procedures with varying sample sizes is reviewed.

472 citations


Journal ArticleDOI
TL;DR: The effect of spreading the noise power while localizing the source energy in the t-f domain is increased robustness of the proposed approach with respect to noise and, hence, improved performance.
Abstract: Blind source separation consists of recovering a set of signals of which only instantaneous linear mixtures are observed. Thus far, this problem has been solved using statistical information available on the source signals. This paper introduces a new blind source separation approach exploiting the difference in the time-frequency (t-f) signatures of the sources to be separated. The approach is based on the diagonalization of a combined set of "spatial t-f distributions". In contrast to existing techniques, the proposed approach allows the separation of Gaussian sources with identical spectral shape but with different t-f localization properties. Spreading the noise power while localizing the source energy in the t-f domain increases the robustness of the proposed approach with respect to noise and, hence, improves performance. Asymptotic performance analysis and numerical simulations are provided.
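
The structure exploited can be written compactly (our notation): under instantaneous mixing, the spatial time-frequency distributions (STFDs) of the observations inherit the mixing matrix on both sides, and are (approximately) diagonal at source auto-term points:

```latex
\[
  \mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t)
  \;\Longrightarrow\;
  \mathbf{D}_{xx}(t,f) = \mathbf{A}\,\mathbf{D}_{ss}(t,f)\,\mathbf{A}^{H},
\]
% D_ss(t,f) is (nearly) diagonal at auto-term (t,f) points, so jointly
% diagonalizing several whitened D_xx matrices recovers the mixing structure.
```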

450 citations


Journal ArticleDOI
Olli Viikki, Kari Laurila
TL;DR: A segmental feature vector normalization technique is proposed which makes an automatic speech recognition system more robust to environmental changes by normalizing the output of the signal-processing front-end to have similar segmental parameter statistics in all noise conditions.
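
A minimal sketch of segmental mean-and-variance normalization (illustrative; the paper's estimator details may differ). Each feature dimension is normalized over a trailing segment so the front-end output has similar statistics in all noise conditions:

```python
import numpy as np

def segmental_mvn(features, seg_len=100):
    out = np.empty_like(features)
    for t in range(len(features)):
        seg = features[max(0, t - seg_len + 1):t + 1]   # trailing segment
        mu, sd = seg.mean(axis=0), seg.std(axis=0) + 1e-8
        out[t] = (features[t] - mu) / sd
    return out

# Fake cepstral features with a nonzero mean, as a stand-in for noisy MFCCs.
frames = np.random.default_rng(2).normal(3.0, 2.0, size=(400, 13))
print(segmental_mvn(frames)[-50:].mean().round(3))      # ~0 after normalization
```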

Proceedings ArticleDOI
04 Jan 1998
TL;DR: A significant development of random sampling methods to allow automatic switching between multiple motion models as a natural extension of the tracking process is presented.
Abstract: There is considerable interest in the computer vision community in representing and modelling motion. Motion models are used as predictors to increase the robustness and accuracy of visual trackers, and as classifiers for gesture recognition. This paper presents a significant development of random sampling methods to allow automatic switching between multiple motion models as a natural extension of the tracking process. The Bayesian mixed-state framework is described in its generality, and the example of a bouncing ball is used to demonstrate that a mixed-state model can significantly improve tracking performance in heavy clutter. The relevance of the approach to the problem of gesture recognition is then investigated using a tracker which is able to follow the natural drawing action of a hand holding a pen, and switches state according to the hand's motion.
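
A sketch of the mixed-state idea (our toy models, not the paper's): each particle carries a continuous state plus a discrete motion-model label that switches according to a Markov transition matrix before the matching dynamics are applied:

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.95, 0.05],     # model 0: constant velocity
              [0.10, 0.90]])    # model 1: bounce (velocity reversal)
N = 300
pos, vel = rng.normal(0, 1, N), rng.normal(0, 1, N)
model = rng.integers(0, 2, N)

def predict(pos, vel, model):
    model = np.array([rng.choice(2, p=P[m]) for m in model])  # switch first
    vel = np.where(model == 1, -vel, vel)       # model 1 reverses velocity
    pos = pos + vel + rng.normal(0, 0.05, N)    # then propagate position
    return pos, vel, model

pos, vel, model = predict(pos, vel, model)
print(np.bincount(model, minlength=2) / N)      # current mixture of models
```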

Journal ArticleDOI
TL;DR: In this article, a method is developed to successively find the multiple design points of a component reliability problem, when they exist on the limit-state surface of the SORM or Formula-SORM approximations at each design point.

Journal ArticleDOI
TL;DR: X Vision as discussed by the authors is a programming environment for real-time vision which provides high performance on standard workstations outfitted with a simple digitizer and consists of a small set of image-level tracking primitives, and a framework for combining them to form complex tracking systems.

Journal ArticleDOI
12 Oct 1998
TL;DR: In this paper, the authors analyzed the stability limitations of digital dead-beat current control applied to voltage-source three-phase converters used as pulsewidth modulation rectifiers and/or active filters.
Abstract: This paper analyzes the stability limitations of digital dead-beat current control applied to voltage-source three-phase converters used as pulsewidth modulation rectifiers and/or active filters. In these applications, the conventional control algorithm, as used in drive applications, is not sufficiently robust and stability problems may arise for the current control loop. The current loop is, indeed, particularly sensitive to any model mismatch and to the possibly incorrect identification of the model parameters. A detailed analysis of the stability limitations of the commonly adopted dead-beat algorithm, based on a discrete-time state-space model of the controlled system, is presented. A modified line voltage estimation technique is proposed, which increases the control's robustness to parameter mismatches. The results of the theoretical analysis and the validity of the proposed modification to the control strategy are finally verified both by simulations and by experimental tests.
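
A much-simplified single-phase sketch of the dead-beat principle (our toy model and parameters; the paper treats a three-phase converter and adds a line-voltage estimator, not shown). The control law inverts the discretized L-R dynamics so the current reaches its reference in one sampling step when parameters match:

```python
L, R, T = 5e-3, 0.1, 1e-4       # inductance, resistance, sampling period
i, e = 0.0, 10.0                # current and (assumed known) line voltage

def deadbeat_u(i, i_ref_next):
    # Invert i(k+1) = i(k) + (T/L)*(u(k) - e - R*i(k)) for u(k).
    return e + R * i + (L / T) * (i_ref_next - i)

for k in range(4):
    u = deadbeat_u(i, i_ref_next=2.0)
    i = i + (T / L) * (u - e - R * i)   # plant with matched parameters
    print(round(i, 6))                  # hits 2.0 in one step, then holds
```

A model mismatch (e.g., a wrong L) breaks the one-step property, which is exactly the sensitivity the paper analyzes.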

Journal ArticleDOI
TL;DR: This work shows how the complexity of computing the R-D data can be reduced without significantly reducing the performance of the optimization procedure, and proposes two methods which provide successive reductions in complexity.
Abstract: Digital video's increased popularity has been driven to a large extent by a flurry of international standards (MPEG-1, MPEG-2, H.263, etc.). In most standards, the rate control scheme, which plays an important role in improving and stabilizing the decoding and playback quality, is not defined, and thus different strategies can be implemented in each encoder design. Several rate-distortion (R-D)-based techniques have been proposed aimed at the best possible quality for a given channel rate and buffer size. These approaches are complex because they require the R-D characteristics of the input data to be measured before making quantization assignment decisions. We show how the complexity of computing the R-D data can be reduced without significantly reducing the performance of the optimization procedure. We propose two methods which provide successive reductions in complexity by: (1) using models to interpolate the rate and distortion characteristics, and (2) using past frames instead of current ones to determine the models. Our first method is applicable to situations (e.g., broadcast video) where a long encoding delay is possible, while our second approach is more useful for computation-constrained interactive video applications. The first method can also be used to benchmark other approaches. Both methods can achieve over 1 dB peak signal-to-noise ratio (PSNR) gain over simple methods like the MPEG Test Model 5 (TM5) rate control, with even greater gains during scene change transitions. In addition, both methods make few a priori assumptions and provide robustness in their performance over a range of video sources and encoding rates. In terms of complexity, our first algorithm roughly doubles the encoding time as compared to simpler techniques (such as TM5). However, the complexity is greatly reduced as compared to methods which exactly measure the R-D data. Our second algorithm has a complexity marginally higher than TM5 and a PSNR performance slightly lower than that of the first approach.
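
A sketch of idea (1), interpolating a frame's rate-quantizer curve from a few trial points with a simple model instead of measuring every quantizer (the hyperbolic model form is a common choice in the R-D literature, used here for illustration; the paper's exact models may differ):

```python
import numpy as np

measured_Q = np.array([4.0, 16.0])      # two trial quantizer values
measured_R = np.array([120.0, 40.0])    # measured rates (kbit), invented

# Fit R(Q) = a/Q + b through the two measurements.
M = np.stack([1.0 / measured_Q, np.ones(2)], axis=1)
a, b = np.linalg.solve(M, measured_R)

for Q in (2, 8, 31):                    # predict rates at untried quantizers
    print(Q, round(a / Q + b, 1))
```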

Journal ArticleDOI
TL;DR: The proposed approach is based on the interacting multiple-model (IMM) estimation algorithm, which is one of the most cost-effective adaptive estimation techniques for systems involving structural as well as parametric changes.
Abstract: An approach to detection and diagnosis of multiple failures in a dynamic system is proposed. It is based on the interacting multiple-model (IMM) estimation algorithm, which is one of the most cost-effective adaptive estimation techniques for systems involving structural as well as parametric changes. The proposed approach provides an integrated framework for fault detection, diagnosis, and state estimation. It is able to detect and isolate multiple faults substantially more quickly and more reliably than many existing approaches. Its superiority is illustrated in two aircraft examples for single and double faults of both sensors and actuators, in the forms of "total", "partial", and simultaneous failures. Both deterministic and random fault scenarios are designed and used for testing and comparing the performance fairly. Some new performance indices are presented. The robustness of the proposed approach to the design of model transition probabilities, fault modeling errors, and the uncertainties of noise statistics is also evaluated.
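
A sketch of the IMM mixing step (the standard algorithm, reduced to scalar states and two modes; numbers are invented): mode probabilities are propagated through the Markov transition matrix and used to blend the mode-matched estimates before each filter's time update:

```python
import numpy as np

PI = np.array([[0.98, 0.02],    # transition probabilities between a
               [0.02, 0.98]])   # no-fault model and a fault model
mu = np.array([0.9, 0.1])       # current mode probabilities
x_mode = np.array([1.0, 3.0])   # mode-matched state estimates

c = PI.T @ mu                             # predicted mode probabilities
mix = (PI * mu[:, None]) / c[None, :]     # mixing weights mu_{i|j}
x0 = mix.T @ x_mode                       # mixed initial condition per mode
print(c, x0)
```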

Journal ArticleDOI
TL;DR: In this article, a genetic algorithm is used to search a design coefficient space; Monte Carlo evaluation at each search point estimates stability and performance robustness, and the robustness of a compensator is indicated by the probability that the closed-loop system will fall within allowable bounds.
Abstract: Robuste ightcontrolsystemsaresynthesizedforthelongitudinalmotionofahypersonicaircraft.Aircraftmotion is modeled by nonlinear longitudinal dynamic equations containing 28 uncertain parameters. Each controller is designed using a genetic algorithm to search a design coefe cient space; Monte Carlo evaluation at each search point estimates stability and performance robustness. Robustness of a compensator is indicated by the probability that stability and performance of the closed-loop system will fall within allowable bounds, given likely parameter variations. A stochastic cost function containing engineering design criteria (in this case, a stability metric plus 38 step-response metrics )is minimized, producing feasible control system coefe cient sets for specie ed control system structures. This approach trades the likelihood of satisfying design goals against each other, and it identie es the plant parameter uncertainties that are most likely to compromise robustness goals. The approach makes efe cient useofcomputationaltoolsandbroadlyacceptedengineeringknowledgetoproducepracticalcontrolsystemdesigns.

Journal ArticleDOI
TL;DR: In this article, the robust stability of uncertain linear stochastic differential delay equations was discussed, and the results were extended to cope with the robustness of uncertain semi-linear stochastic differential delay equations.

Proceedings ArticleDOI
04 Jan 1998
TL;DR: It is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers, and applies to a wide variety of globally complex brightness transformations between the two images.
Abstract: This paper presents a method for alignment of images acquired by sensors of different modalities (e.g., EO and IR). The paper has two main contributions: (i) It identifies an appropriate image representation, for multi-sensor alignment, i.e., a representation which emphasizes the common information between the two multi-sensor images, suppresses the non-common information, and is adequate for coarse-to-fine processing. (ii) It presents a new alignment technique which applies global estimation to any choice of a local similarity measure. In particular, it is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers, and applies to a wide variety of globally complex brightness transformations between the two images. Our proposed image representation does not rely on sparse image features (e.g., edge, contour, or point features). It is continuous and does not eliminate the detailed variations within local image regions. Our method naturally extends to coarse-to-fine processing, and applies even in situations when the multi-sensor signals are globally characterized by low statistical correlation.
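
A sketch of the local similarity measure (illustrative; the paper embeds it in a global coarse-to-fine parametric alignment, not shown here). Normalized correlation is invariant to local gain and offset changes, which is what makes it suitable for multi-sensor pairs:

```python
import numpy as np

def ncc(p, q, eps=1e-8):
    p = (p - p.mean()) / (p.std() + eps)
    q = (q - q.mean()) / (q.std() + eps)
    return float((p * q).mean())

patch = np.random.default_rng(5).normal(size=(15, 15))
print(ncc(patch, 2.5 * patch + 7.0))    # ~1.0 despite the brightness change
```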

Journal ArticleDOI
TL;DR: It is illustrated that the robustness of conclusions drawn from SVAR exercises is questionable, and the problem of identification failure in structural VAR models is examined.


Journal Article
TL;DR: In this paper, the robustness of cross-section and panel data regressions to measurement error was evaluated using the augmented Solow growth model, and it was shown that estimated technology parameters and convergence rates are highly sensitive to measurement errors.

Journal ArticleDOI
Shunji Manabe
TL;DR: In this article, a controller design method, called Coefficient Diagram Method (CDM), is introduced, which can design the controller and the characteristic polynomial of the closed-loop system simultaneously taking a good balance of stability, response, and robustness.

Journal ArticleDOI
TL;DR: It is shown that the well-posedness conditions provided can be used to give a less conservative, yet computable bound on the real structured singular value, as illustrated by numerical examples.
Abstract: This paper establishes a framework for robust stability analysis of linear time-invariant uncertain systems. The uncertainty is assumed to belong to an arbitrary subset of complex matrices. The concept used here is well-posedness of feedback systems, leading to necessary and sufficient conditions for robust stability. Based on this concept, some insights into exact robust stability conditions are given. In particular, frequency domain and state-space conditions for well-posedness are provided in terms of Hermitian-form inequalities. It is shown that these inequalities can be interpreted as small-gain conditions with a generalized class of scalings given by linear fractional transformations (LFT). Using the LFT-scaled small-gain condition in the state-space setting, the "duality" is established between the H∞ norm condition with frequency-dependent scalings and the parameter-dependent Lyapunov condition. Connections to the existing results, including the structured singular value and the integral quadratic constraints, are also discussed. Finally, we show that our well-posedness conditions can be used to give a less conservative, yet computable bound on the real structured singular value. This result is illustrated by numerical examples.
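
For context, the classical constant-scaling small-gain condition that the LFT scalings generalize can be stated as follows (our simplified formulation, for an M-Δ feedback loop and a scaling set D commuting with the uncertainty set):

```latex
\[
  \exists\, D \in \mathcal{D}:\; \bigl\| D M D^{-1} \bigr\| < 1
  \;\Longrightarrow\;
  \det\!\bigl(I - M\Delta\bigr) \neq 0
  \quad \forall\, \Delta \in \boldsymbol{\Delta},
\]
% i.e., scaled small gain implies well-posedness (hence robust stability)
% for every admissible uncertainty.
```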

Journal ArticleDOI
TL;DR: In this paper, an efficient and reliable evolutionary-programming-based algorithm for solving the environmentally constrained economic dispatch (ECED) problem was developed, which can deal with load demand specifications in multiple intervals of the generation scheduling horizon.
Abstract: This paper develops an efficient and reliable evolutionary-programming-based algorithm for solving the environmentally constrained economic dispatch (ECED) problem. The algorithm can deal with load demand specifications in multiple intervals of the generation scheduling horizon. In the paper, the principal components of the evolutionary-programming-based ECED algorithm are presented. Solution acceleration techniques in the algorithm which enhance the speed and robustness of the algorithm are developed. The power and usefulness of the algorithm are demonstrated through its application to a test system.

Proceedings ArticleDOI
23 Jun 1998
TL;DR: The Ballista methodology for scalable, portable, automated robustness testing of component interfaces is described; an object-oriented approach based on parameter data types rather than component functionality essentially eliminates the need for function-specific test scaffolding.
Abstract: Mission-critical system designers may have to use a commercial off-the-shelf (COTS) approach to reduce costs and shorten development time, even though COTS software components may not specifically be designed for robust operation. Automated testing can assess component robustness without sacrificing the advantages of a COTS approach. This paper describes the Ballista methodology for scalable, portable, automated robustness testing of component interfaces. An object-oriented approach based on parameter data types rather than component functionality essentially eliminates the need for function-specific test scaffolding. A full-scale implementation that automatically tests the robustness of 233 operating system software components has been ported to ten POSIX systems. Between 42% and 63% of components tested had robustness problems, with a normalized failure rate ranging from 10% to 23% of tests conducted. Robustness testing could be used by developers to measure and improve robustness, or by consumers to compare the robustness of competing COTS component libraries.
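
A toy harness in the spirit of the data-type-driven approach (our illustration, not Ballista itself): test inputs come from per-parameter value pools keyed to data types rather than to the function under test, so no function-specific scaffolding is needed:

```python
import itertools, os

PATH_POOL = ["/", "", "\x00", "/nonexistent", "a" * 4096]   # "path" type pool
FLAG_POOL = [0, 1, -1, 2**31 - 1]                           # "int flags" pool

def robustness_sweep(fn, pools):
    failures = 0
    for args in itertools.product(*pools):
        try:
            result = fn(*args)
            if isinstance(result, int) and result > 2:
                os.close(result)        # tidy up fds from successful calls
        except (OSError, ValueError):
            pass                        # clean error report: acceptable
        except Exception:
            failures += 1               # crash/abort behavior would count here
    return failures

print(robustness_sweep(os.open, [PATH_POOL, FLAG_POOL]))
```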

Book ChapterDOI
27 Sep 1998
TL;DR: For real-world problems it is often not sufficient to find solutions of high quality; the solutions should also be robust, meaning that their quality does not falter completely when a slight change of the environment occurs.
Abstract: For real-world problems it is often not sufficient to find solutions of high quality; the solutions should also be robust. By robust we mean that the quality of the solution does not falter completely when a slight change of the environment occurs, or that certain deviations from the solution should be tolerated without a total loss of quality.
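
A common formalization of this requirement (our illustration): the algorithm should optimize the effective fitness E[f(x + δ)] under random disturbances δ, which favors broad optima over sharp, fragile ones:

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):
    # A sharp peak at x = 0 and a broad peak near x = 3 (toy landscape).
    return 1.2 * np.exp(-(x / 0.05) ** 2) + np.exp(-((x - 3.0) / 1.0) ** 2)

def f_effective(x, sigma=0.2, n=2000):
    return float(np.mean(f(x + rng.normal(0, sigma, n))))   # E[f(x + delta)]

print(round(f(0.0), 3), round(f_effective(0.0), 3))  # sharp peak collapses
print(round(f(3.0), 3), round(f_effective(3.0), 3))  # broad peak survives
```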

Journal ArticleDOI
TL;DR: This paper describes a recent breakthrough in approximating the Hamilton-Jacobi-Bellman and Hamilton-Jacobi-Isaacs equations and derives a novel algorithm that produces stabilizing, closed-loop control laws with well-defined stability regions.
Abstract: Nonlinear optimal control and nonlinear H∞ control are two of the most significant paradigms in nonlinear systems theory. Unfortunately, these problems require the solution of Hamilton-Jacobi equations, which are extremely difficult to solve in practice. To make matters worse, approximation techniques for these equations are inherently prone to the so-called 'curse of dimensionality'. While there have been many attempts to approximate these equations, solutions resulting in closed-loop control with well-defined stability and robustness have remained elusive. This paper describes a recent breakthrough in approximating the Hamilton-Jacobi-Bellman and Hamilton-Jacobi-Isaacs equations. Successive approximation and Galerkin approximation methods are combined to derive a novel algorithm that produces stabilizing, closed-loop control laws with well-defined stability regions. In addition, we show how the structure of the algorithm can be exploited to reduce the amount of computation from exponential to polynomial.
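
In our reconstruction of the standard successive-approximation recursion (for dynamics ẋ = f(x) + g(x)u and cost ∫(l(x) + uᵀRu) dt), each iteration solves a linear generalized HJB equation, by Galerkin projection onto basis functions, and then improves the control:

```latex
% Policy evaluation: a linear PDE in V_i for the current control u_i ...
\[
  \nabla V_i^{\top}\!\bigl(f + g\,u_i\bigr) + l + u_i^{\top} R\, u_i = 0,
\]
% ... followed by policy improvement:
\[
  u_{i+1} = -\tfrac{1}{2}\, R^{-1} g^{\top} \nabla V_i .
\]
```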

Proceedings ArticleDOI
23 May 1998
TL;DR: The disclosed method can be combined with proactive function sharing techniques to establish the first efficient, optimal-resilience, robust and proactively-secure RSA-based distributed trust services where the key is never entrusted to a single entity.
Abstract: The invention provides for robust efficient distributed generation of RSA keys. An efficient protocol is one which is independent of the primality test “circuit size”, while a robust protocol allows correct completion even in the presence of a minority of arbitrarily misbehaving malicious parties. The disclosed protocol is secure against any minority of malicious parties (which is optimal). The disclosed method is useful in establishing sensitive distributed cryptographic function sharing services (certification authorities, signature schemes with distributed trust, and key escrow authorities), as well as other applications besides RSA (namely: composite ElGamal, identification schemes, simultaneous bit exchange, etc.). The disclosed method can be combined with proactive function sharing techniques to establish the first efficient, optimal-resilience, robust and proactively-secure RSA-based distributed trust services where the key is never entrusted to a single entity (i.e., distributed trust totally “from scratch”). The disclosed method involves new efficient “robustness assurance techniques” which guarantee “correct computations” by mutually distrusting parties with malicious minority.