
Showing papers on "Robustness (computer science)" published in 1978


Journal ArticleDOI
TL;DR: In this paper, a new method of digital process control is described which rests on three principles: 1) the multivariable plant is represented by its impulse responses, which are used on-line by the control computer for long-range prediction; 2) the behavior of the closed-loop system is prescribed by means of reference trajectories initiated on the actual outputs; 3) the control variables are computed in a heuristic way with the same procedure used in identification, which appears as a dual of the control under this formulation.
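
To make the three principles concrete, here is a minimal sketch of one-step-ahead control from an impulse-response model and a first-order reference trajectory initiated on the actual output. The horizon, impulse response, and trajectory pole below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch: one-step-ahead predictive control from an impulse-response
# model and a first-order reference trajectory (illustrative values only).
N = 30
h = 0.5 * 0.8 ** np.arange(N)          # assumed truncated impulse response h[0..N-1]
alpha, setpoint = 0.7, 1.0             # reference-trajectory pole and target
u_hist = np.zeros(N)                   # past inputs, u_hist[0] is the most recent

y = 0.0
for k in range(50):
    # Free response: contribution of past inputs to the next output.
    free = np.dot(h[1:], u_hist[:-1])
    # Reference trajectory initiated on the actual output.
    y_ref = alpha * y + (1 - alpha) * setpoint
    # Choose the current input so the one-step prediction meets the trajectory.
    u = (y_ref - free) / h[0]
    u_hist = np.concatenate(([u], u_hist[:-1]))
    # "Plant" simulated with the same impulse response (no model mismatch here).
    y = np.dot(h, u_hist)

print(f"output after 50 steps: {y:.3f} (target {setpoint})")
```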

1,835 citations


Proceedings ArticleDOI
01 Jan 1978
TL;DR: This paper describes an adjustment procedure for observer-based linear control systems which asymptotically achieves the same loop transfer functions (and hence the same relative stability, robustness, and disturbance rejection properties) as full-state feedback control implementations.
Abstract: This paper describes an adjustment procedure for observer-based linear control systems which asymptotically achieves the same loop transfer functions (and hence the same relative stability, robustness, and disturbance rejection properties) as full-state feedback control implementations.

750 citations



Journal ArticleDOI
Allen Gersho
Abstract: Quantization is the process of replacing analog samples with approximate values taken from a finite set of allowed values. The approximate values corresponding to a sequence of analog samples can then be specified by a digital signal for transmission, storage, or other digital processing. In this expository paper, the basic ideas of uniform quantization, companding, robustness to input power level, and optimal quantization are reviewed and explained. The performance of various schemes is compared using the ratio of signal power to mean-square quantizing noise as a criterion. Entropy coding and the ultimate theoretical bound on block quantizer performance are also compared with the simpler zero-memory quantizer.
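
As a concrete illustration of the comparison criterion, the sketch below implements a zero-memory uniform (mid-rise) quantizer and measures the ratio of signal power to mean-square quantizing noise; the Gaussian input and 4-sigma loading are arbitrary assumptions, not the paper's examples.

```python
import numpy as np

def uniform_quantize(x, n_bits, xmax):
    """Zero-memory uniform (mid-rise) quantizer with 2**n_bits levels over [-xmax, xmax]."""
    delta = 2 * xmax / 2 ** n_bits                                   # step size
    idx = np.floor(x / delta)                                        # cell index
    idx = np.clip(idx, -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1)    # overload clipping
    return (idx + 0.5) * delta                                       # cell-midpoint reconstruction

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)                  # assumed unit-variance Gaussian input
for b in (4, 6, 8):
    xq = uniform_quantize(x, b, xmax=4.0)          # 4-sigma loading, an arbitrary choice
    sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - xq) ** 2))
    print(f"{b} bits: SQNR = {sqnr:.1f} dB")       # roughly +6 dB per added bit
```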

153 citations


Journal ArticleDOI
TL;DR: The authors compare results obtained with nonmetric analysis, full factorial designs, and rank data against quicker and less expensive methods based on metric analysis, orthogonal arrays, and stimulus ratings, and find that metric analysis using ratings data and orthogonal arrays is very robust.
Abstract: In many industrial applications of conjoint analysis the use of nonmetric algorithms to analyze respondent ranks of products described by more than eight or 10 attributes is time consuming and very...

148 citations


Proceedings ArticleDOI
John Doyle
01 Jan 1978
TL;DR: In this article, a new approach to the frequency-domain analysis of multiloop linear feed-back systems is presented, where the properties of the return difference equation are examined using the concepts of singular values, singular vectors and the spectral norm of a matrix.
Abstract: This paper presents a new approach to the frequency-domain analysis of multiloop linear feed-back systems. The properties of the return difference equation are examined using the concepts of singular values, singular vectors and the spectral norm of a matrix. A number of new tools for multiloop systems are developed which are analogous to those for scalar Nyquist and Bode analysis. These provide a generalization of the scalar frequency-domain notions such as gain, bandwidth, stability margins and M-circles, and provide considerable insight into system robustness.
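
The sketch below illustrates the basic computation: scan a frequency grid and record the smallest singular value of the return difference I + L(jω) for a small multiloop example. The state-space data are made up for illustration; the paper's generalized gain, bandwidth, and margin definitions are not reproduced.

```python
import numpy as np

# Toy 2x2 loop transfer matrix L(s) = C (sI - A)^{-1} B; matrices are illustrative.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.eye(2)
C = np.array([[1.0, 0.2], [0.1, 1.0]])

omegas = np.logspace(-2, 2, 200)
sigma_min = []
for w in omegas:
    L = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B    # loop gain at s = jw
    sv = np.linalg.svd(np.eye(2) + L, compute_uv=False)  # singular values of the return difference
    sigma_min.append(sv.min())

# The smallest value over frequency acts as a multiloop "stability margin" indicator:
print(f"min over frequency of sigma_min(I + L(jw)) = {min(sigma_min):.3f}")
```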

145 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give a new optimal property of the classical method of multi-dimensional scaling when the distance matrix is non-Euclidean, examine the robustness of the method under a linear model, and provide a technique for estimating missing values.
Abstract: The paper gives a new optimal property of the classical method of multi-dimensional scaling when the distance matrix is non-Euclidean. We also examine robustness of the method under a linear model. A technique to estimate missing values is also given.
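
For reference, the classical scaling step the abstract refers to amounts to double-centring the squared distance matrix and taking the leading eigenvectors; a minimal sketch follows, with random illustrative data. The paper's optimality and robustness results, and its missing-value technique, are not reproduced here.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling of a distance matrix D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]           # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    # Negative eigenvalues (which arise when D is non-Euclidean) are clipped to zero.
    return vecs * np.sqrt(np.maximum(vals, 0))

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, k=2)
print(Y.shape)  # (10, 2)
```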

143 citations


Proceedings ArticleDOI
01 Jan 1978
TL;DR: In this article, the asymptotic tracking and disturbance rejection properties of a general nonlinear multi-input multi-output distributed servomechanism are studied, together with the robustness of these properties, showing that the robust linear servomechanism design principle carries over to a large class of nonlinear servos.
Abstract: We study the asymptotic tracking and disturbance rejection property of a general nonlinear multi-input multi-output distributed servomechanism which contains input-channel as well as output-channel nonlinearities. We also explore the robustness of this property for such a nonlinear servomechanism. Our result shows that the design principle of the robust linear servomechanism (i.e. replicating the dynamics of the reference and disturbance signals) works well for a large class of nonlinear servos provided that certain stability conditions are satisfied.

43 citations


Book ChapterDOI
R. S. Dembo
TL;DR: The pitfalls encountered when solving GP problems and some proposed remedies are discussed in detail and a numerical comparison of some of the more promising recently developed computer codes for geometric programming on a specially chosen set of GP test problems is given.
Abstract: This paper attempts to consolidate over 15 years of work on designing algorithms for geometric programming (GP) and its extensions. The pitfalls encountered when solving GP problems and some proposed remedies are discussed in detail. A comprehensive summary of published software for the solution of GP problems is included. Also included is a numerical comparison of some of the more promising recently developed computer codes for geometric programming on a specially chosen set of GP test problems. The relative performance of these codes is measured in terms of their robustness as well as speed of computation. The performance of some general nonlinear programming (NLP) codes on the same set of test problems is also given and compared with the results for the GP codes. The paper concludes with some suggestions for future research.

37 citations


Journal ArticleDOI
TL;DR: A new ‘nearest-neighbour’ or ‘distance’ method of estimating neurone population density is introduced; its advantages include tests of randomness for the spatial distribution of the cells at issue and a robustness that can tolerate some departure from a random distribution pattern.
Abstract: SUMMARY A new ‘nearest-neighbour’ or ‘distance’ method of estimating neurone population density is introduced. The method was originally developed for ecological studies but can be imported into histology without significant modification; changes in population density can be estimated by inverting the measure of area per unit cell (the so-called mean area). Its advantages include tests of randomness for the spatial distribution of the cells at issue and a robustness which can tolerate some departure from a random distribution pattern. To illustrate how the method is applied estimates of neurone density, in terms of ‘mean area’ per cell-point, are made on a montage tracing of the human cerebellar dentate nucleus.
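
One standard 'distance' estimator from the ecological literature uses the fact that, for a random (Poisson) pattern of intensity λ, the expected nearest-neighbour distance is 1/(2√λ); density is then estimated from the observed mean distance, and the 'mean area' per cell is its reciprocal. The sketch below uses that classical relation with simulated points; the paper's exact estimator and randomness tests may differ.

```python
import numpy as np

def nn_density_estimate(points):
    """Estimate point density from the mean nearest-neighbour distance (Clark-Evans relation)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    r_bar = d.min(axis=1).mean()            # mean nearest-neighbour distance
    lam = 1.0 / (4.0 * r_bar ** 2)          # E[r] = 1/(2*sqrt(lambda)) under randomness
    return lam, 1.0 / lam                   # density and "mean area" per cell

rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(500, 2))     # about 5 points per unit area on a 10x10 window
density, mean_area = nn_density_estimate(pts)
print(f"estimated density {density:.2f} per unit area, mean area {mean_area:.3f}")
```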

31 citations


Journal ArticleDOI
TL;DR: In this article, a method for the construction of exact confidence intervals on nonnegative linear combinations of variance components from nested classification models is proposed, and the robustness of these confidence intervals to model breakdown is discussed.
Abstract: Methodology is proposed for the construction of exact confidence intervals on nonnegative linear combinations of variance components from nested classification models. Examples are given for the one-fold and two-fold classifications. The robustness of these confidence intervals to model breakdown is also discussed.
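
The exact intervals of the paper are not reproduced here; purely as a hedged illustration of the kind of quantity involved, the sketch below computes an approximate interval for a nonnegative linear combination of expected mean squares using the standard Satterthwaite approximation, with made-up mean squares and degrees of freedom.

```python
import numpy as np
from scipy import stats

def satterthwaite_ci(coefs, mean_squares, dfs, alpha=0.05):
    """Approximate CI for a nonnegative linear combination of expected mean squares.

    This is the standard Satterthwaite approximation, not the exact method of the paper.
    """
    coefs, ms, dfs = map(np.asarray, (coefs, mean_squares, dfs))
    est = float(np.dot(coefs, ms))                       # point estimate of the combination
    nu = est ** 2 / np.sum((coefs * ms) ** 2 / dfs)      # approximate degrees of freedom
    lo = nu * est / stats.chi2.ppf(1 - alpha / 2, nu)
    hi = nu * est / stats.chi2.ppf(alpha / 2, nu)
    return est, (lo, hi)

# Illustrative one-fold nested example: between- and within-group mean squares with their df.
est, ci = satterthwaite_ci(coefs=[0.2, 0.8], mean_squares=[12.0, 3.0], dfs=[9, 40])
print(est, ci)
```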

Journal ArticleDOI
TL;DR: It is proved that, to provide the necessary synchronization capability without impairing the quality of speech reproduction, it is necessary to use a minimum, unexpectedly large, number of bits in the machine words and to carefully specify the internal arithmetic, as is done here.
Abstract: We propose a new adaptive quantization scheme for digitally implementing PCM and DPCM structures. The arithmetics we develop for the digital processing are useful as well in the implementation of previously existing schemes for adaptive quantization. Two objectives are stressed here: (i) The system must be robust in the presence of noise in the transmission channel which causes the synchronization between quantizer adaptations in the transmitter and receiver to deteriorate. (ii) It must also minimize the complexity of the digital realization. In addition to the above objectives, we require, of course, good fidelity of the processed speech waveform. The problem of synchronization in digital implementations where the constraint of finite precision arithmetic exists has not been addressed previously. We begin by examining an existing, idealized adaptation algorithm which contains a leakage parameter for the purpose of deriving robustness. We prove that, to provide the necessary synchronization capability without impairing the quality of speech reproduction, it is necessary to use a minimum, unexpectedly large, number of bits in the machine words and, additionally, to carefully specify the internal arithmetic, as is done here. The new scheme that we propose here uses an order of magnitude less memory in an ROM-based implementation. The key innovations responsible for the improvement are: (i) modification of the adaptation algorithm to one where leakage is interleaved infrequently but at regular intervals into the adaptation recursion; (ii) a specification of the internal machine arithmetic that guarantees synchronization in the presence of channel errors. A detailed theoretical analysis of the statistical behavior of the proposed system for random inputs is given here. Results of a simulation of a realistic 16-level adaptive quantizer are reported.
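
A minimal sketch of the idealized leaky adaptation the abstract starts from: a Jayant-style step-size recursion Δ(n+1) = Δ(n)^β · M(code(n)) with β slightly below 1, so transmitter and receiver step sizes reconverge after channel errors. The multiplier table, leak value, and 16-level layout below are illustrative; the paper's finite-precision arithmetic and interleaved-leak scheme are not reproduced.

```python
import numpy as np

# Idealized robust adaptive quantizer (4-bit / 16 levels) with per-step leakage.
# Multiplier table and leak factor are illustrative, not the paper's values.
MULT = np.array([0.9, 0.9, 0.9, 0.9, 1.2, 1.6, 2.0, 2.4])  # M(|code|) for |code| = 0..7
BETA = 0.98                                                  # leakage: forces transmitter and
                                                             # receiver step sizes to reconverge

def encode_decode(x):
    delta, out = 1.0, []
    for sample in x:
        code = int(np.clip(np.floor(sample / delta), -8, 7))   # 4-bit code
        out.append((code + 0.5) * delta)                       # reconstructed sample
        delta = (delta ** BETA) * MULT[min(abs(code), 7)]      # leaky step-size adaptation
        delta = float(np.clip(delta, 1e-4, 1e4))               # keep the step size bounded
    return np.array(out)

rng = np.random.default_rng(3)
speechlike = rng.normal(0, 1, 2000) * (1 + np.sin(np.linspace(0, 6, 2000)))  # varying level
y = encode_decode(speechlike)
print(f"SNR = {10*np.log10(np.mean(speechlike**2)/np.mean((speechlike-y)**2)):.1f} dB")
```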

Journal ArticleDOI
TL;DR: The first part of this paper contains an introduction to robust statistics, with special emphasis on some difficult conceptual points and on recent results concerning the intuitive basis of robustness theory, and the last part discusses a number of research areas in robustness that are of current interest, including the treatment of arbitrary unstructured data and of unbalanced linear models.
Abstract: The first part of this paper contains an introduction to robust statistics with special emphasis on some difficult conceptual points and on recent results concerning the intuitive basis of robustness theory. A brief review of some classical robustness concepts and results follows, and the last part discusses a number of research areas in robustness which are of current interest, including treatment of arbitrary unstructured data and of unbalanced linear models.

Journal ArticleDOI
TL;DR: In this article, the effects of applying the normal classificatory rule to a nonnormal population are studied through the distribution of the misclassification errors in the case of the Edgeworth type distribution.

Journal ArticleDOI
TL;DR: In this paper, a new method is proposed for the control of a multiprogrammed virtual memory computer system; a mathematical model solved by decomposition shows that the method avoids thrashing, and simulations show it is robust to transients in the workload.
Abstract: We propose a new method for the control of a multiprogrammed virtual memory computer system. A mathematical model solved by decomposition permits us to justify that the method avoids thrashing. Simulation experiments are used to test the robustness of the predictions of the mathematical model when certain simplifying assumptions are relaxed and when a slightly simpler control technique based on the same principle is used. Comparisons are given with the case where an "optimal" control is used and with the case of no control. We also provide a simulation evaluating the estimators used in an implementation of the control, as well as the responsiveness of the controlled system to transients in the workload.
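
The paper's decomposition model is not reproduced here; purely as an illustration of the kind of control meant, the sketch below runs a generic feedback load-control loop that lowers the multiprogramming level when an entirely made-up page-fault-rate curve indicates that memory is overcommitted.

```python
# Generic load-control sketch (not the paper's model): adjust the multiprogramming
# level (MPL) by feedback from an observed page-fault rate, using a made-up fault-rate
# curve that grows sharply once memory is overcommitted (the thrashing region).
def fault_rate(mpl, frames=100, working_set=30):
    demand = mpl * working_set
    return 0.01 if demand <= frames else 0.01 * (demand / frames) ** 4

TARGET = 0.05          # acceptable fault rate (illustrative threshold)
mpl = 1
for step in range(20):
    r = fault_rate(mpl)
    print(f"step {step:2d}: MPL={mpl}, fault rate={r:.3f}")
    if r < TARGET / 2:
        mpl += 1                 # memory underused: admit another program
    elif r > TARGET:
        mpl = max(1, mpl - 1)    # approaching thrashing: suspend a program
```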

Proceedings ArticleDOI
01 Jan 1978
TL;DR: In this article, a two-stage adaptive control scheme is presented, the first stage being an identifier of the exomodel and the second stage a time-varying robust regulator containing an internal model.
Abstract: The regulator problem is considered in the context of linear time-invariant multivariable systems. The exogenous signal, the disturbance and/or reference signal, is assumed to be generated by a model (the exomodel) with unknown initial conditions. Unlike in previous treatments, the exomodel parameters are not assumed known. A two-stage adaptive control scheme is presented, the first stage being an identifier of the exomodel and the second stage a time-varying robust regulator containing an internal model of the exomodel which is tuned on-line by the first stage.


01 Jan 1978
TL;DR: The parameters and decision rules used in the segmentation are described; even the measured error rate does not significantly affect the recognition accuracy of the Harpy system, because the scheme is designed to provide several extra segments at the cost of speed of operation.
Abstract: The first step in the recognition of continuous speech by machine is segmentation of the utterance. The Harpy continuous speech recognition system, developed at Carnegie-Mellon University, uses a segmentation procedure based on simple time domain parameters called ZAPDASH. In this paper the parameters and the decision rules used in the segmentation are described. Considerations in the choice of parameters are discussed briefly. The heuristics used in arriving at some of the decision rules are also discussed. The performance of the segmentation scheme is evaluated by comparing the results with the results of hand segmentation of the waveform of the utterance. The results show an overall error rate of 4% in 34 utterances. However, even this error rate does not significantly affect the recognition accuracy of the Harpy system because the scheme is designed to provide several extra segments at the cost of speed of operation. The average duration of the segments obtained by this technique was found to be 4.7 centiseconds. The robustness of the segmentation scheme to noise and distortion in input speech is currently being investigated.
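
The actual ZAPDASH parameters and decision rules are given in the paper; as a hedged stand-in, the sketch below is a generic time-domain segmenter that marks boundaries at large jumps in smoothed short-time log energy, with the zero-crossing rate computed as a second cue.

```python
import numpy as np

def segment(wave, fs, frame_ms=10, change_db=6.0):
    """Mark segment boundaries where smoothed short-time log energy jumps by more than change_db.

    A generic time-domain segmenter for illustration only; the actual ZAPDASH parameters
    and decision rules are described in the paper.
    """
    frame = int(fs * frame_ms / 1000)
    n = len(wave) // frame
    frames = wave[:n * frame].reshape(n, frame)
    energy = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-10)          # short-time log energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)   # zero-crossing rate
    smooth = np.convolve(energy, np.ones(3) / 3, mode="same")             # light smoothing
    boundaries = [i for i in range(1, n) if abs(smooth[i] - smooth[i - 1]) > change_db]
    return boundaries, zcr

fs = 8000
t = np.arange(fs) / fs
wave = np.where(t < 0.5, 0.5 * np.sin(2 * np.pi * 200 * t), 0.02 * np.sin(2 * np.pi * 3000 * t))
print(segment(wave, fs)[0])     # boundaries near frame 50 (the 0.5 s level change)
```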

Journal ArticleDOI
TL;DR: In this article, the authors compared the robustness of chain block designs and coat-of-mail designs and found that the latter is more robust to missing values or outliers than the former.
Abstract: Chain block designs are relatively vulnerable to loss of information when missing values or outliers occur. An alternative class of designs, coat-of-mail designs, is proposed, and the relative robustness of the two types of design is compared.

Proceedings ArticleDOI
01 Jan 1978
TL;DR: An application of Model Algorithmic Control with IDCOM to the design of an adaptive autopilot for pitch control of a high performance aircraft is described.
Abstract: Some of the outstanding problems in the application of modern control theory to Flight Control Systems are: (i) model selection, (ii) incorporation of state and control constraints, and (iii) robustness and sensitivity to unknown parameters and disturbances. In this paper, we use a technique called Model Algorithmic Control (MAC) with IDCOM which resolves the above problems in an effective manner. The technique was originally developed for industrial applications in France. It is based on an identification-optimization approach, which is very general in nature. MAC is a digital technique that makes full use of the capabilities of current microprocessors and any future developments in microprocessor technology would further enhance its effectiveness. In this paper, we describe an application of MAC to the design of an adaptive autopilot for pitch control of a high performance aircraft.

Proceedings ArticleDOI
Mohamed Gawdat Gouda
01 Aug 1978
TL;DR: Three controllers with hierarchical architectures are evaluated, and a controller architecture is defined which satisfies all three criteria of freedom from deadlocks, robustness, and parallelism.
Abstract: An access controller for a distributed database is a (central or distributed) structure which routes access requests to the different components of the database. Such a controller is also supposed to resolve the conflicts between concurrent requests, if any, such that deadlock situations never arise. In this paper, some architectures for distributed access controllers of distributed databases are investigated. In particular, three controllers with hierarchical architectures are considered. The controllers are evaluated based on three criteria: (i) freedom from deadlocks, (ii) robustness, and (iii) parallelism. The third criterion implies that the redundancy added to increase the controller's robustness against failure conditions should also contribute to the amount of parallelism achieved during no-failure periods. We then define a controller architecture which satisfies all three criteria.

Journal ArticleDOI
TL;DR: In this paper, the robustness to rounding and truncation measurement error of several scale-free goodness-of-fit tests for exponentiality is investigated, including tests based on the sample Lorenz curve, Gini's statistic, and the Shapiro-Wilk statistic.
Abstract: SUMMARY We investigate the robustness to rounding and truncation measurement error of several scale-free goodness-of-fit tests for exponentiality. Moran's statistic is so sensitive to rounding and truncation error as to be potentially misleading in practice. Tests based on the sample Lorenz curve, Gini's statistic and the Shapiro-Wilk test are remarkably insensitive to rounding errors and relatively insensitive to truncation errors. Other tests exhibit intermediate sensitivity to such errors. Trimming Moran's statistic and adding appropriate constants to observations in rounded or truncated samples improves robustness to measurement error.
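
Moran's statistic, the Lorenz-curve test, and the critical values studied in the paper are not reproduced here; as a hedged illustration of the Gini-based idea, recall that the Gini concentration of an exponential distribution is 1/2, so the sample Gini coefficient can be compared informally against 1/2 (the paper's test statistic and its null distribution may be defined differently).

```python
import numpy as np

def gini(x):
    """Sample Gini coefficient: mean absolute difference scaled by twice the mean."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Equivalent to sum_{i<j}|x_i - x_j| / (n^2 * mean), using the sorted-data identity.
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n ** 2 * x.mean())

rng = np.random.default_rng(4)
expo = rng.exponential(scale=2.0, size=5000)
unif = rng.uniform(0.0, 4.0, size=5000)
print(f"Gini(exponential) = {gini(expo):.3f}  (theoretical value 1/2)")
print(f"Gini(uniform)     = {gini(unif):.3f}  (theoretical value 1/3)")
```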


Book
01 Jan 1978
TL;DR: This dissertation is concerned with the development and numerical implementation of algorithms for solving finite dimensional optimization problems based on Newton-like methods implemented in a robust manner by means of hybrid, curved line searches and stable linear algebra techniques.
Abstract: This dissertation is concerned with the development and numerical implementation of algorithms for solving finite dimensional optimization problems. Special emphasis is given to robustness, by which is meant the ability of an algorithm to cope with adverse circumstances, whether due to pathologies of a particular problem or to the shortcomings of finite precision computer arithmetic. A uniform framework is developed in which a common set of techniques may be applied to all of the standard problems of optimization. The algorithms are based on Newton-like methods implemented in a robust manner by means of hybrid, curved line searches and stable linear algebra techniques. Developed first in the context of systems of nonlinear equations, nonlinear least squares, and unconstrained minimization, the algorithms are combined and extended to include problems with equality or inequality constraints. Constrained problems are handled by means of separate line searches in the range and null spaces of the matrix of constraint normals. The classical Lagrangian is modified to allow the same Newton-like methods to be applied to inequality constraints. Test results are presented which show the validity and promise of the methods developed in this dissertation.
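
A minimal sketch of the unconstrained core idea, assuming nothing beyond the abstract: a Newton direction when the Hessian factorizes and gives descent, a steepest-descent fallback otherwise, and a backtracking line search for damping. The hybrid curved searches, constrained extensions, and modified Lagrangian of the dissertation are not reproduced.

```python
import numpy as np

def robust_newton(f, grad, hess, x0, tol=1e-8, max_iter=200):
    """Newton-like minimization with a steepest-descent fallback and backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        try:                                    # Newton direction if the Hessian factorizes
            p = -np.linalg.solve(hess(x), g)
            if g @ p >= 0:                      # not a descent direction: fall back
                p = -g
        except np.linalg.LinAlgError:
            p = -g                              # singular Hessian: steepest descent
        t = 1.0                                 # backtracking (Armijo) line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * p
    return x

# Example: the Rosenbrock function.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(robust_newton(f, grad, hess, [-1.2, 1.0]))   # converges near [1, 1]
```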

Journal ArticleDOI
TL;DR: In this paper, the two distinct concepts of inference and criterion robustness are illustrated with a simple example based on the one-parameter exponential model, which is used for elementary mathematical statistics courses, where the topic of robustness is often neglected.
Abstract: The two distinct concepts of inference and criterion robustness are illustrated with a simple example based on the one-parameter exponential model. The study of inference and criterion robustness of point and interval estimators of the population mean under a more general exponential power model involves simple but fascinating distribution theory. This example could be usefully exploited in elementary mathematical statistics courses, where the topic of robustness is often neglected.

Proceedings ArticleDOI
01 Jan 1978
TL;DR: In this paper, the robust M-estimates of regression are extended to filtering and fixed lag smoothing employing a pseudodensity of the observations in a maximum likelihood derivation of the filter and a fixed lag smoother.
Abstract: Robust methods provide a fresh approach to the problem of treating wild observations in filtering and smoothing problems. The robust M-estimates of regression are extended to filtering and fixed lag smoothing by employing a pseudodensity of the observations in a maximum likelihood derivation of the filter and fixed lag smoother. These robust methods have been applied to simulated and real tracking data to obtain improved estimation performance in the presence of wild observations.
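
A simplified scalar sketch of the bounded-influence idea: pass the standardized innovation through a Huber-type psi-function before the measurement update, so wild observations are clipped rather than followed. The model, noise levels, and clipping constant below are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def huber_psi(r, k=1.5):
    """Huber influence function: linear for small residuals, clipped for outliers."""
    return np.clip(r, -k, k)

def robust_filter(z, a=0.95, q=0.01, r_var=1.0):
    """Scalar first-order filter with a robustified (bounded-influence) measurement update."""
    x, p = 0.0, 1.0
    out = []
    for meas in z:
        x, p = a * x, a * a * p + q                    # time update
        s = np.sqrt(p + r_var)                         # innovation standard deviation
        gain = p / (p + r_var)
        x = x + gain * s * huber_psi((meas - x) / s)   # clipped innovation instead of raw one
        p = (1 - gain) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(5)
truth = np.cumsum(rng.normal(0, 0.1, 300))
z = truth + rng.normal(0, 1.0, 300)
z[::25] += 20.0                                        # occasional wild observations
est = robust_filter(z)
print(f"RMS error with outliers present: {np.sqrt(np.mean((est - truth)**2)):.2f}")
```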


Journal ArticleDOI
B.R. Barmish, Y.H. Lin
TL;DR: In this paper, a new notion of robustness is proposed for a class of uncertain linear dynamical systems, where a linear system is said to be robust with respect to the pair (δH_max(·), δy_max) if it can guarantee (by choice of admissible input) a terminal output error of δy_max at most, for all possible perturbations of the impulse response matrix H(·) which are bounded by δH_max(·).

Journal ArticleDOI
TL;DR: In this paper, the robustness of an extended version of Colton's decision theoretic model is investigated, and the model is shown to be robust with respect to all the changes considered except the use of the modified prior density.
Abstract: The robustness of an extended version of Colton's decision theoretic model is considered. The extended version includes the losses due to the patients who are not entered in the experiment but require treatment while the experiment is in progress. Among the topics considered are the effects on risk of using a sample size considerably smaller than the optimum, the use of an incorrect patient horizon, the application of a modified loss function, and the use of a two-point prior distribution. It is shown that the investigated model is robust with respect to all these changes with the exception of the use of the modified prior density.

Journal ArticleDOI
TL;DR: Results of the performance of 23 different codes on up to 37 test problems are presented, with a view to testing for robustness, speed with which solutions were obtained, sensitivity to changes in starting points and solution tolerances, in-core storage, and ease of use.
Abstract: The purpose of this paper is to consolidate the recent independent work of the authors on computational comparisons of geometric programming codes. Results of the performance of 23 different codes on up to 37 test problems are presented, with a view to testing for robustness (fraction of successful solutions computed), speed with which solutions were obtained, sensitivity to changes in starting points and solution tolerances, in-core storage, and ease of use.