
Showing papers on "Robustness (computer science) published in 1999"


Book ChapterDOI
21 Sep 1999
TL;DR: A survey of the theory and methods of photogrammetric bundle adjustment can be found in this article, with a focus on general robust cost functions rather than restricting attention to traditional nonlinear least squares.
Abstract: This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
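As a toy illustration of the robust-cost idea (not from the survey itself), the sketch below fits a 1-D location with a Huber cost via iteratively reweighted least squares; `huber_weight` and `robust_location` are hypothetical names, and real bundle adjustment applies the same reweighting inside a sparse Gauss-Newton solver over structure and camera parameters.

```python
import math

def huber_weight(r, k=1.345):
    """IRLS weight for the Huber cost: quadratic inside |r| <= k, linear outside."""
    a = abs(r)
    return 1.0 if a <= k else k / a

def robust_location(xs, iters=30, k=1.345):
    """Iteratively reweighted least squares for a 1-D location estimate."""
    mu = sum(xs) / len(xs)                     # start from the plain LS answer
    for _ in range(iters):
        w = [huber_weight(x - mu, k) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [1.0, 1.1, 0.9, 9.0]                    # one gross outlier
estimate = robust_location(data)               # stays near 1, unlike the mean (3.0)
```

The squared-error estimate (the mean) is dragged to 3.0 by the outlier; the Huber weights decay as k/|r|, so the robust estimate settles near the inlier cluster.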

3,521 citations


Book
01 Jan 1999
TL;DR: Sliding mode control (SMC) is gaining increasing importance as a universal design tool for the robust control of linear and nonlinear systems as mentioned in this paper, and is particularly useful for electro-mechanical systems because of its discontinuous structure.
Abstract: Sliding Mode Control (SMC) is gaining increasing importance as a universal design tool for the robust control of linear and nonlinear systems. The strengths of sliding mode controllers result from the ease and flexibility of the methodology for their design and implementation. They provide inherent order reduction, direct incorporation of robustness against system uncertainties and disturbances, and an implicit stability proof. They also allow for the design of high performance control systems at low costs. SMC is particularly useful for electro-mechanical systems because of its discontinuous structure. In fact, since the hardware of many electro-mechanical systems (such as electric motors) prescribes discontinuous inputs, SMC has become the natural choice for direct implementation. The book is intended primarily for engineers and establishes an interdisciplinary bridge between control science, electrical and mechanical engineering.
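A minimal sketch of the discontinuous control law described above, assuming a double-integrator plant and Euler integration; the gains and the disturbance are illustrative choices, not from the book. The switching gain k must dominate the disturbance bound for the sliding surface to be reached.

```python
import math

def simulate_smc(x=1.0, v=0.0, c=1.0, k=2.0, dt=1e-3, steps=10000):
    """Sliding mode control of a double integrator x'' = u + d(t), |d| <= 0.5.
    Sliding surface s = v + c*x; control = equivalent term + switching term."""
    for n in range(steps):
        t = n * dt
        d = 0.5 * math.sin(2.0 * t)               # bounded matched disturbance
        s = v + c * x                              # sliding surface
        u = -c * v - k * (1.0 if s > 0 else -1.0)  # so s' = -k*sgn(s) + d
        v += (u + d) * dt
        x += v * dt
    return x, v

x_end, v_end = simulate_smc()
```

Once s reaches zero (in finite time, since k > 0.5), the state slides along s = 0 and decays as exp(-c*t), with only small chattering of order k*dt left.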

2,593 citations


Journal ArticleDOI
TL;DR: It is shown that the RC of an LP with ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.
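For a single linear constraint a·x <= b with a ranging over an ellipsoid {a0 + P u : ||u|| <= 1}, the worst case is a0·x + ||Pᵀx||, which is exactly the second-order cone form the TL;DR refers to. A small numeric check of that worst-case value (function and variable names are illustrative):

```python
import math

def robust_lhs(a0, P, x):
    """Worst case of a.x over the ellipsoid {a0 + P u : ||u||_2 <= 1}:
    a0.x + ||P^T x||_2 -- the left-hand side of the conic quadratic constraint."""
    nominal = sum(ai * xi for ai, xi in zip(a0, x))
    Ptx = [sum(P[i][j] * x[i] for i in range(len(x))) for j in range(len(P[0]))]
    return nominal + math.sqrt(sum(v * v for v in Ptx))

worst = robust_lhs([1.0, 1.0], [[0.1, 0.0], [0.0, 0.1]], [1.0, 1.0])
```

With this small perturbation matrix the worst-case value is 2 + 0.1*sqrt(2), so the robust counterpart of a·x <= 3 is satisfied while a·x <= 2.1 would not be.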

1,809 citations


Book ChapterDOI
TL;DR: The basic concepts of MPC are reviewed, the uncertainty descriptions considered in the MPC literature are surveyed, and the techniques proposed for robust constraint handling, stability, and performance are surveyed.
Abstract: This paper gives an overview of robustness in Model Predictive Control (MPC). After reviewing the basic concepts of MPC, we survey the uncertainty descriptions considered in the MPC literature, and the techniques proposed for robust constraint handling, stability, and performance. The key concept of “closedloop prediction” is discussed at length. The paper concludes with some comments on future research directions.
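A toy receding-horizon loop, assuming a scalar system x+ = x + u and brute-force search over a coarse input grid; this shows only the bare MPC mechanism (optimize over a horizon, apply the first move, repeat), with none of the robust constraint handling the survey discusses.

```python
def mpc_step(x, horizon=3, grid=None):
    """Return the first move of the input sequence minimizing a finite-horizon
    quadratic cost for x+ = x + u, subject to the input constraint |u| <= 1."""
    if grid is None:
        grid = [i / 10.0 for i in range(-10, 11)]   # admissible inputs

    def rollout(state, depth):
        if depth == 0:
            return state * state                    # terminal cost
        return min(rollout(state + u, depth - 1) + state * state + u * u
                   for u in grid)

    best_u, best_cost = 0.0, float("inf")
    for u in grid:
        cost = rollout(x + u, horizon - 1) + x * x + u * u
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_u

x = 5.0
for _ in range(10):                                 # closed-loop simulation
    x += mpc_step(x)
```

Far from the origin the input constraint is active (u = -1); near the origin the quadratic trade-off takes over and the state settles.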

1,126 citations


Journal ArticleDOI
TL;DR: Discusses analysis and synthesis techniques for robust pole placement in linear matrix inequality (LMI) regions, a class of convex regions of the complex plane that embraces most practically useful stability regions, and describes the effectiveness of this robust pole clustering technique.
Abstract: Discusses analysis and synthesis techniques for robust pole placement in linear matrix inequality (LMI) regions, a class of convex regions of the complex plane that embraces most practically useful stability regions. The focus is on linear systems with static uncertainty on the state matrix. For this class of uncertain systems, the notion of quadratic stability and the related robustness analysis tests are generalized to arbitrary LMI regions. The resulting tests for robust pole clustering are all numerically tractable because they involve solving linear matrix inequalities (LMIs) and cover both unstructured and parameter uncertainty. These analysis results are then applied to the synthesis of dynamic output-feedback controllers that robustly assign the closed-loop poles in a prescribed LMI region. With some conservatism, this problem is again tractable via LMI optimization. In addition, robust pole placement can be combined with other control objectives, such as H/sub 2/ or H/sub /spl infin// performance, to capture realistic sets of design specifications. Physically motivated examples demonstrate the effectiveness of this robust pole clustering technique.

743 citations


Proceedings ArticleDOI
30 Aug 1999
TL;DR: The key technique is called Dynamic Packet State (DPS), which provides a lightweight and robust mechanism for routers to coordinate actions and implement distributed algorithms and an implementation of the proposed algorithms that has minimum incompatibility with IPv4.
Abstract: Existing approaches for providing guaranteed services require routers to manage per flow states and perform per flow operations [9, 21]. Such a stateful network architecture is less scalable and robust than stateless network architectures like the original IP and the recently proposed Diffserv [3]. However, services provided with current stateless solutions, Diffserv included, have lower flexibility, utilization, and/or assurance level as compared to the services that can be provided with per flow mechanisms. In this paper, we propose techniques that do not require per flow management (either control or data planes) at core routers, but can implement guaranteed services with levels of flexibility, utilization, and assurance similar to those that can be provided with per flow mechanisms. In this way we can simultaneously achieve high quality of service, high scalability and robustness. The key technique we use is called Dynamic Packet State (DPS), which provides a lightweight and robust mechanism for routers to coordinate actions and implement distributed algorithms. We present an implementation of the proposed algorithms that has minimum incompatibility with IPv4.
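One published use of DPS is core-stateless fair queueing, where edge routers write a flow-rate label into the packet header and core routers, holding no per-flow state, drop each packet with probability max(0, 1 - fair_share/label). The sketch below assumes that labeling scheme; it is an illustration of the header-carried-state idea, not this paper's full algorithm set.

```python
def drop_probability(label_rate, fair_share):
    """Core-router drop probability from the rate label carried in the packet
    header and the router's current fair-share estimate; no per-flow state."""
    return max(0.0, 1.0 - fair_share / label_rate)
```

A flow labeled at twice the fair share loses half its packets, so its expected forwarded rate equals the fair share; flows at or below the fair share are untouched.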

534 citations


Journal ArticleDOI
TL;DR: The robustness of this communication scheme with respect to errors in the estimation of the fading process is studied, and the degradation in performance that results from such estimation errors is quantified.
Abstract: The analysis of flat-fading channels is often performed under the assumption that the additive noise is white and Gaussian, and that the receiver has precise knowledge of the realization of the fading process. These assumptions imply the optimality of Gaussian codebooks and of scaled nearest-neighbor decoding. Here we study the robustness of this communication scheme with respect to errors in the estimation of the fading process. We quantify the degradation in performance that results from such estimation errors, and demonstrate the lack of robustness of this scheme. For some situations we suggest the rule of thumb that, in order to avoid degradation, the estimation error should be negligible compared to the reciprocal of the signal-to-noise ratio (SNR).

468 citations


Journal ArticleDOI
17 Oct 1999
TL;DR: This work provides a novel algorithmic analysis via a model of robust concept learning (closely related to “margin classifiers”), and shows that a relatively small number of examples are sufficient to learn rich concept classes.
Abstract: We study the phenomenon of cognitive learning from an algorithmic standpoint. How does the brain effectively learn concepts from a small number of examples despite the fact that each example contains a huge amount of information? We provide a novel analysis for a model of robust concept learning (closely related to "margin classifiers"), and show that a relatively small number of examples are sufficient to learn rich concept classes (including threshold functions, Boolean formulae and polynomial surfaces). As a result, we obtain simple intuitive proofs for the generalization bounds of Support Vector Machines. In addition, the new algorithms have several advantages: they are faster, conceptually simpler, and highly resistant to noise. For example, a robust half-space can be PAC-learned in linear time using only a constant number of training examples, regardless of the number of attributes. A general (algorithmic) consequence of the model, that "more robust concepts are easier to learn", is supported by a multitude of psychological studies.

396 citations


Journal ArticleDOI
12 Sep 1999
TL;DR: Under the framework of probabilistic optimization, this work proposes to use a most probable point (MPP) based importance sampling method, a method rooted in the field of reliability analysis, for evaluating the feasibility robustness.
Abstract: In robust design, it is important not only to achieve robust design objectives but also to maintain the robustness of design feasibility under the effect of variations (or uncertainties). However, the evaluation of feasibility robustness is often a computationally intensive process. Simplified approaches in existing robust design applications may lead to either over-conservative or infeasible design solutions. In this paper, several feasibility-modeling techniques for robust optimization are examined. These methods are classified into two categories: methods that require probability and statistical analyses and methods that do not. Using illustrative examples, the effectiveness of each method is compared in terms of its efficiency and accuracy. Constructive recommendations are made to employ different techniques under different circumstances. Under the framework of probabilistic optimization, we propose to use a most probable point (MPP) based importance sampling method, a method rooted in the field of reliability analysis, for evaluating the feasibility robustness. The advantages of this approach are discussed. Though our discussions are centered on robust design, the principles presented are also applicable for general probabilistic optimization problems. The practical significance of this work also lies in the development of efficient feasibility evaluation methods that can support quality engineering practice, such as the Six Sigma approach that is being widely used in American industry.
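A crude Monte Carlo baseline for the feasibility probability P(g(X) <= 0) that the MPP-based importance sampler is designed to accelerate; the function name and the Gaussian variation model are assumptions for illustration, and real MPP methods concentrate samples near the most probable failure point instead of sampling blindly.

```python
import random

def feasibility_probability(g, mean, std, n=20000, seed=1):
    """Brute-force Monte Carlo estimate of P(g(X) <= 0), X ~ N(mean, std^2).
    g(x) <= 0 means the design constraint is satisfied under variation x."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if g(rng.gauss(mean, std)) <= 0)
    return hits / n

# Constraint g(x) = x - 3 under unit-variance variation: the exact
# feasibility probability is the normal CDF at 3, about 0.9987.
p_feasible = feasibility_probability(lambda x: x - 3.0, 0.0, 1.0)
```

The weakness motivating MPP importance sampling is visible here: estimating a failure probability of 1e-6 this way would need millions of samples, each potentially an expensive simulation.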

395 citations


Proceedings Article
01 Jan 1999
TL;DR: These techniques are at least as accurate as previously used filtering algorithms, and in some situations, they are more than 37% more accurate.
Abstract: Accurate network bandwidth measurement is important to a variety of network applications. Unfortunately, accurate bandwidth measurement is difficult. We describe some current bandwidth measurement techniques, including throughput-based measurement and packet pair. We explain some of the problems with these techniques, including poor accuracy, poor scalability, lack of statistical robustness, poor agility in adapting to bandwidth changes, lack of flexibility in deployment, and inaccuracy when used on a variety of traffic types. Our solutions to these problems include the use of a packet window to adapt quickly to bandwidth changes, receiver-only packet pair to combine accuracy and ease of deployment, and potential bandwidth filtering to increase accuracy. These techniques are at least as accurate as previously used filtering algorithms, and in some situations, they are more than 37% more accurate.
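The packet pair idea can be sketched in a few lines: each back-to-back pair yields one capacity sample, packet size divided by the observed dispersion, and a median filter (a simple stand-in for the paper's filtering algorithms) suppresses samples stretched by cross traffic.

```python
def packet_pair_bw(packet_bits, gaps):
    """One bandwidth sample per packet pair: capacity ~ size / dispersion."""
    return [packet_bits / g for g in gaps]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# 1500-byte packets; the third pair was spread out by cross traffic.
gaps = [0.0012, 0.0012, 0.0040, 0.0012, 0.0011]
estimate = median(packet_pair_bw(12000, gaps))
```

The raw samples range from 3 Mb/s (the queued pair) to about 10.9 Mb/s; the median recovers the 10 Mb/s mode, illustrating why statistical robustness of the filter matters more than any single sample.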

345 citations


Journal ArticleDOI
TL;DR: A new Kalman-filter based active contour model is proposed for tracking of nonrigid objects in combined spatio-velocity space and an optical-flow based detection mechanism is proposed to improve robustness to image clutter and to occlusions.
Abstract: A new Kalman-filter based active contour model is proposed for tracking of nonrigid objects in combined spatio-velocity space. The model employs measurements of gradient-based image potential and of optical-flow along the contour as system measurements. In order to improve robustness to image clutter and to occlusions, an optical-flow based detection mechanism is proposed. The method detects and rejects spurious measurements which are not consistent with previous estimation of image motion.
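A scalar stand-in for the paper's scheme (the real model tracks a full contour in spatio-velocity space): a Kalman filter whose update step gates out measurements whose innovation is implausibly large, mirroring the rejection of spurious measurements inconsistent with the motion estimate. All noise parameters are illustrative.

```python
import math

def track(zs, x=0.0, P=10.0, q=0.01, r=1.0, gate=3.0):
    """Scalar Kalman filter with an innovation gate: measurements more than
    `gate` standard deviations from the prediction are rejected as spurious."""
    for z in zs:
        P += q                          # predict (static state model)
        S = P + r                       # innovation variance
        if abs(z - x) > gate * math.sqrt(S):
            continue                    # reject outlier, keep the prediction
        K = P / S                       # Kalman gain
        x += K * (z - x)
        P *= (1.0 - K)
    return x

zs = [4.8, 5.2, 5.0, 50.0, 4.9, 5.1]    # one spurious measurement
estimate = track(zs)
```

With the gate the outlier is skipped and the estimate stays near 5; disabling the gate (gate=1e9) lets the single spurious measurement drag the state above 10.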

Journal ArticleDOI
02 Jun 1999
TL;DR: A new design method for PID controllers based on optimization of load disturbance rejection with constraints on robustness to model uncertainties is presented, leading to a constrained optimization problem which can be solved iteratively.
Abstract: This paper presents a new design method for PID controllers based on optimization of load disturbance rejection with constraints on robustness to model uncertainties. The design also delivers parameters to deal with measurement noise and set point response. Thus, the formulation of the design problem captures four essential aspects of industrial control problems, leading to a constrained optimization problem which can be solved iteratively.
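A minimal PI loop on a first-order plant, illustrating only the setpoint/disturbance mechanics, not the paper's constrained optimization over robustness measures; the plant model and all gains are illustrative.

```python
def pi_closed_loop(kp=2.0, ki=1.0, tau=1.0, dt=0.01, T=10.0, setpoint=1.0):
    """PI control of the first-order plant tau*y' = -y + u (Euler discretized).
    The integral term removes steady-state error; kp shapes the transient."""
    y, integral = 0.0, 0.0
    for _ in range(int(round(T / dt))):
        e = setpoint - y
        integral += e * dt
        u = kp * e + ki * integral
        y += (-y + u) / tau * dt
    return y

y_final = pi_closed_loop()
```

The closed-loop characteristic polynomial here is s^2 + 3s + 1, so both poles are stable and the output settles on the setpoint; the paper's contribution is choosing such gains optimally subject to a robustness constraint.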

Journal ArticleDOI
TL;DR: In this paper, the authors focus on robustness of model-predictive control with respect to satisfaction of process output constraints and propose a method of improving such robustness by formulating output constraints as chance constraints.
Abstract: This work focuses on robustness of model-predictive control with respect to satisfaction of process output constraints. A method of improving such robustness is presented. The method relies on formulating output constraints as chance constraints using the uncertainty description of the process model. The resulting on-line optimization problem is convex. The proposed approach is illustrated through a simulation case study on a high-purity distillation column. Suggestions for further improvements are made.
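The chance-constraint reformulation can be sketched as constraint tightening: for Gaussian output uncertainty, P(y <= y_max) >= 95% becomes a deterministic bound on the nominal output. The single-constraint form below is an illustration, not the paper's full convex program; 1.645 is the standard normal 95% quantile.

```python
def tightened_bound(y_max, sigma, z=1.645):
    """Deterministic equivalent of the chance constraint P(y <= y_max) >= 95%
    for y ~ N(y_nom, sigma^2): require y_nom <= y_max - z * sigma."""
    return y_max - z * sigma
```

With an output limit of 10 and a standard deviation of 2, the nominal output must be kept below 6.71; the larger the model uncertainty, the more the constraint is backed off.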

Journal ArticleDOI
TL;DR: This paper presents a new approach to an auditory model for robust speech recognition in noisy environments that consists of cochlear bandpass filters and nonlinear operations in which frequency information of the signal is obtained by zero-crossing intervals.
Abstract: This paper presents a new approach to an auditory model for robust speech recognition in noisy environments. The proposed model consists of cochlear bandpass filters and nonlinear operations in which frequency information of the signal is obtained by zero-crossing intervals. Intensity information is also incorporated by a peak detector and a compressive nonlinearity. The robustness of the zero-crossings in spectral estimation is verified by analyzing the variance of the level-crossing intervals as a function of the crossing level values. Compared with other auditory models, the proposed auditory model is computationally efficient, free from many unknown parameters, and able to serve as a robust front-end for speech recognition in noisy environments. Experimental results of speech recognition demonstrate the robustness of the proposed method in various types of noisy environments.
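The zero-crossing idea is easy to demonstrate: for a narrowband signal, the crossing rate encodes frequency. A sketch assuming a pure tone (in the model, the cochlear bandpass filters would supply such narrowband signals, and intervals rather than counts are used):

```python
import math

def zc_freq(samples, fs):
    """Estimate frequency from zero crossings: a narrowband signal crosses
    zero twice per cycle, so f ~ crossings * fs / (2 * N)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return crossings * fs / (2.0 * len(samples))

fs = 16000
tone = [math.sin(2 * math.pi * 440.0 * n / fs) for n in range(fs)]
f_hat = zc_freq(tone, fs)
```

Zero-crossing locations are insensitive to amplitude scaling and mild additive noise, which is the robustness property the paper exploits for spectral estimation in noisy environments.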

Book
22 Sep 1999
TL;DR: A high-order P-type iterative learning controller for uncertain nonlinear discrete-time systems using current iteration tracking error and an initial state learning method for iterativeLearning control of uncertain time-varying systems.
Abstract: Contents:
- High-order iterative learning control of uncertain nonlinear systems with state delays.
- High-order P-type iterative learning controller using current iteration tracking error.
- Iterative learning control for uncertain nonlinear discrete-time systems using current iteration tracking error.
- Iterative learning control for uncertain nonlinear discrete-time feedback systems with saturation.
- Initial state learning method for iterative learning control of uncertain time-varying systems.
- High-order terminal iterative learning control with an application to a rapid thermal process for chemical vapor deposition.
- Designing iterative learning controllers via noncausal filtering.
- Practical iterative learning control using weighted local symmetrical double-integral.
- Iterative learning identification with an application to aerodynamic drag coefficient curve extraction problem.
- Iterative learning control of functional neuromuscular stimulation systems.
- Conclusions and future research.
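P-type iterative learning control is simple to sketch. Assuming a memoryless plant y = b·u with unknown gain b (a deliberately simplified stand-in for the book's dynamic, uncertain systems), the tracking error contracts by |1 - γb| per trial whenever that factor is below one.

```python
import math

def ilc_trials(b=2.0, gamma=0.3, trials=15, T=50):
    """P-type ILC update u_{k+1}(t) = u_k(t) + gamma * e_k(t) on the
    memoryless plant y(t) = b * u(t); returns the final max tracking error."""
    ref = [math.sin(2 * math.pi * t / T) for t in range(T)]
    u = [0.0] * T
    for _ in range(trials):
        e = [r - b * ut for r, ut in zip(ref, u)]       # trial error
        u = [ut + gamma * et for ut, et in zip(u, e)]   # learning update
    return max(abs(r - b * ut) for r, ut in zip(ref, u))

final_err = ilc_trials()
```

Here |1 - γb| = 0.4, so fifteen trials shrink the worst-case error by roughly six orders of magnitude; the controller never needs to know b, only that γ is small enough.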

Journal ArticleDOI
TL;DR: An implementation of the Gilbert-Johnson-Keerthi algorithm for computing the distance between convex objects is presented that has improved performance, robustness, and versatility over earlier implementations.
Abstract: This paper presents an implementation of the Gilbert-Johnson-Keerthi algorithm for computing the distance between convex objects that has improved performance, robustness, and versatility over earlier implementations. The algorithm presented here is especially suitable for use in collision detection of objects modeled using various types of geometric primitives, such as boxes, cones, and spheres, and their images under affine transformation.

Journal ArticleDOI
TL;DR: The adaptive control laws proposed in this paper do not require any dynamic dominating signal to guarantee the robustness property of Lagrange stability and can be regarded as a robustification of the now popular adaptive backstepping algorithm.
Abstract: This paper presents a constructive robust adaptive nonlinear control scheme which can be regarded as a robustification of the now popular adaptive backstepping algorithm. The allowed class of uncertainties includes nonlinearly appearing parametric uncertainty, uncertain nonlinearities, and unmeasured input-to-state stable dynamics. The adaptive control laws proposed in this paper do not require any dynamic dominating signal to guarantee the robustness property of Lagrange stability. The numerical example of a simple pendulum with unknown parameters and without velocity measurement illustrates our theoretical results.

Journal ArticleDOI
TL;DR: A novel combined backstepping and small-gain approach is presented to the global adaptive output feedback control of a class of uncertain nonlinear systems with unmodeled dynamics and can be made robust against dynamic uncertainties by means of recent nonlinear small-gain results.

Journal ArticleDOI
01 Dec 1999
TL;DR: Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximation to underlying functions; (2) faster learning speed; (3) better size of network; (4) high robustness to outliers.
Abstract: Function approximation has been found in many applications. The radial basis function (RBF) network is one approach which has shown great promise in this sort of problem because of its faster learning capacity. A traditional RBF network takes Gaussian functions as its basis functions and adopts the least-squares criterion as the objective function. However, it still suffers from two major problems. First, it is difficult to use Gaussian functions to approximate constant values. If a function has nearly constant values in some intervals, the RBF network will be found inefficient in approximating these values. Second, when the training patterns incur a large error, the network will interpolate these training patterns incorrectly. In order to cope with these problems, an RBF network is proposed in this paper which is based on sequences of sigmoidal functions and a robust objective function. The former replaces the Gaussian functions as the basis functions of the network so that constant-valued functions can be approximated accurately, while the latter is used to restrain the influence of large errors. Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximation to underlying functions; (2) faster learning speed; (3) smaller network size; (4) high robustness to outliers.

Journal ArticleDOI
TL;DR: In this article, the authors present a general framework for mesh adaption in strongly nonlinear, possibly dynamic, problems, where the solutions of the incremental boundary value problem for a wide class of materials, including nonlinear elastic materials, compressible Newtonian fluids and viscoplastic solids, obey a minimum principle.

Journal ArticleDOI
TL;DR: This work describes RICP, a robust algorithm for registering and finding correspondences in sets of 3-D points with significant percentages of missing data, and therefore useful for both motion analysis and reverse engineering.

Proceedings ArticleDOI
17 Oct 1999
TL;DR: A two-step process for correction of 'systematic errors' in encoder measurements followed by fusion of the calibrated odometry with a gyroscope and GPS resulting in a robust localization scheme for localizing mobile robots is described.
Abstract: A low cost strategy based on well calibrated odometry is presented for localizing mobile robots. The paper describes a two-step process for correction of 'systematic errors' in encoder measurements, followed by fusion of the calibrated odometry with a gyroscope and GPS, resulting in a robust localization scheme. A Kalman filter operating on data from the sensors is used for estimating the position and orientation of the robot. Experimental results are presented that show an improvement of at least one order of magnitude in accuracy compared to the un-calibrated, un-filtered case. Our method is systematic, simple and yields very good results. We show that this strategy proves useful when the robot is using GPS to localize itself as well as when GPS becomes unavailable for some time. As a result, the robot can move in and out of enclosed spaces, such as buildings, while keeping track of its position on the fly.

Proceedings ArticleDOI
08 Mar 1999
TL;DR: The limitations of this approach are discussed, and an alternative is developed by extending Sutton's work on linear systems to the general, nonlinear case; the resulting online algorithms are computationally little more expensive than other acceleration techniques and do not assume statistical independence between successive training patterns.
Abstract: Gain adaptation algorithms for neural networks typically adjust learning rates by monitoring the correlation between successive gradients. Here we discuss the limitations of this approach, and develop an alternative by extending Sutton's work on linear systems to the general, nonlinear case. The resulting online algorithms are computationally little more expensive than other acceleration techniques, do not assume statistical independence between successive training patterns, and do not require an arbitrary smoothing parameter. In our benchmark experiments, they consistently outperform other acceleration methods, and show remarkable robustness when faced with non-i.i.d. sampling of the input space.
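The "typical approach" the abstract critiques, growing a gain while successive gradients agree in sign and shrinking it when they disagree, can be sketched on a 1-D quadratic; the adaptation constants are illustrative, and this is the baseline being criticized, not the paper's proposed algorithm.

```python
def adapt_gain_descent(grad, w=0.0, eta=0.05, up=1.2, down=0.5, iters=100):
    """Gradient descent with sign-correlation gain adaptation: grow the
    learning rate while successive gradients agree, shrink it on a sign flip."""
    prev_g = 0.0
    for _ in range(iters):
        g = grad(w)
        if g * prev_g > 0:
            eta *= up          # gradients agree: accelerate
        elif g * prev_g < 0:
            eta *= down        # overshoot detected: back off
        w -= eta * g
        prev_g = g
    return w

# minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3)
w_star = adapt_gain_descent(lambda w: 2.0 * (w - 3.0))
```

On this clean deterministic problem the heuristic works well; the paper's point is that gradient-sign correlation becomes unreliable under noisy, correlated sampling, which motivates the extension of Sutton's approach instead.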

Journal ArticleDOI
TL;DR: It is observed that it is possible to synchronize the master-slave systems up to a relatively small error bound, even in the case of different qualitative behavior between the master and the uncontrolled slave system, such as limit cycles and stable equilibria.
Abstract: In this paper a method for robust synthesis of full static-state error feedback and dynamic-output error feedback for master-slave synchronization of Lur'e systems is presented. Parameter mismatch between the systems is considered in the synchronization schemes. Sufficient conditions for uniform synchronization with a bound on the synchronization error are derived, based on a quadratic Lyapunov function. The matrix inequalities from the case without parameter mismatch between the Lur'e systems remain preserved, but an additional robustness criterion must be taken into account. The robustness criterion is based on an uncertainty relation between the synchronization error bound and the parameter mismatch. The robust synthesis method is illustrated on Chua's circuit with the double scroll. One observes that it is possible to synchronize the master-slave systems up to a relatively small error bound, even in the case of different qualitative behavior between the master and the uncontrolled slave system, such as limit cycles and stable equilibria.

Journal ArticleDOI
TL;DR: This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptivelearning rate for each weight and apply the Goldstein/Armijo line search.
Abstract: This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
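The local Lipschitz estimate requires no extra gradient evaluations: L ≈ |g_k - g_{k-1}| / |w_k - w_{k-1}| uses quantities already computed, and the learning rate is set to its reciprocal. A 1-D sketch under that assumption (not the article's full Goldstein/Armijo line-search machinery):

```python
def lipschitz_descent(grad, w=0.0, eta=0.1, iters=20):
    """Gradient descent with step size 1/L, where L is a local Lipschitz
    estimate formed from successive gradients; no extra evaluations needed."""
    g_prev, w_prev = grad(w), w
    w = w - eta * g_prev                   # bootstrap step with initial eta
    for _ in range(iters):
        g = grad(w)
        dw, dg = w - w_prev, g - g_prev
        if abs(dw) > 1e-12 and abs(dg) > 1e-12:
            eta = abs(dw) / abs(dg)        # eta = 1 / local Lipschitz estimate
        w_prev, g_prev = w, g
        w = w - eta * g
    return w

# minimize f(w) = (w - 3)^2; its gradient 2*(w - 3) has Lipschitz constant 2
w_star = lipschitz_descent(lambda w: 2.0 * (w - 3.0))
```

For a quadratic the estimate recovers the true constant (L = 2, so eta = 0.5) after one step, and the iteration jumps to the minimizer; on general problems the estimate adapts to the local curvature, which is the stability property the article builds on.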

Proceedings ArticleDOI
01 Jan 1999
TL;DR: An original method for tracking, in an image sequence, complex objects which can be approximately modeled by a polyhedral shape that fulfills real-time constraints along with reliability and robustness requirements is presented.
Abstract: We present an original method for tracking, in an image sequence, complex objects which can be approximately modeled by a polyhedral shape. The approach relies on the estimation of the 2D object image motion along with the computation of the 3D object pose. The proposed method fulfills real-time constraints along with reliability and robustness requirements. Real tracking experiments and results concerning a visual servoing positioning task are presented.


Journal ArticleDOI
TL;DR: An approach for the determination of robust design solutions is outlined, where uncertainty is quantified and its effects mitigated and the robust solution is found through maximization of the probability of an overall figure of merit achieving or exceeding a specified target.
Abstract: Most current paradigms in multidisciplinary design analysis and optimization fail to address the presence of uncertainty at numerous levels of the design hierarchy and over the design process time line. Consequently, the issue of robustness of the design is neglected. An approach for the determination of robust design solutions is outlined in this paper, where uncertainty is quantified and its effects mitigated. The robust solution is found through maximization of the probability of an overall figure of merit achieving or exceeding a specified target. The proposed methodology is referred to as robust design simulation (RDS). Arguments as to why a probabilistic approach to aircraft design is preferable over the traditional deterministic approaches are presented, along with a step-by-step description of how one could implement the RDS. An application involving the high-speed civil transport is conducted as a case study to demonstrate the proposed method and to introduce an evaluation criterion that guarantees the highest customer satisfaction.

Journal ArticleDOI
TL;DR: In this paper, the generalized trimmed k-means method was proposed to improve on the robustness of generalized k-means, which do not inherit the robustness properties of the M estimator from which they came, by combining the generalized k-means idea with a so-called impartial trimming procedure.
Abstract: The generalized k-means method is based on the minimization of the discrepancy between a random variable (or a sample of this random variable) and a set with k points, measured through a penalty function Φ. As in the M-estimator setting (k = 1), a penalty function Φ with unbounded derivative Ψ naturally leads to nonrobust generalized k-means. However, surprisingly, the lack of robustness extends also to the case of bounded Ψ; that is, generalized k-means do not inherit the robustness properties of the M estimator from which they came. Attempting to robustify the generalized k-means method, the generalized trimmed k-means method arises from combining the k-means idea with a so-called impartial trimming procedure. In this article we study generalized k-means and generalized trimmed k-means performance from the viewpoint of Hampel's robustness criteria; that is, we investigate the influence function, breakdown point, and qualitative robustness, confirming the superiority provided by the trimming. We inc...
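The impartial-trimming idea can be sketched in one dimension: at each iteration the points farthest from their nearest center are discarded before the centers are recomputed, so a gross outlier cannot drag the centers. The initialization and trim level below are illustrative choices.

```python
def trimmed_kmeans_1d(xs, centers, trim=1, iters=20):
    """Impartially trimmed k-means in 1-D: drop the `trim` points farthest
    from their nearest center, then recompute each center from its group."""
    for _ in range(iters):
        by_distance = sorted(xs, key=lambda x: min(abs(x - c) for c in centers))
        kept = by_distance[:len(xs) - trim]            # impartial trimming
        groups = [[] for _ in centers]
        for x in kept:
            j = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

data = [-0.1, 0.0, 0.1, 0.2, 9.9, 10.0, 10.1, 10.2, 100.0]
c1, c2 = trimmed_kmeans_1d(data, centers=[0.0, 10.0])  # init near the modes
```

Plain k-means would either absorb the point at 100 into the upper cluster (shifting its center far off) or waste a center on it; the trimmed variant discards it and recovers the two cluster means.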

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A simultaneous multi-frame super-resolution video reconstruction procedure utilizing spatio-temporal smoothness constraints and motion estimator confidence parameters is proposed; the confidence parameters and temporal constraints yield higher quality super-resolved reconstructions with improved robustness to motion estimation errors.
Abstract: A simultaneous multi-frame super-resolution video reconstruction procedure, utilizing spatio-temporal smoothness constraints and motion estimator confidence parameters, is proposed. The ill-posed inverse problem of reconstructing super-resolved imagery from the low resolution, degraded observations is formulated as a statistical inference problem, and a Bayesian, maximum a posteriori (MAP) approach is utilized for its approximate solution. The inclusion of motion estimator confidence parameters and temporal constraints results in higher quality super-resolution reconstructions with improved robustness to motion estimation errors.