
Showing papers on "Robustness (computer science)" published in 2002



Journal ArticleDOI
TL;DR: The method introduces complexity parameters for time series based on comparison of neighboring values and shows that this complexity behaves similarly to Lyapunov exponents and is particularly useful in the presence of dynamical or observational noise.
Abstract: We introduce complexity parameters for time series based on comparison of neighboring values. The definition directly applies to arbitrary real-world data. For some well-known chaotic dynamical systems it is shown that our complexity behaves similarly to Lyapunov exponents, and is particularly useful in the presence of dynamical or observational noise. The advantages of our method are its simplicity, extremely fast calculation, robustness, and invariance with respect to nonlinear monotonous transformations.
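The neighbor-comparison idea is concrete enough to sketch in a few lines. The snippet below is an illustrative reconstruction of counting ordinal patterns of consecutive values; the function name, normalization, and tie handling are our assumptions, not the authors' code:

```python
import math
from itertools import permutations

def permutation_entropy(series, order=3):
    """Complexity of a time series from the distribution of ordinal
    patterns of `order` consecutive values, normalized to [0, 1]."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(series) - order + 1
    for i in range(n):
        window = series[i:i + order]
        # ordinal pattern: the ranks of the values within the window
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values() if c)
    return h / math.log(math.factorial(order))

# a monotone (fully ordered) series has zero ordinal complexity
print(permutation_entropy([1, 2, 3, 4, 5, 6, 7, 8]))  # 0.0
```

Because only order relations between neighbors enter, the value is unchanged by any nonlinear monotonous transformation of the data, which is exactly the invariance the abstract claims.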

3,433 citations


Journal ArticleDOI
TL;DR: The methods of this paper are illustrated for RBF kernels and demonstrate how to obtain robust estimates with selection of an appropriate number of hidden units, in the case of outliers or non-Gaussian error distributions with heavy tails.

1,197 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: This work proposes using coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e. round-trip propagation and transmission delay), and proposes the GNP approach, based on absolute coordinates computed from modeling the Internet as a geometric space.
Abstract: We propose using coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e. round-trip propagation and transmission delay). We study two mechanisms. The first is a previously proposed scheme, called the triangulated heuristic, which is based on relative coordinates that are simply the distances from a host to some special network nodes. We propose the second mechanism, called global network positioning (GNP), which is based on absolute coordinates computed from modeling the Internet as a geometric space. Since end hosts maintain their own coordinates, these approaches allow end hosts to compute their inter-host distances as soon as they discover each other. Moreover, coordinates are very efficient in summarizing inter-host distances, making these approaches very scalable. By performing experiments using measured Internet distance data, we show that both coordinates-based schemes are more accurate than the existing state of the art system IDMaps, and the GNP approach achieves the highest accuracy and robustness among them.
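The absolute-coordinates idea can be illustrated with a toy embedding that fits host coordinates to measured delays by gradient descent on the squared error. GNP itself uses landmark hosts and a downhill-simplex solver, so everything below (names, learning rate, iteration count) is an illustrative assumption:

```python
import math
import random

def embed(hosts, measured, dim=2, steps=3000, lr=0.05):
    """Toy GNP-style embedding: assign each host coordinates in a dim-D
    geometric space so that Euclidean distances approximate measured
    delays.  `measured` maps (host_a, host_b) pairs to delay."""
    random.seed(0)
    pos = {h: [random.uniform(0.0, 1.0) for _ in range(dim)] for h in hosts}
    for _ in range(steps):
        for (a, b), d in measured.items():
            diff = [pa - pb for pa, pb in zip(pos[a], pos[b])]
            dist = math.sqrt(sum(x * x for x in diff)) or 1e-9
            g = (dist - d) / dist        # gradient factor of squared error
            for k in range(dim):
                pos[a][k] -= lr * g * diff[k]
                pos[b][k] += lr * g * diff[k]
    return pos

# three hosts whose pairwise delays form a consistent 3-4-5 triangle
delays = {("A", "B"): 3.0, ("B", "C"): 4.0, ("A", "C"): 5.0}
pos = embed(["A", "B", "C"], delays)
est = math.dist(pos["A"], pos["C"])   # predicted, not measured, distance
print(round(est, 2))                  # should land near 5.0
```

Once coordinates are computed, any two hosts can estimate their distance from coordinates alone, without a fresh measurement — the scalability property the abstract emphasizes.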

1,177 citations


Journal ArticleDOI
TL;DR: In this article, the synchronization phenomenon in scale-free dynamical networks is investigated, and it is shown that if the coupling strength of a scale-free dynamical network is greater than a positive threshold, then the network will synchronize no matter how large it is.
Abstract: Recently, it has been demonstrated that many large complex networks display a scale-free feature, that is, their connectivity distributions are in the power-law form. In this paper, we investigate the synchronization phenomenon in scale-free dynamical networks. We show that if the coupling strength of a scale-free dynamical network is greater than a positive threshold, then the network will synchronize no matter how large it is. We show that the synchronizability of a scale-free dynamical network is robust against random removal of nodes, but is fragile to specific removal of the most highly connected nodes.
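The robust-yet-fragile claim is easy to reproduce numerically. The sketch below grows a small scale-free graph by preferential attachment and compares the largest connected component after random versus targeted (highest-degree) removal of the same number of nodes; the sizes, seeds, and simple generator are arbitrary choices, not the paper's setup:

```python
import random
from collections import defaultdict

def preferential_attachment(n, m=2, seed=1):
    """Grow a scale-free graph: each new node attaches to m existing
    nodes chosen with probability proportional to current degree."""
    random.seed(seed)
    edges, targets = {(0, 1)}, [0, 1]
    for v in range(2, n):
        chosen = set()
        while len(chosen) < min(m, v):
            chosen.add(random.choice(targets))  # degree-biased sampling
        for u in chosen:
            edges.add((u, v))
            targets += [u, v]
    return edges

def largest_component(alive, edges):
    """Size of the largest connected component among surviving nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v); adj[v].add(u)
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            x = stack.pop(); size += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y); stack.append(y)
        best = max(best, size)
    return best

n = 300
edges = preferential_attachment(n)
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1; deg[v] += 1
hubs = sorted(range(n), key=lambda x: -deg[x])[:30]  # targeted removal
random.seed(2)
randoms = random.sample(range(n), 30)                # random removal
lc_rand = largest_component(set(range(n)) - set(randoms), edges)
lc_hub = largest_component(set(range(n)) - set(hubs), edges)
print(lc_rand, lc_hub)  # random removal leaves a much larger component
```

The same asymmetry the paper proves for synchronizability shows up here for plain connectivity: deleting hubs does far more damage than deleting the same number of random nodes.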

1,089 citations


Journal ArticleDOI
TL;DR: This paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality.
Abstract: Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims that these features are the most important aspects of complexity, not accidents of evolution or artifices of engineering design, but inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality.

706 citations


Journal ArticleDOI
TL;DR: The important role of evolutionary algorithms in multi-objective optimisation is highlighted, and evolutionary advances in adaptive control and multidisciplinary design are predicted, along with significant applications in parameter and structure optimisation for controller design and model identification, as well as in fault diagnosis, reliable systems, robustness analysis, and robot control.

612 citations


Journal ArticleDOI
TL;DR: The analysis of the proposed fault isolation scheme provides rigorous analytical results concerning the fault isolation time, and two simulation examples are given to show the effectiveness of the fault diagnosis methodology.
Abstract: This paper presents a robust fault diagnosis scheme for abrupt and incipient faults in nonlinear uncertain dynamic systems. A detection and approximation estimator is used for online health monitoring. Once a fault is detected, a bank of isolation estimators is activated for the purpose of fault isolation. A key design issue of the proposed fault isolation scheme is the adaptive residual threshold associated with each isolation estimator. A fault that has occurred can be isolated if the residual associated with the matched isolation estimator remains below its corresponding adaptive threshold, whereas at least one of the components of the residuals associated with all the other estimators exceeds its threshold at some finite time. Based on the class of nonlinear uncertain systems under consideration, an isolation decision scheme is devised and fault isolability conditions are given, characterizing the class of nonlinear faults that are isolable by the robust fault isolation scheme. The nonconservativeness of the fault isolability conditions is illustrated by deriving a subclass of nonlinear systems and of faults for which these conditions are also necessary for fault isolability. Moreover, the analysis of the proposed fault isolation scheme provides rigorous analytical results concerning the fault isolation time. Two simulation examples are given to show the effectiveness of the fault diagnosis methodology.
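The isolation decision itself can be stated compactly in code. This sketch uses fixed per-estimator thresholds for readability, whereas the paper's thresholds are adaptive (time-varying); all names and the data layout are illustrative:

```python
def isolate_fault(residuals, thresholds):
    """Match a fault to one of N isolation estimators: candidate i is
    declared the fault if its residual stays below its threshold at every
    time step, while every other candidate's residual exceeds its own
    threshold at some time.  `residuals` is a list over time of
    per-candidate residual values."""
    n = len(thresholds)
    stays_below = [all(r[i] <= thresholds[i] for r in residuals)
                   for i in range(n)]
    exceeds = [any(r[i] > thresholds[i] for r in residuals)
               for i in range(n)]
    matches = [i for i in range(n)
               if stays_below[i] and all(exceeds[j] for j in range(n) if j != i)]
    return matches[0] if len(matches) == 1 else None  # None: not isolable yet

# three fault hypotheses; residual traces over four time steps
residuals = [[0.1, 0.4, 0.2],
             [0.2, 0.8, 0.9],
             [0.1, 1.1, 1.3],
             [0.2, 0.9, 1.5]]
print(isolate_fault(residuals, thresholds=[0.3, 1.0, 1.2]))  # 0
```

Returning `None` until exactly one hypothesis survives mirrors the paper's design choice of waiting for a finite isolation time rather than guessing early.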

571 citations


Journal ArticleDOI
TL;DR: A new approach for watermarking of digital images providing robustness to geometrical distortions, using an embedding and detection scheme in which the mark is bound to a content descriptor defined by salient points.
Abstract: This paper presents a new approach for watermarking of digital images providing robustness to geometrical distortions. The weaknesses of classical watermarking methods to geometrical distortions are outlined first. Geometrical distortions can be decomposed into two classes: global transformations such as rotations and translations, and local transformations such as the StirMark attack. An overview of existing self-synchronizing schemes is then presented. These schemes can use periodical properties of the mark, invariant properties of transforms, template insertion, or information provided by the original image to counter geometrical distortions. Thereafter, a new class of watermarking schemes using the image content is presented. We propose an embedding and detection scheme where the mark is bound with a content descriptor defined by salient points. Three different types of feature points are studied and their robustness to geometrical transformations is evaluated to develop an enhanced detector. The embedding of the signature is done by extracting feature points of the image and performing a Delaunay tessellation on the set of points. The mark is embedded using a classical additive scheme inside each triangle of the tessellation. The detection is done using correlation properties on the different triangles. The performance of the presented scheme is evaluated after JPEG compression, geometrical attacks, and transformations. Results show that the scheme is robust to these different manipulations. Finally, in our concluding remarks, we analyze the different perspectives of such a content-based watermarking scheme.

496 citations


Book ChapterDOI
28 May 2002
TL;DR: It is shown that EM-ICP robustly aligns the barycenters and inertia moments with a high variance, while it tends toward the accurate ICP for a small variance; a multi-scale approach using an annealing scheme on this parameter combines robustness and accuracy.
Abstract: We investigate in this article the rigid registration of large sets of points, generally sampled from surfaces. We formulate this problem as a general Maximum-Likelihood (ML) estimation of the transformation and the matches. We show that, in the specific case of a Gaussian noise, it corresponds to the Iterative Closest Point algorithm (ICP) with the Mahalanobis distance. Then, considering matches as a hidden variable, we obtain a slightly more complex criterion that can be efficiently solved using Expectation-Maximization (EM) principles. In the case of a Gaussian noise, this new method corresponds to an ICP with multiple matches weighted by normalized Gaussian weights, giving birth to the EM-ICP acronym of the method. The variance of the Gaussian noise is a new parameter that can be viewed as a "scale or blurring factor" on our point clouds. We show that EM-ICP robustly aligns the barycenters and inertia moments with a high variance, while it tends toward the accurate ICP for a small variance. Thus, the idea is to use a multi-scale approach using an annealing scheme on this parameter to combine robustness and accuracy. Moreover, we show that at each "scale", the criterion can be efficiently approximated using a simple decimation of one point set, which drastically speeds up the algorithm. Experiments on real data demonstrate a spectacular improvement of the performance of EM-ICP with respect to the standard ICP algorithm in terms of robustness (a factor of 3 to 4) and speed (a factor of 10 to 20), with similar performance in precision. Though the multiscale scheme is only justified with EM, it can also be used to improve ICP, in which case its performance then reaches that of EM when the data are not too noisy.
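The E-step that distinguishes EM-ICP from classical ICP reduces to computing normalized Gaussian weights over candidate matches. The 1-D sketch below (illustrative names; the real method operates on 3-D point clouds with a rigid transformation in the loop) shows how the variance parameter interpolates between ICP-like and barycenter-like behavior:

```python
import math

def em_weights(point, model_points, sigma):
    """E-step of EM-ICP, sketched in 1-D: normalized Gaussian weights of
    one scene point against every model point.  `sigma` plays the role of
    the 'scale or blurring factor' on the point clouds."""
    w = [math.exp(-(point - m) ** 2 / (2.0 * sigma ** 2))
         for m in model_points]
    total = sum(w)
    return [x / total for x in w]

model = [0.0, 1.0, 5.0]
# small sigma: weight concentrates on the closest point (classical ICP)
print([round(w, 3) for w in em_weights(1.1, model, 0.1)])
# large sigma: weight spreads over all points (coarse, barycenter-like)
print([round(w, 3) for w in em_weights(1.1, model, 10.0)])
```

Annealing `sigma` from large to small, as the abstract describes, starts with these diffuse weights for robustness and ends with near-binary ICP matches for accuracy.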

470 citations


Journal ArticleDOI
TL;DR: Adaptive and iterative control algorithms based on explicit criterion minimization are briefly reviewed, and an overview of one such algorithm, iterative feedback tuning (IFT), is presented.
Abstract: Adaptive and iterative control algorithms based on explicit criterion minimization are briefly reviewed and an overview of one such algorithm, iterative feedback tuning (IFT), is presented. The basic IFT algorithm is reviewed for both single-input/single-output and multi-input/multi-output systems. Subsequently the application to non-linear systems is discussed. Stability and robustness aspects are covered. A survey of existing extensions, applications and related methods is also provided. Copyright © 2002 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A coupled multiphase propagation that enforces mutually exclusive propagating curves and increases robustness as well as the convergence rate is proposed and has been validated on three important applications in computer vision.

Journal ArticleDOI
10 Dec 2002
TL;DR: This paper proposes a new approach to resolve difficulties in vision feedback control loop techniques by coupling path planning in image space and image-based control and ensures robustness with respect to modeling errors.
Abstract: Vision feedback control loop techniques are efficient for a large class of applications, but they come up against difficulties when the initial and desired robot positions are distant. Classical approaches are based on the regulation to zero of an error function computed from the current measurement and a constant desired one. With such an approach, it is not obvious how to introduce constraints on the realized trajectories or to ensure convergence for all initial configurations. In this paper, we propose a new approach that resolves these difficulties by coupling path planning in image space with image-based control. Constraints, such as keeping the object in the camera field of view or avoiding the robot's joint limits, can be taken into account at the task planning level. Furthermore, with this approach, current measurements always remain close to their desired values, and control by image-based servoing ensures robustness with respect to modeling errors. The proposed method is based on the potential field approach and applies whether the object's shape and dimensions are known or not, and whether the calibration parameters of the camera are well or badly estimated. Finally, real-time experimental results using an eye-in-hand robotic system are presented and confirm the validity of our approach.

Journal ArticleDOI
TL;DR: In this paper, attention is limited to two-dimensional inviscid flows using a standard finite volume discretization, although the procedure may be readily applied to other types of multidimensional problems and discretizations.

Book
01 Mar 2002
TL;DR: Recent developments on the class of uncertain deterministic/stochastic dynamical systems with time-delay are treated, including problems such as stochastic stability, stabilizability under memory and memoryless state feedback control, H∞ control, filtering, and robustness.
Abstract: From the Publisher: Deterministic and Stochastic Time-Delay Systems provides professionals and advanced students in control engineering with the most recent results on deterministic and stochastic time-delay systems. It is an excellent text/reference for graduate-level engineering students, practicing control engineers, and researchers in control engineering. The control of uncertain systems has dominated the research effort of the control community during the last two decades. Some practical dynamical systems have time-delay in their dynamics, which makes their control a complicated task even in the deterministic case. The presence of time-delay in dynamical systems is a well-known cause of instability and performance degradation. This book presents recent developments on the class of uncertain deterministic/stochastic dynamical systems with time-delay. Problems such as stochastic stability, stabilizability under memory and memoryless state feedback and output feedback control, H∞ control, filtering and its robustness are treated. Practical implications of the different methods are considered and numerical algorithms are provided for implementation.

Journal ArticleDOI
07 Aug 2002
TL;DR: Two classes of three-channel control architectures that are perfectly transparent under ideal conditions are introduced, and the stability robustness of the proposed architectures to delays is rigorously analyzed, leading to certain bounds on force feedforward control parameters.
Abstract: This paper first investigates the issue of transparency in time-delayed teleoperation. It then studies the advantages of employing local force feedback for enhanced stability and performance. In addition, two classes of three-channel control architectures that are perfectly transparent under ideal conditions are introduced. The stability robustness of the proposed architectures to delays is rigorously analyzed, leading to certain bounds on force feedforward control parameters. Experimental results are included in support of the theoretical work.

Journal ArticleDOI
TL;DR: A one-hour-ahead load forecasting method is proposed in which the forecasted load power is obtained by adding a correction to selected similar-day data.
Abstract: Load forecasting has always been an essential part of efficient power system planning and operation. Several electric power companies now forecast load power based on conventional methods. However, since the relationship between load power and the factors influencing it is nonlinear, it is difficult to identify this nonlinearity using conventional methods. Most papers deal with 24-hour-ahead load forecasting or next-day peak load forecasting, predicting demand power from forecasted temperature. When the temperature curve changes rapidly on the forecast day, load power changes greatly and the forecast error increases. In conventional methods, neural networks use all similar days' data to learn the trend of similarity. However, learning from all similar days' data is complex and ill-suited to neural network training, so it is necessary to reduce the network structure and learning time. To overcome these problems, we propose a one-hour-ahead load forecasting method using the correction of similar-day data: the forecasted load power is obtained by adding a correction to the selected similar-day data.

Journal ArticleDOI
TL;DR: In this article, an efficient and reliable tabu search (TS)-based approach to solve the optimal power flow (OPF) problem is presented, which employs TS algorithm for optimal settings of the control variables of the OPF problem.
Abstract: This paper presents an efficient and reliable tabu search (TS)-based approach to solve the optimal power flow (OPF) problem. The proposed approach employs TS algorithm for optimal settings of the control variables of the OPF problem. Incorporation of TS as a derivative-free optimization technique in solving OPF problem significantly reduces the computational burden. One of the main advantages of TS algorithm is its robustness to its own parameter settings as well as the initial solution. In addition, TS is characterized by its ability to avoid entrapment in local optimal solution and prevent cycling by using flexible memory of search history. The proposed approach has been examined on the standard IEEE 30-bus test system with different objectives and generator cost curves. The results are promising and show the effectiveness and robustness of the proposed approach.
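The core TS loop — move to the best non-tabu neighbor, even uphill, while a short memory forbids recently visited solutions — can be sketched on a toy one-variable problem. The OPF control variables, neighborhood, and parameters in the paper are of course far richer; everything below is illustrative:

```python
def tabu_search(cost, neighbors, start, iters=200, tenure=10):
    """Minimal tabu search: always move to the best non-tabu neighbor,
    even if it is worse, keeping recently visited solutions tabu to
    prevent cycling back to a local optimum."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [x for x in neighbors(current) if x not in tabu]
        if not candidates:
            break                      # neighborhood exhausted
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                # oldest move becomes legal again
        if cost(current) < cost(best):
            best = current
    return best

# toy 'control variable' problem: integer setting with many poor odd values
f = lambda x: (x - 3) ** 2 + 5 * (x % 2)
nbrs = lambda x: [max(-10, x - 1), min(10, x + 1)]
print(tabu_search(f, nbrs, start=-10))  # 2 (a global optimum, cost 1)
```

Note that the search is derivative-free, as the abstract stresses: `cost` is only ever evaluated, never differentiated.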

Proceedings Article
08 Jul 2002
TL;DR: A new multi-view algorithm, Co-EMT, which combines semi-supervised and active learning is introduced, which outperforms the other algorithms both on the parameterized problems and on two additional real world domains.
Abstract: In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated (i.e., every example is identically labeled by the target concepts in each view; and, given the label of any example, its descriptions in each view are independent). As these assumptions are unlikely to hold in practice, it is crucial to understand the behavior of multi-view algorithms on problems with incompatible, correlated views. We address this issue by studying several algorithms on a parameterized family of text classification problems in which we control both view correlation and incompatibility. We first show that existing semi-supervised algorithms are not robust over the whole spectrum of parameterized problems. Then we introduce a new multi-view algorithm, Co-EMT, which combines semi-supervised and active learning. Co-EMT outperforms the other algorithms both on the parameterized problems and on two additional real world domains. Our experiments suggest that Co-EMT’s robustness comes from active learning compensating for the correlation of the views.

Journal ArticleDOI
TL;DR: A novel region-based approach to snakes designed to optimally separate the values of certain image statistics over a known number of region types is developed, endowing the algorithm with robustness to initial contour placement above and beyond the significant improvement exhibited by other region-based snakes over earlier edge-based snakes.

Journal ArticleDOI
TL;DR: This work argues that with a systematic incremental methodology one can go beyond shallow parsing to deeper language analysis, while preserving robustness, and describes a generic system based on such a methodology and designed for building robust analyzers that tackle deeper linguistic phenomena than those traditionally handled by the now widespread shallow parsers.
Abstract: Robustness is a key issue for natural language processing in general and parsing in particular, and many approaches have been explored in the last decade for the design of robust parsing systems. Among those approaches is shallow or partial parsing, which produces minimal and incomplete syntactic structures, often in an incremental way. We argue that with a systematic incremental methodology one can go beyond shallow parsing to deeper language analysis, while preserving robustness. We describe a generic system based on such a methodology and designed for building robust analyzers that tackle deeper linguistic phenomena than those traditionally handled by the now widespread shallow parsers. The rule formalism allows the recognition of n-ary linguistic relations between words or constituents on the basis of global or local structural, topological and/or lexical conditions. It offers the advantage of accepting various types of inputs, ranging from raw to chunked or constituent-marked texts, so for instance it can be used to process existing annotated corpora, or to perform a deeper analysis on the output of an existing shallow parser. It has been successfully used to build a deep functional dependency parser, as well as for the task of co-reference resolution, in a modular way.

Journal ArticleDOI
TL;DR: A robust hierarchical algorithm for fully-automatic registration of a pair of images of the curved human retina photographed by a fundus microscope, making the algorithm robust to unmatchable image features and mismatches between features caused by large interframe motions.
Abstract: This paper describes a robust hierarchical algorithm for fully-automatic registration of a pair of images of the curved human retina photographed by a fundus microscope. Accurate registration is essential for mosaic synthesis, change detection, and design of computer-aided instrumentation. Central to the algorithm is a 12-parameter interimage transformation derived by modeling the retina as a rigid quadratic surface with unknown parameters. The parameters are estimated by matching vascular landmarks by recursively tracing the blood vessel structure. The parameter estimation technique, which could be generalized to other applications, is a hierarchy of models and methods, making the algorithm robust to unmatchable image features and mismatches between features caused by large interframe motions. Experiments involving 3,000 image pairs from 16 different healthy eyes were performed. Final registration errors less than a pixel are routinely achieved. The speed, accuracy, and ability to handle small overlaps compare favorably with retinal image registration techniques published in the literature.
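The 12-parameter quadratic transformation at the heart of the algorithm is simple to write down: six polynomial coefficients per output coordinate. The helper and parameter values below are illustrative, not the paper's estimated values:

```python
def quadratic_transform(params, x, y):
    """Map an image point through a 12-parameter quadratic model (six
    coefficients per output coordinate), the form used to approximate
    the projection of the curved retinal surface."""
    ax, ay = params[:6], params[6:]
    basis = [1.0, x, y, x * x, x * y, y * y]
    return (sum(a * b for a, b in zip(ax, basis)),
            sum(a * b for a, b in zip(ay, basis)))

# identity plus a mild quadratic distortion in x
p = [0, 1, 0, 1e-4, 0, 0,   # x' = x + 1e-4 * x^2
     0, 0, 1, 0, 0, 0]      # y' = y
print(quadratic_transform(p, 100.0, 50.0))  # (101.0, 50.0)
```

In the paper the 12 coefficients are estimated hierarchically from matched vascular landmarks; here they are simply fixed to show the model's shape.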

Journal ArticleDOI
TL;DR: In this paper, a family of hexapedal robots whose functional biomimetic design is made possible by shape deposition manufacturing (SDM) is presented, whose fast (over four body-lengths per second) and robust (traversal of hip-height obstacles) performance begins to compare to that seen in nature.
Abstract: Robots to date lack the robustness and performance of even the simplest animals when operating in unstructured environments. This observation has prompted an interest in biomimetic robots that take design inspiration from biology. However, even biomimetic designs are compromised by the complexity and fragility that result from using traditional engineering materials and manufacturing methods. We argue that biomimetic design must be combined with structures that mimic the way biological structures are composed, with embedded actuators and sensors and spatially-varied materials. This proposition is made possible by a layered-manufacturing technology called shape deposition manufacturing (SDM). We present a family of hexapedal robots whose functional biomimetic design is made possible by SDM's unique capabilities and whose fast (over four body-lengths per second) and robust (traversal of hip-height obstacles) performance begins to compare to that seen in nature. We describe the design and fabrication of the robots and present the results of experiments that focus on their performance and locomotion dynamics.

Proceedings ArticleDOI
07 Aug 2002
TL;DR: A cooperative CML algorithm that merges sensor and navigation information from multiple autonomous vehicles is presented, based on stochastic estimation and uses a feature-based approach to extract landmarks from the environment.
Abstract: Autonomous vehicles require the ability to build maps of an unknown environment while concurrently using these maps for navigation. Current algorithms for this concurrent mapping and localization (CML) problem have been implemented for single vehicles, but do not account for the extra positional information available when multiple vehicles operate simultaneously. Multiple vehicles have the potential to map an environment more quickly and robustly than a single vehicle. This paper presents a cooperative CML algorithm that merges sensor and navigation information from multiple autonomous vehicles. The algorithm presented is based on stochastic estimation and uses a feature-based approach to extract landmarks from the environment. The theoretical framework for the collaborative CML algorithm is presented, and a convergence theorem central to the cooperative CML problem is proved for the first time. This theorem quantifies the performance gains of collaboration, allowing for determination of the number of cooperating vehicles required to accomplish a task. A simulated implementation of the collaborative CML algorithm demonstrates substantial performance improvement over non-cooperative CML.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: A new framework for studying networks and their capacity is presented, based on algebraic methods, and it is shown that, if a multicast connection is achievable under different failure scenarios, a single static code can ensure robustness of the connection under all of those failure scenarios.
Abstract: We consider the issue of network capacity. Recent work by Li and Yeung examined the network capacity of multicast networks and related capacity to cutsets. Capacity is achieved by coding over a network. We present a new framework for studying networks and their capacity. Our framework, based on algebraic methods, is surprisingly simple and effective. For networks which are restricted to using linear codes (we make the meaning of linear codes precise, since the codes are not bit-wise linear), we find necessary and sufficient conditions for any given set of connections to be achievable over a given network. For multicast connections, linear codes are not a restrictive assumption, since all achievable connections can be achieved using linear codes. Moreover, coding can be used to maintain connections after permanent failures, such as the removal of an edge from the network. We show necessary and sufficient conditions for a set of connections to be robust to a set of permanent failures. For multicast connections, we show the rather surprising result that, if a multicast connection is achievable under different failure scenarios, a single static code can ensure robustness of the connection under all of those failure scenarios.
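The flavor of the result is easiest to see on the classic butterfly network, where a single XOR on the bottleneck edge lets both multicast sinks decode both source bits. This bit-level example illustrates linear network coding generally; it is not the paper's algebraic framework itself:

```python
def butterfly(b1, b2):
    """Linear network coding on the butterfly network: the single
    bottleneck edge carries b1 XOR b2, and each sink combines it with
    the bit it receives directly to decode both source bits."""
    coded = b1 ^ b2                 # the shared middle edge
    sink1 = (b1, coded ^ b1)        # receives b1 directly, recovers b2
    sink2 = (coded ^ b2, b2)        # receives b2 directly, recovers b1
    return sink1, sink2

print(butterfly(1, 0))  # ((1, 0), (1, 0)): both sinks decode both bits
```

Pure routing cannot achieve this rate, because the middle edge can carry only one of the two bits at a time; the static XOR code works for every input, which is the same spirit as the paper's single static code covering multiple failure scenarios.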

Proceedings ArticleDOI
07 Aug 2002
TL;DR: An approach for improved state estimation by augmenting traditional inertial navigation techniques with image-based motion estimation (IBME) is developed, which results in increased state estimation accuracy and robustness.
Abstract: Numerous upcoming NASA missions need to land safely and precisely on planetary bodies. Accurate and robust state estimation during the descent phase is necessary. Towards this end, we have developed an approach for improved state estimation by augmenting traditional inertial navigation techniques with image-based motion estimation (IBME). A Kalman filter that processes rotational velocity and linear acceleration measurements provided from an inertial measurement unit has been enhanced to accommodate relative pose measurements from the IBME. In addition to increased state estimation accuracy, IBME convergence time is reduced while robustness of the overall approach is improved. The methodology is described in detail and experimental results with a 5 DOF gantry testbed are presented.

Journal ArticleDOI
TL;DR: The objective is to control a web transport system with winder and unwinder for elastic material; an H∞ robust control strategy with varying gains is shown to render the control more robust to the radius variations.
Abstract: The objective is to control a web transport system with winder and unwinder for elastic material. A physical modeling of this plant is made based on the general laws of physics. For this type of control problem, it is extremely important to prevent the occurrence of web break or fold by decoupling the web tension and the web velocity. Due to the wide-range variation of the radius and inertia of the rollers, the system dynamics change considerably during the winding/unwinding process. Different strategies for web tension control and linear transport velocity control are presented. First, an H∞ robust control strategy, which reduces the coupling between tension and velocity, is compared to the decentralized control strategy with proportional-integral-derivative (PID) controllers commonly used in industry. Second, an H∞ robust control strategy with varying gains is shown to render the control more robust to the radius variations. Then, a linear parameter varying (LPV) control strategy with smooth scheduling of controllers is synthesized for different operating points and compared to the previous methods. Finally, this LPV control and the H∞ robust control strategy with varying gains are combined to give the best results on an experimental setup, for the rejection of the disturbances introduced by velocity variations and for the robustness to radius and inertia changes.

Journal ArticleDOI
TL;DR: The hypothesis that potential errors in models will result in parameter sensitivities is tested by analysis of two models of the biochemical oscillator underlying the Xenopus cell cycle and the analysis successfully identifies known weaknesses in the older model and suggests areas for further investigation in the more recent model.

Journal ArticleDOI
TL;DR: A novel decoding scheme for Alamouti's space-time coded transmissions over time-selective fading channels that arise due to Doppler shifts and carrier frequency offsets is proposed, employing Kalman filtering for channel tracking in order to enable ST decoding with diversity gains.
Abstract: This paper proposes a novel decoding scheme for Alamouti's (see IEEE J. Select. Areas Commun., vol.16, p.1451-1458, 1998) space-time (ST) coded transmissions over time-selective fading channels that arise due to Doppler shifts and carrier frequency offsets. Modeling the time-selective channels as random processes, we employ Kalman filtering for channel tracking in order to enable ST decoding with diversity gains. Computer simulations confirm that the proposed scheme exhibits robustness to time-selectivity with a few training symbols.
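Channel tracking with a scalar Kalman filter can be sketched by modeling the time-selective channel as an AR(1) process observed through known training symbols. All model parameters and noise levels below are illustrative assumptions, not the paper's Alamouti setup (which tracks two channels jointly):

```python
import random

def kalman_track(ys, ss, alpha=0.99, q=0.01, r=0.09):
    """Scalar Kalman filter tracking a time-selective channel modeled as
    an AR(1) process h_t = alpha*h_{t-1} + w_t, observed through known
    training symbols: y_t = h_t * s_t + v_t."""
    h, p = 0.0, 1.0                      # state estimate and its variance
    estimates = []
    for y, s in zip(ys, ss):
        h_pred, p_pred = alpha * h, alpha * alpha * p + q
        k = p_pred * s / (s * s * p_pred + r)     # Kalman gain
        h = h_pred + k * (y - s * h_pred)         # measurement update
        p = (1.0 - k * s) * p_pred
        estimates.append(h)
    return estimates

# simulate a slowly fading channel and noisy pilot observations
random.seed(0)
true_h, hs, ys, ss = 1.0, [], [], []
for _ in range(200):
    true_h = 0.99 * true_h + random.gauss(0.0, 0.1)  # channel drift
    s = random.choice([-1, 1])                       # known training symbol
    hs.append(true_h); ss.append(s)
    ys.append(true_h * s + random.gauss(0.0, 0.3))   # noisy observation
est = kalman_track(ys, ss)
mse = sum((a - b) ** 2 for a, b in zip(hs, est)) / len(hs)
print(round(mse, 3))  # small: the filter tracks the drifting channel
```

With the channel estimate available at every symbol, the decoder can apply the usual diversity combining even though the channel is no longer constant over a block, which is the role Kalman tracking plays in the proposed scheme.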

Journal ArticleDOI
TL;DR: In this article, a multidisciplinary robust design procedure that utilizes efficient methods for uncertainty analysis is developed, and the proposed techniques bring the features of a multidisciplinary design optimization framework into consideration.
Abstract: Robust design has been gaining wide attention, and its applications have been extended to making reliable decisions when designing complex engineering systems in a multidisciplinary design environment. Though the usefulness of robust design is widely acknowledged for multidisciplinary design systems, its implementation is rare. One of the reasons is the complexity and computational burden associated with the evaluation of performance variations caused by the randomness (uncertainty) of a system. A multidisciplinary robust design procedure that utilizes efficient methods for uncertainty analysis is developed here. Different from the existing uncertainty analysis techniques, our proposed techniques bring the features of a multidisciplinary design optimization (MDO) framework into consideration. The system uncertainty analysis method and the concurrent subsystem uncertainty analysis method are developed to estimate the mean and variance of system performance subject to uncertainties associated with both design parameters and design models. As shown both analytically and empirically, compared to the conventional Monte Carlo simulation approach, the proposed techniques used for uncertainty analysis significantly reduce the number of design evaluations at the system level and, therefore, improve the efficiency of robust design in the domain of MDO. A mathematical example and an electronic packaging problem are used as examples to verify the effectiveness of these approaches.
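The efficiency argument can be illustrated with first-order (Taylor) moment propagation on a toy performance function: one nominal evaluation plus a finite-difference gradient, versus many thousands of Monte Carlo samples. The function and numbers below are illustrative and are not the paper's system or concurrent-subsystem analysis methods:

```python
import random

def f(x1, x2):
    """Toy 'system performance' model standing in for a design analysis."""
    return x1 ** 2 + 3.0 * x2

def taylor_moments(mu, sigma, h=1e-5):
    """First-order Taylor estimates of the mean and variance of f(X1, X2)
    for independent inputs: a nominal run plus a finite-difference
    gradient, instead of thousands of samples."""
    grad = []
    for i in range(len(mu)):
        lo, hi = list(mu), list(mu)
        lo[i] -= h; hi[i] += h
        grad.append((f(*hi) - f(*lo)) / (2.0 * h))
    mean = f(*mu)
    var = sum((g * s) ** 2 for g, s in zip(grad, sigma))
    return mean, var

mu, sigma = [2.0, 1.0], [0.1, 0.2]
m_t, v_t = taylor_moments(mu, sigma)   # a handful of evaluations in total

random.seed(0)
samples = [f(random.gauss(mu[0], sigma[0]), random.gauss(mu[1], sigma[1]))
           for _ in range(100_000)]
m_mc = sum(samples) / len(samples)
v_mc = sum((x - m_mc) ** 2 for x in samples) / len(samples)
print(round(m_t, 3), round(v_t, 3))    # cheap analytic estimates
print(round(m_mc, 3), round(v_mc, 3))  # Monte Carlo: 100,000 evaluations
```

For this mildly nonlinear function the two estimates nearly coincide, while the evaluation counts differ by four orders of magnitude — the trade-off that motivates the paper's efficient uncertainty analysis in an MDO setting.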