
Showing papers on "Robustness (computer science) published in 1995"


Journal ArticleDOI
TL;DR: This work describes a method for building models by learning patterns of variability from a training set of correctly annotated images; the resulting models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes).
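The training step behind such models can be illustrated with a short sketch: principal component analysis of aligned landmark vectors yields a mean shape plus modes of variability. Everything below (landmark counts, noise levels, variable names) is synthetic and illustrative, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of the training step: learn modes of shape variability
# from annotated landmark sets, as in a point distribution model.
rng = np.random.default_rng(7)
mean_shape = rng.random(40)                                # 20 (x, y) landmarks, flattened
shapes = mean_shape + 0.05 * rng.normal(size=(100, 40))    # "annotated" training set

mu = shapes.mean(axis=0)
_, S, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
modes = Vt[:5]                                             # top 5 modes of variation
b = rng.normal(size=5) * (S[:5] / np.sqrt(len(shapes)))    # plausible mode weights
new_shape = mu + b @ modes                                 # a legal shape instance
print(new_shape.shape)                                     # (40,)
```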

7,969 citations


Journal ArticleDOI
TL;DR: This paper characterizes the desirable properties of a solution to optimization models when the problem data are described by a set of scenarios rather than point estimates, and develops a general model formulation, called robust optimization (RO), that explicitly incorporates the conflicting objectives of solution and model robustness.
Abstract: Mathematical programming models with noisy, erroneous, or incomplete data are common in operations research applications. Difficulties with such data are typically dealt with reactively, through sensitivity analysis, or proactively, through stochastic programming formulations. In this paper, we characterize the desirable properties of a solution to models when the problem data are described by a set of scenarios for their value, instead of using point estimates. A solution to an optimization model is defined as: solution robust if it remains "close" to optimal for all scenarios of the input data, and model robust if it remains "almost" feasible for all data scenarios. We then develop a general model formulation, called robust optimization (RO), that explicitly incorporates the conflicting objectives of solution and model robustness. Robust optimization is compared with the traditional approaches of sensitivity analysis and stochastic linear programming. The classical diet problem illustrates the issues. Robust optimization models are then developed for several real-world applications: power capacity expansion; matrix balancing and image reconstruction; air-force airline scheduling; scenario immunization for financial planning; and minimum weight structural design. We also comment on the suitability of parallel and distributed computer architectures for the solution of robust optimization models.
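The RO objective can be sketched in a few lines: an expected cost across scenarios, a variability term for solution robustness, and a penalty on scenario infeasibility for model robustness. The toy capacity problem below, with invented data and weights, illustrates the shape of the formulation rather than any of the paper's models.

```python
import numpy as np

# Hypothetical toy problem: choose capacity x to meet uncertain demand d_s.
scenarios = np.array([80.0, 100.0, 130.0])   # demand in each scenario (assumed data)
probs     = np.array([0.3, 0.5, 0.2])        # scenario probabilities (assumed)
lam, omega = 1.0, 10.0                       # robustness weights (assumed)

def ro_objective(x):
    cost = 2.0 * x + 5.0 * np.maximum(scenarios - x, 0.0)   # per-scenario cost
    expected = probs @ cost
    variability = probs @ (cost - expected) ** 2            # solution robustness term
    infeasibility = probs @ np.maximum(scenarios - x, 0.0)  # model robustness term
    return expected + lam * variability + omega * infeasibility

# Grid search over candidate capacities, purely for illustration.
xs = np.linspace(50, 150, 201)
best = xs[np.argmin([ro_objective(x) for x in xs])]
print(f"robust capacity choice: {best:.1f}")
```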

1,793 citations


Journal ArticleDOI
TL;DR: The methodology presented in this paper is applied to the gain scheduling of a missile autopilot; it bypasses most of the difficulties associated with more classical schemes such as gain-interpolation or gain-scheduling techniques.

1,439 citations


Book
01 Jan 1995
TL;DR: This theory allows us to determine whether a linear time-invariant control system containing several uncertain real parameters remains stable as the parameters vary over a set; it nicely complements the H∞ optimal theories as well as Classical Control and considerably extends the range of possibilities available to the control specialist.
Abstract: From the Book: PREFACE: The subject of robust control began to receive worldwide attention in the late 1970's when it was found that Linear Quadratic Optimal Control (H2 optimal control), state feedback through observers, and other prevailing methods of control system synthesis, such as Adaptive Control, lacked any guarantees of stability or performance under uncertainty. Thus, the issue of robustness, prominent in Classical Control, took rebirth in a modern setting. H∞ optimal control was proposed as a first approach to the solution of the robustness problem. This elegant approach and its offshoots have been intensely developed over the past 12 years or so, and constitute one of the triumphs of control theory. The H∞ theory provides a precise formulation and solution of the problem of synthesizing an output feedback compensator that minimizes the H∞ norm of a prescribed system transfer function. Many robust stabilization and performance problems can be cast in this formulation, and there now exists an effective and fairly complete theory for control system synthesis subject to norm-bounded perturbations in the H∞ framework. The H∞ theory delivers an "optimal" feedback compensator for the system. Before such a compensator can be employed in a physical (real-world) system, it is natural to test its capabilities with regard to additional design criteria not covered by the optimality criterion used. In particular, the performance of any controller under real parameter uncertainty, as well as mixed parametric-unstructured uncertainty, is an issue which is vital to most control systems. However, H∞ optimal theory is incapable of providing a direct and nonconservative answer to this important question.

The problem of robustness under parametric uncertainty received a shot in the arm in the form of Kharitonov's Theorem for interval polynomials, which appeared in the mid-1980's in the Western literature. It was originally published in 1978 in a Russian journal. With this surprising theorem the entire field of robust control under real parametric uncertainty came alive, and it can be said that Kharitonov's Theorem is the most important occurrence in this area after the development of the Routh-Hurwitz criterion. A significant development following Kharitonov's Theorem was the calculation, in 1985, by Soh, Berger and Dabke of the radius of the stability ball in the space of coefficients of a polynomial. From the mid-1980's rapid and spectacular developments have taken place in this field. As a result we now have a rigorous, coherent, and comprehensive theory to deal directly and effectively with real parameter uncertainty in control systems. This theory nicely complements the H∞ optimal theories as well as Classical Control and considerably extends the range of possibilities available to the control specialist.

The main accomplishment of this theory is that it allows us to determine if a linear time-invariant control system containing several uncertain real parameters remains stable as the parameters vary over a set. This question can be answered in a precise manner, that is, nonconservatively, when the parameters appear linearly or multilinearly in the characteristic polynomial. In developing the solution to the above problem, several important control system design problems are answered. These are: 1) the calculation of the real parametric stability margin; 2) the determination of stability and stability margins under mixed parametric and unstructured (norm-bounded or nonlinear) uncertainty; 3) the evaluation of the worst-case or robust performance, measured in the H∞ norm, over a prescribed parametric uncertainty set; and 4) the extension of classical design techniques involving Nyquist, Nichols and Bode plots and root-loci to systems containing several uncertain real parameters. These results are made possible because the theory developed provides built-in solutions to several extremal problems. It identifies a priori the critical subset of the uncertain parameter set over which stability or performance will be lost, and thereby reduces to a very small set, usually points or lines, the parameters over which robustness must be verified. This built-in optimality of the parametric theory is its main strong point, particularly from the point of view of applications. It allows us, for the first time, to devise methods to effectively carry out robust stability and performance analysis of control systems under parametric and mixed uncertainty. To balance this rather strong claim, we point out that a significant deficiency of control theory at the present time is the lack of nonconservative synthesis methods to achieve robustness under parameter uncertainty. Nevertheless, even here the sharp analysis results obtained in the parametric framework can be exploited in conjunction with synthesis techniques developed in the H∞ framework to develop design techniques to partially cover this drawback.

The objective of this book is to describe the parametric theory in a self-contained manner. The book is suitable for use as a graduate textbook and also for self-study. The entire subject matter of the book is developed from the single fundamental fact that the roots of a polynomial depend continuously on its coefficients. This fact is the basis of the Boundary Crossing Theorem developed in Chapter 1 and is repeatedly used throughout the book. Surprisingly enough, this simple idea, used systematically, is sufficient to derive even the most mathematically sophisticated results. This economy and transparency of concepts is another strength of the parametric theory. It makes the results accessible and appealing to a wide audience and allows for a unified and systematic development of the subject. The contents of the book can therefore be covered in one semester despite the size of the book. In accordance with our focus, we do not develop any results in H∞ theory, although some results from H∞ theory are used in the chapter on synthesis.

In Chapter 0, which serves as an extension of this preface, we rapidly overview some basic aspects of control systems, uncertainty models and robustness issues. We also give a brief historical sketch of Control Theory, and then describe the contents of the rest of the chapters in some detail. The theory developed in the book is presented in mathematical language. The results described in these theorems and lemmas, however, are completely oriented towards control systems applications and in fact lead to effective algorithms and graphical displays for design and analysis. We have throughout included examples to illustrate the theory, and indeed the reader who wants to avoid reading the proofs can understand the significance and utility of the results by reading through the examples. A MATLAB-based software package, the Robust Parametric Control ToolBox, has been developed by the authors in collaboration with Samir Ahmad, our graduate student. It implements most of the theory presented in the book. In fact, all the examples and figures in this book have been generated by this ToolBox. We gratefully acknowledge Samir's dedication and help in the preparation of the numerical examples given in the book. A demonstration diskette illustrating this package is included with this book.

SPB would like to thank R. Kishan Baheti, Director of the Engineering Systems Program at the National Science Foundation, for supporting his research program. LHK thanks Harry Frisch and Frank Bauer of NASA Goddard Space Flight Center and Jer-Nan Juang of NASA Langley Research Center for their support of his research, and Mike Busby, Director of the Center of Excellence in Information Systems at Tennessee State University, for his encouragement. It is a pleasure to express our gratitude to several colleagues and coworkers in this field. We thank Antonio Vicino, Alberto Tesi, Mario Milanese, Jo W. Howze, Aniruddha Datta, Mohammed Mansour, J. Boyd Pearson, Peter Dorato, Yakov Z. Tsypkin, Boris T. Polyak, Vladimir L. Kharitonov, Kris Hollot, Juergen Ackermann, Diedrich Hinrichsen, Tony Pritchard, Dragoslav D. Siljak, Charles A. Desoer, Soura Dasgupta, Suhada Jayasuriya, Rama K. Yedavalli, Bob R. Barmish, Mohammed Dahleh, and Biswa N. Datta for their support, enthusiasm, ideas and friendship. In particular, we thank Nirmal K. Bose, John A. Fleming and Bahram Shafai for thoroughly reviewing the manuscript and suggesting many improvements. We are indeed honored that Academician Ya. Z. Tsypkin, one of the leading control theorists of the world, has written a Foreword to our book. Professor Tsypkin's pioneering contributions range from the stability analysis of time-delay systems in the 1940's and learning control systems in the 1960's to robust control under parameter uncertainty in the 1980's and 1990's. His observations on the contents of the book and this subject, based on this wide perspective, are of great value.

The first draft of this book was written in 1989. We have added new results of our own and others as we became aware of them. However, because of the rapid pace of developments of the subject and the sheer volume of literature that has been published in the last few years, it is possible that we have inadvertently omitted some results and references worthy of inclusion. We apologize in advance to any authors or readers who feel that we have not given credit where it is due. S. P. Bhattacharyya, H. Chapellat, L. H. Keel. December 5, 1994
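Kharitonov's Theorem, highlighted above, is concrete enough to check numerically: an interval polynomial family is Hurwitz-stable exactly when its four Kharitonov polynomials are. The sketch below verifies this by computing roots; the coefficient intervals are invented for the example.

```python
import numpy as np

# Interval cubic: p(s) = a0 + a1*s + a2*s^2 + a3*s^3 with a_i in [lo[i], hi[i]].
lo = np.array([2.0, 5.0, 4.0, 1.0])   # lower bounds for a0..a3 (example values)
hi = np.array([3.0, 8.0, 6.0, 1.0])   # upper bounds for a0..a3 (example values)

def kharitonov_polys(lo, hi):
    """The four Kharitonov polynomials; entry i multiplies s^i."""
    patterns = ("llhh", "lhhl", "hllh", "hhll")   # repeating bound patterns from a0 up
    return [np.array([lo[i] if p[i % 4] == "l" else hi[i] for i in range(len(lo))])
            for p in patterns]

# np.roots expects the highest-degree coefficient first, so reverse each polynomial.
stable = all(np.all(np.roots(k[::-1]).real < 0) for k in kharitonov_polys(lo, hi))
print("entire interval family Hurwitz-stable:", stable)   # True for these intervals
```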

1,052 citations


Proceedings ArticleDOI
20 Jun 1995
TL;DR: A new information-theoretic approach is presented for finding the pose of an object in an image; it works well in domains where edge- or gradient-magnitude-based methods have difficulty, yet is more robust than traditional correlation.
Abstract: A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation, few assumptions are made about the nature of the imaging process. As a result, the algorithms are quite general and can foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach in registering magnetic resonance images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence, and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image. As applied in this paper, the technique is intensity-based, rather than feature-based. It works well in domains where edge- or gradient-magnitude-based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation.
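The core quantity here, mutual information between model and image intensities, is easy to sketch. The paper itself uses stochastic approximation with density estimates; the toy version below substitutes a simple joint histogram and a brute-force shift search on synthetic data, so treat it as an illustration of the criterion, not the authors' algorithm.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI between two intensity images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# Toy alignment search: shift one image over another, keep the shift maximizing MI.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, 3, axis=1) + 0.05 * rng.random((64, 64))   # shifted, noisy copy
scores = {dx: mutual_information(img, np.roll(moved, -dx, axis=1)) for dx in range(-5, 6)}
print("estimated shift:", max(scores, key=scores.get))           # expect 3
```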

966 citations


BookDOI
TL;DR: The basic process capability indices Cp and Cpk and their modifications; the Cpm index and related indices; process capability indices under non-normality; robustness properties; multivariate process capability indices.
Abstract: The basic process capability indices Cp and Cpk and their modifications; the Cpm index and related indices; process capability indices under non-normality; robustness properties; multivariate process capability indices.
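For reference, the basic indices listed above have standard textbook definitions: Cp = (USL − LSL)/6σ, Cpk = min(USL − μ, μ − LSL)/3σ, and the Taguchi-style Cpm, which penalizes deviation from a target. A small sketch with made-up sample data and spec limits:

```python
import numpy as np

def capability_indices(x, lsl, usl, target=None):
    """Cp, Cpk, and Cpm from a sample and spec limits (textbook formulas)."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp  = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    if target is None:
        target = (usl + lsl) / 2
    cpm = (usl - lsl) / (6 * np.sqrt(sigma**2 + (mu - target)**2))  # Taguchi-style
    return cp, cpk, cpm

rng = np.random.default_rng(1)
sample = rng.normal(10.2, 0.5, size=200)        # process slightly off-center
print(capability_indices(sample, lsl=8.0, usl=12.0))
```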

726 citations


Journal ArticleDOI
TL;DR: A novel technique for the integration of multiple classifiers at a hybrid rank/measurement level is introduced using HyperBF networks, and two different methods for the rejection of an unknown person are introduced.
Abstract: This paper presents a person identification system based on acoustic and visual features. The system is organized as a set of non-homogeneous classifiers whose outputs are integrated after a normalization step. In particular, two classifiers based on acoustic features and three based on visual ones provide data for an integration module whose performance is evaluated. A novel technique for the integration of multiple classifiers at a hybrid rank/measurement level is introduced using HyperBF networks. Two different methods for the rejection of an unknown person are introduced. The performance of the integrated system is shown to be superior to that of the acoustic and visual subsystems. The resulting identification system can be used to log personal access and, with minor modifications, as an identity verification system.

663 citations


Journal ArticleDOI
TL;DR: In this article, a new genetic approach for solving the economic dispatch problem in large-scale power systems is presented, with a new encoding technique in which the chromosome contains only an encoding of the normalized system incremental cost.
Abstract: This paper presents a new genetic approach for solving the economic dispatch problem in large-scale power systems. A new encoding technique is developed in which the chromosome contains only an encoding of the normalized system incremental cost. Therefore, the total number of bits of the chromosome is entirely independent of the number of units. This salient feature makes the proposed genetic approach attractive for large and complex systems that other methodologies may fail to handle. Moreover, the approach can take network losses, ramp rate limits, and prohibited zone avoidance into account because of the genetic algorithm's flexibility. Numerical results on an actual utility system of up to 40 units show that the proposed approach is faster and more robust than the well-known lambda-iteration method in large-scale systems.
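The encoding idea can be sketched directly: the chromosome encodes a normalized system incremental cost λ, and each unit's output follows from the equal-incremental-cost condition, so chromosome length never depends on the number of units. The cost coefficients, λ range, and limits below are invented for illustration.

```python
import numpy as np

# Hypothetical quadratic-cost units: C_i(P) = a_i + b_i*P + c_i*P^2, so dC/dP = b_i + 2*c_i*P.
b = np.array([2.0, 1.8, 2.2])          # assumed incremental-cost intercepts
c = np.array([0.010, 0.012, 0.008])    # assumed quadratic coefficients
pmin = np.array([50.0, 40.0, 60.0])
pmax = np.array([200.0, 180.0, 250.0])

def decode(bits, lam_lo=1.5, lam_hi=4.0):
    """Map a binary chromosome to a system incremental cost, then to unit outputs."""
    frac = int("".join(map(str, bits)), 2) / (2 ** len(bits) - 1)   # normalized lambda
    lam = lam_lo + frac * (lam_hi - lam_lo)
    p = np.clip((lam - b) / (2 * c), pmin, pmax)   # units run at equal incremental cost
    return lam, p

lam, p = decode([1, 0, 1, 1, 0, 1, 0, 1])
print(f"lambda = {lam:.3f}, dispatch = {p.round(1)}, total = {p.sum():.1f} MW")
```

A GA fitness function would then penalize the mismatch between the decoded total output and the demand; that part is omitted here.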

583 citations


Proceedings Article
20 Aug 1995
TL;DR: This paper reports first results of a research program that uses partially observable Markov models to robustly track a robot's location in office environments and to direct its goal-oriented actions.
Abstract: Autonomous mobile robots need very reliable navigation capabilities in order to operate unattended for long periods of time. This paper reports on first results of a research program that uses partially observable Markov models to robustly track a robot's location in office environments and to direct its goal-oriented actions. The approach explicitly maintains a probability distribution over the possible locations of the robot, taking into account various sources of uncertainty, including approximate knowledge of the environment and actuator and sensor uncertainty. A novel feature of our approach is its integration of topological map information with approximate metric information. We demonstrate the robustness of this approach in controlling an actual indoor mobile robot navigating corridors.
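The underlying belief update is the standard discrete Bayes filter. The sketch below is a deliberately tiny corridor world; door positions, sensor and motion error rates are assumed values, the corridor wraps around via np.roll, and none of the paper's topological-metric integration is reproduced.

```python
import numpy as np

# Minimal discrete Bayes filter over corridor cells (a sketch of Markov localization).
n = 10
belief = np.full(n, 1.0 / n)                 # uniform prior over locations

def predict(belief, p_stay=0.2):
    """Motion update: the robot intends one cell forward but may slip and stay."""
    return p_stay * belief + (1 - p_stay) * np.roll(belief, 1)

def correct(belief, z, doors=(2, 5, 8), p_hit=0.8):
    """Sensor update: z=1 means 'door detected'; doors sit at known cells."""
    likelihood = np.where(np.isin(np.arange(n), doors), p_hit, 1 - p_hit)
    if z == 0:
        likelihood = 1 - likelihood
    posterior = belief * likelihood
    return posterior / posterior.sum()

for z in [1, 0, 0, 1]:                       # a short observation sequence
    belief = correct(predict(belief), z)
print("most likely cell:", int(np.argmax(belief)))
```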

572 citations


01 Jan 1995
TL;DR: A method using a JPEG-model-based, frequency-hopped, randomly sequenced pulse-position-modulated code (RSPPMC) is described, which supports robustness of embedded labels against several damaging possibilities such as lossy data compression, low-pass filtering and/or color space conversion.
Abstract: This paper first presents a "hidden label" approach for identifying the ownership and distribution of multimedia information (image or video data) in a digital networked environment. Then it discusses criteria and difficulties in implementing the approach. Finally, a method using a JPEG-model-based, frequency-hopped, randomly sequenced pulse-position-modulated code (RSPPMC) is described. This method supports robustness of embedded labels against several damaging possibilities such as lossy data compression, low-pass filtering and/or color space conversion.

528 citations


Journal ArticleDOI
01 Jun 1995
TL;DR: This paper proposes a new locally adaptive multigrid block matching motion estimation technique that leads to a robust motion field estimation, precise prediction along moving edges, and a decreased amount of side information in uniform areas.
Abstract: The key to high performance in image sequence coding lies in an efficient reduction of the temporal redundancies. For this purpose, motion estimation and compensation techniques have been successfully applied. This paper studies motion estimation algorithms in the context of first generation coding techniques commonly used in digital TV. In this framework, estimating the motion in the scene is not an intrinsic goal. Motion estimation should indeed provide good temporal prediction and simultaneously require low overhead information. More specifically, the aim is to minimize globally the bandwidth corresponding to both the prediction error information and the motion parameters. This paper first clarifies the notion of motion, reviews classical motion estimation techniques, and outlines new perspectives. Block matching techniques are shown to be the most appropriate in the framework of first generation coding. To overcome the drawbacks characteristic of most block matching techniques, this paper proposes a new locally adaptive multigrid block matching motion estimation technique. This algorithm has been designed taking into account the above aims. It leads to a robust motion field estimation, precise prediction along moving edges, and a decreased amount of side information in uniform areas. Furthermore, the algorithm controls the accuracy of the motion estimation procedure in order to optimally balance the amount of information corresponding to the prediction error and to the motion parameters. Experimental results show that the technique results in greatly enhanced visual quality and significant saving in terms of bit rate when compared to classical block matching techniques.
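At the heart of any such scheme sits a block-distance search. A single-level exhaustive SAD matcher is sketched below; block size, search radius, and test images are arbitrary choices, and the paper's actual contribution, the locally adaptive multigrid control, is not reproduced here.

```python
import numpy as np

def best_match(prev, curr, top, left, bs=8, radius=4):
    """Exhaustive SAD block matching for one block (one level of a multigrid scheme)."""
    block = curr[top:top + bs, left:left + bs]
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + bs <= prev.shape[0] and 0 <= x and x + bs <= prev.shape[1]:
                sad = np.abs(prev[y:y + bs, x:x + bs] - block).sum()
                if sad < best_sad:
                    best, best_sad = (dy, dx), sad
    return best, best_sad

rng = np.random.default_rng(2)
f0 = rng.random((32, 32))
f1 = np.roll(f0, (1, 2), axis=(0, 1))      # simulate global motion of (1, 2)
print(best_match(f0, f1, top=8, left=8))   # block at (8, 8) originates at offset (-1, -2) in f0
```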

Journal ArticleDOI
Sung-Bae Cho1, Jong-Sung Kim1
01 Feb 1995
TL;DR: The authors propose a method for multinetwork combination based on the fuzzy integral that nonlinearly combines objective evidence, in the form of a fuzzy membership function, with subjective evaluation of the worth of the individual neural networks with respect to the decision.
Abstract: In the area of artificial neural networks, the concept of combining multiple networks has been proposed as a new direction for the development of highly reliable neural network systems. The authors propose a method for multinetwork combination based on the fuzzy integral. This technique nonlinearly combines objective evidence, in the form of a fuzzy membership function, with subjective evaluation of the worth of the individual neural networks with respect to the decision. The experimental results on the recognition problem of on-line handwritten characters confirm the superiority of the presented method over other voting techniques.
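The fuzzy integral used for the combination can be made concrete. Assuming the common Sugeno λ-fuzzy-measure formulation (the densities, i.e. per-classifier worths, and the supports below are invented numbers), fusion amounts to sorting classifiers by their support for a class and taking a max-min against the growing measure.

```python
import numpy as np
from scipy.optimize import brentq

def lambda_measure(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda-measure parameter."""
    s = densities.sum()
    if abs(s - 1) < 1e-12:
        return 0.0                                 # measure is already additive
    f = lambda lam: np.prod(1 + lam * densities) - (1 + lam)
    # The nontrivial root is positive when sum(g_i) < 1, in (-1, 0) otherwise.
    return brentq(f, 1e-12, 1e6) if s < 1 else brentq(f, -1 + 1e-12, -1e-12)

def sugeno_integral(evidence, densities):
    """Fuse per-classifier support for one class via the Sugeno fuzzy integral."""
    lam = lambda_measure(densities)
    g, best = 0.0, 0.0
    for i in np.argsort(evidence)[::-1]:           # classifiers by decreasing support
        g = g + densities[i] + lam * g * densities[i]   # measure of the top set grows
        best = max(best, min(evidence[i], g))
    return best

# Three hypothetical classifiers: support for class 'A' and their worths (densities).
print(sugeno_integral(np.array([0.9, 0.6, 0.7]), np.array([0.3, 0.4, 0.2])))
```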

Journal ArticleDOI
TL;DR: For an automatic steering problem of a city bus the reference maneuvers and specifications are introduced: a linear controller and a nonlinear controller, both with feedback of the lateral displacement and the yaw rate, which meet all specifications.
Abstract: For an automatic steering problem of a city bus the reference maneuvers and specifications are introduced. The robustness problem arises from large variations in velocity, mass, and road-tire contact. Two controller structures, both with feedback of the lateral displacement and the yaw rate, are introduced: a linear controller and a nonlinear controller. The controller parameters are first hand-tuned and then refined by performance vector optimization. Both controllers meet all specifications. Their relative merits are analyzed in simulations for four typical driving maneuvers.

Journal ArticleDOI
TL;DR: In this article, it is shown that under error models used in robust estimation, unidentified population parameters can often be bounded, and that when the data may be contaminated or corrupted, estimating the bounds is more natural than attempting point estimation of unidentified parameters.
Abstract: Robust estimation aims at developing point estimators that are not highly sensitive to errors in the data. However, the population parameters of interest are not identified under the assumptions of robust estimation, so the rationale for point estimation is not apparent. This paper shows that under error models used in robust estimation, unidentified population parameters can often be bounded. The bounds provide information that is not available in robust estimation. For example, it is possible to obtain finite bounds on the population mean under contaminated sampling. A method for estimating the bounds is given and illustrated with an application. It is argued that when the data may be contaminated or corrupted, estimating the bounds is more natural than attempting point estimation of unidentified parameters.
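A worked miniature of the idea: if at most a fraction p of the sample may be arbitrarily contaminated, plug-in bounds on the uncontaminated mean follow from trimming the top or the bottom p fraction. This sketches the general principle with made-up data, not the paper's specific estimator.

```python
import numpy as np

def mean_bounds(y, p):
    """Plug-in bounds on the uncontaminated mean when at most a fraction p of
    observations may be arbitrary contamination (trimming argument; a sketch)."""
    y = np.sort(np.asarray(y, dtype=float))
    k = int(np.floor(p * len(y)))          # observations that may be corrupted
    if k == 0:
        m = y.mean()
        return m, m
    return y[:-k].mean(), y[k:].mean()     # drop the worst k from the top / bottom

rng = np.random.default_rng(3)
clean = rng.normal(0.0, 1.0, 950)
corrupt = rng.normal(50.0, 1.0, 50)        # 5% gross errors
lo, hi = mean_bounds(np.concatenate([clean, corrupt]), p=0.05)
print(f"mean bounded in [{lo:.2f}, {hi:.2f}]")   # finite bounds despite gross errors
```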

Journal ArticleDOI
TL;DR: The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry.
Abstract: In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry. Moreover, the direct method is linear and sets a new lower theoretical bound on the minimal number of points that are required for a linear solution for the task of reprojection. The proof of the central result may be of further interest as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with epipolar intersection and the linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in reprojection tasks.

Journal ArticleDOI
TL;DR: This work proposes a new algorithm for range data registration and segmentation that is robust in the presence of outlying points (outliers) such as noise and occlusion, and integrates the inliers obtained from multiple range images to construct a data set representing an entire object.

Journal ArticleDOI
TL;DR: The on-line control ability, robustness, learning ability and interpolation ability of the proposed model reference control structure are confirmed by simulation results.

Proceedings Article
20 Aug 1995
TL;DR: This paper examines C4.5, a decision tree algorithm that is already quite robust (few algorithms have been shown to consistently achieve higher accuracy), and extends its pruning method to fully remove the effect of outliers, which results in improvement on many databases.
Abstract: Finding and removing outliers is an important problem in data mining. Errors in large databases can be extremely common, so an important property of a data mining algorithm is robustness with respect to errors in the database. Most sophisticated methods in machine learning address this problem to some extent, but not fully, and can be improved by addressing the problem more directly. In this paper we examine C4.5, a decision tree algorithm that is already quite robust: few algorithms have been shown to consistently achieve higher accuracy. C4.5 incorporates a pruning scheme that partially addresses the outlier removal problem. In our ROBUST-C4.5 algorithm we extend the pruning method to fully remove the effect of outliers, and this results in improvement on many databases.
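The build-prune-filter idea can be approximated with modern tooling. The sketch below substitutes scikit-learn's cost-complexity pruning for C4.5's error-based pruning and a generic dataset for the paper's databases, so it is a loose analogue of ROBUST-C4.5, not a reimplementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_breast_cancer

# Rough analogue of the loop: build a pruned tree, drop the training instances
# it misclassifies (suspected outliers), rebuild, and repeat until stable.
X, y = load_breast_cancer(return_X_y=True)
keep = np.ones(len(y), dtype=bool)
for _ in range(3):                                   # a few filtering rounds
    tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X[keep], y[keep])
    misclassified = tree.predict(X) != y
    new_keep = keep & ~misclassified                 # remove suspected outliers
    if new_keep.sum() == keep.sum():
        break                                        # no change: stable
    keep = new_keep
print(f"kept {keep.sum()} of {len(y)} instances")
```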

Journal ArticleDOI
TL;DR: This paper discusses a method of building nonlinear models of possibly chaotic systems from data, while maintaining good robustness against noise, and shows how the models that are built are close to the simplest possible according to a description length criterion.

Journal ArticleDOI
TL;DR: The authors' robust rules improve the performance of existing PCA algorithms significantly when outliers are present, and perform excellently at various PCA-like tasks such as obtaining the first principal component vector, the first k principal component vectors, and directly finding the subspace spanned by the first k principal component vectors without solving for each vector individually.
Abstract: This paper applies statistical physics to the problem of robust principal component analysis (PCA). The commonly used PCA learning rules are first related to energy functions. These functions are generalized by adding a binary decision field with a given prior distribution so that outliers in the data are dealt with explicitly in order to make PCA robust. Each of the generalized energy functions is then used to define a Gibbs distribution from which a marginal distribution is obtained by summing over the binary decision field. The marginal distribution defines an effective energy function, from which self-organizing rules have been developed for robust PCA. In the presence of outliers, both the standard PCA methods and the existing self-organizing PCA rules studied in the neural network literature perform quite poorly. By contrast, the robust rules proposed here resist outliers well and perform excellently at various PCA-like tasks such as obtaining the first principal component vector, the first k principal component vectors, and directly finding the subspace spanned by the first k principal component vectors without solving for each vector individually. Comparative experiments have been made, and the results show that the authors' robust rules improve the performance of existing PCA algorithms significantly when outliers are present.
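The flavor of such self-organizing rules can be sketched with a gated Oja update: samples whose residual energy off the current component exceeds a threshold are treated as outliers and skipped, mimicking the binary decision field. The data, threshold, and step size below are invented, and the paper derives its gating from a Gibbs marginal rather than this hard cutoff.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic data: a strong principal direction plus a few gross outliers.
X = np.outer(rng.normal(size=500), [3.0, 1.0]) + 0.1 * rng.normal(size=(500, 2))
X[:10] = rng.normal(0.0, 25.0, size=(10, 2))          # gross outliers

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta, threshold = 0.01, 100.0                          # assumed step size and gate
for _ in range(20):
    for x in X:
        e = x @ x - (w @ x) ** 2                      # residual energy off the component
        if e < threshold:                             # binary decision: inliers only
            w += eta * (w @ x) * (x - (w @ x) * w)    # Oja's rule on accepted samples
            w /= np.linalg.norm(w)
# Up to sign, w should approximate [3, 1] / sqrt(10) ~ [0.949, 0.316].
print("estimated direction:", np.round(w, 3))
```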

Journal ArticleDOI
TL;DR: A novel method for the on-line identification of steady state in noisy processes is developed using critical values of an F-like statistic, and its computational efficiency and robustness to process noise distribution and non-noise patterns provide advantages over existing methods.

Journal ArticleDOI
TL;DR: An algorithm for constructing a single hidden layer feedforward neural network that uses the quasi-Newton method to minimize the sequence of error functions associated with the growing network is described.
Abstract: This paper describes an algorithm for constructing a single hidden layer feedforward neural network. A distinguishing feature of this algorithm is that it uses the quasi-Newton method to minimize the sequence of error functions associated with the growing network. Experimental results indicate that the algorithm is very efficient and robust. The algorithm was tested on two problems. The first was the n-bit parity problem and the second was the breast cancer diagnosis problem from the University of Wisconsin Hospitals. For the n-bit parity problem, the algorithm was able to construct neural networks having fewer than n hidden units that solved the problem for n = 4, ..., 7. For the cancer diagnosis problem, the neural networks constructed by the algorithm had a small number of hidden units and high accuracy rates on both the training data and the testing data.
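A rough sketch of the construction loop, using SciPy's BFGS in place of the authors' quasi-Newton implementation and the n-bit parity task mentioned above. Network sizes and tolerances are assumptions, and a single random start may need restarts to converge.

```python
import numpy as np
from scipy.optimize import minimize

# Grow a single-hidden-layer net for n-bit parity, retraining with BFGS per size.
n = 4
X = np.array([[int(b) for b in f"{i:0{n}b}"] for i in range(2 ** n)], dtype=float)
y = X.sum(axis=1) % 2                                # parity targets

def unpack(theta, h):
    W = theta[: h * n].reshape(h, n)
    b = theta[h * n : h * n + h]
    v = theta[h * n + h : h * n + 2 * h]
    return W, b, v, theta[-1]

def loss(theta, h):
    W, b, v, c = unpack(theta, h)
    hid = np.tanh(X @ W.T + b)
    out = 1 / (1 + np.exp(-(hid @ v + c)))           # sigmoid output unit
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(5)
for h in range(1, n):                                # try fewer than n hidden units
    theta0 = 0.5 * rng.normal(size=h * n + 2 * h + 1)
    res = minimize(loss, theta0, args=(h,), method="BFGS")
    if res.fun < 1e-3:
        print(f"solved {n}-bit parity with {h} hidden units")
        break
```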

Book ChapterDOI
27 Aug 1995
TL;DR: A number of attacks on public key protocols, some new, are presented, along with a number of principles that may help designers avoid many of the pitfalls and help attackers spot errors which can be exploited.
Abstract: We present a number of attacks, some new, on public key protocols. We also advance a number of principles which may help designers avoid many of the pitfalls, and help attackers spot errors which can be exploited.

Journal ArticleDOI
TL;DR: The proposed tuning rules are inspired by the symmetrical optimum principles and have the advantage of taking into account both robustness aspects and desired closed-loop characteristics, and of covering a large domain of current, real applications.

Journal ArticleDOI
TL;DR: In this article, the authors apply genetic optimization with an adaptive penalty function to the shape-constrained unequal-area facility layout problem, and show how optimal solutions are affected by constraints on permitted department shapes, as specified by a maximum allowable aspect ratio for each department.
Abstract: This paper applies genetic optimization with an adaptive penalty function to the shape-constrained unequal-area facility layout problem. We implement a genetic search for unequal-area facility layout, and show how optimal solutions are affected by constraints on permitted department shapes, as specified by a maximum allowable aspect ratio for each department. We show how an adaptive penalty function can be used to find good feasible solutions to even the most highly constrained problems. We describe our genetic encoding, reproduction and mutation operators, and penalty evolution strategy. We provide results from several test problems that demonstrate the robustness of this approach across different problems and parameter settings.
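The adaptive penalty mechanism can be sketched independently of the layout details: infeasibility is priced into fitness, and the price is tightened or relaxed depending on whether the best current individual is feasible. Everything below (the toy objective, mutation scheme, constants) is invented for illustration and is not the paper's encoding or operators.

```python
import numpy as np

rng = np.random.default_rng(6)

def aspect_violation(w, h, max_ratio=3.0):
    """Shape constraint: a department's aspect ratio must not exceed max_ratio."""
    return max(0.0, max(w / h, h / w) - max_ratio)

def fitness(ind, penalty_w):
    area_cost = np.abs(ind[:, 0] * ind[:, 1] - 1.0).sum()    # toy objective (assumed)
    violation = sum(aspect_violation(w, h) for w, h in ind)
    return area_cost + penalty_w * violation                 # penalized fitness

pop = rng.uniform(0.2, 5.0, size=(40, 6, 2))                 # 6 departments, (w, h) each
penalty_w = 1.0
for gen in range(100):
    scores = np.array([fitness(ind, penalty_w) for ind in pop])
    elite = pop[np.argsort(scores)[:20]]                     # truncation selection
    children = np.clip(elite + 0.1 * rng.normal(size=elite.shape), 0.2, 5.0)
    pop = np.concatenate([elite, children])
    # Adaptive penalty: tighten when the generation's best is infeasible, else relax.
    infeasible = any(aspect_violation(w, h) > 0 for w, h in elite[0])
    penalty_w *= 1.2 if infeasible else 1 / 1.2
print(f"final penalty weight: {penalty_w:.3f}")
```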

Journal ArticleDOI
TL;DR: In this article, a two-stage mapping process is constructed from the mapping relationships between unsigned decimal integers and discrete values, which significantly reduces the computational effort and improves computational efficiency.

Proceedings ArticleDOI
30 May 1995
TL;DR: Single-rail handshake circuits are introduced as a cost effective implementation of asynchronous circuits that can be implemented in any (generic) standard-cell library and makes asynchronous circuits a potential technology of choice for low-power applications.
Abstract: Single-rail handshake circuits are introduced as a cost effective implementation of asynchronous circuits. Compared to double-rail implementations, the circuits are smaller, faster, and more energy-efficient. Furthermore, in contrast to common belief, all four phases of the four-phase handshake protocol can be productive. An important selling point for single-rail circuits is that they can be implemented in any (generic) standard-cell library. This facilitates technology migration and makes asynchronous circuits a potential technology of choice for low-power applications.

Journal ArticleDOI
TL;DR: A simple and robust ATM call admission control is described, and the theoretical background for its analysis is developed, allowing an explicit treatment of the trade-off between cell loss and call rejection.
Abstract: This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-off between cell loss and call rejection, and of the consequences of estimation error. Further topics discussed include the robustness of the control to departures from model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types.
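Operationally, the control reduces to a threshold test on the current load. The sketch below shows that skeleton with an assumed, made-up threshold; in the paper the thresholds are precalculated via Bayesian decision theory, which is not reproduced here.

```python
# Threshold-style call admission sketch: admit a new call only while the
# number of active calls is below a precalculated threshold.

THRESHOLD = 48               # assumed value; computed offline in the paper

class AdmissionControl:
    def __init__(self, threshold):
        self.threshold = threshold
        self.active = 0

    def call_arrival(self):
        if self.active < self.threshold:
            self.active += 1
            return True      # admit
        return False         # reject to protect the cell-loss guarantee

    def call_departure(self):
        self.active = max(0, self.active - 1)

cac = AdmissionControl(THRESHOLD)
print([cac.call_arrival() for _ in range(3)])   # [True, True, True]
```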

Journal ArticleDOI
TL;DR: Three different controller design techniques, as applied to the problem of global tracking for robots with flexible joints, are investigated, along with the connections between the various controllers.

Journal ArticleDOI
TL;DR: Numerical experiments show suitable performance of the proposed method with regard to estimation accuracy, convergence robustness and computational efficiency, and indicate the decoupled nature of the state estimation problem.
Abstract: The need for higher frequency in state estimation execution covering larger supervised networks has led to the investigation of faster and numerically more stable state estimation algorithms. However, technical developments in distributed Energy Management Systems, based on fast data communication networks, open up the possibility of parallel or distributed state estimation implementation. In this paper, this possibility is exploited to derive a solution methodology based on conventional state estimation algorithms and a coupling constraints optimization technique. Numerical experiments show suitable performance of the proposed method with regard to estimation accuracy, convergence robustness and computational efficiency. The results of these experiments also indicate the decoupled nature of the state estimation problem.