
Showing papers on "Robustness (computer science) published in 2004"


Journal ArticleDOI
TL;DR: The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes.

3,422 citations


01 Jan 2004
TL;DR: An approach is proposed that flexibly adjusts the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations; an attractive aspect of this method is that the new robust formulation is also a linear optimization problem, so it naturally extends to discrete optimization problems in a tractable way.
Abstract: A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the NETLIB library.
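The budgeted-uncertainty idea can be illustrated on a single knapsack-style constraint. The sketch below is a toy, not the paper's formulation: the function name, data, and parameter names are illustrative assumptions. For a fixed 0/1 solution it checks worst-case feasibility when at most `gamma` coefficients deviate to their upper bounds, in the Bertsimas-Sim spirit.

```python
def robust_feasible(x, a, d, gamma, b):
    """Check robust feasibility of sum(a_i * x_i) <= b under budgeted
    uncertainty: each coefficient a_i may grow by up to d_i, but at most
    `gamma` coefficients deviate simultaneously.  The adversary picks the
    `gamma` largest active deviations."""
    nominal = sum(ai * xi for ai, xi in zip(a, x))
    worst = sorted((di * xi for di, xi in zip(d, x)), reverse=True)[:gamma]
    return nominal + sum(worst) <= b
```

Raising `gamma` from 0 (nominal problem) to the number of variables (full worst case) traces out exactly the conservatism trade-off the paper prices.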

3,359 citations


Book ChapterDOI
11 May 2004
TL;DR: By proving that this scheme implements a coarse-to-fine warping strategy, this work gives a theoretical foundation for warping, which has so far been used on a mainly experimental basis, and demonstrates the method's excellent robustness under noise.
Abstract: We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise.
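The coarse-to-fine strategy can be sketched in one dimension. This is an illustrative toy, not the paper's variational scheme: it estimates an integer displacement between two 1-D signals by first matching downsampled copies (where large displacements become small) and then refining the doubled estimate at each finer level. All function names are hypothetical.

```python
def downsample(sig):
    # average adjacent samples: one level of a coarse-to-fine pyramid
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def ssd(f, g, s):
    # mean squared difference between f[i] and g[i + s] on the overlap
    lo, hi = max(0, -s), min(len(f), len(g) - s)
    return sum((f[i] - g[i + s]) ** 2 for i in range(lo, hi)) / max(1, hi - lo)

def coarse_to_fine_shift(f, g, levels=2, radius=2):
    # estimate the shift at a coarser level, double it, then refine locally
    if levels == 0 or len(f) < 8:
        guess = 0
    else:
        guess = 2 * coarse_to_fine_shift(downsample(f), downsample(g),
                                         levels - 1, radius)
    return min(range(guess - radius, guess + radius + 1),
               key=lambda s: ssd(f, g, s))
```

The small search radius at each level is the point: a displacement too large to find directly is recovered because it shrinks by half at every coarser level.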

2,902 citations


Proceedings ArticleDOI
03 Nov 2004
TL;DR: The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages and implicit dynamic topology updates, reaches high precision through MAC-layer time-stamping and comprehensive error compensation including clock skew estimation, and performs markedly better than the existing RBS and TPSN algorithms.
Abstract: Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and it is robust against node and link failures. The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages, and implicit dynamic topology update. The unique high precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms.
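Clock-skew compensation of the kind FTSP performs can be sketched as a least-squares line fit of global time against local time over recent reference points. This is a hedged illustration, not the FTSP implementation; function names and data are made up.

```python
def fit_clock(local_times, global_times):
    """Least-squares fit global ~ offset + rate * local over recent
    (local, global) timestamp pairs; rate - 1 is the estimated clock skew."""
    n = len(local_times)
    mx = sum(local_times) / n
    my = sum(global_times) / n
    sxx = sum((x - mx) ** 2 for x in local_times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(local_times, global_times))
    rate = sxy / sxx
    return rate, my - rate * mx  # (rate, offset)

def to_global(local, rate, offset):
    # convert a local timestamp to estimated global time
    return offset + rate * local
```

Extrapolating with the fitted rate, rather than repeatedly applying raw offsets, is what keeps the per-hop error low between synchronization messages.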

2,267 citations


Journal ArticleDOI
01 Aug 2004
TL;DR: Two variants of fuzzy c-means clustering with spatial constraints are proposed and extended via kernel methods, inducing a class of robust non-Euclidean distance measures for the original data space to derive new objective functions and thus cluster the non-Euclidean structures in data.
Abstract: Fuzzy c-means clustering with spatial constraints (FCM_S) is an effective algorithm suitable for image segmentation. Its effectiveness contributes not only to the introduction of fuzziness for belongingness of each pixel but also to exploitation of spatial contextual information. Although the contextual information can raise its insensitivity to noise to some extent, FCM_S still lacks enough robustness to noise and outliers and is not suitable for revealing the non-Euclidean structure of the input data due to the use of Euclidean distance (L2 norm). In this paper, to overcome the above problems, we first propose two variants, FCM_S1 and FCM_S2, of FCM_S to aim at simplifying its computation and then extend them, including FCM_S, to corresponding robust kernelized versions KFCM_S, KFCM_S1 and KFCM_S2 by the kernel methods. Our main motives of using the kernel methods consist in: inducing a class of robust non-Euclidean distance measures for the original data space to derive new objective functions and thus clustering the non-Euclidean structures in data; enhancing robustness of the original clustering algorithms to noise and outliers, and still retaining computational simplicity. The experiments on the artificial and real-world datasets show that our proposed algorithms, especially with spatial constraints, are more effective.
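The kernel-induced distance at the heart of such variants is easy to state: with a Gaussian kernel K, the feature-space distance ||phi(x) - phi(v)||^2 reduces to 2(1 - K(x, v)), which is bounded and therefore down-weights far-away outliers. A minimal sketch with hypothetical function names:

```python
import math

def gaussian_kernel(x, v, sigma=1.0):
    # K(x, v) = exp(-||x - v||^2 / sigma^2)
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, v)) / sigma ** 2)

def kernel_distance_sq(x, v, sigma=1.0):
    # ||phi(x) - phi(v)||^2 = K(x,x) + K(v,v) - 2 K(x,v) = 2 (1 - K(x,v))
    return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))
```

Unlike the squared L2 norm, this distance saturates at 2, so a gross outlier cannot dominate the clustering objective.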

1,077 citations


Journal ArticleDOI
17 Sep 2004-Cell
TL;DR: This work states that theoretical approaches to complex engineered systems can provide guidelines for investigating cellular robustness and may be a key to understanding cellular complexity, elucidating design principles, and fostering closer interactions between experimentation and theory.

1,046 citations


Journal ArticleDOI
TL;DR: A detailed study of several very important aspects of Super-Resolution, often ignored in the literature, is presented; robustness, treatment of color, and dynamic operation modes are discussed.
Abstract: Super-Resolution reconstruction produces one or a set of high-resolution images from a sequence of low-resolution frames. This article reviews a variety of Super-Resolution methods proposed in the last 20 years, and provides some insight into, and a summary of, our recent contributions to the general Super-Resolution problem. In the process, a detailed study of several very important aspects of Super-Resolution, often ignored in the literature, is presented. Specifically, we discuss robustness, treatment of color, and dynamic operation modes. Novel methods for addressing these issues are accompanied by experimental results on simulated and real data. Finally, some future challenges in Super-Resolution are outlined and discussed. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 14, 47-57, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20007
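One robustness idea from this literature, replacing a mean-based (L2) data fusion with a median-based (L1) one, can be shown on already-registered frames. This is a toy sketch with hypothetical names, not the article's full pipeline: the pixelwise median simply ignores a corrupted frame that would bias the mean.

```python
import statistics

def fuse(frames, robust=True):
    """Fuse registered low-resolution frames pixel by pixel.
    robust=True uses the median (L1-style), robust=False the mean (L2-style)."""
    n = len(frames[0])
    if robust:
        return [statistics.median(f[i] for f in frames) for i in range(n)]
    return [sum(f[i] for f in frames) / len(frames) for i in range(n)]
```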

807 citations


Proceedings ArticleDOI
26 Sep 2004
TL;DR: The system is sufficiently robust to enable a variety of location-aware applications without requiring special-purpose hardware or complicated training and calibration procedures, and can be adapted to work with previously unknown user hardware.
Abstract: We demonstrate a system built using probabilistic techniques that allows for remarkably accurate localization across our entire office building using nothing more than the built-in signal intensity meter supplied by standard 802.11 cards. While prior systems have required significant investments of human labor to build a detailed signal map, we can train our system by spending less than one minute per office or region, walking around with a laptop and recording the observed signal intensities of our building's unmodified base stations. We actually collected over two minutes of data per office or region, about 28 man-hours of effort. Using less than half of this data to train the localizer, we can localize a user to the precise, correct location in over 95% of our attempts, across the entire building. Even in the most pathological cases, we almost never localize a user any more distant than to the neighboring office. A user can obtain this level of accuracy with only two or three signal intensity measurements, allowing for a high frame rate of localization results. Furthermore, with a brief calibration period, our system can be adapted to work with previously unknown user hardware. We present results demonstrating the robustness of our system against a variety of untrained time-varying phenomena, including the presence or absence of people in the building across the day. Our system is sufficiently robust to enable a variety of location-aware applications without requiring special-purpose hardware or complicated training and calibration procedures.
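The probabilistic core of such a fingerprinting localizer can be sketched as naive Bayes over per-region Gaussian signal models. Everything below (function names, the default for unseen access points, the data) is an illustrative assumption, not the paper's system.

```python
import math

def localize(observation, fingerprints):
    """Pick the training region whose Gaussian signal model best explains
    the observed RSSI readings (independent-Gaussian naive Bayes).
    observation: {ap: rssi}; fingerprints: {region: {ap: (mean, std)}}."""
    def log_likelihood(model):
        ll = 0.0
        for ap, rssi in observation.items():
            mu, sd = model.get(ap, (-95.0, 10.0))  # weak default for unseen APs
            ll += -((rssi - mu) ** 2) / (2 * sd * sd) - math.log(sd)
        return ll
    return max(fingerprints, key=lambda r: log_likelihood(fingerprints[r]))
```

Because only per-region means and spreads are needed, a minute of samples per region is enough to train, matching the paper's emphasis on low calibration effort.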

778 citations


Journal ArticleDOI
TL;DR: This work proposes an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms and demonstrates the effectiveness and robustness of the tracking algorithm.
Abstract: We present an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms. Tracking needs modeling interframe motion and appearance changes, whereas recognition needs modeling appearance changes between frames and gallery images. In conventional tracking algorithms, the appearance model is either fixed or rapidly changing, and the motion model is simply a random walk with fixed noise variance. Also, the number of particles is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following modifications: an observation model arising from an adaptive appearance model, an adaptive velocity motion model with adaptive noise variance, and an adaptive number of particles. The adaptive-velocity model is derived using a first-order linear predictor based on the appearance difference between the incoming observation and the previous particle configuration. Occlusion analysis is implemented using robust statistics. Experimental results on tracking visual objects in long outdoor and indoor video sequences demonstrate the effectiveness and robustness of our tracking algorithm. We then perform simultaneous tracking and recognition by embedding them in a particle filter. For recognition purposes, we model the appearance changes between frames and gallery images by constructing the intra- and extrapersonal spaces. Accurate recognition is achieved when confronted by pose and view variations.
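The bootstrap particle filter underlying such trackers fits in a few lines. The sketch below is generic and deliberately omits the paper's adaptive appearance model, adaptive-velocity motion model, and adaptive particle counts; all names and parameters are illustrative.

```python
import math
import random

def particle_filter(observations, n_particles=500, obs_std=1.0,
                    proc_std=0.5, seed=0):
    """Track a 1-D state with a random-walk motion model and a Gaussian
    observation model; returns the posterior-mean estimate per frame."""
    rng = random.Random(seed)
    particles = [rng.gauss(observations[0], 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # propagate particles through the motion model
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # weight each particle by the likelihood of the observation
        weights = [math.exp(-((z - p) ** 2) / (2 * obs_std ** 2))
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling to concentrate particles on likely states
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The paper's modifications replace the fixed `proc_std` and fixed `n_particles` above with quantities adapted online, which is precisely what stabilizes the tracker.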

742 citations


Journal Article
TL;DR: Leader-to-formation stability (LFS) gains quantify error amplification, relate interconnection topology to stability and performance, and offer safety bounds for different formation topologies.
Abstract: The paper investigates the stability properties of mobile agent formations which are based on leader following. We derive nonlinear gain estimates that capture how leader behavior affects the interconnection errors observed in the formation. Leader-to-formation stability (LFS) gains quantify error amplification, relate interconnection topology to stability and performance, and offer safety bounds for different formation topologies. Analysis based on the LFS gains provides insight to error propagation and suggests ways to improve the safety, robustness, and performance characteristics of a formation.

729 citations


Proceedings ArticleDOI
07 Jun 2004
TL;DR: A canonical first-order delay model is proposed, and a linear-time block-based statistical timing algorithm propagates timing quantities such as arrival times and required arrival times through the timing graph in this canonical form; at the end of the statistical timing, the sensitivities of all timing quantities to each source of variation are available.
Abstract: Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. This paper also reports the first incremental statistical timer in the literature which is suitable for use in the inner loop of physical synthesis or other optimization programs. The third novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. Numerical results are presented on industrial ASIC chips with over two million logic gates.
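The canonical form can be sketched as a mean, linear sensitivities to shared global variation sources, and an independent random term. The class below is a toy (names and fields are assumptions, not the paper's notation): addition, as used when summing delays along a path, adds correlated sensitivities term by term and combines independent parts in root-sum-square.

```python
class CanonicalDelay:
    """First-order canonical delay: mean + sensitivities to shared global
    variation sources + an independently random term."""
    def __init__(self, mean, sens, indep):
        self.mean, self.sens, self.indep = mean, sens, indep

    def __add__(self, other):
        # correlated sensitivities add linearly; independent parts in RSS
        return CanonicalDelay(
            self.mean + other.mean,
            [a + b for a, b in zip(self.sens, other.sens)],
            (self.indep ** 2 + other.indep ** 2) ** 0.5)

    def sigma(self):
        # standard deviation implied by the (unit-variance) sources
        return (sum(s * s for s in self.sens) + self.indep ** 2) ** 0.5
```

Keeping sensitivities explicit is what makes the end-of-run diagnosis possible: a large entry in `sens` points directly at the variation source worth targeting.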

Journal ArticleDOI
TL;DR: A general multi-sensor optimal information fusion decentralized Kalman filter with a two-layer fusion structure is given for discrete time linear stochastic control systems with multiple sensors and correlated noises.

Journal ArticleDOI
TL;DR: A form of feedback model predictive control that overcomes disadvantages of conventional MPC but which has manageable computational complexity is presented.

Proceedings ArticleDOI
03 Nov 2004
TL;DR: This paper presents synopsis diffusion, a general framework for achieving significantly more accurate and reliable answers by combining energy-efficient multi-path routing schemes with techniques that avoid double-counting, and demonstrates its significant robustness, accuracy, and energy-efficiency improvements over previous approaches.
Abstract: Previous approaches for computing duplicate-sensitive aggregates in sensor networks (e.g., in TAG) have used a tree topology, in order to conserve energy and to avoid double-counting sensor readings. However, a tree topology is not robust against node and communication failures, which are common in sensor networks. In this paper, we present synopsis diffusion, a general framework for achieving significantly more accurate and reliable answers by combining energy-efficient multi-path routing schemes with techniques that avoid double-counting. Synopsis diffusion avoids double-counting through the use of order- and duplicate-insensitive (ODI) synopses that compactly summarize intermediate results during in-network aggregation. We provide a surprisingly simple test that makes it easy to check the correctness of an ODI synopsis. We show that the properties of ODI synopses and synopsis diffusion create implicit acknowledgments of packet delivery. We show that this property can, in turn, enable the system to adapt message routing to dynamic message loss conditions, even in the presence of asymmetric links. Finally, we illustrate, using extensive simulations, the significant robustness, accuracy, and energy-efficiency improvements of synopsis diffusion over previous approaches.
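The ODI property is concrete: a synopsis must be insensitive to the order and multiplicity of insertions, so merging is idempotent, commutative, and associative. A toy Flajolet-Martin-style bitmap illustrates this (function names are made up, and the paper's synopses and correctness test are more general):

```python
import hashlib

def synopsis(items, bits=32):
    """Duplicate-insensitive bitmap: each item sets one bit, chosen by the
    position of the lowest set bit of its hash, so re-inserting an item
    never changes the synopsis."""
    syn = 0
    for it in items:
        h = int(hashlib.sha1(str(it).encode()).hexdigest(), 16)
        r = (h & -h).bit_length() - 1  # index of lowest set bit
        syn |= 1 << min(r, bits - 1)
    return syn

def merge(a, b):
    # bitwise OR: idempotent, commutative, associative,
    # hence safe under multi-path delivery and re-transmission
    return a | b
```

Because `merge` is idempotent, the same reading arriving over several routes (the whole point of multi-path routing) is counted exactly once.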

Journal ArticleDOI
TL;DR: The controllers constructed do not rely on the generation of sliding motions while providing robustness features similar to those possessed by their sliding mode counterparts, and are illustrated via application to a friction servo-motor.
Abstract: Stability analysis is developed for uncertain nonlinear switched systems. While being asymptotically stable and homogeneous of degree q < 0, these systems are shown to approach the equilibrium point in finite time. Restricted to second order systems, this feature is additionally demonstrated to persist regardless of inhomogeneous perturbations. Based on this fundamental property, switched control algorithms are then developed to globally stabilize uncertain minimum phase systems of uniform m-vector relative degree (2,...,2)T. The controllers constructed do not rely on the generation of sliding motions while providing robustness features similar to those possessed by their sliding mode counterparts. The proposed synthesis procedure is illustrated via application to a friction servo-motor.

Journal ArticleDOI
TL;DR: It is shown that the maximum synchronizability of a network is completely determined by its associated feedback system, which has a precise meaning in terms of synchronous communication.
Abstract: Many real-world complex networks display a small-world feature-a high degree of clustering and a small average distance. We show that the maximum synchronizability of a network is completely determined by its associated feedback system, which has a precise meaning in terms of synchronous communication. We introduce a new concept of synchronizability matrix to characterize the maximum synchronizability of a network. Several new concepts, such as sensitive edge and robust edge, are proposed for analyzing the robustness and fragility of synchronization of a network. Using the knowledge of synchronizability, we can purposefully increase the robustness of the network synchronization and prevent it from attacks. Some applications in small-world networks are also discussed briefly.


Journal ArticleDOI
TL;DR: This work formulates the tracking problem in terms of local bundle adjustment and develops a method for establishing image correspondences that can equally well handle short and wide-baseline matching, resulting in a real-time tracker that does not jitter or drift and can deal with significant aspect changes.
Abstract: We propose an efficient real-time solution for tracking rigid objects in 3D using a single camera that can handle large camera displacements, drastic aspect changes, and partial occlusions. While commercial products are already available for offline camera registration, robust online tracking remains an open issue because many real-time algorithms described in the literature still lack robustness and are prone to drift and jitter. To address these problems, we have formulated the tracking problem in terms of local bundle adjustment and have developed a method for establishing image correspondences that can equally well handle short and wide-baseline matching. We then can merge the information from preceding frames with that provided by a very limited number of keyframes created during a training stage, which results in a real-time tracker that does not jitter or drift and can deal with significant aspect changes.

Journal ArticleDOI
TL;DR: A new solution to the thermal unit-commitment (UC) problem based on an integer-coded genetic algorithm (GA) that achieves significant chromosome size reduction compared to the usual binary coding.
Abstract: This paper presents a new solution to the thermal unit-commitment (UC) problem based on an integer-coded genetic algorithm (GA). The GA chromosome consists of a sequence of alternating sign integer numbers representing the sequence of operation/reservation times of the generating units. The proposed coding achieves significant chromosome size reduction compared to the usual binary coding. As a result, algorithm robustness and execution time are improved. In addition, generating unit minimum up and minimum downtime constraints are directly coded in the chromosome, thus avoiding the use of many penalty functions that usually distort the search space. Test results with systems of up to 100 units and 24-h scheduling horizon are presented.
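The integer coding can be sketched directly: a chromosome is a sequence of signed run lengths, and minimum up/down times become simple bounds on the genes rather than penalty terms. Exact encoding and boundary handling below are assumptions for illustration:

```python
def decode(genes, horizon):
    """Expand signed run lengths into an hourly on/off schedule:
    +k means 'on for k hours', -k means 'off for k hours'."""
    schedule = []
    for g in genes:
        schedule.extend([1 if g > 0 else 0] * abs(g))
    return schedule[:horizon]

def respects_min_times(genes, min_up, min_down):
    # minimum up/down constraints checked on the genes themselves,
    # with no penalty functions distorting the search space
    return all(g >= min_up if g > 0 else -g >= min_down for g in genes)
```

A 24-hour schedule needs only a handful of integers instead of 24 bits per unit, which is the chromosome-size reduction the paper credits for its improved robustness and speed.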

Journal ArticleDOI
Min Wu1, Bede Liu1
TL;DR: The proposed data embedding method can be used to detect unauthorized use of a digitized signature, and annotate or authenticate binary documents, and presents analysis and discussions on robustness and security issues.
Abstract: This paper proposes a new method to embed data in binary images, including scanned text, figures, and signatures. The method manipulates "flippable" pixels to enforce specific block-based relationship in order to embed a significant amount of data without causing noticeable artifacts. Shuffling is applied before embedding to equalize the uneven embedding capacity from region to region. The hidden data can be extracted without using the original image, and can also be accurately extracted after high quality printing and scanning with the help of a few registration marks. The proposed data embedding method can be used to detect unauthorized use of a digitized signature, and annotate or authenticate binary documents. The paper also presents analysis and discussions on robustness and security issues.
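The block-based idea can be reduced to a toy: carry one bit per block as the parity of black pixels, flipping a single pixel when needed. The real scheme restricts flips to visually "flippable" pixels and shuffles blocks first; everything below is an illustrative simplification.

```python
def embed_bit(block, bit):
    """Force the parity of 1-pixels in a binary block to equal `bit`,
    flipping one pixel if necessary (a real scheme picks a flippable one)."""
    block = [row[:] for row in block]
    if sum(map(sum, block)) % 2 != bit:
        block[0][0] ^= 1
    return block

def extract_bit(block):
    # extraction needs no original image: just read off the parity
    return sum(map(sum, block)) % 2
```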

Book
27 Aug 2004
TL;DR: This book presents probabilistic and randomized methods for robust control design, including sequential algorithms for LPV systems and a scenario approach for probabilistic robust design, motivated by some limits of the classical robustness paradigm.
Abstract: Overview.- Elements of Probability Theory.- Uncertain Linear Systems and Robustness.- Linear Robust Control Design.- Some Limits of the Robustness Paradigm.- Probabilistic Methods for Robustness.- Monte Carlo Methods.- Randomized Algorithms in Systems and Control.- Probability Inequalities.- Statistical Learning Theory and Control Design.- Sequential Algorithms for Probabilistic Robust Design.- Sequential Algorithms for LPV Systems.- Scenario Approach for Probabilistic Robust Design.- Random Number and Variate Generation.- Statistical Theory of Radial Random Vectors.- Vector Randomization Methods.- Statistical Theory of Radial Random Matrices.- Matrix Randomization Methods.- Applications of Randomized Algorithms.- Appendix.

Journal ArticleDOI
TL;DR: A new simple stable force tracking impedance control scheme that has the capability to track a specified desired force and to compensate for uncertainties in environment location and stiffness as well as in robot dynamic model is proposed.
Abstract: In this paper, a new simple stable force tracking impedance control scheme that has the capability to track a specified desired force and to compensate for uncertainties in environment location and stiffness as well as in the robot dynamic model is proposed. The uncertainties in robot dynamics are compensated by the robust position control algorithm. After contact, in the force-controllable direction the new impedance function is realized based on a desired force, environment stiffness and a position error. The new impedance function is simple and stable. The force error is minimized by using an adaptive technique. Stability and convergence of the adaptive technique are analyzed for a stable force tracking execution. Simulation studies with a three-link rotary robot manipulator are shown to demonstrate the robustness of the proposed scheme under uncertainties in robot dynamics and limited knowledge of the environment position and stiffness. Experimental results are carried out to confirm the proposed controller's performance.

Journal ArticleDOI
01 Jan 2004
TL;DR: The authors analyzed the robustness of results on the relationship between growth and trust previously derived by Knack and Keefer and Zak and Knack along several dimensions, acknowledging the complexity of the concept of robustness.
Abstract: This paper analyses the robustness of results on the relationship between growth and trust previously derived by Knack and Keefer (1997) and Zak and Knack (2001) along several dimensions, acknowledging the complexity of the concept of robustness. Our results show that the Knack and Keefer results are only limitedly robust, whereas the results found by Zak and Knack are highly robust in terms of significance of the estimated coefficients and reasonably robust in terms of the estimated effect size. The improvement in robustness is caused by the inclusion of countries with relatively low scores on trust (most notably, the Philippines and Peru). Overall, our results point at a relatively important role for trust. However, the answer to the question how large this payoff actually is depends on the set of conditioning variables controlled for in the regression analysis and—to an even larger extent—on the underlying sample.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper surveys methods for content-based 3D shape retrieval, evaluating their applicability to surface and volume models against requirements such as shape representation, dissimilarity measures, efficiency, discrimination ability, partial matching, robustness, and pose normalization.
Abstract: Recent developments in techniques for modeling, digitizing and visualizing 3D shapes have led to an explosion in the number of available 3D models on the Internet and in domain-specific databases. This has led to the development of 3D shape retrieval systems that, given a query object, retrieve similar 3D objects. For visualization, 3D shapes are often represented as a surface, in particular polygonal meshes, for example in VRML format. Often these models contain holes, intersecting polygons, are not manifold, and do not enclose a volume unambiguously. On the contrary, 3D volume models, such as solid models produced by CAD systems, or voxel models, enclose a volume properly. This paper surveys the literature on methods for content based 3D retrieval, taking into account the applicability to surface models as well as to volume models. The methods are evaluated with respect to several requirements of content based 3D shape retrieval, such as: (1) shape representation requirements, (2) properties of dissimilarity measures, (3) efficiency, (4) discrimination abilities, (5) ability to perform partial matching, (6) robustness, and (7) necessity of pose normalization. Finally, the advantages and limits of the several approaches in content based 3D shape retrieval are discussed.
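One classic descriptor from this literature that sidesteps pose normalization is the D2 shape distribution: a histogram of distances between random point pairs, invariant to translation and rotation by construction. A sketch (sampling scheme and bin count are arbitrary choices, not a method from the survey's ranking):

```python
import math
import random

def d2_descriptor(points, bins=8, samples=2000, seed=0):
    """Histogram of pairwise distances between randomly chosen points,
    normalized by the largest sampled distance."""
    rng = random.Random(seed)
    dists = [math.dist(rng.choice(points), rng.choice(points))
             for _ in range(samples)]
    dmax = max(dists) or 1.0
    hist = [0] * bins
    for d in dists:
        hist[min(bins - 1, int(bins * d / dmax))] += 1
    return [h / samples for h in hist]
```

Because only pairwise distances enter, rigidly moving the model leaves the descriptor unchanged, which is requirement (7) in the survey's list handled for free.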

Journal ArticleDOI
TL;DR: This paper provides a complete analysis of a norm constrained Capon beamforming (NCCB) approach, which uses a norm constraint on the weight vector to improve the robustness against array steering vector errors and noise and provides a natural extension of the SCB to the case of uncertain steering vectors.
Abstract: The standard Capon beamformer (SCB) is known to have better resolution and much better interference rejection capability than the standard data-independent beamformer when the array steering vector is accurately known. However, the major problem of the SCB is that it lacks robustness in the presence of array steering vector errors. In this paper, we will first provide a complete analysis of a norm constrained Capon beamforming (NCCB) approach, which uses a norm constraint on the weight vector to improve the robustness against array steering vector errors and noise. Our analysis of NCCB is thorough and sheds more light on the choice of the norm constraint than what was commonly known. We also provide a natural extension of the SCB, which has been obtained via covariance matrix fitting, to the case of uncertain steering vectors by enforcing a double constraint on the array steering vector, viz. a constant norm constraint and a spherical uncertainty set constraint, which we refer to as the doubly constrained robust Capon beamformer (DCRCB). NCCB and DCRCB can both be efficiently computed at a comparable cost with that of the SCB. Performance comparisons of NCCB, DCRCB, and several other adaptive beamformers via a number of numerical examples are also presented.
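The mechanism a norm constraint induces, diagonal loading of the covariance matrix, is visible even in a tiny real-valued 2x2 example. This is an illustration only; the paper's beamformers are complex-valued, arbitrary-dimension, and choose the loading level from the uncertainty set rather than by hand.

```python
def loaded_capon_weights(R, a, loading=0.0):
    """Capon weights w = (R + eps*I)^-1 a / (a^T (R + eps*I)^-1 a) for a
    2x2 real covariance R and steering vector a; `loading` plays the role
    of the diagonal loading induced by a weight-norm constraint."""
    r11, r12 = R[0][0] + loading, R[0][1]
    r21, r22 = R[1][0], R[1][1] + loading
    det = r11 * r22 - r12 * r21
    inv = [[r22 / det, -r12 / det], [-r21 / det, r11 / det]]
    Ra = [inv[0][0] * a[0] + inv[0][1] * a[1],
          inv[1][0] * a[0] + inv[1][1] * a[1]]
    denom = a[0] * Ra[0] + a[1] * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]
```

The distortionless constraint w'a = 1 holds for any loading; larger loading shrinks the weight norm, trading some interference rejection for robustness to steering-vector errors.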

01 Jan 2004
TL;DR: There has been a resurgence of interest and exciting new developments in total variation minimizing models, some extending their applicability to inpainting, blind deconvolution and vector-valued images, while others offer improvements in better preservation of contrast, geometry and textures.
Abstract: Since their introduction in a classic paper by Rudin, Osher and Fatemi [26], total variation minimizing models have become one of the most popular and successful methodologies for image restoration. More recently, there has been a resurgence of interest and exciting new developments, some extending the applicability to inpainting, blind deconvolution and vector-valued images, while others offer improvements in better preservation of contrast, geometry and textures, in ameliorating the staircasing effect, and in exploiting the multiscale nature of the models. In addition, new computational methods have been proposed with improved computational speed and robustness. We shall review some of these recent developments.
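The ROF model is simple to state in 1-D: minimize 0.5*||u - f||^2 + lam*TV(u), where TV(u) sums |u[i+1] - u[i]|. A toy subgradient-descent sketch follows; step size, lam, and iteration count are arbitrary choices, and real solvers use the faster computational methods the review covers.

```python
def total_variation(u):
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def tv_denoise_1d(f, lam=0.5, step=0.02, iters=500):
    """Subgradient descent on the 1-D ROF energy
    0.5 * sum((u - f)^2) + lam * TV(u)."""
    sign = lambda t: (t > 0) - (t < 0)
    u = list(f)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(len(u))]   # data-fidelity term
        for i in range(len(u) - 1):                # TV subgradient
            s = sign(u[i + 1] - u[i])
            g[i] -= lam * s
            g[i + 1] += lam * s
        u = [u[i] - step * g[i] for i in range(len(u))]
    return u
```

Because TV penalizes total oscillation rather than squared gradients, the minimizer flattens noise while keeping jumps, which is exactly the edge-preserving behavior the review discusses.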

Journal ArticleDOI
TL;DR: Least-Squares Support Vector Machine (LS-SVM) regression, a semi-parametric modeling technique, is used to predict the acidity of three different grape varieties from NIR spectra and produces more accurate predictions than Partial Least Squares Regression and Multivariate Linear Regression.

Journal ArticleDOI
TL;DR: Security is defined as the degree of risk in a power system's ability to survive imminent disturbances (contingencies) without interruption to customer service, and online dynamic security assessment (DSA) provides a first line of defense against widespread disturbances.
Abstract: Security refers to the degree of risk in a power system's ability to survive imminent disturbances (contingencies) without interruption to customer service. It relates to robustness of the system to imminent disturbances and, hence, depends on the system operating condition as well as the contingent probability of disturbances. DSA refers to the analysis required to determine whether or not a power system can meet specified reliability and security criteria in both transient and steady-state time frames for all credible contingencies. Ensuring security in the new environment requires the use of advanced power system analysis tools capable of comprehensive security assessment with due consideration to practical operating criteria. These tools must be able to model the system appropriately, compute security limits in a fast and accurate manner, and provide meaningful displays to system operators. Online dynamic security assessment can provide the first line of defense against widespread system disturbances by quickly scanning the system for potential problems and providing operators with actionable results. With the development of emerging technologies, such as wide-area PMs and ISs, online DSA is expected to become a dominant weapon against system blackouts.

Journal ArticleDOI
TL;DR: An innovative watermarking scheme based on genetic algorithms (GA) in the transform domain is proposed; it is robust against watermarking attacks and improves watermarked image quality with GA.

Journal ArticleDOI
TL;DR: Simple but powerful stability criteria are presented for both continuous-time and discrete-time systems; using these criteria, stability can be checked on a closed-loop Bode plot, making it easy to design the system for robustness.