
Showing papers on "Robustness (computer science)" published in 2011


Book
27 Sep 2011
TL;DR: Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject, and experts who are concerned with fundamental issues and are also looking for inspiration for future research.
Abstract: There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where the system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. In this book, fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications. This will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject, and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students world-wide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.

3,826 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases).
Abstract: Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.
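
The detection/description/matching pipeline described here is exposed directly by OpenCV's BRISK implementation. A minimal usage sketch; the image filenames are placeholders:

    import cv2

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    brisk = cv2.BRISK_create()                      # scale-space FAST detector + binary descriptor
    kp1, des1 = brisk.detectAndCompute(img1, None)
    kp2, des2 = brisk.detectAndCompute(img2, None)

    # Bit-string descriptors are compared with the Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(len(matches), "cross-checked matches")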

3,292 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper presents a framework for adaptive visual object tracking based on structured output prediction that is able to avoid the need for an intermediate classification step, and uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking.
Abstract: Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance.
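
The structured output learning referred to above can be summarized by the standard structured SVM objective, where y ranges over candidate object positions and the label loss Δ is typically one minus bounding-box overlap (a sketch of the formulation, not the paper's exact notation):

    \min_{\mathbf{w},\,\xi \ge 0} \; \tfrac{1}{2}\|\mathbf{w}\|^2 + C \sum_i \xi_i
    \quad \text{s.t.} \quad
    \langle \mathbf{w}, \Phi(x_i, y_i) \rangle - \langle \mathbf{w}, \Phi(x_i, y) \rangle \ge \Delta(y_i, y) - \xi_i \quad \forall\, y \ne y_i .

Solving this online with a kernel is what makes the support-vector budgeting mechanism described above necessary.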

1,719 citations


Journal ArticleDOI
Xue Mei, Haibin Ling
TL;DR: This paper proposes a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework and extends the method for simultaneous tracking and recognition by introducing a static template set which stores target images from different classes.
Abstract: In this paper, we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, noise, and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target in a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an l1-regularized least-squares problem. Then, the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework. Two strategies are used to further improve the tracking performance. First, target templates are dynamically updated to capture appearance changes. Second, nonnegativity constraints are enforced to filter out clutter which negatively resembles tracking targets. We test the proposed approach on numerous sequences involving different types of challenges, including occlusion and variations in illumination, scale, and pose. The proposed approach demonstrates excellent performance in comparison with previously proposed trackers. We also extend the method for simultaneous tracking and recognition by introducing a static template set which stores target images from different classes. The recognition result at each frame is propagated to produce the final result for the whole video. The approach is validated on a vehicle tracking and classification task using outdoor infrared video sequences.
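
A simplified sketch of the candidate-scoring step, assuming vectorized grayscale patches and using scikit-learn's Lasso (positive=True gives the nonnegativity constraint) in place of the paper's l1 solver:

    import numpy as np
    from sklearn.linear_model import Lasso

    def best_candidate(candidates, templates, lam=0.01):
        """candidates: (n, d) patches; templates: (k, d) target templates.
        Returns the index of the candidate with the smallest projection error."""
        k, d = templates.shape
        # Dictionary: target templates plus positive/negative trivial templates.
        D = np.vstack([templates, np.eye(d), -np.eye(d)])       # (k + 2d, d)
        errors = []
        for y in candidates:
            coder = Lasso(alpha=lam, positive=True, max_iter=5000)
            coder.fit(D.T, y)                  # min ||y - D^T c||^2 + lam ||c||_1, c >= 0
            c_target = coder.coef_[:k]         # keep only target-template coefficients
            errors.append(np.linalg.norm(y - templates.T @ c_target))
        return int(np.argmin(errors))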

911 citations


Journal ArticleDOI
TL;DR: This paper presents a unified framework for the rigid and nonrigid point set registration problem in the presence of significant amounts of noise and outliers, and shows that the popular iterative closest point (ICP) method and several existing point set registration methods in the field are closely related and can be reinterpreted meaningfully in this general framework.
Abstract: In this paper, we present a unified framework for the rigid and nonrigid point set registration problem in the presence of significant amounts of noise and outliers. The key idea of this registration framework is to represent the input point sets using Gaussian mixture models. Then, the problem of point set registration is reformulated as the problem of aligning two Gaussian mixtures such that a statistical discrepancy measure between the two corresponding mixtures is minimized. We show that the popular iterative closest point (ICP) method and several existing point set registration methods in the field are closely related and can be reinterpreted meaningfully in our general framework. Our instantiation of this general framework is based on the L2 distance between two Gaussian mixtures, which has a closed-form expression and in turn leads to a computationally efficient registration algorithm. The resulting registration algorithm exhibits inherent statistical robustness, has an intuitive interpretation, and is simple to implement. We also provide theoretical and experimental comparisons with other robust methods for point set registration.
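
The closed form follows from the Gaussian product identity ∫ N(x; a, A) N(x; b, B) dx = N(a; b, A + B). A sketch for equal-weight isotropic mixtures placed on two point sets; the shared bandwidth s2 and the equal weights are simplifying assumptions:

    import numpy as np

    def gauss_inner(M1, M2, var):
        """Pairwise ∫ N(x; m1, v1 I) N(x; m2, v2 I) dx with var = v1 + v2."""
        d = M1.shape[1]
        diff2 = np.sum((M1[:, None, :] - M2[None, :, :]) ** 2, axis=-1)
        return np.exp(-diff2 / (2.0 * var)) / (2.0 * np.pi * var) ** (d / 2.0)

    def gmm_l2(X, Y, s2):
        """∫ (f - g)^2 for equal-weight isotropic GMMs f, g centered on X (m, d), Y (n, d)."""
        wx, wy = 1.0 / len(X), 1.0 / len(Y)
        ff = wx * wx * gauss_inner(X, X, 2.0 * s2).sum()
        gg = wy * wy * gauss_inner(Y, Y, 2.0 * s2).sum()
        fg = wx * wy * gauss_inner(X, Y, 2.0 * s2).sum()
        return ff + gg - 2.0 * fg

Registration then amounts to minimizing gmm_l2 over the parameters of the transformation applied to one of the point sets, e.g., with a general-purpose optimizer.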

909 citations


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper proposes a generic and simple framework comprising three steps: constructing a cost volume, fast cost volume filtering, and winner-take-all label selection, and achieves state-of-the-art results, including disparity maps in real-time and optical flow fields with very fine structures as well as large displacements.
Abstract: Many computer vision tasks can be formulated as labeling problems. The desired solution is often a spatially smooth labeling where label transitions are aligned with color edges of the input image. We show that such solutions can be efficiently achieved by smoothing the label costs with a very fast edge preserving filter. In this paper we propose a generic and simple framework comprising three steps: (i) constructing a cost volume, (ii) fast cost volume filtering, and (iii) winner-take-all label selection. Our main contribution is to show that with such a simple framework state-of-the-art results can be achieved for several computer vision applications. In particular, we achieve (i) disparity maps in real-time, whose quality exceeds those of all other fast (local) approaches on the Middlebury stereo benchmark, and (ii) optical flow fields with very fine structures as well as large displacements. To demonstrate robustness, the few parameters of our framework are set to nearly identical values for both applications. Also, competitive results for interactive image segmentation are presented. With this work, we hope to inspire other researchers to leverage this framework to other application areas.
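
A minimal stereo instance of the three steps, with a box filter standing in for the paper's fast edge-preserving (guided) filter and absolute intensity difference as the matching cost:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def wta_disparity(left, right, max_disp, win=9):
        """left/right: float grayscale images (H, W). Returns a disparity map."""
        H, W = left.shape
        cost = np.full((max_disp, H, W), 1e6)
        for d in range(max_disp):
            # (i) cost volume slice for disparity d
            cost[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
            # (ii) smooth the slice (the left margin x < d stays unreliable)
            cost[d] = uniform_filter(cost[d], size=win)
        return np.argmin(cost, axis=0)      # (iii) winner-take-all label selection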

898 citations


Journal ArticleDOI
TL;DR: It is shown that with small changes in the network structure (low cost) the robustness of diverse networks can be improved dramatically whereas their functionality remains unchanged, which is useful not only for improving the robustness of existing infrastructures at low cost but also for designing economically robust network systems.
Abstract: Terrorist attacks on transportation networks have traumatized modern societies. With a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or Internet communication. How and at which cost can one restructure the network such that it will become more robust against a malicious attack? We introduce a new measure for robustness and use it to devise a method to mitigate economically and efficiently this risk. We demonstrate its efficiency on the European electricity system and on the Internet as well as on complex network models. We show that with small changes in the network structure (low cost) the robustness of diverse networks can be improved dramatically whereas their functionality remains unchanged. Our results are useful not only for improving significantly with low cost the robustness of existing infrastructures but also for designing economically robust network systems.
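
The robustness measure aggregates the size of the largest connected component over the entire attack sequence, R = (1/N) Σ_{Q=1}^{N} s(Q), where s(Q) is the fraction of nodes in the largest component after removing Q nodes. A sketch with networkx, assuming a recomputed highest-degree attack:

    import networkx as nx

    def robustness_R(G):
        """R = (1/N) * sum_Q s(Q) under a recomputed highest-degree attack."""
        G = G.copy()
        N = G.number_of_nodes()
        total = 0.0
        for _ in range(N - 1):
            v = max(G.degree, key=lambda t: t[1])[0]   # current highest-degree node
            G.remove_node(v)
            total += max(len(c) for c in nx.connected_components(G)) / N
        return total / N

    print(robustness_R(nx.barabasi_albert_graph(200, 3)))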

793 citations


Journal ArticleDOI
TL;DR: A new formulation of appearance-only SLAM suitable for very large scale place recognition that incorporates robustness against perceptual aliasing and substantially outperforms the standard term-frequency inverse-document-frequency (tf-idf) ranking measure.
Abstract: We describe a new formulation of appearance-only SLAM suitable for very large scale place recognition. The system navigates in the space of appearance, assigning each new observation to either a new or a previously visited location, without reference to metric position. The system is demonstrated performing reliable online appearance mapping and loop-closure detection over a 1000 km trajectory, with mean filter update times of 14 ms. The scalability of the system is achieved by defining a sparse approximation to the FAB-MAP model suitable for implementation using an inverted index. Our formulation of the problem is fully probabilistic and naturally incorporates robustness against perceptual aliasing. We also demonstrate that the approach substantially outperforms the standard term-frequency inverse-document-frequency (tf-idf) ranking measure. The 1000 km data set comprising almost a terabyte of omni-directional and stereo imagery is available for use, and we hope that it will serve as a benchmark for future systems.
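
The inverted-index idea is generic: score only the previously visited places that share at least one visual word with the query, so the cost scales with matched postings rather than with trajectory length. A toy sketch in which plain word counts stand in for the probabilistic FAB-MAP scoring:

    from collections import defaultdict

    places = {0: {3, 17, 42}, 1: {3, 99}, 2: {7, 17, 42}}   # place id -> visual word ids (toy)
    index = defaultdict(list)                               # word id -> posting list of places
    for pid, words in places.items():
        for w in words:
            index[w].append(pid)

    def candidate_scores(query_words):
        scores = defaultdict(int)
        for w in query_words:
            for pid in index[w]:
                scores[pid] += 1        # only matched places are ever touched
        return dict(scores)

    print(candidate_scores({5, 17, 42}))   # {0: 2, 2: 2}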

661 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel approach for multiperson tracking-by-detection in a particle filtering framework that detects and tracks a large number of dynamically moving people in complex scenes with occlusions, requires no camera or ground plane calibration, and only makes use of information from the past.
Abstract: In this paper, we address the problem of automatically detecting and tracking a variable number of persons in complex scenes using a monocular, potentially moving, uncalibrated camera. We propose a novel approach for multiperson tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online-trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. The main contribution of this paper is to explore how these unreliable information sources can be used for robust multiperson tracking. The algorithm detects and tracks a large number of dynamically moving people in complex scenes with occlusions, does not rely on background modeling, requires no camera or ground plane calibration, and only makes use of information from the past. Hence, it imposes very few restrictions and is suitable for online applications. Our experiments show that the method yields good tracking performance in a large variety of highly dynamic scenarios, such as typical surveillance videos, webcam footage, or sports sequences. We demonstrate that our algorithm outperforms other methods that rely on additional information. Furthermore, we analyze the influence of different algorithm components on the robustness.
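
A minimal sketch of one particle-filter step with a graded observation model: a continuous detector confidence (any callable returning a score in [0, 1]) weights the particles instead of thresholded detections. The Gaussian random-walk motion model is an assumption for illustration:

    import numpy as np

    def pf_step(particles, weights, confidence, motion_std=2.0, rng=np.random.default_rng()):
        """particles: (n, 2) image positions; weights: (n,) normalized."""
        n = len(particles)
        idx = rng.choice(n, size=n, p=weights)                    # resample
        particles = particles[idx] + rng.normal(0.0, motion_std, size=(n, 2))  # predict
        w = np.array([confidence(x, y) for x, y in particles]) + 1e-9  # graded update
        return particles, w / w.sum()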

658 citations


Journal ArticleDOI
TL;DR: Compared with other PSO algorithms, the comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
Abstract: Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use, but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. This new learning strategy and the new algorithms are tested on a set of 16 benchmark functions, and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
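
A simplified sketch of the orthogonal-experimental-design step: build a two-level orthogonal array over the problem dimensions, test each row as a per-dimension pbest/gbest combination, and keep the best row as the exemplar. The full method additionally performs a factor analysis to predict a still better combination, which is omitted here:

    import numpy as np

    def oed_exemplar(f, pbest, gbest):
        """f: objective (minimized); pbest, gbest: (q,) vectors. Returns the exemplar."""
        q = len(pbest)
        k = int(np.ceil(np.log2(q + 1)))
        oa = np.array([[bin(m & c).count("1") % 2 for c in range(1, q + 1)]
                       for m in range(2 ** k)])         # L_M(2^q) orthogonal array
        trials = np.where(oa == 0, pbest, gbest)        # 0 -> pbest dim, 1 -> gbest dim
        return trials[np.argmin([f(t) for t in trials])]

The exemplar then replaces the separate personal/neighborhood terms in the velocity update, v <- w*v + c*r*(exemplar - x).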

633 citations


Journal ArticleDOI
TL;DR: The proposed sparse correntropy framework is more robust and efficient in dealing with the occlusion and corruption problems in face recognition as compared to the related state-of-the-art methods, and its computational cost is much lower than that of the SRC algorithms.
Abstract: In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we in particular impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition as compared to the related state-of-the-art methods. In particular, it shows that the proposed method can improve both recognition accuracy and receiver operating characteristic (ROC) curves, while the computational cost is much lower than that of the SRC algorithms.
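
A half-quadratic sketch of the alternation just described: Gaussian (correntropy) weights down-weight outlier pixels, then a weighted nonnegative least-squares problem is solved. scipy's nnls stands in for the paper's solver, and sigma is an assumed kernel width:

    import numpy as np
    from scipy.optimize import nnls

    def correntropy_code(D, y, sigma=0.1, iters=10):
        """D: (d, k) dictionary; y: (d,) sample. Returns a nonnegative code x."""
        x = np.zeros(D.shape[1])
        for _ in range(iters):
            e = y - D @ x
            w = np.exp(-e ** 2 / (2.0 * sigma ** 2))    # outlier pixels get small weight
            sw = np.sqrt(w)
            x, _ = nnls(D * sw[:, None], y * sw)        # weighted least squares, x >= 0
        return x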

Proceedings Article
28 Jun 2011
TL;DR: This paper develops "Go Decomposition" (GoDec) to efficiently and robustly estimate the low-rank part L and the sparse part S of a matrix X = L + S + G with noise G, and analyzes its convergence and robustness.
Abstract: Low-rank and sparse structures have been profoundly studied in matrix completion and compressed sensing. In this paper, we develop "Go Decomposition" (GoDec) to efficiently and robustly estimate the low-rank part L and the sparse part S of a matrix X = L + S + G with noise G. GoDec alternately assigns the low-rank approximation of X - S to L and the sparse approximation of X - L to S. The algorithm can be significantly accelerated by bilateral random projections (BRP). We also propose GoDec for matrix completion as an important variant. We prove that the objective value ||X - L - S||_F^2 converges to a local minimum, while L and S linearly converge to local optima. Theoretically, we analyze the influence of L, S and G on the asymptotic/convergence speeds in order to discover the robustness of GoDec. Empirical studies suggest the efficiency, robustness and effectiveness of GoDec compared with representative matrix decomposition and completion tools, e.g., Robust PCA and OptSpace.
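
A plain-SVD sketch of the alternation (the paper accelerates the low-rank step with bilateral random projections); rank and cardinality are the two model parameters:

    import numpy as np

    def godec(X, rank, card, iters=50):
        """L <- rank-r approximation of X - S; S <- `card` largest-magnitude entries of X - L."""
        S = np.zeros_like(X)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            R = X - L
            thresh = np.partition(np.abs(R).ravel(), -card)[-card]
            S = np.where(np.abs(R) >= thresh, R, 0.0)   # hard thresholding keeps ~card entries
        return L, S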

Proceedings ArticleDOI
20 Jun 2011
TL;DR: The robust sparse coding (RSC) scheme is proposed, which seeks the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC.
Abstract: Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model actually assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely the robust sparse coding (RSC), by modeling the sparse coding as a sparsity-constrained robust regression problem. The RSC seeks the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.
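
A sketch of the iteratively reweighted loop, with an assumed sigmoid weight function standing in for the MLE-derived weights and scikit-learn's Lasso as the l1 solver:

    import numpy as np
    from sklearn.linear_model import Lasso

    def rsc_code(D, y, sigma=0.3, lam=0.01, iters=5):
        """D: (d, k) training dictionary; y: (d,) test image. Returns a sparse code x."""
        x = np.zeros(D.shape[1])
        for _ in range(iters):
            e2 = (y - D @ x) ** 2
            w = 1.0 / (1.0 + np.exp((e2 - e2.mean()) / sigma ** 2))  # assumed weight form
            sw = np.sqrt(w)
            x = Lasso(alpha=lam, max_iter=5000).fit(D * sw[:, None], y * sw).coef_
        return x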

Journal ArticleDOI
TL;DR: This paper provides a systematic approach to the design of filter-based active damping methods with tuning procedures, performance, robustness, and limitations discussed with theoretical analysis, selected simulation, and experimental results.
Abstract: Pulsewidth modulation (PWM) voltage source converters are becoming a popular interface to the power grid for many applications. Hence, issues related to the reduction of PWM harmonics injection in the power grid are becoming more relevant. The use of high-order filters like LCL filters is a standard solution to provide the proper attenuation of PWM carrier and sideband voltage harmonics. However, those grid filters introduce potentially unstable dynamics that should be properly damped either passively or actively. The second solution suffers from control and system complexity (a high number of sensors and a high-order controller), even if it is more attractive due to the absence of losses in the damping resistors and due to its flexibility. An interesting and straightforward active damping solution consists in plugging in, in cascade to the main controller, a filter that should damp the unstable dynamics. No more sensors are needed, but there are open issues such as preserving the bandwidth, robustness, and limited complexity. This paper provides a systematic approach to the design of filter-based active damping methods. The tuning procedures, performance, robustness, and limitations of the different solutions are discussed with theoretical analysis, selected simulation, and experimental results.
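
One common filter-based active damper is a notch centered on the LCL resonance, plugged in cascade with the current controller exactly as described. A sketch with scipy; the sampling and resonance frequencies are assumed plant values, and Q trades notch width against phase lag:

    import numpy as np
    from scipy import signal

    fs = 10_000        # controller sampling frequency [Hz] (assumed)
    f_res = 1_200      # LCL resonance frequency [Hz] (assumed)

    b, a = signal.iirnotch(w0=f_res, Q=2.0, fs=fs)   # discrete notch filter
    w, h = signal.freqz(b, a, fs=fs)
    atten_db = 20 * np.log10(np.abs(h[np.argmin(np.abs(w - f_res))]))
    print("attenuation at resonance: %.1f dB" % atten_db)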

Journal ArticleDOI
TL;DR: In this article, a robust optimization model for handling the inherent uncertainty of input data in a closed-loop supply chain network design problem is proposed, and the robust counterpart of the proposed mixed-integer linear programming model is presented by using the recent extensions in robust optimization theory.

Journal ArticleDOI
TL;DR: This paper introduces a robust, learning-based brain extraction system (ROBEX), which combines a discriminative and a generative model to achieve the final result and shows that ROBEX provides significantly improved performance measures for almost every method/dataset combination.
Abstract: Automatic whole-brain extraction from magnetic resonance images (MRI), also known as skull stripping, is a key component in most neuroimage pipelines. As the first element in the chain, its robustness is critical for the overall performance of the system. Many skull stripping methods have been proposed, but the problem is not considered to be completely solved yet. Many systems in the literature have good performance on certain datasets (mostly the datasets they were trained/tuned on), but fail to produce satisfactory results when the acquisition conditions or study populations are different. In this paper we introduce a robust, learning-based brain extraction system (ROBEX). The method combines a discriminative and a generative model to achieve the final result. The discriminative model is a Random Forest classifier trained to detect the brain boundary; the generative model is a point distribution model that ensures that the result is plausible. When a new image is presented to the system, the generative model is explored to find the contour with highest likelihood according to the discriminative model. Because the target shape is in general not perfectly represented by the generative model, the contour is refined using graph cuts to obtain the final segmentation. Both models were trained using 92 scans from a proprietary dataset but they achieve a high degree of robustness on a variety of other datasets. ROBEX was compared with six other popular, publicly available methods (BET, BSE, FreeSurfer, AFNI, BridgeBurner, and GCUT) on three publicly available datasets (IBSR, LPBA40, and OASIS, 137 scans in total) that include a wide range of acquisition hardware and a highly variable population (different age groups, healthy/diseased). The results show that ROBEX provides significantly improved performance measures for almost every method/dataset combination.

Journal ArticleDOI
TL;DR: A comprehensive survey of the existing condition monitoring and protection methods in the following five areas: thermal protection and temperature estimation, stator insulation monitoring, bearing fault detection, broken rotor bar/end-ring detection, and air gap eccentricity detection is presented in this article.
Abstract: Medium-voltage (MV) induction motors are widely used in the industry and are essential to industrial processes. The breakdown of these MV motors not only leads to high repair expenses but also causes extraordinary financial losses due to unexpected downtime. To provide reliable condition monitoring and protection for MV motors, this paper presents a comprehensive survey of the existing condition monitoring and protection methods in the following five areas: thermal protection and temperature estimation, stator insulation monitoring and fault detection, bearing fault detection, broken rotor bar/end-ring detection, and air gap eccentricity detection. For each category, the related features of MV motors are discussed; the effectiveness of the existing methods are discussed in terms of their robustness, accuracy, and implementation complexity. Recommendations for the future research in these areas are also presented.

Posted Content
TL;DR: In this paper, the authors consider the case of 1-bit CS measurements and provide a lower bound on the best achievable reconstruction error, and show that the same class of matrices that provide almost optimal noiseless performance also enable a robust mapping.
Abstract: The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that i.i.d. random Gaussian matrices describe measurement mappings achieving, with overwhelming probability, nearly optimal error decay. Next, we consider reconstruction robustness to measurement errors and noise and introduce the Binary ε-Stable Embedding (BεSE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide almost optimal noiseless performance also enables such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
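
A compact sketch of BIHT as named above: a gradient step on the sign mismatches followed by hard thresholding to the k largest entries; the final normalization reflects that 1-bit measurements destroy scale. The step size tau is an assumed default:

    import numpy as np

    def biht(A, y, k, tau=None, iters=100):
        """Recover a unit-norm k-sparse x from y = sign(A x). A: (m, n), y in {-1, +1}^m."""
        m, n = A.shape
        tau = tau if tau is not None else 1.0 / m
        x = np.zeros(n)
        for _ in range(iters):
            g = x + tau * (A.T @ (y - np.sign(A @ x)))   # step on sign mismatches
            idx = np.argsort(np.abs(g))[-k:]             # keep the k largest entries
            x = np.zeros(n)
            x[idx] = g[idx]
        return x / max(np.linalg.norm(x), 1e-12)

    # toy check: recover a unit-norm 10-sparse signal from sign measurements
    rng = np.random.default_rng(0)
    n, m, k = 256, 1024, 10
    x0 = np.zeros(n)
    x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x0 /= np.linalg.norm(x0)
    A = rng.standard_normal((m, n))
    print(np.linalg.norm(x0 - biht(A, np.sign(A @ x0), k)))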

Book ChapterDOI
26 Mar 2011
TL;DR: S-TaLiRo is a Matlab toolbox that searches for trajectories of minimal robustness in Simulink/Stateflow diagrams using randomized testing based on stochastic optimization techniques including Monte-Carlo methods and Ant-Colony Optimization.
Abstract: S-TaLiRo is a Matlab toolbox that searches for trajectories of minimal robustness in Simulink/Stateflow diagrams. It can analyze arbitrary Simulink models or user-defined functions that model the system. At the heart of the tool, we use randomized testing based on stochastic optimization techniques including Monte-Carlo methods and Ant-Colony Optimization. Among the advantages of the toolbox is the seamless integration inside the Matlab environment, which is widely used in the industry for model-based development of control software. We present the architecture of S-TaLiRo and its working on an application example.
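
Independently of the Matlab tooling, the core loop is easy to sketch: sample an input parameterization, simulate, evaluate the specification's robustness, and keep the minimum (a negative value is a falsifying trajectory). Plain Monte-Carlo sampling stands in here for the stochastic-optimization proposals:

    import numpy as np

    def falsify(simulate, robustness, sample, budget=1000, seed=0):
        """simulate: params -> trajectory; robustness: trajectory -> float."""
        rng = np.random.default_rng(seed)
        best_rob, best_p = np.inf, None
        for _ in range(budget):
            p = sample(rng)
            rob = robustness(simulate(p))
            if rob < best_rob:
                best_rob, best_p = rob, p
            if best_rob < 0:            # falsifying input found
                break
        return best_rob, best_p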

Proceedings ArticleDOI
18 Sep 2011
TL;DR: An overview of the approaches that the participants used, the evaluation measure, and the dataset used in the ICDAR 2011 Robust Reading Competition for detecting/recognizing text in natural scene images is presented.
Abstract: Recognition of text in natural scene images is becoming a prominent research area due to the widespread availability of imaging devices in low-cost consumer products like mobile phones. To evaluate the performance of recent algorithms in detecting and recognizing text from complex images, the ICDAR 2011 Robust Reading Competition was organized. Challenge 2 of the competition dealt specifically with detecting/recognizing text in natural scene images. This paper presents an overview of the approaches that the participants used, the evaluation measure, and the dataset used in Challenge 2 of the contest. We also report the performance of all participating methods for text localization and word recognition tasks and compare their results using standard methods of area precision/recall and edit distance.

Journal ArticleDOI
TL;DR: Highly optimized embedded-atom-method (EAM) potentials were developed for 14 face-centered-cubic (fcc) elements across the periodic table by fitting the potential-energy surface (PES) of each element derived from high-precision first-principles calculations.
Abstract: Highly optimized embedded-atom-method (EAM) potentials have been developed for 14 face-centered-cubic (fcc) elements across the periodic table. The potentials were developed by fitting the potential-energy surface (PES) of each element derived from high-precision first-principles calculations. The as-derived potential-energy surfaces were shifted and scaled to match experimental reference data. In constructing the PES, a variety of properties of the elements were considered, including lattice dynamics, mechanical properties, thermal behavior, energetics of competing crystal structures, defects, deformation paths, liquid structures, and so forth. For each element, the constructed EAM potentials were tested against experimental data pertaining to thermal expansion, melting, and liquid dynamics via molecular dynamics computer simulation. The as-developed potentials demonstrate high fidelity and robustness. Owing to their improved accuracy and wide applicability, the potentials are suitable for high-quality atomistic computer simulation of practical applications.
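
The functional form being fitted is E = Σ_i F(ρ_i) + ½ Σ_{i≠j} φ(r_ij), with host electron density ρ_i = Σ_{j≠i} f(r_ij). A direct O(N²), non-periodic sketch, with the fitted tabulated functions abstracted as callables and toy exponential forms for illustration:

    import numpy as np

    def eam_energy(positions, F, phi, f_dens, rcut):
        """positions: (N, 3). F: embedding function; phi: pair term; f_dens: density term."""
        r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
        mask = (r > 0.0) & (r < rcut)              # drop self-pairs and beyond-cutoff pairs
        pair = 0.5 * phi(r[mask]).sum()            # each pair appears twice, hence the 1/2
        rho = np.array([f_dens(r[i][mask[i]]).sum() for i in range(len(positions))])
        return pair + F(rho).sum()

    E = eam_energy(np.random.rand(8, 3) * 4.0,
                   F=lambda rho: -np.sqrt(rho),    # made-up forms, not a fitted potential
                   phi=lambda r: np.exp(-r),
                   f_dens=lambda r: np.exp(-2.0 * r),
                   rcut=3.0)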

Journal ArticleDOI
TL;DR: Recent adaptive results from a variety of laminar and Reynolds-averaged Navier-Stokes applications show the power of output-based adaptive methods for improving the robustness of computational fluid dynamics computations; however, challenges and areas of additional future research remain.
Abstract: Error estimation and control are critical ingredients for improving the reliability of computational simulations. Adjoint-based techniques can be used both to estimate the error in chosen solution outputs and to provide local indicators for adaptive refinement. This article reviews recent work on these techniques for computational fluid dynamics applications in aerospace engineering. The definition of the adjoint as the sensitivity of an output to residual source perturbations is used to derive both the adjoint equation, in fully discrete and variational formulations, and the adjoint-weighted residual method for error estimation. Assumptions and approximations made in the calculations are discussed. Presentation of the discrete and variational formulations enables a side-by-side comparison of recent work in output-error estimation using the finite volume method and the finite element method. Techniques for adapting meshes using output-error indicators are also reviewed. Recent adaptive results from a variety of laminar and Reynolds-averaged Navier-Stokes applications show the power of output-based adaptive methods for improving the robustness of computational fluid dynamics computations. However, challenges and areas of additional future research remain, including computable error bounds and robust mesh adaptation mechanics.
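
For a linear discrete problem the adjoint-weighted residual estimate is exact, which makes the idea easy to demonstrate: with Au = f, output J(u) = gᵀu, and discrete adjoint Aᵀψ = g, the output error of an approximate solution u_H is J(u) − J(u_H) = −ψᵀ(A u_H − f). A sketch with a random stand-in operator:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in discrete operator
    f = rng.standard_normal(n)
    g = rng.standard_normal(n)                          # output functional J(u) = g^T u

    u = np.linalg.solve(A, f)                           # exact discrete solution
    u_H = u + 1e-3 * rng.standard_normal(n)             # approximate (e.g., coarse) solution
    psi = np.linalg.solve(A.T, g)                       # discrete adjoint solve

    true_err = g @ u - g @ u_H
    est_err = -psi @ (A @ u_H - f)                      # adjoint-weighted residual
    print(true_err, est_err)                            # agree to rounding for a linear problem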

Journal ArticleDOI
TL;DR: This paper analyzes the stability problem of the grid-connected voltage-source inverter (VSI) with LC filters, which demonstrates that the possible grid-impedance variations have a significant influence on the system stability when a conventional proportional-integral (PI) controller is used for grid current control.
Abstract: This paper analyzes the stability problem of the grid-connected voltage-source inverter (VSI) with LC filters, which demonstrates that the possible grid-impedance variations have a significant influence on the system stability when a conventional proportional-integral (PI) controller is used for grid current control. As the grid inductive impedance increases, the low-frequency gain and bandwidth of the PI controller have to be decreased to keep the system stable, thus degrading the tracking performance and disturbance rejection capability. To deal with this stability problem, an H∞ controller with explicit robustness in terms of grid-impedance variations is proposed to incorporate the desired tracking performance and the stability margin. By properly selecting the weighting functions, the synthesized H∞ controller exhibits high gains at the vicinity of the line frequency, similar to the traditional proportional-resonant controller; meanwhile, it has enough high-frequency attenuation to keep the control loop stable. An inner inverter-output-current loop with high bandwidth is also designed to get better disturbance rejection capability. The selection of weighting functions, inner inverter-output-current loop design, and system disturbance rejection capability are discussed in detail in this paper. Both simulation and experimental results of the proposed H∞ controller as well as the conventional PI controller are given and compared, which validates the performance of the proposed control scheme.

Journal ArticleDOI
TL;DR: A model for provable data possession (PDP) that can be used for remote data checking: A client that has stored data at an untrusted server can verify that the server possesses the original data without retrieving it.
Abstract: We introduce a model for provable data possession (PDP) that can be used for remote data checking: A client that has stored data at an untrusted server can verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking is lightweight and supports large data sets in distributed storage systems. The model is also robust in that it incorporates mechanisms for mitigating arbitrary amounts of data corruption. We present two provably-secure PDP schemes that are more efficient than previous solutions. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. We then propose a generic transformation that adds robustness to any remote data checking scheme based on spot checking. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation. Finally, we conduct an in-depth experimental evaluation to study the tradeoffs in performance, security, and space overheads when adding robustness to a remote data checking scheme.
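
A toy sketch of the spot-checking idea only: the client tags blocks with a keyed MAC, later challenges a random subset, and verifies the response. This is not the paper's scheme; actual PDP uses homomorphic verifiable tags so that the client keeps only a constant amount of metadata and the server proves possession without shipping the blocks themselves:

    import hashlib, hmac, os, random

    # setup (client): tag each block, then offload the blocks to the server
    key = os.urandom(32)
    blocks = [os.urandom(4096) for _ in range(1000)]
    tag = lambda i, b: hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
    tags = [tag(i, b) for i, b in enumerate(blocks)]    # toy shortcut: client keeps O(n) tags

    # challenge (client): sample a small random subset of block indices
    challenge = random.sample(range(len(blocks)), k=20)

    # response (server) and verification (client)
    response = [(i, blocks[i]) for i in challenge]      # honest server
    ok = all(hmac.compare_digest(tag(i, b), tags[i]) for i, b in response)
    print("possession verified:", ok)                   # corruption is caught w.h.p.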

Journal ArticleDOI
TL;DR: A new approach for estimating small failure probabilities is presented, building on the subset simulation method proposed by S.-K. Au and J. L. Beck.
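
A textbook sketch of subset simulation for P[g(X) ≥ b] with X standard normal: intermediate thresholds sit at the p0-quantile of each level's samples, and a component-wise Metropolis chain regenerates samples conditioned on the current level:

    import numpy as np

    def mcmc_step(x, gx, g, b, rng):
        """One component-wise Metropolis move targeting N(0, I) restricted to {g >= b}."""
        y = x.copy()
        for j in range(len(y)):
            cand = y[j] + rng.normal()
            if rng.random() < np.exp(0.5 * (y[j] ** 2 - cand ** 2)):  # N(0,1) ratio
                y[j] = cand
        gy = g(y)
        return (y, gy) if gy >= b else (x, gx)          # reject moves leaving the level set

    def subset_simulation(g, dim, b_fail, n=1000, p0=0.1, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((n, dim))
        G = np.array([g(x) for x in X])
        prob = 1.0
        for _ in range(20):                             # cap on the number of levels
            b = np.quantile(G, 1.0 - p0)                # intermediate threshold
            if b >= b_fail:
                return prob * np.mean(G >= b_fail)
            prob *= p0
            X, G = list(X[G >= b]), list(G[G >= b])
            while len(X) < n:                           # grow chains from the level's seeds
                i = rng.integers(len(X))
                x_new, g_new = mcmc_step(X[i], G[i], g, b, rng)
                X.append(x_new)
                G.append(g_new)
            X, G = np.array(X), np.array(G)
        return prob * np.mean(G >= b_fail)

    # toy check: P[x1 >= 3.5] for x1 ~ N(0, 1) is about 2.3e-4
    print(subset_simulation(lambda x: x[0], dim=2, b_fail=3.5))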

Proceedings ArticleDOI
06 Nov 2011
TL;DR: Online learning has shown to be successful in tracking of previously unknown objects, however, most approaches are limited to a bounding-box representation with fixed aspect ratio and cannot handle highly non-rigid and articulated objects.
Abstract: Online learning has shown to be successful in tracking of previously unknown objects. However, most approaches are limited to a bounding-box representation with fixed aspect ratio. Thus, they provide a less accurate foreground/background separation and cannot handle highly non-rigid and articulated objects. This, in turn, increases the amount of noise introduced during online self-training.

Journal ArticleDOI
TL;DR: In this paper, a mixed-integer programming model was proposed to minimize the nominal cost while reducing the disruption risk using the p-robustness criterion, which bounds the cost in disruption scenarios.
Abstract: This paper studies a strategic supply chain management problem to design reliable networks that perform as well as possible under normal conditions, while also performing relatively well when disruptions strike. We present a mixed-integer programming model whose objective is to minimize the nominal cost (the cost when no disruptions occur) while reducing the disruption risk using the p-robustness criterion (which bounds the cost in disruption scenarios). We propose a hybrid metaheuristic algorithm that is based on genetic algorithms, local improvement, and the shortest augmenting path method. Numerical tests show that the heuristic greatly outperforms CPLEX in terms of solution speed, while still delivering excellent solution quality. We demonstrate the tradeoff between the nominal cost and system reliability, showing that substantial improvements in reliability are often possible with minimal increases in cost. We also show that our model produces solutions that are less conservative than those generated by common robustness measures.
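
In the usual statement of the p-robustness constraint (sketched from the description above; notation assumed: ĉ is the nominal cost, c_s the cost under disruption scenario s, and c_s* the optimal cost if scenario s were known in advance):

    \min_{x \in X} \; \hat{c}(x)
    \qquad \text{s.t.} \qquad
    c_s(x) \le (1 + p)\, c_s^{*} \quad \forall\, s \in S .

Smaller p forces solutions that stay near scenario-optimal under every disruption at the price of a higher nominal cost, which is exactly the tradeoff reported above.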

Proceedings Article
17 Nov 2011
TL;DR: This paper assumes that the adversary has control over some training data and aims to subvert the SVM learning process, and it proposes a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
Abstract: In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have shown to be a very successful approach in classification problems, their effectiveness in adversarial classification tasks has not been extensively investigated yet. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary has control over some training data, and aims to subvert the SVM learning process. Within this assumption, we show that this is indeed possible, and propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
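
The paper's specific kernel matrix correction is not reproduced here; the sketch below shows the general flavor of such corrections, inflating the kernel diagonal (extra regularization that dampens the influence of individual, possibly manipulated, training points) with an assumed strength lam:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    def train_corrected_svm(X_train, y_train, gamma=0.5, lam=1.0):
        K = rbf_kernel(X_train, X_train, gamma=gamma)
        K_corr = K + lam * np.eye(len(X_train))          # diagonal correction (assumed form)
        return SVC(kernel="precomputed", C=1.0).fit(K_corr, y_train)

    # prediction uses the plain cross-kernel between test and training points:
    # clf.predict(rbf_kernel(X_test, X_train, gamma=0.5))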

Journal ArticleDOI
TL;DR: Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the L1 norm when outliers occur; it requires no zero-mean assumption on the data and can estimate the data mean during optimization.
Abstract: Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.
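
A half-quadratic sketch matching the three listed properties: correntropy weights computed from reconstruction residuals, a re-estimated (not assumed-zero) mean, and principal directions taken as the top eigenvectors of a weighted covariance. sigma is an assumed kernel width:

    import numpy as np

    def mcc_pca(X, k, sigma=1.0, iters=20):
        """X: (n, d) data. Returns robust mean m and top-k directions U (d, k)."""
        m = X.mean(axis=0)
        U = np.linalg.svd(X - m, full_matrices=False)[2][:k].T   # ordinary PCA init
        for _ in range(iters):
            R = (X - m) - (X - m) @ U @ U.T                      # reconstruction residuals
            w = np.exp(-np.sum(R ** 2, axis=1) / (2.0 * sigma ** 2))
            m = (w[:, None] * X).sum(axis=0) / w.sum()           # estimated data mean
            C = (w[:, None] * (X - m)).T @ (X - m) / w.sum()     # weighted covariance
            vals, vecs = np.linalg.eigh(C)
            U = vecs[:, -k:]                                     # largest-eigenvalue vectors
        return m, U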

Proceedings ArticleDOI
20 Jun 2011
TL;DR: Real-time Compressive Sensing Tracking (RTCST) exploits the signal recovery power of compressive sensing (CS), and adopts dimensionality reduction and a customized Orthogonal Matching Pursuit (OMP) algorithm to accelerate the CS tracking.
Abstract: The l1 tracker obtains robustness by seeking a sparse representation of the tracking object via l1 norm minimization. However, the high computational complexity involved in the l1 tracker may hamper its applications in real-time processing scenarios. Here we propose Real-time Compressive Sensing Tracking (RTCST) by exploiting the signal recovery power of Compressive Sensing (CS). Dimensionality reduction and a customized Orthogonal Matching Pursuit (OMP) algorithm are adopted to accelerate the CS tracking. As a result, our algorithm achieves a real-time speed that is up to 5,000 times faster than that of the l1 tracker. Meanwhile, RTCST still produces competitive (sometimes even superior) tracking accuracy compared to the l1 tracker. Furthermore, for a stationary camera, a refined tracker is designed by integrating a CS-based background model (CSBM) into tracking. This CSBM-equipped tracker, termed RTCST-B, outperforms most state-of-the-art trackers in terms of both accuracy and robustness. Finally, our experimental results on various video sequences, which are verified by a new metric, Tracking Success Probability (TSP), demonstrate the excellence of the proposed algorithms.
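
The OMP building block is available off the shelf; a sketch of scoring one target candidate against a template dictionary with scikit-learn (the paper's dimensionality-reduction step and customized OMP are omitted):

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def omp_score(D, y, n_nonzero=10):
        """Reconstruction error of candidate y (d,) over dictionary D (d, k) via OMP."""
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False).fit(D, y)
        return np.linalg.norm(y - D @ omp.coef_)

The tracked target is then the candidate with the smallest omp_score, mirroring the l1 tracker's projection-error rule at far lower cost.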