
Showing papers on "Robustness (computer science) published in 2009"


Proceedings ArticleDOI
Jia Deng1, Wei Dong1, Richard Socher1, Li-Jia Li1, Kai Li1, Li Fei-Fei1 
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.

49,639 citations
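
The semantic backbone described above is WordNet's hypernym (is-a) hierarchy. As a rough illustration of what that hierarchy looks like — using the NLTK WordNet corpus, which is an assumption; ImageNet distributes its own synset lists — one can print the hypernym chains a synset would hang from:

```python
from nltk.corpus import wordnet as wn  # assumes nltk and its WordNet corpus are installed

# Each ImageNet category is a WordNet noun synset; images attached to a synset
# are implicitly organized by its hypernym (is-a) chains, e.g.:
syn = wn.synsets("husky", pos=wn.NOUN)[0]
for path in syn.hypernym_paths():
    print(" -> ".join(s.name() for s in path))
# prints chains of the form entity.n.01 -> ... -> dog.n.01 -> ... (exact output
# depends on the WordNet version)
```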


Journal ArticleDOI
TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Abstract: We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.

9,658 citations
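
The classification rule sketched above — solve an ℓ1-minimization over the training dictionary, then assign the class whose coefficients best reconstruct the test sample — can be prototyped with a Lasso solver standing in for exact ℓ1-minimization. This is a minimal sketch, not the authors' implementation; the `alpha` value and the Lasso relaxation are assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=0.01):
    """Sparse-representation classification sketch.
    A: (d, n) matrix whose columns are training images, labels: (n,) class ids,
    y: (d,) test image. alpha is a hypothetical regularization weight."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(A, y)                          # ell_1-regularized fit: y ~ A x, x sparse
    x = lasso.coef_
    best, best_r = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only the class-c coefficients
        r = np.linalg.norm(y - A @ xc)       # class-wise reconstruction residual
        if r < best_r:
            best, best_r = c, r
    return best                              # class with the smallest residual
```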


Proceedings ArticleDOI
12 May 2009
TL;DR: This paper modifies the mathematical expressions of Point Feature Histograms and performs a rigorous analysis of their robustness and complexity for the problem of 3D registration of overlapping point cloud views, and proposes an algorithm for the online computation of FPFH features for real-time applications.
Abstract: In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for real-time applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment).

3,138 citations
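
For readers who want to try FPFH today, the feature is implemented in the Open3D library — an assumption on tooling; the authors' original code shipped with ROS/PCL. A minimal sketch (the filename and search radii are illustrative guesses):

```python
import open3d as o3d

# Load a point cloud (the filename is hypothetical)
pcd = o3d.io.read_point_cloud("scan.pcd")
# FPFH is built on surface normals, so estimate them first
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
# One 33-dimensional FPFH descriptor per point; radii are dataset-dependent
fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.25, max_nn=100))
print(fpfh.data.shape)  # (33, num_points)
```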


Posted Content
TL;DR: In this article, a modular framework for constructing randomized algorithms that compute partial matrix decompositions is presented. These methods use random sampling to identify a subspace that captures most of the action of a matrix; the input matrix is then compressed to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization.
Abstract: Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis.

2,356 citations
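
The "sample, compress, factor deterministically" recipe described above is concrete enough to sketch directly. The following follows the paper's proto-algorithm; the oversampling parameter and number of power iterations are my choices, not the paper's:

```python
import numpy as np

def randomized_svd(A, k, p=10, q=2, seed=None):
    """Rank-k SVD sketch via randomized range finding.
    p: oversampling; q: power iterations (sharpen the subspace when the
    spectrum decays slowly; re-orthonormalization omitted for brevity)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega                             # sample the range of A
    for _ in range(q):
        Y = A @ (A.T @ Y)                     # power iteration
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for the sampled range
    B = Q.T @ A                               # compress A to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]     # truncate to rank k
```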


Journal ArticleDOI
TL;DR: In this paper, the authors present a tutorial on modeling the dynamics of hybrid systems, on the elements of stability theory for hybrid systems and on the basics of hybrid control, focusing on the robustness of asymptotic stability to data perturbation, external disturbances and measurement error.
Abstract: Robust stability and control for systems that combine continuous-time and discrete-time dynamics. This article is a tutorial on modeling the dynamics of hybrid systems, on the elements of stability theory for hybrid systems, and on the basics of hybrid control. The presentation and selection of material is oriented toward the analysis of asymptotic stability in hybrid systems and the design of stabilizing hybrid controllers. Our emphasis on the robustness of asymptotic stability to data perturbation, external disturbances, and measurement error distinguishes the approach taken here from other approaches to hybrid systems. While we make some connections to alternative approaches, this article does not aspire to be a survey of the hybrid system literature, which is vast and multifaceted.

1,773 citations
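
A hybrid system in the sense of this tutorial combines a flow map (continuous dynamics on a flow set) with a jump map (discrete resets on a jump set). The bouncing ball is the standard toy example; the simulation below is a generic illustration, not code from the article:

```python
import numpy as np

def bouncing_ball(h0=1.0, v0=0.0, g=9.81, restitution=0.8, t_end=5.0, dt=1e-3):
    """Flow: h' = v, v' = -g while h > 0 (continuous dynamics).
    Jump: v <- -restitution * v when the ball hits h = 0 moving down."""
    h, v, t, traj = h0, v0, 0.0, []
    while t < t_end:
        h, v, t = h + v * dt, v - g * dt, t + dt   # flow (explicit Euler step)
        if h <= 0.0 and v < 0.0:
            h, v = 0.0, -restitution * v           # jump (discrete reset)
        traj.append((t, h, v))
    return np.array(traj)
```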


Proceedings ArticleDOI
20 Jun 2009
TL;DR: It is shown that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks.
Abstract: In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.

1,752 citations
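
The key MIL ingredient is that training labels attach to bags of image patches rather than to individual patches, so a slightly misaligned tracker still yields a correctly labeled positive bag. The tracker scores a bag with a Noisy-OR model over its instances; a minimal sketch (the online boosting machinery of the full tracker is omitted):

```python
import numpy as np

def bag_probability(instance_probs):
    """Noisy-OR bag model: a bag is positive if at least one instance is positive.
    instance_probs: per-patch posteriors p(y=1 | patch) from the online classifier."""
    p = np.asarray(instance_probs)
    return 1.0 - np.prod(1.0 - p)

# A positive bag cropped loosely around the tracker state still scores high
# even if most of its patches are background:
print(bag_probability([0.05, 0.1, 0.9, 0.2]))  # ~0.93
```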


Proceedings Article
07 Dec 2009
TL;DR: It is proved that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which it is given a fast and provably convergent algorithm.
Abstract: Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis. However, its performance and applicability in real scenarios are limited by a lack of robustness to outlying or corrupted observations. This paper considers the idealized "robust principal component analysis" problem of recovering a low rank matrix A from corrupted observations D = A + E. Here, the corrupted entries E are unknown and the errors can be arbitrarily large (modeling grossly corrupted observations common in visual and bioinformatic data), but are assumed to be sparse. We prove that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which we give a fast and provably convergent algorithm. Our result holds even when the rank of A grows nearly proportionally (up to a logarithmic factor) to the dimensionality of the observation space and the number of errors E grows in proportion to the total number of entries in the matrix. A by-product of our analysis is the first proportional growth results for the related problem of completing a low-rank matrix from a small fraction of its entries. Simulations and real-data examples corroborate the theoretical results, and suggest potential applications in computer vision.

1,479 citations
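
The recovery problem is minimize ||A||_* + λ||E||_1 subject to D = A + E. The paper's own solver is an iterative thresholding scheme; the sketch below uses the closely related inexact augmented-Lagrangian iteration, alternating singular-value thresholding for A with soft shrinkage for E (the parameter choices are conventional, not taken from this paper):

```python
import numpy as np

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)   # soft threshold

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)       # dense SVD: toy scale only
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt      # singular-value threshold

def rpca(D, iters=100):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))                 # standard weight for the sparse term
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)  # conventional penalty parameter
    Y = np.zeros_like(D)                           # Lagrange multiplier
    E = np.zeros_like(D)
    for _ in range(iters):
        A = svt(D - E + Y / mu, 1.0 / mu)          # low-rank update
        E = shrink(D - A + Y / mu, lam / mu)       # sparse update
        Y = Y + mu * (D - A - E)                   # dual ascent on D = A + E
    return A, E
```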


Proceedings ArticleDOI
01 Sep 2009
TL;DR: Several models that aim at learning the correct weighting of different features from training data are studied, including multiple kernel learning and simple baseline methods, and ensemble methods inspired by Boosting are derived.
Abstract: A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In recent years, substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass settings. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, which are orders of magnitude faster than the learning techniques, are highly competitive with multiple kernel learning, and that the Boosting-type methods produce consistently better results in all experiments. We provide insight into when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.

898 citations
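
One of the "very simple baseline methods" reported as competitive with multiple kernel learning is uniform averaging of the per-feature kernels. A minimal sketch with a precomputed-kernel SVM (the sklearn usage and the C value are my choices):

```python
import numpy as np
from sklearn.svm import SVC

def average_kernel_svm(K_list, y):
    """K_list: per-feature-channel train kernel matrices (e.g. shape/color/texture).
    Uniform weighting: the baseline the paper finds hard to beat with learned weights."""
    K = sum(K_list) / len(K_list)
    clf = SVC(kernel="precomputed", C=10.0)  # C is an illustrative value
    clf.fit(K, y)
    return clf

# Prediction needs the averaged test-vs-train kernel:
# clf.predict(sum(K_test_list) / len(K_test_list))
```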


Proceedings ArticleDOI
01 Sep 2009
TL;DR: In this paper, a robust visual tracking method was proposed by casting tracking as a sparse approximation problem in a particle filter framework, where each target candidate is sparsely represented in the space spanned by target templates and trivial templates.
Abstract: In this paper we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, corruption and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target at a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an ℓ1-regularized least squares problem. Then the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework in which a particle filter is used for propagating sample distributions over time. Two additional components further improve the robustness of our approach: 1) the nonnegativity constraints that help filter out clutter that is similar to tracked targets in reversed intensity patterns, and 2) a dynamic template update scheme that keeps track of the most representative templates throughout the tracking procedure. We test the proposed approach on five challenging sequences involving heavy occlusions, drastic illumination changes, and large pose variations. The proposed approach shows excellent performance in comparison with three previously proposed trackers.

783 citations
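
The per-candidate score in this tracker comes from a sparse fit against target templates augmented with trivial (identity) templates that absorb occluded pixels. A sketch of that scoring step with a nonnegative Lasso as the ℓ1 solver (the solver choice and alpha are assumptions; d should be a small patch dimension, e.g. a 12x15 crop):

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_tracker_score(T, y, alpha=0.01):
    """T: (d, nT) target templates as columns; y: (d,) vectorized candidate patch.
    Returns the projection error onto the target templates (lower = better)."""
    d = T.shape[0]
    B = np.hstack([T, np.eye(d), -np.eye(d)])   # target + pos/neg trivial templates
    lasso = Lasso(alpha=alpha, positive=True,   # nonnegativity, as in the paper
                  fit_intercept=False, max_iter=10000)
    lasso.fit(B, y)                             # ell_1-regularized least squares
    a = lasso.coef_[:T.shape[1]]                # coefficients on target templates
    return np.linalg.norm(y - T @ a)
```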


Journal ArticleDOI
TL;DR: In this article, the authors study finite-time consensus tracking control for multi-robot systems with input disturbances, prove that finite-time consensus tracking can be achieved on the terminal sliding-mode surface, and show that the proposed error function can be modified to achieve relative state deviation between agents.
Abstract: This paper studies the finite-time consensus tracking control for multirobot systems. We prove that finite-time consensus tracking of multiagent systems can be achieved on the terminal sliding-mode surface. Also, we show that the proposed error function can be modified to achieve relative state deviation between agents. These results are then applied to the finite-time consensus tracking control of multirobot systems with input disturbances. Simulation results are presented to validate the analysis.

763 citations


Journal ArticleDOI
TL;DR: This paper addresses task allocation to coordinate a fleet of autonomous vehicles by presenting two decentralized algorithms: the consensus-based auction algorithm (CBAA) and its generalization to the multi-assignment problem, i.e., the consensus-based bundle algorithm (CBBA).
Abstract: This paper addresses task allocation to coordinate a fleet of autonomous vehicles by presenting two decentralized algorithms: the consensus-based auction algorithm (CBAA) and its generalization to the multi-assignment problem, i.e., the consensus-based bundle algorithm (CBBA). These algorithms utilize a market-based decision strategy as the mechanism for decentralized task selection and use a consensus routine based on local communication as the conflict resolution mechanism to achieve agreement on the winning bid values. Under reasonable assumptions on the scoring scheme, both of the proposed algorithms are proven to guarantee convergence to a conflict-free assignment, and it is shown that the converged solutions exhibit provable worst-case performance. It is also demonstrated that CBAA and CBBA produce conflict-free feasible solutions that are robust to both inconsistencies in the situational awareness across the fleet and variations in the communication network topology. Numerical experiments confirm superior convergence properties and performance when compared with existing auction-based task-allocation algorithms.
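
A minimal sketch of the single-assignment algorithm (CBAA): each agent alternates an auction phase (bid on the best task it can currently win) with a consensus phase (agree on winning bids by taking element-wise maxima over neighbors). Tie-breaking by agent id and the bundle generalization (CBBA) are omitted:

```python
import numpy as np

def cbaa(score, adjacency, rounds=50):
    """score[i, j]: agent i's private score for task j.
    adjacency[i, k] == 1 if agents i and k communicate."""
    n_agents, n_tasks = score.shape
    y = np.zeros((n_agents, n_tasks))   # each agent's local view of the winning bids
    x = np.full(n_agents, -1)           # task selected by each agent, -1 = none
    for _ in range(rounds):
        # Auction phase: unassigned agents bid on their best still-winnable task
        for i in range(n_agents):
            if x[i] == -1:
                winnable = np.flatnonzero(score[i] > y[i])
                if winnable.size:
                    j = winnable[np.argmax(score[i, winnable])]
                    x[i], y[i, j] = j, score[i, j]
        # Consensus phase: agree on winning bids via max over neighbors
        y_new = y.copy()
        for i in range(n_agents):
            for k in np.flatnonzero(adjacency[i]):
                y_new[i] = np.maximum(y_new[i], y[k])
        y = y_new
        # Release a task when a neighbor's higher bid won it
        for i in range(n_agents):
            if x[i] != -1 and y[i, x[i]] > score[i, x[i]]:
                x[i] = -1
    return x
```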

Journal ArticleDOI
TL;DR: This paper presents comprehensive coverage of different PSO applications in solving optimization problems in the area of electric power systems and highlights PSO's key features and advantages over various other optimization algorithms.
Abstract: Particle swarm optimization (PSO) has received increased attention in many research fields recently. This paper presents comprehensive coverage of different PSO applications in solving optimization problems in the area of electric power systems. It highlights PSO's key features and advantages over various other optimization algorithms. Furthermore, recent trends in PSO development in this area are explored. This paper also discusses possible future applications of PSO in the area of electric power systems, as well as potential theoretical studies.
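
For reference, the canonical global-best PSO update underlying the surveyed applications: each particle blends inertia, attraction to its personal best, and attraction to the swarm best. The parameter values below are common defaults, not taken from this survey:

```python
import numpy as np

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5), seed=None):
    """Minimize f: R^dim -> R with global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))            # particle positions
    v = np.zeros((n, dim))                       # particle velocities
    pbest = x.copy()                             # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()         # swarm (global) best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# Usage: pso(lambda z: np.sum(z**2), dim=3) converges near the origin.
```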

Proceedings ArticleDOI
01 Jan 2009
TL;DR: A novel approach for multi-person tracking-by-detection in a particle filtering framework that uses the continuous confidence of pedestrian detectors and online trained, instance-specific classifiers as a graded observation model; the approach relies only on information from the past and is suitable for online applications.
Abstract: We propose a novel approach for multi-person tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. A main contribution of this paper is the exploration of how these unreliable information sources can be used for multi-person tracking. The resulting algorithm robustly tracks a large number of dynamically moving persons in complex scenes with occlusions, does not rely on background modeling, and operates entirely in 2D (requiring no camera or ground plane calibration). Our Markovian approach relies only on information from the past and is suitable for online applications. We evaluate the performance on a variety of datasets and show that it improves upon state-of-the-art methods.
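
The backbone here is a standard particle filter whose observation model is the continuous detector confidence rather than thresholded detections. A generic single-target sketch of one filter step (the paper's multi-person data association is omitted; the motion model and resampling threshold are conventional choices):

```python
import numpy as np

def particle_filter_step(particles, weights, motion_std, confidence, seed=None):
    """particles: (N, 2) image positions; weights: (N,) normalized;
    confidence(p) -> detector score at position p (the graded observation)."""
    rng = np.random.default_rng(seed)
    n = len(particles)
    # Predict: diffuse particles with a simple random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by the continuous detector confidence
    weights = weights * np.array([confidence(p) for p in particles]) + 1e-12
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = rng.choice(n, n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```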

Journal ArticleDOI
01 Jun 2009
TL;DR: By theoretical analysis, it is proved that the consensus error can be made as small as desired, and the proposed method is extended to two cases: agents that form a prescribed formation, and agents with higher-order dynamics.
Abstract: A robust adaptive control approach is proposed to solve the consensus problem of multiagent systems. Compared with the previous work, the agent's dynamics includes the uncertainties and external disturbances, which is more practical in real-world applications. Due to the approximation capability of neural networks, the uncertain dynamics is compensated by the adaptive neural network scheme. The effects of the approximation error and external disturbances are counteracted by employing the robustness signal. The proposed algorithm is decentralized because the controller for each agent only utilizes the information of its neighbor agents. By the theoretical analysis, it is proved that the consensus error can be made as small as desired. The proposed method is then extended to two cases: agents form a prescribed formation, and agents have higher-order dynamics. Finally, simulation examples are given to demonstrate the satisfactory performance of the proposed method.

Journal ArticleDOI
TL;DR: In this article, a nonlinear robust adaptive controller for a flexible air-breathing hypersonic vehicle model is proposed, where a combination of nonlinear sequential loop closure and adaptive dynamic inversion is adopted for the design of a dynamic state-feedback controller that provides stable tracking of the velocity and altitude reference trajectories and imposes a desired set point for the angle of attack.
Abstract: This paper describes the design of a nonlinear robust adaptive controller for a flexible air-breathing hypersonic vehicle model. Because of the complexity of a first-principle model of the vehicle dynamics, a control-oriented model is adopted for design and stability analysis. This simplified model retains the dominant features of the higher-fidelity model, including the nonminimum phase behavior of the flight-path angle dynamics, the flexibility effects, and the strong coupling between the engine and flight dynamics. A combination of nonlinear sequential loop closure and adaptive dynamic inversion is adopted for the design of a dynamic state-feedback controller that provides stable tracking of the velocity and altitude reference trajectories and imposes a desired set point for the angle of attack. A complete characterization of the internal dynamics of the model is derived for a Lyapunov-based stability analysis of the closed-loop system, which includes the structural dynamics. The proposed methodology addresses the issue of stability robustness with respect to both parametric model uncertainty, which naturally arises when adopting reduced-complexity models for control design, and dynamic perturbations due to the flexible dynamics. Simulation results from the full nonlinear model show the effectiveness of the controller.

01 Jan 2009
TL;DR: It is shown that the robustness of SVMs for biomarker discovery can be substantially increased by using ensemble feature selection techniques, while at the same time improving upon classification performances.
Abstract: Motivation: Biomarker discovery is an important topic in biomedical applications of computational biology, including applications such as gene and SNP selection from high dimensional data. Surprisingly, the stability with respect to sampling variation or robustness of such selection processes has received attention only recently. However, robustness of biomarkers is an important issue, as it may greatly influence subsequent biological validations. In addition, a more robust set of markers may strengthen the confidence of an expert in the results of a selection method. Results: Our first contribution is a general framework for the analysis of the robustness of a biomarker selection algorithm. Secondly, we conducted a large-scale analysis of the recently introduced concept of ensemble feature selection, where multiple feature selections are combined in order to increase the robustness of the final set of selected features. We focus on selection methods that are embedded in the estimation of support vector machines (SVMs). SVMs are powerful classification models that have shown state-of-the-art performance on several diagnosis and prognosis tasks on biological data. Their feature selection extensions also offered good results for gene selection tasks. We show that the robustness of SVMs for biomarker discovery can be substantially increased by using ensemble feature selection techniques, while at the same time improving upon classification performances. The proposed methodology is evaluated on four microarray data sets showing increases of up to almost 30% in robustness of the selected biomarkers, along with an improvement of about 15% in classification performance. The stability improvement with ensemble methods is particularly noticeable for small signature sizes (a few tens of genes), which is most relevant for the design of a diagnosis or prognosis model from a gene signature.
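
Ensemble feature selection in the sense used here: run a base selector (e.g., a linear-SVM weight ranking) on many bootstrap resamples and aggregate the rankings, trading computation for stability. A schematic binary-classification version; the aggregation rule and hyperparameters are my assumptions, not the paper's exact protocol:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.utils import resample

def ensemble_feature_ranking(X, y, n_rounds=40, seed=0):
    """Aggregate |weight| importances of linear SVMs fit on bootstrap samples.
    X: (n_samples, n_features); y: binary labels. Returns features ranked
    from most to least stably important."""
    scores = np.zeros(X.shape[1])
    for r in range(n_rounds):
        Xb, yb = resample(X, y, random_state=seed + r)   # bootstrap resample
        w = LinearSVC(C=1.0, dual=False).fit(Xb, yb).coef_.ravel()
        scores += np.abs(w) / np.abs(w).sum()            # normalized importance
    return np.argsort(-scores)
```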

Journal ArticleDOI
TL;DR: This paper designs a control law that enables the dynamic model to track a simpler kinematic model with a globally bounded error and builds a robust temporal logic specification that takes into account the tracking errors of the first step.

Journal ArticleDOI
TL;DR: A neural-network-based terminal sliding-mode control (SMC) scheme is proposed for robotic manipulators including actuator dynamics that alleviates some main drawbacks of the linear SMC while maintaining its robustness to uncertainties.
Abstract: A neural-network-based terminal sliding-mode control (SMC) scheme is proposed for robotic manipulators including actuator dynamics. The proposed terminal SMC (TSMC) alleviates some main drawbacks of the linear SMC (such as the contradiction between control effort in the transient and tracking error in the steady state) while maintaining its robustness to uncertainties. Moreover, an indirect method is developed to avoid the singularity problem of the initial TSMC. In the proposed control scheme, a radial basis function neural network (NN) is adopted to approximate the nonlinear dynamics of the robotic manipulator. Meanwhile, a robust control term is added to suppress the modeling error and the estimation error of the NN. Finite-time convergence and stability of the closed-loop system are guaranteed by Lyapunov theory. Finally, the proposed control scheme is applied to a robotic manipulator. Experimental results confirm the validity of the proposed control scheme by comparing it with other control strategies.

Book ChapterDOI
24 Jul 2009
TL;DR: This work proposes an improved variant of the original duality-based TV-L1 optical flow algorithm that preserves discontinuities in the flow field by employing total variation (TV) regularization, and integrates a median filter into the numerical scheme to further increase robustness to sampling artefacts in the image data.
Abstract: A look at the Middlebury optical flow benchmark [5] reveals that nowadays variational methods yield the most accurate optical flow fields between two image frames. In this work we propose an improved variant of the original duality based TV-L1 optical flow algorithm in [31] and provide implementation details. This formulation can preserve discontinuities in the flow field by employing total variation (TV) regularization. Furthermore, it offers robustness against outliers by applying the robust L1 norm in the data fidelity term. Our contributions are as follows. First, we propose to perform a structure-texture decomposition of the input images to get rid of violations in the optical flow constraint due to illumination changes. Second, we propose to integrate a median filter into the numerical scheme to further increase the robustness to sampling artefacts in the image data. We experimentally show that very precise and robust estimation of optical flow can be achieved with a variational approach in real-time. The numerical scheme and the implementation are described in a detailed way, which enables reimplementation of this high-end method.
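
Two of the robustness ingredients above are easy to isolate: a structure-texture decomposition of the inputs (to suppress illumination-induced violations of the optical flow constraint) and median filtering of the flow field between warping iterations. The sketch below uses a Gaussian blur as a crude stand-in for the ROF denoising the paper uses for the decomposition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def structure_texture(img, sigma=2.0, alpha=0.95):
    """Rough structure-texture split: structure = smoothed image,
    texture = image minus a fraction of the structure part.
    (The paper uses ROF denoising; Gaussian smoothing is a stand-in.)"""
    structure = gaussian_filter(img, sigma)
    texture = img - alpha * structure
    return structure, texture

def median_filter_flow(u, v, size=5):
    """Median-filter each flow component between warping iterations,
    the robustness step this paper adds to the numerical scheme."""
    return median_filter(u, size=size), median_filter(v, size=size)
```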

Journal ArticleDOI
TL;DR: A direct transcription method is presented that reduces finding the globally optimal trajectory to solving a second-order cone program using robust numerical algorithms that are freely available.
Abstract: This paper focuses on time-optimal path tracking, a subproblem in time-optimal motion planning of robot systems. Through a nonlinear change of variables, the time-optimal path tracking problem is transformed here into a convex optimal control problem with a single state. Various convexity-preserving extensions are introduced, resulting in a versatile approach for optimal path tracking. A direct transcription method is presented that reduces finding the globally optimal trajectory to solving a second-order cone program using robust numerical algorithms that are freely available. Validation against known examples and application to a more complex example illustrate the versatility and practicality of the new method.
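
After the change of variables, the decision variables are the squared path speed b(s) and the path acceleration a(s) with db/ds = 2a, and travel time is a convex integral of b^{-1/2}. A toy discretization for a unit point mass with a force limit, modeled with cvxpy; the modeling tool, grid, and constraint set are my choices, not the paper's transcription:

```python
import numpy as np
import cvxpy as cp

def min_time_profile(ds, f_max, mass=1.0, b0=0.0, bT=0.0):
    """Toy time-optimal tracking of a fixed path for a point mass.
    ds: (K,) arc-length steps; b[k] = s_dot^2 at gridpoint k."""
    K = len(ds)
    b = cp.Variable(K + 1, nonneg=True)     # squared path speed
    a = cp.Variable(K)                      # path acceleration per interval
    cons = [b[0] == b0, b[K] == bT]
    for k in range(K):
        cons.append(b[k + 1] - b[k] == 2 * a[k] * ds[k])   # db/ds = 2a
        cons.append(cp.abs(mass * a[k]) <= f_max)          # actuator limit
    # Travel time ~ sum ds / sqrt(b_mid): convex in b, hence globally solvable
    b_mid = (b[:-1] + b[1:]) / 2 + 1e-9
    time = cp.sum(cp.multiply(ds, cp.power(b_mid, -0.5)))
    cp.Problem(cp.Minimize(time), cons).solve()
    return b.value, a.value

# Example: 1 m path in 50 segments, 2 N force limit, rest to rest:
# b, a = min_time_profile(np.full(50, 0.02), f_max=2.0)
```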

Posted Content
TL;DR: In this paper, a general theory for a variant of the error correcting output code scheme, using ideas from compressed sensing for exploiting output sparsity, was developed, which can be regarded as a simple reduction from multi-label regression problems to binary regression problems.
Abstract: We consider multi-label prediction problems with large output spaces under the assumption of output sparsity -- that the target (label) vectors have small support. We develop a general theory for a variant of the popular error correcting output code scheme, using ideas from compressed sensing for exploiting this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and also provide a more detailed analysis for the linear prediction setting.
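
The reduction: compress the L-dimensional sparse label vector with a random m x L sensing matrix (m only logarithmic in L), learn m ordinary regressors, and decode predictions with any sparse-recovery routine. A sketch with ridge regressors and orthogonal matching pursuit; these concrete choices are mine, the paper's guarantees cover general regressors and decoders:

```python
import numpy as np
from sklearn.linear_model import Ridge, OrthogonalMatchingPursuit

def cs_multilabel_train(X, Y, m, seed=0):
    """Y: (n_samples, L) sparse 0/1 label matrix; project labels to m << L dims."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, Y.shape[1])) / np.sqrt(m)  # random sensing matrix
    Z = Y @ A.T                                            # compressed targets
    reg = Ridge(alpha=1.0).fit(X, Z)                       # m real-valued regressions
    return A, reg

def cs_multilabel_predict(A, reg, x, k):
    """Recover the k most likely labels by sparse decoding of the prediction."""
    z = reg.predict(x.reshape(1, -1)).ravel()
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
    omp.fit(A, z)                                          # solve z ~ A y, y k-sparse
    return np.flatnonzero(omp.coef_)
```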

Journal ArticleDOI
27 Jul 2009
TL;DR: Video SnapCut is presented, a robust video object cutout system that significantly advances the state-of-the-art in segmentation and is completed with a novel coherent video matting technique.
Abstract: Although tremendous success has been achieved for interactive object cutout in still images, accurately extracting dynamic objects in video remains a very challenging problem. Previous video cutout systems present two major limitations: (1) reliance on global statistics, thus lacking the ability to deal with complex and diverse scenes; and (2) treating segmentation as a global optimization, thus lacking a practical workflow that can guarantee the convergence of the systems to the desired results. We present Video SnapCut, a robust video object cutout system that significantly advances the state-of-the-art. In our system segmentation is achieved by the collaboration of a set of local classifiers, each adaptively integrating multiple local image features. We show how this segmentation paradigm naturally supports local user edits and propagates them across time. The object cutout system is completed with a novel coherent video matting technique. A comprehensive evaluation and comparison is presented, demonstrating the effectiveness of the proposed system at achieving high quality results, as well as the robustness of the system against various types of inputs.

Journal ArticleDOI
TL;DR: This paper presents a generic and patient-specific classification system designed for robust and accurate detection of ECG heartbeat patterns that can adapt to significant interpatient variations in ECG patterns by training the optimal network structure, and achieves higher accuracy over larger datasets.
Abstract: This paper presents a generic and patient-specific classification system designed for robust and accurate detection of ECG heartbeat patterns. The proposed feature extraction process utilizes morphological wavelet transform features, which are projected onto a lower dimensional feature space using principal component analysis, and temporal features from the ECG data. For the pattern recognition unit, feedforward and fully connected artificial neural networks, which are optimally designed for each patient by the proposed multidimensional particle swarm optimization technique, are employed. By using relatively small common and patient-specific training data, the proposed classification system can adapt to significant interpatient variations in ECG patterns by training the optimal network structure, and thus, achieves higher accuracy over larger datasets. The classification experiments over a benchmark database demonstrate that the proposed system achieves average accuracies and sensitivities better than most of the current state-of-the-art algorithms for detection of ventricular ectopic beats (VEBs) and supra-VEBs (SVEBs). Over the entire database, the average accuracy-sensitivity performances of the proposed system for VEB and SVEB detections are 98.3%-84.6% and 97.4%-63.5%, respectively. Finally, due to its parameter-invariant nature, the proposed system is highly generic, and thus, applicable to any ECG dataset.

Journal Article
TL;DR: This work considers regularized support vector machines and shows that they are precisely equivalent to a new robust optimization formulation, thus establishing robustness as the reason regularized SVMs generalize well, and gives a new proof of consistency of (kernelized) SVMs.
Abstract: We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like algorithms for classification that explicitly build in protection to noise, and at the same time control overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
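
Schematically, the equivalence says that hinge-loss minimization under a worst-case, norm-coupled perturbation of the training points is the same problem as norm-regularized SVM training. The display below is a simplified statement with my notation; the paper's uncertainty sets and the pairing of norm with dual norm are stated there in more generality:

```latex
\min_{w,b}\; \max_{\sum_i \|\delta_i\|^* \le c}\; \sum_i \max\bigl(1 - y_i(\langle w,\, x_i - \delta_i\rangle + b),\, 0\bigr)
\;=\;
\min_{w,b}\; c\,\|w\| \;+\; \sum_i \max\bigl(1 - y_i(\langle w,\, x_i\rangle + b),\, 0\bigr)
```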

Journal ArticleDOI
TL;DR: A Hybrid Big Bang-Big Crunch (HBB-BC) optimization algorithm is employed for optimal design of truss structures and numerical results demonstrate the efficiency and robustness of the H BB-BC method compared to other heuristic algorithms.

Journal ArticleDOI
07 Dec 2009
TL;DR: Empirical evaluations show that AROW achieves state-of-the-art performance on a wide range of binary and multiclass tasks, as well as robustness in the face of non-separable data.
Abstract: We present AROW, an online learning algorithm for binary and multiclass problems that combines large margin training, confidence weighting, and the capacity to handle non-separable data. AROW performs adaptive regularization of the prediction function upon seeing each new instance, allowing it to perform especially well in the presence of label noise. We derive mistake bounds for the binary and multiclass settings that are similar in form to the second order perceptron bound. Our bounds do not assume separability. We also relate our algorithm to recent confidence-weighted online learning techniques. Empirical evaluations show that AROW achieves state-of-the-art performance on a wide range of binary and multiclass tasks, as well as robustness in the face of non-separable data.
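
AROW maintains a Gaussian distribution over weight vectors and, on each margin violation, moves the mean by a hinge-scaled step and shrinks the covariance along the example's direction. A dense-covariance sketch following the published update equations (practical AROW often keeps only a diagonal covariance for high-dimensional data):

```python
import numpy as np

class AROW:
    def __init__(self, dim, r=1.0):
        self.mu = np.zeros(dim)       # mean of the weight distribution
        self.Sigma = np.eye(dim)      # confidence (covariance) over weights
        self.r = r                    # regularization parameter

    def update(self, x, y):           # y in {-1, +1}
        margin = y * (self.mu @ x)
        if margin >= 1.0:             # no hinge loss: no update
            return
        Sx = self.Sigma @ x
        v = x @ Sx                    # current variance along x
        beta = 1.0 / (v + self.r)
        alpha = (1.0 - margin) * beta # hinge-loss-scaled step size
        self.mu += alpha * y * Sx     # adaptive, confidence-weighted step
        self.Sigma -= beta * np.outer(Sx, Sx)  # grow confidence along x

    def predict(self, x):
        return np.sign(self.mu @ x)
```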

Journal ArticleDOI
TL;DR: A new numerical algorithm for solving the symmetric eigenvalue problem is presented, which takes its inspiration from the contour integration and density matrix representation in quantum mechanics.
Abstract: A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm---named FEAST---exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
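
The contour-integration idea: an approximate spectral projector onto the eigenspace inside a circle is obtained by quadrature of the resolvent, and subspace iteration with this filter plus Rayleigh-Ritz yields the eigenpairs in the search interval. A dense toy version for real symmetric A; FEAST itself uses Gauss quadrature and sparse shifted solves, so the midpoint rule and parameters here are simplifications:

```python
import numpy as np

def feast_sketch(A, center, radius, m0, n_quad=8, iters=3, seed=None):
    """Approximate eigenpairs of symmetric A inside [center - radius, center + radius].
    m0: subspace size (should exceed the eigenvalue count in the interval)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = rng.standard_normal((n, m0))
    # Midpoint quadrature nodes on the upper half circle (A real symmetric,
    # so the lower half contributes the complex conjugate)
    theta = np.pi * (np.arange(n_quad) + 0.5) / n_quad
    for _ in range(iters):
        Q = np.zeros((n, m0))
        for t in theta:
            z = center + radius * np.exp(1j * t)
            W = np.linalg.solve(z * np.eye(n) - A, Y)   # resolvent applied to Y
            Q += (radius / n_quad) * np.real(np.exp(1j * t) * W)
        Qo, _ = np.linalg.qr(Q)          # basis of the filtered subspace
        Ar = Qo.T @ A @ Qo               # Rayleigh-Ritz projection
        evals, V = np.linalg.eigh(Ar)
        Y = Qo @ V
    # Ritz values outside the circle are spurious and should be discarded
    return evals, Y
```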

Journal ArticleDOI
TL;DR: An improved region-based active contour model in a variational level set formulation that has been applied to brain MR image segmentation with desirable results; it is presented as a two-phase level set formulation and then extended to a multi-phase formulation.

Journal ArticleDOI
TL;DR: A planar homographic occupancy constraint is developed that fuses foreground likelihood information from multiple views, to resolve occlusions and localize people on a reference scene plane in the framework of plane to plane homologies.
Abstract: Occlusion and lack of visibility in crowded and cluttered scenes make it difficult to track individual people correctly and consistently, particularly in a single view. We present a multi-view approach to solving this problem. In our approach we neither detect nor track objects from any single camera or camera pair; rather, evidence is gathered from all the cameras into a synergistic framework, and detection and tracking results are propagated back to each view. Unlike other multi-view approaches that require fully calibrated views, our approach is purely image-based and uses only 2D constructs. To this end we develop a planar homographic occupancy constraint that fuses foreground likelihood information from multiple views, to resolve occlusions and localize people on a reference scene plane. For greater robustness, this process is extended to multiple planes parallel to the reference plane in the framework of plane to plane homologies. Our fusion methodology also models scene clutter using the Schmieder and Weathersby clutter measure, which acts as a confidence prior, to assign higher fusion weight to views with lesser clutter. Detection and tracking are performed simultaneously by graph cuts segmentation of tracks in the space-time occupancy likelihood data. Experimental results, with detailed qualitative and quantitative analysis, are demonstrated in challenging multi-view, crowded scenes.
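
The core fusion step of the homographic occupancy constraint can be sketched directly: warp each camera's foreground-likelihood map into the reference plane with its plane-induced homography and combine multiplicatively, so only locations grounded on the plane in all views stay likely. The OpenCV usage is my choice, and the paper's clutter-based confidence weighting is omitted:

```python
import numpy as np
import cv2

def fuse_plane_occupancy(fg_likelihoods, homographies, ref_hw):
    """fg_likelihoods: per-view float32 foreground-likelihood images;
    homographies: 3x3 view-to-reference-plane homographies; ref_hw: (h, w)."""
    h, w = ref_hw
    fused = np.ones((h, w), np.float32)
    for L_img, H in zip(fg_likelihoods, homographies):
        warped = cv2.warpPerspective(L_img, H, (w, h))  # into the reference plane
        fused *= warped                                  # consensus across views
    return fused
```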