scispace - formally typeset

Showing papers on "Robustness (computer science)" published in 2012


Proceedings ArticleDOI
16 Jun 2012
TL;DR: This work proposes a novel keypoint descriptor inspired by the human visual system and more precisely the retina, coined Fast Retina Keypoint (FREAK), which is in general faster to compute with lower memory load and also more robust than SIFT, SURF or BRISK.
Abstract: A large number of vision applications rely on matching keypoints across images. The last decade featured an arms race towards faster and more robust keypoints and association algorithms: Scale Invariant Feature Transform (SIFT) [17], Speeded-Up Robust Features (SURF) [4], and more recently Binary Robust Invariant Scalable Keypoints (BRISK) [16], to name a few. These days, the deployment of vision algorithms on smart phones and embedded devices with limited memory and computational power has even upped the ante: the goal is to make descriptors faster to compute and more compact while remaining robust to scale, rotation, and noise. To best address these requirements, we propose a novel keypoint descriptor inspired by the human visual system, and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. Our experiments show that FREAKs are in general faster to compute, with lower memory load, and more robust than SIFT, SURF, or BRISK. They are thus competitive alternatives to existing keypoints, in particular for embedded applications.
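The cascaded binary comparison at the heart of such descriptors can be sketched in a few lines: bits record pairwise intensity comparisons over a sampling pattern, and matching reduces to Hamming distance (one XOR plus a popcount). The 8-point pattern, intensity values, and all-pairs comparison list below are toy stand-ins, not FREAK's retinal layout.

```python
from itertools import combinations

def binary_descriptor(intensities, pairs):
    """Bit i is 1 iff intensity at point a exceeds intensity at point b."""
    bits = 0
    for i, (a, b) in enumerate(pairs):
        if intensities[a] > intensities[b]:
            bits |= 1 << i
    return bits

def hamming(d1, d2):
    """Descriptor distance: one XOR plus a population count."""
    return bin(d1 ^ d2).count("1")

pairs = list(combinations(range(8), 2))      # 28 comparison pairs
patch_a = [10, 12, 9, 30, 25, 11, 8, 40]     # smoothed intensities
patch_b = [11, 12, 10, 28, 26, 11, 9, 41]    # same patch, mild noise
patch_c = [40, 8, 30, 9, 12, 25, 11, 10]     # unrelated patch

da, db, dc = (binary_descriptor(p, pairs) for p in (patch_a, patch_b, patch_c))
print(hamming(da, db), hamming(da, dc))      # noisy match beats mismatch
```

Because only orderings of intensities matter, mild additive noise leaves most bits unchanged, which is what makes such descriptors cheap yet stable.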

1,876 citations


Book ChapterDOI
07 Oct 2012
TL;DR: A simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from the multi-scale image feature space with data-independent basis that performs favorably against state-of-the-art algorithms on challenging sequences in terms of efficiency, accuracy and robustness.
Abstract: It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. While much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problems. As a result of self-taught learning, these mis-aligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from the multi-scale image feature space with data-independent basis. Our appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is adopted to efficiently extract the features for the appearance model. We compress samples of foreground targets and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art algorithms on challenging sequences in terms of efficiency, accuracy and robustness.
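The data-independent projection the abstract relies on can be illustrated with a very sparse random measurement matrix in the Achlioptas/Li style, entries in {+√s, 0, −√s}, which approximately preserves distances between feature vectors. The dimensions, sparsity level s, and noise scale below are illustrative choices, not the paper's settings.

```python
import math, random

def sparse_measurement_matrix(m, n, s=3, seed=0):
    """R[i][j] = +sqrt(s) or -sqrt(s) with prob 1/(2s) each, else 0."""
    rng = random.Random(seed)
    R = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            u = rng.random()
            if u < 1.0 / (2 * s):
                R[i][j] = math.sqrt(s)
            elif u < 1.0 / s:
                R[i][j] = -math.sqrt(s)
    return R

def project(R, x):
    """Compressed feature v = R x; only nonzero entries contribute."""
    return [sum(rij * xj for rij, xj in zip(row, x) if rij) for row in R]

n, m = 200, 20                                  # high-dim -> compressed dim
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(n)]
y = [xi + rng.gauss(0, 0.05) for xi in x]       # near-duplicate sample
R = sparse_measurement_matrix(m, n)
orig = math.dist(x, y)
comp = math.dist(project(R, x), project(R, y)) / math.sqrt(m)
print(round(comp / orig, 2))  # ratio roughly near 1: distances preserved
```

Since most entries of R are zero, the projection touches only a fraction of the coordinates, which is why feature extraction in the compressed domain is cheap enough for real-time tracking.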

1,538 citations


Book
18 Mar 2012
TL;DR: This book presents a complete theory of robust asymptotic stability for hybrid dynamical systems that is applicable to the design of hybrid control algorithms--algorithms that feature logic, timers, or combinations of digital and analog components.
Abstract: Hybrid dynamical systems exhibit continuous and instantaneous changes, having features of continuous-time and discrete-time dynamical systems. Filled with a wealth of examples to illustrate concepts, this book presents a complete theory of robust asymptotic stability for hybrid dynamical systems that is applicable to the design of hybrid control algorithms--algorithms that feature logic, timers, or combinations of digital and analog components. With the tools of modern mathematical analysis, Hybrid Dynamical Systems unifies and generalizes earlier developments in continuous-time and discrete-time nonlinear systems. It presents hybrid system versions of the necessary and sufficient Lyapunov conditions for asymptotic stability, invariance principles, and approximation techniques, and examines the robustness of asymptotic stability, motivated by the goal of designing robust hybrid control algorithms. This self-contained and classroom-tested book requires standard background in mathematical analysis and differential equations or nonlinear systems. It will interest graduate students in engineering as well as students and researchers in control, computer science, and mathematics.

1,162 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: A robust appearance model that exploits both holistic templates and local representations is proposed and the update scheme considers both the latest observations and the original template, thereby enabling the tracker to deal with appearance change effectively and alleviate the drift problem.
Abstract: In this paper we propose a robust object tracking algorithm using a collaborative model. As the main challenge for object tracking is to account for drastic appearance change, we propose a robust appearance model that exploits both holistic templates and local representations. We develop a sparsity-based discriminative classifier (SDC) and a sparsity-based generative model (SGM). In the SDC module, we introduce an effective method to compute the confidence value that assigns more weight to the foreground than to the background. In the SGM module, we propose a novel histogram-based method that takes the spatial information of each patch into consideration with an occlusion handling scheme. Furthermore, the update scheme considers both the latest observations and the original template, thereby enabling the tracker to deal with appearance change effectively and alleviate the drift problem. Numerous experiments on various challenging videos demonstrate that the proposed tracker performs favorably against several state-of-the-art algorithms.

1,069 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper proposes an L1 tracker that not only runs in real time but also enjoys better robustness than other L1 trackers; a very fast numerical solver is developed to solve the resulting ℓ1-norm related minimization problem with guaranteed quadratic convergence.
Abstract: Recently, sparse representation has been applied to visual tracking by modeling the target appearance using a sparse approximation over a template set, which leads to the so-called L1 trackers, as they need to solve an ℓ1-norm related minimization problem many times. While these L1 trackers have shown impressive tracking accuracy, they are very computationally demanding, and the speed bottleneck is the solver for the ℓ1-norm minimizations. This paper aims at developing an L1 tracker that not only runs in real time but also enjoys better robustness than other L1 trackers. In our proposed L1 tracker, a new ℓ1-norm related minimization model is proposed to improve the tracking accuracy by adding an ℓ1-norm regularization on the coefficients associated with the trivial templates. Moreover, based on the accelerated proximal gradient approach, a very fast numerical solver is developed to solve the resulting ℓ1-norm related minimization problem with guaranteed quadratic convergence. The great running-time efficiency and tracking accuracy of the proposed tracker are validated with a comprehensive evaluation involving eight challenging sequences and five alternative state-of-the-art trackers.
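The inner problem every L1 tracker must solve is an ℓ1-regularized least-squares fit of the observation against the template set. The sketch below uses a plain ISTA loop built on the soft-thresholding (proximal) operator; the paper's solver is an accelerated proximal gradient method with faster convergence, and the template matrix and observation here are toy values.

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(T, y, lam=0.1, step=None, iters=500):
    """Minimize 0.5*||y - T c||^2 + lam*||c||_1 over the code c."""
    m, n = len(T), len(T[0])
    if step is None:  # 1 / (crude upper bound on the Lipschitz constant)
        step = 1.0 / sum(T[i][j] ** 2 for i in range(m) for j in range(n))
    c = [0.0] * n
    for _ in range(iters):
        r = [sum(T[i][j] * c[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(T[i][j] * r[i] for i in range(m)) for j in range(n)]
        c = [soft(c[j] - step * g[j], step * lam) for j in range(n)]
    return c

# Observation built mostly from template 0 (columns of T are templates);
# the sparse code should put its weight there and zero out the rest.
T = [[1.0, 0.0, 0.3],
     [1.0, 0.0, 0.3],
     [0.0, 1.0, 0.3],
     [0.0, 0.0, 0.3]]
y = [1.0, 1.0, 0.05, 0.0]
c = ista(T, y)
print([round(v, 2) for v in c])
```

The ℓ1 penalty zeroes the coefficients of irrelevant templates exactly, which is the sparsity the trackers exploit; accelerating this loop is precisely where the paper's speedup comes from.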

931 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a robust optimization approach to accommodate wind output uncertainty, with the objective of providing a robust unit commitment schedule for the thermal generators in the day-ahead market that minimizes the total cost under the worst wind power output scenario.
Abstract: As renewable energy increasingly penetrates power grid systems, new challenges arise for system operators to keep the systems reliable under uncertain circumstances while ensuring high utilization of renewable energy. With naturally intermittent renewable energy, such as wind energy, playing ever more important roles, system robustness becomes a must. In this paper, we propose a robust optimization approach to accommodate wind output uncertainty, with the objective of providing a robust unit commitment schedule for the thermal generators in the day-ahead market that minimizes the total cost under the worst wind power output scenario. Robust optimization models the randomness using an uncertainty set that includes the worst-case scenario, and protects against this scenario at the minimal increase in cost. In our approach, the power system will be more reliable because the worst-case scenario has been considered. In addition, we introduce a variable to control the conservatism of our model, by which we can avoid over-protection. By considering pumped-storage units, the total cost is reduced significantly.
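The worst-case logic can be miniaturized: pick the commitment (here reduced to a single thermal capacity level) that minimizes cost under the most damaging wind realization in a budgeted uncertainty set. The two-part cost model, the budget of one "bad" hour, and all numbers below are invented for illustration only.

```python
import itertools

def worst_case_cost(cap, demand, wind, dev=10, budget=1,
                    capex=2.0, penalty=50.0):
    """Max total cost over wind scenarios in a budgeted uncertainty
    set: the adversary lowers wind by `dev` in at most `budget` hours."""
    scenarios = itertools.chain.from_iterable(
        itertools.combinations(range(len(demand)), b)
        for b in range(budget + 1))
    worst = 0.0
    for hit in scenarios:
        short = sum(max(0.0, d - (w - (dev if t in hit else 0)) - cap)
                    for t, (d, w) in enumerate(zip(demand, wind)))
        worst = max(worst, capex * cap * len(demand) + penalty * short)
    return worst

demand, wind = [100, 120, 110], [30, 40, 35]     # hourly MW (toy values)
robust = min([70, 80, 90], key=lambda c: worst_case_cost(c, demand, wind))
print(robust, worst_case_cost(robust, demand, wind))
```

Shrinking `budget` toward 0 recovers the nominal (non-robust) schedule, while enlarging it makes the choice more conservative, which is the conservatism control the abstract mentions.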

885 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: Experimental results show that MTT methods consistently outperform state-of-the-art trackers, and mining the interdependencies between particles improves tracking performance and reduces overall computational complexity.
Abstract: In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.
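The joint-sparsity effect of the ℓp,1 mixed norms is easy to see numerically: summing a row-wise ℓp norm over the coefficient matrix (rows = dictionary templates, columns = particles) penalizes a matrix whose nonzeros are scattered across many rows more than one whose particles share the same few templates, even when the entry magnitudes are identical. The matrices below are toy examples.

```python
import math

def mixed_norm(C, p):
    """l_{p,1} mixed norm: sum over rows of each row's l_p norm."""
    total = 0.0
    for row in C:
        if p == math.inf:
            total += max(abs(v) for v in row)
        else:
            total += sum(abs(v) ** p for v in row) ** (1.0 / p)
    return total

# Both matrices contain the same six magnitudes, arranged jointly
# (all particles use templates 0 and 2) vs. scattered over all rows.
joint = [[0.9, 0.8, 0.7],
         [0.0, 0.0, 0.0],
         [0.4, 0.5, 0.3]]
scattered = [[0.9, 0.0, 0.0],
             [0.0, 0.8, 0.5],
             [0.4, 0.7, 0.3]]

for p in (2, math.inf):
    print(p, round(mixed_norm(joint, p), 3), round(mixed_norm(scattered, p), 3))
```

Minimizing such a norm therefore drives entire rows to zero, i.e. all particles jointly select the same small template subset, which is the regularization MTT exploits.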

709 citations


Journal ArticleDOI
TL;DR: This work proposes a conceptually simple face recognition system that achieves a high degree of robustness and stability to illumination variation, image misalignment, and partial occlusion, and demonstrates how to capture a set of training images with enough illumination variation that they span test images taken under uncontrolled illumination.
Abstract: Many classic and contemporary face recognition algorithms work well on public data sets, but degrade sharply when they are used in a real recognition system. This is mostly due to the difficulty of simultaneously handling variations in illumination, image misalignment, and occlusion in the test image. We consider a scenario where the training images are well controlled and test images are only loosely controlled. We propose a conceptually simple face recognition system that achieves a high degree of robustness and stability to illumination variation, image misalignment, and partial occlusion. The system uses tools from sparse representation to align a test face image to a set of frontal training images. The region of attraction of our alignment algorithm is computed empirically for public face data sets such as Multi-PIE. We demonstrate how to capture a set of training images with enough illumination variation that they span test images taken under uncontrolled illumination. In order to evaluate how our algorithms work under practical testing conditions, we have implemented a complete face recognition system, including a projector-based training acquisition system. Our system can efficiently and effectively recognize faces under a variety of realistic conditions, using only frontal images under the proposed illuminations as training.

669 citations


Journal ArticleDOI
TL;DR: The proposed IQA scheme is designed to follow the masking effect and visibility threshold more closely, i.e., the case when both masked and masking signals are small is more effectively tackled by the proposed scheme.
Abstract: In this paper, we propose a new image quality assessment (IQA) scheme, with emphasis on gradient similarity. Gradients convey important visual information and are crucial to scene understanding. Using such information, structural and contrast changes can be effectively captured. Therefore, we use the gradient similarity to measure the change in contrast and structure in images. Apart from the structural/contrast changes, image quality is also affected by luminance changes, which must also be accounted for to achieve complete and more robust IQA. Hence, the proposed scheme considers both luminance and contrast-structural changes to effectively assess image quality. Furthermore, the proposed scheme is designed to follow the masking effect and visibility threshold more closely, i.e., the case when both masked and masking signals are small is more effectively tackled by the proposed scheme. Finally, the effects of the changes in luminance and contrast-structure are integrated via an adaptive method to obtain the overall image quality score. Extensive experiments conducted with six publicly available subject-rated databases (comprising diverse images and distortion types) have confirmed the effectiveness, robustness, and efficiency of the proposed scheme in comparison with the relevant state-of-the-art schemes.
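A minimal version of gradient-similarity scoring can be written with a forward-difference gradient and an SSIM-style ratio per pixel; the actual scheme's gradient operator, constants, masking model, and adaptive pooling are more elaborate, so treat this as a sketch of the idea only.

```python
def grad_mag(img):
    """Forward-difference gradient magnitude at interior pixels."""
    h, w = len(img), len(img[0])
    return [[abs(img[y][x + 1] - img[y][x]) + abs(img[y + 1][x] - img[y][x])
             for x in range(w - 1)] for y in range(h - 1)]

def gradient_similarity(ref, dist, c=1e-4):
    """Mean of the per-pixel ratio (2*g1*g2 + c) / (g1^2 + g2^2 + c)."""
    g1, g2 = grad_mag(ref), grad_mag(dist)
    scores = [(2 * a * b + c) / (a * a + b * b + c)
              for r1, r2 in zip(g1, g2) for a, b in zip(r1, r2)]
    return sum(scores) / len(scores)

ref  = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]                # sharp vertical edge
blur = [[0, 0.5, 0.5, 1]] * 4        # the same edge smeared by blur
same = [row[:] for row in ref]

print(gradient_similarity(ref, same), gradient_similarity(ref, blur))
```

An identical image scores exactly 1, while blur relocates and weakens the gradients, so the score drops: structural and contrast changes are captured even though pixel-wise differences are modest.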

663 citations


Journal ArticleDOI
TL;DR: Part II of the tutorial has summarized the remaining building blocks of the VO pipeline: specifically, how to detect and match salient and repeatable features across frames and robust estimation in the presence of outliers and bundle adjustment.
Abstract: Part II of the tutorial has summarized the remaining building blocks of the VO pipeline: specifically, how to detect and match salient and repeatable features across frames, robust estimation in the presence of outliers, and bundle adjustment. In addition, error propagation, applications, and links to publicly available code are included. VO is a well-understood and established part of robotics. VO has reached a maturity that has allowed us to successfully use it for certain classes of applications: space, ground, aerial, and underwater. In the presence of loop closures, VO can be used as a building block for a complete SLAM algorithm to reduce motion drift. Challenges that still remain are to develop and demonstrate large-scale and long-term implementations, such as driving autonomous cars for hundreds of miles. Such systems have recently been demonstrated using Lidar and Radar sensors [86]. However, for VO to be used in such systems, technical issues regarding robustness and, especially, long-term stability have to be resolved. Eventually, VO has the potential to replace Lidar-based systems for ego-motion estimation, which are currently leading the state of the art in accuracy, robustness, and reliability. VO offers a cheaper and mechanically easier-to-manufacture solution for ego-motion estimation while, additionally, being fully passive. Furthermore, the ongoing miniaturization of digital cameras offers the possibility to develop smaller and smaller robotic systems capable of ego-motion estimation.

630 citations


Journal ArticleDOI
TL;DR: To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers and guarantees faster convergence compared to competing alternatives.
Abstract: Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.

Journal ArticleDOI
TL;DR: A method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects, and is much faster and more robust with respect to background clutter than current state-of-the-art methods is presented.
Abstract: We present a method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects. At its core, our approach is a novel image representation for template matching designed to be robust to small image transformations. This robustness is based on spread image gradient orientations and allows us to test only a small subset of all possible pixel locations when parsing the image, and to represent a 3D object with a limited set of templates. In addition, we demonstrate that if a dense depth sensor is available we can extend our approach for an even better performance also taking 3D surface normal orientations into account. We show how to take advantage of the architecture of modern computers to build an efficient but very discriminant representation of the input images that can be used to consider thousands of templates in real time. We demonstrate in many experiments on real data that our method is much faster and more robust with respect to background clutter than current state-of-the-art methods.
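The "spread gradient orientations" idea can be demonstrated on a toy grid: quantized orientations become one-hot bits, each pixel ORs the bits of its neighborhood, and a template feature then survives small shifts as long as its bit is still present, so the per-feature test is a single bitwise AND. The grid size, spread radius, and template below are invented for illustration.

```python
def spread(ori, radius=1):
    """OR each pixel's orientation bits over a (2r+1)^2 neighborhood."""
    h, w = len(ori), len(ori[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        out[y][x] |= ori[yy][xx]
    return out

def score(template, spread_map, ty, tx):
    """Count template features whose orientation bit survives in the map."""
    return sum(1 for (dy, dx, bit) in template
               if spread_map[ty + dy][tx + dx] & bit)

# Orientations quantized to 8 bins, encoded one-hot as 1 << bin.
ori = [[0] * 6 for _ in range(6)]
ori[2][2], ori[2][3], ori[3][2] = 1 << 0, 1 << 2, 1 << 5
template = [(0, 0, 1 << 0), (0, 1, 1 << 2), (1, 0, 1 << 5)]
S = spread(ori)
# Full score at the true location AND one pixel off; zero far away.
print(score(template, S, 2, 2), score(template, S, 3, 3), score(template, S, 0, 0))
```

Because the spreading is precomputed once per image, thousands of templates can be scored with nothing but ANDs and popcount-style sums, which is the source of the method's real-time speed.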

Journal ArticleDOI
TL;DR: The recently developed sliding mode control driven by sliding mode disturbance observer (SMC-SMDO) approach is used to design a robust flight controller for a small quadrotor vehicle to demonstrate the robustness of the control when faced with external disturbances.
Abstract: Over the last decade, considerable interest has been shown from industry, government and academia to the design of Vertical Take-Off and Landing (VTOL) autonomous aerial vehicles. This paper uses the recently developed sliding mode control driven by sliding mode disturbance observer (SMC-SMDO) approach to design a robust flight controller for a small quadrotor vehicle. This technique allows for a continuous control robust to external disturbance and model uncertainties to be computed without the use of high control gain or extensive computational power. The robustness of the control to unknown external disturbances also leads to a reduction of the design cost as less pre-flight analyses are required. The multiple-loop, multiple time-scale SMC-SMDO flight controller is designed to provide robust position and attitude control of the vehicle while relying only on knowledge of the limits of the disturbances. Extensive simulations of a 6 DOF computer model demonstrate the robustness of the control when faced with external disturbances (including wind, collision and actuator failure) as well as model uncertainties.
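As a toy analogue (not the paper's SMC-SMDO design), the snippet below applies a sliding mode controller with a boundary-layer saturation, hence a continuous control signal, to a disturbed double integrator. The controller uses only the disturbance bound, never the disturbance itself, yet the state settles near zero; the gains, bound, and dynamics are illustrative.

```python
import math

def simulate(k=6.0, lam=2.0, phi=0.05, dt=0.001, T=8.0, d_max=1.0):
    """Drive x -> 0 for x_ddot = u + d, with |d| <= d_max unknown."""
    x, v = 1.0, 0.0                          # initial error and rate
    for i in range(int(T / dt)):
        d = d_max * math.sin(3 * dt * i)     # unknown bounded disturbance
        s = v + lam * x                      # sliding surface s = 0
        sat = max(-1.0, min(1.0, s / phi))   # boundary layer keeps u continuous
        u = -lam * v - k * sat               # robust as long as k > d_max
        v += (u + d) * dt                    # plant integrates u + d
        x += v * dt
    return x

print(abs(simulate()))                       # residual error stays small
```

On the surface, s evolves as ṡ = −k·sat(s/φ) + d, so any k exceeding the disturbance bound forces s into the boundary layer; the residual error scales with φ, trading chattering against precision.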

Proceedings ArticleDOI
16 Jun 2012
TL;DR: It is shown that a recognition system using only representations obtained from deep learning can achieve comparable accuracy with a system using a combination of hand-crafted image descriptors, and empirically show that learning weights not only is necessary for obtaining good multilayer representations, but also provides robustness to the choice of the network architecture parameters.
Abstract: Most modern face recognition systems rely on a feature representation given by a hand-crafted image descriptor, such as Local Binary Patterns (LBP), and achieve improved performance by combining several such representations. In this paper, we propose deep learning as a natural source for obtaining additional, complementary representations. To learn features in high-resolution images, we make use of convolutional deep belief networks. Moreover, to take advantage of global structure in an object class, we develop local convolutional restricted Boltzmann machines, a novel convolutional learning model that exploits the global structure by not assuming stationarity of features across the image, while maintaining scalability and robustness to small misalignments. We also present a novel application of deep learning to descriptors other than pixel intensity values, such as LBP. In addition, we compare performance of networks trained using unsupervised learning against networks with random filters, and empirically show that learning weights not only is necessary for obtaining good multilayer representations, but also provides robustness to the choice of the network architecture parameters. Finally, we show that a recognition system using only representations obtained from deep learning can achieve accuracy comparable to that of a system using a combination of hand-crafted image descriptors. Moreover, by combining these representations, we achieve state-of-the-art results on a real-world face verification database.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This work presents GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data, and considers the specific application of background and foreground separation in video.
Abstract: It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time. In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop.

Book ChapterDOI
07 Oct 2012
TL;DR: This work presents an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory.
Abstract: Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data.

Journal ArticleDOI
TL;DR: This letter studies the robust beamforming problem for the multi-antenna wireless broadcasting system with simultaneous information and power transmission, under the assumption of imperfect channel state information at the transmitter and shows that the solution of the relaxed SDP problem is always rank-one, indicating that the relaxation is tight and the optimal solution can be got.
Abstract: In this letter, we study the robust beamforming problem for the multi-antenna wireless broadcasting system with simultaneous information and power transmission, under the assumption of imperfect channel state information (CSI) at the transmitter. Following the worst-case deterministic model, our objective is to maximize the worst-case harvested energy for the energy receiver while guaranteeing that the rate for the information receiver is above a threshold for all possible channel realizations. Such a problem is nonconvex and has an infinite number of constraints. Using certain transformation techniques, we convert this problem into a relaxed semidefinite programming (SDP) problem which can be solved efficiently. We further show that the solution of the relaxed SDP problem is always rank-one. This indicates that the relaxation is tight, and the optimal solution to the original problem can be obtained. Simulation results are presented to validate the effectiveness of the proposed algorithm.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper forms multi-target tracking as a discrete-continuous optimization problem that handles each aspect in its natural domain and allows leveraging powerful methods for multi-model fitting and demonstrates the accuracy and robustness of this approach with state-of-the-art performance on several standard datasets.
Abstract: The problem of multi-target tracking is comprised of two distinct, but tightly coupled challenges: (i) the naturally discrete problem of data association, i.e. assigning image observations to the appropriate target; (ii) the naturally continuous problem of trajectory estimation, i.e. recovering the trajectories of all targets. To go beyond simple greedy solutions for data association, recent approaches often perform multi-target tracking using discrete optimization. This has the disadvantage that trajectories need to be pre-computed or represented discretely, thus limiting accuracy. In this paper we instead formulate multi-target tracking as a discrete-continuous optimization problem that handles each aspect in its natural domain and allows leveraging powerful methods for multi-model fitting. Data association is performed using discrete optimization with label costs, yielding near optimality. Trajectory estimation is posed as a continuous fitting problem with a simple closed-form solution, which is used in turn to update the label costs. We demonstrate the accuracy and robustness of our approach with state-of-the-art performance on several standard datasets.

Book ChapterDOI
07 Oct 2012
TL;DR: This paper fills the gap of evaluation for the recent wave of binary feature descriptors, which aim to provide robustness while achieving high computational efficiency, and provides a novel taxonomy unifying both traditional and novel binary features.
Abstract: Performance evaluation of salient features has a long-standing tradition in computer vision. In this paper, we fill the gap of evaluation for the recent wave of binary feature descriptors, which aim to provide robustness while achieving high computational efficiency. We use established metrics to embed our assessment into the body of existing evaluations, allowing us to provide a novel taxonomy unifying both traditional and novel binary features. Moreover, we analyze the performance of different detector and descriptor pairings, which are often used in practice but have been infrequently analyzed. Additionally, we complement existing datasets with novel data testing for illumination change, pure camera rotation, pure scale change, and the variety present in photo-collections. Our performance analysis clearly demonstrates the power of the new class of features. To benefit the community, we also provide a website for the automatic testing of new description methods using our provided metrics and datasets www.cs.unc.edu/feature-evaluation.

01 Dec 2012
TL;DR: In this paper, the authors introduce many objective robust decision making (MORDM), which combines concepts and methods from many objective evolutionary optimization and robust decision making (RDM), along with extensive use of interactive visual analytics, to facilitate the management of complex environmental systems.
Abstract: This paper introduces many objective robust decision making (MORDM). MORDM combines concepts and methods from many objective evolutionary optimization and robust decision making (RDM), along with extensive use of interactive visual analytics, to facilitate the management of complex environmental systems. Many objective evolutionary search is used to generate alternatives for complex planning problems, enabling the discovery of the key tradeoffs among planning objectives. RDM then determines the robustness of planning alternatives to deeply uncertain future conditions and facilitates decision makers' selection of promising candidate solutions. MORDM tests each solution under the ensemble of future extreme states of the world (SOW). Interactive visual analytics are used to explore whether solutions of interest are robust to a wide range of plausible future conditions (i.e., assessment of their Pareto satisficing behavior in alternative SOW). Scenario discovery methods that use statistical data mining algorithms are then used to identify what assumptions and system conditions strongly influence the cost-effectiveness, efficiency, and reliability of the robust alternatives. The framework is demonstrated using a case study that examines a single city's water supply in the Lower Rio Grande Valley (LRGV) in Texas, USA. Results suggest that including robustness as a decision criterion can dramatically change the formulation of complex environmental management problems as well as the negotiated selection of candidate alternatives to implement. MORDM also allows decision makers to characterize the most important vulnerabilities for their systems, which should be the focus of ex post monitoring and identification of triggers for adaptive management.

Journal ArticleDOI
TL;DR: In this paper, a generalized split-sample test (GSST) was proposed to evaluate the robustness of three hydrological models over a set of 216 catchments in southeast Australia.
Abstract: This paper investigates the actual extrapolation capacity of three hydrological models under differing climate conditions. We propose a general testing framework, in which we perform series of split-sample tests, testing all possible combinations of calibration-validation periods using a 10 year sliding window. This methodology, which we have called the generalized split-sample test (GSST), provides insights into the model's transposability over time under various climatic conditions. The three conceptual rainfall-runoff models yielded similar results over a set of 216 catchments in southeast Australia. First, we assessed the models' efficiency in validation using a criterion combining the root-mean-square error and bias. A relation was found between this efficiency and the changes in mean rainfall (P) but not with changes in mean potential evapotranspiration (PE) or air temperature (T). Second, we focused on average runoff volumes and found that simulation biases are greatly affected by changes in P. Calibration over a wetter (drier) climate than the validation climate leads to an overestimation (underestimation) of the mean simulated runoff. We observed different magnitudes of these model deficiencies depending on the catchment considered. Results indicate that the transfer of model parameters in time may introduce a significant level of error into simulations, meaning increased uncertainty in the various practical applications of these models (flow simulation, forecasting, design, reservoir management, climate change impact assessments, etc.). Testing model robustness with respect to this issue should help better quantify these uncertainties.
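The combinatorial bookkeeping of such a generalized split-sample test is simple to reproduce: slide a 10-year window over the record and pair every window, as a calibration period, with every non-overlapping window as a validation period. The 40-year record below is an illustrative choice, not the study's data.

```python
def gsst_pairs(first_year, last_year, window=10):
    """All (calibration, validation) window pairs with no overlap."""
    starts = range(first_year, last_year - window + 2)
    windows = [(s, s + window - 1) for s in starts]
    return [(cal, val) for cal in windows for val in windows
            if val[1] < cal[0] or val[0] > cal[1]]   # disjoint periods

pairs = gsst_pairs(1970, 2009)     # 40-year record -> 31 sliding windows
print(len(pairs), pairs[0])
```

Even this modest record yields hundreds of calibration-validation contrasts, including dry-to-wet and wet-to-dry transfers, which is what lets the test expose climate-dependent parameter transferability.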

01 Jan 2012
TL;DR: In this article, the authors implemented a robust face recognition system via sparse representation and convex optimization, which treated each test sample as sparse linear combination of training samples, and got the sparse solution via L1-minimization.
Abstract: In this project, we implement a robust face recognition system via sparse representation and convex optimization. We treat each test sample as a sparse linear combination of training samples and obtain the sparse solution via L1-minimization. We also explore group sparseness (L2-norm) as well as standard L1-norm regularization. We discuss the role of feature extraction and the robustness of classification to occlusion or pixel corruption in the face recognition system. The experiments demonstrate that the choice of features is no longer critical once the sparseness is properly harnessed. We also verify that the proposed algorithm outperforms other methods.
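The L1-minimization step can be sketched with a generic iterative soft-thresholding (ISTA) solver. This is an illustrative stand-in, not the solver used in the project, and the toy "dictionary" below is random rather than an actual face training set.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l1(A, b, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    a standard solver for this kind of L1-minimization; a generic sketch,
    not the specific solver used in the project."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Toy dictionary: columns play the role of training samples; the "test
# sample" b is generated from column 0 only, so the solution is sparse.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = A[:, 0].copy()
x = ista_l1(A, b)
print(int(np.argmax(np.abs(x))))  # dominant coefficient is index 0
```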

Journal ArticleDOI
TL;DR: The treatment concerns statistical robustness, which deals with deviations from the distributional assumptions, and addresses single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data.
Abstract: The word robust has been used in many contexts in signal processing. Our treatment concerns statistical robustness, which deals with deviations from the distributional assumptions. Many problems encountered in engineering practice rely on the Gaussian distribution of the data, which in many situations is well justified. This enables a simple derivation of optimal estimators. Nominal optimality, however, is useless if the estimator was derived under distributional assumptions on the noise and the signal that do not hold in practice. Even slight deviations from the assumed distribution may cause the estimator's performance to degrade drastically or to break down completely. The signal processing practitioner should, therefore, ask whether the performance of the derived estimator is acceptable in situations where the distributional assumptions do not hold. Isn't robustness, then, of major concern for engineering practice? Many areas of engineering today show that the distribution of the measurements is far from Gaussian, as it contains outliers, which cause the distribution to be heavy-tailed. Under such scenarios, we address single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data. A rather extensive treatment of the important and challenging case of dependent data for the signal processing practitioner is also included. For these problems, a comparative analysis of the most important robust methods is carried out by evaluating their performance theoretically, using simulations as well as real-world data.
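The basic phenomenon, a nominal estimator breaking down under heavy-tailed contamination, can be shown in a few lines: one outlier ruins the sample mean, while the median, the simplest robust location estimator, barely moves. The numbers below are illustrative only.

```python
from statistics import mean, median

# Illustration (not from the paper): the sample mean breaks down under a
# single heavy-tailed outlier, while the median is barely affected.
clean = [9.8, 10.1, 9.9, 10.2, 10.0]
contaminated = clean + [1000.0]  # one gross outlier

print(round(mean(clean), 2), round(median(clean), 2))  # 10.0 10.0
print(round(mean(contaminated), 2), round(median(contaminated), 2))  # 175.0 10.05
```

The mean has a breakdown point of zero (a single bad sample can move it arbitrarily far), whereas the median tolerates up to half the data being contaminated.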

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper transforms the visual samples in the source domain into an intermediate representation such that each transformed source sample can be linearly reconstructed by the samples of the target domain, making it more robust than previous methods.
Abstract: Visual domain adaptation addresses the problem of adapting the sample distribution of the source domain to the target domain, where the recognition task is intended but the data distributions are different. In this paper, we present a low-rank reconstruction method to reduce the domain distribution disparity. Specifically, we transform the visual samples in the source domain into an intermediate representation such that each transformed source sample can be linearly reconstructed by the samples of the target domain. Unlike existing work, our method captures the intrinsic relatedness of the source samples during the adaptation process while uncovering the noise and outliers in the source domain that cannot be adapted, making it more robust than previous methods. We formulate our problem as a constrained nuclear norm and ℓ2,1-norm minimization objective and then adopt the Augmented Lagrange Multiplier (ALM) method for the optimization. Extensive experiments on various visual adaptation tasks show that the proposed method consistently and significantly beats the state-of-the-art domain adaptation methods.
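The nuclear norm term in ALM-style solvers is typically handled by singular value thresholding, its proximal operator. The sketch below shows that operator in isolation on hypothetical data; it is not the paper's full optimization.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding, the proximal operator of the nuclear
    norm used inside ALM iterations; a generic sketch, not the paper's
    full algorithm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 matrix plus small noise: thresholding suppresses the small
# noise singular values and recovers a low-rank approximation.
rng = np.random.default_rng(1)
L = np.outer(rng.standard_normal(6), rng.standard_normal(4))
M = L + 0.01 * rng.standard_normal((6, 4))
X = svt(M, tau=0.1)
print(int(np.linalg.matrix_rank(X, tol=1e-6)))  # effectively rank 1
```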

Journal ArticleDOI
TL;DR: This paper introduces a new transmission system—“Cloud Transmission (Cloud Txn)” for terrestrial broadcasting or point-to-multipoint multimedia services, designed to be robust to co-channel interference, immune to multipath distortion, and is highly spectrum reuse friendly.
Abstract: This paper introduces a new transmission system—“Cloud Transmission (Cloud Txn)” for terrestrial broadcasting or point-to-multipoint multimedia services. The system is based on the concept of increasing the reception robustness, and using the spectrum more efficiently. As such, the system is designed to be robust to co-channel interference, immune to multipath distortion, and is highly spectrum reuse friendly. It can increase the spectrum utilization significantly (3 to 4 times) by making all terrestrial RF channels in a city/market available for broadcast service. The system has the robustness required for providing mobile, pedestrian and indoor reception. It can be used for both small and large cell applications. The receiver is simple and energy efficient. The proposed system is scalable and can be implemented progressively, i.e., providing an easy transition from the traditional systems to the new Cloud Txn system. It can also coexist with the existing DTV systems and their newer versions, such as DVB-T2 or Super Hi-Vision systems.

Journal ArticleDOI
TL;DR: A novel sliding mode-based impact time and angle guidance law for engaging a modern warfare ship is presented and can be applied to many realistic engagement scenarios which include uncertainties such as target motion.
Abstract: A novel sliding mode-based impact time and angle guidance law for engaging a modern warfare ship is presented in this paper. In order to satisfy the impact time and angle constraints, a line-of-sight rate shaping process is introduced. This shaping process results in a tuning parameter that can be used to create a line-of-sight rate profile to satisfy the final time and heading angle requirements and to yield acceptable normal acceleration values. In order to track the desired line-of-sight rate profile in the presence of uncertainties, a novel robust second-order sliding mode control law is developed using a backstepping concept. Due to the robustness of the control law, it can be applied to many realistic engagement scenarios which include uncertainties such as target motion. Numerical simulations with different warship engagements are presented to illustrate the potential of the developed method.

Journal ArticleDOI
TL;DR: A fully automated, generally applicable three-stage clustering approach is developed for interpreting a stabilization diagram that does not require any user-specified parameter or threshold value, and can be used in an experimental, operational, and combined vibration testing context and with any parametric system identification algorithm.

Journal ArticleDOI
TL;DR: This paper deals with the design and implementation of a nonlinear control algorithm for the attitude tracking of a four-rotor helicopter known as quadrotor based on the second order sliding mode technique known as Super-Twisting Algorithm (STA).
Abstract: This paper deals with the design and implementation of a nonlinear control algorithm for the attitude tracking of a four-rotor helicopter known as quadrotor. This algorithm is based on the second order sliding mode technique known as Super-Twisting Algorithm (STA) which is able to ensure robustness with respect to bounded external disturbances. In order to show the effectiveness of the proposed controller, experimental tests were carried out on a real quadrotor. The obtained results show the good performance of the proposed controller in terms of stabilization, tracking and robustness with respect to external disturbances.
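A minimal simulation of the STA acting on a first-order sliding variable with a bounded disturbance illustrates the idea; the gains, disturbance, and plant below are illustrative and do not reproduce the paper's quadrotor model or tuning.

```python
import math

def super_twisting_sim(k1=1.5, k2=1.1, dt=1e-3, t_end=10.0):
    """Euler simulation of the super-twisting algorithm (STA) on a
    first-order sliding variable  sigma_dot = u + d(t)  with bounded
    disturbance d(t) = 0.5*sin(t). The continuous control
    u = -k1*sqrt(|sigma|)*sign(sigma) + v,  v_dot = -k2*sign(sigma)
    drives sigma to zero despite the disturbance. Illustrative gains only."""
    sigma, v = 1.0, 0.0
    t = 0.0
    while t < t_end:
        sgn = (sigma > 0) - (sigma < 0)
        u = -k1 * math.sqrt(abs(sigma)) * sgn + v
        v += -k2 * sgn * dt
        sigma += (u + 0.5 * math.sin(t)) * dt
        t += dt
    return sigma

print(abs(super_twisting_sim()) < 0.05)  # sigma driven to (near) zero
```

Note the robustness condition at work: the integral term's gain k2 exceeds the bound on the disturbance derivative (here 0.5), so v absorbs the disturbance without the discontinuous chattering of first-order sliding mode.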

Journal ArticleDOI
TL;DR: A robust recurrent neural network is presented in a Bayesian framework based on echo state mechanisms that is robust in the presence of outliers and is superior to existing methods.
Abstract: In this paper, a robust recurrent neural network is presented in a Bayesian framework based on echo state mechanisms. Since the new model is capable of handling outliers in the training data set, it is termed a robust echo state network (RESN). The RESN inherits the basic idea of ESN learning in a Bayesian framework, but replaces the Gaussian distribution commonly used as the likelihood function of the model output with a Laplace one, which is more robust to outliers. Moreover, the training of the RESN is facilitated by employing a bound optimization algorithm, based on which a proper surrogate function is derived and the Laplace likelihood function is approximated by a Gaussian one, while remaining robust to outliers. This leads to an efficient method for estimating model parameters, which can be carried out via a Bayesian evidence procedure in a fully autonomous way. Experimental results show that the proposed method is robust in the presence of outliers and is superior to existing methods.
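Replacing a Gaussian likelihood with a Laplace one turns the readout fit into a least-absolute-deviations problem. The sketch below solves such a problem by iteratively reweighted least squares on a toy linear readout; it mirrors the robustness mechanism only and does not reproduce the paper's Bayesian evidence procedure.

```python
import numpy as np

def lad_irls(X, y, n_iter=50, eps=1e-8):
    """Least-absolute-deviations fit via iteratively reweighted least
    squares -- the maximum-likelihood estimate under a Laplace noise
    model, mirroring the RESN's Laplace likelihood. Generic sketch only."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = np.abs(y - X @ w) + eps          # residual magnitudes
        wsqrt = 1.0 / np.sqrt(r)             # Laplace -> row weights 1/|r|
        w = np.linalg.lstsq(X * wsqrt[:, None], y * wsqrt, rcond=None)[0]
    return w

# Toy readout target y = 2*x, with one gross outlier in the training data.
x = np.arange(1.0, 9.0)
y = 2.0 * x
y[3] = 50.0                                  # outlier
X = x[:, None]
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # Gaussian ML: pulled off target
w_lad = lad_irls(X, y)                       # Laplace ML: stays near slope 2
print(round(float(w_lad[0]), 2))  # close to the true slope 2.0
```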

Journal ArticleDOI
TL;DR: This paper proposes an efficient approximation method for solving the nonconvex centralized problem, using semidefinite relaxation (SDR), an approximation technique based on convex optimization, and analytically shows the convergence of the proposed distributed robust MCBF algorithm to the optimal centralized solution.
Abstract: Multicell coordinated beamforming (MCBF), where multiple base stations (BSs) collaborate with each other in the beamforming design for mitigating the intercell interference (ICI), has been a subject drawing great attention recently. Most MCBF designs assume perfect channel state information (CSI) of mobile stations (MSs); however, CSI errors are inevitable at the BSs in practice. Assuming elliptically bounded CSI errors, this paper studies the robust MCBF design problem that minimizes the weighted sum power of BSs subject to worst-case signal-to-interference-plus-noise ratio (SINR) constraints on the MSs. Our goal is to devise a distributed optimization method to obtain the worst-case robust beamforming solutions in a decentralized fashion, with only local CSI used at each BS and limited backhaul information exchange between BSs. However, the considered problem is difficult to handle even in the centralized form. We first propose an efficient approximation method for solving the nonconvex centralized problem, using semidefinite relaxation (SDR), an approximation technique based on convex optimization. Then a distributed robust MCBF algorithm is further proposed, using a distributed convex optimization technique known as the alternating direction method of multipliers (ADMM). We analytically show the convergence of the proposed distributed robust MCBF algorithm to the optimal centralized solution. We also extend the worst-case robust beamforming design, as well as its decentralized implementation method, to a fully coordinated scenario. Simulation results are presented to examine the effectiveness of the proposed SDR method and the distributed robust MCBF algorithm.
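The ADMM machinery behind such distributed algorithms can be illustrated on the simplest consensus problem: scalar agents agreeing on a common value. The actual robust MCBF problem involves SDR-relaxed beamforming matrices, which this sketch does not attempt.

```python
def consensus_admm(local_targets, rho=1.0, n_iter=100):
    """Toy ADMM consensus: each agent i (think 'base station') holds a
    private cost 0.5*(x_i - a_i)^2 and all agents must agree on a common
    value z. Illustrates the ADMM update pattern only, not the paper's
    robust MCBF problem."""
    n = len(local_targets)
    x = [0.0] * n      # local copies
    u = [0.0] * n      # scaled dual variables
    z = 0.0            # consensus variable
    for _ in range(n_iter):
        # local x-updates (closed form for the quadratic private costs)
        x = [(a + rho * (z - ui)) / (1.0 + rho)
             for a, ui in zip(local_targets, u)]
        # consensus z-update, then dual ascent on the disagreement
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

# The agreed value converges to the minimizer of the summed costs: the mean.
print(round(consensus_admm([1.0, 2.0, 6.0]), 4))  # -> 3.0
```

Only x_i + u_i needs to be exchanged per iteration, mirroring the limited backhaul exchange emphasized in the paper.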