Topic

Robustness (computer science)

About: Robustness (computer science) is a research topic. Over its lifetime, 94,718 publications have been published within this topic, receiving 1,686,534 citations. The topic is also known as: fault tolerance & tolerance.


Papers

Open access · Book Chapter · DOI: 10.1007/11744023_32
Herbert Bay, Tinne Tuytelaars, Luc Van Gool (2 institutions)
07 May 2006
Abstract: In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.
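
The speed claims rest on integral images: once a cumulative-sum table has been built in a single pass, the sum over any rectangular (box) filter region costs four array lookups, independent of the filter's size. A minimal sketch of that idea in Python (illustrative code, not the authors' implementation; names are ours):

    import numpy as np

    def integral_image(img):
        # ii[y, x] holds the sum of img[:y, :x]; a padded row/column of zeros simplifies lookups.
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return ii

    def box_sum(ii, y0, x0, y1, x1):
        # Sum of img[y0:y1, x0:x1] in exactly four lookups, whatever the box size.
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    img = np.random.rand(480, 640)
    ii = integral_image(img)
    assert np.isclose(box_sum(ii, 10, 10, 19, 19), img[10:19, 10:19].sum())

Because a box response no longer costs more for larger boxes, larger filters (larger scales) are as cheap as small ones, which is what lets the detector scan scale space without repeatedly smoothing and subsampling the image.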


  • Fig. 2. Left: Detected interest points for a sunflower field. This kind of scene clearly shows the nature of the features from Hessian-based detectors. Middle: Haar wavelet types used for SURF. Right: Detail of the Graffiti scene showing the size of the descriptor window at different scales.
  • Fig. 5. An example image from the reference set (left) and the test set (right). Note the difference in viewpoint and colours.
  • Fig. 6. Repeatability score for image sequences, from left to right and top to bottom: Wall and Graffiti (viewpoint change), Leuven (lighting change), and Boat (zoom and rotation).
  • Table 1. Thresholds, number of detected points and calculation time for the detectors in our comparison (first image of the Graffiti scene, 800 × 640).
  • Table 2. Computation times for the joint detector-descriptor implementations, tested on the first image of the Graffiti sequence. The thresholds are adapted in order to detect the same number of interest points for all methods. These relative speeds are also representative for other images.
  • + 4 more figures and tables

Topics: Scale-invariant feature transform (57%), GLOH (56%), Interest point detection (54%)

12,404 Citations


Open access · Journal Article · DOI: 10.1016/J.CVIU.2007.09.014
Abstract: This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters. We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision.
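
In this journal version the "Hessian matrix-based measure" is made concrete: the detector looks for local maxima of an approximated Hessian determinant, det(H) ≈ Dxx·Dyy − (0.9·Dxy)², where Dxx, Dyy and Dxy are box-filter approximations of second-order Gaussian derivatives evaluated on the integral image. The rough illustration below uses true Gaussian derivatives from scipy as stand-ins for the box filters; the function name and toy image are ours:

    import numpy as np
    from scipy import ndimage

    def hessian_blob_response(img, sigma=1.2, weight=0.9):
        # Second-order Gaussian derivative responses (stand-ins for SURF's box filters).
        Dxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
        Dyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
        Dxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
        # The 0.9 weight balances the box-filter approximation against true derivatives.
        return Dxx * Dyy - (weight * Dxy) ** 2

    img = np.random.rand(240, 320)
    response = hessian_blob_response(img)
    # Interest points would be the local maxima of `response` over space and scale.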


  • Fig. 8. Detected interest points for a sunflower field. This kind of scene shows the nature of the features obtained using Hessian-based detectors.
  • Fig. 9. Haar wavelet filters to compute the responses in the x (left) and y direction (right). The dark parts have the weight −1 and the light parts +1.
  • Fig. 10. Orientation assignment: a sliding orientation window of size …
  • Fig. 21. Orthogonal projection of the reconstructed angle shown in Fig. 20.
  • Fig. 22. 3D reconstruction with KU Leuven's 3D webservice. Left: one of the 13 input images for the camera calibration. Right: position of the reconstructed cameras and sparse 3D model of the vase.
  • + 22 more figures and tables

11,276 Citations


Open access · Journal Article · DOI: 10.1109/TPAMI.2008.79
John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, +1 more (2 institutions)
Abstract: We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
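
The classification rule described here is short enough to sketch: code the test sample as a sparse combination of all training samples, then keep only the coefficients belonging to each candidate class and pick the class whose partial reconstruction has the smallest residual. In the sketch below a Lasso solver stands in for the paper's ℓ1-minimization, and the toy data, names and parameters are illustrative only:

    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(A, labels, y, alpha=0.01):
        # A: (n_features, n_train) dictionary whose columns are training samples.
        # Sparse coding of y over A; Lasso is a stand-in for exact l1-minimization.
        x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y).coef_
        residuals = {}
        for c in np.unique(labels):
            x_c = np.where(labels == c, x, 0.0)      # keep only class-c coefficients
            residuals[c] = np.linalg.norm(y - A @ x_c)
        return min(residuals, key=residuals.get)     # class with the smallest residual

    rng = np.random.default_rng(0)
    A = rng.normal(size=(120, 40))                   # 40 training samples, 2 classes
    labels = np.repeat([0, 1], 20)
    y = A[:, 3] + 0.01 * rng.normal(size=120)        # noisy copy of a class-0 sample
    print(src_classify(A, labels, y))                # expected: 0

The abstract's point about features follows from this structure: the solver only sees the matrix A and the vector y, so any feature map of sufficient dimension (downsampling, random projections, eigenfaces) leaves the procedure unchanged.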


Topics: Sparse approximation (64%), K-SVD (58%), Feature vector (58%)

9,039 Citations


Journal Article · DOI: 10.1016/S1053-8119(02)91132-8
01 Oct 2002 · NeuroImage
Abstract: Linear registration and motion correction are important components of structural and functional brain image analysis. Most modern methods optimize some intensity-based cost function to determine the best registration. To date, little attention has been focused on the optimization method itself, even though the success of most registration methods hinges on the quality of this optimization. This paper examines the optimization process in detail and demonstrates that the commonly used multiresolution local optimization methods can, and do, get trapped in local minima. To address this problem, two approaches are taken: (1) to apodize the cost function and (2) to employ a novel hybrid global-local optimization method. This new optimization method is specifically designed for registering whole brain images. It substantially reduces the likelihood of producing misregistrations due to being trapped by local minima. The increased robustness of the method, compared to other commonly used methods, is demonstrated by a consistency test. In addition, the accuracy of the registration is demonstrated by a series of experiments with motion correction. These motion correction experiments also investigate how the results are affected by different cost functions and interpolation methods.
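
The optimization strategy described here combines a coarse global search with local refinement. The generic miniature below illustrates only the multi-start flavour of that idea; it is not the paper's registration code or cost function (the toy one-parameter cost and all names are ours), but it shows why launching a local optimiser from several coarse candidates reduces the risk of being trapped in a local minimum:

    import numpy as np
    from scipy.optimize import minimize

    def best_of_multistart(cost, starts, method="Powell"):
        # Run a local optimiser from several coarse starting points, keep the best.
        results = [minimize(cost, x0, method=method) for x0 in starts]
        return min(results, key=lambda r: r.fun)

    # Toy one-parameter "registration" cost with several spurious local minima.
    cost = lambda p: 0.1 * (p[0] - 3.0) ** 2 + np.sin(3.0 * p[0])
    starts = [np.array([a]) for a in np.linspace(-4.0, 4.0, 9)]   # coarse search grid
    best = best_of_multistart(cost, starts)
    print(best.x, best.fun)    # the best of the nine local searches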


7,937 Citations


Journal Article · DOI: 10.1006/CVIU.1995.1004
Abstract: Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.
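
In the published method the "patterns of variability" are captured by a principal component analysis of aligned landmark shapes, and the shape parameters are clamped to a few standard deviations of the training data so the model can only deform in ways the training set supports. A compact sketch under those assumptions (variable names and the toy data are ours):

    import numpy as np

    def train_shape_model(shapes, var_keep=0.98):
        # shapes: (n_examples, 2 * n_landmarks) aligned landmark coordinates, flattened.
        mean = shapes.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(shapes - mean, rowvar=False))
        order = np.argsort(eigval)[::-1]                      # largest variance first
        eigval, eigvec = eigval[order], eigvec[:, order]
        k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), var_keep)) + 1
        return mean, eigvec[:, :k], eigval[:k]                # model: x ~ mean + P @ b

    def constrain(b, eigval, n_sd=3.0):
        # Clamp each shape parameter to +/- n_sd standard deviations: the model may
        # only deform in ways seen during training (the key difference from a snake).
        lim = n_sd * np.sqrt(eigval)
        return np.clip(b, -lim, lim)

    shapes = np.random.default_rng(1).normal(size=(20, 20))   # 20 toy "aligned shapes"
    mean, P, eigval = train_shape_model(shapes)
    b = constrain(P.T @ (shapes[0] - mean), eigval)
    reconstructed = mean + P @ b                               # shape implied by b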


Topics: Active shape model (66%), Active appearance model (65%), Active contour model (59%)

7,675 Citations


Performance Metrics

Number of papers in the topic in previous years:

Year    Papers
2022    196
2021    7,716
2020    7,607
2019    7,095
2018    6,130
2017    5,402

Top Attributes


Topic's top 5 most impactful authors

Ron J. Patton: 65 papers, 6.6K citations
Vicenç Puig: 55 papers, 985 citations
Steven X. Ding: 35 papers, 1.8K citations
Abdelhak M. Zoubir: 33 papers, 522 citations
Cho-Jui Hsieh: 33 papers, 900 citations

Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 93% related
Kalman filter: 48.3K papers, 936.7K citations, 93% related
Filter (signal processing): 81.4K papers, 1M citations, 93% related
Optimization problem: 96.4K papers, 2.1M citations, 93% related
Estimation theory: 35.3K papers, 1M citations, 93% related