
Histogram of oriented gradients

About: Histogram of oriented gradients is a research topic. Over its lifetime, 2,037 publications have been published within this topic, receiving 55,881 citations. The topic is also known as: HOG.
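As background for the papers below, here is a minimal sketch of computing a HOG descriptor. It uses scikit-image's hog function; the sample image and the parameter values (9 orientation bins, 8×8-pixel cells, 2×2-cell blocks) are illustrative choices, not tied to any paper on this page.

# Minimal HOG descriptor sketch (illustrative parameters).
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # any grayscale image works
features, hog_image = hog(
    image,
    orientations=9,              # number of gradient-orientation bins
    pixels_per_cell=(8, 8),      # cell over which each histogram is accumulated
    cells_per_block=(2, 2),      # blocks used for local contrast normalization
    block_norm="L2-Hys",
    visualize=True,              # also return an image of the cell histograms
    feature_vector=True,
)
print(features.shape)            # flattened descriptor, ready for a classifier

The flattened vector is what downstream classifiers (such as the SVMs in several entries below) consume.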


Papers
Journal ArticleDOI
TL;DR: A new method for strawberry detection, intended for a strawberry-harvesting robot and based on a histogram of oriented gradients (HOG) descriptor combined with a support vector machine (SVM) classifier, achieves high detection accuracy (87%) in a reasonable run time and can appropriately handle slightly overlapping strawberries. (A generic sketch of the underlying HOG + SVM pipeline follows this entry.)

34 citations
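The paper's exact detection windows, training data, and SVM settings are not reproduced on this page; the sketch below shows only the generic HOG-plus-linear-SVM patch classifier that such detectors build on, using scikit-image and scikit-learn with placeholder data.

# Generic HOG + linear SVM patch classifier (illustrative, not the paper's exact setup).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    """Compute a HOG descriptor for each fixed-size grayscale patch."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])

# Placeholder data: 64x64 grayscale patches with binary labels
# (1 = target object, 0 = background); a real system would crop these
# from annotated images.
rng = np.random.default_rng(0)
patches = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

X = hog_features(patches)
clf = LinearSVC(C=1.0).fit(X, labels)

# At detection time the same descriptor is computed for each candidate
# window and the SVM decides whether that window contains the object.
print(clf.predict(hog_features(patches[:5])))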

Journal ArticleDOI
TL;DR: This study aims to identify the best feature descriptor for FER by empirically evaluating five feature descriptors, namely Gabor, Haar, Local Binary Pattern (LBP), Histogram of Oriented Gradients (HOG), and Binary Robust Independent Elementary Features (BRIEF). (A rough descriptor-versus-classifier comparison sketch follows this entry.)
Abstract: Facial expression recognition (FER) is a crucial technology and a challenging task for human–computer interaction. Previous methods have used different feature descriptors for FER, and comparative studies are lacking. In this paper, we aim to identify the best feature descriptor for FER by empirically evaluating five feature descriptors, namely Gabor, Haar, Local Binary Pattern (LBP), Histogram of Oriented Gradients (HOG), and Binary Robust Independent Elementary Features (BRIEF) descriptors. We examine each feature descriptor with six classification methods, including k-Nearest Neighbors (k-NN), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and Adaptive Boosting (AdaBoost), on four unique facial expression datasets. In addition to test accuracies, we present confusion matrices of FER. We also analyze the effect of combined features and image resolutions on FER performance. Our study indicates that the HOG descriptor works best for FER when the image resolution of a detected face is higher than 48×48 pixels.

34 citations
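The study's datasets and exact settings are not reproduced here; the following sketch only illustrates the descriptor-versus-classifier grid such a comparison runs, using HOG and LBP from scikit-image, SVM and k-NN from scikit-learn, and randomly generated placeholder face crops.

# Sketch of comparing feature descriptors across classifiers (placeholder data).
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def hog_desc(img):
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def lbp_desc(img):
    # LBP expects integer-valued images; quantize the [0, 1] floats to uint8.
    lbp = local_binary_pattern((img * 255).astype(np.uint8), P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist

# Placeholder "faces": 48x48 grayscale crops with expression labels 0..5.
rng = np.random.default_rng(0)
faces = rng.random((120, 48, 48))
labels = rng.integers(0, 6, size=120)

descriptors = {"HOG": hog_desc, "LBP": lbp_desc}
classifiers = {"SVM": SVC(kernel="linear"), "k-NN": KNeighborsClassifier(n_neighbors=3)}

for d_name, d_fn in descriptors.items():
    X = np.array([d_fn(f) for f in faces])
    for c_name, clf in classifiers.items():
        score = cross_val_score(clf, X, labels, cv=3).mean()
        print(f"{d_name} + {c_name}: mean accuracy {score:.3f}")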

Book ChapterDOI
03 Sep 2013
TL;DR: A variation of the L1-norm dual total variation (TV-L1) optical flow model is proposed, with a new illumination-robust data term defined from histograms of oriented gradients computed from two consecutive frames; the resulting model is significantly more robust to illumination changes. (A small sketch of the intuition behind the HOG data term follows this entry.)
Abstract: The brightness constancy assumption has widely been used in variational optical flow approaches as their basic foundation. Unfortunately, this assumption does not hold when illumination changes or when objects move into a part of the scene with different brightness conditions. This paper proposes a variation of the L1-norm dual total variation (TV-L1) optical flow model with a new illumination-robust data term defined from the histogram of oriented gradients computed from two consecutive frames. In addition, a weighted non-local term is utilized for denoising the resulting flow field. Experiments with complex textured images from different scenarios show results comparable to state-of-the-art optical flow models, while being significantly more robust to illumination changes.

34 citations
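The authors' TV-L1 formulation is not reproduced here; the sketch below only illustrates the intuition behind the HOG-based data term, namely that per-cell gradient-orientation histograms change far less under a global illumination shift than raw brightness does. The synthetic illumination change (scale 0.6, offset 0.2) and the 16×16 cell size are arbitrary choices.

# Sketch: orientation histograms vs. raw brightness under an illumination change.
import numpy as np
from skimage import data, img_as_float
from skimage.feature import hog

frame1 = img_as_float(data.camera())
frame2 = np.clip(0.6 * frame1 + 0.2, 0, 1)   # same scene, different illumination

h1 = hog(frame1, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(1, 1))
h2 = hog(frame2, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(1, 1))

# The brightness-constancy residual is large, while the HOG residual is
# nearly zero: the descriptor is much less sensitive to the illumination shift.
print("mean intensity difference:", np.abs(frame1 - frame2).mean())
print("mean HOG difference:      ", np.abs(h1 - h2).mean())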

Proceedings ArticleDOI
25 Mar 1985
TL;DR: A new approach to model-based object recognition employing multiple views is described, with emphasis on determining camera viewpoints for successive views that look for distinguishing features of objects. (A brief pinhole-camera sketch of the distance relation follows this entry.)
Abstract: A new approach to model-based object recognition employing multiple views is described. Emphasis is placed on determining camera viewpoints for successive views that look for distinguishing features of objects. The distance and direction of the camera are determined separately: the distance is determined by the size of the object and the feature, while the direction is determined by the shape of the feature and the presence of occluding objects.

34 citations
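This 1985 paper gives no implementation details on this page; purely as an illustration of the statement that viewing distance is determined by object size, the sketch below uses the standard pinhole-camera relation distance = focal_length × real_size / image_size. All numbers are hypothetical.

# Pinhole-camera sketch: choosing a viewing distance from desired apparent size.
def camera_distance(real_height_m, image_height_px, focal_length_px):
    """distance = f * H / h for a pinhole camera (hypothetical values only)."""
    return focal_length_px * real_height_m / image_height_px

# An object 0.30 m tall that should appear 120 px tall with an 800 px focal
# length would be imaged from roughly 2.0 m away.
print(camera_distance(real_height_m=0.30, image_height_px=120, focal_length_px=800))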

Proceedings ArticleDOI
24 Aug 2014
TL;DR: Experiments on two public datasets, the ICDAR 2003 Robust Reading character dataset and the Street View Text dataset, show that the proposed character recognition technique obtains superior performance compared with state-of-the-art techniques. (A rough sketch of the co-occurrence idea follows this entry.)
Abstract: Recognition of characters in natural images is a challenging task due to complex backgrounds, variations of text size, perspective distortion, etc. Traditional optical character recognition (OCR) engines cannot perform well on such unconstrained text images. A novel technique is proposed in this paper that makes use of the convolutional co-occurrence histogram of oriented gradients (ConvCoHOG), which is more robust and discriminative than both the histogram of oriented gradients (HOG) and the co-occurrence histogram of oriented gradients (CoHOG). In the proposed technique, a more informative feature is constructed by exhaustively extracting features from every possible image patch within character images. Experiments on two public datasets, the ICDAR 2003 Robust Reading character dataset and the Street View Text (SVT) dataset, show that the proposed character recognition technique obtains superior performance compared with state-of-the-art techniques.

34 citations
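Neither the paper's ConvCoHOG construction nor its convolutional feature extraction is reproduced here; the sketch below only shows the co-occurrence idea that CoHOG-style descriptors add to plain HOG: counting pairs of quantized gradient orientations at a fixed pixel offset. The single offset, 8 orientation bins, and sample image are arbitrary.

# Sketch of a co-occurrence histogram of oriented gradients for one offset;
# CoHOG-style descriptors aggregate many offsets over small image blocks.
import numpy as np
from skimage import data, img_as_float

def cooccurrence_orientations(img, n_bins=8, offset=(1, 1)):
    gy, gx = np.gradient(img)
    angles = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation
    bins = np.minimum((angles / np.pi * n_bins).astype(int), n_bins - 1)

    dy, dx = offset                                       # assumes a positive offset
    a = bins[:-dy, :-dx]                                  # orientation at pixel p
    b = bins[dy:, dx:]                                    # orientation at p + offset
    cooc = np.zeros((n_bins, n_bins), dtype=np.int64)
    np.add.at(cooc, (a.ravel(), b.ravel()), 1)            # count orientation pairs
    return cooc

image = img_as_float(data.camera())
print(cooccurrence_orientations(image).shape)             # (8, 8) co-occurrence matrix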


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Convolutional neural network: 74.7K papers, 2M citations, 87% related
Deep learning: 79.8K papers, 2.1M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    56
2022    181
2021    116
2020    189
2019    179
2018    240