
Histogram of oriented gradients

About: Histogram of oriented gradients is a research topic. Over its lifetime, 2,037 publications have been published within this topic, receiving 55,881 citations. The topic is also known as: HOG.
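For readers unfamiliar with the descriptor, the minimal sketch below computes HOG features with scikit-image. The parameter values (9 orientation bins, 8x8-pixel cells, 2x2-cell blocks, L2-Hys normalization) follow the common Dalal-Triggs configuration and are illustrative assumptions, not taken from any particular paper on this page.

```python
# Minimal HOG feature-extraction sketch (assumed Dalal-Triggs-style parameters).
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())  # any grayscale image works

features, hog_image = hog(
    image,
    orientations=9,            # number of gradient-orientation bins
    pixels_per_cell=(8, 8),    # cell size over which each histogram is built
    cells_per_block=(2, 2),    # blocks used for local contrast normalization
    block_norm="L2-Hys",
    visualize=True,            # also return an image visualizing the histograms
)
print(features.shape)          # flattened HOG descriptor vector
```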


Papers
Journal ArticleDOI
12 Jun 2015 - Sensors
TL;DR: In the experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain multi-modality multi-feature fusion.
Abstract: The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup: intensity, depth and flow, in far-infrared versus visible domains. The experiments show that the feature families, intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains are highly complementary, but their relative performance varies across modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.

17 citations
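As a rough illustration of the kind of multi-feature setup compared in this paper, the sketch below concatenates HOG and local binary pattern (LBP) features from a single grayscale pedestrian crop. It is not the RIFIR pipeline (which also covers ISS and LGP features, depth and flow modalities, and FIR images); the feature parameters are illustrative assumptions.

```python
# Hedged sketch: concatenating HOG and LBP features for one grayscale crop.
# Not the paper's pipeline; parameters are illustrative assumptions.
import numpy as np
from skimage.feature import hog, local_binary_pattern

def hog_lbp_descriptor(gray_crop):
    """Return a combined HOG + LBP descriptor for a grayscale image crop."""
    hog_feat = hog(gray_crop, orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2),
                   block_norm="L2-Hys")
    # Uniform LBP with 8 neighbours at radius 1, summarized as a histogram.
    lbp = local_binary_pattern(gray_crop, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_feat, lbp_hist])
```

A classifier such as a linear SVM could then be trained on these concatenated vectors.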

Proceedings ArticleDOI
01 Nov 2017
TL;DR: The method uses a word-grouping technique in which bounding-box localization selects the different words in the image and false-positive text blocks are eliminated by geometrical properties, filtering out complex backgrounds by combining three strategies.
Abstract: Text detection in natural scenes holds great importance as a research field and remains a challenging task because of variations in size, font, line orientation and illumination, weak characters, and complex backgrounds in the image. The contribution of our proposed method is to filter out complex backgrounds by combining three strategies. These are: enhancing edge candidate detection in the HSV color space, using the fractal dimension (FD) to transform the image intensities; then using MSER candidate detection to obtain different masks applied in the HSV color space as well as in grayscale. After that, we apply the Stroke Width Transform (SWT) and heuristic filtering. These strategies are followed so as to maximize the recovery of candidate text-pixel zones and to distinguish text boxes from the rest of the image. Non-text components are then filtered out by classifying the candidate characters using Support Vector Machines (SVM) on Convolutional Neural Networks (CNN) features and Histogram of Oriented Gradients (HOG) feature vectors. We use a word-grouping technique in which bounding-box localization selects the different words in the image, and false-positive text blocks are eliminated by geometrical properties. The evaluation of the proposed method demonstrates its effectiveness on complex foregrounds, through experimental results on three benchmarks: ICDAR2013, ICDAR2015 and MSRA-TD500.

17 citations
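One of the candidate-generation stages mentioned above, MSER region detection, can be sketched with OpenCV as shown below. This is only a hedged illustration of that single stage, not the authors' full HSV/fractal-dimension/SWT pipeline; the input file name is hypothetical and the grayscale input is an assumption.

```python
# Hedged sketch of the MSER candidate-detection stage only (OpenCV).
# "scene.jpg" is a hypothetical input path; grayscale input is an assumption.
import cv2

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)    # candidate text-region components

for x, y, w, h in bboxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_candidates.jpg", img)
```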

Patent
10 May 2017
TL;DR: In this paper, a target tracking method based on correlation filtering and color histogram statistics, together with an ADAS (Advanced Driving Assistance System), is proposed; the target is detected and positioned according to the final fused response value.
Abstract: The invention discloses a target tracking method based on correlation filtering and color histogram statistics, and an ADAS (Advanced Driving Assistance System). The target tracking method comprises the steps of: extracting HOG (Histogram of Oriented Gradients) features and color-histogram statistical information of a target region, and generating an initial tracker; extracting HOG features of the next image frame according to the initial tracker, and performing a convolution operation on the feature image with the current filter h to acquire a template response value f(x); extracting the color-histogram statistical information of the image frame, and calculating a histogram response value f(x) using the current color-histogram weight vector beta; and fusing the template response value and the histogram response value to acquire the final response value f(x) of the target, then detecting and positioning the target according to this final response value. By exploiting the complementarity of the two tracking algorithms, the invention ensures both the speed and the accuracy of tracking, greatly reduces drift of the tracked target, and has good application prospects in driver assistance systems.

17 citations
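The fusion step described in this patent abstract can be illustrated with a short sketch: a dense response map from the correlation filter (template) and one from the color-histogram model are merged with a scalar weight, and the target is placed at the peak of the fused map. The names f_tmpl and f_hist and the merge weight alpha are illustrative assumptions; the abstract does not specify them.

```python
# Hedged sketch of the response-fusion step only.
# f_tmpl, f_hist and alpha are illustrative names/values, not from the patent.
import numpy as np

def fuse_responses(f_tmpl: np.ndarray, f_hist: np.ndarray, alpha: float = 0.3):
    """Linearly merge the two per-pixel response maps and locate the target."""
    f_final = (1.0 - alpha) * f_tmpl + alpha * f_hist
    # The new target position is taken as the peak of the fused response map.
    peak = np.unravel_index(np.argmax(f_final), f_final.shape)
    return f_final, peak
```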

Proceedings ArticleDOI
18 Nov 2011
TL;DR: This paper designs a nighttime pedestrian detection system based on AdaBoost and support vector machine (SVM) classifiers with contour and histogram of oriented gradients (HOG) features to effectively recognize pedestrians from candidate regions.
Abstract: Pedestrian detection is important in the computer vision field, and at nighttime it is even more valuable. In this paper, we address the issue of detecting pedestrians in video streams from a moving camera at nighttime. Most nighttime human detection approaches use only a single feature extracted from images, and image features that are effective in daytime environments may suffer from texturelessness, high contrast and low-light problems at night. To deal with these issues, we first segment the foreground using the proposed Smart Region Detection approach to generate candidates. Then we design a nighttime pedestrian detection system based on AdaBoost and support vector machine (SVM) classifiers with contour and histogram of oriented gradients (HOG) features to effectively recognize pedestrians among those candidates. Combining different types of complementary features improves the detection performance. Results show that our pedestrian detection system is promising in the nighttime environment.

17 citations
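For context, the generic HOG + linear-SVM pedestrian detector that this kind of system builds on can be run with OpenCV's built-in people detector, as sketched below. This is only the standard daytime baseline, not the paper's nighttime-specific system (Smart Region Detection, AdaBoost, contour features); the file name and the winStride/scale values are illustrative assumptions.

```python
# Hedged sketch: the standard HOG + linear-SVM pedestrian detector in OpenCV.
# Generic baseline only; "frame.jpg" and detector parameters are assumptions.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detections.jpg", frame)
```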

Proceedings ArticleDOI
11 Apr 2013
TL;DR: This study starts from the grid of Histogram of Oriented Gradients (HOG) cells and integrates the Scale Invariant Feature Transform (SIFT) within it, computing SIFT descriptors instead of intensity gradients for these cells, and demonstrates better performance than other state-of-the-art object detection methods.
Abstract: In this study, we advocate the importance of robust local features that allow an object's form to be distinguished from other objects for detection purposes. We start from the grid of Histogram of Oriented Gradients (HOG) cells and integrate the Scale Invariant Feature Transform (SIFT) within it. In HOG features, an object's appearance is described by the distribution of local intensity gradients or edge directions over the different cells. In the proposed method we compute SIFT descriptors instead of intensity gradients for these cells. In this way, the proposed approach not only provides more significant information than intensity gradients alone, but also deals with the following challenges: (i) scale invariance; (ii) rotation invariance; (iii) change in illumination; and (iv) change in viewpoint. Through qualitative and quantitative experimental evaluation on the standard INRIA dataset, we compare the proposed method with other state-of-the-art object detection methods and demonstrate better performance over them.

17 citations
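The core idea, replacing each HOG cell's gradient histogram with a SIFT descriptor computed at that cell, can be approximated by computing dense SIFT on a regular grid, as in the hedged OpenCV sketch below. The 8-pixel grid step, the keypoint size, and the input file name are assumptions; this is not the authors' exact formulation.

```python
# Hedged sketch: dense SIFT descriptors on a regular grid of cells,
# approximating "SIFT instead of gradient histograms per HOG cell".
# The 8-pixel step, keypoint size, and "person.jpg" path are assumptions.
import cv2

gray = cv2.imread("person.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()

step = 8
keypoints = [cv2.KeyPoint(float(x), float(y), step)
             for y in range(step // 2, gray.shape[0], step)
             for x in range(step // 2, gray.shape[1], step)]

keypoints, descriptors = sift.compute(gray, keypoints)
print(descriptors.shape)   # one 128-D SIFT descriptor per grid cell
```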


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Convolutional neural network: 74.7K papers, 2M citations, 87% related
Deep learning: 79.8K papers, 2.1M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 56
2022: 181
2021: 116
2020: 189
2019: 179
2018: 240