Topic

Histogram of oriented gradients

About: Histogram of oriented gradients is a research topic. Over the lifetime, 2037 publications have been published within this topic receiving 55881 citations. The topic is also known as: HOG.
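For orientation, the following minimal Python sketch shows how a HOG descriptor is typically computed, here using scikit-image; the parameter values mirror the common Dalal-Triggs-style defaults and are illustrative, not tied to any particular paper below.

```python
# Minimal sketch: computing a HOG descriptor with scikit-image.
# Parameter values are common Dalal-Triggs-style defaults, chosen for illustration.
from skimage import data, color
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # any grayscale image works

features, hog_image = hog(
    image,
    orientations=9,           # number of gradient-orientation bins per cell
    pixels_per_cell=(8, 8),   # cell size over which each histogram is accumulated
    cells_per_block=(2, 2),   # cells grouped into one block for contrast normalization
    block_norm='L2-Hys',      # block normalization scheme
    visualize=True,           # also return an image of the dominant orientations
    feature_vector=True,      # flatten the descriptor into a 1-D vector
)
print(features.shape)         # length depends on image size and the cell/block layout
```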


Papers
Proceedings ArticleDOI
12 Nov 2012
TL;DR: A novel video forgery detection technique based on Histogram of Oriented Gradients feature matching and video compression properties is proposed to detect spatial and temporal copy-paste tampering in videos.
Abstract: In this paper, we propose a novel video forgery detection technique to detect spatial and temporal copy-paste tampering. Detecting such tampering in videos is challenging because the forged patch may vary drastically in size, compression rate and compression type (I, B or P), or undergo other changes such as scaling and filtering. In our proposed algorithm, copy-paste forgery detection is based on Histogram of Oriented Gradients (HOG) feature matching and video compression properties. The benefit of using HOG features is that they are robust against various signal processing manipulations. The experimental results show that the forgery detection is highly effective. We also compare our results against a popular copy-paste forgery detection algorithm. In addition, we analyze the experimental results for different forged patch sizes under varying degrees of modification such as compression, scaling and filtering.
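The paper's full pipeline also exploits compression properties; the sketch below only illustrates the core idea of HOG feature matching between two candidate patches. The function names and the distance threshold are illustrative assumptions, not taken from the paper.

```python
# Sketch of HOG-based patch matching (illustrative only; not the paper's exact algorithm).
import numpy as np
from skimage.feature import hog

def hog_descriptor(patch):
    """Compute a HOG descriptor for a grayscale patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L1-sqrt',
               feature_vector=True)

def patches_match(patch_a, patch_b, threshold=0.25):
    """Flag two equal-sized patches as a possible copy-paste pair when their
    HOG descriptors are close in Euclidean distance (threshold is illustrative)."""
    d_a, d_b = hog_descriptor(patch_a), hog_descriptor(patch_b)
    return np.linalg.norm(d_a - d_b) < threshold
```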

93 citations

PatentDOI
Jonathan Brookshire
TL;DR: Describes the development of a mobile robot that uses vision to follow a single, unmarked pedestrian; the robot can detect, track, and follow a pedestrian over several kilometers in outdoor environments, demonstrating a level of performance not previously shown on a small unmanned ground vehicle.
Abstract: A method for using a remote vehicle having a stereo vision camera to detect, track, and follow a person, the method comprising: detecting a person using a video stream from the stereo vision camera and histogram of oriented gradient descriptors; estimating a distance from the remote vehicle to the person using depth data from the stereo vision camera; tracking a path of the person and estimating a heading of the person; and navigating the remote vehicle to an appropriate location relative to the person.
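As a rough illustration of the detection step, OpenCV ships a pedestrian detector that pairs HOG descriptors with a pre-trained linear SVM. The depth lookup below assumes a hypothetical depth map aligned with the camera frame (e.g. from a stereo matcher); it is a sketch of the idea, not the patent's exact method.

```python
# Sketch: HOG-based pedestrian detection with OpenCV's built-in people detector,
# plus a crude range estimate from an aligned depth map (depth_map is assumed).
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr, depth_map):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes, weights = hog.detectMultiScale(gray, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    detections = []
    for (x, y, w, h) in boxes:
        # Median depth inside the box as a rough distance to the person.
        distance = float(np.median(depth_map[y:y + h, x:x + w]))
        detections.append(((x, y, w, h), distance))
    return detections
```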

93 citations

Proceedings ArticleDOI
23 Jun 2013
TL;DR: The Histograms of Oriented Gradients descriptor is combined with a Support Vector Machine classifier as the basic method; processing image data at twice the pixel frequency and normalizing blocks with the L1-sqrt norm results in efficient resource utilization.
Abstract: This paper focuses on real-time pedestrian detection on Field Programmable Gate Arrays (FPGAs) using the Histograms of Oriented Gradients (HOG) descriptor in combination with a Support Vector Machine (SVM) for classification as a basic method. We propose to process image data at twice the pixel frequency and to normalize blocks with the L1-sqrt norm, resulting in efficient resource utilization. This implementation allows for parallel computation of different scales. Combined with a time-multiplex approach we increase multiscale capabilities beyond resource limitations. We are able to process 64 high-resolution images (1920 × 1080 pixels) per second at 18 scales with a latency of less than 150 µs. 1.79 million HOG descriptors and their SVM classifications can be calculated per second and per scale, which outperforms current FPGA implementations by a factor of 4.
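The L1-sqrt block normalization the authors favor for hardware efficiency is a simple operation; a NumPy rendering of it (with an illustrative epsilon) looks like this:

```python
# L1-sqrt block normalization as used in HOG (Dalal & Triggs):
#   v  ->  sqrt( v / (||v||_1 + eps) )
import numpy as np

def l1_sqrt_normalize(block_hist, eps=1e-6):
    """Normalize one block's concatenated cell histograms with the L1-sqrt scheme."""
    return np.sqrt(block_hist / (np.abs(block_hist).sum() + eps))
```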

92 citations

Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper compares local feature descriptors and bags of visual words with different classifiers to deep convolutional neural networks (CNNs) on three plant datasets (AgrilPlant, LeafSnap, and Folio) and shows that the deep CNN methods outperform the hand-crafted features.
Abstract: The use of machine learning and computer vision methods for recognizing different plants from images has attracted lots of attention from the community. This paper aims at comparing local feature descriptors and bags of visual words with different classifiers to deep convolutional neural networks (CNNs) on three plant datasets: AgrilPlant, LeafSnap, and Folio. To achieve this, we study the use of both scratch and fine-tuned versions of the GoogleNet and the AlexNet architectures and compare them to a local feature descriptor with k-nearest neighbors and the bag of visual words with the histogram of oriented gradients combined with either support vector machines or multi-layer perceptrons. The results show that the deep CNN methods outperform the hand-crafted features. The CNN techniques can also learn well on a relatively small dataset, Folio.
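As a rough illustration of the hand-crafted side of such a comparison (not the authors' exact configuration, which also covers bags of visual words, k-NN, and MLPs), a HOG-plus-linear-SVM image classifier can be set up along these lines:

```python
# Sketch of a hand-crafted HOG + linear SVM baseline (illustrative parameters).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_features(images, size=(128, 128)):
    """Resize each grayscale image to a fixed size and extract its HOG descriptor."""
    feats = [hog(resize(img, size), orientations=9,
                 pixels_per_cell=(8, 8), cells_per_block=(2, 2),
                 block_norm='L2-Hys') for img in images]
    return np.array(feats)

# train_images, train_labels, and test_images are assumed to come from a dataset loader:
# clf = LinearSVC(C=1.0).fit(hog_features(train_images), train_labels)
# predictions = clf.predict(hog_features(test_images))
```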

91 citations

Proceedings ArticleDOI
21 Jun 2010
TL;DR: A set of Histogram of Oriented Gradients (HOG) classifiers is trained to recognize different orientations of vehicles detected in imagery; these orientation-specific classifiers perform well, achieving an 88% classification accuracy on a test database of 284 images.
Abstract: For an autonomous vehicle, detecting and tracking other vehicles is a critical task. Determining the orientation of a detected vehicle is necessary for assessing whether the vehicle is a potential hazard. If a detected vehicle is moving, the orientation can be inferred from its trajectory, but if the vehicle is stationary, the orientation must be determined directly. In this paper, we focus on vision-based algorithms for determining the orientation of vehicles in images. We train a set of Histogram of Oriented Gradients (HOG) classifiers to recognize different orientations of vehicles detected in imagery. We find that these orientation-specific classifiers perform well, achieving an 88% classification accuracy on a test database of 284 images. We also investigate how combinations of orientation-specific classifiers can be employed to distinguish subsets of orientations, such as driver's side versus passenger's side views. Finally, we compare a vehicle detector formed from orientation-specific classifiers to an orientation-independent classifier and find that, counter-intuitively, the orientation-independent classifier outperforms the set of orientation-specific classifiers.
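One common way to realize such orientation-specific classifiers is to fit one binary classifier per orientation bin on HOG features and pick the highest-scoring bin at test time. The sketch below is a generic illustration under that assumption; the bin layout and classifier choice are not taken from the paper.

```python
# Sketch: one-vs-rest orientation-specific classifiers on HOG features
# (generic illustration; bin count and classifier choice are assumptions).
import numpy as np
from sklearn.svm import LinearSVC

def train_orientation_classifiers(hog_vectors, orientation_labels, n_bins=8):
    """Train one linear SVM per orientation bin (labels are bin indices 0..n_bins-1)."""
    return [LinearSVC().fit(hog_vectors, (orientation_labels == b).astype(int))
            for b in range(n_bins)]

def predict_orientation(classifiers, hog_vector):
    """Pick the orientation bin whose classifier gives the highest decision score."""
    scores = [clf.decision_function(hog_vector.reshape(1, -1))[0] for clf in classifiers]
    return int(np.argmax(scores))
```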

90 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (89% related)
Convolutional neural network: 74.7K papers, 2M citations (87% related)
Deep learning: 79.8K papers, 2.1M citations (87% related)
Image segmentation: 79.6K papers, 1.8M citations (87% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 56
2022: 181
2021: 116
2020: 189
2019: 179
2018: 240