Topic

Histogram of oriented gradients

About: Histogram of oriented gradients is a research topic. Over its lifetime, 2,037 publications have been published within this topic, receiving 55,881 citations. The topic is also known as: HOG.
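As a point of reference for the papers listed below, here is a minimal sketch of computing a HOG descriptor with scikit-image. The parameter values (9 orientation bins, 8x8-pixel cells, 2x2-cell blocks, L2-Hys normalization) are the common Dalal-Triggs-style defaults and are purely illustrative, not tied to any specific paper on this page.

```python
# Minimal sketch: computing a HOG descriptor with scikit-image.
# Parameter values follow common Dalal-Triggs-style defaults and are illustrative only.
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 64)  # placeholder grayscale image (e.g. a 128x64 detection window)

features, hog_image = hog(
    image,
    orientations=9,            # number of gradient orientation bins
    pixels_per_cell=(8, 8),    # cell size over which each histogram is accumulated
    cells_per_block=(2, 2),    # blocks of cells used for local contrast normalization
    block_norm="L2-Hys",       # normalization scheme
    visualize=True,            # also return an image visualizing the histograms
)
print(features.shape)          # 1-D descriptor vector for the whole window
```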


Papers
Proceedings ArticleDOI
01 Sep 2012
TL;DR: A HOG-CoG detector is proposed which, in the validation experiment, achieves a 38% log-average miss rate in full-image evaluation and a 90% detection rate at 10^-4 false positives per window on the INRIA Person Dataset.
Abstract: Pedestrian detection is an important part of intelligent transportation systems. In the literature, the Histogram of Oriented Gradients (HOG) detector is known for its good pedestrian-detection performance, but false detections still appear in cases with flat areas or cluttered backgrounds. To address this, we develop a new feature based on pairwise comparison computations, called Comparison of Granules (CoG). The idea of CoG is to encode the textural information of a local area by describing how pixel intensities are distributed within a region. The CoG feature is shown to be small and efficient relative to HOG. By incorporating this new feature, we propose a HOG-CoG detector that, in our validation experiment, achieves a 38% log-average miss rate in full-image evaluation and a 90% detection rate at 10^-4 false positives per window on the INRIA Person Dataset. Another contribution of this work is a training scheme for training a detector on a very large database; this scheme reduces the number of hard samples during bootstrap training.
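The abstract does not spell out how granules are formed or paired, so the following is only a hypothetical sketch of a CoG-style feature: the granule size (4x4 pixels), the use of all unordered granule pairs, the signed comparison, and the helper name `cog_feature` are assumptions made for illustration, not the paper's actual definition.

```python
# Hypothetical sketch of a "Comparison of Granules"-style feature. The granule
# size, pairing scheme, and signed comparison below are illustrative assumptions.
import numpy as np
from itertools import combinations

def cog_feature(window, granule_size=4):
    h, w = window.shape
    gh, gw = h // granule_size, w // granule_size
    # Mean intensity of each granule: a coarse, compact representation of the window.
    granules = window[:gh * granule_size, :gw * granule_size] \
        .reshape(gh, granule_size, gw, granule_size).mean(axis=(1, 3)).ravel()
    # Pairwise comparisons of granule intensities encode local texture contrast.
    return np.array([np.sign(granules[i] - granules[j])
                     for i, j in combinations(range(granules.size), 2)])

feature = cog_feature(np.random.rand(16, 16))
print(feature.shape)  # one comparison per unordered granule pair
```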

7 citations

Proceedings ArticleDOI
TL;DR: The approach uses histogram of oriented gradients features weighted by local intensities to first identify an initial region of interest, depicting the left and right ventricles, that exhibits the greatest extent of cardiac motion; this region is then correlated, via feature-vector correlation techniques, with the homologous region of the training dataset that best matches the test image.
Abstract: Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis, or to incomplete visualization or identification of the surgical targets when employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short-axis left ventricle slice images, with a small training set generated from less than 10% of the population. The approach is based on histogram of oriented gradients features weighted by local intensities, used first to identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. This region is correlated, using feature-vector correlation techniques, with the homologous region of the training dataset that best matches the test image. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground-truth segmentations associated with the training dataset deemed closest to the test image. The proposed approach was tested on a population of 100 patient datasets and was validated against the ground-truth regions of interest of the test images, manually annotated by experts. The tool successfully identified a mask around the LV and RV, and furthermore the minimal region of interest around the LV that fully enclosed the left ventricle, for all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 ± 0.09.
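A rough, assumption-laden sketch of the general idea described above: an intensity-weighted HOG descriptor per candidate region, matched to a small training set by feature-vector correlation. The fixed 64x64 resizing, the mean-intensity weighting, and the helper names `weighted_hog` and `best_match` are illustrative choices, not the paper's actual pipeline.

```python
# Illustrative sketch only: intensity-weighted HOG descriptors matched against a
# small training set by feature-vector correlation. Details are assumptions.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def weighted_hog(roi, cell=(8, 8)):
    roi = resize(roi, (64, 64), anti_aliasing=True)     # fixed size so vectors are comparable
    feats = hog(roi, orientations=9, pixels_per_cell=cell,
                cells_per_block=(2, 2), feature_vector=True)
    # Assumed weighting: scale the descriptor by the ROI's mean intensity so
    # brighter (blood-pool) regions contribute more.
    return feats * roi.mean()

def best_match(test_roi, train_rois):
    """Return the index of the training ROI whose descriptor correlates best."""
    t = weighted_hog(test_roi)
    scores = [np.corrcoef(t, weighted_hog(r))[0, 1] for r in train_rois]
    return int(np.argmax(scores))
```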

7 citations

Proceedings ArticleDOI
20 Aug 2015
TL;DR: A novel approach for recognition of handwritten digits for South Indian languages using artificial neural networks (ANN) and Histogram of Oriented Gradients (HOG) features is presented.
Abstract: In this paper, a novel approach to the recognition of handwritten digits in South Indian languages using artificial neural networks (ANN) and Histogram of Oriented Gradients (HOG) features is presented. The document images containing the handwritten digits are optically scanned and segmented into individual images of isolated digits. HOG features are then extracted from these images and fed to the ANN for recognition. The system recognises the digits with an overall accuracy of 83.4%.
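A minimal sketch of a HOG-plus-ANN digit recognition pipeline along the lines described above, using scikit-image for HOG and scikit-learn's MLPClassifier as the neural network. The image size, HOG parameters, and network size are assumptions; the paper's exact configuration is not reproduced here.

```python
# Sketch of a HOG + ANN digit recognition pipeline; parameters are illustrative.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def hog_features(digit_images):
    """digit_images: iterable of equally sized grayscale digit images."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(4, 4),
                         cells_per_block=(2, 2)) for img in digit_images])

# Placeholder data standing in for segmented, size-normalized digit images.
train_imgs = np.random.rand(100, 32, 32)
train_labels = np.random.randint(0, 10, size=100)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(hog_features(train_imgs), train_labels)
predicted = clf.predict(hog_features(np.random.rand(5, 32, 32)))
```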

7 citations

Book ChapterDOI
01 Jan 2020
TL;DR: A comparative study of different feature descriptors applied to HAR on video datasets is presented, along with an efficient sparse filtering method that reduces the number of features by eliminating redundant features and assigning weights to those that remain.
Abstract: Human action recognition (HAR) has been a well-studied research topic in the field of computer vision for the past two decades. The objective of HAR is to detect and recognize actions performed by one or more persons based on a series of observations. In this paper, a comparative study of different feature descriptors applied to HAR on video datasets is presented. In particular, we estimate four standard feature descriptors, namely histogram of oriented gradients (HOG), gray-level co-occurrence matrix (GLCM), speeded-up robust features (SURF), and GIST descriptors, from RGB videos, after performing background subtraction and creating a minimum bounding box surrounding the human subject. To further speed up the overall process, we apply an efficient sparse filtering method, which reduces the number of features by eliminating redundant features and assigning weights to those that remain. Finally, the performance of these feature descriptors on three standard benchmark video datasets, namely KTH, HMDB51, and UCF11, is analyzed.
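The per-frame preprocessing described above (background subtraction, a minimum bounding box around the person, then a descriptor for that box) can be sketched as follows for the HOG case, using OpenCV. The MOG2 background subtractor, the 64x128 resize to OpenCV's default HOG window, and the `frame_hog` helper are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch: background subtraction, minimum bounding box, then a HOG descriptor.
# Assumes 8-bit grayscale frames; subtractor and box handling are illustrative.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()

def frame_hog(frame_gray):
    mask = subtractor.apply(frame_gray)                      # foreground mask
    ys, xs = np.nonzero(mask > 0)
    if xs.size == 0:
        return None                                          # no moving object in this frame
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()  # minimum bounding box
    person = cv2.resize(frame_gray[y0:y1 + 1, x0:x1 + 1], (64, 128))
    return cv2.HOGDescriptor().compute(person).ravel()       # default 64x128 HOG window
```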

7 citations

Book ChapterDOI
01 Nov 2014
TL;DR: The performance of the proposed motion boundary trajectory approach is compared with other state-of-the-art approaches, e.g., the trajectory-based approach, on a number of human action benchmark datasets, and the proposed approach is found to give improved recognition results.
Abstract: In this paper, we propose a novel approach to extracting local descriptors from a video, based on two ideas: first, motion boundaries between objects are used to extract motion boundary trajectories from the video; second, these trajectories, together with other local descriptors computed in their neighbourhood (histogram of oriented gradients, histogram of optical flow, and motion boundary histogram), serve as local descriptors for video representation. The motion boundary approach captures more information between moving objects, some of which might be caused by camera movement. We compare the performance of the proposed motion boundary trajectory approach with other state-of-the-art approaches, e.g., the trajectory-based approach, on a number of human action benchmark datasets (YouTube, UCF Sports, Olympic Sports, HMDB51, Hollywood2 and UCF50), and find that the proposed approach gives improved recognition results.
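A hedged sketch of the core motion boundary idea: compute dense optical flow between consecutive frames and take the spatial gradient magnitude of the flow field as a motion boundary map. The Farneback flow parameters and Sobel derivatives below are illustrative choices; the paper's actual trajectory extraction and descriptor pooling are not shown.

```python
# Sketch: motion boundaries as the spatial gradient magnitude of dense optical flow.
# Farneback parameters and Sobel derivatives are illustrative choices only.
import cv2
import numpy as np

def motion_boundary(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Spatial derivatives of each flow component; large values mark boundaries
    # between regions moving differently, while uniform camera motion cancels out.
    boundary = np.zeros(prev_gray.shape, dtype=np.float64)
    for c in range(2):
        gx = cv2.Sobel(flow[..., c], cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(flow[..., c], cv2.CV_64F, 0, 1)
        boundary += gx ** 2 + gy ** 2
    return np.sqrt(boundary)
```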

7 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Convolutional neural network: 74.7K papers, 2M citations, 87% related
Deep learning: 79.8K papers, 2.1M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 56
2022: 181
2021: 116
2020: 189
2019: 179
2018: 240