Topic

Standard test image

About: Standard test image is a research topic. Over its lifetime, 5,217 publications have been published within this topic, receiving 98,486 citations.


Papers
Proceedings ArticleDOI
01 Dec 2009
TL;DR: This paper proposes a novel appearance descriptor for 3D human pose estimation from monocular images using a learning-based technique and compares its approach with other methods using a synchronized video and 3D motion dataset.
Abstract: In this paper we propose a novel appearance descriptor for 3D human pose estimation from monocular images using a learning-based technique. Our image-descriptor is based on intermediate local appearance descriptors that we design to encapsulate local appearance context and to be resilient to noise. We encode the image by the histogram of such local appearance context descriptors computed across the image to obtain the final image-descriptor for pose estimation. We name the final image-descriptor the Histogram of Local Appearance Context (HLAC). We then use Relevance Vector Machine (RVM) regression to learn the direct mapping between the proposed HLAC image-descriptor space and the 3D pose space. Given a test image, we first compute the HLAC descriptor and then input it to the trained regressor to obtain the final output pose in real time. We compared our approach with other methods using a synchronized video and 3D motion dataset, and we compared the proposed HLAC image-descriptor with the Histogram of Shape Context and Histogram of SIFT-like descriptors. The evaluation results show that the HLAC descriptor outperforms both of them in the context of 3D human pose estimation.
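The pipeline the abstract describes (local descriptors, a histogram encoding, then regression to pose) can be sketched as below. This is a minimal illustration, not the authors' code: a KMeans codebook over raw image patches stands in for the HLAC local descriptor, and scikit-learn's KernelRidge stands in for RVM regression (scikit-learn ships no RVM); `train_imgs` and `train_poses` are hypothetical placeholders.

```python
# Minimal sketch of a histogram-of-local-descriptors + regression pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge

def local_descriptors(img, patch=8):
    """Flattened local patches as crude appearance-context descriptors."""
    h, w = img.shape
    return np.array([
        img[y:y + patch, x:x + patch].ravel()
        for y in range(0, h - patch + 1, patch)
        for x in range(0, w - patch + 1, patch)
    ])

def histogram_descriptor(img, codebook):
    """Quantize local descriptors against the codebook, then histogram."""
    words = codebook.predict(local_descriptors(img))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def fit_pose_regressor(train_imgs, train_poses, k=64):
    """train_imgs: 2-D grayscale arrays; train_poses: (N, D) joint targets."""
    all_desc = np.vstack([local_descriptors(im) for im in train_imgs])
    codebook = KMeans(n_clusters=k, n_init=4).fit(all_desc)
    X = np.array([histogram_descriptor(im, codebook) for im in train_imgs])
    return codebook, KernelRidge(kernel="rbf").fit(X, train_poses)
```

At test time, the same `histogram_descriptor` is computed for the input image and fed to the fitted regressor, mirroring the real-time inference step the abstract describes.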

20 citations

Proceedings ArticleDOI
01 Sep 2019
TL;DR: In this paper, weight gradients from backpropagation were used to characterize the representation space learned by deep learning algorithms for perceptual image quality assessment and out-of-distribution classification.
Abstract: In this paper, we utilize weight gradients from backpropagation to characterize the representation space learned by deep learning algorithms. We demonstrate the utility of such gradients in applications including perceptual image quality assessment and out-of-distribution classification. The applications are chosen to validate the effectiveness of gradients as features when the test image distribution is distorted relative to the training image distribution. In both applications, the proposed gradient-based features outperform activation features. In image quality assessment, the proposed approach is compared with other state-of-the-art approaches and is generally the top-performing method on the TID2013 and MULTI-LIVE databases in terms of accuracy, consistency, linearity, and monotonic behavior. Finally, we analyze the effect of regularization on gradients using the CURE-TSR dataset for out-of-distribution classification.
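A minimal sketch of the core idea follows, assuming PyTorch, a trained classifier `model`, and cross-entropy as the backpropagated objective (the paper's exact loss and feature post-processing are not reproduced here):

```python
import torch
import torch.nn.functional as F

def gradient_features(model, image, label=None):
    """Flattened weight gradients from one backward pass as a feature vector.

    image: (C, H, W) tensor; label: optional scalar class-index tensor.
    """
    model.zero_grad()
    logits = model(image.unsqueeze(0))
    # At test time the true label is unknown; backpropagating against the
    # network's own prediction is one plausible choice (an assumption here).
    target = logits.argmax(dim=1) if label is None else label.unsqueeze(0)
    F.cross_entropy(logits, target).backward()
    return torch.cat([p.grad.detach().flatten()
                      for p in model.parameters() if p.grad is not None])
```

The resulting vector can then be fed to a downstream scorer or classifier, in place of the activation features it is compared against in the abstract.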

20 citations

Journal ArticleDOI
TL;DR: It is shown that the proposed method for blind estimation of noise variance in a single highly textured image achieves approximately half the estimation root-mean-square error of other methods.
Abstract: In the paper, a new method for blind estimation of noise variance in a single highly textured image is proposed. An input image is divided into 8x8 blocks and a discrete cosine transform (DCT) is performed for each block. A subset of the 64 DCT coefficients with the lowest energy, calculated across all blocks, is selected for further analysis. For these DCT coefficients, a robust estimate of noise variance is calculated. Based on the obtained estimate, blocks having very large values of local variance, calculated only over the selected DCT coefficients, are excluded from further analysis. These two steps (estimation of noise variance and exclusion of blocks) are iterated three times. To verify the proposed method, a new noise-free test image database, TAMPERE17, consisting of many highly textured images was designed. It is shown for this database, and for noise variance values from the set {25, 49, 100, 225}, that the proposed method provides approximately half the estimation root-mean-square error of other methods.
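The abstract is concrete enough to sketch. The following hedged reconstruction assumes NumPy/SciPy: the 8x8 blocks, low-energy coefficient selection, and three iterations follow the abstract, while the MAD-based robust estimator and the 3x exclusion threshold are assumptions, not the paper's exact rules.

```python
import numpy as np
from scipy.fft import dctn

def estimate_noise_variance(img, block=8, n_coeffs=16, iters=3):
    h, w = img.shape
    # Per-block 2-D DCT, flattened to (n_blocks, 64).
    coeffs = np.array([
        dctn(img[y:y + block, x:x + block], norm="ortho").ravel()
        for y in range(0, h - block + 1, block)
        for x in range(0, w - block + 1, block)
    ])
    # Select the DCT coefficients with the lowest energy across all blocks.
    energy = (coeffs ** 2).mean(axis=0)
    low = np.argsort(energy)[:n_coeffs]
    kept = np.ones(len(coeffs), dtype=bool)
    var = 0.0
    for _ in range(iters):
        sel = coeffs[kept][:, low]
        if sel.size == 0:
            break
        # Robust scale estimate via median absolute deviation.
        sigma = np.median(np.abs(sel)) / 0.6745
        var = sigma ** 2
        # Exclude blocks whose local variance over the selected coefficients
        # is far above the current estimate (3x threshold is an assumption).
        local_var = (coeffs[:, low] ** 2).mean(axis=1)
        kept = local_var < 3.0 * var
    return var
```

Because the orthonormal DCT preserves variance, the noise variance estimated in the coefficient domain applies directly to the image domain.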

20 citations

Journal ArticleDOI
TL;DR: A new layer for CNNs, called a ‘push–pull’ layer, increases their robustness to several types of corruption of the input images; it computes its response as the combination of two half-wave rectified convolutions with kernels of different size and opposite polarity.
Abstract: Convolutional neural networks (CNNs) lack robustness to test image corruptions that are not seen during training. In this paper, we propose a new layer for CNNs that increases their robustness to several types of corruptions of the input images. We call it a ‘push–pull’ layer and compute its response as the combination of two half-wave rectified convolutions, with kernels of different size and opposite polarity. Its implementation is based on a biologically motivated model of certain neurons in the visual system that exhibit response suppression, known as push–pull inhibition. We validate our method by replacing the first convolutional layer of the LeNet, ResNet and DenseNet architectures with our push–pull layer. We train the networks on original training images from the MNIST and CIFAR data sets and test them on images with several corruptions, of different types and severities, that are unseen by the training process. We experiment with various configurations of the ResNet and DenseNet models on a benchmark test set with typical image corruptions constructed on the CIFAR test images. We demonstrate that our push–pull layer contributes to a considerable improvement in robustness of classification of corrupted images, while maintaining state-of-the-art performance on the original image classification task. We released the code and trained models at http://github.com/nicstrisc/Push-Pull-CNN-layer.
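A minimal PyTorch sketch of such a layer follows. It is an illustration under assumptions, not the released implementation (see the URL above for the authors' code): tying the pull kernel to a negated, upsampled copy of the push kernel and the fixed inhibition weight `alpha` are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PushPull2d(nn.Module):
    """Push-pull sketch: half-wave rectified excitatory (push) convolution
    minus a half-wave rectified inhibitory (pull) convolution whose kernel
    is larger and of opposite polarity."""

    def __init__(self, in_ch, out_ch, kernel=3, alpha=1.0):
        super().__init__()
        self.push = nn.Conv2d(in_ch, out_ch, kernel,
                              padding=kernel // 2, bias=False)
        self.alpha = alpha

    def forward(self, x):
        push = F.relu(self.push(x))
        # Pull kernel: negated push kernel upsampled to roughly twice the
        # size, kept odd so the output resolution matches the push branch.
        k = 2 * self.push.kernel_size[0] + 1
        pull_w = -F.interpolate(self.push.weight, size=(k, k),
                                mode="bilinear", align_corners=False)
        pull = F.relu(F.conv2d(x, pull_w, padding=k // 2))
        return push - self.alpha * pull
```

Replacing the first convolution of a LeNet-style network with `PushPull2d(1, 6, kernel=5)` mirrors the substitution the abstract describes.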

20 citations

Journal ArticleDOI
15 Nov 2016-PLOS ONE
TL;DR: A feature-learning-based random walk method for liver segmentation using CT images is proposed: four texture features are extracted and classified to determine the classification probability corresponding to the test images.
Abstract: Liver segmentation is a significant processing technique for computer-assisted diagnosis. It has attracted considerable attention and achieved effective results. However, liver segmentation using computed tomography (CT) images remains a challenging task because of the low contrast between the liver and adjacent organs. This paper proposes a feature-learning-based random walk method for liver segmentation using CT images. Four texture features are extracted and then classified to determine the classification probability corresponding to the test images. Seed points on the original test image are automatically selected and then used in the random walk (RW) algorithm, achieving results comparable to those of previous segmentation methods.
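A hedged sketch of this pipeline, assuming scikit-image and a fitted scikit-learn-style binary classifier `clf` (hypothetical): the three per-pixel features and the seed-selection thresholds below are stand-ins for the paper's four texture features and its seed rule.

```python
import numpy as np
from skimage.filters import gaussian, sobel
from skimage.segmentation import random_walker

def segment_liver(ct_slice, clf):
    """ct_slice: 2-D float array; clf exposes predict_proba, class 1 = liver."""
    # Simple per-pixel features: raw intensity, smoothed intensity,
    # gradient magnitude.
    feats = np.stack([ct_slice,
                      gaussian(ct_slice, sigma=2),
                      sobel(ct_slice)], axis=-1)
    prob = clf.predict_proba(feats.reshape(-1, feats.shape[-1]))[:, 1]
    prob = prob.reshape(ct_slice.shape)
    # Automatic seeds from confident classifier outputs.
    seeds = np.zeros(ct_slice.shape, dtype=int)
    seeds[prob > 0.9] = 1   # liver seeds
    seeds[prob < 0.1] = 2   # background seeds
    return random_walker(ct_slice, seeds, beta=130) == 1
```

The random walker then propagates the automatically selected seeds across low-contrast boundaries, which is the role the abstract assigns to the RW step.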

20 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 91% related
Image segmentation: 79.6K papers, 1.8M citations, 91% related
Image processing: 229.9K papers, 3.5M citations, 90% related
Convolutional neural network: 74.7K papers, 2M citations, 90% related
Support vector machine: 73.6K papers, 1.7M citations, 90% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1
2022    8
2021    130
2020    232
2019    321
2018    293