Topic
Standard test image
About: Standard test image is a research topic. Over its lifetime, 5217 publications have appeared within this topic, receiving 98486 citations.
Papers published on a yearly basis
Papers
TL;DR: A novel method for the blind detection of MF in digital images is presented, and two new feature sets are introduced that allow us to distinguish a median-filtered image from an untouched or average-filtered one.
Abstract: Recently, the median filtering (MF) detector as a forensic tool for the recovery of images' processing history has attracted wide interest. This paper presents a novel method for the blind detection of MF in digital images. Following some strongly indicative analyses in the difference domain of images, we introduce two new feature sets that allow us to distinguish a median-filtered image from an untouched or average-filtered one. The effectiveness of the proposed features is verified with evidence from exhaustive experiments on a large composite image database. Compared with prior art, the proposed method achieves significant performance improvement in the case of low resolution and strong JPEG post-compression. In addition, it is demonstrated that our method is more robust against additive noise than other existing MF detectors. With the analyses and extensive experimental results presented in this paper, we hope that the proposed method will add a new tool to the arsenal of forensic analysts.
126 citations
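The difference-domain intuition behind this detector can be sketched with a single toy statistic — the fraction of zero first-order differences, which tends to rise after median filtering because the filter creates locally flat regions. This is a hypothetical stand-in for illustration, not the paper's actual feature sets:

```python
import numpy as np
from scipy.ndimage import median_filter

def first_difference_zero_fraction(img):
    """Fraction of zero first-order horizontal differences.

    Median filtering flattens local regions, so this fraction tends to
    be much higher in a median-filtered image than in an untouched one.
    A toy single-feature stand-in for the paper's richer feature sets.
    """
    d = np.diff(img.astype(np.int64), axis=1)
    return float(np.mean(d == 0))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64))   # synthetic "untouched" image
filtered = median_filter(original, size=3)       # 3x3 median-filtered version

f_orig = first_difference_zero_fraction(original)
f_med = first_difference_zero_fraction(filtered)
print(f_orig, f_med)  # the filtered image shows far more zero differences
```

A real detector would feed such difference-domain statistics into a trained classifier rather than thresholding a single number.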
08 Sep 1997
TL;DR: In this paper, a sensor receives a print image from an authorized person (21) to form a template, and from a candidate (11) to form test data; the test data are bandpassed and normalized and expressed as local sinusoids for comparison.
Abstract: A sensor receives a print image from an authorized person (21) to form a template, and from a candidate (11) to form test data. Noise variance (12) is estimated from the test data as a function of position in the image, and used to weight the importance of comparison with the template at each position. Test data are multilevel, and are bandpassed and normalized (13) and expressed as local sinusoids for comparison. A ridge spacing and direction map (28) of the template is stored as vector wavenumber fields, which are later used to refine comparison. Global dilation (34) and also differential distortions (45) of the test image are estimated, and taken into account in the comparison. Comparison yields a test statistic (52) that is the ratio, or log of the ratio, of the likelihoods of obtaining the test image assuming that it respectively was, and was not, formed by an authorized user. The test statistic is compared with a threshold value, preselected for a desired level of certainty, to make the verification decision.
125 citations
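The final decision rule described above — comparing a (log-)likelihood ratio against a preselected threshold — can be sketched as follows. The likelihood values here are hypothetical placeholders for what the patent derives from the weighted template comparison:

```python
import math

def verify(likelihood_authorized, likelihood_impostor, log_threshold):
    """Accept iff the log-likelihood ratio exceeds a preselected threshold.

    likelihood_authorized / likelihood_impostor are the (hypothetical)
    probabilities of observing the test image assuming it was / was not
    formed by an authorized user; the threshold encodes the desired
    level of certainty.
    """
    llr = math.log(likelihood_authorized / likelihood_impostor)
    return llr > log_threshold

print(verify(0.9, 0.1, 1.0))  # llr = ln(9) ~ 2.20 > 1.0 -> accept
print(verify(0.2, 0.4, 0.0))  # llr = ln(0.5) ~ -0.69 < 0.0 -> reject
```

Raising the threshold trades false accepts for false rejects, which is why it is preselected per application.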
20 Jun 2009
TL;DR: It is shown that substantially low dimensional versions of the training features, such as ones extracted from critically downsampled training images, or low dimensional random projections of original feature images, still have sufficient information for good classification.
Abstract: We propose a novel technique based on compressive sensing for expression invariant face recognition. We view the different images of the same subject as an ensemble of intercorrelated signals and assume that changes due to variation in expressions are sparse with respect to the whole image. We exploit this sparsity using distributed compressive sensing theory, which enables us to grossly represent the training images of a given subject by only two feature images: one that captures the holistic (common) features of the face, and the other that captures the different expressions in all training samples. We show that a new test image of a subject can be fairly well approximated using only the two feature images from the same subject. Hence we can drastically reduce the storage space and operational dimensionality by keeping only these two feature images or their random measurements. Based on this, we design an efficient expression invariant classifier. Furthermore, we show that substantially low dimensional versions of the training features, such as ones extracted from critically downsampled training images, or low dimensional random projection of original feature images, still have sufficient information for good classification. Extensive experiments with publically available databases show that, on average, our approach performs better than the state of the art despite using only such super compact feature representation.
125 citations
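A minimal sketch of the two-feature-image idea, assuming a simple mean/residual split in place of the paper's distributed compressive sensing machinery (all data below is synthetic and hypothetical):

```python
import numpy as np

def two_feature_images(training_images):
    """Collapse a subject's training set into two feature images: a
    holistic (common) image and a variation image. This mean/residual
    split is a simplified stand-in for the paper's representation."""
    stack = np.stack(training_images).astype(float)
    common = stack.mean(axis=0)                       # holistic features
    variation = np.abs(stack - common).mean(axis=0)   # expression changes
    return common, variation

def classify(test_image, subjects):
    """Assign the test image to the subject whose common feature image
    approximates it best (smallest L2 residual)."""
    return min(subjects,
               key=lambda s: np.linalg.norm(test_image - subjects[s][0]))

# Hypothetical data: two subjects, three noisy "expressions" each.
rng = np.random.default_rng(1)
base = {s: rng.normal(0, 1, (8, 8)) for s in ("A", "B")}
subjects = {s: two_feature_images([b + rng.normal(0, 0.1, (8, 8))
                                   for _ in range(3)])
            for s, b in base.items()}
test = base["A"] + rng.normal(0, 0.1, (8, 8))
print(classify(test, subjects))  # the residual to subject A is smallest
```

Storing only the two feature images (or random measurements of them) is what yields the drastic reduction in storage and dimensionality claimed in the abstract.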
TL;DR: This work has created a toolbox that can generate 3D digital phantoms of specific cellular components along with their corresponding images degraded by specific optics and electronics, and evaluated the plausibility of the synthetic images, measured by their similarity to real image data.
Abstract: Image cytometry still faces the problem of the quality of cell image analysis results. Degradations caused by cell preparation, optics and electronics considerably affect most 2D and 3D cell image data acquired using optical microscopy. That is why image processing algorithms applied to these data typically offer imprecise and unreliable results. We have created a toolbox that can generate 3D digital phantoms of specific cellular components along with their corresponding images degraded by specific optics and electronics. The user can then apply image analysis methods to such simulated image data. The analysis results can be compared with ground truth derived from input object digital phantoms. In this way, image analysis methods can be compared to each other and their quality can be computed. We have also evaluated the plausibility of the synthetic images, measured by their similarity to real image data.
125 citations
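Comparing an analysis result against the phantom-derived ground truth, as described above, can be illustrated with a standard overlap score such as the Dice coefficient; the toolbox's actual quality measures may differ, and the masks below are hypothetical:

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity between a binary segmentation and the ground-truth
    mask derived from the digital phantom (1.0 = perfect agreement)."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

truth = np.zeros((10, 10), dtype=int)
truth[2:8, 2:8] = 1     # phantom-derived ground-truth object (36 px)
seg = np.zeros((10, 10), dtype=int)
seg[3:8, 2:8] = 1       # hypothetical analysis result (misses one row)
score = round(dice(seg, truth), 3)
print(score)  # 2*30 / (30 + 36) = 0.909
```

Running several analysis methods against the same phantom and ranking their scores is exactly the method-comparison workflow the abstract describes.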
TL;DR: In this paper, a network is trained that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN) used to perform dense pixel-level prediction on a test image for the new semantic class.
Abstract: Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmentation on unseen classes in the PASCAL VOC 2012 dataset and is at least 3 times faster.
124 citations
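The meanIoU metric behind the reported 25% relative improvement can be computed as follows — a generic sketch of the metric on toy label maps, not the paper's evaluation code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes: for each class,
    |pred ∩ target| / |pred ∪ target|, averaged over classes that
    appear in either map."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both maps; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with two classes.
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
pred   = np.array([[0, 0, 1, 0],
                   [0, 0, 1, 1]])
score = mean_iou(pred, target, num_classes=2)
print(score)  # (4/5 + 3/4) / 2 = 0.775
```

A "25% relative improvement" means the new method's meanIoU is 1.25 times the best baseline's, not 25 absolute points higher.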