Topic

Standard test image

About: Standard test image is a research topic. Over its lifetime, 5,217 publications have been published within this topic, receiving 98,486 citations.


Papers
Proceedings ArticleDOI
11 Aug 2002
TL;DR: A text scanner which detects wide text strings in a sequence of scene images by using a multiple-CAMShift algorithm on a text probability image produced by a multi-layer perceptron.
Abstract: We propose a text scanner which detects wide text strings in a sequence of scene images. For scene text detection, we use a multiple-CAMShift algorithm on a text probability image produced by a multi-layer perceptron. To provide enhanced resolution of the extracted text images, we perform the text detection process after generating a mosaic image with a fast and robust image registration method.

30 citations
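The core of the multiple-CAMShift step above is mean-shift on the text probability image: a search window is repeatedly moved to the centroid of the probability mass it covers. The following is a minimal single-window sketch in plain NumPy; the synthetic probability image and window size are illustrative, not the paper's setup, and full CAMShift additionally adapts the window size between iterations.

```python
import numpy as np

def mean_shift_window(prob, window, n_iter=10):
    """Move a search window toward the centroid of probability mass.

    prob   : 2-D array of per-pixel text probabilities (e.g. MLP output).
    window : (row, col, height, width) initial search window.
    """
    r, c, h, w = window
    for _ in range(n_iter):
        patch = prob[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break  # no probability mass under the window; stop
        ph, pw = patch.shape
        rows, cols = np.mgrid[0:ph, 0:pw]
        # Offset of the patch centroid from the patch centre.
        dr = (rows * patch).sum() / total - (ph - 1) / 2
        dc = (cols * patch).sum() / total - (pw - 1) / 2
        r = int(np.clip(r + round(dr), 0, prob.shape[0] - h))
        c = int(np.clip(c + round(dc), 0, prob.shape[1] - w))
    return (r, c, h, w)

# Toy probability image with a "text" blob at rows 10-14, cols 20-29.
prob = np.zeros((40, 60))
prob[10:15, 20:30] = 1.0
print(mean_shift_window(prob, (6, 14, 10, 14)))  # window drifts onto the blob
```

Running several windows of this kind in parallel, one per candidate text region, is the "multiple-CAMShift" idea.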

Journal ArticleDOI
TL;DR: A new method within the framework of principal component analysis (PCA) that robustly recognizes faces in the presence of clutter by learning the distribution of background patterns for a given test image.
Abstract: We propose a new method within the framework of principal component analysis (PCA) to robustly recognize faces in the presence of clutter. The traditional eigenface recognition (EFR) method, which is based on PCA, works quite well when the input test patterns are faces. However, when confronted with the more general task of recognizing faces appearing against a background, the performance of the EFR method can be quite poor. It may miss faces completely or may wrongly associate many of the background image patterns to faces in the training set. In order to improve performance in the presence of background, we argue in favor of learning the distribution of background patterns and show how this can be done for a given test image. An eigenbackground space is constructed corresponding to the given test image and this space in conjunction with the eigenface space is used to impart robustness. A suitable classifier is derived to distinguish nonface patterns from faces. When tested on images depicting face recognition in real situations against cluttered background, the performance of the proposed method is quite good with fewer false alarms.

30 citations
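The eigenface ingredient this method builds on can be sketched with PCA over training patches: a test patch is scored by its reconstruction error in the face subspace ("distance from face space"). The data below is synthetic and the paper's per-test-image eigenbackground space is omitted; this only illustrates the subspace-projection idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 20 training "face" patches, 16x16, flattened to
# 256-vectors, clustered around +5 (real face images would go here).
faces = rng.normal(0.0, 1.0, (20, 256)) + 5.0
mean_face = faces.mean(axis=0)

# Eigenfaces: principal components of the mean-centred training faces.
U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:10]  # top-10 face subspace

def reconstruction_error(x):
    """Distance from a patch to the face subspace.
    Small error -> the patch is well explained by the eigenfaces."""
    d = x - mean_face
    coeffs = eigenfaces @ d
    return np.linalg.norm(d - eigenfaces.T @ coeffs)

face_like = rng.normal(0.0, 1.0, 256) + 5.0   # patch from the face cluster
clutter   = rng.normal(0.0, 1.0, 256) - 5.0   # background-like patch
print(reconstruction_error(face_like), reconstruction_error(clutter))
```

The paper's contribution is to pair this face subspace with an eigenbackground space built from the test image itself, so that background patches are claimed by the background model rather than falsely matched to training faces.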

Journal ArticleDOI
TL;DR: Findings indicate that statistical classification of veterinary images is feasible and has potential for grouping and classifying images or image features, especially when a large number of well-classified images are available for model training.
Abstract: As the number of images per study increases in the field of veterinary radiology, there is a growing need for computer-assisted diagnosis techniques. The purpose of this study was to evaluate two machine learning statistical models for automatically identifying image regions that contain the canine hip joint on ventrodorsal pelvis radiographs. A training set of images (120 of the hip and 80 from other regions) was used to train a linear partial least squares discriminant analysis (PLS-DA) model and a nonlinear artificial neural network (ANN) model to classify hip images. Performance of the models was assessed using a separate test image set (36 containing hips and 20 from other areas). The PLS-DA model achieved a classification error, sensitivity, and specificity of 6.7%, 100%, and 89%, respectively. The corresponding values for the ANN model were 8.9%, 86%, and 100%. Findings indicated that statistical classification of veterinary images is feasible and has the potential for grouping and classifying images or image features, especially when a large number of well-classified images are available for model training.

30 citations
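The error, sensitivity, and specificity figures above follow from a standard confusion-matrix computation. A small sketch with hypothetical counts for a 36-hip / 20-other test set like the one described (these counts are illustrative, not the study's actual confusion matrix):

```python
def binary_metrics(tp, fn, tn, fp):
    """Classification error, sensitivity, specificity from confusion counts."""
    total = tp + fn + tn + fp
    error = (fp + fn) / total
    sensitivity = tp / (tp + fn)   # recall on the positive (hip) class
    specificity = tn / (tn + fp)   # recall on the negative (non-hip) class
    return error, sensitivity, specificity

# Hypothetical outcome: all 36 hips found, 2 of 20 non-hip regions misclassified.
err, sens, spec = binary_metrics(tp=36, fn=0, tn=18, fp=2)
print(f"error={err:.1%} sensitivity={sens:.0%} specificity={spec:.0%}")
```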

Journal ArticleDOI
TL;DR: Extensive testing validated that this method discriminates between subtle differences in image rendering, is free of observer bias and criteria variability, and shows subjective testing to be more reliable than several widely available image quality metrics.
Abstract: Assessing image quality is an important aspect of developing new display technology. A particularly challenging assessment is determining whether a bitwise lossy operation is visually lossless. We define “visually lossless” and describe a new standard for a subjective procedure to assess whether the image quality meets these criteria. Assessments are made between a reference image and temporally interleaved reference and test images using a forced-choice procedure. In extensive testing, we have validated that this method is suitable for discriminating between subtle differences in image rendering and is free of observer bias or criteria variability. The results of these tests demonstrate the efficacy of using as few as five randomly chosen observers. We have found that the subjective testing is more reliable than several widely available image quality metrics. As part of this work, we release a database of nearly 0.25 million subjective responses collected from 35 observers to 18 different images. The study uses a largely within-subjects design and tested observers from two viewing distances. We encourage the use of this dataset in future research to refine objective image quality metrics to improve predictability of subtle but potentially visible compression-induced image impairments.

30 citations
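A common way to analyze such forced-choice data is a one-sided binomial test of whether observers score above chance (50% in a two-alternative task); if they do not, the result is consistent with the compressed image being visually lossless. This is a generic illustration with hypothetical trial counts, not the standard's prescribed analysis.

```python
from math import comb

def binomial_p_above_chance(correct, trials, p0=0.5):
    """One-sided binomial test: probability of observing >= `correct`
    successes under pure chance guessing. A large p-value means no
    evidence that observers can tell the test image from the reference."""
    return sum(comb(trials, k) * p0**k * (1 - p0)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical session: 5 observers x 40 forced-choice trials = 200 trials.
print(binomial_p_above_chance(correct=104, trials=200))
```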

Journal ArticleDOI
01 Aug 2008
TL;DR: This paper shows how to achieve more effective Query-By-Example processing by using active mechanisms of biological vision, such as saccadic eye movements and fixations, and how to represent hidden semantic associations among images as categories that in turn drive the query process.
Abstract: In this paper we show how to achieve a more effective Query By Example processing, by using active mechanisms of biological vision, such as saccadic eye movements and fixations. In particular, we discuss the way to generate two fixation sequences from a query image I_q and a test image I_t of the data set, respectively, and how to compare the two sequences in order to compute a similarity measure between the two images. We also show how the approach can be used to discover and represent the hidden semantic associations among images, in terms of categories, which in turn drive the query process.

30 citations
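Comparing two fixation sequences requires some similarity measure over point sequences. As a crude stand-in for the paper's measure, here is a symmetric nearest-neighbour similarity between two scanpaths; the coordinates and the scale constant are invented for illustration.

```python
import numpy as np

def fixation_similarity(fa, fb, scale=100.0):
    """Crude similarity between two fixation sequences (lists of (x, y)).

    Uses the symmetric mean nearest-neighbour distance between the two
    point sets, mapped to (0, 1]; the paper compares sequences with a
    more elaborate measure, so this is only an illustrative stand-in.
    """
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    d = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=-1)
    mean_nn = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    return 1.0 / (1.0 + mean_nn / scale)   # 1.0 = identical scanpaths

query = [(10, 10), (50, 40), (90, 80)]   # fixations on the query image
same  = [(12, 11), (48, 42), (88, 79)]   # near-identical scanpath
other = [(300, 300), (400, 10), (5, 400)]  # unrelated scanpath
print(fixation_similarity(query, same), fixation_similarity(query, other))
```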


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 91% related
Image segmentation: 79.6K papers, 1.8M citations, 91% related
Image processing: 229.9K papers, 3.5M citations, 90% related
Convolutional neural network: 74.7K papers, 2M citations, 90% related
Support vector machine: 73.6K papers, 1.7M citations, 90% related
Performance
Metrics
No. of papers in the topic in previous years

Year   Papers
2023   1
2022   8
2021   130
2020   232
2019   321
2018   293