Topic
Standard test image
About: Standard test image is a research topic. Over the lifetime, 5217 publications have been published within this topic receiving 98486 citations.
Papers published on a yearly basis
Papers
01 Jan 1994
TL;DR: The focus of the paper is on general testing procedures for high-precision scanning; different test patterns for detecting various errors, and the requirements on their quality, are presented.
Abstract: Film scanners will be used for many years to come, and it is well known that not even the best and most expensive scanners are free of errors. Scanner testing and calibration procedures are necessary to achieve high geometric and radiometric scanner performance. They are also a means to produce, or make use of, cheaper scanners that do not rely on expensive mechanical positioning and optical parts. The paper refers to flatbed scanners employing linear or area CCDs, and thus to both photogrammetric and DeskTop Publishing (DTP) scanners. It first presents different slowly and frequently varying errors of geometric and radiometric nature and their sources. The focus of the paper is on general testing procedures for high-precision scanning. Different test patterns for the detection of various errors, and requirements on their quality, are presented. Appropriate conditions for performing the tests are formulated. Test procedures for detecting and correcting various errors using test patterns are presented. Finally, some requirements for scanner vendors are proposed. The work is closely related to, and was partially done within, the OEEPE/ISPRS Working Group on the Analysis of Photo-Scanners.
Scanners employ area CCDs, linear CCDs, or multiple optically butted linear CCDs; these sensors are referred to in the text by the acronyms A-CCD, L-CCD, and ML-CCD respectively. Photogrammetric scanners employ either area CCDs that scan the image in tiles or linear CCDs scanning in multiple swaths. Flatbed DTP scanners use one or multiple linear CCDs to scan the image in one swath. Here only the major errors will be mentioned; other errors can occur depending on the design, construction, and parts of each individual scanner. Whether errors are slowly or frequently varying depends on the quality and stability of the scanner: in photogrammetric scanners the positioning mechanism is accurate and stable, while in DTP scanners the positioning errors vary from scan to scan or even within one scan. For linear CCDs the following convention is used: horizontal is the direction of the linear CCD, vertical is the direction of the scanning movement.
A. Slowly varying errors
1. Distortions due to lens or other optical parts: these are mainly geometric errors such as symmetric radial and tangential distortion. Radiometric errors like vignetting, shading, and secondary reflections can also be introduced by the optics.
2. CCD blemishes (A-CCD): they usually occur only with large area CCDs. Blemishes are single pixels, lines/columns, or areas whose grey values differ significantly (e.g. by 16 grey values or more) from the average grey level of their neighbourhood due to fabrication faults.
3. CCD misalignment and overlap (ML-CCD): the multiple CCDs may have different orientations or may not be collinear. If their overlap is not correctly estimated by a sensor calibration, overlaps or gaps will occur.
4. Subsampling errors (L-CCD, ML-CCD): when scanning at a resolution lower than the original one, pixels in the horizontal direction are low-pass filtered and resampled, while in the vertical direction larger pixels are created by increasing the scanning speed. This leads to different treatment of horizontal and vertical features, and can lead to loss of information if the scanning speed is not increased by the correct amount and properly synchronised with the integration time.
5. Smearing (L-CCD, ML-CCD): due to the high scanning speed, horizontal features, especially lines, appear thicker and with lower contrast than vertical ones. This effect corresponds to a low-pass filtering.
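The asymmetric subsampling described above (horizontal low-pass filtering and resampling versus vertical row averaging) can be sketched as follows. This is a minimal illustration, assuming a simple box filter stands in for the scanner's actual low-pass filter; the function name and the integer `factor` parameter are illustrative, not from the paper.

```python
import numpy as np

def downsample_scan(img: np.ndarray, factor: int) -> np.ndarray:
    """Mimic L-CCD subsampling: horizontal pixels are low-pass filtered
    and resampled; vertical pixels are enlarged by averaging whole rows,
    emulating the increased scanning speed."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    img = img[:h, :w].astype(float)
    # Horizontal direction: box (low-pass) filter, then keep every factor-th column.
    kernel = np.ones(factor) / factor
    horiz = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    horiz = horiz[:, ::factor]
    # Vertical direction: average groups of rows, i.e. larger pixels.
    return horiz.reshape(h // factor, factor, -1).mean(axis=1)
```

Because the two directions are processed differently, a thin horizontal line and a thin vertical line of equal width come out with different contrast, which is exactly the effect the test patterns are meant to expose.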
18 citations
TL;DR: In this paper, an ensemble of improved convolutional neural networks combined with a test-time regularly spaced shifting technique was proposed for skin lesion classification, which showed a significant improvement on the well-known HAM10000 dataset in terms of accuracy and F-score.
Abstract: Skin lesions are caused by multiple factors, such as allergies, infections, and exposure to the sun. These skin diseases have become a challenge in medical diagnosis due to their visual similarities, so image classification is an essential task for achieving an adequate diagnosis of different lesions. Melanoma is one of the best-known types of skin lesion because it accounts for the vast majority of skin cancer deaths. In this work, we propose an ensemble of improved convolutional neural networks combined with a test-time regularly spaced shifting technique for skin lesion classification. The shifting technique builds several versions of the test input image, shifted by displacement vectors that lie on a regular lattice in the plane of possible shifts. These shifted versions of the test image are then passed to each of the classifiers of an ensemble, and all the classifier outputs are combined to yield the final result. Experimental results show a significant improvement on the well-known HAM10000 dataset in terms of accuracy and F-score. In particular, it is demonstrated that our combination of ensembles with test-time regularly spaced shifting yields better performance than either of the two methods applied alone.
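The test-time shifting idea can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: `step` and `radius` are hypothetical lattice parameters, classifiers are assumed to be callables returning class-probability vectors, and wrap-around shifting via `np.roll` stands in for whatever padding the paper actually uses.

```python
import numpy as np

def regularly_spaced_shifts(step: int, radius: int):
    """Displacement vectors on a regular lattice in the plane of shifts."""
    offsets = range(-radius, radius + 1, step)
    return [(dy, dx) for dy in offsets for dx in offsets]

def predict_with_shifting(classifiers, image, step=4, radius=4):
    """Run every ensemble member on every shifted copy of the test
    image and average all probability outputs into one prediction."""
    probs = []
    for dy, dx in regularly_spaced_shifts(step, radius):
        shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
        for clf in classifiers:
            probs.append(clf(shifted))
    return np.mean(probs, axis=0)
```

With `step=4, radius=4` the lattice contains 9 displacement vectors, so an ensemble of 3 classifiers is evaluated 27 times per test image; averaging these outputs is one common way to combine them, though the paper may use a different fusion rule.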
18 citations
22 Jan 2004
TL;DR: In this article, the authors propose a method to determine a number of frames of data for acquisition from video data on the basis of image quality setting data that allows image quality to be set for an image output by an output device.
Abstract: The invention is designed to determine a number of frames of data to acquire from video data on the basis of image-quality setting data, which allows the image quality to be set for an image output by an image output device; to acquire the determined number of frames of data from the video data; and to synthesize the acquired frames of data to generate image data representing the tones of an image by means of a multitude of pixels. Since image data can be generated by synthesizing a number of frames appropriate to the image quality of the output image, the process of generating image data representing a still image can be performed efficiently, and a still image derived efficiently.
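The core idea (map a quality setting to a frame count, then synthesize that many frames) can be sketched as below. Everything here is assumed for illustration: the 1-to-10 quality scale, the 1-to-8 frame range, the linear mapping, and plain frame averaging as the synthesis step; the patent does not fix any of these choices.

```python
import numpy as np

def frames_for_quality(quality: int, min_frames: int = 1, max_frames: int = 8) -> int:
    """Map an image-quality setting (hypothetical 1..10 scale) to a
    number of video frames to acquire: higher quality -> more frames."""
    quality = max(1, min(10, quality))
    span = max_frames - min_frames
    return min_frames + round((quality - 1) * span / 9)

def synthesize(frames):
    """Combine the acquired frames by simple averaging, one possible
    way to generate still-image data representing tones."""
    return np.mean(np.stack(frames), axis=0)
```

The efficiency claim follows directly: at a low quality setting only one frame is acquired and no synthesis work is needed, while the maximum frame count is paid only when the output device demands it.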
18 citations
15 Mar 2006
TL;DR: In this article, an average face model (MAV) is generated from a number of distinct face images (I1, I2, . . . Ij), and a reference face model is trained for each of a number of known faces.
Abstract: The invention describes a method of performing face recognition, comprising the steps of generating an average face model (MAV), comprising a matrix of states representing regions of the face, from a number of distinct face images (I1, I2, . . . Ij), and training a reference face model (M1, M2, . . . , Mn) for each of a number of known faces, where each reference face model is based on the average face model (MAV). A test image (IT) is acquired for a face to be identified, and a best path through the average face model (MAV) is calculated based on the test image (IT). A degree of similarity is evaluated for each reference face model (M1, M2, . . . , Mn) against the test image (IT) by applying the best path of the average face model (MAV) to each reference face model, in order to identify the reference face model most similar to the test image (IT); the identified reference face model is subsequently accepted or rejected on the basis of its degree of similarity. Furthermore, the invention describes a system for performing face recognition. Also, the invention describes a method of and system for training a reference face model (M1), a method of and system for calculating a similarity threshold value for a reference face model (Mn), and a method of and system for optimizing images (I, IT, IT, G1, G2, . . . G, T1, T2, . . . , Tm, Tnew), all of which may be used in the face recognition system.
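The recognition step (find one best path on the average model, then reuse it to score every reference model) can be sketched as a toy. This is a heavily simplified, hypothetical sketch: face "regions" and model "states" are reduced to scalars, and similarity is just a negated mean absolute difference; the patent's actual state matrices and path computation are far richer.

```python
import numpy as np

def identify(test_regions, avg_model, ref_models, threshold):
    """Score each reference model against the test image along the best
    path found once on the average model (MAV), instead of re-aligning
    the test image separately for every reference model."""
    # Best path: for each test region, the most similar state of MAV.
    best_path = [int(np.argmin([abs(region - s) for s in avg_model]))
                 for region in test_regions]
    scores = {}
    for name, model in ref_models.items():
        # Apply MAV's path to the reference model's states.
        diffs = [abs(test_regions[i] - model[best_path[i]])
                 for i in range(len(test_regions))]
        scores[name] = -float(np.mean(diffs))  # higher = more similar
    best = max(scores, key=scores.get)
    # Accept or reject the winner against its similarity threshold.
    return best if scores[best] >= threshold else None
```

The design point the abstract is making survives even in this toy: the expensive alignment (the best path) is computed once on MAV and shared, so adding more reference models only adds cheap scoring passes.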
18 citations
TL;DR: This correspondence paper proposes a new approach called cascaded elastically progressive model aiming for pixel-wise landmark localization and shows advantages for accurate landmark localization compared with prevailing methods.
Abstract: While recently published face alignment algorithms have mainly focused on occlusion, low image quality, and complex head poses, subtle variances of facial components are often overlooked. In this correspondence paper, we propose a new approach called the cascaded elastically progressive model, aiming for pixel-wise landmark localization. First, the elastically progressive model (EPM) is designed to synthesize prior knowledge of the face shape with the appearance of the test image. More specifically, a novel framework referred to as the inherent linear structure (ILS) is explored for capturing the characteristics of the shape; it is more plastic and flexible than the extensively used principal component analysis-based modeling. A locally linear support vector machine (LL-SVM) is used as a local expert for searching candidate feature points. In order to optimally integrate the ILS with the localization results of the LL-SVM, we introduce a Kalman filter (KF) to dynamically estimate the true shape in the sense of least mean square error. Two schemes are utilized based on our modeling of the KF. First, we embed a heuristic line-like search strategy into the framework to guarantee and accelerate convergence. Second, the Kalman gain is manipulated adaptively in accordance with the confidence of the localizers, so that poorly localized points are more subject to the global constraint than well localized ones. To further improve robustness to initializations, two EPMs are cascaded: the primary EPM detects the global structure and the secondary EPM captures the details. Validation experiments are conducted on the in-the-wild LFPW and HELEN databases. Our method shows advantages for accurate landmark localization compared with prevailing methods.
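The confidence-adaptive Kalman gain described above can be illustrated with a scalar update. This is a minimal sketch under stated assumptions: a one-dimensional state, a `confidence` value in [0, 1] from the localizer, and plain multiplicative scaling of the gain; the paper's actual manipulation of the gain may differ.

```python
def kalman_update(x_prior, p_prior, z, r, confidence):
    """Scalar Kalman update with a confidence-scaled gain.

    x_prior, p_prior : prior state estimate and its variance (shape prior)
    z, r             : measurement (localized point) and its noise variance
    confidence       : localizer confidence in [0, 1]; low confidence pulls
                       the point toward the global shape prior instead of
                       the measurement.
    """
    k = p_prior / (p_prior + r)        # standard Kalman gain
    k *= confidence                    # adaptive manipulation (sketch)
    x_post = x_prior + k * (z - x_prior)
    p_post = (1 - k) * p_prior
    return x_post, p_post
```

At `confidence = 0` the measurement is ignored and the point stays on the prior shape; at `confidence = 1` the update reduces to the ordinary Kalman filter, matching the stated behaviour that well localized points follow the local expert while poorly localized ones obey the global constraint.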
18 citations