Topic
Standard test image
About: Standard test image is a research topic. Over its lifetime, 5217 publications have been published within this topic, receiving 98486 citations.
Papers published on a yearly basis
Papers
02 Mar 1981
TL;DR: In this paper, a real-time radiographic inspection system for detecting flaws, defects or inhomogeneities in manufactured objects is presented, where the objects are moved on a conveyor (20) in succession past an X-ray source (30).
Abstract: In an automated real-time radiographic inspection system for detecting flaws, defects or inhomogeneities in manufactured objects, the objects (10, 11, 12, . . . ) to be tested are moved on a conveyor (20) in succession past an X-ray source (30). Penetrating X-ray radiation is transmitted through the objects (10, 11, 12, . . . ) to cause an image to be formed for each object in succession by an electronic imaging system (40). The imaging system (40) generates digital signals representative of the image for each object, and transmits the digital signals to a data processor/comparator (50). One of the objects (10, 11, 12, . . . ) may be treated as a reference object against which the other objects are compared for structural homogeneity. Thus, the electronic images of the objects (11, 12, . . . ) could be compared with the electronic image of, for example, the object (10). Alternatively, the electronic images of the test objects could be compared with an electronic reference image programmed into the data processor/comparator (50). The comparison performed by the data processor/comparator (50) is a subtraction process, which eliminates background features common to the test and reference images. Thus, any flaw appearing in a test image, but not present in the reference image, stands out in sharp detail. When a flaw occurs in a test image, the subtractive process produces a non-null signal, which activates a conveyor control mechanism (70) for stopping the conveyor (20) so that the flawed test object can be rejected.
51 citations
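The subtraction-based comparison described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the array sizes, pixel values, and noise threshold below are all assumptions.

```python
import numpy as np

# Reference image of a flaw-free object (values are illustrative).
reference = np.zeros((8, 8), dtype=np.int16)

# Test image with one simulated flaw (e.g. a void showing as a bright spot).
test = reference.copy()
test[3, 4] = 100

# Subtraction cancels background features common to both images.
difference = np.abs(test - reference)

# Any residual above a noise threshold is the "non-null signal" that
# would trigger the conveyor stop and rejection of the object.
flaw_mask = difference > 10
flaw_detected = bool(flaw_mask.any())
print(flaw_detected)  # → True
```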
08 Dec 2008
TL;DR: This paper develops a probabilistic method (LOOPS) that can learn a shape and appearance model for a particular object class, and be used to consistently localize constituent elements of the object's outline in test images.
Abstract: Discriminative tasks, including object categorization and detection, are central components of high-level computer vision. However, sometimes we are interested in a finer-grained characterization of the object's properties, such as its pose or articulation. In this paper we develop a probabilistic method (LOOPS) that can learn a shape and appearance model for a particular object class, and be used to consistently localize constituent elements (landmarks) of the object's outline in test images. This localization effectively projects the test image into an alternative representational space that makes it particularly easy to perform various descriptive tasks. We apply our method to a range of object classes in cluttered images and demonstrate its effectiveness in localizing objects and performing descriptive classification, descriptive ranking, and descriptive clustering.
50 citations
19 Mar 1984
TL;DR: In the work presented here the difficulties of applying a median filter in two dimensions to the cartesian matrix of a typical image are investigated, and the development of a suitable noise-added test image for this purpose is presented.
Abstract: It is well known that Digital Subtraction Angiography (DSA) images suffer from the influence of noise. Linear filters have been applied to DSA images with some success but are known to introduce image degradation. It is particularly important that edge details be preserved in DSA images since the outline of vessels frequently contains the diagnostically useful information. The non-linear median filter possesses the property of removing spurious noise while preserving edge detail in an image. In the work presented here the difficulties of applying a median filter in two dimensions to the cartesian matrix of a typical image are investigated. The development of a suitable noise-added test image for this purpose is presented.
50 citations
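The edge-preserving, noise-removing behaviour of the median filter that this abstract relies on can be demonstrated with a small sketch. The filter below is a plain 3x3 two-dimensional median over a cartesian matrix; the test image (a sharp edge plus one impulse-noise pixel) is an invented stand-in for the paper's noise-added test image.

```python
import numpy as np

def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2D array (border pixels left unfiltered)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# Noise-added test image: a constant region crossed by a sharp vertical
# edge (like a vessel outline), with one impulse-noise pixel.
img = np.zeros((5, 5))
img[:, 3:] = 100.0      # vertical edge between columns 2 and 3
img[2, 1] = 255.0       # spurious impulse noise

filtered = median_filter_3x3(img)
print(filtered[2, 1])   # impulse removed → 0.0
print(filtered[2, 3])   # edge preserved → 100.0
```

A linear (e.g. mean) filter applied to the same image would smear the impulse into its neighbours and blur the edge, which is exactly the degradation the abstract says is unacceptable for DSA vessel outlines.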
30 Jun 1997
TL;DR: In this paper, the human visual perception system is modeled using quantization of image error values, according to a visually-lossless scheme, so that an image can be compressed such that it is visually indistinguishable to the naked eye from the original image.
Abstract: An image compression scheme is disclosed which models the human visual perception system. Using quantization of image error values, according to a visually-lossless scheme, an image can be compressed such that it is visually indistinguishable to the naked eye from the original image. To aid in image compression on portable devices such as a digital camera, the quantization can be precoupled into a look-up table.
50 citations
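The idea of precoupling quantization into a look-up table, mentioned at the end of this abstract, can be sketched as follows. The step size is an invented stand-in for a visually-lossless threshold; the patent's actual quantizer is not specified here.

```python
import numpy as np

# Assumed quantization step: error values within one step are treated as
# visually indistinguishable (an illustrative value, not from the patent).
STEP = 8

# Precompute the LUT once: every possible 8-bit error value maps to the
# centre of its quantization bin. On a portable device such as a digital
# camera, compression then needs only one table lookup per value instead
# of arithmetic.
lut = np.array([(v // STEP) * STEP + STEP // 2 for v in range(256)],
               dtype=np.uint8)

errors = np.array([0, 3, 7, 8, 200], dtype=np.uint8)  # example error values
quantized = lut[errors]
print(quantized.tolist())  # → [4, 4, 4, 12, 204]
```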
24 Nov 2000
TL;DR: In this paper, a fast algorithm for a landmark-based piecewise linear mapping of one volumetric image into another volumetric image is proposed, and algorithms for the automatic identification of these landmarks are formulated for three orientations: axial, coronal, and sagittal.
Abstract: A system for analysing a brain image compares the image with a brain atlas, labels the image accordingly, and annotates the regions of interest and/or other structures. This atlas-enhanced data is written to one or more files in the Dicom format or any web-enabled format such as SGML or XML. The image used may be produced by any medical imaging modality. A fast algorithm is proposed for a landmark-based piecewise linear mapping of one volumetric image into another volumetric image. Furthermore, a new set of brain landmarks is proposed, and algorithms for the automatic identification of these landmarks are formulated for three orientations: axial, coronal, and sagittal.
50 citations
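A landmark-based piecewise linear mapping of the kind this abstract proposes can be illustrated along a single axis: coordinates between corresponding landmark pairs are stretched or compressed linearly. The landmark positions below are invented for illustration; the actual system works volumetrically across the axial, coronal, and sagittal orientations.

```python
import numpy as np

# Corresponding landmark positions along one axis (illustrative values):
# the subject image's middle landmark sits at 50, the atlas's at 40, so
# coordinates between landmarks are rescaled piecewise linearly.
atlas_landmarks = np.array([0.0, 40.0, 100.0])
image_landmarks = np.array([0.0, 50.0, 100.0])

def map_to_atlas(x):
    """Map an image coordinate into atlas space, piecewise linearly
    between consecutive landmark pairs."""
    return float(np.interp(x, image_landmarks, atlas_landmarks))

print(map_to_atlas(25.0))   # halfway to the middle landmark → 20.0
print(map_to_atlas(50.0))   # a landmark maps exactly onto its counterpart → 40.0
```

The full method applies such mappings in 3D, which is what lets atlas labels and annotations be transferred onto the subject's brain image.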