Topic
Standard test image
About: Standard test image is a research topic. Over its lifetime, 5217 publications have been published within this topic, receiving 98486 citations.
Papers published on a yearly basis
Papers
16 Oct 2005
TL;DR: This work learns a decision tree of Adaboost-based classifiers that selects the most effective classifier at every stage, based on the outcomes of the classifiers already applied, and uses it both to detect faces in a test image and to identify the expression on each face.
Abstract: While there has been a great deal of research in face detection and recognition, there has been very limited work on identifying the expression on a face. Many current face detection projects use a [Viola/Jones] style “cascade” of Adaboost-based classifiers to interpret (sub)images — e.g. to identify which regions contain faces. We extend this method by learning a decision tree of such classifiers (dtc): While standard cascade classification methods will apply the same sequence of classifiers to each image, our dtc is able to select the most effective classifier at every stage, based on the outcomes of the classifiers already applied. We use dtc not only to detect faces in a test image, but to identify the expression on each face.
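The idea behind the dtc can be sketched in a few lines: unlike a fixed cascade, each node chooses which classifier to run next based on the outcomes observed so far. The following is a minimal toy sketch, not the paper's implementation; the node structure, the two stand-in "classifiers", and the labels are all hypothetical.

```python
# Toy sketch of a decision tree of classifiers: each node applies one
# classifier, and the classifier's OUTCOME selects the next node to visit,
# so different images traverse different classifier sequences.

class Node:
    def __init__(self, clf=None, children=None, label=None):
        self.clf = clf            # classifier applied at this node
        self.children = children  # maps classifier outcome -> child Node
        self.label = label        # final decision at a leaf

def classify(tree, image):
    node = tree
    while node.label is None:
        outcome = node.clf(image)      # run the classifier selected here
        node = node.children[outcome]  # outcome picks the next stage
    return node.label

# Hypothetical stand-in classifiers on a 2-pixel "image":
bright = lambda img: int(sum(img) / len(img) > 0.5)
contrast = lambda img: int(abs(img[0] - img[1]) > 0.3)

# If the image is bright, refine with the contrast test; otherwise reject.
tree = Node(clf=bright, children={
    0: Node(label="no face"),
    1: Node(clf=contrast, children={
        0: Node(label="neutral face"),
        1: Node(label="expressive face"),
    }),
})
```

In a standard cascade, `bright` and `contrast` would run in the same order for every image; here the tree skips `contrast` entirely for images that `bright` already rejects.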
27 citations
21 Dec 2005
TL;DR: In this article, an image forming device is described that is able to correct an image distortion even when the installation environment changes or unexpected shocks occur, and even when an optical scanning device is exchanged.
Abstract: An image forming device is disclosed that is able to correct an image distortion even when the installation environment changes or unexpected shocks occur, and even when an optical scanning device is exchanged. The image forming device comprises a light source, an optical scanning unit, a development unit, a transfer unit, a test image output unit to output a test image able to determine unevenness of intervals of positions of beam spots formed on an image supporting member, a beam spot position correction unit to correct the unevenness of the beam spot position intervals, and a correction data input unit to select correction data of the unevenness of the beam spot position intervals.
27 citations
TL;DR: In this work, a fuzzy pre-classifier is used to complement a set of support vector machines (SVMs) to manage the large wood database and classify the wood species efficiently.
Abstract: An automated wood texture recognition system of 48 tropical wood species is presented. For each wood species, 100 macroscopic texture images are captured from different timber logs where 70 images are used for training while 30 images are used for testing. In this work, a fuzzy pre-classifier is used to complement a set of support vector machines (SVM) to manage the large wood database and classify the wood species efficiently. Given a test image, a set of texture pore features is extracted from the image and used as inputs to a fuzzy pre-classifier which assigns it to one of the four broad categories. Then, another set of texture features is extracted from the image and used with the SVM dedicated to the selected category to further classify the test image to a particular wood species. The advantage of dividing the database into four smaller databases is that when a new wood species is added into the system, only the SVM classifier of one of the four databases needs to be retrained instead of those of the entire database. This shortens the training time and emulates the experts’ reasoning when expanding the wood database. The results show that the proposed model is more robust as the size of wood database is increased.
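The two-stage routing described above can be sketched as follows. This is a hedged toy illustration, not the paper's system: the scalar "pore density" feature, the category thresholds, the species names, and the nearest-centroid stand-in for the per-category SVMs are all hypothetical.

```python
# Two-stage classification sketch: a coarse pre-classifier routes a test
# image's feature to one of four broad categories, then a small dedicated
# model for that category makes the final species decision. Adding a new
# species only requires retraining the model of its own category.

def pre_classify(pore_density):
    """Coarse stage: bucket a scalar pore-density feature into 4 categories."""
    if pore_density < 0.25:
        return 0
    if pore_density < 0.50:
        return 1
    if pore_density < 0.75:
        return 2
    return 3

def nearest_centroid(feature, centroids):
    """Fine stage: stand-in for a per-category SVM (nearest class centroid)."""
    return min(centroids, key=lambda species: abs(feature - centroids[species]))

# One small model per category; retraining one leaves the others untouched.
category_models = {
    0: {"species_A": 0.10, "species_B": 0.20},
    1: {"species_C": 0.30, "species_D": 0.45},
    2: {"species_E": 0.60},
    3: {"species_F": 0.90},
}

def classify_wood(pore_density, texture_feature):
    category = pre_classify(pore_density)
    return nearest_centroid(texture_feature, category_models[category])
```

The design point the abstract emphasizes is visible here: `category_models` is four independent dictionaries, so growing one category's species list never touches the other three models.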
27 citations
TL;DR: Empirical results show that SESRC & LDF achieves the highest recognition rates, outperforming many algorithms including some state-of-the-art ones, such as PLR, MDFR and OPR.
27 citations
07 Jun 2015
TL;DR: In this article, the authors propose to train a specialized object boundary detector for each of the situations and then classify a test image into these situations using its context, which they model by global image appearance.
Abstract: Intuitively, the appearance of true object boundaries varies from image to image. Hence the usual monolithic approach of training a single boundary predictor and applying it to all images regardless of their content is bound to be suboptimal. In this paper we therefore propose situational object boundary detection: We first define a variety of situations and train a specialized object boundary detector for each of them using [10]. Then given a test image, we classify it into these situations using its context, which we model by global image appearance. We apply the corresponding situational object boundary detectors, and fuse them based on the classification probabilities. In experiments on ImageNet [35], Microsoft COCO [24], and Pascal VOC 2012 segmentation [13] we show that our situational object boundary detection gives significant improvements over a monolithic approach. Additionally, our method substantially outperforms [17] on semantic contour detection on their SBD dataset.
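The fusion step at the end of the pipeline can be sketched compactly. This is a minimal toy version, assuming the simplest reading of "fuse them based on the classification probabilities": a per-pixel weighted average of the specialized detectors' boundary maps, weighted by the situation classifier's posterior for the test image. The situations, maps, and probabilities below are invented for illustration.

```python
# Situational fusion sketch: each specialized detector produces a boundary
# score per pixel; the fused map is the probability-weighted average
# sum_s p(situation s | image) * detector_s(image), computed per pixel.

def fuse(situation_probs, detector_outputs):
    """Fuse per-situation boundary maps by the classifier's probabilities."""
    n_pixels = len(next(iter(detector_outputs.values())))
    fused = [0.0] * n_pixels
    for situation, prob in situation_probs.items():
        boundary_map = detector_outputs[situation]
        fused = [f + prob * score for f, score in zip(fused, boundary_map)]
    return fused

# Toy example: two situations, boundary maps over 3 pixels.
situation_probs = {"indoor": 0.8, "outdoor": 0.2}
detector_outputs = {
    "indoor":  [0.9, 0.1, 0.5],
    "outdoor": [0.1, 0.9, 0.5],
}
fused = fuse(situation_probs, detector_outputs)  # leans toward the indoor map
```

Because the weights are posterior probabilities, an image the classifier is unsure about falls back toward an even blend of the specialized detectors rather than committing to one.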
27 citations