scispace - formally typeset

Standard test image

About: Standard test image is a research topic. Over its lifetime, 5,217 publications have been published on this topic, receiving 98,486 citations.


Papers
Patent
Li Hong1
18 Aug 2011
TL;DR: A method for predicting whether a test image (318) is sharp or blurred: a sharpness classifier (316) is trained on sharpness features (314) computed from a plurality of training images (306) by resizing each training image by a first resizing factor, identifying texture regions (408, 410), and computing sharpness features from those regions; the trained classifier is then applied to the test image (318), whose features (322) are computed the same way but with a different resizing factor.
Abstract: A method for predicting whether a test image (318) is sharp or blurred includes the steps of: training a sharpness classifier (316) to discriminate between sharp and blurred images, the sharpness classifier (316) being trained based on a set of training sharpness features (314) computed from a plurality of training images (306), the set of training sharpness features (314) for each training image (306) being computed by (i) resizing each training image (306) by a first resizing factor; (ii) identifying texture regions (408, 410) in the resized training image; and (iii) computing the set of sharpness features in the training image (412) from the identified texture regions; and applying the trained sharpness classifier (316) to the test image (318) to determine if the test image (318) is sharp or blurred based on a set of test sharpness features (322) computed from the test image (318), the set of test sharpness features (322) for each test image (318) being computed by (i) resizing the test image (318) by a second resizing factor that is different than the first resizing factor; (ii) identifying texture regions (408, 410) in the resized test image; and (iii) computing the set of sharpness features in the test image (412) from the identified texture regions.

31 citations
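The pipeline in the abstract above (resize, identify texture regions, compute sharpness features from them) can be sketched roughly as follows. This is an illustrative reading, not the patent's actual method: the local-variance texture test, the gradient-energy feature, and every threshold here are assumptions.

```python
import numpy as np

def sharpness_features(image, resize_factor, patch=8, texture_thresh=100.0):
    """Hedged sketch of the patent's feature pipeline: resize the image,
    keep only textured patches (high local variance), and compute a
    gradient-energy sharpness feature over them. The variance test and
    gradient feature are illustrative stand-ins, not the patent's choices."""
    # (i) resize by subsampling -- a crude stand-in for proper resampling
    step = max(1, int(round(1.0 / resize_factor)))
    small = image[::step, ::step].astype(float)
    # (ii) identify texture regions: patches whose variance exceeds a threshold
    feats = []
    h, w = small.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = small[y:y + patch, x:x + patch]
            if block.var() > texture_thresh:
                # (iii) sharpness feature: mean gradient magnitude in the patch
                gy, gx = np.gradient(block)
                feats.append(np.hypot(gx, gy).mean())
    return np.array(feats) if feats else np.zeros(1)

# A sharp (noisy) image yields larger gradient features than a blurred copy.
rng = np.random.default_rng(0)
sharp = rng.uniform(0, 255, (64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4   # 2x2 box blur
f_sharp = sharpness_features(sharp, 0.5)
f_blur = sharpness_features(blurred, 0.5)
```

A classifier as in the patent would then be trained on such feature vectors; here the blurred copy simply scores lower on the gradient feature.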

Journal ArticleDOI
TL;DR: KonIQ-10k, the largest IQA dataset to date, is the first in-the-wild database for image quality assessment (IQA), consisting of 10,073 quality-scored images.
Abstract: Deep learning methods for image quality assessment (IQA) are limited due to the small size of existing datasets. Extensive datasets require substantial resources both for generating publishable content and annotating it accurately. We present a systematic and scalable approach to creating KonIQ-10k, the largest IQA dataset to date, consisting of 10,073 quality scored images. It is the first in-the-wild database aiming for ecological validity, concerning the authenticity of distortions, the diversity of content, and quality-related indicators. Through the use of crowdsourcing, we obtained 1.2 million reliable quality ratings from 1,459 crowd workers, paving the way for more general IQA models. We propose a novel, deep learning model (KonCept512), to show an excellent generalization beyond the test set (0.921 SROCC), to the current state-of-the-art database LIVE-in-the-Wild (0.825 SROCC). The model derives its core performance from the InceptionResNet architecture, being trained at a higher resolution than previous models (512x384). Correlation analysis shows that KonCept512 performs similar to having 9 subjective scores for each test image.

31 citations
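The SROCC figures quoted in the abstract above (0.921, 0.825) are Spearman rank-order correlation coefficients between model predictions and subjective scores. A minimal sketch of that metric, assuming unique scores (no tie handling):

```python
import numpy as np

def srocc(pred, mos):
    """Spearman rank-order correlation (SROCC): Pearson correlation of the
    ranks of model predictions against subjective mean opinion scores.
    Simplified sketch -- assumes all values are distinct (no tie correction)."""
    rp = np.argsort(np.argsort(pred)).astype(float)  # ranks of predictions
    rm = np.argsort(np.argsort(mos)).astype(float)   # ranks of MOS values
    rp -= rp.mean()
    rm -= rm.mean()
    return float((rp * rm).sum() / np.sqrt((rp**2).sum() * (rm**2).sum()))

# Any monotone transform of the scores gives SROCC = 1.0: only rank
# order matters, which is why IQA papers report it alongside PLCC.
mos = np.array([1.2, 3.4, 2.2, 4.8, 4.0])
pred = mos ** 2          # monotone transform of the scores
print(srocc(pred, mos))  # → 1.0
```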

Patent
06 Nov 2001
TL;DR: This patent provides an image processing apparatus in which a test operation on each image can be executed smoothly by effectively utilizing magnetic or optical information recorded on a recording medium.
Abstract: There is provided an image processing apparatus in which a test operation of each image can be executed smoothly by effectively utilizing magnetic information or optical information, which is recorded on a recording medium. During test processing, when the number of frame images to be displayed on a test image 204 is less than 28, a display mode of magnetic information to be displayed correspondingly to the frame images is altered to the form of words and phrases. Further, when the number of frame images to be displayed on the test image 204 is 28 or more, the display mode of magnetic information to be displayed correspondingly to the frame images is altered to the form of abbreviation (FFy).

31 citations

Journal ArticleDOI
TL;DR: An objective quality model for MEF of dynamic scenes is developed that significantly outperforms the state of the art and shows promise for parameter tuning of MEF methods.
Abstract: A common approach to high dynamic range (HDR) imaging is to capture multiple images of different exposures followed by multi-exposure image fusion (MEF) in either radiance or intensity domain. A predominant problem of this approach is the introduction of the ghosting artifacts in dynamic scenes with camera and object motion. While many MEF methods (often referred to as deghosting algorithms) have been proposed for reduced ghosting artifacts and improved visual quality, little work has been dedicated to perceptual evaluation of their deghosting results. Here we first construct a database that contains 20 multi-exposure sequences of dynamic scenes and their corresponding fused images by nine MEF algorithms. We then carry out a subjective experiment to evaluate fused image quality, and find that none of existing objective quality models for MEF provides accurate quality predictions. Motivated by this, we develop an objective quality model for MEF of dynamic scenes. Specifically, we divide the test image into static and dynamic regions, measure structural similarity between the image and the corresponding sequence in the two regions separately, and combine quality measurements of the two regions into an overall quality score. Experimental results show that the proposed method significantly outperforms the state-of-the-art. In addition, we demonstrate the promise of the proposed model in parameter tuning of MEF methods. The subjective database and the MATLAB code of the proposed model are made publicly available at https://github.com/h4nwei/MEF-SSIMd .

31 citations
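The region-wise idea in the abstract above (split the fused image into static and dynamic regions, score each against the exposure sequence, pool the two scores) can be sketched as a toy. This is not the released MEF-SSIMd code: the motion map, the per-pixel similarity, and the area-weighted pooling below are illustrative assumptions.

```python
import numpy as np

def region_wise_quality(fused, seq, motion_thresh=10.0):
    """Toy sketch of region-wise MEF quality assessment: mark pixels as
    dynamic where the exposure stack disagrees most, score each region by
    a simple per-pixel similarity against the best-matching exposure, and
    pool region scores by area. All choices here are illustrative."""
    seq = np.stack(seq).astype(float)           # (K, H, W) exposure stack
    fused = fused.astype(float)
    dynamic = seq.std(axis=0) > motion_thresh   # crude motion/disagreement map
    # per-pixel reference: the exposure closest to the fused value
    idx = np.abs(seq - fused).argmin(axis=0)
    rows = np.arange(fused.shape[0])[:, None]
    cols = np.arange(fused.shape[1])[None, :]
    ref = seq[idx, rows, cols]
    sim = 1.0 - np.abs(fused - ref) / 255.0     # per-pixel similarity in [0, 1]
    scores, weights = [], []
    for mask in (dynamic, ~dynamic):            # dynamic then static region
        if mask.any():
            scores.append(sim[mask].mean())
            weights.append(mask.mean())
    return float(np.average(scores, weights=weights))

# A fused image that matches one exposure exactly scores 1.0 in this toy.
seq = [np.full((4, 4), v) for v in (50.0, 120.0, 200.0)]
fused = np.full((4, 4), 120.0)
score = region_wise_quality(fused, seq)
```

The released model instead uses a structural-similarity measure per region (see the MEF-SSIMd repository linked in the abstract).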

Journal ArticleDOI
TL;DR: Methods for digital image panning and zooming are incorporated, and their use and implications are discussed.
Abstract: Segmentation of large areas of light microscopic slides into N by N fields, and each of these fields into M digital image tiles, allows the scanning, storage and digital processing of large images. Any of the original N² fields or composites of M adjacent tiles can be recalled to the video display for analysis. Developed procedures for use on a microscope equipped with a precision scanning stage allow registration of the image coordinates (X-Y) for any original or composite field and the alignment of one of these fields along the depth (Z) axis by means of external, machined fiducial marks in serial sections. To facilitate work whenever unavoidable, we have incorporated methods for digital image panning and zooming (changes of magnification) and discuss their use and implications.

31 citations
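The field-and-tile storage scheme described above (an N-by-N grid of fields, each stored as M tiles that can be recalled individually or as composites) can be sketched as follows; the array shapes and dictionary layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tile_slide(slide, n_fields, tiles_per_side):
    """Cut a large slide image into an N-by-N grid of fields, each stored
    as M = tiles_per_side**2 digital tiles, keyed by (field_y, field_x,
    tile_y, tile_x) so fields and tile composites can be recalled later.
    Assumes the slide dimensions divide evenly -- an illustrative sketch."""
    fh = slide.shape[0] // n_fields
    fw = slide.shape[1] // n_fields
    th, tw = fh // tiles_per_side, fw // tiles_per_side
    tiles = {}
    for fy in range(n_fields):
        for fx in range(n_fields):
            field = slide[fy*fh:(fy+1)*fh, fx*fw:(fx+1)*fw]
            for ty in range(tiles_per_side):
                for tx in range(tiles_per_side):
                    tiles[(fy, fx, ty, tx)] = field[ty*th:(ty+1)*th,
                                                    tx*tw:(tx+1)*tw]
    return tiles

def recall_field(tiles, fy, fx, tiles_per_side):
    """Reassemble one field from its stored tiles (the 'recall' step)."""
    rows = [np.hstack([tiles[(fy, fx, ty, tx)]
                       for tx in range(tiles_per_side)])
            for ty in range(tiles_per_side)]
    return np.vstack(rows)

slide = np.arange(64 * 64).reshape(64, 64)
tiles = tile_slide(slide, n_fields=4, tiles_per_side=2)  # N=4, M=4 tiles/field
field = recall_field(tiles, 1, 2, 2)                     # recall field (1, 2)
```

Recalling composites of adjacent tiles, as the paper describes, would stitch tiles across neighboring keys in the same way.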


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (91% related)
- Image segmentation: 79.6K papers, 1.8M citations (91% related)
- Image processing: 229.9K papers, 3.5M citations (90% related)
- Convolutional neural network: 74.7K papers, 2M citations (90% related)
- Support vector machine: 73.6K papers, 1.7M citations (90% related)
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   1
2022   8
2021   130
2020   232
2019   321
2018   293