Topic
Standard test image
About: Standard test image is a research topic. Over its lifetime, 5217 publications have appeared within this topic, receiving 98486 citations.
Papers
TL;DR: This paper introduces a new multiply distorted image database (MDID2013), which is composed of 324 images that are simultaneously corrupted by blurring, JPEG compression and noise injection, and proposes a new six-step blind metric (SISBLIM) for quality assessment of both singly and multiply distorted images.
Abstract: In a typical image communication system, the visual signal presented to the end users may undergo the steps of acquisition, compression and transmission, which cause the artifacts of blurring, quantization and noise. However, research on image quality assessment (IQA) with multiple distortion types is very limited. In this paper, we first introduce a new multiply distorted image database (MDID2013), which is composed of 324 images that are simultaneously corrupted by blurring, JPEG compression and noise injection. We then propose a new six-step blind metric (SISBLIM) for quality assessment of both singly and multiply distorted images. Inspired by the early human visual model and the recently revealed free-energy-based brain theory, our method systematically combines the single quality prediction of each emerging distortion type with the joint effects of different distortion sources. Comparative studies of the proposed SISBLIM with popular full-reference IQA approaches and state-of-the-art no-reference IQA metrics are conducted on five singly distorted image databases (LIVE, TID2008, CSIQ, IVC, Toyama) and two newly released multiply distorted image databases (LIVEMD, MDID2013). Experimental results confirm the effectiveness of our blind technique. MATLAB code for the proposed SISBLIM algorithm and the MDID2013 database will be available online at http://gvsp.sjtu.edu.cn/.
212 citations
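The authors' MATLAB code is linked above; purely as an illustration of the estimate-each-distortion-then-fuse idea (a toy sketch, not the SISBLIM metric itself — the scores and the fusion rule here are invented for illustration):

```python
import numpy as np

def blur_score(img):
    # Variance of a discrete Laplacian response; low variance suggests blur.
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def noise_score(img):
    # Median absolute horizontal difference as a crude noise proxy.
    return np.median(np.abs(np.diff(img, axis=1)))

def joint_blind_score(img):
    # Fuse the per-distortion estimates into one no-reference score,
    # loosely mimicking the idea of combining single-distortion
    # predictions with their joint effect (toy scale: higher = better).
    return blur_score(img) / (1.0 + noise_score(img))

# Demo: a clean gradient vs. the same gradient with additive noise.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
```

A real blind metric would, of course, calibrate such estimates against human opinion scores rather than combine them ad hoc.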
03 Nov 1997
TL;DR: This paper describes a digital image management system that includes a content analyzer which extracts content data, including face-feature data, from an image.
Abstract: A digital image management system is described that includes a content analyzer that analyzes an image to extract content data from the image. The content data of an image include face feature data. The digital image management system also includes an image database that is coupled to the content analyzer to store pixel data of each of a number of images and the content data of each of the images. A search engine is also provided in the digital image management system. The search engine is coupled to the image database and the content analyzer to compare the content data of the images with that of an input image such that any image similar to the input image can be identified from the image database without retrieving the pixel data of the image from the image database. A method of extracting feature data of a face in an image is also described.
207 citations
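The key point of the patent's design is that search compares stored content data, never the pixel data. A minimal sketch of that idea (the coarse histogram "content analyzer" and the in-memory dictionaries are stand-ins invented here; the patent's analyzer extracts face features):

```python
import numpy as np

# Hypothetical in-memory "image database": feature vectors are stored
# alongside pixel data, and search touches only the features.
db_features = {}   # image_id -> content feature vector
db_pixels = {}     # image_id -> pixel array (never read during search)

def extract(pixels):
    # Toy content analyzer: a coarse intensity histogram stands in for
    # the patent's face-feature data.
    hist, _ = np.histogram(pixels, bins=16, range=(0.0, 1.0), density=True)
    return hist

def index_image(image_id, pixels):
    db_pixels[image_id] = pixels
    db_features[image_id] = extract(pixels)

def search(query_pixels, top_k=3):
    # Rank stored images by feature distance to the query; pixel data
    # stays untouched until a match is actually retrieved.
    q = extract(query_pixels)
    return sorted(db_features,
                  key=lambda fid: np.linalg.norm(db_features[fid] - q))[:top_k]

# Demo: index a bright and a dark image, then query with a bright one.
rng = np.random.default_rng(1)
index_image("bright", rng.uniform(0.7, 1.0, (8, 8)))
index_image("dark", rng.uniform(0.0, 0.3, (8, 8)))
query = rng.uniform(0.7, 1.0, (8, 8))
```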
15 May 2000
TL;DR: 1. Introduction 2. Imaging 3. Digital Images 4. Images in Java 5. Basic Image Manipulation 6. Grey Level and Colour Enhancement 7. Neighbourhood Operations 8. The Frequency Domain 9. Geometric Operations 10. Segmentation 11. Morphological Image Processing 12. Image Compression
Abstract: 1. Introduction 2. Imaging 3. Digital Images 4. Images in Java 5. Basic Image Manipulation 6. Grey Level and Colour Enhancement 7. Neighbourhood Operations 8. The Frequency Domain 9. Geometric Operations 10. Segmentation 11. Morphological Image Processing 12. Image Compression. Appendix A: Glossary of Image Processing Terms
206 citations
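As a taste of the "Neighbourhood Operations" chapter, a minimal 3x3 mean filter. The book itself works in Java; this NumPy sketch just illustrates the operation (edge pixels are left untouched for simplicity):

```python
import numpy as np

def mean_filter3(img):
    # 3x3 neighbourhood mean: each interior pixel becomes the average
    # of itself and its eight neighbours; the one-pixel border is copied
    # through unchanged.
    out = img.astype(float).copy()
    h, w = img.shape
    acc = sum(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    out[1:-1, 1:-1] = acc / 9.0
    return out

# Demo: a single bright impulse spreads evenly over its 3x3 neighbourhood.
impulse = np.zeros((5, 5))
impulse[2, 2] = 9.0
smoothed = mean_filter3(impulse)
```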
20 Jun 2008
TL;DR: In this paper, a digital image processing technique detects and corrects visual imperfections using a reference image, where the device corrects the defect based on the information, image data and/or meta data to create an enhanced version of the main image.
Abstract: A digital image processing technique detects and corrects visual imperfections using a reference image. A main image and one or more reference images having a temporal and/or spatial overlap or proximity with the main image are captured. Device information, image data and/or meta-data of the one or more reference images relating to a defect in the main image are analyzed. The device corrects the defect based on this information, image data and/or meta-data to create an enhanced version of the main image.
206 citations
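The core mechanism — mark pixels of the main image as defective where they disagree with a temporally close reference, then repair them from the reference — can be sketched as follows. This is an invented, simplified stand-in; the patent additionally analyzes device information and meta-data, and assumes the images are already aligned:

```python
import numpy as np

def correct_with_reference(main, ref, thresh=0.5):
    # Hypothetical defect correction: pixels that differ sharply from
    # an aligned reference image are treated as defects and replaced by
    # the reference values. Returns a new array; inputs are untouched.
    defect = np.abs(main - ref) > thresh
    out = main.copy()
    out[defect] = ref[defect]
    return out

# Demo: a "hot pixel" in the main image is repaired from the reference.
ref = np.linspace(0.0, 1.0, 16).reshape(4, 4)
main = ref.copy()
main[1, 2] = 5.0
fixed = correct_with_reference(main, ref)
```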
17 Oct 2005
TL;DR: An algorithm for non-negative 3D tensor factorization (NTF) establishes a local-parts feature decomposition from an object class of images and yields a decomposition superior to NMF on all fronts.
Abstract: We introduce an algorithm for non-negative 3D tensor factorization for the purpose of establishing a local-parts feature decomposition from an object class of images. In the past, such a decomposition was obtained using non-negative matrix factorization (NMF), where images were vectorized before being factored by NMF. A tensor factorization (NTF), on the other hand, preserves the 2D representations of images and provides a unique factorization (unlike NMF, which is not unique). The resulting "factors" from the NTF factorization are both sparse (as with NMF) and separable, allowing efficient convolution with the test image. Results show a decomposition superior to what NMF can provide on all fronts: degree of sparsity, absence of ghost residue due to invariant parts, and coding efficiency around an order of magnitude better. Experiments using the local-parts decomposition for face detection with SVM and AdaBoost classifiers demonstrate that the recovered features are discriminative and highly effective for classification.
204 citations
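The abstract contrasts NTF with the NMF baseline, in which vectorized images are factored as V ≈ WH with non-negative factors. A compact sketch of the standard Lee-Seung multiplicative updates for that baseline (the paper's own contribution is the tensor version, not shown here):

```python
import numpy as np

def nmf(V, r, iters=1000, eps=1e-9, seed=0):
    # Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0.
    # Positive initialization plus multiplicative updates keeps both
    # factors non-negative throughout.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, r))
    H = rng.uniform(0.1, 1.0, (r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Demo: recover an exactly rank-2 non-negative matrix.
rng = np.random.default_rng(3)
V = rng.uniform(0.0, 1.0, (12, 2)) @ rng.uniform(0.0, 1.0, (2, 10))
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The non-uniqueness the abstract mentions is visible here: different seeds yield different (W, H) pairs with essentially the same reconstruction.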