Topic
Content-based image retrieval
About: Content-based image retrieval is a research topic. Over the lifetime, 6916 publications have been published within this topic receiving 150696 citations. The topic is also known as: CBIR.
Papers published on a yearly basis
Papers
••
10 May 2016
TL;DR: A novel tool called F-search is presented that emphasizes the core strengths of LIRE, a Java library for visual information retrieval: lightness, speed, and accuracy.
Abstract: With an annual growth rate of 16.2% in photos taken per year, researchers predict an almost unbelievable 4.9 trillion stored images in 2017. Nearly 80% of these photos will be taken with mobile phones. To cope with this immense amount of visual data in a fast and accurate way, visual information retrieval systems are needed for various domains and applications. LIRE, short for Lucene Image Retrieval, is a lightweight and easy-to-use Java library for visual information retrieval. It allows developers and researchers to integrate common content-based image retrieval approaches into their applications and research projects. LIRE supports global and local image features and can cope with millions of images using approximate search and by distributing indexes on the cloud. In this demo we present a novel tool called F-search that emphasizes the core strengths of LIRE: lightness, speed and accuracy.
32 citations
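The retrieval loop that libraries such as LIRE implement at scale can be sketched in miniature. The snippet below is an illustrative toy, not LIRE's API (LIRE is a Java library): images are stood in for by hypothetical pre-extracted global feature vectors (e.g. normalized color histograms), and search is a linear scan rather than approximate or distributed indexing.

```python
import math

def euclidean(a, b):
    # distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(index, query, k=2):
    """Rank indexed images by feature distance to the query (linear scan)."""
    ranked = sorted(index.items(), key=lambda item: euclidean(item[1], query))
    return [name for name, _ in ranked[:k]]

# Hypothetical pre-extracted global features; names are illustrative only.
index = {
    "sunset.jpg": [0.8, 0.1, 0.1],
    "forest.jpg": [0.1, 0.8, 0.1],
    "beach.jpg":  [0.65, 0.2, 0.15],
}
query = [0.7, 0.15, 0.15]
print(search(index, query))  # -> ['beach.jpg', 'sunset.jpg']
```

Real systems replace the linear scan with approximate nearest-neighbor indexes to reach the millions-of-images scale mentioned above.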
••
TL;DR: A new texture descriptor is developed that combines the Local Ternary Pattern (LTP) and the gray level co-occurrence matrix (GLCM), inheriting the attributes of both LTP and GLCM.
32 citations
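One half of that descriptor, the GLCM, is simple enough to sketch: it counts how often pairs of gray levels co-occur at a fixed pixel offset. A minimal pure-Python version for a horizontal-neighbor offset is shown below (the LTP half and the paper's combination step are omitted; function names are illustrative).

```python
def glcm(image, levels, dx=1, dy=0):
    """Co-occurrence counts of gray-level pairs at offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # increment the count for (gray level here, gray level at offset)
                m[image[y][x]][image[ny][nx]] += 1
    return m

img = [
    [0, 0, 1],
    [1, 2, 2],
    [0, 1, 2],
]
print(glcm(img, levels=3))  # -> [[1, 2, 0], [0, 0, 2], [0, 0, 1]]
```

Texture statistics such as contrast or homogeneity are then computed from this matrix, typically after normalizing it to sum to one.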
••
07 Jun 1999
TL;DR: In this paper, the spatial color information is encoded using geometric triangulation, which is translation, rotation, and scale independent, and the concatenation of all these feature point histograms serves as the image index.
Abstract: The paper examines the use of a computational geometry based spatial color indexing methodology for efficient and effective image retrieval. In this scheme, an image is evenly divided into a number of M*N non-overlapping blocks, and each individual block is abstracted as a unique feature point labeled with its spatial location, dominant hue, and dominant saturation. For each set of feature points labeled with the same hue or saturation, we construct a Delaunay triangulation and then compute the feature point histogram by discretizing and counting the angles produced by this triangulation. The concatenation of all these feature point histograms serves as the image index. An important contribution of this work is to encode the spatial color information using geometric triangulation, which is translation, rotation, and scale independent. We have implemented the proposed approach and have tested it over two image collections of 2000 JPEG images and 1380 GIF images. Various experimental results demonstrate the efficacy of our techniques.
32 citations
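The angle-histogram step above can be sketched directly: given the triangles of a Delaunay triangulation (the triangulation itself is assumed to come from a geometry library and is supplied here as hardcoded point triples), compute each triangle's interior angles via the law of cosines and bin them. Because angles are unchanged by translation, rotation, and uniform scaling, so is the resulting histogram.

```python
import math

def triangle_angles(p, q, r):
    """Interior angles (degrees) of triangle pqr via the law of cosines."""
    a, b, c = math.dist(q, r), math.dist(p, r), math.dist(p, q)  # opposite p, q, r
    def angle(opp, s1, s2):
        return math.degrees(math.acos((s1**2 + s2**2 - opp**2) / (2 * s1 * s2)))
    return angle(a, b, c), angle(b, a, c), angle(c, a, b)

def angle_histogram(triangles, bin_width=10):
    """Discretize all triangle angles into bins of `bin_width` degrees."""
    bins = [0] * (180 // bin_width)
    for tri in triangles:
        for ang in triangle_angles(*tri):
            bins[min(int(ang // bin_width), len(bins) - 1)] += 1
    return bins

# Two right isosceles triangles standing in for a Delaunay triangulation
# of same-hue feature points (illustrative data, not from the paper).
tris = [((0, 0), (1, 0), (0, 1)), ((1, 0), (1, 1), (0, 1))]
print(angle_histogram(tris))
```

In the paper's scheme, one such histogram is built per hue/saturation label and the histograms are concatenated into the image index.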
••
TL;DR: A robust method is proposed that combines a convolutional neural network and sparse representation, in which deep features are extracted with the CNN and encoded via sparse representation to increase retrieval speed and accuracy.
Abstract: As stored data and images on memory disks increase, image retrieval has become a necessary task in image processing. Although many studies have been reported on this task so far, the semantic gap between low-level image features and human concepts is still an important challenge in content-based image retrieval. For this task, a robust method is proposed that combines a convolutional neural network (CNN) and sparse representation: deep features are extracted with the CNN and encoded via sparse representation to increase retrieval speed and accuracy. The proposed method has been tested on three common image retrieval databases, namely Corel, ALOI and MPEG7. By computing metrics such as P(0.5), P(1) and ANMRR, experimental results show that the proposed method achieves higher accuracy and better speed compared to state-of-the-art methods.
32 citations
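The sparse-representation step can be illustrated with greedy matching pursuit, a simple stand-in for whatever sparse coder the paper actually uses (which the abstract does not specify). The CNN feature extraction is assumed to have happened already; the "deep feature" and the dictionary below are toy data.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signal, atoms, n_iter=2):
    """Greedily approximate `signal` as a sparse combination of unit-norm atoms."""
    residual = list(signal)
    code = [0.0] * len(atoms)
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        best = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        coef = dot(residual, atoms[best])
        code[best] += coef
        residual = [r - coef * a for r, a in zip(residual, atoms[best])]
    return code

# Orthonormal toy dictionary and a toy "deep feature" vector to encode.
atoms = [[1.0, 0.0], [0.0, 1.0]]
feature = [3.0, 0.5]
print(matching_pursuit(feature, atoms))  # -> [3.0, 0.5]
```

The sparse codes, being mostly zero, make the subsequent similarity comparisons both faster and more discriminative, which is the speed/accuracy motivation stated in the abstract.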
••
03 Dec 2010
TL;DR: This is the first model to apply the Color and Edge Directivity Descriptor (CEDD), a multiple-feature extraction algorithm, to the high-level semantics extraction field; it also introduces a new padding strategy for region representation, especially suitable for widely-used non-arbitrary over-segmentation.
Abstract: Given an image, our proposed model can extract its dominant high-level semantic information through low-level feature extraction and image classification. It contains three main parts: image segmentation, feature extraction and classification. To our knowledge, this is the first model that applies the Color and Edge Directivity Descriptor (CEDD), a multiple-feature extraction algorithm, to the high-level semantics extraction field. Further, we also introduce a new padding strategy for region representation, which is especially suitable for widely-used non-arbitrary over-segmentation. Finally, our experiments show that CEDD performs as well as or better than the traditional texture-based Gabor method, while the new padding strategy outperforms other relevant methods.
32 citations