Topic

Content-based image retrieval

About: Content-based image retrieval is a research topic. Over its lifetime, 6,916 publications have been published within this topic, receiving 150,696 citations. The topic is also known as: CBIR.


Papers
Journal ArticleDOI
01 Dec 2000
TL;DR: A novel system, PicSOM, for content-based image retrieval in large, unannotated databases, based on tree-structured self-organizing maps (TS-SOMs), which implements a relevance feedback technique.
Abstract: We have developed a novel system for content-based image retrieval in large, unannotated databases. The system is called PicSOM, and it is based on tree structured self-organizing maps (TS-SOMs). Given a set of reference images, PicSOM is able to retrieve another set of images which are similar to the given ones. Each TS-SOM is formed with a different image feature representation like color, texture, or shape. A new technique introduced in PicSOM facilitates automatic combination of responses from multiple TS-SOMs and their hierarchical levels. This mechanism adapts to the user’s preferences in selecting which images resemble each other. Thus, the mechanism implements a relevance feedback technique on content-based image retrieval. The image queries are performed through the World Wide Web and the queries are iteratively refined as the system exposes more images to the user. © 2000 Elsevier Science B.V. All rights reserved.

261 citations
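The combination mechanism the abstract describes (relevance feedback spread over each TS-SOM surface and summed across feature maps) can be illustrated with a short sketch. This is a hypothetical, simplified rendering of the idea, not PicSOM's actual code: the grid size, the Gaussian smoothing, and the bmus_per_map / rank_candidates names are assumptions made for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def rank_candidates(bmus_per_map, relevant, nonrelevant, candidates,
                    grid_shape=(16, 16), sigma=1.5):
    """Relevance-feedback scoring over several self-organizing maps,
    in the spirit of the mechanism described in the abstract.

    bmus_per_map : list of dicts, one per feature map (e.g. color,
                   texture, shape), mapping image id -> (row, col)
                   best-matching unit on that map.
    relevant / nonrelevant : image ids the user has marked so far.
    candidates : image ids to score for the next query round.
    """
    total = {img: 0.0 for img in candidates}
    for bmus in bmus_per_map:
        surface = np.zeros(grid_shape)
        for img in relevant:
            surface[bmus[img]] += 1.0      # positive impulse at the BMU
        for img in nonrelevant:
            surface[bmus[img]] -= 1.0      # negative impulse at the BMU
        # Low-pass filtering spreads responses to neighbouring units, so
        # images mapped near relevant ones also gain score.
        surface = gaussian_filter(surface, sigma=sigma)
        for img in candidates:
            total[img] += surface[bmus[img]]
    # Highest combined score across all feature maps first.
    return sorted(candidates, key=lambda img: total[img], reverse=True)

The low-pass step is what lets images mapped near user-approved images on any of the feature maps gain score, which is the adaptive behaviour the abstract attributes to the system.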

Proceedings ArticleDOI
10 Oct 2004
TL;DR: This paper describes a method for generating metadata for photos using spatial, temporal, and social context and proposes that leveraging contextual metadata at the point of capture can address the problems of the semantic and sensory gaps.
Abstract: The recent popularity of mobile camera phones allows for new opportunities to gather important metadata at the point of capture. This paper describes a method for generating metadata for photos using spatial, temporal, and social context. We describe a system we implemented for inferring location information for pictures taken with camera phones, along with its performance evaluation. We propose that leveraging contextual metadata at the point of capture can address the problems of the semantic and sensory gaps. In particular, combining and sharing spatial, temporal, and social contextual metadata from a given user and across users allows us to make inferences about media content.

261 citations

Journal ArticleDOI
01 Dec 2005
TL;DR: A novel approach for texture image retrieval is proposed that jointly uses a set of dual-tree rotated complex wavelet filters (DT-RCWF) and the dual-tree complex wavelet transform (DT-CWT), obtaining texture features in 12 different directions.
Abstract: A new set of two-dimensional (2-D) rotated complex wavelet filters (RCWFs) is designed with complex wavelet filter coefficients, giving texture information strongly oriented in six different directions (45° apart from those of the complex wavelet transform). The 2-D RCWFs are nonseparable and oriented, which improves the characterization of oriented textures. Most texture image retrieval systems are still incapable of providing retrieval results with both high retrieval accuracy and low computational complexity. To address this problem, we propose a novel approach for texture image retrieval that jointly uses a set of dual-tree rotated complex wavelet filters (DT-RCWF) and the dual-tree complex wavelet transform (DT-CWT), which obtains texture features in 12 different directions. The information provided by the DT-RCWF complements the information generated by the DT-CWT. Features are obtained by computing the energy and standard deviation on each subband of the decomposed image. To check the retrieval performance, a texture database D1 of 1,856 textures from the Brodatz album and a database D2 of 640 texture images from the VisTex database are created. Experimental results indicate that the proposed method improves the retrieval rate from 69.61% to 77.75% on database D1, and from 64.83% to 82.81% on database D2, compared with the traditional discrete wavelet transform based approach. The proposed method also retains comparable levels of computational complexity.

259 citations
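The feature extraction step named in the abstract (energy and standard deviation per subband) is easy to sketch. The following is a rough illustration only: PyWavelets provides no dual-tree rotated complex wavelet filters, so a plain separable DWT stands in for the DT-CWT/DT-RCWF decomposition, and the Canberra-style distance in retrieve() is an assumed matching rule, not necessarily the paper's.

import numpy as np
import pywt

def subband_features(image, wavelet="db4", levels=3):
    """Per-subband energy and standard deviation, the feature pair the
    abstract describes. Sketch only: a separable DWT stands in for the
    paper's dual-tree complex wavelet decomposition."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are (LH, HL, HH) triples.
    subbands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
    feats = []
    for band in subbands:
        energy = np.mean(np.abs(band))       # normalized L1 energy
        feats.extend([energy, band.std()])   # energy + standard deviation
    return np.array(feats)

def retrieve(query_feats, db_feats, top_k=10):
    """Rank database images by a Canberra-like distance between feature
    vectors (a common choice in texture retrieval, assumed here)."""
    d = np.sum(np.abs(db_feats - query_feats) /
               (np.abs(db_feats) + np.abs(query_feats) + 1e-12), axis=1)
    return np.argsort(d)[:top_k]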

Proceedings ArticleDOI
24 Jun 2002
TL;DR: A local Fourier transform is adopted as a texture representation scheme, and eight characteristic maps describing different aspects of the co-occurrence relations of image pixels are derived in each channel of the (SVcosH, SVsinH, V) color space, resulting in a 48-dimensional feature vector.
Abstract: We adopt a local Fourier transform as a texture representation scheme and derive eight characteristic maps for describing different aspects of cooccurrence relations of image pixels in each channel of the (SVcosH, SVsinH, V) color space. Then we calculate the first and second moments of these maps as a representation of the natural color image pixel distribution, resulting in a 48-dimensional feature vector. The novel low-level feature is named color texture moments (CTM), which can also be regarded as a certain extension to color moments in eight aspects through eight orthogonal templates. Experiments show that this new feature can achieve good retrieval performance for CBIR.

259 citations
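The 48 dimensions come from 3 channels × 8 characteristic maps × 2 moments. The sketch below shows that bookkeeping, with the (SVcosH, SVsinH, V) transform taken directly from the abstract; the 8-point DFT over each pixel's 8-neighbourhood is only an illustrative stand-in for the paper's eight orthogonal local Fourier templates.

import numpy as np

def ctm_features(hsv):
    """Sketch of the color texture moments (CTM) bookkeeping:
    48 = 3 channels x 8 characteristic maps x 2 moments.

    hsv : float array of shape (H, W, 3) with hue in radians and
          saturation/value in [0, 1]."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # The (SVcosH, SVsinH, V) color space named in the abstract.
    channels = [s * v * np.cos(h), s * v * np.sin(h), v]

    # Offsets of the 8-neighbourhood, walked in a fixed circular order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    feats = []
    for ch in channels:
        # Stack the 8 shifted copies of the channel (borders cropped).
        neigh = np.stack([np.roll(ch, (-dy, -dx), axis=(0, 1))[1:-1, 1:-1]
                          for dy, dx in offsets], axis=-1)
        coeffs = np.fft.fft(neigh, axis=-1)    # 8 complex coefficients
        for k in range(8):                     # 8 characteristic maps
            m = np.abs(coeffs[..., k])
            feats.extend([m.mean(), m.std()])  # first and second moments
    return np.array(feats)                     # 48-dimensional vector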

Journal ArticleDOI
TL;DR: Experimental results on six test databases show that the proposed method yields higher retrieval accuracy than several conventional methods, even though its feature vector dimension is no higher than theirs.
Abstract: In this paper, we propose a content-based image retrieval method based on an efficient combination of multiresolution color and texture features. As its color features, color autocorrelograms of the hue and saturation component images in HSV color space are used. As its texture features, BDIP and BVLC moments of the value component image are adopted. The color and texture features are extracted in the multiresolution wavelet domain and combined. The dimension of the combined feature vector is determined at the point where the retrieval accuracy saturates. Experimental results on six test databases show that the proposed method yields higher retrieval accuracy than several conventional methods even though its feature vector dimension is no higher than theirs. In particular, it achieves noticeably better retrieval accuracy for queries and target images of various resolutions. In addition, the proposed method almost always shows a performance gain in precision versus recall and in ANMRR over the other methods.

255 citations
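A minimal sketch of the color half of this descriptor, an autocorrelogram of a single channel (the abstract applies it to the hue and saturation planes), is given below. The multiresolution wavelet decomposition and the BDIP/BVLC texture moments are omitted, and the bin count and distance set are arbitrary choices for illustration.

import numpy as np

def channel_autocorrelogram(channel, n_bins=8, distances=(1, 3, 5, 7)):
    """Single-channel autocorrelogram: for each quantized level c and
    distance d, estimate the probability that a pixel at distance d from
    a level-c pixel also has level c. Only horizontal/vertical neighbour
    pairs are used here, a cheap approximation of the usual
    chessboard-distance definition.

    channel : float array with values in [0, 1]."""
    q = np.clip((channel * n_bins).astype(int), 0, n_bins - 1)
    feats = []
    for d in distances:
        same = np.zeros(n_bins)
        total = np.zeros(n_bins)
        # Pixel pairs d apart along rows and along columns.
        for a, b in ((q[:, :-d], q[:, d:]), (q[:-d, :], q[d:, :])):
            for c in range(n_bins):
                mask = a == c
                total[c] += mask.sum()
                same[c] += np.count_nonzero(b[mask] == c)
        feats.append(same / np.maximum(total, 1))
    return np.concatenate(feats)   # n_bins * len(distances) values

In the full method the per-channel color vectors would be concatenated with the texture moments at each wavelet level, with the overall dimension cut off where retrieval accuracy saturates, as the abstract notes.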


Network Information
Related Topics (5)

Topic                           Papers    Citations    Relatedness
Feature extraction              111.8K    2.1M         90%
Feature (computer vision)       128.2K    1.7M         88%
Image segmentation              79.6K     1.8M         87%
Convolutional neural network    74.7K     2M           87%
Deep learning                   79.8K     2.1M         86%
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    58
2022    141
2021    180
2020    163
2019    224
2018    270