Topic: Content-based image retrieval

About: Content-based image retrieval is a research topic. Over its lifetime, 6,916 publications have been published within this topic, receiving 150,696 citations. The topic is also known as CBIR.


Papers
Journal Article
TL;DR: An IND-CPA-secure CBIR framework is proposed and implemented that performs image retrieval on the cloud without the user's constant interaction, together with a secure image similarity scoring protocol that enables the cloud servers to compare two images without learning any information about their deep features.
Abstract: With the tremendous growth of smart mobile devices, Content-Based Image Retrieval (CBIR) has become popular and has great market potential. Secure image retrieval has recently attracted considerable interest due to users' security concerns. However, it still suffers from the challenges of relieving mobile devices of excessive computation burdens, such as data encryption, feature extraction, and image similarity scoring. In this paper, we propose and implement an IND-CPA-secure CBIR framework that performs image retrieval on the cloud without the user's constant interaction. A pre-trained deep CNN model, VGG-16, is used to extract the deep features of an image. The information about the neural network is strictly concealed by utilizing a lattice-based homomorphic scheme. We implement a real-number computation mechanism and a divide-and-conquer CNN evaluation protocol to enable our framework to securely and efficiently evaluate the deep CNN with a large number of inputs. We further propose a secure image similarity scoring protocol, which enables the cloud servers to compare two images without knowing any information about their deep features. Comprehensive experimental results show that our framework is efficient and accurate.

28 citations
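Stripped of its cryptographic layer, the pipeline this abstract builds on reduces to extracting a VGG-16 feature vector per image and scoring the similarity between vectors. The sketch below shows only that plaintext analogue; the preprocessing, the choice of feature layer, and the cosine score are common-practice assumptions rather than the paper's exact protocol, and the lattice-based homomorphic evaluation is omitted entirely.

```python
# Plaintext sketch of the deep-feature pipeline: VGG-16 feature extraction
# plus a similarity score between two images. The paper evaluates this under
# lattice-based homomorphic encryption; that layer is not reproduced here.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Standard ImageNet preprocessing (an assumption; the paper may differ).
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def deep_feature(path: str) -> torch.Tensor:
    """Return a VGG-16 deep feature vector for one image (penultimate FC layer)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = vgg.features(x)
        x = vgg.avgpool(x).flatten(1)
        x = vgg.classifier[:-1](x)   # drop the final 1000-way classification layer
    return x.squeeze(0)

def similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity between two images' deep features."""
    fa, fb = deep_feature(path_a), deep_feature(path_b)
    return torch.nn.functional.cosine_similarity(fa, fb, dim=0).item()
```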

Proceedings Article
30 Jul 2000
TL;DR: A more sophisticated model for similarity judgments based on fuzzy measures and the Choquet integral is explored, and a suitable relevance-feedback algorithm is proposed; experiments show it is preferable to traditional weighted-average techniques.
Abstract: Relevance feedback is a technique for learning the user's subjective perception of similarity between images, and it has recently gained attention in content-based image retrieval (CBIR). Most relevance feedback methods assume that the individual features used in similarity judgments do not interact with each other. However, this assumption severely limits the types of similarity judgments that can be modeled. The authors explore a more sophisticated model for similarity judgments based on fuzzy measures and the Choquet integral, and propose a suitable algorithm for relevance feedback. Experimental results show that the proposed method is preferable to traditional weighted-average techniques. The proposed algorithm is being incorporated into a CBIR system developed at Korea Telecom.

28 citations
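The aggregation at the heart of this approach is the discrete Choquet integral of per-feature similarity scores with respect to a fuzzy measure, which lets interacting features be weighted jointly rather than independently as in a weighted average. Below is a minimal sketch of that integral; the feature names and measure values are illustrative, not taken from the paper.

```python
# Minimal sketch of Choquet-integral similarity aggregation: per-feature
# similarity scores are combined with respect to a fuzzy measure, so
# interactions between features can be modeled.
def choquet(scores: dict, mu: dict) -> float:
    """Discrete Choquet integral of `scores` with respect to fuzzy measure `mu`.

    scores : {criterion: value in [0, 1]}
    mu     : {frozenset of criteria: measure in [0, 1]},
             monotone, with mu(empty set) = 0 and mu(all criteria) = 1.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending values
    total, prev = 0.0, 0.0
    for i, (crit, val) in enumerate(items):
        remaining = frozenset(c for c, _ in items[i:])      # criteria scoring >= val
        total += (val - prev) * mu[remaining]
        prev = val
    return total

# Illustrative (hypothetical) fuzzy measure over two features:
features = frozenset({"color", "texture"})
mu = {
    frozenset(): 0.0,
    frozenset({"color"}): 0.3,
    frozenset({"texture"}): 0.4,
    features: 1.0,   # the pair is weighted more than the sum of its parts
}
print(choquet({"color": 0.8, "texture": 0.5}, mu))   # 0.59
```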

Journal Article
TL;DR: A multi-modal image search approach that exploits the hierarchical organization of modalities and employs both intra- and inter-modality fusion techniques to minimize the limitations of low-level feature representations in content-based image retrieval (CBIR).
Abstract: Images are frequently used in articles to convey essential information in context with correlated text. However, searching images in a task-specific way poses significant challenges. To minimize limitations of low-level feature representations in content-based image retrieval (CBIR), and to complement text-based search, we propose a multi-modal image search approach that exploits hierarchical organization of modalities and employs both intra- and inter-modality fusion techniques. For the CBIR search, several visual features were extracted to represent the images. Modality-specific information was used for similarity fusion and selection of a relevant image subset. Intra-modality fusion of retrieval results was performed by searching images for specific informational elements. Our methods use text extracted from relevant components in a document to create structured representations as "enriched citations" for the text-based search approach. Finally, the multi-modal search consists of a weighted linear combination of similarity scores of independent output results from textual and visual search approaches (inter-modality). Search results were evaluated using a standard ImageCLEFmed 2012 evaluation dataset of 300,000 images with associated annotations. We achieved a mean average precision (MAP) score of 0.2533, which is statistically significant and better in performance (7% improvement) over comparable results in ImageCLEFmed 2012.

28 citations
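The final inter-modality step described above is a weighted linear combination of similarity scores from the independent textual and visual searches. The sketch below illustrates only that fusion step; the weight value and the per-image scores are placeholders, and the intra-modality fusion and feature extraction are not shown.

```python
# Minimal sketch of inter-modality fusion: a weighted linear combination of
# similarity scores produced independently by a text search and a visual search.
def fuse(text_scores: dict, visual_scores: dict, w_text: float = 0.6) -> dict:
    """Linearly combine per-image similarity scores from two modalities.

    Images missing from one modality contribute a score of 0 for it.
    """
    images = set(text_scores) | set(visual_scores)
    return {
        img: w_text * text_scores.get(img, 0.0)
             + (1.0 - w_text) * visual_scores.get(img, 0.0)
        for img in images
    }

# Example: fuse two small result lists and rank the images by fused score.
fused = fuse({"img1": 0.9, "img2": 0.4}, {"img1": 0.5, "img3": 0.7})
ranking = sorted(fused, key=fused.get, reverse=True)
```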

Proceedings Article
06 May 2013
TL;DR: This work addresses the challenge of building a content-based image retrieval system by applying the MapReduce distributed computing model and the HDFS storage model, and confirms the feasibility and efficiency of applying CBIR to large medical image databases.
Abstract: Most medical images are now digitized and stored in large image databases, and retrieving the desired images has become a challenge. In this paper, we address the challenge of building a content-based image retrieval system by applying the MapReduce distributed computing model and the HDFS storage model. Two methods are used to characterize the content of images: the first is the BEMD-GGD method (Bidimensional Empirical Mode Decomposition with Generalized Gaussian density functions) and the second is the BEMD-HHT method (Bidimensional Empirical Mode Decomposition with the Huang-Hilbert Transform, HHT). To measure the similarity between images, we compute the distance between image signatures, using the Kullback-Leibler divergence (KLD) to compare the BEMD-GGD signatures and the Euclidean distance to compare the HHT signatures. Experiments on the DDSM mammography image database show that the results are promising and confirm the feasibility and efficiency of applying CBIR to large medical image databases.

28 citations
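Signature comparison in this system uses the Kullback-Leibler divergence for the BEMD-GGD signatures and the Euclidean distance for the HHT signatures. The sketch below shows those two distance measures over generic NumPy signature vectors, which is an assumption about the representation; the BEMD decomposition and the MapReduce/HDFS layer are not reproduced.

```python
# Minimal sketch of the two distance measures named in the abstract:
# KL divergence for histogram-style BEMD-GGD signatures and Euclidean
# distance for HHT signatures, both over plain NumPy vectors.
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL divergence D(p || q) between two non-negative signature vectors."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two HHT-style signature vectors."""
    return float(np.linalg.norm(a - b))

# Retrieval then amounts to ranking database images by distance to the query:
# a smaller KLD or Euclidean distance means a more similar image.
```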


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 90% related
Feature (computer vision): 128.2K papers, 1.7M citations, 88% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Convolutional neural network: 74.7K papers, 2M citations, 87% related
Deep learning: 79.8K papers, 2.1M citations, 86% related
Performance Metrics
Number of papers in the topic in previous years:
2023: 58
2022: 141
2021: 180
2020: 163
2019: 224
2018: 270