Author
Hajer Ayadi
Other affiliations: Keele University
Bio: Hajer Ayadi is an academic researcher from the University of Sfax. The author has contributed to research in topics including image retrieval and computer science, has an h-index of 3, and has co-authored 6 publications receiving 26 citations. Previous affiliations of Hajer Ayadi include Keele University.
Papers
TL;DR: This work proposes a novel reranking method for medical image retrieval based on medical-image-dependent features and shows that, compared to the BM25 model, the proposed method significantly enhances image retrieval performance.
Abstract: One of the main challenges in medical image retrieval is the increasing volume of image data, which renders it difficult for domain experts to find relevant information in large data sets. Effective and efficient medical image retrieval systems are required to better manage medical image information. Text-based image retrieval (TBIR) has been very successful in retrieving images with textual descriptions. Several TBIR approaches rely on bag-of-words models, in which the image retrieval problem turns into one of standard text-based information retrieval, and the meanings and values of specific medical entities in the text and metadata are ignored in the image representation and retrieval process. However, we believe that TBIR should extract specific medical entities and terms and then exploit these elements to achieve better image retrieval results. Therefore, we propose a novel reranking method based on medical-image-dependent features. These features are manually selected by a medical expert from imaging modalities and medical terminology. First, we represent queries and images using only medical-image-dependent features such as image modality and image scale. Second, we exploit the defined features in a new reranking method for medical image retrieval (see the sketch below). Our motivation is the large influence of image modality in medical image retrieval and its impact on image-relevance scores. To evaluate our approach, we performed a series of experiments on the medical ImageCLEF data sets from 2009 to 2013. The BM25 model, a language model, and an image-relevance feedback model are used as baselines. The experimental results show that, compared to the BM25 model, the proposed model significantly enhances image retrieval performance. We also compared our approach with other state-of-the-art approaches and show that it performs comparably to the top three runs in the official ImageCLEF competition.
9 citations
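The paper above does not publish code, but its central idea, boosting an image's relevance score when the image agrees with the query on medical-image-dependent features, can be illustrated with a minimal sketch. The feature names, the weight alpha, and the linear score combination below are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch (assumed formulation): combine an initial text-retrieval
# score (e.g., BM25) with a bonus for matching medical-image-dependent
# features such as image modality and image scale.

def feature_match_score(query_feats: dict, image_feats: dict) -> float:
    """Fraction of the query's medical features matched by the image."""
    if not query_feats:
        return 0.0
    matches = sum(1 for k, v in query_feats.items() if image_feats.get(k) == v)
    return matches / len(query_feats)

def rerank(results, query_feats, alpha=0.7):
    """results: list of (image_id, initial_score, image_feats) tuples;
    alpha weights the initial score against the feature-match bonus."""
    rescored = [
        (img, alpha * s + (1 - alpha) * feature_match_score(query_feats, f))
        for img, s, f in results
    ]
    return sorted(rescored, key=lambda x: x[1], reverse=True)

# A CT query promotes the CT image despite its lower initial text score.
results = [("img1", 0.90, {"modality": "MRI"}), ("img2", 0.85, {"modality": "CT"})]
print(rerank(results, {"modality": "CT"}))
```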
TL;DR: A list of generic and specific medical query features is defined and exploited in an association rule mining technique to discover correlations between query features and image retrieval models; experiments show that combining the proposed specific and generic query features is effective for query classification.
Abstract: The abundance of medical resources has encouraged the development of systems that allow efficient searches of large medical image data sets. State-of-the-art image retrieval models fall into three categories: content-based (visual) models, textual models, and combined models. Content-based models use visual features to answer image queries, textual image retrieval models use word matching to answer textual queries, and combined image retrieval models use both textual and visual features to answer queries. Nevertheless, most previous works in this field have used the same image retrieval model regardless of the query type. In this article, we define a list of generic and specific medical query features and exploit them in an association rule mining technique to discover correlations between query features and image retrieval models. Based on these rules, we propose an associative classifier (NaiveClass) to find the most suitable retrieval model for a new textual query (see the sketch below). We also propose a second associative classifier (SmartClass) to select the most appropriate default class for the query. Experiments are performed on Medical ImageCLEF queries from 2008 to 2012 to evaluate the impact of the proposed query features on classification performance. The results show that combining our proposed specific and generic query features is effective for query classification.
9 citations
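A minimal sketch of the associative-classification idea described above: rules learned by association rule mining map query-feature patterns to retrieval models, and a query is classified by its best matching rule, with a default class as fallback. The rules, feature names, confidence values, and default class below are hypothetical illustrations, not the rules mined in the paper.

```python
# Illustrative sketch of associative classification for retrieval-model
# selection. The rules stand in for what mining (query features ->
# retrieval model) might produce; they are hypothetical.

RULES = [
    # (antecedent feature constraints, retrieval model, rule confidence)
    ({"has_modality_term": True, "specificity": "high"}, "combined", 0.85),
    ({"has_modality_term": True}, "visual", 0.70),
    ({"ambiguity": "low"}, "textual", 0.60),
]

def classify(query_feats: dict, default: str = "textual") -> str:
    """Return the model predicted by the highest-confidence matching rule,
    falling back to a default class when no rule applies."""
    matching = [
        (model, conf)
        for antecedent, model, conf in RULES
        if all(query_feats.get(k) == v for k, v in antecedent.items())
    ]
    return max(matching, key=lambda x: x[1])[0] if matching else default

print(classify({"has_modality_term": True, "specificity": "high"}))  # combined
print(classify({"specificity": "low"}))                              # textual
```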
27 Oct 2013
TL;DR: This paper proposes a novel approach, based on association rule mining, for finding correlations between medical query features and retrieval models, together with an associative classifier that finds the most suitable rule with maximum feature coverage for a new query.
Abstract: The increasing quantity of available medical resources has motivated the development of effective search tools and medical decision support systems. Medical image search tools help physicians search medical image datasets to diagnose a disease or monitor its stage given a patient's previous image screenings. Image retrieval models are classified into three categories: content-based (visual), textual, and combined models. In most previous work, a single image retrieval model is applied to any user-formulated query, independently of which retrieval model best suits the information need behind the query. The main challenge in medical image retrieval is to bridge the semantic gap between user information needs and retrieval models. In this paper, we propose a novel approach for finding correlations between medical query features and retrieval models based on association rule mining. We define new medical-dependent query features, such as image modality and the presence of specific medical image terminology, and make use of existing generic query features such as query specificity, ambiguity, and cohesiveness. The proposed query features are then exploited in association rule mining to discover rules that correlate query features with visual, textual, or combined image retrieval models. Based on the discovered rules, we propose an associative classifier that finds the most suitable rule with maximum feature coverage for a new query (see the sketch below). Experiments are performed on ImageCLEF queries from 2008 to 2012, where we evaluate the impact of our proposed query features on classification performance. Results show that combining our proposed specific and generic query features is effective for classifying queries. A comparative study between our classifier, CBA, Naive Bayes, Bayes Net, and decision trees showed that our best-coverage associative classifier outperforms the existing classifiers, achieving an improvement of 30%.
8 citations
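The distinguishing step here is how a rule is selected: among rules whose antecedent the query satisfies, prefer the one constraining the most query features. The sketch below is an assumed interpretation of that "best coverage" criterion, with hypothetical rules.

```python
# Sketch of "best coverage" rule selection (assumed interpretation): among
# the mined rules whose antecedent is satisfied by the query, prefer the
# rule that constrains the most query features. The rules are hypothetical
# examples, not mined from ImageCLEF data.

RULES = [
    # (antecedent feature constraints, predicted retrieval model)
    ({"has_modality_term": True}, "visual"),
    ({"has_modality_term": True, "specificity": "high"}, "combined"),
    ({"ambiguity": "low"}, "textual"),
]

def classify_best_coverage(query_feats: dict, default: str = "textual") -> str:
    """Pick the matching rule covering the largest number of query features."""
    matching = [
        (model, len(antecedent))
        for antecedent, model in RULES
        if all(query_feats.get(k) == v for k, v in antecedent.items())
    ]
    return max(matching, key=lambda x: x[1])[0] if matching else default

# Both modality rules match, but the two-feature rule wins on coverage.
print(classify_best_coverage({"has_modality_term": True, "specificity": "high"}))
```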
08 Apr 2017
TL;DR: This work presents a list of specific medical features, such as image modality and image dimensionality, and constructs a Bayesian network that represents the relationships among these features in a given image collection.
Abstract: In this paper, we argue that representing queries and images with specific medical features helps bridge the gap between the user's information need and the searched images. Queries can be classified into three categories: textual, visual, and combined. In this work, we present a list of specific medical features, such as image modality and image dimensionality, and exploit them in a new medical image re-ranking method based on a Bayesian network. Using a learning algorithm, we construct a Bayesian network that represents the relationships among the specific features appearing in a given image collection; this network is then used as a thesaurus specific to that collection. The relevance of an image to a given query is obtained by means of an inference process through the Bayesian network. Finally, the images are re-ranked by combining their initial scores with the new scores (see the sketch below). Experiments are performed on Medical ImageCLEF datasets from 2009 to 2012, and the results show that our proposed model significantly enhances image retrieval performance compared with the BM25 model.
3 citations
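The paper learns the network structure from the image collection; the hand-rolled two-node network below (Modality -> Dimensionality) is only a stand-in showing how inference over feature relationships can yield a score that is mixed with the initial retrieval score. The structure, probabilities, mixing weight, and inference shortcut are all illustrative assumptions.

```python
# A hand-rolled two-node Bayesian network used as a stand-in for the
# learned network over specific medical features. Probabilities are
# illustrative, not learned from ImageCLEF data.

P_MODALITY = {"CT": 0.4, "MRI": 0.6}                      # P(M)
P_DIM_GIVEN_MOD = {                                       # P(D | M)
    ("2D", "CT"): 0.7, ("3D", "CT"): 0.3,
    ("2D", "MRI"): 0.5, ("3D", "MRI"): 0.5,
}

def joint(modality: str, dim: str) -> float:
    """P(M, D) by the chain rule on the two-node network."""
    return P_MODALITY[modality] * P_DIM_GIVEN_MOD[(dim, modality)]

def bn_score(image_feats: dict, query_feats: dict) -> float:
    """A relevance cue from inference: probability of the image's features,
    conditioning on the modality when the query shares it as evidence."""
    m, d = image_feats["modality"], image_feats["dim"]
    if query_feats.get("modality") == m:
        return P_DIM_GIVEN_MOD[(d, m)]   # evidence fixes the modality
    return joint(m, d)                   # no shared evidence: use the joint

def rerank_score(initial, image_feats, query_feats, alpha=0.6):
    """Combine the initial retrieval score with the network-derived score."""
    return alpha * initial + (1 - alpha) * bn_score(image_feats, query_feats)

print(rerank_score(0.8, {"modality": "CT", "dim": "2D"}, {"modality": "CT"}))
```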
22 Feb 2019
TL;DR: This paper proposes a re-ranking method using a CNN and Specific Medical Features (SMF) for text-based medical image retrieval and shows that the proposed approach significantly enhances image retrieval performance compared to several state-of-the-art models.
Abstract: With the proliferation of digital imaging data in hospitals, the amount of medical image data is increasing rapidly, and efficient retrieval systems are needed to find relevant information in large medical datasets. Convolutional Neural Network (CNN)-based models have proven effective in several areas, including medical image retrieval. Moreover, Text-Based Image Retrieval (TBIR) has been successful in retrieving images with textual descriptions. However, in TBIR, all queries and documents are processed without taking into account the influence of certain medical terminologies (Specific Medical Features, SMF) on retrieval performance. In this paper, we propose a re-ranking method using a CNN and SMF for text-based medical image retrieval. First, images (documents) and queries are indexed by specific medical image features. Second, the Word2vec tool is used to construct feature vectors for both documents and queries. These vectors are then integrated into a neural network, and a matching function is used to re-rank the documents obtained initially by a classical retrieval model (see the sketch below). To evaluate our approach, several experiments were carried out on Medical ImageCLEF datasets from 2009 to 2012. Results show that our proposed approach significantly enhances image retrieval performance compared to several state-of-the-art models.
1 citation
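The paper integrates the Word2vec vectors into a neural matching network; the sketch below keeps only the embedding step (gensim's Word2Vec) and replaces the learned matching function with plain cosine similarity between mean SMF vectors. The toy corpus, parameters, and mixing weight are illustrative assumptions.

```python
# Sketch of the embedding-based re-ranking step. The cosine-similarity
# matching is a simplification: the paper trains a neural matching
# function, which is not reproduced here.
from gensim.models import Word2Vec

# Toy corpus of specific-medical-feature term sequences from image metadata.
corpus = [
    ["ct", "chest", "axial"],
    ["mri", "brain", "sagittal"],
    ["xray", "chest", "frontal"],
]
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, seed=1)

def rerank_score(initial, query_terms, doc_terms, alpha=0.5):
    """Mix the initial retrieval score with the cosine similarity of the
    mean Word2vec embeddings of query and document SMF terms."""
    sim = model.wv.n_similarity(query_terms, doc_terms)
    return alpha * initial + (1 - alpha) * float(sim)

print(rerank_score(0.8, ["ct", "chest"], ["ct", "chest", "axial"]))
```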
Cited by
TL;DR: This paper proposes a new expansion method for medical text (query/document) based on retro-semantic mapping between textual terms and UMLS concepts relevant to medical image retrieval; the approach significantly improves retrieval accuracy and outperforms approaches offered in the literature.
Abstract: In the medical image retrieval literature, there are two main approaches: content-based retrieval, using the visual information contained in the image itself, and context-based retrieval, using the metadata and labels associated with the images. We present work that fits in the context-based category, where queries are composed of medical keywords and the documents are metadata that succinctly describe the medical images. A main difference between context-based image retrieval and textual document retrieval is that in image retrieval the narrative description is very brief and typically cannot describe the entire image content, thereby hurting retrieval quality. One solution offered in the literature is to add new relevant terms to both the query and the documents using expansion techniques. Nevertheless, the use of native terms to retrieve images has several disadvantages, such as term ambiguity. In fact, several studies have shown that mapping text to concepts can improve the semantic representation of textual information. However, the use of concepts in the retrieval process has its own problems, such as erroneous semantic relations between concepts in the semantic resource. In this paper, we propose a new expansion method for medical text (query/document) based on retro-semantic mapping between textual terms and UMLS concepts that are relevant in medical image retrieval. More precisely, we propose mapping the medical text of queries and documents to concepts and then applying a concept-selection method to keep only the most significant concepts. In this way, the most representative term (preferred name) identified in the UMLS for each selected concept is added to the initial text (see the sketch below). Experiments carried out with the ImageCLEF 2009 and 2010 datasets showed that the proposed approach significantly improves retrieval accuracy and outperforms the approaches offered in the literature.
16 citations
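The citing paper's pipeline (map text to UMLS concepts, keep the significant ones, append their preferred names) can be sketched as below. The term-to-concept table, CUIs, and significance scores are hypothetical stand-ins for a real UMLS lookup (e.g., MetaMap output), which this sketch does not perform.

```python
# Sketch of concept-based expansion (assumed pipeline): map terms to UMLS
# concepts, keep only the most significant ones, and append each kept
# concept's preferred name to the original text. The lookup table below is
# a hypothetical stand-in for a real UMLS mapping step.

# term -> (CUI, preferred name, significance score); entries are illustrative
TERM_TO_CONCEPT = {
    "heart attack": ("C0027051", "Myocardial Infarction", 0.92),
    "ct": ("C0040405", "X-Ray Computed Tomography", 0.88),
    "scan": ("C0034606", "Radionuclide Imaging", 0.35),
}

def expand(text: str, threshold: float = 0.5) -> str:
    """Append preferred names of significant concepts found in the text."""
    added = [
        preferred
        for term, (cui, preferred, score) in TERM_TO_CONCEPT.items()
        if term in text.lower() and score >= threshold
    ]
    return text + " " + " ".join(added) if added else text

print(expand("CT images of heart attack patients"))
# -> "CT images of heart attack patients Myocardial Infarction X-Ray Computed Tomography"
```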