Author

Mouna Torjmen-Khemakhem

Bio: Mouna Torjmen-Khemakhem is an academic researcher from the University of Sfax. The author has contributed to research in the topics of image retrieval and visual words, has an h-index of 4, and has co-authored 7 publications receiving 37 citations.

Papers
Journal ArticleDOI
TL;DR: This paper proposes a new expansion method for medical text (query/document) based on retro-semantic mapping between textual terms and UMLS concepts relevant to medical image retrieval; experiments show that the method significantly improves retrieval accuracy and outperforms the approaches offered in the literature.
Abstract: In the medical image retrieval literature, there are two main approaches: content-based retrieval using the visual information contained in the image itself and context-based retrieval using the metadata and the labels associated with the images. We present a work that fits in the context-based category, where queries are composed of medical keywords and the documents are metadata that succinctly describe the medical images. A main difference between the context-based image retrieval approach and the textual document retrieval is that in image retrieval the narrative description is very brief and typically cannot describe the entire image content, thereby negatively affecting the retrieval quality. One of the solutions offered in the literature is to add new relevant terms to both the query and the documents using expansion techniques. Nevertheless, the use of native terms to retrieve images has several disadvantages such as term-ambiguities. In fact, several studies have proved that mapping text to concepts can improve the semantic representation of the textual information. However, the use of concepts in the retrieval process has its own problems such as erroneous semantic relations between concepts in the semantic resource. In this paper, we propose a new expansion method for medical text (query/document) based on retro-semantic mapping between textual terms and UMLS concepts that are relevant in medical image retrieval. More precisely, we propose mapping the medical text of queries and documents into concepts and then applying a concept-selection method to keep only the most significant concepts. In this way, the most representative term (preferred name) identified in the UMLS for each selected concept is added to the initial text. Experiments carried out with ImageCLEF 2009 and 2010 datasets showed that the proposed approach significantly improves the retrieval accuracy and outperforms the approaches offered in the literature.
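The expansion pipeline described above (map the text to concepts, keep only the most significant concepts, then append each kept concept's preferred name to the original text) can be sketched roughly as follows. The term-to-concept table, the concept identifiers, and the significance scores are illustrative stand-ins for MetaMap/UMLS output, not the paper's actual resources:

```python
# Assumed mini "UMLS": term -> (concept id, preferred name). Identifiers
# and names are invented for illustration.
TERM_TO_CONCEPT = {
    "xray": ("C0000001", "radiography"),
    "chest": ("C0000002", "thorax"),
    "tumor": ("C0000003", "neoplasm"),
}

def expand(text, concept_scores, threshold=0.5):
    """Append the preferred name of each significant mapped concept."""
    added = []
    for term in text.lower().split():
        if term in TERM_TO_CONCEPT:
            cui, preferred = TERM_TO_CONCEPT[term]
            # concept selection: keep only concepts scored above the threshold
            if concept_scores.get(cui, 0.0) >= threshold and preferred not in added:
                added.append(preferred)
    return text + " " + " ".join(added) if added else text

query = "chest xray tumor"
scores = {"C0000001": 0.9, "C0000002": 0.3, "C0000003": 0.8}
print(expand(query, scores))  # -> "chest xray tumor radiography neoplasm"
```

Only the concepts scored above the threshold contribute their preferred names, which is the "concept-selection" step the abstract describes; the scoring function itself is left abstract here.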

16 citations

Journal ArticleDOI
TL;DR: This work proposes a novel reranking method for medical image retrieval based on medical-image-dependent features, and shows that, compared to the BM25 model, the proposed model significantly enhances image retrieval performance.
Abstract: One of the main challenges in medical image retrieval is the increasing volume of image data, which renders it difficult for domain experts to find relevant information in large data sets. Effective and efficient medical image retrieval systems are required to better manage medical image information. Text-based image retrieval (TBIR) has been very successful in retrieving images with textual descriptions. Several TBIR approaches rely on bag-of-words models, in which the image retrieval problem turns into one of standard text-based information retrieval, where the meanings and values of specific medical entities in the text and metadata are ignored in the image representation and retrieval process. However, we believe that TBIR should extract specific medical entities and terms and then exploit these elements to achieve better image retrieval results. Therefore, we propose a novel reranking method based on medical-image-dependent features. These features are manually selected by a medical expert from imaging modalities and medical terminology. First, we represent queries and images using only medical-image-dependent features such as image modality and image scale. Second, we exploit the defined features in a new reranking method for medical image retrieval. Our motivation is the large influence of image modality in medical image retrieval and its impact on image-relevance scores. To evaluate our approach, we performed a series of experiments on the medical ImageCLEF data sets from 2009 to 2013. The BM25 model, a language model, and an image-relevance feedback model are used as baselines. The experimental results show that, compared to the BM25 model, the proposed model significantly enhances image retrieval performance. We also compared our approach with other state-of-the-art approaches and show that it performs comparably to the top three runs in the official ImageCLEF competition.
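As a rough illustration of the reranking idea, here is a minimal sketch in which a BM25 result list is rescored using a single medical-image-dependent feature, image modality. The boost factor, the tuple layout, and the multiplicative scheme are assumptions made for the sketch, not the paper's actual scoring function:

```python
def rerank(results, query_modality, boost=0.5):
    """Rescore a BM25 result list: boost images whose detected modality
    matches the modality asked for in the query.

    results: list of (doc_id, bm25_score, modality) tuples (assumed layout).
    """
    rescored = []
    for doc_id, score, modality in results:
        if modality == query_modality:
            score *= (1.0 + boost)  # modality match raises the relevance score
        rescored.append((doc_id, score))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

results = [("img1", 2.0, "MRI"), ("img2", 1.8, "CT"), ("img3", 1.5, "CT")]
reranked = rerank(results, "CT")
# the two CT images now outrank the higher-BM25 MRI image
```

This captures the motivation stated above: modality has a large influence on image-relevance scores, so a modality match is allowed to override a small BM25 deficit.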

9 citations

Journal ArticleDOI
TL;DR: A list of generic and specific medical query features is defined and exploited in an association rule mining technique to discover correlations between query features and image retrieval models; the results show that combining the proposed specific and generic query features is effective for query classification.
Abstract: The abundance of medical resources has encouraged the development of systems that allow for efficient searches of information in large medical image data sets. State-of-the-art image retrieval models fall into three categories: content-based (visual) models, textual models, and combined models. Content-based models use visual features to answer image queries, textual image retrieval models use word matching to answer textual queries, and combined image retrieval models use both textual and visual features to answer queries. Nevertheless, most previous work in this field has used the same image retrieval model independently of the query type. In this article, we define a list of generic and specific medical query features and exploit them in an association rule mining technique to discover correlations between query features and image retrieval models. Based on these rules, we propose an associative classifier (NaiveClass) to find the most suitable retrieval model for a new textual query. We also propose a second associative classifier (SmartClass) to select the most appropriate default class for the query. Experiments were performed on medical ImageCLEF queries from 2008 to 2012 to evaluate the impact of the proposed query features on classification performance. The results show that combining the proposed specific and generic query features is effective for query classification.
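The associative-classification step can be illustrated with a toy sketch: hand-written (antecedent, model) pairs stand in for mined association rules, the most specific matching rule fires, and a default class covers queries that no rule matches (loosely mirroring the role of the default class selected by SmartClass). All feature names and rules here are invented for illustration:

```python
# (antecedent feature set, retrieval model) pairs standing in for mined rules
RULES = [
    ({"has_modality_term", "short_query"}, "visual"),
    ({"has_anatomy_term"}, "textual"),
    ({"has_modality_term", "has_anatomy_term"}, "combined"),
]

def classify(features, default="textual"):
    """Fire the most specific rule whose antecedent is contained in the
    query's feature set; fall back to the default class otherwise."""
    best, best_len = default, 0
    for antecedent, model in RULES:
        if antecedent <= features and len(antecedent) > best_len:
            best, best_len = model, len(antecedent)
    return best

print(classify({"has_modality_term", "has_anatomy_term"}))  # combined
print(classify({"unseen_feature"}))                         # textual (default)
```

The "most specific rule wins" tie-break is one simple choice; rule-confidence ordering, as is usual in associative classification, would be another.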

9 citations

Journal ArticleDOI
TL;DR: This work proposes dividing the document into regions based on the document structure and image position, and weighting the links between these regions according to their hierarchical positions, in order to distinguish useful links from those that are not.
Abstract: In this paper, we are interested in XML multimedia retrieval, whose aim is to find relevant multimedia objects such as images, audio, and video through their context, such as the document structure. In context-based multimedia retrieval, the most common technique relies on the text surrounding the image. However, such textual information can be irrelevant to the image content, so many works turn to alternative techniques to extend the image description, such as ontologies, relevance feedback, and user profiles. In this work, we study the use of links between XML elements to improve image retrieval. More precisely, we propose dividing the document into regions based on the document structure and image position. We then weight the links between these regions according to their hierarchical positions, in order to distinguish between links that are useful and those that are not. Finally, we apply an updated version of the HITS algorithm at the region level and compute a final image score by combining link scores with initial image scores. Experiments on the INEX 2006 and 2007 multimedia tracks showed the potential of our method.
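A minimal sketch of region-level HITS with weighted links, in plain Python: the region graph, the link weights, and the score-combination weights are made up for illustration, and the update is the standard hub/authority iteration with per-iteration L2 normalisation rather than the paper's exact variant:

```python
def weighted_hits(weights, n, iters=50):
    """HITS on a weighted region graph.

    weights[(i, j)] is the weight of the link from region i to region j
    (hierarchy-dependent in the approach described above).
    Returns (authority, hub) score lists, L2-normalised each iteration.
    """
    auth = [1.0] * n
    hub = [1.0] * n
    for _ in range(iters):
        # authority: weighted sum of hub scores over incoming links
        auth = [sum(w * hub[i] for (i, j), w in weights.items() if j == k)
                for k in range(n)]
        # hub: weighted sum of authority scores over outgoing links
        hub = [sum(w * auth[j] for (i, j), w in weights.items() if i == k)
               for k in range(n)]
        na = sum(a * a for a in auth) ** 0.5 or 1.0
        nh = sum(h * h for h in hub) ** 0.5 or 1.0
        auth = [a / na for a in auth]
        hub = [h / nh for h in hub]
    return auth, hub

# toy document with 3 regions; a heavier weight means the regions are
# closer in the document hierarchy (illustrative values)
weights = {(0, 1): 1.0, (0, 2): 0.5, (1, 2): 1.0}
auth, hub = weighted_hits(weights, 3)

# final image score: mix the link-based authority score with the image's
# initial retrieval score (the 0.6/0.4 mix is an arbitrary choice here)
initial = [0.2, 0.5, 0.3]
final = [0.6 * s + 0.4 * a for s, a in zip(initial, auth)]
```

Region 2 receives the most heavily weighted incoming links, so it ends up with the highest authority score, which is exactly the signal the method feeds back into the image score.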

7 citations

01 Jan 2012
TL;DR: In this paper, the effectiveness of conceptual indexing is compared with word indexing in medical image retrieval. The results show that word indexing is more useful than conceptual indexing, although conceptual indexing improves the results of some queries.

Abstract: This paper presents our participation in the medical image retrieval task of ImageCLEF 2012. Our aim is to study the effectiveness of conceptual indexing compared to word indexing in medical image retrieval. For this aim, we used, on the one hand, the Terrier tool for textual indexing and retrieval, and on the other hand, the MetaMap tool for conceptual indexing and the vector model for conceptual retrieval. More precisely, the run of the BM25 model is considered as a baseline. For textual indexing, we compared different weighting formulas; for conceptual indexing, we used the BM25 model results to extract concepts and reranked the results using the vector model. The results show that textual indexing is more useful than conceptual indexing. However, conceptual indexing improves the results of some queries, which encourages us to continue studying conceptual indexing and retrieval.
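The concept-based reranking step (BM25 results rescored with a vector-model similarity over extracted concepts) might look like the following sketch. The sparse-dict concept vectors, the blending weight alpha, and the assumption that the BM25 scores are already normalised into a comparable range are all illustrative choices, not the paper's actual setup:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse concept vectors (dicts)."""
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank_by_concepts(bm25_results, query_concepts, doc_concepts, alpha=0.7):
    """Blend each document's BM25 score with its concept-space similarity
    to the query, then re-sort (alpha is an assumed mixing weight)."""
    rescored = [(doc, alpha * score
                 + (1 - alpha) * cosine(query_concepts, doc_concepts[doc]))
                for doc, score in bm25_results]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

query_concepts = {"C1": 1.0, "C2": 1.0}
doc_concepts = {"d1": {"C1": 1.0}, "d2": {"C3": 1.0}}
reranked = rerank_by_concepts([("d1", 0.5), ("d2", 0.6)], query_concepts, doc_concepts)
# d1 shares a concept with the query, so it overtakes the higher-BM25 d2
```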

3 citations


Cited by
Proceedings Article
01 Jan 2012
TL;DR: The ninth edition of the ImageCLEF medical image retrieval and classification tasks was organized in 2012, using a larger collection of over 300,000 images than in 2011, mainly adding complexity.

Abstract: The ninth edition of the ImageCLEF medical image retrieval and classification tasks was organized in 2012. A subset of the open access collection of PubMed Central was used as the database in 2012, with over 300,000 images, a larger number than in 2011. As in previous years, there were three subtasks: modality classification, image-based retrieval, and case-based retrieval. A new hierarchy of article figures was created for the modality classification task. Modality detection could be one of the most important filters to limit the search and focus the result sets. The goals of the image-based and case-based retrieval tasks were similar to those of 2011, mainly adding complexity. The number of groups submitting runs remained stable at 17, with the number of submitted runs remaining roughly the same at 202 (207 in 2011). Of these, 122 were image-based retrieval runs, 37 were case-based runs, and the remaining 43 were modality classification runs. Depending on the exact nature of the task, visual, textual, or multimodal approaches performed better.

115 citations

Book
22 Jul 2020

34 citations

Journal ArticleDOI
TL;DR: A novel relevance feedback retrieval method (RFRM) for CBMIR is proposed and implemented on the Kvasir dataset, which contains 4,000 images divided into eight classes and has recently been widely used for gastrointestinal disease detection.

Abstract: Content-based medical image retrieval (CBMIR) is a technique for retrieving medical images on the basis of automatically derived image features such as colour, texture, and shape. There are many applications of CBMIR, such as teaching, research, diagnosis, and electronic patient records. The retrieval performance of a CBMIR system depends mainly on the representation of image features, which researchers have studied extensively for decades. Although a number of methods and approaches have been suggested, it remains one of the most challenging problems in current CBMIR studies, largely due to the well-known "semantic gap" between machine-captured low-level image features and human-perceived high-level semantic concepts. Many techniques have been proposed to bridge this gap. This study proposes a novel relevance feedback retrieval method (RFRM) for CBMIR. The feedback implemented here is based on voting values contributed by each class in the image repository. Eighteen features based on colour moments and GLCM texture were extracted to represent each image, and eight common similarity coefficients were used as similarity measures. After a brief search using a single random image query, the top images retrieved from each class are used as voters to select the most effective similarity coefficient, which is then used for the final search process. Our proposed method is implemented on the Kvasir dataset, which contains 4,000 images divided into eight classes and has recently been widely used for gastrointestinal disease detection. Intensive statistical analysis of the results shows that the proposed RFRM method gives the best performance for enhancing both recall and precision when it uses any group of similarity coefficients.
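The class-voting idea behind RFRM can be sketched as follows: each candidate similarity coefficient ranks the repository for a probe query, the class labels of its top-k results vote, and the coefficient whose top-k list is most dominated by a single class is chosen for the final search. The coefficient names, class labels, and rankings below are invented for illustration, not taken from the paper:

```python
from collections import Counter

def best_coefficient(rankings, labels, k=5):
    """Pick the similarity coefficient whose top-k retrieved images agree
    most on a single class (the class labels act as voters).

    rankings: {coefficient name: list of image ids, best first}
    labels:   {image id: class label}
    """
    scores = {}
    for coeff, ranked in rankings.items():
        votes = Counter(labels[img] for img in ranked[:k])
        scores[coeff] = votes.most_common(1)[0][1]  # size of the biggest class bloc
    return max(scores, key=scores.get)

# toy repository: images 0-4 are "polyp", 5-9 are "normal"
labels = {i: ("polyp" if i < 5 else "normal") for i in range(10)}
rankings = {
    "manhattan": [0, 1, 2, 3, 9],  # 4 of its top 5 agree on one class
    "euclidean": [0, 9, 1, 8, 7],  # mixed classes in the top 5
}
print(best_coefficient(rankings, labels))  # manhattan
```

A coefficient whose top results cluster in one class is treated as the most discriminative, which is the relevance signal the feedback loop exploits.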

22 citations

Journal ArticleDOI
TL;DR: The results indicated that the proposed MIRS is robust and efficient for different medical image databases, owing to the advantage of dividing the image into blocks so that each block can be retrieved separately according to its variance.

Abstract: This paper presents a proposed method for medical image retrieval that searches a database for images similar to a query image. The proposed Medical Image Retrieval System (MIRS) consists of two phases: an enrollment phase and a querying phase. In the enrollment phase, the Discrete Wavelet Transform (DWT) coefficients are computed for every incoming image. Four wavelet types (Haar, Daubechies, Coiflet, and Symlet) with different decomposition levels were tested and compared in order to determine the most suitable wavelet type for the retrieval approach. Then, the Block Truncation Codes (BTCs) are extracted from the wavelet coefficients. To make the proposed image retrieval system robust, the BTC is made adaptive by dividing the image into sub-blocks using one of four scanning methods: raster, zigzag, Morton, or Hilbert scanning. Finally, the extracted codes are stored in a feature-vector database. In the querying phase, the BTCs are extracted from the wavelet coefficients of the query image. The similarity between the feature vector of the query image and the feature vectors stored in the database is measured using 8 different distance metrics to select the most suitable one. The proposed MIRS was tested with a medical image database consisting of 7500 CT brain images collected from a teaching hospital in Egypt. The results demonstrated that the proposed approach gives good results when the BTCs are extracted, with Morton scanning, from the DB2 DWT coefficients. Moreover, the Manhattan distance achieved the best similarity measurement results. The performance of the proposed MIRS was compared with published medical image retrieval approaches for the VIA-ELCAP and Kvasir databases. The results indicated that the proposed MIRS is robust and efficient for different medical image databases, owing to the advantage of dividing the image into blocks so that each block can be retrieved separately according to its variance.
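A toy end-to-end sketch of the enrollment/querying idea, assuming a one-level Haar transform along the rows in place of the full 2-D DWT with multiple wavelet families, and a single BTC block instead of the adaptive sub-block scanning; the stored codes and image values are made up for illustration:

```python
def haar_rows(img):
    """One decomposition level of a 1-D Haar transform along each row:
    pairwise averages (approximation) followed by pairwise differences
    (detail). Row length is assumed even."""
    out = []
    for r in img:
        avg = [(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)]
        det = [(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)]
        out.append(avg + det)
    return out

def btc_code(block):
    """Block Truncation Code: 1 where a coefficient is >= the block mean,
    0 elsewhere; the bitmap serves as a compact feature vector."""
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v >= mean else 0 for v in flat]

def manhattan(a, b):
    """Manhattan (L1) distance, the metric the paper found best."""
    return sum(abs(x - y) for x, y in zip(a, b))

# querying phase: encode the query, then find the nearest stored code
query = [[1, 3, 2, 2], [5, 1, 4, 0]]
code_q = btc_code(haar_rows(query))          # -> [1, 1, 0, 0, 1, 1, 1, 1]
stored = {"scan_a": [1, 1, 0, 0, 1, 1, 1, 1],  # hypothetical enrolled codes
          "scan_b": [0, 0, 1, 1, 0, 0, 0, 0]}
nearest = min(stored, key=lambda k: manhattan(stored[k], code_q))  # scan_a
```

The real system computes BTCs per sub-block along a Morton (or other) scan order and picks among 8 distance metrics; this sketch only shows how the three stages compose.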

17 citations