Author
P. Poornachander
Bio: P. Poornachander is an academic researcher from the Indian Institute of Technology Delhi. The author has contributed to research in topics including Ontology (information science) and Ranking (information retrieval), has an h-index of 2, and has co-authored 2 publications receiving 14 citations.
Papers
28 Sep 2007
TL;DR: A reinforcement learning algorithm is proposed for updating the parameters of a Bayesian Network from the implicit feedback in clickthrough data, providing personalized ranking of results in a video retrieval system.
Abstract: This paper proposes a new method for using implicit user feedback from clickthrough data to provide personalized ranking of results in a video retrieval system. The annotation-based search is complemented with a feature-based ranking in our approach. The ranking algorithm uses belief revision in a Bayesian Network, which is derived from a multimedia ontology that captures the probabilistic association of a concept with expected video features. We have developed a content model for videos using discrete feature states to enable Bayesian reasoning and to alleviate on-line feature processing overheads. We propose a reinforcement learning algorithm for learning the parameters of the Bayesian Network from the implicit feedback obtained from the clickthrough data.
11 citations
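The following Python sketch illustrates, under simplifying assumptions, the general idea behind the paper above: conditional probability tables P(feature state | concept) are nudged toward the discrete feature states of clicked videos, as a stand-in for the belief revision and reinforcement learning on the Bayesian Network parameters. The concept names, feature states, and learning rate are illustrative assumptions, not values from the paper.

from collections import defaultdict

class ConceptFeatureModel:
    """Conditional tables P(feature state | concept) over discrete video features."""

    def __init__(self, feature_states, learning_rate=0.1):
        # feature_states: dict mapping feature name -> list of possible discrete states
        self.feature_states = feature_states
        self.learning_rate = learning_rate
        self.tables = defaultdict(self._uniform_table)  # one table per concept

    def _uniform_table(self):
        return {f: {s: 1.0 / len(states) for s in states}
                for f, states in self.feature_states.items()}

    def score(self, concept, video_features):
        # Naive product of per-feature beliefs that the video matches the concept.
        table = self.tables[concept]
        p = 1.0
        for f, state in video_features.items():
            p *= table[f].get(state, 1e-6)
        return p

    def reinforce(self, concept, clicked_video_features):
        # Move P(state | concept) toward the states observed in a clicked result.
        table = self.tables[concept]
        lr = self.learning_rate
        for f, state in clicked_video_features.items():
            for s in table[f]:
                target = 1.0 if s == state else 0.0
                table[f][s] += lr * (target - table[f][s])
            z = sum(table[f].values())          # renormalize to a valid distribution
            for s in table[f]:
                table[f][s] /= z

# Toy usage: one click on a result for the (hypothetical) concept "cricket".
model = ConceptFeatureModel({"motion": ["low", "high"], "color": ["green", "grey"]})
model.reinforce("cricket", {"motion": "high", "color": "green"})
print(model.score("cricket", {"motion": "high", "color": "green"}))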
02 Nov 2007
TL;DR: A novel content-based re-ranking scheme for enhancing the precision of video retrieval on the Web that effectively re-ranks results for new text queries submitted to the video retrieval system, leading to better satisfaction of the users' information need.
Abstract: We present a novel content-based re-ranking scheme for enhancing the precision of video retrieval on the Web. We use ontology specified knowledge of the video domain to map user queries to domain-based concepts. The user preferences are learned implicitly from the web logs of users' interaction with a video search engine. A ranking SVM is trained for each concept to learn the ranking function which incorporates user preferences for the concept. The videos are represented by a set of ingeniously derived content-based features which are based on MPEG-7 descriptors. Our re-ranking scheme thus effectively re-ranks results for new text queries submitted to our video retrieval system, leading to better satisfaction of the users' information need.
4 citations
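A minimal sketch of the per-concept ranking step described above, assuming scikit-learn is available: a linear SVM is trained on pairwise difference vectors formed from clicked versus skipped results (the standard ranking-SVM reduction), with random vectors standing in for the MPEG-7-derived content features. This is an illustration of the technique, not the authors' implementation.

import numpy as np
from sklearn.svm import LinearSVC

def pairwise_differences(preferred, non_preferred):
    """Build (x_i - x_j, +1) and (x_j - x_i, -1) training pairs."""
    X, y = [], []
    for xi in preferred:
        for xj in non_preferred:
            X.append(xi - xj); y.append(1)
            X.append(xj - xi); y.append(-1)
    return np.array(X), np.array(y)

# Toy data: videos the user clicked vs. skipped for one concept.
clicked = np.random.rand(5, 8)      # 5 clicked videos, 8-dim content features
skipped = np.random.rand(7, 8)      # 7 skipped videos

X, y = pairwise_differences(clicked, skipped)
ranker = LinearSVC(C=1.0).fit(X, y)

# Re-rank new results for this concept by the learned linear scoring function.
candidates = np.random.rand(10, 8)
scores = candidates @ ranker.coef_.ravel()
order = np.argsort(-scores)
print(order)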
Cited by
01 Nov 2011
TL;DR: This tutorial surveys methods for video structure analysis (shot boundary detection, key frame extraction, and scene segmentation), feature extraction (static key frame features, object features, and motion features), video data mining, video annotation, and video retrieval, including query interfaces.
Abstract: Video indexing and retrieval have a wide spectrum of promising applications, motivating the interest of researchers worldwide. This paper offers a tutorial and an overview of the landscape of general strategies in visual content-based video indexing and retrieval, focusing on methods for video structure analysis, including shot boundary detection, key frame extraction and scene segmentation, extraction of features including static key frame features, object features and motion features, video data mining, video annotation, video retrieval including query interfaces, similarity measure and relevance feedback, and video browsing. Finally, we analyze future research directions.
606 citations
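As a concrete example of one of the structure-analysis steps the survey above covers, the sketch below detects shot boundaries from frame-to-frame color histogram differences. The bin count and threshold are assumptions chosen for illustration only.

import numpy as np

def histogram(frame, bins=16):
    """Per-channel intensity histogram, concatenated and L1-normalized."""
    h = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
         for c in range(frame.shape[-1])]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def shot_boundaries(frames, threshold=0.4):
    """Indices where consecutive histograms differ by more than the threshold."""
    boundaries = []
    prev = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = histogram(frame)
        if 0.5 * np.abs(cur - prev).sum() > threshold:   # total variation distance
            boundaries.append(i)
        prev = cur
    return boundaries

# Toy clip: 10 dark frames followed by 10 bright frames -> one cut at index 10.
dark = np.zeros((10, 32, 32, 3), dtype=np.uint8)
bright = np.full((10, 32, 32, 3), 220, dtype=np.uint8)
print(shot_boundaries(np.concatenate([dark, bright])))  # -> [10]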
TL;DR: This survey reviews the features that can be extracted from video data for indexing and retrieval, along with similarity measurement methods, and identifies current research issues in the area of content-based video retrieval systems.
Abstract: With the development of multimedia data types and available bandwidth, there is a huge demand for video retrieval systems, as users shift from text-based retrieval systems to content-based retrieval systems. The selection of extracted features plays an important role in content-based video retrieval regardless of the video attributes under consideration. These features are intended for selecting, indexing, and ranking according to their potential interest to the user. Good feature selection also allows the time and space costs of the retrieval process to be reduced. This survey reviews the interesting features that can be extracted from video data for indexing and retrieval, along with similarity measurement methods. We also identify current research issues in the area of content-based video retrieval systems.
90 citations
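To make the similarity-measurement side of such surveys concrete, the sketch below combines histogram intersection on a color descriptor with cosine similarity on a motion descriptor. The feature layout and weights are assumptions; real systems would use richer descriptors.

import numpy as np

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())   # both histograms assumed L1-normalized

def cosine_similarity(v1, v2):
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))

def video_similarity(a, b, w_color=0.6, w_motion=0.4):
    """a, b: dicts with a normalized 'color' histogram and a 'motion' vector."""
    return (w_color * histogram_intersection(a["color"], b["color"])
            + w_motion * cosine_similarity(a["motion"], b["motion"]))

# Toy query and candidate with hypothetical 3-bin color and 2-dim motion features.
q = {"color": np.array([0.5, 0.3, 0.2]), "motion": np.array([1.0, 0.0])}
v = {"color": np.array([0.4, 0.4, 0.2]), "motion": np.array([0.8, 0.2])}
print(video_similarity(q, v))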
Posted Content
TL;DR: A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper.
Abstract: A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework.
52 citations
TL;DR: In this paper, a content-based heterogeneous information retrieval framework was proposed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework.
Abstract: A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by their digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
50 citations
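The sketch below illustrates the degree-of-match idea from the paper above with a simple confidence-weighted average over attributes, skipping any missing ones. It is a deliberately simplified stand-in for the paper's Bayesian-network and Dezert-Smarandache fusion; the attribute names, confidences, and values are assumptions.

def fuse_degrees_of_match(degrees, confidences):
    """degrees, confidences: dicts mapping attribute -> value in [0, 1]."""
    num = sum(confidences[a] * d for a, d in degrees.items() if d is not None)
    den = sum(confidences[a] for a, d in degrees.items() if d is not None)
    return num / den if den else 0.0    # missing attributes are simply skipped

# Hypothetical confidence in each source of information (each attribute).
confidences = {"image_feature": 0.7, "age": 0.4, "diagnosis_code": 0.9}

# Degrees of match between a query document and two reference documents.
reference_docs = {
    "doc_A": {"image_feature": 0.8, "age": 0.6, "diagnosis_code": None},  # incomplete
    "doc_B": {"image_feature": 0.5, "age": 0.9, "diagnosis_code": 0.7},
}

ranked = sorted(reference_docs,
                key=lambda d: fuse_degrees_of_match(reference_docs[d], confidences),
                reverse=True)
print(ranked)   # reference documents ordered by decreasing relevance to the query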
TL;DR: This paper formulates the personalized video big data retrieval problem as an interaction between the user and the system via a stochastic process, rather than just a similarity-matching, accuracy (feedback) model of retrieval; it introduces users' real-time context into the retrieval system and proposes a general framework for the problem.
Abstract: Online video sharing (e.g., via YouTube or YouKu) has emerged as one of the most important services in the current Internet, where billions of videos on the cloud are awaiting exploration. Hence, a personalized video retrieval system is needed to help users find interesting videos from big data content. Two of the main challenges are to process the increasing amount of video big data and resolve the accompanying “cold start” issue efficiently. Another challenge is to satisfy the users’ need for personalized retrieval results, of which the accuracy is unknown. In this paper, we formulate the personalized video big data retrieval problem as an interaction between the user and the system via a stochastic process, not just a similarity matching, accuracy (feedback) model of the retrieval; introduce users’ real-time context into the retrieval system; and propose a general framework for this problem. By using a novel contextual multiarmed bandit-based algorithm to balance the accuracy and efficiency, we propose a context-based online big-data-oriented personalized video retrieval system. This system can support datasets that are dynamically increasing in size and has the property of cross-modal retrieval. Our approach provides accurate retrieval results with sublinear regret and linear storage complexity and significantly improves the learning speed. Furthermore, by learning for a cluster of similar contexts simultaneously, we can realize sublinear storage complexity with the same regret but slightly poorer performance on the “cold start” issue compared to the previous approach. We validate our theoretical results experimentally on a tremendously large dataset; the results demonstrate that the proposed algorithms outperform existing bandit-based online learning methods in terms of accuracy and efficiency and the adaptation from the bandit framework offers additional benefits.
25 citations
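The sketch below shows a contextual multi-armed bandit of the general kind described above: a UCB estimate is kept per discretized user context, with clicks as rewards. It is not the paper's algorithm; the context buckets, arm identifiers, and exploration constant are assumptions.

import math
import random
from collections import defaultdict

class ContextualUCB:
    def __init__(self, arms, c=1.0):
        self.arms = arms                      # candidate videos (or video clusters)
        self.c = c                            # exploration constant
        self.counts = defaultdict(lambda: defaultdict(int))
        self.means = defaultdict(lambda: defaultdict(float))
        self.total = defaultdict(int)

    def select(self, context):
        self.total[context] += 1
        t = self.total[context]
        best, best_score = None, -float("inf")
        for arm in self.arms:
            n = self.counts[context][arm]
            if n == 0:
                return arm                    # play each arm once per context first
            score = self.means[context][arm] + self.c * math.sqrt(math.log(t) / n)
            if score > best_score:
                best, best_score = arm, score
        return best

    def update(self, context, arm, reward):
        # Incremental mean update of the observed click-through reward.
        n = self.counts[context][arm] + 1
        self.counts[context][arm] = n
        self.means[context][arm] += (reward - self.means[context][arm]) / n

# Toy simulation: in the hypothetical "evening" context, arm "v2" is clicked most.
bandit = ContextualUCB(arms=["v1", "v2", "v3"])
click_prob = {"v1": 0.2, "v2": 0.7, "v3": 0.3}
for _ in range(500):
    arm = bandit.select("evening")
    bandit.update("evening", arm, 1.0 if random.random() < click_prob[arm] else 0.0)
print(max(bandit.means["evening"], key=bandit.means["evening"].get))  # likely "v2"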