
Showing papers by "International Institute of Information Technology, Hyderabad" published in 2006


Proceedings ArticleDOI
18 Dec 2006
TL;DR: MARGIN is a maximal subgraph mining algorithm that moves among promising nodes of the search space along the "border" of the infrequent and frequent subgraphs, which drastically reduces the number of candidate patterns considered in the search space.
Abstract: The exponential number of possible subgraphs makes frequent subgraph mining a challenge. The set of maximal frequent subgraphs is much smaller than the set of frequent subgraphs, providing ample scope for pruning. MARGIN is a maximal subgraph mining algorithm that moves among promising nodes of the search space along the "border" of the infrequent and frequent subgraphs. This drastically reduces the number of candidate patterns considered in the search space. Experimental results validate the efficiency and utility of the proposed technique.
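As an aside, the compression that maximality buys can be illustrated without graph machinery. Below is a minimal Python sketch using itemsets as a stand-in for subgraphs; the MARGIN border-walk itself is not reproduced, only the maximal-vs-frequent contrast it exploits:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_sup):
    """Enumerate all frequent itemsets by brute force (illustration only)."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            sup = sum(1 for t in transactions if set(cand) <= t)
            if sup >= min_sup:
                frequent.append(frozenset(cand))
    return frequent

def maximal(frequent):
    """A frequent pattern is maximal if no frequent superset exists."""
    fs = set(frequent)
    return [p for p in fs if not any(p < q for q in fs)]

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
freq = frequent_itemsets(transactions, min_sup=2)
print(len(freq), "frequent vs", len(maximal(freq)), "maximal")  # 6 vs 3
```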

105 citations


Proceedings ArticleDOI
08 Mar 2006
TL;DR: It is demonstrated that the proposed multiplier architecture using the TSG gate is better optimized than its existing counterparts in the literature in terms of the number of reversible gates and garbage outputs.
Abstract: In recent years, reversible logic has emerged as a promising technology with applications in low power CMOS, quantum computing, nanotechnology, and optical computing. The classical set of gates such as AND, OR, and EXOR is not reversible. A 4x4 reversible gate called "TSG" has recently been proposed. The most significant aspect of the gate is that it can work singly as a reversible full adder; that is, a reversible full adder can now be implemented with a single gate. This paper proposes an NxN reversible multiplier using the TSG gate. It is based on two ideas: the partial products can be generated in parallel with a delay of d using Fredkin gates, and the addition can then be reduced to log2(N) steps by using a reversible parallel adder designed from TSG gates. A 4x4 architecture of the proposed reversible multiplier is also designed. It is demonstrated that the proposed multiplier architecture using the TSG gate is better optimized than its existing counterparts in the literature in terms of the number of reversible gates and garbage outputs. Thus, this paper provides a starting point for building more complex systems that can execute more complicated operations using reversible logic.
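The TSG gate's truth table is not reproduced in this listing, but the Fredkin gate used for partial-product generation is standard: a controlled swap, which yields an AND when a constant 0 is wired into one input. A minimal sketch, assuming LSB-first bit lists:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: reversible, conserves the number of 1s."""
    return (c, b, a) if c else (c, a, b)

def partial_products(x_bits, y_bits):
    """Generate multiplier partial products x_i AND y_j with Fredkin gates.
    With a constant 0 on the third input, the third output equals c AND a."""
    return [[fredkin(xi, yj, 0)[2] for yj in y_bits] for xi in x_bits]

x = [1, 0, 1, 1]   # LSB-first bits of 13
y = [1, 1, 0, 1]   # LSB-first bits of 11
print(partial_products(x, y))
```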

100 citations


Book ChapterDOI
13 Feb 2006
TL;DR: This paper presents a scheme to identify different Indian scripts from a document image; the scheme employs hierarchical classification using features consistent with human perception and achieves an overall classification accuracy of 97.11% on a large test data set.
Abstract: Automatic identification of a script in a given document image facilitates many important applications, such as automatic archiving of multilingual documents, searching online archives of document images, and the selection of a script-specific OCR in a multilingual environment. In this paper, we present a scheme to identify different Indian scripts from a document image. The scheme employs hierarchical classification using features consistent with human perception. Such features are extracted from the responses of a multi-channel log-Gabor filter bank, designed at an optimal scale and multiple orientations. In the first stage, the classifier groups the scripts into five major classes using global features. At the next stage, a sub-classification is performed based on script-specific features. All features are extracted globally from a given text block, which avoids the need for complex and reliable segmentation of the document image into lines and characters. Thus the proposed scheme is efficient and can be used for many practical applications which require processing large volumes of data. The scheme has been tested on 10 Indian scripts and found to be robust to skew generated in the process of scanning and relatively insensitive to changes in font size. The proposed system achieves an overall classification accuracy of 97.11% on a large test data set. These results serve to establish the utility of a global approach to script classification.
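A hedged sketch of the radial part of a log-Gabor frequency response; the paper's filter bank additionally uses angular components at multiple orientations and an optimally chosen scale, and the parameter values below are illustrative only:

```python
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_ratio=0.65):
    """Radial component of a log-Gabor filter in the frequency domain.
    f0: centre frequency (cycles/pixel); sigma_ratio: bandwidth parameter."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                      # avoid log(0); DC is zeroed below
    g = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                      # log-Gabor has no DC response
    return g

g = log_gabor_radial(64)
print(g.shape, g.max())
```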

91 citations


Book ChapterDOI
04 Mar 2006
TL;DR: It is demonstrated that one round is sufficient for WSS when n > 4t, and that VSS can be achieved in 1 + ε amortized rounds (for any ε > 0) when n > 3t.
Abstract: We consider perfect verifiable secret sharing (VSS) in a synchronous network of n processors (players) where a designated player called the dealer wishes to distribute a secret s among the players in a way that no t of them obtain any information, but any t + 1 players obtain full information about the secret. The round complexity of a VSS protocol is defined as the number of rounds performed in the sharing phase. Gennaro, Ishai, Kushilevitz and Rabin showed that three rounds are necessary and sufficient when n > 3t. Sufficiency, however, was only demonstrated by means of an inefficient (i.e., exponential-time) protocol, and the construction of an efficient three-round protocol was left as an open problem. In this paper, we present an efficient three-round protocol for VSS. The solution is based on a three-round solution of so-called weak verifiable secret sharing (WSS), for which we also prove that three rounds is a lower bound. Furthermore, we also demonstrate that one round is sufficient for WSS when n > 4t, and that VSS can be achieved in 1 + ε amortized rounds (for any ε > 0) when n > 3t.
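For orientation, VSS extends plain (non-verifiable) Shamir secret sharing with rounds that let players verify the dealer. A minimal Shamir sketch over a prime field, showing the t / t+1 threshold behaviour the abstract refers to (the field size and API are illustrative, not from the paper):

```python
import random

P = 2**31 - 1  # a Mersenne prime; the field for polynomial arithmetic

def share(secret, n, t):
    """Dealer: hide `secret` in a degree-t polynomial; share i is f(i).
    Any t+1 shares reconstruct f(0); any t shares reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

shares = share(42, n=7, t=2)
print(reconstruct(shares[:3]))  # any t+1 = 3 shares suffice -> 42
```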

69 citations


Journal Article
TL;DR: In this article, an efficient three-round protocol for VSS is presented, based on a three-round solution of weak verifiable secret sharing (WSS) for which three rounds is also shown to be a lower bound; furthermore, one round is shown to be sufficient for WSS when n > 4t, and VSS can be achieved in 1 + ε amortized rounds (for any ε > 0) when n > 3t.
Abstract: We consider perfect verifiable secret sharing (VSS) in a synchronous network of n processors (players) where a designated player called the dealer wishes to distribute a secret s among the players in a way that no t of them obtain any information, but any t + 1 players obtain full information about the secret. The round complexity of a VSS protocol is defined as the number of rounds performed in the sharing phase. Gennaro, Ishai, Kushilevitz and Rabin showed that three rounds are necessary and sufficient when n > 3t. Sufficiency, however, was only demonstrated by means of an inefficient (i.e., exponential-time) protocol, and the construction of an efficient three-round protocol was left as an open problem. In this paper, we present an efficient three-round protocol for VSS. The solution is based on a three-round solution of so-called weak verifiable secret sharing (WSS), for which we also prove that three rounds is a lower bound. Furthermore, we also demonstrate that one round is sufficient for WSS when n > 4t, and that VSS can be achieved in 1 + ε amortized rounds (for any ε > 0) when n > 3t.

62 citations


Book ChapterDOI
13 Feb 2006
TL;DR: A novel DTW-based partial matching scheme handles morphologically variant words, enabling effective search and retrieval from a large collection of printed document images by matching image features at the word level.
Abstract: This paper presents a system for retrieving relevant documents from large document image collections. We achieve effective search and retrieval from a large collection of printed document images by matching image features at the word level. Words are represented with profile-based and shape-based features. A novel DTW-based partial matching scheme handles morphologically variant words, which is useful for grouping similar words together during the indexing process. The system supports cross-lingual search using OM-Trans transliteration and a dictionary-based approach. System-level issues for retrieval (e.g., scalability and effective delivery) are also addressed in this paper.
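A minimal DTW sketch over word-image feature sequences. The feature choice below is illustrative, and the paper's partial-matching variant (which would relax DTW's end-point constraint) is not reproduced; this is the standard full alignment:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two feature sequences.
    a, b: arrays of shape (length, dim), e.g. per-column profile features."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.random.rand(40, 4)   # word image A: 40 columns, 4 profile features each
b = np.random.rand(55, 4)   # word image B: different width
print(dtw(a, b))
```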

58 citations


Journal ArticleDOI
TL;DR: This paper presents a methodology for computing the maximum velocity profile over a trajectory planned for a mobile robot; the profile indicates the maximum speeds the robot can have along its path without colliding with any of the moving objects that could intercept its future trajectory.

56 citations


Proceedings ArticleDOI
06 Aug 2006
TL;DR: In this article, the authors propose the use of reversible logic for designing the ALU of a cryptosystem to prevent differential power analysis (DPA) attacks, a major challenge to mathematically secure cryptographic protocols.
Abstract: Differential Power Analysis (DPA) presents a major challenge to mathematically-secure cryptographic protocols. Attackers can break the encryption by measuring the energy consumed in the working digital circuit. To prevent this type of attack, this paper proposes the use of reversible logic for designing the ALU of a cryptosystem. Ideally, reversible circuits dissipate zero energy. Thus, it would be of great significance to apply reversible logic to designing secure cryptosystems. As far as is known, this is the first attempt to apply reversible logic to developing secure cryptosystems. In a prototype of a reversible ALU for a crypto-processor, reversible designs of adders and Montgomery multipliers are presented. The reversible designs of a carry propagate adder, four-to-two and five-to-two carry save adders are presented using a reversible TSG gate. One of the important properties of the TSG gate is that it can work singly as a reversible full adder. In order to design the reversible Montgomery multiplier, novel reversible sequential circuits are also proposed which are integrated with the proposed adders to design a reversible modulo multiplier. It is intended that this paper will provide a starting point for developing cryptosystems secure against DPA attacks.
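Montgomery multiplication itself is standard; a word-level sketch of the reduction step (REDC), which replaces division by the modulus with shifts and masks. Toy parameters, and the hardware structure of the paper's reversible design is not modelled (requires Python 3.8+ for pow(x, -1, m)):

```python
def montgomery_setup(N, k):
    """Precompute for odd modulus N and R = 2**k > N: N' = -N^{-1} mod R."""
    R = 1 << k
    return R, (-pow(N, -1, R)) % R      # pow(x, -1, m): modular inverse

def redc(T, N, R, N_prime, k):
    """Montgomery reduction: returns T * R^{-1} mod N, for T < R*N."""
    m = ((T & (R - 1)) * N_prime) & (R - 1)   # m = (T mod R) * N' mod R
    t = (T + m * N) >> k                      # exact division by R
    return t - N if t >= N else t

N, k = 101, 8                            # toy modulus, R = 256
R, N_prime = montgomery_setup(N, k)
a, b = 55, 77
aR, bR = a * R % N, b * R % N            # map operands into Montgomery domain
abR = redc(aR * bR, N, R, N_prime, k)    # product stays in the domain
print(redc(abR, N, R, N_prime, k), a * b % N)   # 94 94
```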

48 citations


Book ChapterDOI
13 Feb 2006
TL;DR: This paper describes the challenges the document image analysis community faces in building large digital libraries with diverse document categories; much more research is needed to address the challenges arising from the diversity of content.
Abstract: This paper describes the challenges facing the document image analysis community in building large digital libraries with diverse document categories. The challenges are identified from the experience of ongoing activities toward digitizing and archiving one million books. A smooth workflow has been established for archiving large quantities of books, with the help of efficient image processing algorithms. However, much more research is needed to address the challenges arising from the diversity of content in digital libraries.

40 citations


Proceedings ArticleDOI
01 Aug 2006
TL;DR: The reversible logic implementation of the proposed CIFM will form the basis of completely reversible FPGAs, making FPGAs more suitable for integer as well as floating point operations.
Abstract: In this paper, the authors propose the idea of a combined integer and floating point multiplier (CIFM) for FPGAs. The authors propose replacing the existing dedicated 18x18 multipliers in FPGAs with dedicated 24x24 multipliers built from small 4x4 bit multipliers. It is also proposed that for every dedicated 24x24 bit multiplier block built from 4x4 bit multipliers, four redundant 4x4 multipliers should be provided to support self-repairability (recovery from faults). The proposed CIFM also provides reconfigurability at run time, resulting in low power. The major motivation for the dedicated 24x24 bit multiplier stems from the fact that a single precision floating point multiplier requires a 24x24 bit integer multiplier for mantissa multiplication. A reconfigurable, self-repairable 24x24 bit multiplier (implemented with 4x4 bit multiply modules) ideally suits this purpose, making FPGAs more suitable for integer as well as floating point operations. A dedicated 4x4 bit multiplier is also proposed in this paper. Moreover, in recent years, reversible logic has emerged as a promising technology with applications in low power CMOS, quantum computing, nanotechnology, and optical computing, and quantum computing cannot be realized without reversible logic. Thus, this paper also provides the reversible logic implementation of the proposed CIFM. The reversible CIFM designed and proposed here will form the basis of completely reversible FPGAs.
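The arithmetic behind composing a 24x24 multiplier from 4x4 blocks is plain limb decomposition: split each operand into 4-bit limbs and accumulate shifted 4x4 products. A behavioural sketch (the hardware structure, redundancy, and self-repair features are not modelled):

```python
def mul4x4(a, b):
    """Stand-in for a dedicated 4x4 hardware multiplier block."""
    assert 0 <= a < 16 and 0 <= b < 16
    return a * b

def mul24x24(x, y, limb_bits=4):
    """Compose a 24x24 multiply from 4-bit limbs: sum of shifted 4x4 products."""
    mask = (1 << limb_bits) - 1
    xs = [(x >> (limb_bits * i)) & mask for i in range(6)]   # 6 limbs of 4 bits
    ys = [(y >> (limb_bits * j)) & mask for j in range(6)]
    acc = 0
    for i, xi in enumerate(xs):
        for j, yj in enumerate(ys):
            acc += mul4x4(xi, yj) << (limb_bits * (i + j))
    return acc

x, y = 0xABCDEF, 0x123456
print(mul24x24(x, y) == x * y)   # True
```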

37 citations


Proceedings ArticleDOI
23 Jul 2006
TL;DR: This work provides a complete characterization of directed networks in which probabilistic reliable communication is possible and outlines a round optimal protocol for the same.
Abstract: We provide a complete characterization of directed networks in which probabilistic reliable communication is possible. We also outline a round optimal protocol for the same.

Book ChapterDOI
13 Dec 2006
TL;DR: A document segmentation algorithm that handles the complexity of Indian scripts in large document image collections; segmentation is posed as a graph cut problem that incorporates a priori information from the script structure in the objective function of the cut.
Abstract: Most of the state-of-the-art segmentation algorithms are designed to handle complex document layouts and backgrounds, while assuming a simple script structure such as in Roman script. They perform poorly when used with Indian languages, where the components are not strictly collinear. In this paper, we propose a document segmentation algorithm that can handle the complexity of Indian scripts in large document image collections. Segmentation is posed as a graph cut problem that incorporates a priori information from the script structure in the objective function of the cut. We show that this information can be learned automatically and be adapted within a collection of documents (a book) and across collections to achieve accurate segmentation. We show the results on Indian language documents in Telugu script. The approach is also applicable to other languages with complex scripts such as Bangla, Kannada, Malayalam, and Urdu.

Proceedings ArticleDOI
23 Jul 2006
TL;DR: The method used in this paper for language and encoding identification uses pruned character n-grams, alone as well as augmented with word n-grams, and gives results comparable to other methods.
Abstract: To determine how close two language models (e.g., n-gram models) are, we can use several distance measures. If we can represent the models as distributions, then the similarity is basically the similarity of distributions, and a number of measures are based on an information-theoretic approach. In this paper we present some experiments on using such similarity measures for an old Natural Language Processing (NLP) problem. One of the measures considered is perhaps novel; we have called it mutual cross entropy. The other measures are either well known or based on well-known measures, but the results obtained with them vis-à-vis one another may help in gaining an insight into how similarity measures work in practice. The first step in processing a text is to identify the language and encoding of its contents. This is a practical problem, since for many languages there are no universally followed text encoding standards. The method we use in this paper for language and encoding identification uses pruned character n-grams, alone as well as augmented with word n-grams. This method gives results comparable to other methods.
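A sketch of character n-gram language identification with a cross-entropy distance. Note that the symmetrized mutual_cross_entropy below is only a guess at the paper's novel measure, and the add-alpha smoothing is deliberately crude:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cross_entropy(p_counts, q_counts, alpha=0.5):
    """H(P, Q) = -sum_x P(x) log Q(x), with add-alpha smoothing on Q."""
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values()) + alpha * (len(q_counts) + 1)
    return -sum((c / p_total) * math.log((q_counts.get(g, 0) + alpha) / q_total)
                for g, c in p_counts.items())

def mutual_cross_entropy(p, q):
    """Symmetrized variant (an assumption about the paper's measure)."""
    return cross_entropy(p, q) + cross_entropy(q, p)

models = {"en": char_ngrams("the quick brown fox jumps over the lazy dog " * 20),
          "de": char_ngrams("der schnelle braune fuchs springt ueber den hund " * 20)}
test = char_ngrams("a lazy brown dog jumps over the fox")
print(min(models, key=lambda lang: mutual_cross_entropy(test, models[lang])))  # en
```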

Journal ArticleDOI
TL;DR: The proposed method of frame synchronization and frequency offset estimation is applied to the downlink synchronization in OFDM mode of wireless metropolitan area network (WMAN) standard IEEE 802.16-2004, and its performance is studied through simulations.
Abstract: Orthogonal frequency division multiplexing (OFDM) is a parallel transmission scheme for transmitting data at very high rates over time dispersive radio channels. In an OFDM system, frame synchronization and frequency offset estimation are extremely important for maintaining orthogonality among the subcarriers. In this paper, for a preamble having two identical halves in time, a timing metric is proposed for OFDM frame synchronization. The timing metric is analyzed and its mean values at the preamble boundary and in its neighborhood are evaluated for AWGN and for frequency selective channels with a specified mean power profile of the channel taps, and the variance expression is derived for the AWGN case. Since the derivation of the variance expression for the frequency selective case is tedious, we use simulations to estimate it. Based on the theoretical mean and the estimated variance, we suggest a threshold for detecting the preamble boundary and evaluate the probabilities of false and correct detection. We also suggest a method for threshold selection and preamble boundary detection in practical applications. A simple and computationally efficient method for estimating the fractional and integer frequency offset, using the same preamble, is also described. Simulations are used to corroborate the results of the analysis. The proposed method of frame synchronization and frequency offset estimation is applied to downlink synchronization in the OFDM mode of the wireless metropolitan area network (WMAN) standard IEEE 802.16-2004, and its performance is studied through simulations.
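The two-identical-halves structure admits the classic correlation-based timing metric (in the style of Schmidl and Cox); the paper proposes and analyzes its own metric and threshold, which this sketch does not reproduce:

```python
import numpy as np

def timing_metric(r, L):
    """M(d) = |P(d)|^2 / R(d)^2 for a preamble with two identical halves of
    length L, where P(d) = sum_m r*(d+m) r(d+m+L) and R(d) is the energy."""
    M = np.zeros(len(r) - 2 * L)
    for d in range(len(M)):
        seg1, seg2 = r[d:d + L], r[d + L:d + 2 * L]
        P = np.sum(np.conj(seg1) * seg2)
        R = np.sum(np.abs(seg2) ** 2)
        M[d] = np.abs(P) ** 2 / (R ** 2 + 1e-12)
    return M

L = 32
half = np.exp(2j * np.pi * np.random.rand(L))   # random unit-modulus half
preamble = np.concatenate([half, half])          # two identical halves in time
r = np.concatenate([0.1 * np.random.randn(100), preamble, 0.1 * np.random.randn(100)])
print(np.argmax(timing_metric(r, L)))            # ~100, the preamble start
```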

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper proposes techniques based on load balancing across the integer functional units of VLIW architectures for balanced thermal behavior and peak temperature minimization, and reveals that the peak temperature can be reduced through compiler scheduling.
Abstract: As processors, memories, and other components of today's embedded systems are pushed to higher performance in more enclosed spaces, processor thermal management is quickly becoming a limiting design factor. While previous proposals mostly approached this thermal management problem from circuit and architecture angles, software can also play an important role in identifying and eliminating thermal hotspots, as it is the main factor that shapes the order and frequency of accesses to different hardware components on the chip. This is particularly true for compiler-scheduled Very Long Instruction Word (VLIW) datapaths. In this paper, we focus on a compiler-based approach to making the thermal profile of the integer functional units of VLIW architectures more balanced. For balanced thermal behavior and peak temperature minimization, we propose techniques based on load balancing across the integer functional units, with or without rotation of functional unit usage. As leakage power depends exponentially on temperature, and temperature depends on total power (i.e., switching and leakage), our techniques also optimize leakage power by tuning IPC (instructions issued per cycle). Taking code that is already scheduled for maximum performance as input, our scheduling strategies modify this performance-oriented schedule for balanced thermal behavior with negligible performance degradation. We simulate our scheduling strategies using a framework that consists of the Trimaran infrastructure, a power model, and HotSpot. Our experimental results on several benchmark programs reveal that the peak temperature can be reduced through compiler scheduling.
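One of the proposed ingredients, rotating functional-unit usage, can be sketched abstractly: start each cycle's FU assignment at a rotating offset so activity (and heat) spreads across units. A toy model only; real VLIW scheduling must respect resource and dependence constraints, which are ignored here:

```python
from collections import Counter
from itertools import cycle

def rotate_schedule(ops_per_cycle, n_fus=4):
    """Assign each cycle's integer ops to functional units with a rotating
    start so usage (and heat) spreads across FUs instead of piling on FU0."""
    start = cycle(range(n_fus))
    usage = Counter()
    schedule = []
    for ops in ops_per_cycle:
        base = next(start)
        slots = [(base + i) % n_fus for i in range(len(ops))]
        schedule.append(dict(zip(ops, slots)))
        usage.update(slots)
    return schedule, usage

# 6 cycles, 2 integer ops per cycle: without rotation FU0/FU1 would do all work
_, usage = rotate_schedule([["add", "mul"]] * 6)
print(dict(usage))   # work spread over all four FUs
```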

01 Jan 2006
TL;DR: This work proposes a text-independent writer identification framework that uses a specified set of primitives of online handwritten data to ascertain the identity of the writer; the framework learns the properties of the script and the writers simultaneously and hence can be used with multiple languages or scripts.
Abstract: Automatic identification of the author of a document has a variety of applications for both online and offline handwritten data, such as facilitating the use of writer-dependent recognizers, verifying claimed identity for security, enabling personalized HCI, and countering repudiation for legal purposes. Most existing writer identification techniques require the data to be from a specific text or that a recognizer be available, which is not always feasible. Text-independent approaches often require large amounts of data to be confident of good results. In this work, we propose a text-independent writer identification framework that uses a specified set of primitives of online handwritten data to ascertain the identity of the writer. The framework allows us to learn the properties of the script and the writers simultaneously and hence can be used with multiple languages or scripts. We demonstrate the applicability of our framework by choosing shapes of curves as primitives and show results on five different scripts and on different data sets.

01 Jan 2006
TL;DR: The experiments of the Language Technologies Research Centre (LTRC) as part of its participation in the CLEF 2006 ad-hoc document retrieval task are presented, and the Hindi and Telugu to English CLIR system is discussed.
Abstract: This paper presents the experiments of the Language Technologies Research Centre (LTRC) as part of its participation in the CLEF 2006 ad-hoc document retrieval task. This is our first participation in the CLEF evaluation tasks, and we focused on Afaan Oromo, Hindi and Telugu as query languages for retrieval from an English document collection. In this paper we discuss our Hindi and Telugu to English CLIR system and the experiments at CLEF.

Book ChapterDOI
11 Dec 2006
TL;DR: This paper presents two-phase bit-optimal PRMT protocols for Byzantine as well as mixed adversaries, along with a three-phase PRMT protocol that reliably sends a message containing l field elements while communicating O(l) field elements overall.
Abstract: In this paper, we study the problems of perfectly reliable message transmission (PRMT) and perfectly secure message transmission (PSMT) between a sender S and a receiver R in a synchronous network, where S and R are connected by n vertex-disjoint paths called wires, each of which facilitates bidirectional communication. We assume that at most t of these wires are under the control of the adversary. We present two-phase bit-optimal PRMT protocols for the Byzantine adversary as well as the mixed adversary. We also present a three-phase PRMT protocol that reliably sends a message containing l field elements while communicating O(l) field elements overall. This is a significant improvement over the PRMT protocol proposed in [10] for the same task, which takes log(t) phases. We also present a three-phase bit-optimal PSMT protocol that securely sends a message consisting of t field elements by communicating O(t^2) field elements.

Book ChapterDOI
13 Dec 2006
TL;DR: A novel homography-based approach that integrates information from multiple homographies to reliably estimate the relative displacement of the camera and develops a new control formulation that meets the contradictory requirements of producing a decoupled camera trajectory and ensuring object visibility by only utilizing the homography relating the two views.
Abstract: This paper presents a vision-based control for positioning a camera with respect to an unknown piecewise planar object. We introduce a novel homography-based approach that integrates information from multiple homographies to reliably estimate the relative displacement of the camera. This approach is robust to image measurement errors and provides a stable estimate of the camera motion that is free from degeneracies in the task space. We also develop a new control formulation that meets the contradictory requirements of producing a decoupled camera trajectory and ensuring object visibility by only utilizing the homography relating the two views. Experimental results validate the efficiency and robustness of our approach and demonstrate its applicability.
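Estimating each per-plane homography from point correspondences is standard; below is a minimal direct-linear-transform (DLT) sketch. The paper's contribution, integrating multiple homographies into a robust displacement estimate and the associated control law, is not reproduced:

```python
import numpy as np

def homography_dlt(p1, p2):
    """Direct linear transform: H with p2 ~ H p1, from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(p1, p2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = Vt[-1].reshape(3, 3)          # nullspace vector = flattened H
    return H / H[2, 2]

# Synthetic check: project points with a known H and recover it
H_true = np.array([[1.0, 0.02, 5.0], [-0.01, 0.98, -3.0], [1e-4, 2e-4, 1.0]])
p1 = np.array([[10, 10], [200, 15], [205, 180], [12, 175], [100, 90]], float)
ph = np.c_[p1, np.ones(len(p1))] @ H_true.T
p2 = ph[:, :2] / ph[:, 2:]
print(np.round(homography_dlt(p1, p2), 3))   # ~H_true up to scale
```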

Book ChapterDOI
13 Dec 2006
TL;DR: In this article, a multi-modal approach is presented in which clues from different information sources are merged to segment videos into meaningful entities or scenes, using the scene-level descriptions provided by the commentary.
Abstract: In this paper we address the problem of temporal segmentation of videos. We present a multi-modal approach where clues from different information sources are merged to perform the segmentation. Specifically, we segment videos based on textual descriptions or commentaries of the action in the video. Such parallel information is available for cricket videos, a class of videos where visual-feature-based (bottom-up) scene segmentation algorithms generally fail due to a lack of visual dissimilarity across space and time. With additional top-down information from the textual domain, these ambiguities can be resolved to a large extent. The video is segmented into meaningful entities or scenes using the scene-level descriptions provided by the commentary. These segments can then be automatically annotated with the respective descriptions. This allows for semantic access and retrieval of video segments, which is difficult to obtain with existing visual-feature-based approaches. We also present techniques for automatic highlight generation using our scheme.

Book ChapterDOI
13 Dec 2006
TL;DR: It is argued that early stages in primary visual cortex provide ample information to address the boundary detection problem, and global visual primitives such as object and region boundaries can be extracted using local features captured by the receptive fields.
Abstract: Boundary detection in natural images is a fundamental problem in many computer vision tasks. In this paper, we argue that early stages in the primary visual cortex provide ample information to address the boundary detection problem. In other words, global visual primitives such as object and region boundaries can be extracted using local features captured by the receptive fields. The anatomy of the visual cortex and psychological evidence are studied to identify some of the important underlying computational principles for the boundary detection task. A scheme for boundary detection based on these principles is developed and presented. Results of testing the scheme on a benchmark set of natural images, with associated human-marked boundaries, show the performance to be quantitatively competitive with existing computer vision approaches.

Book ChapterDOI
13 Feb 2006
TL;DR: An interactive system for continuous improvement of OCR results is described, and the applicability of the design to the recognition of Indian languages is demonstrated.
Abstract: This paper presents a novel approach to designing a semi-automatic, adaptive OCR for large document image collections in digital libraries. We describe an interactive system for continuous improvement of the OCR results, and demonstrate the applicability of our design to the recognition of Indian languages. Recognition errors are used to retrain the OCR so that it adapts and learns to improve its accuracy. Limited human intervention is allowed for evaluating the output of the system and taking corrective actions during the recognition process.

Book ChapterDOI
13 Dec 2006
TL;DR: A terrain streaming system based on a client-server architecture to handle heterogeneous clients over low-bandwidth networks, and a method of sharing and storing terrain annotations for collaboration between multiple users, are presented.
Abstract: Terrains and other geometric models have traditionally been stored locally. Their remote access exhibits characteristics that combine file serving with real-time streaming, like audio-visual media. This paper presents a terrain streaming system based on a client-server architecture that handles heterogeneous clients over low-bandwidth networks. We present an efficient representation for terrain streaming, and design a client-server system that uses this representation to stream virtual environments containing terrains and overlaid geometry efficiently. We handle dynamic entities in the environment and their synchronization across multiple clients. We also present a method of sharing and storing terrain annotations for collaboration between multiple users. We conclude by presenting preliminary performance data for the streaming system.

Posted Content
TL;DR: In this article, the authors propose an NxN reversible multiplier using the TSG gate, which can work singly as a reversible full adder; the partial products are generated in parallel with a delay of d using Fredkin gates.
Abstract: In recent years, reversible logic has emerged as a promising technology with applications in low power CMOS, quantum computing, nanotechnology, and optical computing. The classical set of gates such as AND, OR, and EXOR is not reversible. A 4x4 reversible gate called TSG has recently been proposed. The most significant aspect of the gate is that it can work singly as a reversible full adder; that is, a reversible full adder can now be implemented with a single gate. This paper proposes an NxN reversible multiplier using the TSG gate. It is based on two ideas: the partial products can be generated in parallel with a delay of d using Fredkin gates, and the addition can then be reduced to log2(N) steps by using a reversible parallel adder designed from TSG gates. A similar multiplier architecture in conventional arithmetic (using conventional logic) has been reported in the literature, but the one proposed here is based entirely on reversible logic, with reversible cells as its building blocks. A 4x4 architecture of the proposed reversible multiplier is also designed. It is demonstrated that the proposed multiplier architecture using the TSG gate is better optimized than its existing counterparts in the literature in terms of the number of reversible gates and garbage outputs. Thus, this paper provides a starting point for building more complex systems that can execute more complicated operations using reversible logic.

Proceedings ArticleDOI
08 Mar 2006
TL;DR: To improve the speed of addition at the third level of computation, a novel carry look-ahead adder (CLA) is also proposed, which is better than a recently proposed CLA architecture when compared in terms of area and speed.
Abstract: This paper proposes a novel 8x8 multiplier architecture based on the Wallace tree, efficient in terms of power and regularity without a significant increase in delay and area. The idea involves generating the partial products in parallel using AND gates. The addition of these partial products is done using a Wallace tree that is hierarchically divided into levels. Power consumption is reduced significantly, since power is supplied only to the level involved in computation while the remaining two levels are switched off by a control circuit. Furthermore, to improve the speed of addition at the third level of computation, a novel carry look-ahead adder (CLA) is also proposed, which is better than a recently proposed CLA architecture when compared in terms of area and speed. The efficiency of the proposed multiplier is also tested by embedding it in higher-width partition multipliers.
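The carry look-ahead recurrence such an adder builds on is c_{i+1} = g_i OR (p_i AND c_i), with g_i = a_i AND b_i and p_i = a_i XOR b_i. A behavioural 4-bit sketch; the paper's specific CLA circuit is not reproduced, and in hardware the recurrence is unrolled into two-level logic rather than iterated:

```python
def cla4(a, b, c0=0):
    """4-bit carry look-ahead adder: all carries from generate/propagate terms."""
    g = [ai & bi for ai, bi in zip(a, b)]   # generate: g_i = a_i AND b_i
    p = [ai ^ bi for ai, bi in zip(a, b)]   # propagate: p_i = a_i XOR b_i
    c = [c0]
    for i in range(4):                      # c_{i+1} = g_i OR (p_i AND c_i)
        c.append(g[i] | (p[i] & c[i]))
    s = [p[i] ^ c[i] for i in range(4)]
    return s, c[4]                          # sum bits (LSB first), carry-out

a = [1, 0, 1, 1]   # 13, LSB first
b = [1, 1, 0, 1]   # 11, LSB first
s, cout = cla4(a, b)
print(sum(bit << i for i, bit in enumerate(s)) + (cout << 4))   # 24
```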

Book ChapterDOI
13 Dec 2006
TL;DR: In this article, the importance of different parts of a video sequence from the recognition point of view is identified by using actions to capture the fine characteristics of individual parts in the events, and their usefulness in discriminating between events is estimated as a score.
Abstract: This paper presents an approach to identify the importance of different parts of a video sequence from the recognition point of view. It builds on the observations that: (1) events consist of more fundamental (or atomic) units, and (2) a discriminant-based approach is more appropriate for the recognition task, when compared to the standard modelling techniques, such as PCA, HMM, etc. We introduce discriminative actions which describe the usefulness of the fundamental units in distinguishing between events. We first extract actions to capture the fine characteristics of individual parts in the events. These actions are modelled and their usefulness in discriminating between events is estimated as a score. The score highlights the important parts (or actions) of the event from the recognition aspect. Applicability of the approach on different classes of events is demonstrated along with a statistical analysis.

Book ChapterDOI
20 Sep 2006
TL;DR: This paper presents the Cross Language Information Retrieval (CLIR) experiments of the Language Technologies Research Centre (LTRC, IIIT-Hyderabad) as part of its participation in the ad-hoc track of CLEF 2006, using a dictionary-based approach for CLIR.
Abstract: This paper presents the Cross Language Information Retrieval (CLIR) experiments of the Language Technologies Research Centre (LTRC, IIIT-Hyderabad) as part of our participation in the ad-hoc track of CLEF 2006. This is our first participation in the CLEF evaluation campaign, and we focused on Afaan Oromo, Hindi and Telugu as source (query) languages for retrieval of documents from an English text collection. We used a dictionary-based approach for CLIR. After a brief description of our CLIR system, we discuss the evaluation results of the various experiments we conducted on the CLEF 2006 dataset.
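The core of a dictionary-based CLIR front end is query translation: replace each source-language term with its dictionary translations and retrieve with the union. A toy sketch; the dictionary entries below are illustrative (transliterated), not from the paper's resources:

```python
def translate_query(query_terms, bilingual_dict, stopwords=frozenset()):
    """Dictionary-based query translation: replace each source-language term
    with all of its dictionary translations (OR-semantics at retrieval time)."""
    translated = []
    for term in query_terms:
        if term in stopwords:
            continue
        translated.extend(bilingual_dict.get(term, [term]))  # keep OOV terms as-is
    return translated

# Toy Hindi -> English entries (illustrative only)
hi_en = {"vishva": ["world"], "kap": ["cup"], "phutbal": ["football", "soccer"]}
print(translate_query(["vishva", "kap", "phutbal"], hi_en))
# ['world', 'cup', 'football', 'soccer']
```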

Book ChapterDOI
13 Dec 2006
TL;DR: An adaptive algorithm is presented that unfolds the twin hierarchies at every stage of the culling procedure; it computes from-point visibility, is conservative, and requires minimal precomputation, allowing the approach to be applied to dynamic scenes as well.
Abstract: Visibility culling of a scene is a crucial stage for interactive graphics applications, particularly for scenes with thousands of objects. The culling time must be small for it to be effective. A hierarchical representation of the scene is used for efficient culling tests. However, when there are multiple view frustums (as in a tiled display wall), visibility culling time becomes substantial and cannot be hidden by pipelining it with other stages of rendering. In this paper, we address the problem of culling an object against a hierarchically organized set of frustums, such as those found in tiled displays and shadow volume computation. We present an adaptive algorithm to unfold the twin hierarchies at every stage in the culling procedure. Our algorithm computes from-point visibility and is conservative. The precomputation required is minimal, allowing our approach to be applied to dynamic scenes as well. We show the performance of our technique over different variants of culling a scene against multiple frustums. We also show results for dynamic scenes.
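The per-frustum primitive underneath any such scheme is a conservative box-against-plane test: a box is rejected only if it lies wholly outside some frustum plane. A sketch, where the plane convention and values are illustrative:

```python
import numpy as np

def aabb_outside_plane(center, half_extent, plane):
    """plane = (n, d), points inside when n.x + d >= 0. The box is wholly
    outside iff its most positive vertex along n is still behind the plane."""
    n, d = plane
    r = np.dot(np.abs(n), half_extent)   # projection radius of the box onto n
    return np.dot(n, center) + d < -r

def cull(center, half_extent, frustum_planes):
    """Conservative test: reject only if outside some plane; else keep."""
    return any(aabb_outside_plane(center, half_extent, pl) for pl in frustum_planes)

# A single left plane x >= -1 (normal +x, d = 1) and a box centred at x = -5
plane = (np.array([1.0, 0.0, 0.0]), 1.0)
print(cull(np.array([-5.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]), [plane]))  # True
```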

Proceedings ArticleDOI
01 Dec 2006
TL;DR: The results show that the proposed architectures are faster and use less hardware than similar known circuits, making them a viable option for efficient design.
Abstract: In this paper, the design of a binary to residue converter architecture based on the {2^k-1, 2^k, 2^k+1} modulo set is presented. New highly parallel schemes using (p,2) compressors are described for computing the integer modulo operation (X mod m), where m is restricted to the values 2^k ± 1 for any value of k > 1, and X is a 16-bit or a 32-bit number. For efficient design, novel 3-2, 4-2 and 5-2 compressors are illustrated and used as the basic building blocks for the proposed converter designs. The resulting circuits are compared, both qualitatively and quantitatively, in standard CMOS cell technology, with the existing circuits. The results show that the proposed architectures are faster and use less hardware than similar known circuits, making them a viable option for efficient design.
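The reason the {2^k-1, 2^k, 2^k+1} moduli admit cheap converters is that 2^k ≡ 1 (mod 2^k-1) and 2^k ≡ -1 (mod 2^k+1), so X mod m reduces to adding, or alternately adding and subtracting, k-bit chunks of X, which maps directly onto compressor trees. A behavioural sketch of the arithmetic (the compressor circuits themselves are not modelled):

```python
def mod_2k_minus_1(x, k):
    """x mod (2**k - 1): since 2**k ≡ 1, sum the k-bit chunks and fold."""
    m = (1 << k) - 1
    while x > m:
        x = (x & m) + (x >> k)
    return 0 if x == m else x

def mod_2k_plus_1(x, k):
    """x mod (2**k + 1): since 2**k ≡ -1, alternately add/subtract chunks."""
    m = (1 << k) + 1
    s, sign = 0, 1
    while x:
        s += sign * (x & ((1 << k) - 1))
        sign, x = -sign, x >> k
    return s % m

x, k = 0xDEADBEEF, 8
print(x % 255 == mod_2k_minus_1(x, k), x % 257 == mod_2k_plus_1(x, k))  # True True
```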

Book ChapterDOI
16 Oct 2006
TL;DR: In this paper, a re-ranking strategy that compensates for sparsity in a user profile by applying collaborative filtering algorithms is presented; it shows an improvement in precision over approaches that use only the user's profile.
Abstract: Search engines today often return a large volume of results, with possibly only a few relevant ones. The notion of relevance is subjective and depends on the user and the context of the search. Re-ranking these results to reflect the most relevant results to the user, using a user profile built from relevance feedback, has been shown to give good results. Our approach gathers implicit feedback from search engine query logs and learns a user profile. The user profile typically runs into sparsity problems due to the sheer volume of the WWW; sparsity refers to the missing weights of certain words in the user profile. In this paper, we present an effective re-ranking strategy that compensates for the sparsity in a user's profile by applying collaborative filtering algorithms. Our evaluation results show an improvement in precision over approaches that use only the user's profile.
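A sketch of the overall idea: fill the zero entries of a sparse profile with a similarity-weighted blend of the most similar users' profiles, then re-rank by profile-document match. The vectors, similarity choice, and blending rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def densify_profile(user_vec, neighbour_vecs, top_k=2):
    """Fill missing (zero) weights in a sparse user profile with a similarity-
    weighted average over the most similar users (cosine similarity)."""
    sims = np.array([np.dot(user_vec, v) /
                     (np.linalg.norm(user_vec) * np.linalg.norm(v) + 1e-12)
                     for v in neighbour_vecs])
    top = np.argsort(sims)[-top_k:]
    blended = np.average([neighbour_vecs[i] for i in top], axis=0, weights=sims[top])
    return np.where(user_vec > 0, user_vec, blended)

def rerank(results, doc_vecs, profile):
    """Reorder search results by profile-document dot product."""
    scores = [np.dot(doc_vecs[r], profile) for r in results]
    return [r for _, r in sorted(zip(scores, results), reverse=True)]

profile = np.array([0.9, 0.0, 0.0, 0.4])       # sparse: terms 1 and 2 unknown
neighbours = [np.array([0.8, 0.5, 0.1, 0.3]), np.array([0.1, 0.0, 0.9, 0.0])]
dense = densify_profile(profile, neighbours)
docs = {"d1": np.array([0.0, 1.0, 0.0, 0.0]), "d2": np.array([0.0, 0.0, 1.0, 0.0])}
print(rerank(["d1", "d2"], docs, dense))       # ['d1', 'd2']
```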