
Showing papers by "International Institute of Information Technology, Hyderabad" published in 2009


Proceedings Article
01 Feb 2009
TL;DR: It is demonstrated that the performance of the proposed method can be far superior to that of commercial OCR systems, and can benefit from synthetically generated training data obviating the need for expensive data collection and annotation.
Abstract: This paper tackles the problem of recognizing characters in images of natural scenes. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English and Kannada characters. The database comprises images of street scenes taken in Bangalore, India using a standard camera. The problem is addressed in an object categorization framework based on a bag-of-visual-words representation. We assess the performance of various features based on nearest neighbour and SVM classification. It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems. Furthermore, the method can benefit from synthetically generated training data, obviating the need for expensive data collection and annotation.

520 citations
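
For readers unfamiliar with the pipeline, a minimal bag-of-visual-words sketch follows, with synthetic descriptors and scikit-learn's KMeans/SVC standing in for the paper's actual features and classifiers:

```python
# Minimal bag-of-visual-words character classifier sketch (not the paper's
# exact features); assumes each image is already represented by a set of
# local descriptors, e.g. SIFT-like vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(all_descriptors, k=32, seed=0):
    """Quantize local descriptors into k visual words."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_descriptors)

def bovw_histogram(descriptors, vocab):
    """Represent one image as a normalized histogram of visual words."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy data: 20 "images", each with 50 random 128-d descriptors, 2 classes.
rng = np.random.default_rng(0)
images = [rng.normal(loc=c, size=(50, 128)) for c in (0.0, 1.0) for _ in range(10)]
labels = [0] * 10 + [1] * 10

vocab = build_vocabulary(np.vstack(images))
X = np.array([bovw_histogram(d, vocab) for d in images])
clf = SVC(kernel="rbf").fit(X, labels)   # the paper also compares nearest neighbour
print(clf.score(X, labels))
```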


Proceedings ArticleDOI
14 Jun 2009
TL;DR: It is observed that existing MKL formulations can be extended to learn general kernel combinations subject to general regularization while retaining all the efficiency of existing large scale optimization algorithms.
Abstract: Recent advances in Multiple Kernel Learning (MKL) have positioned it as an attractive tool for tackling many supervised learning tasks. The development of efficient gradient descent based optimization schemes has made it possible to tackle large scale problems. Simultaneously, MKL based algorithms have achieved very good results on challenging real world applications. Yet, despite their successes, MKL approaches are limited in that they focus on learning a linear combination of given base kernels. In this paper, we observe that existing MKL formulations can be extended to learn general kernel combinations subject to general regularization. This can be achieved while retaining all the efficiency of existing large scale optimization algorithms. To highlight the advantages of generalized kernel learning, we tackle feature selection problems on benchmark vision and UCI databases. It is demonstrated that the proposed formulation can lead to better results not only as compared to traditional MKL but also as compared to state-of-the-art wrapper and filter methods for feature selection.

461 citations
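
For context, the linear-combination setting that the paper generalizes can be sketched as alternating SVM training and projected gradient steps on the kernel weights (the simpleMKL-style gradient); the data, learning rate and stopping rule below are illustrative assumptions:

```python
# Sketch of classical MKL: learn weights d (d >= 0, sum(d) = 1) for a linear
# combination of base kernels by alternating SVM training and gradient steps.
import numpy as np
from sklearn.svm import SVC

def mkl_fit(kernels, y, C=1.0, steps=20, lr=0.5):
    d = np.ones(len(kernels)) / len(kernels)          # kernel weights
    for _ in range(steps):
        K = sum(w * Km for w, Km in zip(d, kernels))
        clf = SVC(C=C, kernel="precomputed").fit(K, y)
        sv, coef = clf.support_, clf.dual_coef_.ravel()   # coef_i = alpha_i * y_i
        # dJ/dd_k = -0.5 * coef^T K_k[sv, sv] coef for each base kernel
        grad = np.array([-0.5 * coef @ Km[np.ix_(sv, sv)] @ coef for Km in kernels])
        d = np.clip(d - lr * grad, 0, None)           # gradient step + projection
        d /= d.sum()                                  # back onto the simplex
    return d, clf

# Toy example with two base kernels (linear and RBF) on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=60))
K_lin = X @ X.T
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K_rbf = np.exp(-sq / 5.0)
weights, model = mkl_fit([K_lin, K_rbf], y)
print(weights)
```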


Proceedings ArticleDOI
19 Apr 2009
TL;DR: The results of voice conversion evaluated using subjective and objective measures confirm that ANNs perform better transformation than GMMs and the quality of the transformed speech is intelligible and has the characteristics of the target speaker.
Abstract: In this paper, we propose to use Artificial Neural Networks (ANN) for voice conversion. We have exploited the mapping abilities of ANN to map the spectral features of a source speaker to those of a target speaker. A comparative study of voice conversion using ANN and the state-of-the-art Gaussian Mixture Model (GMM) is conducted. The results of voice conversion, evaluated using subjective and objective measures, confirm that ANNs perform better transformation than GMMs and that the transformed speech is intelligible and has the characteristics of the target speaker.

229 citations
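
A minimal sketch of the spectral-mapping idea, with an off-the-shelf MLP and synthetic "aligned" frames standing in for real parallel speech features:

```python
# Sketch of ANN-based spectral mapping for voice conversion: an MLP regressor
# learns source-speaker frames -> target-speaker frames. Real systems use
# DTW-aligned MCEP/LSF features; the data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
src = rng.normal(size=(2000, 25))                    # source spectral frames
W = rng.normal(size=(25, 25)) * 0.2
tgt = src @ W + 0.05 * rng.normal(size=(2000, 25))   # pretend aligned target frames

ann = MLPRegressor(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
ann.fit(src[:1500], tgt[:1500])

# Objective-measure-style check on held-out frames
pred = ann.predict(src[1500:])
print("RMSE:", np.sqrt(((pred - tgt[1500:]) ** 2).mean()))
```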


Proceedings ArticleDOI
01 Aug 2009
TL;DR: This paper presents a minimum spanning tree algorithm on Nvidia GPUs under CUDA, as a recursive formulation of Borůvka's approach for undirected graphs, implemented using scalable primitives such as scan, segmented scan and split.
Abstract: Graphics Processor Units are used for many general purpose processing tasks due to the high compute power available on them. Regular, data-parallel algorithms map well to the SIMD architecture of current GPUs. Irregular algorithms on discrete structures like graphs are harder to map to them. Efficient data-mapping primitives can play a crucial role in mapping such algorithms onto the GPU. In this paper, we present a minimum spanning tree algorithm on Nvidia GPUs under CUDA, as a recursive formulation of Borůvka's approach for undirected graphs. We implement it using scalable primitives such as scan, segmented scan and split. The irregular steps of supervertex formation and recursive graph construction are mapped to primitives like split using categories based on vertex IDs and edge weights. We obtain a 30 to 50 times speedup over the CPU implementation on most graphs and a 3 to 10 times speedup over our previous GPU implementation. We construct the minimum spanning tree on a 5 million node and 30 million edge graph in under 1 second on one quarter of the Tesla S1070 GPU.

126 citations
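
The GPU algorithm is a recursive formulation of the classical procedure; a sequential Python sketch of Borůvka's rounds (cheapest outgoing edge per component, then supervertex contraction) conveys the structure that the paper maps onto scan and split primitives:

```python
# Sequential sketch of Boruvka's MST algorithm. The paper implements exactly
# these steps (cheapest-edge selection, supervertex formation, graph
# contraction) with GPU primitives such as split and segmented scan.
def boruvka_mst(n, edges):
    """edges: list of (weight, u, v). Returns MST weight (graph assumed connected)."""
    parent = list(range(n))
    def find(x):                          # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total, components = 0, n
    while components > 1:
        cheapest = {}                     # component -> cheapest outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, ru, rv)
        for w, ru, rv in cheapest.values():
            if find(ru) != find(rv):      # supervertex formation (contraction)
                parent[find(ru)] = find(rv)
                total += w
                components -= 1
    return total

print(boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]))  # 6
```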


Proceedings ArticleDOI
23 May 2009
TL;DR: This paper presents the implementation of singular value decomposition (SVD) of a dense matrix on GPU using the CUDA programming model and shows a speedup of up to 60 over the MATLAB implementation and up to 8 over the Intel MKL implementation on an Intel Dual Core 2.66GHz PC for large matrices.
Abstract: Linear algebra algorithms are fundamental to many computing applications. Modern GPUs are suited to many general purpose processing tasks and have emerged as inexpensive high performance co-processors due to their tremendous computing power. In this paper, we present the implementation of singular value decomposition (SVD) of a dense matrix on the GPU using the CUDA programming model. SVD is implemented using the twin steps of bidiagonalization followed by diagonalization. It had not been implemented on the GPU before. Bidiagonalization is implemented using a series of Householder transformations, which map well to BLAS operations. Diagonalization is performed by applying the implicitly shifted QR algorithm. Our complete SVD implementation significantly outperforms the MATLAB and Intel® Math Kernel Library (MKL) LAPACK implementations on the CPU. We show a speedup of up to 60 over the MATLAB implementation and up to 8 over the Intel MKL implementation on an Intel Dual Core 2.66GHz PC, using an NVIDIA GTX 280, for large matrices. We also give results for very large matrices on the NVIDIA Tesla S1070.

121 citations
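
A plain NumPy sketch of the bidiagonalization step (Householder reflections applied alternately from the left and right, assuming m >= n); the paper expresses these same updates as GPU BLAS operations:

```python
# Sketch of Golub-Kahan bidiagonalization, the first of the two steps the
# paper runs before implicitly shifted QR diagonalization.
import numpy as np

def householder(x):
    """Return unit v such that (I - 2 v v^T) x = +/- ||x|| e1."""
    v = x.astype(float).copy()
    v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def bidiagonalize(A):
    """Reduce A (m >= n) to upper bidiagonal form via Householder reflections."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        v = householder(B[k:, k])                  # zero column k below diagonal
        B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
        if k < n - 2:
            v = householder(B[k, k + 1:])          # zero row k beyond superdiagonal
            B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ v, v)
    return B

A = np.random.default_rng(0).normal(size=(6, 4))
B = bidiagonalize(A)
print(np.round(B, 6))                  # only diagonal and superdiagonal survive
# orthogonal transforms preserve singular values:
print(np.allclose(np.linalg.svd(A)[1], np.linalg.svd(B)[1]))
```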


Proceedings ArticleDOI
13 Nov 2009
TL;DR: The method gives 90% accuracy and 100% recall in detecting abnormality at the patient level, and achieves an average precision of 91% and recall of 90% at the slice level.
Abstract: Computed tomographic (CT) images are widely used in the diagnosis of stroke. In this paper, we present an automated method to detect and classify an abnormality into acute infarct, chronic infarct and hemorrhage at the slice level of non-contrast CT images. The proposed method consists of three main steps: image enhancement, detection of mid-line symmetry and classification of abnormal slices. A windowing operation is performed on the intensity distribution to enhance the region of interest. Domain knowledge about the anatomical structure of the skull and the brain is used to detect abnormalities in a rotation- and translation-invariant manner. A two-level classification scheme is used to detect abnormalities using features derived in the intensity and the wavelet domain. The proposed method has been evaluated on a dataset of 15 patients (347 image slices). The method gives 90% accuracy and 100% recall in detecting abnormality at the patient level, and achieves an average precision of 91% and recall of 90% at the slice level.

113 citations
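
The windowing step can be illustrated in a few lines; the window center and width below are typical brain-window values, not necessarily the paper's:

```python
# Sketch of the intensity-windowing enhancement step: map a Hounsfield-unit
# window of interest onto the full display range.
import numpy as np

def window_ct(slice_hu, center=40, width=80):
    """Clip HU values to [center - width/2, center + width/2], rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(slice_hu, lo, hi) - lo) / (hi - lo)

fake_slice = np.random.default_rng(0).integers(-1000, 1000, size=(64, 64))
enhanced = window_ct(fake_slice)
print(enhanced.min(), enhanced.max())   # 0.0 1.0
```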


Proceedings ArticleDOI
01 Dec 2009
TL;DR: This paper presents a performance prediction model for the CUDA GPGPU platform that encompasses the various facets of the GPU architecture like scheduling, memory hierarchy, and pipelining among others and can be used to analyze pseudo code for a CUDA kernel to obtain a performance estimate.
Abstract: The significant growth in computational power of modern Graphics Processing Units (GPUs), coupled with the advent of general purpose programming environments like NVIDIA's CUDA, has seen GPUs emerging as a very popular parallel computing platform. Until recently, there has not been a performance model for GPGPUs. The absence of such a model makes it difficult to definitively assess the suitability of the GPU for solving a particular problem and is a significant impediment to the mainstream adoption of GPUs as a massively parallel (super)computing platform. In this paper we present a performance prediction model for the CUDA GPGPU platform. This model encompasses the various facets of the GPU architecture like scheduling, memory hierarchy, and pipelining, among others. We also perform experiments that demonstrate the effects of various memory access strategies. The proposed model can be used to analyze pseudo code for a CUDA kernel to obtain a performance estimate, in a way that is similar to performing asymptotic analysis. We illustrate the usage of our model and its accuracy with three case studies: matrix multiplication, list ranking, and histogram generation.

111 citations
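
As a toy illustration of the kind of estimate such a model yields (not the paper's actual model), a kernel's runtime can be bounded by the slower of its compute and memory pipelines; every constant below is an illustrative assumption:

```python
# Toy analytical kernel-time estimate: the dominant pipeline (compute or
# memory) hides the other. Constants and formula are illustrative only.
def estimate_kernel_time(n_threads, cycles_per_thread, mem_accesses_per_thread,
                         clock_hz=1.3e9, cores=240, bytes_per_access=4,
                         bandwidth=100e9):
    compute_s = n_threads * cycles_per_thread / (cores * clock_hz)
    memory_s = n_threads * mem_accesses_per_thread * bytes_per_access / bandwidth
    return max(compute_s, memory_s)

# e.g. a 1M-thread kernel, 200 cycles and 8 global loads per thread
print(estimate_kernel_time(1 << 20, 200, 8))
```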


Proceedings ArticleDOI
10 Oct 2009
TL;DR: Successful and repeatable detection and pursuit of people and other moving objects in real time, with a monocular camera mounted on the Pioneer 3DX in a cluttered environment, confirms the efficacy of the method.
Abstract: The ability to detect and track multiple moving objects like people and other robots is an important prerequisite for mobile robots working in dynamic indoor environments. We approach this problem by detecting independently moving objects in image sequences from a monocular camera mounted on a robot. We use multi-view geometric constraints to classify a pixel as moving or static. The first constraint we use is the epipolar constraint, which requires images of static points to lie on the corresponding epipolar lines in subsequent images. In the second constraint, we use the knowledge of the robot motion to estimate a bound on the position of an image pixel along the epipolar line. This is capable of detecting moving objects followed by a moving camera in the same direction, a so-called degenerate configuration where the epipolar constraint fails. To classify the moving pixels robustly, a Bayesian framework is used to assign a probability that a pixel is stationary or dynamic based on the above geometric properties, and the probabilities are updated as the pixels are tracked in subsequent images. The same framework also accounts for the error in the estimation of camera motion. Successful and repeatable detection and pursuit of people and other moving objects in real time with a monocular camera mounted on the Pioneer 3DX, in a cluttered environment, confirms the efficacy of the method.

90 citations
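
The first constraint reduces to a point-to-line distance test; a NumPy sketch, where F is the fundamental matrix between consecutive frames and the example F corresponds to a pure sideways translation:

```python
# Sketch of the epipolar constraint: a tracked pixel whose distance from its
# epipolar line exceeds a threshold is evidence of independent motion.
import numpy as np

def epipolar_residual(F, p1, p2):
    """Distance of p2 = (x, y) from the epipolar line F @ [p1, 1]."""
    l = F @ np.array([p1[0], p1[1], 1.0])                 # line (a, b, c)
    return abs(l @ np.array([p2[0], p2[1], 1.0])) / np.hypot(l[0], l[1])

F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)  # translation along x
print(epipolar_residual(F, (10.0, 20.0), (15.0, 20.0)))  # static point: ~0
print(epipolar_residual(F, (10.0, 20.0), (15.0, 27.0)))  # moving point: large
```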


Journal ArticleDOI
TL;DR: The analysis of the sequence of PC homologues in various organisms shows that PC is an ancient gene dating back to a pre-bilaterian origin that has not only been conserved but has also expanded during the evolution of complexity.
Abstract: Polycomb group (PcG) proteins maintain the expression pattern of genes set early during development. Although originally isolated as regulators of homeotic genes, PcG members play a key role in the epigenetic mechanism that maintains the expression state of a large number of genes. Polycomb (PC) is conserved during evolution and, while invertebrates have one PC gene, vertebrates have five or more homologues. It remains unclear if different vertebrate PC homologues have distinct or overlapping functions. We have identified and compared the sequence of PC homologues in various organisms to analyze similarities and differences that shaped the evolutionary history of this key regulatory protein. All PC homologues have an N-terminal chromodomain and a C-terminal Polycomb Repressor box. We searched the protein and genome sequence databases of various organisms for these signatures and identified ~100 PC homologues. Comparative analysis of these sequences led to the identification of a novel insect-specific motif and several novel and signature motifs in the vertebrate homologues: two in CBX2 (Cx2.1 and Cx2.2), four in CBX4 (Cx4.1, Cx4.2, Cx4.3 and Cx4.4), three in CBX6 (Cx6.1, Cx6.2 and Cx6.3) and one in CBX8 (Cx8.1). Additionally, adjacent to the chromodomain, all the vertebrate homologues have a DNA binding motif - an AT-Hook in the case of CBX2, which was known earlier, and an 'AT-Hook Like' motif, from this study, in the other PC homologues. Our analysis shows that PC is an ancient gene dating back to a pre-bilaterian origin that has not only been conserved but has also expanded during the evolution of complexity. Unique motifs acquired by each homologue have been maintained for more than 500 million years, indicating their functional relevance in boosting the epigenetic 'tool kit'. We report the presence of a DNA interaction motif adjacent to the chromodomain in all vertebrate PC homologues and suggest a three-way 'PC-histoneH3-DNA' interaction that can restrict nucleosome dynamics. The signature motifs of PC homologues and the insect-specific motif identified in this study pave the way to understanding the molecular basis of epigenetic mechanisms.

70 citations


01 Jan 2009
TL;DR: A treebanking project for Hindi/Urdu is annotating dependency syntax, lexical predicate-argument structure, and phrase structure syntax in a coordinated and partly automated manner.
Abstract: This paper describes a treebanking project for Hindi/Urdu. We are annotating dependency syntax, lexical predicate-argument structure, and phrase structure syntax in a coordinated and partly automated manner. The paper focuses on choices in syntactic representation, and the stages we think are most appropriate for annotating different types of information.

68 citations


Proceedings ArticleDOI
09 Oct 2009
TL;DR: A novel clustering based Short Term Load Forecasting (STLF) method using Support Vector Machines (SVM) is presented, and the results show that the clustering-based approach is more accurate.
Abstract: A novel clustering based Short Term Load Forecasting (STLF) method using Support Vector Machines (SVM) is presented in this paper. The forecasting is performed for the 48 half-hourly loads of the next day. The daily average load of each day for all the training and testing patterns is calculated, and the patterns are clustered using a threshold on the difference between the daily average load of the testing pattern and that of the training patterns. The data considered for forecasting contain 2 years of half-hourly daily load and daily average temperature. The proposed architecture is implemented in Matlab. Results obtained with and without clustering of the input patterns are presented, and they show that the clustering-based approach is more accurate.
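
A minimal sketch of the clustering step followed by per-slot SVR forecasting; the data, threshold and day pairing below are illustrative assumptions:

```python
# Sketch of the clustering step: keep only training days whose daily average
# load lies within a threshold of the test day's average, then fit one SVR
# per half-hourly slot.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
train = rng.uniform(300, 700, size=(365, 48))     # a year of half-hourly loads
test_day_avg = 520.0
threshold = 40.0                                  # assumed clustering threshold

similar = train[np.abs(train.mean(axis=1) - test_day_avg) < threshold]
X, Y = similar[:-1], similar[1:]                  # day -> next retained day (toy pairing)
forecast = [SVR(kernel="rbf", C=10.0).fit(X, Y[:, h]).predict(X[-1:])[0]
            for h in range(48)]                   # one model per half-hour slot
print(len(forecast), round(forecast[0], 1))
```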

Proceedings ArticleDOI
06 Aug 2009
TL;DR: The Hindi Discourse Relation Bank project is described, aimed at developing a large corpus annotated with discourse relations, and the lexically grounded approach of the Penn Discourse Treebank is adopted.
Abstract: We describe the Hindi Discourse Relation Bank project, aimed at developing a large corpus annotated with discourse relations. We adopt the lexically grounded approach of the Penn Discourse Treebank, and describe our classification of Hindi discourse connectives, our modifications to the sense classification of discourse relations, and some cross-linguistic comparisons based on some initial annotations carried out so far.

Book ChapterDOI
18 Apr 2009
TL;DR: A new approach to extracting the attributes of products from text descriptions, both unsupervised and domain independent, is proposed; it achieves 92% precision and 62% recall in experiments.
Abstract: Product Attribute Extraction is the task of automatically discovering attributes of products from text descriptions. In this paper, we propose a new approach, both unsupervised and domain independent, to extract the attributes. With our approach, we are able to achieve 92% precision and 62% recall in our experiments. Our experiments with varying dataset sizes show the robustness of our algorithm. We also show that even a minimum of 5 descriptions provides enough information to identify attributes.

Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work proposes an approach to restore severely degraded document images using a probabilistic context model that works well with document collections such as books, even with severe degradations, and hence is ideally suited for repositories such as digital libraries.
Abstract: We propose an approach to restore severely degraded document images using a probabilistic context model. Unlike traditional approaches that use previously learned prior models to restore an image, we are able to learn the text model from the degraded document itself, making the approach independent of script, font, style, etc. We model the contextual relationship using an MRF. The ability to work with larger patch sizes allows us to deal with severe degradations including cuts, blobs, merges and vandalized documents. Our approach can also integrate document restoration and super-resolution into a single framework, thus directly generating high quality images from degraded documents. Experimental results show significant improvement in image quality on document images collected from various sources including magazines and books, and comprehensively demonstrate the robustness and adaptability of the approach. It works well with document collections such as books, even with severe degradations, and hence is ideally suited for repositories such as digital libraries.

Proceedings ArticleDOI
13 Nov 2009
TL;DR: This work proposes a clustering-based method to segment exudates using multi-space clustering and colorspace features, and achieves an accuracy of 89.7% and a positive predictive value of 87%.
Abstract: Exudates are a class of lipid retinal lesions visible through optical fundus imaging, and indicative of diabetic retinopathy. We propose a clustering-based method to segment exudates using multi-space clustering and colorspace features. The method was evaluated on a set of 89 images from a publicly available dataset, and achieves an accuracy of 89.7% and a positive predictive value of 87%.
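
A toy sketch of the clustering idea in a single colorspace; a real system would combine several colorspaces and post-process the candidate regions:

```python
# Sketch of clustering-based lesion segmentation: cluster pixels by color
# features and keep the brightest cluster as candidate exudates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, size=(32, 32, 3))          # stand-in fundus RGB image
pixels = image.reshape(-1, 3)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
bright = np.argmax(km.cluster_centers_.sum(axis=1))  # exudates appear bright/yellow
mask = (km.labels_ == bright).reshape(image.shape[:2])
print(mask.sum(), "candidate exudate pixels")
```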

Proceedings ArticleDOI
01 Dec 2009
TL;DR: This paper presents a forecasting method based on similar day approach in conjunction with fuzzy rule-based logic for next-day load forecast that is effective on a typical load and weather data.
Abstract: The main objective of short term load forecasting (STLF) is to provide load predictions for generation scheduling, economic load dispatch and security assessment at any time. Thus, STLF is needed to supply necessary information for the system management of day-to-day operations and unit commitment. This paper presents a forecasting method based on a similar day approach in conjunction with fuzzy rule-based logic. To obtain the next-day load forecast, fuzzy logic is used to modify the load curves of selected similar days. A Euclidean norm considering weather variables such as ‘temperature’ and ‘humidity’ with weight factors is used for the selection of similar days. The effectiveness of the proposed approach is demonstrated on typical load and weather data.
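
A sketch of the weighted Euclidean norm used for similar-day selection; the weight factors and data here are illustrative assumptions:

```python
# Sketch of similar-day selection via a weighted Euclidean norm over
# temperature and humidity differences.
import numpy as np

def similarity_norm(day, candidate, w_temp=0.7, w_hum=0.3):
    dt = day["temp"] - candidate["temp"]
    dh = day["hum"] - candidate["hum"]
    return np.sqrt(w_temp * dt ** 2 + w_hum * dh ** 2)

history = [{"temp": t, "hum": h} for t, h in [(31, 60), (28, 75), (33, 55)]]
today = {"temp": 32, "hum": 58}
best = min(history, key=lambda d: similarity_norm(today, d))
print(best)   # the most similar past day, whose load curve is then adjusted
```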

Proceedings ArticleDOI
07 Aug 2009
TL;DR: The results show the popular phrase-based SMT techniques can be successfully used for the task of machine transliteration, for English-Hindi language pair.
Abstract: In this paper we use the popular phrase-based SMT techniques for the task of machine transliteration, for English-Hindi language pair. Minimum error rate training has been used to learn the model weights. We have achieved an accuracy of 46.3% on the test set. Our results show these techniques can be successfully used for the task of machine transliteration.

Proceedings ArticleDOI
04 Jun 2009
TL;DR: It is shown that the summarizer built is able to outperform most systems participating in task focused summarization evaluations at Text Analysis Conferences (TAC) 2008 and would perform better at producing short summaries than longer summaries.
Abstract: In this paper, we describe a sentence position based summarizer that is built based on a sentence position policy, created from the evaluation testbed of recent summarization tasks at the Document Understanding Conferences (DUC). We show that the summarizer thus built is able to outperform most systems participating in task focused summarization evaluations at Text Analysis Conferences (TAC) 2008. Our experiments also show that such a method would perform better at producing short summaries (up to 100 words) than longer summaries. Further, we discuss the baselines traditionally used for summarization evaluation and suggest the revival of an old baseline to suit the current summarization task at TAC: the Update Summarization task.
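
A minimal sketch of a position-based summarizer under a simple assumed policy (earlier sentences score higher), with the 100-word cap mentioned in the paper:

```python
# Sketch of a sentence-position summarizer: rank sentences by position and
# emit them until the word budget is exhausted. The 1/(i+1) policy is an
# illustrative assumption, not the learned DUC policy.
def position_summary(documents, max_words=100):
    ranked = sorted(((1.0 / (i + 1), s) for doc in documents
                     for i, s in enumerate(doc)), reverse=True)
    summary, count = [], 0
    for _, sentence in ranked:
        words = len(sentence.split())
        if count + words > max_words:
            break
        summary.append(sentence)
        count += words
    return " ".join(summary)

docs = [["First sentence of doc one.", "Second sentence of doc one."],
        ["First sentence of doc two.", "Second sentence of doc two."]]
print(position_summary(docs, max_words=12))
```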

Proceedings ArticleDOI
07 Oct 2009
TL;DR: The paper describes the overall design of a new two stage constraint based hybrid approach to dependency parsing and shows how the use of hard constraints and soft constraints helps to build an efficient and robust hybrid parser.
Abstract: The paper describes the overall design of a new two stage constraint based hybrid approach to dependency parsing. We define the two stages and show how different grammatical constructs are parsed at appropriate stages. This division leads to selective identification and resolution of specific dependency relations at the two stages. Furthermore, we show how the use of hard constraints and soft constraints helps us build an efficient and robust hybrid parser. Finally, we evaluate the implemented parser on Hindi and compare the results with those of two data driven dependency parsers.

Book ChapterDOI
04 Jun 2009
TL;DR: The protocol provides non-repudiable identity verification while not revealing any additional information about the user to the server or vice versa, uses asymmetric encryption, and captures the advantages of biometric authentication.
Abstract: Biometric authentication over public networks leads to a variety of privacy issues that need to be addressed before it can become popular. The primary concerns are that the biometrics might reveal more information than the identity itself, as well as provide the ability to track users over an extended period of time. In this paper, we propose an authentication protocol that alleviates these concerns. The protocol takes care of user privacy, template protection and trust issues in biometric authentication systems. The protocol uses asymmetric encryption, and captures the advantages of biometric authentication. The protocol provides non-repudiable identity verification, while not revealing any additional information about the user to the server or vice versa. We show that the protocol is secure under various attacks. Experimental results indicate that the overall method is efficient enough to be used in practical scenarios.

Proceedings ArticleDOI
08 Jun 2009
TL;DR: This paper describes two implementations of List Ranking, a traditional irregular algorithm that is difficult to parallelize on massively multi-threaded hardware, and presents a GPU-optimized, Recursive Helman-JaJa (RHJ) algorithm.
Abstract: General purpose programming on graphics processing units (GPGPU) has received a lot of attention in the parallel computing community, as it promises to offer the highest performance per dollar. GPUs have been used extensively on regular problems that can be easily parallelized. In this paper, we describe two implementations of List Ranking, a traditional irregular algorithm that is difficult to parallelize on such massively multi-threaded hardware. We first present an implementation of Wyllie's algorithm based on pointer jumping. This technique does not scale well to large lists due to the suboptimal work done. We then present a GPU-optimized, Recursive Helman-JaJa (RHJ) algorithm. Our RHJ implementation can rank a random list of 32 million elements in about a second and achieves a speedup of about 8-9 over a CPU implementation as well as a speedup of 3-4 over the best reported implementation on the Cell Broadband Engine. We also discuss the practical issues relating to the implementation of irregular algorithms on massively multi-threaded architectures like that of the GPU. Regular or coalesced memory access patterns and balanced load are critical to achieving good performance on the GPU.
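
A NumPy sketch of Wyllie's pointer-jumping rounds, where each whole-vector update stands in for one synchronous round of GPU threads:

```python
# Sketch of Wyllie's list-ranking algorithm: every node repeatedly adds its
# successor's rank and jumps its pointer, halving each chain per round.
import numpy as np

def wyllie_rank(nxt):
    """nxt[i] = successor of i, with the tail pointing to itself.
    Returns each node's distance to the tail."""
    nxt = np.asarray(nxt).copy()
    rank = (nxt != np.arange(len(nxt))).astype(int)  # tail starts at 0
    while (nxt != nxt[nxt]).any():
        rank = rank + rank[nxt]   # synchronous read of old ranks
        nxt = nxt[nxt]            # pointer jumping
    return rank

# list: 3 -> 1 -> 0 -> 2 (tail)
print(wyllie_rank([2, 0, 2, 1]))  # [1 2 0 3]
```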

Proceedings ArticleDOI
01 Jun 2009
TL;DR: A language independent letter-to-phoneme conversion approach based on the popular phrase-based Statistical Machine Translation techniques, which shows an overall improvement of 5.8% over the baseline and is comparable to the state of the art.
Abstract: Letter-to-phoneme conversion plays an important role in several applications. It can be a difficult task because the mapping from letters to phonemes can be many-to-many. We present a language independent letter-to-phoneme conversion approach which is based on the popular phrase-based Statistical Machine Translation techniques. The results of our experiments clearly demonstrate that such techniques can be used effectively for letter-to-phoneme conversion. Our results show an overall improvement of 5.8% over the baseline and are comparable to the state of the art. We also propose a measure to estimate the difficulty level of the L2P task for a language.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This study has demonstrated an effective way to differentiate vibration frequencies by combining EMD and second-order difference (SOD) plots, with variability quantified using a Central Tendency Measure (CTM).
Abstract: This paper presents a new method for the analysis of center of pressure (COP) signals using Empirical Mode Decomposition (EMD). The EMD decomposes a COP signal into a finite set of band-limited signals termed intrinsic mode functions (IMFs). Thereafter, a signal processing technique used in continuous chaotic modeling is applied to investigate the difference between experimental conditions on the summed IMFs. This method detects the degree of variability from a second-order difference plot, which is quantified using a Central Tendency Measure (CTM). Seventeen subjects were tested under eyes open (EO) and eyes closed (EC) conditions, with different vibration frequencies applied in the EC condition to provide additional sensory perturbation. This study has demonstrated an effective way to differentiate vibration frequencies by combining EMD and second-order difference (SOD) plots.
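
The SOD plot and CTM are compact to compute; a sketch on a toy signal (the paper applies the measure to the summed IMFs after EMD):

```python
# Sketch of the second-order-difference (SOD) plot and Central Tendency
# Measure: the fraction of SOD points inside a circle of radius r.
import numpy as np

def ctm(signal, r):
    x = np.diff(signal)[:-1]          # s[n+1] - s[n]
    y = np.diff(signal)[1:]           # s[n+2] - s[n+1]
    return np.mean(np.hypot(x, y) < r)

t = np.linspace(0, 10, 1000)
steady = np.sin(2 * np.pi * t)
noisy = steady + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(ctm(steady, 0.1), ctm(noisy, 0.1))   # more variability -> lower CTM
```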

Proceedings ArticleDOI
07 Aug 2009
TL;DR: A statistical transliteration technique that is language independent, using statistical alignment models and Conditional Random Fields, which have efficient training and decoding processes conditioned on both source and target languages and produce a globally optimal solution.
Abstract: In this paper we present a statistical transliteration technique that is language independent. This technique uses statistical alignment models and Conditional Random Fields (CRF). The statistical alignment models maximize the probability of the observed (source, target) word pairs using the expectation maximization algorithm, and the character level alignments are then set to the maximum posterior predictions of the model. CRFs have efficient training and decoding processes, which are conditioned on both source and target languages and produce a globally optimal solution.

Proceedings ArticleDOI
01 Nov 2009
TL;DR: In this article, replacing buffers with Schmitt triggers is proposed for the purpose of signal restoration, giving a 59% delay reduction as compared to 45% in the case of bus coding.
Abstract: In interconnect bus coding techniques, the presence of buffers is often ignored. Buffers are used to restore the signal level affected by parasitics. However, buffers have a certain switching time that contributes to the overall signal delay. Further, the transitions that happen on interconnects also contribute to crosstalk delay. Thus the overall delay in interconnects is due to the combined effect of both buffer and crosstalk delay. Here, a replacement of buffers with Schmitt triggers is proposed for the same purpose of signal restoration. Due to the lower threshold voltage of the Schmitt trigger, the signal can rise early, and the trigger's large noise margin helps in reducing noise glitches as well. Hence we do not need to add extra hardware for bus coding for the removal of higher crosstalk classes and delay reduction. Simulation results show that the replacement gives a 59% delay reduction, as compared to 45% in the case of bus coding.

Book ChapterDOI
04 Jun 2009
TL;DR: This paper proposes a method to deal with variations in pose in unconstrained palmprint imaging that can robustly estimate and correct pose variations, and compute a similarity measure between the corrected test image and a reference image.
Abstract: A palmprint based authentication system that can work with a multi-purpose camera in uncontrolled circumstances, such as one mounted on a laptop or mobile device or used for surveillance, can dramatically increase the applicability of such a system. However, the performance of existing techniques for palmprint authentication falls considerably when the camera is not aligned with the surface of the palm. The problems arise primarily due to variations in appearance introduced by varying pose, but are compounded by the specularity of the skin and blur due to motion and focus. In this paper, we propose a method to deal with variations in pose in unconstrained palmprint imaging. The method can robustly estimate and correct variations in pose, and compute a similarity measure between the corrected test image and a reference image. Experimental results on a set of 100 users' palms captured at varying poses show a reduction in Equal Error Rate from 22.4% to 8.7%.

Proceedings ArticleDOI
06 Aug 2009
TL;DR: This paper is an attempt to show that an intermediary level of analysis is an effective way of carrying out various NLP tasks for linguistically similar languages, and suggests that the simple parser may be useful for Indian languages that are similar in nature.
Abstract: This paper is an attempt to show that an intermediary level of analysis is an effective way of carrying out various NLP tasks for linguistically similar languages. We describe a process for developing a simple parser for doing such tasks. This parser uses a grammar driven approach to annotate dependency relations (both inter- and intra-chunk) at an intermediary level. Ease in identifying a particular dependency relation dictates the degree of analysis reached by the parser. To establish the efficiency of the simple parser, we show the improvement in its results over previous grammar driven dependency parsing approaches for Indian languages like Hindi. We also suggest that the simple parser may be useful for other Indian languages that are similar in nature.

Book ChapterDOI
13 Feb 2009
TL;DR: The reversible logic implementation of the modified conventional, as well as the proposed carry look-ahead and carry skip, BCD subtractors is introduced, efficient in terms of the number of reversible gates used and garbage outputs produced.
Abstract: Reversible logic is emerging as a promising computing paradigm, with applications in low-power CMOS, quantum computing, nanotechnology and optical computing. First, we show a modified design of the conventional BCD subtractor and also propose designs of carry look-ahead and carry skip BCD subtractors. The proposed designs of the carry look-ahead and carry skip BCD subtractors are based on novel designs of carry look-ahead and carry skip BCD adders, respectively. Then, we introduce reversible logic implementations of the modified conventional, as well as the proposed, carry look-ahead and carry skip BCD subtractors, efficient in terms of the number of reversible gates used and garbage outputs produced. To the best of our knowledge, the carry look-ahead and carry skip BCD subtractors and their reversible logic designs are explored for the first time in the literature.
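
The arithmetic these circuits implement is BCD subtraction by complement addition; a sketch of the digit-wise computation, using the ten's-complement variant:

```python
# Sketch of the arithmetic a BCD subtractor realizes: subtraction as addition
# of the nine's complement plus one (ten's complement), digit by digit.
def bcd_subtract(a_digits, b_digits):
    """a - b for equal-length BCD digit lists (a >= b), most significant first."""
    nines = [9 - d for d in b_digits]            # nine's complement of b
    carry, out = 1, []                           # +1 makes it the ten's complement
    for da, dn in zip(reversed(a_digits), reversed(nines)):
        s = da + dn + carry
        carry, digit = divmod(s, 10)             # the +6 BCD correction in hardware
        out.append(digit)
    return list(reversed(out))                   # final end-around carry discarded

print(bcd_subtract([7, 4, 2], [2, 8, 5]))        # 742 - 285 = [4, 5, 7]
```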

Proceedings Article
21 Jan 2009
TL;DR: A novel information-driven architecture for routing in Wireless Sensor Networks is proposed that guarantees energy efficiency in the network architecture and allows fine-grained sector localization.
Abstract: In this paper, we propose a novel information-driven architecture for routing in Wireless Sensor Networks that guarantees energy efficiency in the network architecture and allows fine-grained sector localization. The key idea is to hierarchically partition the sensor field into levels of increasing signal strength and to angularly partition the levels into sectors of increasing angular measure. We then form clusters in the sectors, with one cluster head assigned to each cluster. The logical reasoning is that it is sufficient to know the particular ‘level of sector’ where an event has occurred, instead of knowing the position of each and every sensor node in the network. Sectoring and clustering introduced at the levels benefit data aggregation and data filtering, and reduce power consumption and traffic throughout the network. Our approach, when applied in the network, increases the lifetime of the network and also helps to improve the successful transmission of data packets. We show that, compared to existing solutions such as levelling, gossiping and level controlled gossip, our approach is energy efficient and even outperforms other existing algorithms.

Proceedings Article
01 Jan 2009
TL;DR: The significance of excitation features for discriminating laughter and speech is discussed and new features describing the excitation characteristics are used to analyze the laugh signals.
Abstract: Laughter is a nonverbal vocalization that occurs often in speech communication. Since laughter is produced by the speech production mechanism, spectral analysis methods are mostly used for the study of laughter acoustics. In this paper, the significance of excitation features for discriminating laughter and speech is discussed. New features describing the excitation characteristics are used to analyze the laugh signals. The features are based on instantaneous pitch and the strength of excitation at epochs. An algorithm based on these features is developed to detect laughter regions in continuous speech. The results are illustrated by detecting laughter regions in a TV broadcast program.

Index Terms: Laughter detection, epoch, strength of excitation