Author

Parag Kulkarni

Bio: Parag Kulkarni is an academic researcher from College of Engineering, Pune. The author has contributed to research in topics including Cluster analysis and Context (language use). The author has an h-index of 17 and has co-authored 116 publications receiving 1633 citations. Previous affiliations of Parag Kulkarni include Bharati Vidyapeeth University and University of California, Los Angeles.


Papers
Proceedings ArticleDOI
02 Jan 2011
TL;DR: Proposes a novel multiplier architecture with tunable error characteristics that leverages a modified inaccurate 2x2 building block; it can achieve 2X–8X better Signal-to-Noise Ratio (SNR) for the same power savings compared to recent voltage over-scaling based power-error tradeoff methods.
Abstract: We propose a novel multiplier architecture with tunable error characteristics that leverages a modified inaccurate 2x2 building block. Our inaccurate multipliers achieve an average power saving of 31.78%–45.4% over corresponding accurate multiplier designs, for an average error of 1.39%–3.32%. Using image filtering and JPEG compression as sample applications, we show that our architecture can achieve 2X–8X better Signal-to-Noise Ratio (SNR) for the same power savings when compared to recent voltage over-scaling based power-error tradeoff methods. We project the multiplier power savings to bigger designs, highlighting the fact that the benefits are strongly design-dependent. We compare this circuit-centric approach to power-quality tradeoffs with a pure software adaptation approach for a JPEG example. We also enhance the design to allow for correct operation of the multiplier using a residual adder, for non-error-resilient applications.

411 citations
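The building-block idea above can be sketched in Python. This is a minimal illustration, assuming the inaccurate 2x2 block's only deviation from an exact multiplier is the 3×3 case (a common simplification for this style of design; the exact modified encoding is an assumption, not stated in the abstract):

```python
def inexact_2x2(a, b):
    # Hypothetical inaccurate 2x2 block: assume the only deviation from an
    # exact multiplier is 3 * 3 -> 7, which lets the output fit in 3 bits
    # instead of 4 at the cost of a bounded error (2 out of 9, ~22%).
    return 7 if a == b == 3 else a * b

def approx_mult_4x4(a, b):
    # Build a 4x4 multiplier from four 2x2 blocks via the standard
    # shift-and-add decomposition, which is how such an architecture scales.
    ah, al = a >> 2, a & 0b11
    bh, bl = b >> 2, b & 0b11
    return ((inexact_2x2(ah, bh) << 4)
            + (inexact_2x2(ah, bl) << 2)
            + (inexact_2x2(al, bh) << 2)
            + inexact_2x2(al, bl))

# Worst case: both operands are 0b1111, so all four 2x2 blocks hit 3 * 3.
print(approx_mult_4x4(15, 15), 15 * 15)  # approximate vs. exact product
```

Since each firing block under-counts by a known, shift-weighted constant, adding that residue back recovers the exact product — one plausible reading of how a residual adder restores correct operation for non-error-resilient applications.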

Patent
06 Dec 2007
TL;DR: In this article, the authors present systems and methods for Web-based intelligent paperless document management where users can collect, store, and share all documents from various locations, requiring minimal data re-entry because of data extraction capabilities.
Abstract: Provided are systems and methods for Web-based intelligent paperless document management where users can collect, store, and share all documents from various locations. Also provided are systems and methods requiring minimal data re-entry because of data extraction capabilities. Systems and methods described herein can deliver documents via the Internet to multiple persons without using barcodes or separator sheets for faxing or sending documents.

188 citations

Journal ArticleDOI
TL;DR: The survey shows that graph-based representation is an appropriate way of representing text documents and yields improved analysis results over the traditional model for different text applications.
Abstract: A common and standard approach to modelling a text document is the bag-of-words model. This model is suitable for capturing word frequency; however, structural and semantic information is ignored. Graph representations are mathematical constructs that can model relationships and structural information effectively. A text can be appropriately represented as a graph, using vertices for feature terms and edges for significant relations between those terms. Representing text as a graph supports computations such as term weighting and ranking, which are helpful in many information retrieval applications. This paper presents a systematic survey of existing work on graph-based representation of text and also focuses on graph-based analysis of text documents for different operations in information retrieval. In this process, a taxonomy of graph-based representation and analysis of text documents is derived, and the results of different methods of graph-based text representation and analysis are discussed. The survey shows that graph-based representation is an appropriate way of representing text documents and yields improved analysis results over the traditional model for different text applications.

104 citations
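The graph model described above — terms as vertices, significant relations as edges — can be sketched minimally. The sliding co-occurrence window and degree-based term weighting below are illustrative assumptions, not the specific methods covered by the survey:

```python
from collections import defaultdict

def build_term_graph(sentences, window=2):
    # Vertices are terms; an undirected edge links two terms that
    # co-occur within `window` positions of each other in a sentence.
    graph = defaultdict(set)
    for sent in sentences:
        tokens = sent.lower().split()
        for i, term in enumerate(tokens):
            for other in tokens[i + 1:i + window + 1]:
                if other != term:
                    graph[term].add(other)
                    graph[other].add(term)
    return graph

def rank_terms(graph):
    # Degree centrality as one simple graph-based term weight:
    # terms connected to more distinct terms rank higher.
    return sorted(graph, key=lambda t: len(graph[t]), reverse=True)

docs = ["graph models capture structure", "graph models capture relations"]
g = build_term_graph(docs)
print(rank_terms(g))
```

Unlike a bag-of-words vector, the edge set here retains which terms actually appear together, which is the structural information the abstract notes is lost in the traditional model.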

Journal ArticleDOI
TL;DR: This work proposes a novel multiplier architecture with tunable error characteristics that leverages a modified inaccurate 2x2 multiplier as its building block, and enhances the design to allow for correct operation of the multiplier using a correction unit, for non-error-resilient applications which share the hardware resource.
Abstract: Certain classes of applications are inherently capable of absorbing some error in computation, which allows for quality to be traded off for power. Such a tradeoff is often achieved through voltage over-scaling. We propose a novel multiplier architecture with tunable error characteristics that leverages a modified inaccurate 2x2 multiplier as its building block. Our inaccurate multipliers achieve an average power saving of 31.78%–45.4% over corresponding accurate multiplier designs, for an average error of 1.39%–3.32%. We compare our architecture with other approaches, such as voltage scaling, for introducing error in a multiplier. Using image filtering and JPEG compression as sample applications, we show that our architecture can achieve 2X–8X better Signal-to-Noise Ratio (SNR) for the same power savings when compared to recent voltage over-scaling based power-error tradeoff methods. We project the multiplier power savings to bigger designs, highlighting the fact that the benefits are strongly design-dependent. We compare this circuit-centric approach to power-quality tradeoffs with a pure software adaptation approach for a JPEG example. Unlike recent design-for-error approaches for arithmetic logic, we also enhance the design to allow for correct operation of the multiplier using a correction unit, for non-error-resilient applications which share the hardware resource.

96 citations

Proceedings ArticleDOI
13 Dec 2008
TL;DR: Experimental results show that the hybrid algorithm gives better classification accuracy; the paper also surveys important approaches to semi-supervised learning such as self-training, co-training (CO), expectation maximization (EM), and CO-EM, and explains how graph-based methods are useful.
Abstract: In traditional machine learning approaches to classification, one uses only a labelled set to train the classifier. Labelled instances, however, are often difficult, expensive, or time-consuming to obtain, as they require the efforts of experienced human annotators. Meanwhile, unlabeled data may be relatively easy to collect, but there have been few ways to use it. Semi-supervised learning addresses this problem by using a large amount of unlabeled data, together with the labelled data, to build better classifiers. Because semi-supervised learning requires less human effort and gives higher accuracy, it is of great interest both in theory and in practice. The paper discusses various important approaches to semi-supervised learning, such as self-training, co-training (CO), expectation maximization (EM), and CO-EM, and then explains how graph-based methods are useful. All semi-supervised learning methods are classified into generative and discriminative methods, but experimental results show that the hybrid algorithm gives better classification accuracy.

91 citations
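The self-training approach surveyed above can be illustrated with a toy sketch: fit a classifier on the labelled set, pseudo-label the unlabeled points it is confident about, retrain, and repeat. The 1-D nearest-centroid classifier, the distance threshold, and the data below are illustrative assumptions, not the paper's hybrid algorithm:

```python
def centroid_predict(centroids, x):
    # Nearest-centroid rule: assign x to the class whose centroid is closest.
    return min(centroids, key=lambda c: abs(x - centroids[c]))

def self_train(labeled, unlabeled, rounds=5, threshold=1.0):
    # labeled: dict mapping a 1-D point to its class label.
    labeled = dict(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        # Refit: recompute class centroids from the current labelled set.
        sums, counts = {}, {}
        for x, y in labeled.items():
            sums[y] = sums.get(y, 0.0) + x
            counts[y] = counts.get(y, 0) + 1
        centroids = {y: sums[y] / counts[y] for y in sums}
        # Pseudo-label only points within `threshold` of a centroid.
        confident = [x for x in pool
                     if abs(x - centroids[centroid_predict(centroids, x)]) < threshold]
        if not confident:
            break
        for x in confident:
            labeled[x] = centroid_predict(centroids, x)
            pool.remove(x)
    return labeled

result = self_train({0.0: "a", 10.0: "b"}, [0.5, 1.2, 8.8, 9.5])
print(result)
```

Note how 1.2 and 8.8 are too far from the initial centroids to be labelled in the first round, but become confident once the first round's pseudo-labels pull the centroids toward them — the leverage that unlabeled data provides.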


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

01 Jan 2002

9,314 citations

Patent
17 Jun 2005
TL;DR: In this paper, the authors present a data processing system having a business object model reflecting the data used during a business transaction; consistent interfaces generated from this model are suitable for use across industries, across businesses, and across different departments within a business.
Abstract: Methods and systems consistent with the present invention provide a data processing system having a business object model reflecting the data used during a business transaction. Consistent interfaces are generated from the business object model. These interfaces are suitable for use across industries, across businesses, and across different departments within a business during a business transaction.

1,431 citations