Author

Lakhmi C. Jain

Bio: Lakhmi C. Jain is an academic researcher from the University of Technology, Sydney. The author has contributed to research on topics including artificial neural networks and intelligent decision support systems, has an h-index of 41, and has co-authored 419 publications receiving 10,015 citations. Previous affiliations of Lakhmi C. Jain include the University of South Australia and the University of Canberra.


Papers
Journal ArticleDOI
TL;DR: A novel multicore beamformer particle filter (multicore BPF) estimates the spatial locations of EEG brain sources and their corresponding waveforms, and halves the dimensionality of the problem compared with a plain particle-filter solution, thus alleviating the curse of dimensionality.
Abstract: Electroencephalography (EEG)-based brain computer interface (BCI) is the most studied noninvasive interface for building a direct communication pathway between the brain and an external device. However, correlated noise in EEG measurements still constitutes a significant challenge. Alternatively, building BCIs based on filtered brain activity source signals, instead of using their surface projections obtained from the noisy EEG signals, is a promising and not well-explored direction. In this context, finding the locations and waveforms of inner brain sources represents a crucial task for advancing source-based noninvasive BCI technologies. In this paper, we propose a novel multicore beamformer particle filter (multicore BPF) to estimate the EEG brain source spatial locations and their corresponding waveforms. In contrast to conventional (single-core) beamforming spatial filters, the developed multicore BPF explicitly considers temporal correlation among the estimated brain sources by suppressing activation from regions with interfering coherent sources. The hybrid multicore BPF brings together the advantages of both deterministic and Bayesian inverse-problem algorithms in order to improve the estimation accuracy. It solves the brain activity localization problem without prior information about approximate areas of source locations. Moreover, the multicore BPF halves the dimensionality of the problem compared with the plain particle-filter solution, thus alleviating the curse of dimensionality. The results, based on generated and real EEG data, show that the proposed framework correctly recovers the dominant sources of brain activity.
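To make the Bayesian half of this approach concrete, below is a minimal bootstrap particle filter that estimates a single 2-D source location from noisy sensor readings. It is a sketch of the general particle-filtering idea only, not the authors' multicore BPF: the toy forward model, sensor layout, and noise levels are all invented for illustration, and the beamformer cores and EEG lead-field model are omitted.

```python
# A minimal bootstrap particle filter: estimate a hidden 2-D source location
# from noisy sensor readings. Illustrative only -- the forward model and all
# parameters below are invented, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def forward(src, sensors):
    """Toy forward model: measured amplitude decays with distance to the source."""
    d = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / (1.0 + d**2)

sensors = rng.uniform(-1, 1, size=(16, 2))   # hypothetical 16-sensor layout
true_src = np.array([0.3, -0.2])             # hidden source to be recovered

n = 2000
particles = rng.uniform(-1, 1, size=(n, 2))  # prior: uniform over the domain

for _ in range(30):                          # assimilate 30 measurement frames
    z = forward(true_src, sensors) + 0.05 * rng.standard_normal(len(sensors))
    # Weight each particle by the Gaussian likelihood of its predicted signals.
    pred = np.array([forward(p, sensors) for p in particles])
    err = np.sum((pred - z) ** 2, axis=1)
    w = np.exp(-(err - err.min()) / (2 * 0.05**2))
    w /= w.sum()
    # Resample in proportion to weight, then jitter to avoid sample depletion.
    particles = particles[rng.choice(n, size=n, p=w)]
    particles += 0.01 * rng.standard_normal((n, 2))

print("estimate:", particles.mean(axis=0), "truth:", true_src)
```

Resampling plus a small jitter step is the standard way to keep the particle cloud from collapsing onto a few high-weight samples.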

14 citations

Proceedings Article
01 Jan 2003
TL;DR: It is demonstrated that the proposed watermarking scheme provides better quality in watermarked images, stronger robustness under some common attacks, faster encoding time, and effective methods for partitioning codebooks.
Abstract: A novel watermarking scheme based on vector quantisation (VQ) for digital still images is presented. This scheme begins with the procedure of partitioning the original codebook into two sub-codebooks. To achieve this, two strategies are proposed. The first one requires no complex algorithm and gives the users full freedom in partitioning. The second one is a genetic codebook partition procedure, which has the ability to improve the performance of the proposed watermarking scheme. After that, the codebook partition information serves as a secret key and is used in the proposed watermarking scheme. In the embedding procedure, a sub-codebook is chosen according to the watermark bit to be embedded. The traditional VQ nearest-codeword search is then performed to obtain the nearest codeword for the input vector. In the extracting procedure, the traditional VQ table-lookup procedure is executed. With the same secret key, the hidden watermark bit can be determined by examining which sub-codebook the corresponding codeword belongs to. It is demonstrated that the proposed watermarking scheme provides better quality in watermarked images, stronger robustness under some common attacks, faster encoding time, and effective methods for partitioning codebooks.
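The embed/extract loop described above is compact enough to sketch end to end. The snippet below assumes a trained codebook is already available and uses a random keyed permutation as the partition; block extraction from an actual image, the genetic partition procedure, and any robustness machinery are omitted.

```python
# A minimal sketch of the VQ watermark embedding/extraction idea, assuming a
# trained codebook. The random partition keyed by a seed stands in for the
# paper's secret-key partition strategies.
import numpy as np

rng = np.random.default_rng(42)
K, DIM = 64, 16                        # 64 codewords for 4x4 pixel blocks
codebook = rng.uniform(0, 255, size=(K, DIM))

# Secret key = the codebook partition: which half each codeword belongs to.
key = rng.permutation(K)
sub = {0: key[:K // 2], 1: key[K // 2:]}

def nearest(block, indices):
    """Nearest-codeword search restricted to the given codeword indices."""
    d = np.linalg.norm(codebook[indices] - block, axis=1)
    return indices[np.argmin(d)]

def embed(blocks, bits):
    """Quantise each block using the sub-codebook selected by its watermark bit."""
    return np.array([codebook[nearest(b, sub[bit])]
                     for b, bit in zip(blocks, bits)])

def extract(blocks):
    """Standard full-codebook lookup, then read the bit off the partition."""
    full = np.arange(K)
    return [0 if nearest(b, full) in sub[0] else 1 for b in blocks]

blocks = rng.uniform(0, 255, size=(8, DIM))   # eight toy image blocks
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(blocks, bits)) == bits
```

Because embedding replaces each block with an actual codeword from the keyed sub-codebook, a standard full-codebook lookup at extraction time lands exactly on that codeword, and its partition membership reveals the bit.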

14 citations

Journal ArticleDOI
TL;DR: Due to the modifications in ART2, this updated alternative architecture has improved real-time landmine detection capabilities, although the registration of all bands is more critical to the accuracy of the results in this case.
Abstract: The self-organizing network ART2 is extended to provide a fuzzy output value, which indicates the degree of familiarity of a new analog input pattern to previously stored patterns in the long-term memory of the network. The outputs of the multilayer perceptron and this modified ART2 provide an analog value to a fuzzy rule-based fusion technique, which also uses a processed polarization-resolved image as its third input. In real-time situations, these two classifier outputs indicate the likelihood of a surface landmine target when presented with a number of multispectral and textural bands. Due to the modifications in ART2, this updated alternative architecture has improved real-time landmine detection capabilities, although the registration of all bands is more critical to the accuracy of the results in this case. In preliminary tests, the real-time fuzzy rule-based system detected two of the three landmines and the landmine surrogate, with two false alarms. Advanced tests on 30 images using the fuzzy rule-based system further confirmed the distinct advantages of fusion and improved detection rates.
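As a rough illustration of how three analog scores can be fused by fuzzy rules, here is a minimal Mamdani-style sketch. The membership functions and the two rules are invented for illustration; the paper's actual rule base, and the trained MLP and ART2 models that would supply the first two inputs, are not reproduced.

```python
# A minimal fuzzy rule-based fusion of three analog classifier scores.
# Memberships and rules are invented examples, not the paper's rule base.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuse(mlp_score, art2_familiarity, polar_score):
    # Fuzzify each analog input into a "high" membership degree.
    hi = {name: tri(v, 0.3, 1.0, 1.7)
          for name, v in [("mlp", mlp_score),
                          ("art2", art2_familiarity),
                          ("pol", polar_score)]}
    # Rules (min = AND, max = OR), Mamdani style:
    #   R1: IF mlp high AND art2 high           THEN target
    #   R2: IF pol high AND (mlp high OR art2 high) THEN target
    r1 = min(hi["mlp"], hi["art2"])
    r2 = min(hi["pol"], max(hi["mlp"], hi["art2"]))
    return max(r1, r2)    # aggregated likelihood of a landmine target

print(fuse(0.9, 0.8, 0.4))   # strong agreement between the two classifiers
print(fuse(0.2, 0.3, 0.9))   # the polarization cue alone is not enough
```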

14 citations

BookDOI
01 Jan 2013
TL;DR: Proceedings of the 4th International Conference in Sustainability in Energy and Buildings (SEB'12).
Abstract: Sustainability in Energy and Buildings: Proceedings of the 4th International Conference in Sustainability in Energy and Buildings (SEB'12).

14 citations

Book
01 Jan 2013
TL;DR: This book discusses a processing framework for ranking and skyline queries, as well as preference-based query personalization and progressive and approximate join algorithms on data streams.
Abstract: From the contents: Advanced Query Processing: An Introduction.- On Skyline Queries and How to Choose from Pareto Sets.- Processing Framework for Ranking and Skyline Queries.- Preference-Based Query Personalization.- Approximate Queries for Spatial Data.- Approximate XML Query Processing.- Progressive and Approximate Join Algorithms on Data Streams.
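For readers unfamiliar with the skyline/Pareto-set topic named in these contents, the sketch below computes a skyline with the naive O(n^2) dominance check. It illustrates the concept only; it is not one of the optimized processing frameworks the book covers, and the hotel data is invented.

```python
# Naive skyline (Pareto set) computation. Convention: lower is better
# in every dimension.

def dominates(p, q):
    """p dominates q if p is <= q everywhere and strictly < q somewhere."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hotels as (price, distance_to_beach): cheap AND close is better.
hotels = [(50, 8), (80, 2), (60, 5), (90, 1), (70, 6), (55, 7)]
print(skyline(hotels))   # -> [(50, 8), (80, 2), (60, 5), (90, 1), (55, 7)]
```

Only (70, 6) is dropped here: it is beaten on both criteria by (60, 5), while every surviving hotel is best in at least one trade-off.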

14 citations


Cited by
More filters
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
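The mail-filtering example in the last category is easy to make concrete: instead of being given rules, the program infers them from messages the user has kept or rejected. Below is a from-scratch naive Bayes sketch over word counts; the toy messages and the uniform class priors are assumptions for illustration only.

```python
# A toy learned mail filter: naive Bayes over words, trained on messages the
# user has already kept or rejected. Data and priors are invented.
from collections import Counter
import math

kept = ["meeting moved to noon", "draft of the paper attached"]
rejected = ["win a free prize now", "free offer click now"]

def train(docs):
    words = Counter(w for d in docs for w in d.split())
    return words, sum(words.values())

def log_prob(msg, words, total, vocab):
    # Laplace-smoothed log P(words | class); uniform class priors assumed.
    return sum(math.log((words[w] + 1) / (total + len(vocab)))
               for w in msg.split())

vocab = {w for d in kept + rejected for w in d.split()}
k_words, k_total = train(kept)
r_words, r_total = train(rejected)

msg = "free prize offer"
keep_score = log_prob(msg, k_words, k_total, vocab)
rej_score = log_prob(msg, r_words, r_total, vocab)
print("reject" if rej_score > keep_score else "keep")   # -> reject
```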

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.
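As a small taste of the "Linear Models for Regression" chapter listed above, here is a regularized least-squares fit of a polynomial basis model, in the spirit of the book's sinusoid curve-fitting running example. The data, polynomial degree, and regularization constant below are invented for illustration and are not taken from the book.

```python
# Ridge-regularized polynomial regression on noisy samples of sin(2*pi*x).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)  # noisy targets

M, lam = 6, 1e-3                             # polynomial degree, regularizer
Phi = np.vander(x, M + 1, increasing=True)   # design matrix [1, x, ..., x^M]
# Regularized least squares: w = (lam*I + Phi^T Phi)^(-1) Phi^T t
w = np.linalg.solve(lam * np.eye(M + 1) + Phi.T @ Phi, Phi.T @ t)

x_new = 0.25
print(np.vander([x_new], M + 1, increasing=True) @ w)  # ~ sin(pi/2) = 1
```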

10,141 citations

Book
01 Jan 1995
TL;DR: In this article, Nonaka and Takeuchi argue that Japanese firms are successful precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies, and they reveal how Japanese companies translate tacit to explicit knowledge.
Abstract: How has Japan become a major economic power, a world leader in the automotive and electronics industries? What is the secret of their success? The consensus has been that, though the Japanese are not particularly innovative, they are exceptionally skilful at imitation, at improving products that already exist. But now two leading Japanese business experts, Ikujiro Nonaka and Hirotaka Takeuchi, turn this conventional wisdom on its head: Japanese firms are successful, they contend, precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies. Examining case studies drawn from such firms as Honda, Canon, Matsushita, NEC, 3M, GE, and the U.S. Marines, this book reveals how Japanese companies translate tacit to explicit knowledge and use it to produce new processes, products, and services.

7,448 citations

01 Jan 2009

7,241 citations