Author

Lakhmi C. Jain

Bio: Lakhmi C. Jain is an academic researcher from the University of Technology, Sydney. The author has contributed to research in topics: Artificial neural network & Intelligent decision support system. The author has an h-index of 41 and has co-authored 419 publications receiving 10,015 citations. Previous affiliations of Lakhmi C. Jain include the University of South Australia & the University of Canberra.


Papers
Book ChapterDOI
26 Mar 2008
TL;DR: An original approach to fuzzification of grounding sets is introduced, and two levels of fuzzification are applied to make the original mechanism of grounding more context-sensitive.
Abstract: An original approach to fuzzification of grounding sets is introduced. This model is considered for the case of grounding knowledge, belief, and possibility extensions of conjunctions. Artificial cognitive agents that carry out the grounding are assumed to observe their external worlds and store the results of observations in so-called base profiles. Two levels of fuzzification are introduced to the original model of grounding: the first deals with fuzzification of atom observations in particular base profiles, and the second deals with fuzzification of grounding sets by introducing a fuzzy membership of base profiles to grounding sets. Both levels of fuzzification are applied to make the original mechanism of grounding more context-sensitive.
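The two-level idea described in this abstract can be sketched roughly as follows. The class names, the min-style membership for conjunctions, and the averaging aggregation are illustrative assumptions, not the chapter's actual formalism.

```python
# Hedged sketch of two-level fuzzification of grounding sets.
# All names and operators here are assumptions made for illustration.

from dataclasses import dataclass


@dataclass
class BaseProfile:
    """One stored observation: degree to which each atom was observed."""
    atom_degrees: dict[str, float]   # level 1: fuzzified atom observations in [0, 1]


def grounding_set_membership(profile: BaseProfile, atoms: list[str]) -> float:
    """Level 2: fuzzy membership of a base profile in the grounding set of a
    conjunction of atoms (min-style t-norm, an assumption)."""
    return min(profile.atom_degrees.get(a, 0.0) for a in atoms)


def grounding_strength(profiles: list[BaseProfile], atoms: list[str]) -> float:
    """Aggregate strength with which the conjunction is grounded in the agent's
    stored experience (simple averaging, an assumption)."""
    memberships = [grounding_set_membership(p, atoms) for p in profiles]
    return sum(memberships) / len(memberships) if memberships else 0.0


# Example: two noisy observations of atoms p and q
profiles = [BaseProfile({"p": 0.9, "q": 0.7}), BaseProfile({"p": 0.4, "q": 0.8})]
print(grounding_strength(profiles, ["p", "q"]))   # 0.55
```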

5 citations

Book ChapterDOI
01 Jan 2010
TL;DR: In this book, aspects of low-level data processing of multimedia services in intelligent environments are covered, including the storage, recognition, classification, transmission, retrieval, and securing of information.
Abstract: Multimedia services is the term chosen to describe services that rely on the coordinated and secure storage, processing, transmission, and retrieval of information which exists in various forms [1]. The term refers to several levels of data processing. It includes application areas such as digital libraries, e-learning, e-government, e-commerce, e-entertainment, e-health, and e-legal services. In our earlier book [2], we covered aspects of low-level data processing of multimedia services in intelligent environments, including the storage, recognition, classification, transmission, retrieval, and securing of information. Four additional chapters in [2] considered systems developed to support intermediate-level multimedia processing services, including noise and hearing monitoring and measurement, augmented reality, and automated lecture rooms. In addition, rights management and licensing were covered. The final chapter in [2] was devoted to a high-level intelligent recommender service in scientific digital libraries.

5 citations

Journal ArticleDOI
TL;DR: This special issue presents thirteen articles, comprising extended papers from the 12th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES2008) together with other submissions, that highlight innovative knowledge-based intelligent systems and their applications to solving problems in different domains.
Abstract: Intelligent techniques derived from knowledge-based engineering and related computing paradigms have provided useful concepts and tools for undertaking a variety of real-world problems. These systems mimic the analytical and learning capabilities of the human brain. They harness the benefits of knowledge and intelligence to form an integrated framework for problem solving. In this special issue, a total of thirteen articles are presented, comprising extended papers from the 12th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES2008) as well as other submissions, which highlight innovative knowledge-based intelligent systems and their applications to solving problems in different domains. A summary of each article is as follows. With the development of advanced traveler information systems, it is important to have a prompt and accurate travel time prediction system for road networks. In the first article, two travel time prediction algorithms using naive Bayesian classification and rule-based classification are proposed. Based on a historical traffic database, the algorithms are able to yield high accuracy in travel time prediction. The algorithms are also useful for road networks with arbitrary travel routes. The results also reveal that naive Bayesian classification produces a lower mean absolute relative error than rule-based classification. For large-scale complex process plants that involve safety-critical systems, real-time diagnosis is an important aspect. In the second article, an ontology for
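As a rough illustration of the first article's approach (not the authors' actual system), the sketch below discretizes travel times into classes, fits a naive Bayes classifier on a synthetic historical traffic table, and scores it with the mean absolute relative error. The features, bins, and class centres are assumptions.

```python
# Hedged sketch: naive Bayes travel-time class prediction scored with MARE.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic "historical traffic database": departure hour, day of week,
# and the observed travel time in minutes for one route (assumed model).
hour = rng.integers(0, 24, size=500)
dow = rng.integers(0, 7, size=500)
travel_time = 20 + 15 * ((hour >= 7) & (hour <= 9)) + rng.normal(0, 3, size=500)

X = np.column_stack([hour, dow])
bins = np.array([0, 25, 35, np.inf])              # assumed travel-time classes
y = np.digitize(travel_time, bins)                # class labels 1..3
class_center = {1: 22.0, 2: 30.0, 3: 40.0}        # assumed representative times

model = GaussianNB().fit(X[:400], y[:400])
pred = np.array([class_center[c] for c in model.predict(X[400:])])
true = travel_time[400:]

mare = np.mean(np.abs(pred - true) / true)        # mean absolute relative error
print(f"MARE: {mare:.3f}")
```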

5 citations

Proceedings ArticleDOI
20 May 2014
TL;DR: An intelligent MAS is developed that emulates the real-time operation of a distributed energy system, and an artificially intelligent learning algorithm is implemented that can support the autonomous behavior of the agents without any human intervention.
Abstract: Researchers have been making efforts to reduce dependency on distributed generators, due to high fuel costs and problems associated with the depletion of non-renewable energy sources. Hence, it becomes essential to formulate techniques that can utilize alternative energy sources and are capable of meeting power demands in a cost-effective manner. The concept of Multi-Agent Systems (MAS) is novel in making intelligent decisions in place of manual operations, thereby ensuring greater operational efficiency. MAS offer a range of benefits such as flexibility, autonomy, lower maintenance, and reduced cost. The primary objective of this paper is to develop an intelligent MAS that emulates the real-time operation of a distributed energy system. It also aims at implementing an artificially intelligent learning algorithm that can support the autonomous behavior of the agents without any human intervention. The MAS is designed to accommodate decision-making modules as well as learning mechanisms based on evolutionary computation. These techniques increase the intelligence of the MAS.
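A minimal sketch of the kind of evolutionary-computation-based learning the paper describes, under an assumed demand model and cost figures; it is not the authors' MAS, just a single source-allocation agent whose decision rule is tuned by a (1+1) evolution strategy.

```python
# Hedged sketch: one agent learns how much demand to serve from renewables.
import random

random.seed(1)

FUEL_COST = 0.30        # $/kWh from the backup generator (assumed)
RENEWABLE_CAP = 40.0    # kW of renewable capacity available (assumed)


def cost(renewable_share: float, demand: float) -> float:
    """Operating cost when the agent tries to meet `demand` with a given
    renewable share; any shortfall is bought from the generator."""
    from_renewable = min(renewable_share * demand, RENEWABLE_CAP)
    from_generator = demand - from_renewable
    return from_generator * FUEL_COST


def evolve_share(generations: int = 200) -> float:
    """(1+1) evolution strategy: mutate the share, keep the mutant if its
    average cost over sampled demands does not get worse."""
    share = 0.5
    best = sum(cost(share, random.uniform(20, 80)) for _ in range(50)) / 50
    for _ in range(generations):
        candidate = min(max(share + random.gauss(0, 0.1), 0.0), 1.0)
        score = sum(cost(candidate, random.uniform(20, 80)) for _ in range(50)) / 50
        if score <= best:
            share, best = candidate, score
    return share


print(f"learned renewable share: {evolve_share():.2f}")   # drifts toward 1.0
```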

5 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
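The mail-filtering example in the fourth category above can be made concrete with a small sketch; the tiny corpus and the choice of a naive Bayes text classifier are assumptions, not part of the article.

```python
# Hedged sketch: a per-user mail filter learned from messages the user
# has already kept (ham) or rejected (spam).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "meeting moved to 3pm, agenda attached",
    "quarterly report draft for your review",
    "win a free prize, click now",
    "cheap loans, limited offer, act today",
]
labels = ["ham", "ham", "spam", "spam"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

# The filter can be re-fit as the user keeps accepting or rejecting new mail,
# so the rules stay up to date without a programmer rewriting them.
print(filter_model.predict(["free prize offer, click today"]))   # ['spam']
```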

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and the combining of models in the context of machine learning.
Abstract: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Book
01 Jan 1995
TL;DR: In this book, Nonaka and Takeuchi argue that Japanese firms are successful precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies, and they reveal how Japanese companies translate tacit knowledge into explicit knowledge.
Abstract: How has Japan become a major economic power, a world leader in the automotive and electronics industries? What is the secret of their success? The consensus has been that, though the Japanese are not particularly innovative, they are exceptionally skilful at imitation, at improving products that already exist. But now two leading Japanese business experts, Ikujiro Nonaka and Hirotaka Takeuchi, turn this conventional wisdom on its head: Japanese firms are successful, they contend, precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies. Examining case studies drawn from such firms as Honda, Canon, Matsushita, NEC, 3M, GE, and the U.S. Marines, this book reveals how Japanese companies translate tacit knowledge into explicit knowledge and use it to produce new processes, products, and services.

7,448 citations

01 Jan 2009

7,241 citations