Author

Lakhmi C. Jain

Bio: Lakhmi C. Jain is an academic researcher from the University of Technology, Sydney. His research focuses on artificial neural networks and intelligent decision support systems. He has an h-index of 41 and has co-authored 419 publications receiving 10,015 citations. His previous affiliations include the University of South Australia and the University of Canberra.


Papers
Book Chapter
01 Jan 2009
TL;DR: The results demonstrate that the proposed MAC system improves the performance of both individual agents and agent teams, and compares favorably with other methods published in the literature.
Abstract: In this paper, we propose a Multi-Agent Classifier (MAC) system based on the Trust-Negotiation-Communication (TNC) model. A novel trust measurement method, based on recognition and rejection rates, is proposed. Two agent teams are formed, each consisting of three neural network (NN) agents: the first is the Fuzzy Min-Max (FMM) agent team and the second is the Fuzzy ARTMAP (FAM) agent team. An auctioning method is used for the negotiation phase. The effectiveness of the proposed model and of the trust-based bond is measured using two benchmark classification problems, with the bootstrap method applied to quantify classification accuracy rates statistically. The results demonstrate that the proposed MAC system improves the performance of both individual agents and agent teams, and compares favorably with other methods published in the literature.
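The bootstrap procedure mentioned in the abstract is a standard statistical technique; a minimal sketch of how it can quantify classification accuracy is shown below. The helper name, the resample count, and the toy outcome data are illustrative assumptions, not details from the paper.

```python
import random

def bootstrap_accuracy(correct, n_resamples=1000, seed=0):
    """Estimate a 95% percentile interval for classification accuracy
    by resampling per-sample correctness indicators with replacement."""
    rng = random.Random(seed)
    n = len(correct)
    accs = []
    for _ in range(n_resamples):
        sample = [correct[rng.randrange(n)] for _ in range(n)]
        accs.append(sum(sample) / n)
    accs.sort()
    lo = accs[int(0.025 * n_resamples)]
    hi = accs[int(0.975 * n_resamples)]
    return sum(accs) / n_resamples, (lo, hi)

# Hypothetical test outcomes: 1 = correctly classified, 0 = misclassified
outcomes = [1] * 17 + [0] * 3
mean_acc, (low, high) = bootstrap_accuracy(outcomes)
```

The interval width reflects the small test set: with only 20 samples, the resampled accuracies spread widely around the point estimate of 0.85.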

5 citations

Book Chapter
01 Jan 2019
TL;DR: This chapter proposes a novel methodology for studying the structure and evolution of enriched co-authorship networks, illustrated on a network encompassing the researchers employed at one large faculty of sciences.
Abstract: The nodes of an enriched co-authorship network are annotated with various types of nominal and numeric attributes that provide additional information about researchers present in the network. In this chapter we propose a novel methodology to study the structure and evolution of enriched co-authorship networks. The proposed methodology is illustrated on an enriched co-authorship network encompassing researchers employed at one large faculty of sciences.
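An enriched co-authorship network, as described above, pairs a weighted graph with per-node attribute records. A minimal stdlib sketch of that data structure is given below; the author names, departments, and h-index values are purely hypothetical toy data, not from the chapter.

```python
from itertools import combinations

def build_coauthor_network(papers, attributes):
    """Build an enriched co-authorship network: edge weights count joint
    papers, and each node carries a dict of nominal/numeric attributes."""
    edges = {}
    for authors in papers:
        # Every unordered author pair on a paper gains one unit of weight.
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] = edges.get((a, b), 0) + 1
    return edges, attributes

# Hypothetical paper author lists and researcher attributes
papers = [["ana", "boris"], ["ana", "boris", "ceca"], ["ceca", "dana"]]
attrs = {"ana": {"dept": "math", "h": 12}, "boris": {"dept": "cs", "h": 7},
         "ceca": {"dept": "cs", "h": 4}, "dana": {"dept": "bio", "h": 9}}
edges, attrs = build_coauthor_network(papers, attrs)
```

Tracking the edge map per year, rather than once, would give the evolving snapshots the chapter's methodology analyzes.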

5 citations

Book Chapter
30 Nov 2004
TL;DR: Experimental results show that the proposed vector quantisation (VQ) based watermarking scheme for hiding a gray watermark possesses advantages over other related methods in the literature.
Abstract: A vector quantisation (VQ) based watermarking scheme for hiding a gray watermark is presented. It expands the watermark size and employs a VQ index assignment procedure with a genetic algorithm, called genetic index assignment (GIA), for watermarking. The gray watermark is coded by VQ, and the obtained indices are translated into a binary bitstream of much smaller size. We partition the codebook into two sub-codebooks and use one or the other, based on the value of the bit to be embedded. GIA is then employed to improve the imperceptibility of the watermarked image. Experimental results show that the proposed method possesses advantages over other related methods in the literature.
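The codebook-partition idea in the abstract can be sketched as follows: each image block is quantised with the sub-codebook selected by the bit to embed, and extraction checks which sub-codebook the block's nearest codeword belongs to. This is only a toy illustration of the partition step under assumed random codebooks; the paper's GIA optimisation and the VQ coding of the watermark itself are omitted.

```python
import random

def dist2(u, v):
    """Squared Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def nearest(block, codebook):
    """Return the codeword of `codebook` closest to `block`."""
    return min(codebook, key=lambda c: dist2(block, c))

def embed(blocks, bits, cb0, cb1):
    """Embed one bit per block: quantise with cb1 if the bit is 1, else cb0."""
    return [nearest(b, cb1 if bit else cb0) for b, bit in zip(blocks, bits)]

def extract(marked, cb0, cb1):
    """Recover bits: 1 if the block is closer to cb1 than to cb0."""
    return [int(min(dist2(b, c) for c in cb1) < min(dist2(b, c) for c in cb0))
            for b in marked]

# Hypothetical random codebooks and image blocks (4-dimensional vectors)
rng = random.Random(1)
cb0 = [[rng.random() for _ in range(4)] for _ in range(8)]
cb1 = [[rng.random() for _ in range(4)] for _ in range(8)]
blocks = [[rng.random() for _ in range(4)] for _ in range(6)]
bits = [1, 0, 1, 1, 0, 0]
marked = embed(blocks, bits, cb0, cb1)
```

Because an embedded block coincides exactly with a codeword of the chosen sub-codebook, extraction recovers the bitstream without error as long as the sub-codebooks share no codewords.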

5 citations

Proceedings Article
28 Nov 2006
TL;DR: A vision system for autonomously guiding a robot along a known route using a single CCD camera; its memory feedback modulation (MFM) mechanism provides a means for knowledge from memory to interact with and enhance the earlier stages of the system.
Abstract: This paper presents a vision system for autonomously guiding a robot along a known route using a single CCD camera. The prominent feature of the system is the real-time recognition of shape-based visual landmarks in cluttered backgrounds, using a memory feedback modulation (MFM) mechanism, which provides a means for knowledge from memory to interact with and enhance the earlier stages in the system. Its feasibility in autonomous robot navigation is demonstrated in both indoor and outdoor experiments using a vision-based navigating vehicle.

5 citations

Book Chapter
03 Sep 2008
TL;DR: This paper presents a role-based BDI framework to facilitate cooperation and coordination; the framework extends the commercial agent software development environment known as JACK Teams.
Abstract: Multi-agent teaming is a key research field of multi-agent systems. The BDI (Belief, Desire, Intention) architecture has been widely used to solve complex problems, and the theory of joint behavior has been applied to team-level optimisation problems. Due to the inherent complexity of real-time and dynamic environments, it is often extremely difficult to formally specify the joint behavior of a team a priori. This paper presents a role-based BDI framework to facilitate cooperation and coordination. The framework extends the commercial agent software development environment known as JACK Teams. A real-time 2D simulation environment known as SoccerBots has been used to investigate the difficulties of multi-agent teaming. A layered architecture groups the agents' competitive and cooperative behaviors, which can be learned through experience using reinforcement learning techniques.
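The abstract states that behaviors are learned through reinforcement learning. As a generic illustration of that learning mechanism, the sketch below runs tabular Q-learning on a toy corridor task; the environment, hyperparameters, and function names are invented for the example and have no connection to the paper's SoccerBots setup.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500, alpha=0.5,
               gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: action 1 moves right toward
    the goal state (reward 1), action 0 moves left."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard one-step Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

After training, the greedy policy moves right in every non-terminal state, showing how behavior emerges from experience rather than from an a-priori specification.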

5 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
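The mail-filtering example above, where a system learns which messages a user rejects, can be sketched with a tiny bag-of-words naive Bayes classifier. This is one common way to realize such a learned filter, not the method the article prescribes; the training messages and function names are illustrative.

```python
from collections import Counter
import math

def train(messages, labels):
    """Count word frequencies per class (1 = rejected/spam, 0 = kept)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for msg, y in zip(messages, labels):
        counts[y].update(msg.lower().split())
    return counts, priors

def classify(msg, counts, priors):
    """Naive Bayes with add-one smoothing over the shared vocabulary."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        score = math.log(priors[y] / sum(priors.values()))
        for w in msg.lower().split():
            score += math.log((counts[y][w] + 1) / (total + len(vocab)))
        scores[y] = score
    return max(scores, key=scores.get)

# Hypothetical training data: messages the user rejected vs. kept
spam = ["win money now", "free money offer"]
ham = ["meeting at noon", "project status update"]
counts, priors = train(spam + ham, [1, 1, 0, 0])
```

Each time the user rejects or keeps a message, the counts can be updated in place, so the filtering rules stay current without any manual reprogramming.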

13,246 citations

Christopher M. Bishop1
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and methods for combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book
01 Jan 1995
TL;DR: In this article, Nonaka and Takeuchi argue that Japanese firms are successful precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies, and they reveal how Japanese companies translate tacit to explicit knowledge.
Abstract: How has Japan become a major economic power, a world leader in the automotive and electronics industries? What is the secret of their success? The consensus has been that, though the Japanese are not particularly innovative, they are exceptionally skilful at imitation, at improving products that already exist. But now two leading Japanese business experts, Ikujiro Nonaka and Hirotaka Takeuchi, turn this conventional wisdom on its head: Japanese firms are successful, they contend, precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies. Examining case studies drawn from such firms as Honda, Canon, Matsushita, NEC, 3M, GE, and the U.S. Marines, this book reveals how Japanese companies translate tacit to explicit knowledge and use it to produce new processes, products, and services.

7,448 citations

01 Jan 2009

7,241 citations