Author
Lakhmi C. Jain
Other affiliations: University of South Australia, University of Canberra, Ritsumeikan University
Bio: Lakhmi C. Jain is an academic researcher from the University of Technology, Sydney. The author has contributed to research on topics including artificial neural networks and intelligent decision support systems. The author has an h-index of 41, has co-authored 419 publications, and has received 10,015 citations. Previous affiliations of Lakhmi C. Jain include the University of South Australia and the University of Canberra.
Papers
01 Jan 2016
TL;DR: These volumes constitute the Proceedings of the 6th International Workshop on Soft Computing Applications (SOFA 2014), held on 24-26 July 2014 in Timisoara, Romania, and provide useful information to professors, researchers and graduate students in the area of soft computing techniques and applications, as they report new research work on challenging issues.
Abstract: These volumes constitute the Proceedings of the 6th International Workshop on Soft Computing Applications (SOFA 2014), held on 24-26 July 2014 in Timisoara, Romania. This edition was organized by the University of Belgrade, Serbia, in conjunction with the Romanian Society of Control Engineering and Technical Informatics (SRAIT) - Arad Section, the General Association of Engineers in Romania - Arad Section, the Institute of Computer Science, Iasi Branch of the Romanian Academy, and the IEEE Romania Section. The Soft Computing concept was introduced by Lotfi Zadeh in 1991 to highlight the emergence of computing methodologies in which the accent is on exploiting the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. Soft computing facilitates the combined use of fuzzy logic, neurocomputing, evolutionary computing and probabilistic computing, leading to the concept of hybrid intelligent systems. The combination of such intelligent systems tools and a large number of applications introduces a need for a synergy of scientific and technological disciplines in order to show the great potential of Soft Computing in all domains.
The conference papers included in these proceedings, published post-conference, were grouped into the following areas of research: Image, Text and Signal Processing; Intelligent Transportation; Modeling and Applications; Biomedical Applications; Neural Networks and Applications; Knowledge-Based Technologies for Web Applications, Cloud Computing, Security, Algorithms and Computer Networks; Knowledge-Based Technologies; Soft Computing Techniques for Time Series Analysis; Soft Computing and Fuzzy Logic in Biometrics; Fuzzy Applications Theory and Fuzzy Control; Business Process Management; and Methods and Applications in Electrical Engineering. The volumes provide useful information to professors, researchers and graduate students in the area of soft computing techniques and applications, as they report new research work on challenging issues.
5 citations
01 Jan 2007
TL;DR: This chapter presents an introduction to the use of evolutionary computing techniques, global optimization and search techniques inspired by biological evolution, in the domain of system design.
Abstract: In this chapter, an introduction to the use of evolutionary computing techniques, which are considered global optimization and search techniques inspired by biological evolution, in the domain of system design is presented. A variety of evolutionary computing techniques are first explained, and the motivations for using evolutionary computing techniques to tackle system design tasks are then discussed. In addition, a number of successful applications of evolutionary computing to system design tasks are described.
5 citations
TL;DR: The core architecture, which is believed to be required for Multi-Agent System (MAS) developers to achieve such flexibility, is highlighted.
Abstract: Heuristic computing has consolidated into two streams of research (personification software and smart products) [1]. Cognitive Science is one of these fields and is attracting research effort based on Multi-Agent Systems (MAS). This research requires the formation of a voluntary trust relationship in order for collaboration to occur, otherwise the imposed goal(s) may be aborted or fail completely [2,3]. An Agent Transportation Layer Adaption System (ATLAS) communications framework has been constructed to pass messages between separate agent systems. Discussion about confined frameworks has recently been extended to enable individual students associated with our Knowledge-Based and Intelligent Information and Engineering Systems (KES) Centre to fast track the development of their research concepts. A Plug 'n' Play concept based on a multi-agent blackboard architecture forms the basis of this research. This paper highlights the core architecture we believe is required for Multi-Agent System (MAS) developers to achieve such flexibility. Agent teams can provide the ability to adapt and dynamically organize. The model described concentrates on the blackboard design constructs to represent all functional blocks required to automate the processes needed to complete any decomposed goals. Discussion in this paper is limited to the formative work within the foundation layers of that framework.
5 citations
01 Oct 2005
4 citations
01 Aug 1999
TL;DR: This work demonstrates the effectiveness of fuzzy control for speech signal generation, designed to support specific speech variability as described by a "classical" knowledge base.
Abstract: Based on the first systematic approach to fuzzy speech production modeling, we demonstrate the effectiveness of fuzzy control for speech signal generation. The control is designed in the sense of supporting specific speech variability, as described by the "classical" knowledge base (and fuzzified by us) about the phenomenon of speech production.
4 citations
Cited by
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
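The mail-filtering scenario in the abstract above, where a system learns which messages a user rejects from labeled examples, can be sketched as a minimal naive Bayes text classifier. This is an illustrative assumption, not the method of any paper listed here; the `SpamFilter` class and the toy training messages are hypothetical.

```python
from collections import Counter
import math

class SpamFilter:
    """Toy naive Bayes filter that learns from a user's accept/reject decisions."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        # Record one labeled example (label is "spam" or "ham").
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            # Log prior plus log likelihoods with add-one smoothing.
            score = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / (n + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

f = SpamFilter()
f.train("win money now", "spam")
f.train("cheap money offer", "spam")
f.train("meeting agenda tomorrow", "ham")
f.train("project meeting notes", "ham")
print(f.predict("free money"))  # -> spam
```

As more messages are labeled, the word counts update and the learned rules shift automatically, which is the point the abstract makes about per-user customization.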
13,246 citations
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are given in this book, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.
10,141 citations
9,185 citations
01 Jan 1995
TL;DR: In this book, Nonaka and Takeuchi argue that Japanese firms are successful precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies, and they reveal how Japanese companies translate tacit knowledge into explicit knowledge.
Abstract: How has Japan become a major economic power, a world leader in the automotive and electronics industries? What is the secret of their success? The consensus has been that, though the Japanese are not particularly innovative, they are exceptionally skilful at imitation, at improving products that already exist. But now two leading Japanese business experts, Ikujiro Nonaka and Hirotaka Takeuchi, turn this conventional wisdom on its head: Japanese firms are successful, they contend, precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies. Examining case studies drawn from such firms as Honda, Canon, Matsushita, NEC, 3M, GE, and the U.S. Marines, this book reveals how Japanese companies translate tacit knowledge into explicit knowledge and use it to produce new processes, products, and services.
7,448 citations