Author

Lakhmi C. Jain

Bio: Lakhmi C. Jain is an academic researcher from the University of Technology, Sydney. His research topics include artificial neural networks and intelligent decision support systems. He has an h-index of 41 and has co-authored 419 publications receiving 10,015 citations. Previous affiliations include the University of South Australia and the University of Canberra.


Papers
Book ChapterDOI
01 Jan 2002
TL;DR: This paper presents fuzzy logic system tools combined with a conventional technique for evaluating the best course of action when directing military vehicles from a designated start area to a final destination in the quickest way.

Abstract: This research paper presents the implementation of fuzzy logic system tools combined with a conventional technique for evaluating the best course of action when directing military vehicles from a designated start area to the final destination in the quickest way. The difficulty levels between road-network nodes are determined by weather and terrain factors, which are presented on GIS maps. The accuracy of the combined technique's decisions depends on the expert knowledge and information supplied at the time of evaluation. The GIS model, the fuzzy logic system, the parameters considered for weather and terrain conditions, vehicle selection, and the conventional methodology are discussed. The area between start and finish is assumed to lie within a 25 square kilometre map area.
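The combination described in the abstract, fuzzy weighting of weather/terrain difficulty feeding a conventional shortest-path search, can be sketched as below. The membership function, the max-combination rule, and all node names and numbers are illustrative assumptions, not the paper's actual model:

```python
import heapq

def severity(x, lo=0.0, hi=10.0):
    """Map a raw weather/terrain reading in [lo, hi] to a fuzzy
    difficulty degree in [0, 1] (simple ramp membership)."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def edge_cost(distance_km, weather, terrain):
    """Scale the base distance by the worse of the two fuzzy degrees
    (a max-combination, one common fuzzy aggregation choice)."""
    difficulty = max(severity(weather), severity(terrain))
    return distance_km * (1.0 + difficulty)

def dijkstra(graph, start, goal):
    """Conventional shortest-path search.
    graph: {node: [(neighbour, cost), ...]}. Returns minimal total cost."""
    queue, seen = [(0.0, start)], set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt))
    return float("inf")

# Toy 3-node road network: the direct A -> C edge crosses bad terrain,
# so the detour through B ends up cheaper.
g = {
    "A": [("B", edge_cost(5, weather=2, terrain=1)),
          ("C", edge_cost(7, weather=8, terrain=9))],
    "B": [("C", edge_cost(4, weather=2, terrain=2))],
}
print(dijkstra(g, "A", "C"))
```

The fuzzy stage only reshapes the edge weights; routing itself stays entirely conventional, which matches the "combined technique" framing in the abstract.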

2 citations

BookDOI
24 Jun 2013
TL;DR: This book presents 37 papers from the 6th International Conference on Intelligent Interactive Multimedia Systems and Services (KES-IIMSS 2013), held in Sesimbra, Portugal, in June 2013.

Abstract: At a time when computers are more widespread than ever, intelligent interactive systems have become a necessity. The term multimedia systems refers to the coordinated storage, processing, transmission and retrieval of multiple forms of information, such as audio, image, video, animation, graphics and text. The growth of multimedia services has been exponential, as technological progress keeps up with consumers' need for content. A one-size-fits-all solution is no longer appropriate for the wide range of users with various backgrounds and needs, so one important goal of many intelligent interactive systems is dynamic personalization and adaptivity to users. This book presents 37 papers summarizing the work and new research results presented at the 6th International Conference on Intelligent Interactive Multimedia Systems and Services (KES-IIMSS 2013), held in Sesimbra, Portugal, in June 2013. The conference series focuses on research in the fields of intelligent interactive multimedia systems and services and provides an internationally respected forum for scientific research in related technologies and applications.

2 citations

Book ChapterDOI
21 Nov 2002
TL;DR: Results from spread spectrum fingerprinting experiments show that the proposed attack can impede fingerprint detection using as few as three fingerprinted images without introducing noticeable visual degradation, making it more powerful than attacks reported in the literature.

Abstract: Digital watermarking is a technology proposed to help address the concern of copyright protection for digital content. To facilitate tracing of copyright violators, different watermarks carrying information about the transaction or content recipient can be embedded into multimedia content before distribution. Such a "personalised" watermark is called a "fingerprint". A powerful attack against digital fingerprinting is the collusion attack, in which different fingerprinted copies of the same host data are jointly processed to remove the fingerprints or hinder their detection. This paper first studies a number of existing collusion attack schemes against image fingerprinting. A new collusion attack scheme is then proposed and evaluated, both analytically and empirically. Attack performance is assessed in terms of fingerprint detectability and visual quality degradation after the attack. Results from spread spectrum fingerprinting experiments show that the proposed attack can impede fingerprint detection using as few as three fingerprinted images without introducing noticeable visual degradation; it is therefore more powerful than attacks reported in the literature. It is also found that increasing the fingerprint embedding strength and spreading factor does not help resist such malicious attacks.
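For context, the baseline collusion attack the paper improves upon is the classic *averaging* attack on additive spread-spectrum fingerprints: averaging k fingerprinted copies shrinks each fingerprint's correlation by roughly 1/k. The sketch below illustrates only that baseline with an assumed simple correlation detector; the embedding strength, signal model, and parameter names are illustrative:

```python
import random

random.seed(0)
N = 4096      # signal length
ALPHA = 1.0   # embedding strength

# Host signal plus one random +/-1 spread-spectrum fingerprint per recipient.
host = [random.gauss(0, 10) for _ in range(N)]
fingerprints = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(3)]
copies = [[h + ALPHA * w for h, w in zip(host, fp)] for fp in fingerprints]

# Three colluders average their copies: each embedded fingerprint's
# amplitude drops to 1/3 of its original value.
colluded = [sum(vals) / len(copies) for vals in zip(*copies)]

def correlate(signal, fp):
    """Normalised correlation detector against a known fingerprint
    (assumes the detector has the original host for subtraction)."""
    return sum((s - h) * w for s, h, w in zip(signal, host, fp)) / (ALPHA * len(fp))

before = correlate(copies[0], fingerprints[0])  # ~1.0 on the unattacked copy
after = correlate(colluded, fingerprints[0])    # ~1/3 after 3-way averaging
print(round(before, 2), round(after, 2))
```

With more colluders the correlation keeps dropping toward the detection threshold, which is why the abstract's result, defeating detection with only three copies, is notable.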

2 citations

Proceedings ArticleDOI
01 Dec 2009
TL;DR: An architecture inspired by the human capability to autonomously navigate an environment based on visual landmark recognition is presented. It consists of pre-attentive and attentive stages that allow visual landmarks to be recognised reliably against both clean and cluttered backgrounds.

Abstract: An architecture inspired by the human capability to autonomously navigate an environment based on visual landmark recognition is presented. It consists of pre-attentive and attentive stages that allow visual landmarks to be recognised reliably against both clean and cluttered backgrounds. The pre-attentive stage provides an efficient means for real-time image processing by selectively focusing on regions of interest within input images. The attentive stage has a memory feedback modulation mechanism that allows visual knowledge of landmarks in memory to interact with and guide different stages in the architecture for efficient feature extraction and landmark recognition. The results show that the architecture is able to reliably recognise both occluded and non-occluded visual landmarks in complex backgrounds.
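The pre-attentive stage's role, cheaply discarding most of the image so the expensive attentive stage examines only a few candidates, can be sketched with a toy saliency filter. The window size, the variance threshold, and "high local contrast = interesting" are simplifying assumptions, not the paper's actual mechanism:

```python
def variance(vals):
    """Population variance of a flat list of grey levels."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def regions_of_interest(image, win=2, thresh=100.0):
    """Pre-attentive pass: tile the image into win x win windows and
    keep only high-variance (high-contrast) ones as ROI candidates.
    image: 2-D list of grey levels. Returns (row, col) of salient windows."""
    rois = []
    for r in range(0, len(image) - win + 1, win):
        for c in range(0, len(image[0]) - win + 1, win):
            patch = [image[r + i][c + j] for i in range(win) for j in range(win)]
            if variance(patch) > thresh:
                rois.append((r, c))
    return rois

# Flat background with one textured 2x2 block at (2, 2): only that
# window survives, so a later (attentive) recognizer inspects 1 of 9 tiles.
img = [[10] * 6 for _ in range(6)]
img[2][2], img[2][3], img[3][2], img[3][3] = 200, 10, 10, 200
print(regions_of_interest(img))
```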

2 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
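The mail-filtering scenario in the abstract, a system that learns filtering rules from which messages the user rejects, can be sketched with a toy Naive-Bayes-style word scorer. The word counts, add-one smoothing, and score sign convention are illustrative choices, not any specific production filter:

```python
from collections import Counter
from math import log

def train(kept, rejected):
    """Count word occurrences in messages the user kept vs. rejected."""
    kept_counts = Counter(w for m in kept for w in m.split())
    rejected_counts = Counter(w for m in rejected for w in m.split())
    return kept_counts, rejected_counts

def spam_score(message, kept_counts, rejected_counts):
    """Sum of log-ratio evidence per word, with add-one smoothing.
    Positive score leans "reject"; negative leans "keep"."""
    return sum(
        log((rejected_counts[w] + 1) / (kept_counts[w] + 1))
        for w in message.split()
    )

# The "training data" is just the user's own past decisions.
kept = ["meeting agenda attached", "project status update"]
rejected = ["win cash prize now", "claim your free prize"]
kc, rc = train(kept, rejected)

print(spam_score("free prize now", kc, rc) > 0)         # leans reject
print(spam_score("project meeting update", kc, rc) < 0)  # leans keep
```

As the user keeps rejecting or keeping mail, re-running `train` updates the rules automatically, which is exactly the per-user customization the passage argues no programmer could maintain by hand.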

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models, approximate inference, sampling methods, sequential data, and the combining of models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book
01 Jan 1995
TL;DR: In this article, Nonaka and Takeuchi argue that Japanese firms are successful precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies, and they reveal how Japanese companies translate tacit to explicit knowledge.
Abstract: How has Japan become a major economic power, a world leader in the automotive and electronics industries? What is the secret of their success? The consensus has been that, though the Japanese are not particularly innovative, they are exceptionally skilful at imitation, at improving products that already exist. But now two leading Japanese business experts, Ikujiro Nonaka and Hirotaka Takeuchi, turn this conventional wisdom on its head: Japanese firms are successful, they contend, precisely because they are innovative, because they create new knowledge and use it to produce successful products and technologies. Examining case studies drawn from such firms as Honda, Canon, Matsushita, NEC, 3M, GE, and the U.S. Marines, this book reveals how Japanese companies translate tacit knowledge into explicit knowledge and use it to produce new processes, products, and services.

7,448 citations

01 Jan 2009

7,241 citations