George F. Luger
Other affiliations: University of Edinburgh
Bio: George F. Luger is an academic researcher at the University of New Mexico. He has contributed to research on topics including inference and expert systems, has an h-index of 15, and has co-authored 107 publications receiving 2,677 citations. His previous affiliations include the University of Edinburgh.
Papers published on a yearly basis
01 Jan 1989
TL;DR: This book combines the theoretical foundations of intelligent problem-solving with the data structures and algorithms needed for its implementation, covering logic, rule, object, and agent-based architectures, along with example programs written in LISP and PROLOG.
Abstract: From the Publisher: Combines the theoretical foundations of intelligent problem-solving with the data structures and algorithms needed for its implementation. The book presents logic, rule, object and agent-based architectures, along with example programs written in LISP and PROLOG. The practical applications of AI have been kept within the context of its broader goal: understanding the patterns of intelligence as it operates in this world of uncertainty, complexity and change. The introductory and concluding chapters take a new look at the potentials and challenges facing artificial intelligence and cognitive science. An extended treatment of knowledge-based problem-solving is given, including model-based and case-based reasoning. Includes new material on:
•Fundamentals of search, inference, and knowledge representation
•AI algorithms and data structures in LISP and PROLOG
•Production systems, blackboards, and meta-interpreters, including planners, rule-based reasoners, and inheritance systems
•Machine learning, including ID3 with bagging and boosting, explanation-based learning, PAC learning, and other forms of induction
•Neural networks, including perceptrons, backpropagation, Kohonen networks, Hopfield networks, Grossberg learning, and counterpropagation
•Emergent and social methods of learning and adaptation, including genetic algorithms, genetic programming, and artificial life
•Object- and agent-based problem solving and other forms of advanced knowledge representation
15 Aug 1990
TL;DR: This paper presents the preliminary architecture of a network level intrusion detection system that will monitor base level information in network packets, learning the normal patterns and announcing anomalies as they occur.
Abstract: This paper presents the preliminary architecture of a network level intrusion detection system. The proposed system will monitor base level information in network packets (source, destination, packet size, and time), learning the normal patterns and announcing anomalies as they occur. The goal of this research is to determine the applicability of current intrusion detection technology to the detection of network level intrusions. In particular, the authors are investigating the possibility of using this technology to detect and react to worm programs.
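The abstract describes learning normal patterns over base-level packet features and flagging anomalies. A minimal sketch of that idea, assuming invented field names and a simple unseen-endpoint-pair rule (not the paper's actual architecture):

```python
from collections import Counter

def train_baseline(packets):
    """Count how often each (source, destination) pair occurs in normal traffic."""
    return Counter((p["src"], p["dst"]) for p in packets)

def is_anomalous(packet, baseline, min_count=1):
    """Flag a packet whose endpoint pair was unseen (or rare) during training."""
    return baseline[(packet["src"], packet["dst"])] < min_count

# Toy traffic: the training data defines which host pairs are "normal".
normal = [{"src": "10.0.0.1", "dst": "10.0.0.2", "size": 512},
          {"src": "10.0.0.1", "dst": "10.0.0.2", "size": 600},
          {"src": "10.0.0.3", "dst": "10.0.0.1", "size": 128}]
baseline = train_baseline(normal)

print(is_anomalous({"src": "10.0.0.1", "dst": "10.0.0.2"}, baseline))  # False
print(is_anomalous({"src": "10.0.0.9", "dst": "10.0.0.2"}, baseline))  # True
```

A real system would also model packet size and timing distributions, as the abstract suggests; this sketch only illustrates the learn-then-flag pattern.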
01 Nov 1990
TL;DR: Provides a thorough discussion of AI's theoretical foundations and advanced applications, including expert system design and knowledge-based programming, and should appeal to a broad audience.
Abstract: Provides a thorough discussion of AI's theoretical foundations and advanced applications, including expert system design and knowledge-based programming. It offers a wealth of advanced AI topics and applications that should appeal to a broad audience.
20 Aug 1979
TL;DR: It is argued that the technique of meta-level inference is a powerful technique for controlling search while retaining the modularity of declarative knowledge representations.
Abstract: In this paper we shall describe a program (MECHO), written in Prolog, which solves a wide range of mechanics problems from statements in both predicate calculus and English. Mecho uses the technique of meta-level inference to control search in natural language understanding, common sense inference, model formation and algebraic manipulation. We argue that this is a powerful technique for controlling search while retaining the modularity of declarative knowledge representations.
08 Jun 1994
TL;DR: This book surveys cognitive science, presenting vocabularies for describing intelligence, representation schemes, search strategies for weak-method and strong-method problem solving, machine learning, language processing, and the building of cognitive representations and meta-interpreters in PROLOG.
Abstract: Introduction to Cognitive Science: Intelligence and the Roots of Cognitive Science. Vocabularies for Describing Intelligence. Representation Schemes. Constraining the Architecture of Minds. Natural Intelligence: Brain Function. Symbol Based Representation and Search: Network and Structured Representation Schemes. Logic Based Representation and Reasoning. Search Strategies for Weak Method Problem Solving. Using Knowledge and Strong Method Problem Solving. Machine Learning: Explicit Symbol Based Learning Models. Connectionist Networks: History, The Perceptron, and Backpropagation. Competitive, Reinforcement, and Attractor Learning Models. Language: Language Representation and Processing. Pragmatics and Discourse. Building Cognitive Representations in PROLOG: PROLOG as Representation and Language. Creating Meta-Interpreters in PROLOG. Epilogue: Cognitive Science: Problems and Promise. References. Index.
01 Aug 2000
TL;DR: A Bioentrepreneur course on the assessment of medical technology in the context of commercialization, addressing many issues unique to biomedical products.
Abstract: BIOE 402. Medical Technology Assessment. 2 or 3 hours. Bioentrepreneur course. Assessment of medical technology in the context of commercialization. Objectives, competition, market share, funding, pricing, manufacturing, growth, and intellectual property; many issues unique to biomedical products. Course Information: 2 undergraduate hours. 3 graduate hours. Prerequisite(s): Junior standing or above and consent of the instructor.
TL;DR: This work introduces the reader to the motivations for resolving word ambiguity, describes the task, and overviews supervised, unsupervised, and knowledge-based approaches.
Abstract: Word sense disambiguation (WSD) is the ability to identify the meaning of words in context in a computational manner. WSD is considered an AI-complete problem, that is, a task whose solution is at least as hard as the most difficult problems in artificial intelligence. We introduce the reader to the motivations for solving the ambiguity of words and provide a description of the task. We overview supervised, unsupervised, and knowledge-based approaches. The assessment of WSD systems is discussed in the context of the Senseval/Semeval campaigns, aiming at the objective evaluation of systems participating in several different disambiguation tasks. Finally, applications, open problems, and future directions are discussed.
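Among the knowledge-based approaches the abstract mentions, a classic example is the simplified Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the ambiguous word's context. A minimal sketch, with an invented two-sense inventory for illustration:

```python
def simplified_lesk(context, senses):
    """Return the sense id whose gloss has the largest word overlap with the context."""
    ctx = set(context.lower().split())
    def overlap(gloss):
        return len(ctx & set(gloss.lower().split()))
    return max(senses, key=lambda s: overlap(senses[s]))

# Toy sense inventory for the ambiguous word "bank".
senses_of_bank = {
    "bank#finance": "an institution that accepts deposits and lends money",
    "bank#river": "sloping land beside a body of water such as a river",
}

print(simplified_lesk("he sat on the bank of the river fishing",
                      senses_of_bank))  # bank#river
print(simplified_lesk("she withdrew money from her deposits account",
                      senses_of_bank))  # bank#finance
```

Real WSD systems use full lexical resources such as WordNet and more robust overlap measures; this sketch only shows the gloss-overlap principle.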
10 Dec 2015
TL;DR: To counter the unavailability of comprehensive network benchmark data sets, this paper describes the creation of the UNSW-NB15 data set, a hybrid of real modern normal activities and contemporary synthesized attack activities of network traffic.
Abstract: One of the major research challenges in this field is the unavailability of a comprehensive network-based data set that can reflect modern network traffic scenarios, vast varieties of low-footprint intrusions, and deep structured information about the network traffic. For evaluating network intrusion detection research efforts, the KDD98, KDDCUP99 and NSL-KDD benchmark data sets were generated a decade ago. However, numerous current studies have shown that, for the current network threat environment, these data sets do not inclusively reflect network traffic and modern low-footprint attacks. To counter the unavailability of network benchmark data sets, this paper examines the creation of the UNSW-NB15 data set, which contains a hybrid of real modern normal activities and contemporary synthesized attack activities of network traffic. Existing and novel methods are utilised to generate the features of the UNSW-NB15 data set. This data set is available for research purposes and can be accessed from the link.
TL;DR: Evidence is given that short sequences of system calls executed by running processes are a good discriminator between normal and abnormal operating characteristics of several common UNIX programs.
Abstract: A method is introduced for detecting intrusions at the level of privileged processes. Evidence is given that short sequences of system calls executed by running processes are a good discriminator between normal and abnormal operating characteristics of several common UNIX programs. Normal behavior is collected in two ways: synthetically, by exercising as many normal modes of usage of a program as possible, and in a live user environment, by tracing the actual execution of the program. In the former case several types of intrusive behavior were studied; in the latter case, results were analyzed for false positives.
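The short-sequence idea above can be sketched as follows: record all length-n windows of system calls seen in normal runs, then count how many windows of a new trace fall outside that database. The window length and the example traces are assumptions for illustration, not the paper's experimental setup:

```python
def ngrams(trace, n=3):
    """All length-n windows (as tuples) of a system-call trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def mismatches(trace, normal_db, n=3):
    """Number of length-n windows in `trace` absent from the normal database."""
    return sum(1 for g in ngrams(trace, n) if g not in normal_db)

# Database built from a (toy) normal run of the monitored program.
normal_trace = ["open", "read", "mmap", "read", "close"]
db = ngrams(normal_trace)

print(mismatches(["open", "read", "mmap", "read", "close"], db))     # 0
print(mismatches(["open", "execve", "socket", "read", "close"], db)) # 3
```

A high mismatch count for a running process signals abnormal behavior; in practice the database is built from many traces and the count is normalized by trace length.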