Author

Wolfgang Banzhaf

Bio: Wolfgang Banzhaf is an academic researcher at Michigan State University. He has contributed to research in the topics of genetic programming and evolutionary algorithms, has an h-index of 52, and has co-authored 324 publications receiving 14,795 citations. His previous affiliations include the Technical University of Dortmund and Mitsubishi.


Papers
Book
01 Jan 1998
TL;DR: This book introduces genetic programming as a machine-learning and evolutionary-computation technique, covering its biological background, core concepts, crossover, analysis, variants, implementation, and applications.
Abstract (table of contents):
1. Genetic Programming as Machine Learning
2. Genetic Programming and Biology
3. Computer Science and Mathematical Basics
4. Genetic Programming as Evolutionary Computation
5. Basic Concepts: The Foundation
6. Crossover: The Center of the Storm
7. Genetic Programming and Emergent Order
8. Analysis: Improving Genetic Programming with Statistics
9. Different Varieties of Genetic Programming
10. Advanced Genetic Programming
11. Implementation: Making Genetic Programming Work
12. Applications of Genetic Programming
13. Summary and Perspectives
A. Printed and Recorded Resources
B. Information Available on the Internet
C. GP Software
D. Events
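The book's subject can be made concrete with a minimal sketch of tree-based genetic programming in the spirit of the chapters above. This is an illustration, not code from the book: the target function x*x + x, the operator set, and all parameters are invented for the example.

import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def rand_tree(depth=3):
    # grow a random expression tree over {x, constants, +, -, *}
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-1, 1)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    op, left, right = t
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(t):
    # mean squared error against the invented target x*x + x
    total = 0.0
    for i in range(-10, 11):
        x = i / 10
        d = evaluate(t, x) - (x * x + x)
        total += d * d
    return total if total == total else float('inf')  # NaN guard

def nodes(t, path=()):
    # enumerate the paths to every node in a tree
    yield path
    if isinstance(t, tuple):
        for i, child in enumerate(t[1:], start=1):
            yield from nodes(child, path + (i,))

def get(t, path):
    for i in path:
        t = t[i]
    return t

def put(t, path, sub):
    if not path:
        return sub
    parts = list(t)
    parts[path[0]] = put(t[path[0]], path[1:], sub)
    return tuple(parts)

def crossover(a, b):
    # replace a random subtree of a with a random subtree of b
    return put(a, random.choice(list(nodes(a))),
               get(b, random.choice(list(nodes(b)))))

pop = [rand_tree() for _ in range(200)]
for gen in range(30):
    pop.sort(key=fitness)
    parents = pop[:50]  # truncation selection, for brevity
    pop = parents + [crossover(random.choice(parents), random.choice(parents))
                     for _ in range(150)]
print(min(fitness(t) for t in pop))

Truncation selection and the fixed initial depth stand in for the richer selection schemes, mutation operators, and bloat control that a real GP system (and the book) covers.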

1,771 citations

Book
15 Dec 1997
TL;DR: The book closes with four appendices of resources for the genetic programming community: printed and recorded material, web resources, GP software tools (including Discipulus, the GP system developed by the authors), and related events.

Abstract: Four appendices summarize valuable resources available for the reader: Appendix A contains printed and recorded resources, Appendix B suggests web-related resources, Appendix C discusses GP software tools, including Discipulus, the GP software developed by the authors, and Appendix D mentions events most closely related to the field of genetic programming. URLs can be found online at http://mkp.com/GPIntro.

1,256 citations

BookDOI
01 Jan 2004
TL;DR: The two-volume set LNCS 3102/3103 constitutes the refereed proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2004, held in Seattle, WA, USA, in June 2004.

Abstract: The two-volume set LNCS 3102/3103 constitutes the refereed proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2004, held in Seattle, WA, USA, in June 2004. The 230 revised full papers and 104 poster papers presented were carefully reviewed and selected from 460 submissions. The papers are organized in topical sections on artificial life, adaptive behavior, agents, and ant colony optimization; artificial immune systems, biological applications; coevolution; evolutionary robotics; evolution strategies and evolutionary programming; evolvable hardware; genetic algorithms; genetic programming; learning classifier systems; real world applications; and search-based software engineering.

913 citations

Journal ArticleDOI
01 Jan 2010
TL;DR: An overview of research progress in applying CI methods to intrusion detection, covering the core CI methods of artificial neural networks, fuzzy systems, evolutionary computation, artificial immune systems, swarm intelligence, and soft computing.
Abstract: Intrusion detection based upon computational intelligence is currently attracting considerable interest from the research community. Characteristics of computational intelligence (CI) systems, such as adaptation, fault tolerance, high computational speed and error resilience in the face of noisy information, fit the requirements of building a good intrusion detection model. Here we want to provide an overview of the research progress in applying CI methods to the problem of intrusion detection. The scope of this review will encompass core methods of CI, including artificial neural networks, fuzzy systems, evolutionary computation, artificial immune systems, swarm intelligence, and soft computing. The research contributions in each field are systematically summarized and compared, allowing us to clearly define existing research challenges, and to highlight promising new research directions. The findings of this review should provide useful insights into the current IDS literature and be a good source for anyone who is interested in the application of CI approaches to IDSs or related fields.
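As a concrete instance of one CI method family named above (artificial immune systems), here is a toy negative-selection detector. This is an illustrative sketch, not an algorithm from the survey: the bit-string encoding of traffic, the Hamming-distance matching rule, the threshold, and the sample data are all invented.

import random

def matches(detector, sample, threshold=2):
    # a detector matches a sample within Hamming distance `threshold`
    return sum(a != b for a, b in zip(detector, sample)) <= threshold

def negative_selection(self_set, n_detectors=100, length=8):
    # keep only randomly generated detectors that match no normal sample
    detectors = []
    while len(detectors) < n_detectors:
        d = tuple(random.randint(0, 1) for _ in range(length))
        if not any(matches(d, s) for s in self_set):
            detectors.append(d)
    return detectors

# "self": bit patterns summarizing normal traffic (invented toy data)
normal = [tuple(int(b) for b in "00001111"), tuple(int(b) for b in "00110011")]
detectors = negative_selection(normal)
probe = tuple(int(b) for b in "11110000")
print(any(matches(d, probe) for d in detectors))  # likely True: anomalous

Any surviving detector that fires on new traffic flags it as non-self, mirroring how the immune-system metaphor is applied to anomaly-based intrusion detection.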

700 citations

BookDOI
01 Jan 2003
TL;DR: Self-replication is proposed as a universal, continuously valued property of the interaction between a system and its environment, representing the effect of the system's presence upon the future presence of similar systems.
Abstract: Self-replication is a fundamental property of many interesting physical, formal and biological systems, such as crystals, waves, automata, and especially forms of natural and artificial life. Despite its importance to many phenomena, self-replication has not been consistently defined or quantified in a rigorous, universal way. In this paper we propose a universal, continuously valued property of the interaction between a system and its environment. This property represents the effect of the presence of such a system upon the future presence of similar systems. We demonstrate both analytical and computational analysis of self-replicability factors for three distinct systems involving both discrete and continuous behaviors.

1 Overview and History

Two prominent issues arise in examining how self-replication has been handled when trying to extend the concept universally: how to deal with non-ideal systems and how to address so-called ‘trivial’ cases [2,3]. Moore [4] requires that, for a configuration to be considered self-reproducing, it must be capable of giving rise to arbitrarily many offspring; this requirement extends poorly to finite environments. Lohn and Reggia [5] put forward several cellular-automata (CA)-specific definitions, which result in a binary criterion. A second issue that arose in the consideration of self-replicating automata was that some cases seemed too trivial for consideration, such as an ‘all-on’ CA, resulting in a requirement for Turing-universality [6].

The definition of self-replicability we propose here is motivated in part by (a) a desire to do more than look at self-replication as a binary property applicable only to certain automata, and (b) the goal of encapsulating a general concept in a way that is not reliant upon (but is compatible with) ideal conditions. We wish to do this by putting self-replication on a scale that is algorithmically calculable, quantifiable, and continuous. Such a scale would allow for comparisons both between the same system in different environments (to determine the ideal environment for a system's replication) and between different systems in the same environment (to optimize replicability in a given environment). Rather than viewing self-replicability as a property purely of the system in question, we view it as a property of the interaction between a system and its environment. Self-replication, as we present it, is a property embedded in and based upon information, rather than a specific material framework. We construct replicability as a property relative to two different environments, indicating the degree to which one environment yields a higher presence of the system over time. Self-replicability, then, is a comparison between an environment lacking the system and an environment in which the system is present. We will first introduce a number of definitions, and then give examples of the replicability of three types of systems.
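The abstract describes, but does not reproduce, the paper's quantitative definition, so the following is only one possible formalization consistent with it: measure the time-averaged presence of a pattern in an environment seeded with the system versus an unseeded one, and take the difference as a replicability score. The dynamics and all parameters (p_copy, p_decay, p_spont) are invented for this toy model.

import random

def step(cells, p_copy=0.3, p_decay=0.2, p_spont=0.001):
    # live sites may copy into neighbors and may decay; empty sites
    # occasionally become live spontaneously (the unseeded baseline)
    new = list(cells)
    for i, alive in enumerate(cells):
        if alive:
            if random.random() < p_decay:
                new[i] = 0
            for j in (i - 1, i + 1):
                if 0 <= j < len(cells) and random.random() < p_copy:
                    new[j] = 1
        elif random.random() < p_spont:
            new[i] = 1
    return new

def mean_presence(seeded, size=100, steps=200, trials=20):
    # time-averaged number of live sites across independent runs
    total = 0
    for _ in range(trials):
        cells = [0] * size
        if seeded:
            cells[size // 2] = 1
        for _ in range(steps):
            cells = step(cells)
            total += sum(cells)
    return total / (trials * steps)

# replicability score: presence gained by starting with the system present
print(mean_presence(seeded=True) - mean_presence(seeded=False))

With these settings the seeded environment builds up population much sooner than the unseeded one, so the score is positive; a pattern that failed to propagate would score near zero, giving the continuous, comparable scale the abstract calls for.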

635 citations


Cited by
Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.
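The particle swarm update the abstract refers to can be sketched compactly. The inertia-weight form below is a common later variant rather than the exact formulation of this paper, and the constants w, c1, and c2 are typical textbook choices, not values from the paper.

import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # positions and velocities; the search box [-5, 5]^dim is arbitrary
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:      # personal best improved
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:     # global best improved
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# usage: minimize the sphere function in three dimensions
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_f)

Each particle is pulled toward its own best position and the swarm's best position, with random weights providing the stochastic exploration the abstract relates to artificial life and genetic algorithms.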

35,104 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
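The mail-filtering example in the fourth category above can be made concrete with a toy word-frequency learner. This is an illustrative sketch, not a method from the article: a naive-Bayes-style score over messages the user has rejected or kept, with invented sample data.

import math
from collections import Counter

def train(messages):
    # messages: list of (text, was_rejected) pairs from one user's history
    word_counts = {True: Counter(), False: Counter()}
    class_counts = Counter()
    for text, rejected in messages:
        class_counts[rejected] += 1
        word_counts[rejected].update(text.lower().split())
    return word_counts, class_counts

def score(text, word_counts, class_counts):
    # log-odds that the message should be filtered, with add-one smoothing
    log_odds = math.log(class_counts[True] / class_counts[False])
    vocab = set(word_counts[True]) | set(word_counts[False])
    n_true = sum(word_counts[True].values())
    n_false = sum(word_counts[False].values())
    for w in text.lower().split():
        p_true = (word_counts[True][w] + 1) / (n_true + len(vocab))
        p_false = (word_counts[False][w] + 1) / (n_false + len(vocab))
        log_odds += math.log(p_true / p_false)
    return log_odds

history = [("free money now", True), ("meeting at noon", False),
           ("cheap money offer", True), ("project status update", False)]
wc, cc = train(history)
print(score("free offer", wc, cc) > 0)  # True: filter this message

Because the rules are re-estimated from each user's own accept/reject history, the filter stays customized and up-to-date without a programmer rewriting it, which is exactly the burden the fourth category describes.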

13,246 citations

01 Aug 2000
TL;DR: A Bioentrepreneur course on the assessment of medical technology in the context of commercialization, addressing many issues unique to biomedical products.
Abstract: BIOE 402. Medical Technology Assessment. 2 or 3 hours. Bioentrepreneur course. Assessment of medical technology in the context of commercialization. Objectives, competition, market share, funding, pricing, manufacturing, growth, and intellectual property; many issues unique to biomedical products. Course Information: 2 undergraduate hours. 3 graduate hours. Prerequisite(s): Junior standing or above and consent of the instructor.

4,833 citations