
Showing papers in "Natural Computing in 2009"


Journal ArticleDOI
TL;DR: In this paper metaheuristics such as Ant Colony Optimization, Evolutionary Computation, Simulated Annealing, Tabu Search and others are introduced, and their applications to the class of Stochastic Combinatorial Optimization Problems (SCOPs) are thoroughly reviewed.
Abstract: Metaheuristics are general algorithmic frameworks, often nature-inspired, designed to solve complex optimization problems, and they have been a growing research area for a few decades. In recent years, metaheuristics have been emerging as successful alternatives to more classical approaches also for solving optimization problems that include uncertain, stochastic, and dynamic information in their mathematical formulation. In this paper metaheuristics such as Ant Colony Optimization, Evolutionary Computation, Simulated Annealing, Tabu Search and others are introduced, and their applications to the class of Stochastic Combinatorial Optimization Problems (SCOPs) are thoroughly reviewed. Issues common to all metaheuristics, open problems, and possible directions of research are proposed and discussed. In this survey, the reader familiar with metaheuristics will also find pointers to classical algorithmic approaches to optimization under uncertainty and useful information for starting to work in this problem domain, while the reader new to metaheuristics should find a good tutorial on those metaheuristics that are currently being applied to optimization under uncertainty, and motivation for interest in this field.
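As an illustration of how a metaheuristic copes with a stochastic objective, the sketch below applies plain simulated annealing to a routing-style SCOP in which the cost of a tour can only be estimated by averaging over sampled travel-time scenarios. It is a minimal sketch under assumed data structures (a list of travel-time matrices), not any of the algorithms reviewed in the paper.

    # Illustrative sketch (not from the paper): simulated annealing on a stochastic
    # combinatorial problem whose objective is estimated by averaging over scenarios.
    import math, random

    def sample_cost(tour, scenarios):
        # Average tour length over sampled travel-time matrices (hypothetical SCOP).
        return sum(sum(times[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
                   for times in scenarios) / len(scenarios)

    def simulated_annealing(n, scenarios, iters=5000, t0=10.0, alpha=0.999):
        tour = list(range(n))
        random.shuffle(tour)
        best, best_cost, t = tour[:], sample_cost(tour, scenarios), t0
        for _ in range(iters):
            i, j = sorted(random.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt style move
            delta = sample_cost(cand, scenarios) - sample_cost(tour, scenarios)
            if delta < 0 or random.random() < math.exp(-delta / t):
                tour = cand
                c = sample_cost(tour, scenarios)
                if c < best_cost:
                    best, best_cost = tour[:], c
            t *= alpha                                             # geometric cooling
        return best, best_cost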

638 citations


Journal ArticleDOI
TL;DR: Uniform constructions of standard spiking neural P systems are provided which solve computationally hard problems in a constant number of steps, working in a non-deterministic way.
Abstract: We continue the investigations concerning the possibility of using spiking neural P systems as a framework for solving computationally hard problems, addressing two problems which were already recently considered in this respect: $${\tt Subset}\,{\tt Sum}$$ and $${\tt SAT}$$. For both of them we provide uniform constructions of standard spiking neural P systems (i.e., not using extended rules or parallel use of rules) which solve these problems in a constant number of steps, working in a non-deterministic way. This improves known results of this type where the construction was non-uniform, and/or was using various ingredients added to the initial definition of spiking neural P systems (the SN P systems as defined initially are called here "standard"). However, in the $${\tt Subset}\,{\tt Sum}$$ case, a price to pay for this improvement is that the solution is obtained either in a time which depends on the value of the numbers involved in the problem, or by using a system whose size depends on the same values, or again by using complicated regular expressions. A uniform solution to 3-$${\tt SAT}$$ is also provided, which works in constant time.

106 citations


Journal ArticleDOI
TL;DR: The paper reviews a number of published chemical realizations of simple information processing devices like logical gates or memory cells and shows that by combining these devices as building blocks the medium can perform complex operations like for example counting of arriving excitations.
Abstract: There are many ways in which a nonlinear chemical medium can be used for information processing. Here we are concerned with an excitable medium and the straightforward method of information coding: a single excitation pulse represents a bit of information and a group of excitations forms a message. Our attention is focused on a specific type of nonhomogeneous medium that has an intentionally introduced geometrical structure of regions characterized by different excitability levels. We show that in information processing applications the geometry plays an equally important role as the dynamics of the medium and allows one to construct devices that perform complex signal processing operations even for a relatively simple kinetics of the reactions involved. In the paper we review a number of published chemical realizations of simple information processing devices like logical gates or memory cells and we show that by combining these devices as building blocks the medium can perform complex operations such as counting arriving excitations. We also present a new, simple realization of a chemical signal diode that transmits pulses in one direction only.

52 citations


Journal ArticleDOI
TL;DR: This work introduces the hybridization state Tile Assembly Model (hsTAM), which evaluates intra-tile state changes as well as assembly state changes, and proposes two novel error suppression mechanisms: the Protected Tile Mechanism (PTM) and the Layered Tile Mechanisms (LTM).
Abstract: Algorithmic self-assembly using DNA-based molecular tiles has been demonstrated to implement molecular computation. When several different types of DNA tile self-assemble, they can form large two-dimensional algorithmic patterns. Prior analysis predicted that the error rates of tile assembly can be reduced by optimizing physical parameters such as tile concentrations and temperature. However, in exchange, the growth speed is also very low. To improve the tradeoff between error rate and growth speed, we propose two novel error suppression mechanisms: the Protected Tile Mechanism (PTM) and the Layered Tile Mechanism (LTM). These utilize DNA protecting molecules to form kinetic barriers against spurious assembly. In order to analyze the performance of these two mechanisms, we introduce the hybridization state Tile Assembly Model (hsTAM), which evaluates intra-tile state changes as well as assembly state changes. Simulations using hsTAM suggest that the PTM and LTM improve the optimal tradeoff between error rate $$\epsilon$$ and growth speed r, from $$r \approx \beta \epsilon^{2.0}$$ (for the conventional mechanism) to $$r \approx \beta \epsilon^{1.4}$$ and $$r \approx \beta \epsilon^{0.7}$$ , respectively.
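To make the reported tradeoff concrete, the following back-of-the-envelope comparison (assuming the same prefactor $$\beta$$ for both mechanisms, which is an idealization) shows how much faster assembly could run at a fixed target error rate:

$$\frac{r_{\mathrm{LTM}}}{r_{\mathrm{conv}}} \approx \frac{\beta\,\epsilon^{0.7}}{\beta\,\epsilon^{2.0}} = \epsilon^{-1.3}, \qquad \epsilon = 10^{-3} \;\Rightarrow\; \epsilon^{-1.3} = 10^{3.9} \approx 8 \times 10^{3}.$$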

50 citations


Journal ArticleDOI
TL;DR: Encouraging results with negative correlation in incremental learning are revealed, showing that NCL is a promising approach to incremental learning.
Abstract: Negative Correlation Learning (NCL) has been successfully applied to construct neural network ensembles. It encourages the neural networks that compose the ensemble to be different from each other and, at the same time, accurate. The difference among the neural networks that compose an ensemble is a desirable feature for incremental learning, for some of the neural networks may be able to adapt faster and better to new data than the others. So, NCL is a potentially powerful approach to incremental learning. With this in mind, this paper presents an analysis of NCL, aiming at determining its strong and weak points for incremental learning. The analysis shows that it is possible to use NCL to overcome catastrophic forgetting, an important problem related to incremental learning. However, when catastrophic forgetting is very low, no advantage is taken of using more than one neural network of the ensemble to learn new data, and the test error is high. When all the neural networks are used to learn new data, some of them can indeed adapt better than the others, but a higher catastrophic forgetting is obtained. In this way, it is important to find a trade-off between overcoming catastrophic forgetting and using an entire ensemble to learn new data. The NCL results are comparable with those of other approaches which were specifically designed for incremental learning. Thus, the study presented in this work reveals encouraging results with negative correlation in incremental learning, showing that NCL is a promising approach to incremental learning.
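For reference, the penalty that gives NCL its name is usually written as follows (this is the commonly cited formulation of Liu and Yao, restated here for context rather than taken from this paper): member $$i$$ of the ensemble is trained on

$$E_i(n) = \tfrac{1}{2}\bigl(F_i(n) - d(n)\bigr)^2 + \lambda\, p_i(n), \qquad p_i(n) = \bigl(F_i(n) - \bar{F}(n)\bigr)\sum_{j \ne i}\bigl(F_j(n) - \bar{F}(n)\bigr),$$

where $$F_i(n)$$ is the output of member $$i$$ on pattern $$n$$, $$\bar{F}(n)$$ is the ensemble mean, $$d(n)$$ is the target, and $$\lambda$$ controls the strength of the diversity-enforcing penalty.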

46 citations


Journal ArticleDOI
TL;DR: A physical device in relativistic spacetime which can compute a non-Turing computable task, e.g. which can decide the halting problem of Turing machines or decide whether ZF set theory is consistent (more precisely, can decide the theorems of ZF).
Abstract: Looking at very recent developments in spacetime theory, we can wonder whether these results exhibit features of hypercomputation that traditionally seemed impossible or absurd. Namely, we describe a physical device in relativistic spacetime which can compute a non-Turing computable task, e.g. which can decide the halting problem of Turing machines or decide whether ZF set theory is consistent (more precisely, can decide the theorems of ZF). Starting from this, we will discuss the impact of recent breakthrough results of relativity theory, black hole physics and cosmology on well-established foundational issues of computability theory as well as on logic. We find that the unexpected, revolutionary results in the mentioned branches of science force us to reconsider the status of the physical Church Thesis and to consider it as being seriously challenged. We will outline the consequences of all this for the foundation of mathematics (e.g. for Hilbert's programme). Observational, empirical evidence will be quoted to show that the statements above do not require any assumption of some physical universe outside of our own: in our specific physical universe there seem to exist regions of spacetime supporting potential non-Turing computations. Additionally, new "engineering" ideas will be outlined for solving the so-called blue-shift problem of GR-computing. Connections with related talks at the Physics and Computation meeting, e.g. those of Jerome Durand-Lose, Mark Hogarth and Martin Ziegler, will be indicated.

40 citations


Journal ArticleDOI
TL;DR: A model for studying the phenomena of Adaptation, Anticipation and Rationality as nature-inspired computational paradigms is proposed, based on a division that discriminates these terms according to the complexity exhibited in the behavior of the systems.
Abstract: Intelligence, Rationality, Learning, Anticipation and Adaptation are terms that have been and still remain at the central stage of computer science. These terms delimit their specific areas of study; nevertheless, they are so interrelated that studying them separately is an endeavor that seems unpromising. In this paper, a model for studying the phenomena of Adaptation, Anticipation and Rationality as nature-inspired computational paradigms is proposed. It relies on a division oriented towards discriminating these terms according to the complexity exhibited in the behavior of the systems in which these phenomena come into play. For this purpose a series of fundamental principles and hypotheses are proposed, as well as some experimental results that corroborate them.

40 citations


Journal ArticleDOI
TL;DR: It is shown how the struggle for resources induces an arms race that leads to the evolution of elongated growth in contrast to rather ample forms at ground-level when the plants evolve in isolation.
Abstract: This article presents studies on plants and their communities through experiments with a multi-agent platform of generic virtual plants. Based on Artificial Life concepts, the model has been designed for long-term simulations spanning a large number of generations while emphasizing the most important morphological and physiological aspects of a single plant. The virtual plants combine a physiological transport-resistance model with a morphological model using the L-system formalism and grow in a simplified 3D artificial ecosystem. Experiments at three different scales are carried out and compared to observations on real plant species. At the individual level, single virtual plants are grown in order to examine their responses to environmental constraints. A number of emerging characteristics concerning individual plant growth can be observed. Unifying field observation, mathematical theory and computer simulation, population level experiments on intraspecific and interspecific competition for resources are related to corresponding aggregate models of population dynamics. The latter provide a more general understanding of the experiments with respect to long-term trends and equilibrium conditions. Studies at the evolutionary level aim at morphogenesis and the influence of competition on plant morphology. Among other results, it is shown how the struggle for resources induces an arms race that leads to the evolution of elongated growth in contrast to rather ample forms at ground-level when the plants evolve in isolation.
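Since the plant morphology in this model is driven by the L-system formalism, a minimal rewriting sketch may help readers unfamiliar with it; the production below is a textbook-style bracketed rule, not the paper's actual plant grammar.

    # Minimal L-system rewriting sketch (illustrative; not the paper's plant model).
    def rewrite(axiom, rules, steps):
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(ch, ch) for ch in s)  # apply all productions in parallel
        return s

    # Hypothetical bracketed L-system: F = grow a segment, [ ] = branch, +/- = turn.
    rules = {"F": "F[+F]F[-F]F"}
    print(rewrite("F", rules, 2))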

39 citations


Journal ArticleDOI
TL;DR: This work presents the results and analysis of two classifier systems (XCS and UCS) on a subset of a publicly available benchmark intrusion detection dataset which features serious class imbalances and two very rare classes and concludes that LCSs are a competitive approach to intrusion detection.
Abstract: Evolutionary Learning Classifier Systems (LCSs) combine reinforcement learning or supervised learning with effective genetics-based search techniques. Together these two mechanisms enable LCSs to evolve solutions to decision problems in the form of easy to interpret rules called classifiers. Although LCSs have shown excellent performance on some data mining tasks, many enhancements are still needed to tackle features like high dimensionality, huge data sizes, non-uniform distribution of classes, etc. Intrusion detection is a real world problem where such challenges exist and to which LCSs have not previously been applied. An intrusion detection problem is characterised by huge network traffic volumes, difficult-to-realize decision boundaries between attacks and normal activities and highly imbalanced attack class distribution. Moreover, it demands high accuracy, fast processing times and adaptability to a changing environment. We present the results and analysis of two classifier systems (XCS and UCS) on a subset of a publicly available benchmark intrusion detection dataset which features serious class imbalances and two very rare classes. We introduce a better approach for handling the situation when no rules match an input on the test set and recommend this be adopted as a standard part of XCS and UCS. We detect little sign of overfitting in XCS but somewhat more in UCS. However, both systems tend to reach near-best performance in very few passes over the training data. We improve the accuracy of these systems with several modifications and point out aspects that can further enhance their performance. We also compare their performance with other machine learning algorithms and conclude that LCSs are a competitive approach to intrusion detection.

37 citations


Journal ArticleDOI
TL;DR: Integrative connectionist learning systems (ICOS) as discussed by the authors integrate in their structure and learning algorithms principles from different hierarchical levels of information processing in the brain, including the neuronal, genetic, and quantum levels.
Abstract: The connectionist systems (artificial neural networks) developed and widely utilized so far are mainly based on a single brain-like connectionist principle of information processing, where learning and information exchange occur in the connections. This paper extends this paradigm of connectionist systems to a new trend--integrative connectionist learning systems (ICOS) that integrate in their structure and learning algorithms principles from different hierarchical levels of information processing in the brain, including the neuronal, genetic, and quantum levels. Spiking neural networks (SNN) are used as a basic connectionist learning model which is further extended with other information learning principles to create different ICOS. For example, evolving SNN for multitask learning are presented and illustrated on a case study of person authentication based on multimodal auditory and visual information. Integrative gene-SNN are presented, where gene interactions are included in the functioning of a spiking neuron. They are applied to a case study of computational neurogenetic modeling. Integrative quantum-SNN are introduced with a quantum Hebbian learning, where input features as well as information spikes are represented by quantum bits, which results in exponentially faster feature selection and model learning. ICOS can be used to solve challenging biological and engineering problems more efficiently when fast adaptive learning systems are needed to learn incrementally in a large dimensional space. They can also help to better understand complex information processes in the brain, especially how information processes at different levels interact. Open questions, challenges and directions for further research are presented.
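The spiking unit underlying such SNN-based models can be illustrated with a minimal leaky integrate-and-fire neuron; the parameters below are arbitrary placeholders, not those of the evolving SNN described in the paper.

    # Minimal leaky integrate-and-fire neuron (illustrative of the spiking units
    # underlying SNN models; parameters are arbitrary, not the paper's).
    def lif_run(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9, w=0.5):
        v, spikes = v_rest, []
        for t, i_t in enumerate(input_current):
            v = leak * v + w * i_t          # leaky integration of weighted input
            if v >= v_thresh:               # threshold crossing emits a spike
                spikes.append(t)
                v = v_rest                  # reset after firing
        return spikes

    print(lif_run([1, 0, 1, 1, 0, 1, 1, 1]))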

37 citations


Journal ArticleDOI
TL;DR: It is experimentally demonstrate that computation of spanning trees and implementation of general purpose storage-modification machines can be executed by a vegetative state of the slime mold Physarum polycephalum.
Abstract: We experimentally demonstrate that computation of spanning trees and implementation of general purpose storage-modification machines can be executed by a vegetative state of the slime mold Physarum polycephalum. We advance theory and practice of reaction-diffusion computing by studying a biological model of reaction-diffusion encapsulated in a membrane.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an optical computational device which uses light rays for solving the subset-sum problem in a graph-like representation and the light is traversing it by following the routes given by the connections between nodes.
Abstract: We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check if there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
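A tiny software analogue of the delay-encoding principle: each element either contributes its value to a ray's total delay (element taken) or not (element skipped), so the arrival times at the destination enumerate exactly the subset sums. The sketch below omits the predefined constants on the "skip" arcs that the real device needs, and is only a schematic check of the idea.

    # Schematic simulation of the delay-encoding idea: every element either adds
    # its value to a ray's delay (taken) or adds 0 (skipped), so the set of arrival
    # times at the destination is exactly the set of subset sums.
    def subset_sum_by_delays(values, target):
        delays = {0}
        for v in values:
            delays = {d + v for d in delays} | delays   # take or skip the element
        return target in delays

    print(subset_sum_by_delays([3, 5, 9, 14], 17))      # True: 3 + 14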

Journal ArticleDOI
TL;DR: A theoretical framework to analyze the problem of finding a large set of single DNA strands that do not crosshybridize to themselves and/or to their complements, and proposes a filtering process that removes strands creating pairs with low Gibbs energies, as approximated by the nearest-neighbor model.
Abstract: Finding a large set of single DNA strands that do not crosshybridize to themselves and/or to their complements is an important problem in DNA computing, self-assembly, and DNA memories. We describe a theoretical framework to analyze this problem, gauge its computational difficulty, and provide nearly optimal solutions. In this framework, codeword design is reduced to finding large sets of strands maximally separated in a DNA space, and the size of such sets depends on the geometry of these metric spaces. We show that codeword design is NP-complete using any single reasonable measure that approximates the Gibbs energy, thus practically excluding the possibility of a procedure that finds maximal sets efficiently. Second, we extend a technique known as shuffling to provide a construction that yields provably nearly-maximal codes. Third, we propose a filtering process that removes strands creating pairs with low Gibbs energies, as approximated by the nearest-neighbor model. These two steps produce large codes of high thermodynamic quality. The proposed framework can be used to gain an understanding of the Gibbs energy landscapes for DNA strands on which much of DNA computing and self-assembly are based.
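The shape of such a filtering pass can be sketched with a crude stand-in criterion: instead of nearest-neighbor Gibbs energies, the toy filter below rejects a candidate strand that is too close, in Hamming distance, to an already kept strand, or whose reverse complement is. This is only a distance proxy for hybridization strength, not the thermodynamic criterion used in the paper.

    # Crude stand-in for the filtering step: reject a candidate whose sequence or
    # reverse complement is too close (in Hamming distance) to any kept strand.
    COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def revcomp(s):
        return "".join(COMP[b] for b in reversed(s))

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def filter_codewords(candidates, min_dist):
        kept = []
        for c in candidates:
            if all(hamming(c, k) >= min_dist and hamming(revcomp(c), k) >= min_dist
                   for k in kept):
                kept.append(c)
        return kept

    print(filter_codewords(["ACGTACGT", "ACGTACGA", "TTGCAGCA"], 3))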

Journal ArticleDOI
TL;DR: Nootropia, a model inspired by the autopoietic view of the immune system that reacts to user feedback in order to define and preserve the user interests, is described in the context of adaptive, content-based document filtering and evaluated using virtual users.
Abstract: Adaptive information filtering is a challenging and fascinating problem. It requires the adaptation of a representation of a user's multiple interests to various changes in them. We tackle this dynamic problem with Nootropia, a model inspired by the autopoietic view of the immune system. It is based on a self-organising antibody network that reacts to user feedback in order to define and preserve the user interests. We describe Nootropia in the context of adaptive, content-based document filtering and evaluate it using virtual users. The results demonstrate Nootropia's ability to adapt to both short-term variations and more radical changes in the user's interests, and to dynamically control its size and connectivity in the process. Advantages over existing approaches to profile adaptation, such as learning algorithms and evolutionary algorithms are also highlighted.

Journal ArticleDOI
TL;DR: Using nested singularities (which are constructed), it is shown how to decide higher levels of the corresponding arithmetical hierarchies; not only is the Zeno effect possible, but it is used to unleash the black hole power.
Abstract: The so-called Black Hole model of computation involves a non-Euclidean space-time where one device is infinitely "accelerated" on one world-line but can send some limited information to an observer working at "normal pace". The keystone is that after a finite duration, the observer has received the information or knows that no information was ever sent by the device, which had an infinite time to complete its computation. This makes it possible to decide semi-decidable problems and clearly falls outside of classical computability. A setting in a continuous Euclidean space-time that mimics this is presented. Not only is the Zeno effect possible but it is used to unleash the black hole power. Both discrete (classical) computation and analog computation (in the sense of Blum, Shub and Smale) are considered. Moreover, using nested singularities (which are constructed), it is shown how to decide higher levels of the corresponding arithmetical hierarchies.

Journal ArticleDOI
TL;DR: The subsequent years have seen a resurgence of LCSs as XCS in particular has been found able to reach optimality in a number of difficult benchmark problems and has also begun to be applied to a number of hard real-world problems such as data mining, simulation modeling, robotics, and adaptive control.
Abstract: It is now 30 years since John Holland presented the first implementation of his learning classifier system (LCS) framework (Holland and Reitman 1978). This ‘‘Cognitive System Level 1’’ used a genetic algorithm (Holland 1975) to learn appropriate rules of behaviour in one-dimensional, dual-objective maze navigation tasks with a form of reinforcement learning assigning utility to the rules. Holland later revised the algorithm to define what would become the standard system (Holland 1980, 1986). However, Holland’s full system was somewhat complex and practical experience found it difficult to realize the envisaged behaviour/performance (e.g., Wilson and Goldberg 1989) and interest waned. Some years later, Wilson presented the ‘‘zeroth-level’’ classifier system, ZCS (Wilson 1994) which ‘‘keeps much of Holland’s original framework but simplifies it to increase understandability and performance’’ (ibid.). But ZCS did not reach optimality in the most common reinforcement learning sense. Accordingly, Wilson introduced a form of LCS which altered the way in which rule fitness is calculated—XCS (Wilson 1995). XCS also makes the connection between LCS and temporal difference learning (Watkins 1989) explicit with, in its standard form, its ability to represent the state-action value map in a rule form thereby enabling compaction through generalization. Shortly after Holland had formulated the general framework, Stephen Smith (1980) presented a modification wherein a traditional genetic algorithm was used to design a complete set of rules. That is, Smith’s poker playing ‘‘Learning System 1’’ avoided the need to assign utility to individual rules. The subsequent years have seen a resurgence of LCSs as XCS in particular has been found able to reach optimality in a number of difficult benchmark problems. Perhaps more importantly, XCS has also begun to be applied to a number of hard real-world problems such as data mining, simulation modeling, robotics, and adaptive control (see Bull 2004 for an overview)—where excellent performance has often been achieved. A theoretical basis

Journal ArticleDOI
TL;DR: Results demonstrate that GBML is capable of performing prostate tissue classification efficiently, making a compelling case for using GBML implementations as efficient and powerful tools for biomedical image processing.
Abstract: Prostate cancer accounts for one-third of noncutaneous cancers diagnosed in US men and is a leading cause of cancer-related death. Advances in Fourier transform infrared spectroscopic imaging now provide very large data sets describing both the structural and local chemical properties of cells within prostate tissue. Uniting spectroscopic imaging data and computer-aided diagnoses (CADx), our long term goal is to provide a new approach to pathology by automating the recognition of cancer in complex tissue. The first step toward the creation of such CADx tools requires mechanisms for automatically learning to classify tissue types--a key step in the diagnosis process. Here we demonstrate that genetics-based machine learning (GBML) can be used to approach such a problem. However, to efficiently analyze this problem there is a need to develop efficient and scalable GBML implementations that are able to process very large data sets. In this paper, we propose and validate an efficient GBML technique--$${\tt NAX}$$--based on an incremental genetics-based rule learner. $${\tt NAX}$$ exploits massive parallelism via the message passing interface (MPI) and efficient rule-matching using hardware-implemented operations. Results demonstrate that $${\tt NAX}$$ is capable of performing prostate tissue classification efficiently, making a compelling case for using GBML implementations as efficient and powerful tools for biomedical image processing.
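The kind of hardware-friendly rule matching the abstract alludes to can be sketched by encoding a ternary (0/1/#) classifier condition as a care-mask plus a pattern, so that matching an instance reduces to two integer operations. The encoding below is an illustrative assumption, not NAX's actual data layout.

    # Sketch of bitwise rule matching: a ternary rule over 0/1/# is stored as a
    # care-mask plus a pattern, and matching an instance is an AND plus a compare.
    # Illustrative encoding only; not taken from the NAX implementation.
    def encode_rule(ternary):                 # e.g. "1#0#1"
        mask = pattern = 0
        for ch in ternary:
            mask <<= 1; pattern <<= 1
            if ch != "#":                     # '#' means "don't care"
                mask |= 1
                pattern |= 1 if ch == "1" else 0
        return mask, pattern

    def matches(mask, pattern, instance_bits):
        return (instance_bits & mask) == pattern

    mask, pattern = encode_rule("1#0#1")
    print(matches(mask, pattern, 0b11011))    # True: the cared-for bits are 1, 0, 1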

Journal ArticleDOI
TL;DR: First, a precise operational model is provided for these dynamic membrane systems, in which promoter and inhibitor rules may also occur; then a translation into behaviourally equivalent Petri nets with localities and range arcs is described.
Abstract: We consider membrane systems with dissolving and thickening reaction rules. Application of these rules entails a dynamical change in the structure of a system during its evolution. First we provide a precise operational model for these dynamic membrane systems, in which promoter and inhibitor rules may also occur. Next we describe a translation into behaviourally equivalent Petri nets with localities and range arcs.

Journal ArticleDOI
TL;DR: This work focuses on rule-based modeling and its integration in the domain-specific language MGS, which allows the modeling of biological processes at various levels of description through the notions of topological collections and transformations.
Abstract: Systems biology aims at integrating processes at various time and spatial scales into a single and coherent formal description to allow computer modeling. In this context, we focus on rule-based modeling and its integration in the domain-specific language MGS. Through the notions of topological collections and transformations, MGS allows the modeling of biological processes at various levels of description. We validate our approach through the description of various models of the genetic switch of the λ phage, from a very simple biochemical description of the process to an individual-based model on a Delaunay graph topology. This approach is a first step towards providing the requirements for the emerging field of spatial systems biology, which integrates spatial properties into systems biology.

Journal ArticleDOI
TL;DR: This paper surveys a wide variety of behaviours observed within the natural world, and aims to highlight general cooperative group behaviours, search strategies and communication methods that might be useful within a wider computing context, beyond optimisation.
Abstract: In recent years a considerable amount of natural computing research has been undertaken to exploit the analogy between, say, searching a given problem space for an optimal solution and the natural process of foraging for food. Such analogies have led to useful solutions in areas such as optimisation, prominent examples being ant colony systems and particle swarm optimisation. However, these solutions often rely on well-defined fitness landscapes that are not always available in more general search scenarios. This paper surveys a wide variety of behaviours observed within the natural world, and aims to highlight general cooperative group behaviours, search strategies and communication methods that might be useful within a wider computing context, beyond optimisation, where information from the fitness landscape may be sparse, but where new search paradigms could be developed that capitalise on research into biological systems that have developed over millennia within the natural world.

Journal ArticleDOI
TL;DR: It is shown that the niching framework can overcome some degeneracy in the search space and obtain different conceptual designs using problem-specific diversity measurements; the derandomized variants are compared using the maximum-peak-ratio (MPR) performance analysis tool.
Abstract: We introduce a framework of derandomized evolution strategies (ES) niching techniques. A survey of these techniques, covering five variants of derandomized ES, is presented, based on the fixed niche radius approach. The core mechanisms range from the very first derandomized approach to self-adaptation of ES to the sophisticated Covariance Matrix Adaptation (CMA). They are applied to artificial as well as real-world multimodal continuous landscapes, of different levels of difficulty and various dimensions, and compared with the maximum-peak-ratio (MPR) performance analysis tool. While characterizing the performance of the different derandomized variants in the context of niching, some conclusions concerning the niching formation process of the different mechanisms are drawn, and the hypothesis of a trade-off between learning time and niching acceleration is numerically confirmed. Niching with a (1 + λ)-CMA core mechanism is shown to experimentally outperform all the other variants, especially on the real-world problem. Some theoretical arguments supporting the advantage of a plus-strategy for niching are discussed. For the real-world application at hand, taken from the field of Quantum Control, we show that the niching framework can overcome some degeneracy in the search space, and obtain different conceptual designs using problem-specific diversity measurements.
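The fixed-niche-radius idea underlying the framework can be sketched as a greedy leader assignment: points are visited in order of fitness, and each one either joins the first existing niche whose leader lies within the radius rho or founds a new niche. This is a generic illustration only; the paper couples niching with derandomized ES and CMA core mechanisms that are not reproduced here.

    # Minimal fixed-niche-radius sketch: greedily pick niche leaders among the
    # current search points, assigning each point to the first leader within rho.
    import math

    def assign_niches(points, fitness, rho):
        order = sorted(range(len(points)), key=lambda i: fitness[i], reverse=True)
        leaders, niche_of = [], {}
        for i in order:
            for L in leaders:
                if math.dist(points[i], points[L]) <= rho:
                    niche_of[i] = L
                    break
            else:
                leaders.append(i)            # new niche leader
                niche_of[i] = i
        return leaders, niche_of

    pts = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0)]
    fit = [1.0, 0.9, 0.8]
    print(assign_niches(pts, fit, rho=0.5))  # two niches, led by points 0 and 2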

Journal ArticleDOI
TL;DR: A mathematical model of an important photosynthetic phenomenon, called Non Photochemical Quenching (shortly NPQ), that determines the plant's accommodation to environmental light is defined, and a Metabolic P system is deduced which provides, in a specific simplified case, the regulation mechanism underlying the NPQ process.
Abstract: Photosynthesis is the process used by plants, algae and some bacteria to obtain biochemical energy from sunlight. It is the most important process allowing life on earth. In this work, by applying the Log Gain theory of Metabolic P Systems, we define a mathematical model of an important photosynthetic phenomenon, called Non Photochemical Quenching (shortly NPQ), that determines the plant's accommodation to environmental light. Starting from experimental data of this phenomenon, we are able to deduce a Metabolic P system which provides, in a specific simplified case, the regulation mechanism underlying the NPQ process. The dynamics of our model, generated by suitable computational tools, reproduce, with a very good approximation, the observed behaviour of the natural system.

Journal ArticleDOI
TL;DR: This paper shows how a family of recognizer tissue P systems with symport/antiport rules which solves a decision problem can be efficiently simulated by a family of basic recognizer P systems solving the same problem.
Abstract: In the framework of P systems, it is known that the construction of an exponential number of objects in polynomial time is not enough to efficiently solve NP-complete problems. Nonetheless, it could be sufficient to create an exponential number of membranes in polynomial time. Working with P systems whose membrane structure does not increase in size, it is known that it is not possible to solve computationally hard problems (unless P = NP), basically due to the impossibility of constructing an exponential number of membranes, in polynomial time, using only evolution, communication and dissolution rules. In this paper we show how a family of recognizer tissue P systems with symport/antiport rules which solves a decision problem can be efficiently simulated by a family of basic recognizer P systems solving the same problem. This simulation allows us to transfer the result about the limitations in computational power from the model of basic cell-like P systems to this kind of tissue-like P systems.

Journal ArticleDOI
TL;DR: Polarizationless P systems with active membranes working in maximally parallel manner are shown to be able to solve the PSPACE-complete problem Quantified 3-sat, provided that non-elementary membrane division is controlled by the presence of a (possibly non-elementary) membrane.
Abstract: We investigate polarizationless P systems with active membranes working in maximally parallel manner, which do not make use of evolution or communication rules, in order to find which features are sufficient to efficiently solve computationally hard problems. We show that such systems are able to solve the PSPACE-complete problem Quantified 3-sat, provided that non-elementary membrane division is controlled by the presence of a (possibly non-elementary) membrane.

Journal ArticleDOI
TL;DR: It is shown that a (minimal) deterministic finite cover automaton (DFCA) provides the right approximation for the computation of a P system.
Abstract: In this paper, we propose an approach to P system testing based on finite state machine conformance techniques. Of the many variants of P systems that have been defined, we consider cell-like P systems which use non-cooperative transformation and communication rules. We show that a (minimal) deterministic finite cover automaton (DFCA) (a finite automaton that accepts all words in a given finite language, but can also accept words that are longer than any word in the language) provides the right approximation for the computation of a P system. Furthermore, we provide a procedure for generating test sets directly from the P system specification (without explicitly constructing the minimal DFCA model).

Journal ArticleDOI
TL;DR: The rules that govern the transformations to transform communicating XMs into tPS are described, an example is presented to demonstrate the feasibility of this approach and ways to extend it to more general models, such as population P systems, which involve dynamic structures.
Abstract: Tissue P systems (tPS) represent a class of P systems in which cells are arranged in a graph rather than a hierarchical structure. On the other hand, communicating X-machines (XMs) are state-based machines, extended with a memory structure and transition functions instead of simple inputs, which communicate via message passing. One could use communicating XMs to create models built out of components in a rather intuitive way. There are investigations showing how various classes of P systems can be modelled as communicating XMs. In this paper, we define a set of principles to transform communicating XMs into tPS. We describe the rules that govern such transformations, present an example to demonstrate the feasibility of this approach and discuss ways to extend it to more general models, such as population P systems, which involve dynamic structures.

Journal ArticleDOI
TL;DR: In this article, negative selection and the associated r-contiguous matching rule are investigated from a pattern classification perspective, which includes insights into the generalization capability of negative selection and the computational complexity of finding r-contiguous detectors.
Abstract: Negative selection and the associated r-contiguous matching rule is a popular immune-inspired method for anomaly detection problems. In recent years, however, problems such as scalability and high false positive rate have been empirically noticed. In this article, negative selection and the associated r-contiguous matching rule are investigated from a pattern classification perspective. This includes insights in the generalization capability of negative selection and the computational complexity of finding r-contiguous detectors.
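For readers unfamiliar with it, the r-contiguous rule is easy to state and implement: a detector matches a string of the same length if the two agree in at least r consecutive positions. A minimal sketch (a binary alphabet is assumed for the example):

    # r-contiguous matching rule: a detector matches a string of equal length if
    # the two agree in at least r contiguous positions.
    def r_contiguous_match(detector, string, r):
        run = 0
        for a, b in zip(detector, string):
            run = run + 1 if a == b else 0
            if run >= r:
                return True
        return False

    print(r_contiguous_match("10110", "00111", 3))  # True: the middle three positions agree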

Journal ArticleDOI
TL;DR: This work intends to demonstrate how a simple concept of a physical field can be adopted to build a complete framework for supervised and unsupervised learning methodology.
Abstract: Despite recent successes and advancements in artificial intelligence and machine learning, this domain remains under continuous challenge and guidance from phenomena and processes observed in the natural world. Humans remain unsurpassed in their efficiency of dealing with and learning from uncertain information coming in a variety of forms, whereas more and more robust learning and optimisation algorithms have their analytical engine built on the basis of some nature-inspired phenomena. The excellence of neural networks and kernel-based learning methods, and the emergence of particle-, swarm-, and social-behaviour-based optimisation methods, are just a few of many facts indicating a trend towards greater exploitation of nature-inspired models and systems. This work intends to demonstrate how a simple concept of a physical field can be adopted to build a complete framework for supervised and unsupervised learning methodology. An inspiration for artificial learning has been found in the mechanics of physical fields found at both micro and macro scales. Exploiting the analogies between data and charged particles subjected to gravity, electrostatic and gas particle fields, a family of new algorithms has been developed and applied to classification, clustering and data condensation, while properties of the field were further used in a unique visualisation of classification and classifier fusion models. The paper covers extensive pictorial examples and visual interpretations of the presented techniques along with some comparative testing over well-known real and artificial datasets.
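To give a feel for the field analogy in the classification setting, the toy sketch below lets every training point exert a gravity-like 1/d^2 pull for its class and assigns a test point to the class with the strongest aggregate field. All names and the softening constant are illustrative; the paper's framework covers several field types and tasks beyond this.

    # Toy sketch of the gravity-field analogy for classification: each training
    # point contributes a 1/d^2 "pull" for its class; the test point takes the
    # class with the strongest aggregate field. eps avoids division by zero.
    from collections import defaultdict

    def field_classify(train, test_point, eps=1e-9):
        pull = defaultdict(float)
        for x, label in train:
            d2 = sum((a - b) ** 2 for a, b in zip(x, test_point))
            pull[label] += 1.0 / (d2 + eps)
        return max(pull, key=pull.get)

    train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((3.0, 3.0), "B")]
    print(field_classify(train, (0.5, 0.5)))   # "A": the nearby points dominate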

Journal ArticleDOI
TL;DR: This paper uses evolutionary computational techniques to determine the minimal set of training instances needed to achieve good classification accuracy with an instance-based learner, and introduces the Evolutionary General Regression Neural Network, which uses an estimation of distribution algorithm to generate the optimal training set as well as the optimal feature set for a general regression neural network.
Abstract: In this paper, we present an approach to overcome the scalability issues associated with instance-based learners. Our system uses evolutionary computational techniques to determine the minimal set of training instances needed to achieve good classification accuracy with an instance-based learner. In this way, instance-based learners need not store all the training data available but instead store only those instances that are required for the desired accuracy. Additionally, we explore the utility of evolving the optimal feature set used by the learner for a given problem. In this way, we attempt to deal with the so-called "curse of dimensionality" associated with computational learning systems. To these ends, we introduce the Evolutionary General Regression Neural Network. This design uses an estimation of distribution algorithm to generate both the optimal training set as well as the optimal feature set for a general regression neural network. We compare its performance against a standard general regression neural network and an optimized support vector machine across four benchmark classification problems.
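The value of evolving a minimal training set becomes clear from how a general regression neural network predicts: every stored instance enters a kernel-weighted average (Nadaraya-Watson style), so pruning instances directly shrinks the stored model and the per-query cost. A minimal prediction sketch, with an illustrative sigma:

    # Minimal GRNN prediction sketch: a kernel-weighted average over the stored
    # training instances; shrinking the stored set shrinks this sum directly.
    import math

    def grnn_predict(train_x, train_y, x, sigma=1.0):
        weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
                   for xi in train_x]
        return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

    X = [(0.0,), (1.0,), (2.0,)]
    Y = [0.0, 1.0, 4.0]
    print(grnn_predict(X, Y, (1.5,), sigma=0.5))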

Journal ArticleDOI
TL;DR: This paper describes a new LCS agent that has a simpler and more transparent performance mechanism: the structure of a predictive LCS model is used, the evolutionary mechanism is stripped out, the reinforcement learning procedure is simplified, and the agent is equipped with the ability of Associative Perception, adopted from psychology.
Abstract: Maze problems represent a simplified virtual model of the real environment and can be used for developing core algorithms of many real-world applications related to the problem of navigation. Learning Classifier Systems (LCS) are the most widely used class of algorithms for reinforcement learning in mazes. However, LCSs' best achievements in maze problems are still mostly bounded to non-aliasing environments, while LCS complexity seems to obstruct a proper analysis of the reasons for failure. Moreover, there is a lack of knowledge of what makes a maze problem hard to solve by a learning agent. To overcome this restriction we try to improve our understanding of the nature and structure of maze environments. In this paper we describe a new LCS agent that has a simpler and more transparent performance mechanism. We use the structure of a predictive LCS model, strip out the evolutionary mechanism, simplify the reinforcement learning procedure and equip the agent with the ability of Associative Perception, adopted from psychology. We then assess the new LCS with Associative Perception on an extensive set of mazes and analyse the results to discover which features of the environments play the most significant role in the learning process. We identify a particularly hard feature for learning in mazes, aliasing clones, which arise when groups of aliasing cells occur in similar patterns in different parts of the maze. We discuss the impact of aliasing clones and other types of aliasing on learning algorithms.