
Showing papers in "Applied Artificial Intelligence in 2003"


Journal ArticleDOI
TL;DR: This analysis indicates that missing data imputation based on the k-nearest neighbor algorithm can outperform the internal methods used by C4.5 and CN2 to treat missing data, and can also outperform the mean or mode imputation method, which is broadly used to treat missing values.
Abstract: One relevant problem in data quality is missing data. Despite the frequent occurrence and the relevance of the missing data problem, many machine learning algorithms handle missing data in a rather naive way. However, missing data should be treated carefully; otherwise, bias might be introduced into the induced knowledge. In this work, we analyze the use of the k-nearest neighbor algorithm as an imputation method. Imputation is a term that denotes a procedure that replaces the missing values in a data set with some plausible values. One advantage of this approach is that the missing data treatment is independent of the learning algorithm used. This allows the user to select the most suitable imputation method for each situation. Our analysis indicates that missing data imputation based on the k-nearest neighbor algorithm can outperform the internal methods used by C4.5 and CN2 to treat missing data, and can also outperform the mean or mode imputation method, which is a method broadly used to treat missing ...
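A minimal sketch of k-nearest-neighbor imputation of the kind evaluated in the paper, using scikit-learn's KNNImputer; the toy matrix and the choice of k are illustrative, not taken from the study:

    import numpy as np
    from sklearn.impute import KNNImputer

    # Toy feature matrix with missing entries marked as np.nan (values are made up).
    X = np.array([
        [1.0, 2.0, np.nan],
        [1.1, np.nan, 0.9],
        [5.0, 6.2, 4.8],
        [4.9, 6.0, 4.7],
        [1.2, 2.1, 1.0],
    ])

    # Each missing value is replaced by the (distance-weighted) average of that
    # feature over the k most similar rows, measured on the observed features.
    imputer = KNNImputer(n_neighbors=2, weights="distance")
    X_filled = imputer.fit_transform(X)
    print(X_filled)

Because the imputation happens before learning, the completed data can be fed to any inducer (C4.5, CN2, or anything else), which is the independence property the abstract emphasizes.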

743 citations


Journal ArticleDOI
TL;DR: The importance of data preparation in data analysis is shown, some research achievements in the area of data preparation are introduced, and some future directions of research and development are suggested.
Abstract: Data preparation is a fundamental stage of data analysis. While a lot of low-quality information is available in various data sources and on the Web, many organizations or companies are interested in how to transform the data into cleaned forms which can be used for high-profit purposes. This goal generates an urgent need for data analysis aimed at cleaning the raw data. In this paper, we first show the importance of data preparation in data analysis, then introduce some research achievements in the area of data preparation. Finally, we suggest some future directions of research and development.

396 citations


Journal ArticleDOI
TL;DR: INTRIGUE is a prototype tourist-information server that presents information about the area around Turin, Italy, on desktop and handheld devices; it relies on user modeling and adaptive hypermedia techniques, with XML-based technologies supporting the generation of the user interface and its adaptation to Web browsers and WAP minibrowsers.
Abstract: This paper presents INTRIGUE, a prototype tourist-information server that presents information about the area around Turin, Italy, on desktop and handheld devices. This system recommends sightseeing destinations and itineraries by taking into account the preferences of heterogeneous tourist groups (such as families with children or the elderly) and explains the recommendations by addressing the group members' requirements. Moreover, the system provides an interactive agenda for scheduling the tour. The services offered by INTRIGUE rely on user modeling and adaptive hypermedia techniques; furthermore, XML-based technologies support the generation of the user interface and its adaptation to Web browsers and WAP minibrowsers.

394 citations


Journal ArticleDOI
TL;DR: A Selective Bayesian classifier is described that uses only those features C4.5 would use in its decision tree when learning from a small sample of the training set, combining the different natures of the two classifiers.
Abstract: It is known that the Naive Bayesian classifier (NB) works very well on some domains, and poorly on others. The performance of NB suffers in domains that involve correlated features. C4.5 decision trees, on the other hand, typically perform better than the Naive Bayesian algorithm on such domains. This paper describes a Selective Bayesian classifier (SBC) that simply uses only those features that C4.5 would use in its decision tree when learning from a small sample of the training set, combining the different natures of the two classifiers. Experiments conducted on ten data sets indicate that SBC performs markedly better than NB on all domains, and SBC outperforms C4.5 on many of the data sets on which C4.5 outperforms NB. An Augmented Bayesian classifier (ABC) is also tested on the same data, and SBC appears to perform as well as ABC. SBC can also eliminate, in most cases, more than half of the original attributes, which can greatly reduce the size of the training and test data as well as the running time. Further, the SBC...
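A rough sketch of the SBC idea, assuming the "features C4.5 would use" are approximated by the attributes a scikit-learn decision tree (CART rather than C4.5) actually splits on when trained on half of the training data; the data set and parameters are illustrative:

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Grow a tree on a small sample and keep only the attributes it splits on.
    idx = np.random.RandomState(0).choice(len(X_tr), size=len(X_tr) // 2, replace=False)
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
    selected = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])

    # Naive Bayes restricted to the tree-selected features.
    sbc = GaussianNB().fit(X_tr[:, selected], y_tr)
    print("features kept:", len(selected),
          "test accuracy:", round(sbc.score(X_te[:, selected], y_te), 3))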

120 citations


Journal ArticleDOI
TL;DR: A document discovery tool based on Conceptual Clustering by Formal Concept Analysis that allows users to navigate e-mail using a visual lattice metaphor rather than a tree to aid knowledge discovery in document collections.
Abstract: This paper discusses a document discovery tool based on Conceptual Clustering by Formal Concept Analysis. The program allows users to navigate e-mail using a visual lattice metaphor rather than a tree. It implements a virtual file structure over e-mail where files and entire directories can appear in multiple positions. The content and shape of the lattice formed by the conceptual ontology can assist in e-mail discovery. The system described provides more flexibility in retrieving stored e-mails than what is normally available in e-mail clients. The paper discusses how conceptual ontologies can leverage traditional document retrieval systems and aid knowledge discovery in document collections.
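A toy illustration of the Formal Concept Analysis machinery behind such a lattice: computing the formal concepts (extent/intent pairs) of a small object-attribute context. The e-mail/keyword context below is invented for illustration, and the brute-force enumeration is only sensible for tiny contexts:

    from itertools import combinations

    # Toy context: e-mails (objects) versus keywords (attributes).
    context = {
        "mail1": {"project", "budget"},
        "mail2": {"project", "meeting"},
        "mail3": {"budget", "meeting"},
        "mail4": {"project", "budget", "meeting"},
    }
    attributes = set().union(*context.values())

    def extent(intent_set):
        # Objects that have every attribute of the intent.
        return {o for o, attrs in context.items() if intent_set <= attrs}

    def intent(objects):
        # Attributes shared by every object of the extent.
        return set.intersection(*(context[o] for o in objects)) if objects else set(attributes)

    concepts = set()
    for r in range(len(attributes) + 1):
        for subset in combinations(sorted(attributes), r):
            e = extent(set(subset))
            concepts.add((frozenset(e), frozenset(intent(e))))

    for e, i in sorted(concepts, key=lambda c: -len(c[0])):
        print(sorted(e), "<->", sorted(i))

Each concept is one node of the lattice the user navigates, so a mail (or a whole "directory" of mails) naturally appears under every intent it satisfies.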

91 citations


Journal ArticleDOI
TL;DR: This paper proposes an application of Genetic Programming and Artificial Neural Networks in hydrology, showing how these two techniques can work together to solve a problem, namely for modeling the effect of rain on the runoff flow in a typical urban basin.
Abstract: This paper proposes an application of Genetic Programming (GP) and Artificial Neural Networks (ANN) in hydrology, showing how these two techniques can work together to solve a problem, namely for modeling the effect of rain on the runoff flow in a typical urban basin. The ultimate goal of this research is to design a real-time alarm system to warn of floods or subsidence in various types of urban basins. Results look promising and appear to offer some improvement for analyzing river basin systems over stochastic methods such as unitary hydrographs.

83 citations


Journal ArticleDOI
TL;DR: The quality of the network improves with interactions and the quality is maximized when both expertise and sociability are considered, reflecting the intuition that you need to talk to people outside your close circle to get the best information.
Abstract: We consider a social network of software agents who assist each other in helping their users find information. Unlike in most previous approaches, our architecture is fully distributed and includes agents who preserve the privacy and autonomy of their users. These agents learn models of each other in terms of expertise (ability to produce correct domain answers) and sociability (ability to produce accurate referrals). We study our framework experimentally to see how the social network evolves. Specifically, we find that under our multi-agent learning heuristic, the quality of the network improves with interactions: the quality is maximized when both expertise and sociability are considered; pivot agents further improve the quality of the network and have a catalytic effect on its quality even if they are ultimately removed. Moreover, the quality of the network improves when clustering decreases, reflecting the intuition that you need to talk to people outside your close circle to get the best information.
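A minimal sketch of how one agent might maintain expertise and sociability estimates for an acquaintance from query outcomes; the class names, scores in [0, 1], and exponential update rule are assumptions for illustration, not the authors' learning heuristic:

    class AcquaintanceModel:
        """One agent's model of another agent."""
        def __init__(self, alpha=0.2):
            self.expertise = 0.5    # ability to produce correct domain answers
            self.sociability = 0.5  # ability to produce accurate referrals
            self.alpha = alpha      # learning rate of the exponential update

        def observe_answer(self, was_correct):
            self.expertise += self.alpha * (float(was_correct) - self.expertise)

        def observe_referral(self, led_to_good_answer):
            self.sociability += self.alpha * (float(led_to_good_answer) - self.sociability)

    def neighbor_score(model, w_expertise=0.7):
        # Queries and referrals go to acquaintances that rank high on a mix of both
        # scores; weighting expertise alone ignores who gives good referrals.
        return w_expertise * model.expertise + (1 - w_expertise) * model.sociability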

57 citations


Journal ArticleDOI
TL;DR: A system developed for adaptive retrieval and the filtering of documents belonging to digital libraries available on the Web, called InfoWeb, is currently in operation on the ENEA digital library Web site reserved to the cultural heritage and environment domain.
Abstract: This paper presents a system developed for adaptive retrieval and filtering of documents belonging to digital libraries available on the Web. This system, called InfoWeb, is currently in operation on the ENEA (National Entity for Alternative Energy) digital library Web site devoted to the cultural heritage and environment domain. InfoWeb records the user's information needs in a user model, created through a representation which extends the traditional vector space model and takes the form of a semantic network consisting of co-occurrences between index terms. The initial user model is built on the basis of stereotypes, developed through a clustering of the collection using specific documents as a starting point. The user's query can be expanded in an adaptive way, using the user model formulated by the user himself. The system has been tested on the entire collection, comprising about 14,000 documents in HTML/text format. The results of the experiments are satisfactory both in terms of performance ...
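A sketch of the kind of co-occurrence-driven query expansion such a user model supports; the tiny corpus, the raw co-occurrence weighting, and the cutoff k are illustrative choices, not InfoWeb's:

    import numpy as np

    docs = [
        "renewable energy and environment",
        "cultural heritage preservation",
        "solar energy for heritage sites",
    ]
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}

    # Term-document incidence, then term-term co-occurrence counts.
    td = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in set(d.split()):
            td[index[w], j] = 1.0
    cooc = td @ td.T

    def expand(query_terms, k=2):
        """Add, for each query term, its k strongest co-occurring terms."""
        expanded = set(query_terms)
        for t in query_terms:
            row = cooc[index[t]].copy()
            row[index[t]] = 0            # ignore self co-occurrence
            for i in row.argsort()[::-1][:k]:
                if row[i] > 0:
                    expanded.add(vocab[i])
        return expanded

    print(expand(["energy"]))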

50 citations


Journal ArticleDOI
TL;DR: An attempt is made to approach educational agents from a pedagogical view, based on a model of learning and teaching that tries to keep its distance from individual "schools" of learning theories; the paper concludes that there remain many more new educational agents to invent.
Abstract: Much research and development work is being undertaken on educational agents. Each project, it seems, starts out from its own educational needs and theoretical bases. In this paper, an attempt is made to approach educational agents from a pedagogical view. Based on a model of learning and teaching that tries to keep its distance from individual "schools" of learning theories, it looks at educational agents as members of the faculty of a global virtual university: What is their role in the learning process and how do they fulfill it? The paper suggests a basic distinction between pedagogic roles for agents: "simulated" vs. "emergent" roles, where the first are modeled on human educators of any kind, while the second are new and specific to virtual learning settings. We conclude that there remain many more new educational agents to invent.

44 citations


Journal ArticleDOI
TL;DR: This paper discusses how the management system TOSCANA for conceptual information systems supports the goals of CKDD, and presents a new tool for conceptual deviation discovery, CHIANTI.
Abstract: In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) as it is developing in the field of Conceptual Knowledge Processing. Conceptual Knowledge Processing is based on the mathematical theory of Formal Concept Analysis, which has become a successful theory for data analysis during the last two decades. CKDD aims to support a human-centered process of discovering knowledge from data by visualizing and analyzing the conceptual structure of the data. Basic to the philosophy of CKDD is the idea that only the human analyst is able to make meaningful decisions, and that the analysis tools should not oversimplify the data, but help the analyst to deal with the complex situation. We discuss how the management system TOSCANA for conceptual information systems supports those goals of CKDD, and illustrate it by two applications in database marketing and flight movement analysis. Finally, we present a new tool for conceptual deviation discovery, CHIANTI.

37 citations


Journal ArticleDOI
TL;DR: This work proposes a new method to deal with missing values in data sets where cluster properties exist among the data records; by integrating clustering and regression techniques, it can predict the missing values with higher accuracy.
Abstract: Data pre-processing is a critical task in the knowledge discovery process in order to ensure the quality of the data to be analyzed. One widely studied problem in data pre-processing is the handling of missing values, with the aim of recovering the original values. Based on numerous studies on missing values, it is shown that different methods are needed for different types of missing data. In this work, we propose a new method to deal with missing values in data sets where cluster properties exist among the data records. By integrating clustering and regression techniques, the proposed method can predict the missing values with higher accuracy. To the best of our knowledge, this is the first work combining regression and clustering analysis to deal with the missing values problem. Through empirical evaluation, the proposed method was shown to perform better than other methods under different types of data sets.
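A minimal sketch of the cluster-then-regress idea under simplifying assumptions (one numeric column contains the gaps, the remaining columns are complete, and k-means with a fixed k stands in for whatever clustering the paper uses); this is not the authors' exact procedure:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                                   # complete columns
    y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)     # column with gaps
    missing = rng.random(200) < 0.2
    y_obs = np.where(missing, np.nan, y)

    # 1) Cluster the records on the complete columns.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # 2) Fit one regression per cluster on its complete records, then fill the gaps.
    y_filled = y_obs.copy()
    for c in np.unique(labels):
        in_c = labels == c
        fill, known = in_c & missing, in_c & ~missing
        if fill.any():
            reg = LinearRegression().fit(X[known], y_obs[known])
            y_filled[fill] = reg.predict(X[fill])

    print("mean absolute error of imputed values:",
          round(float(np.abs(y_filled[missing] - y[missing]).mean()), 4))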

Journal ArticleDOI
TL;DR: A new way of coding is proposed, where a single gene is constructed with the position and speed of the ship, as well as the factors of tide, wind, and wave, which shows that this new coding is a more effective way to search for the optimum and safe path.
Abstract: A genetic algorithm is applied to solving the problem of collision avoidance in ship navigation. When utilizing this method, the first step is to code the solution of the problem as a finite-length string. For the studied subject, in the conventional coding, the position (longitude and latitude) of the ship is chosen as the genes of a chromosome, which is not especially good for checking against the experimental values. In this paper, a new way of coding is proposed, where a single gene is constructed with the position and speed of the ship, as well as the factors of tide, wind, and wave. The coding is then verified with both an automatic collision avoidance simulation system developed in VC++ and a real ship. The experimental results show that this new coding is a more effective way to search for the optimum and safe path.
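A sketch of what such a gene and chromosome could look like; the field names, value ranges, and the mutation operator are illustrative placeholders, not the paper's encoding:

    import random
    from dataclasses import dataclass

    @dataclass
    class Gene:
        # One leg of the candidate path: where the ship is, how fast it sails,
        # and the local environmental factors acting on that leg.
        lon: float
        lat: float
        speed: float
        tide: float
        wind: float
        wave: float

    def random_gene():
        return Gene(lon=random.uniform(120.0, 121.0), lat=random.uniform(31.0, 32.0),
                    speed=random.uniform(5.0, 20.0), tide=random.random(),
                    wind=random.random(), wave=random.random())

    def random_chromosome(n_legs=10):
        # A chromosome is a sequence of genes, i.e., a candidate avoidance path.
        return [random_gene() for _ in range(n_legs)]

    def mutate(chromosome, rate=0.1):
        for g in chromosome:
            if random.random() < rate:
                g.speed = min(20.0, max(5.0, g.speed + random.gauss(0.0, 1.0)))
        return chromosome

A fitness function would then score each chromosome on path length, rule compliance, and clearance from other ships under the encoded tide, wind, and wave conditions.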

Journal ArticleDOI
TL;DR: This work presents the application of a multistrategy approach to some document processing tasks in an enhanced version of the incremental learning system INTHELEX, embedded in the system architecture of the EU project COLLATE.
Abstract: This work presents the application of a multistrategy approach to some document processing tasks. The application is implemented in an enhanced version of the incremental learning system INTHELEX. This learning module has been embedded as a learning component in the system architecture of the EU project COLLATE, which deals with the annotation of cultural heritage documents. Indeed, the complex shape of the material handled in the project has suggested that the addition of multistrategy capabilities is needed to improve the effectiveness and efficiency of the learning process. Results proving the benefits of these strategies in specific classification tasks are reported in the experimentation presented in this work.

Journal ArticleDOI
TL;DR: This work reports how two tools structure phenotypes/genotypes in behavior genetics using a unique instance set (patients) to explore the matching between two sets of features: phenotypes and genetic causes.
Abstract: The Galois lattice of a binary relation formalizes it as a concept system, dually ordered in "extension"/"intension." All implications between conjunctions of properties holding in it are summarized by a canonical basis, all bases having the same cardinality. We report how these tools structure phenotypes/genotypes in behavior genetics. The first study, on phenotypes of laterality, has a unique set of features and two sets of instances (left-/right-handers) for which the corresponding sets of rules are compared, while the second study, on partial trisomy 21, uses a unique instance set (patients) to explore the matching between two sets of features: phenotypes and genetic causes. Hence, both situations comprise two binary data sets that are paired through either a column or a row matching, which raises specific questions. Although the data are small compared with databases in bioinformatics, this illustrates how these abstract tools can unfold better interpretations.

Journal ArticleDOI
TL;DR: A Web data mining and cleaning strategy for information gathering is proposed: a data-mining model is presented for data that come from multiple agents, and a data-cleaning algorithm is presented to eliminate irrelevant data.
Abstract: While the Internet and World Wide Web have put a huge volume of low-quality information within easy reach of information gathering systems, filtering out irrelevant information has become a big challenge. In this paper, a Web data mining and cleaning strategy for information gathering is proposed. A data-mining model is presented for the data that come from multiple agents. Using the model, a data-cleaning algorithm is then presented to eliminate irrelevant data. To evaluate the data-cleaning strategy, an interpretation is given for the mining model according to evidence theory. An experiment is also conducted to evaluate the strategy using Web data. The experimental results show that the proposed strategy is efficient and promising.
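A small illustration of the evidence-theory ingredient mentioned above: Dempster's rule for combining two sources' basic probability assignments over whether a retrieved item is relevant. The frame, the masses, and the two-source setting are invented for illustration; they are not the paper's model:

    from itertools import product

    # Basic probability assignments over the frame {relevant, irrelevant};
    # the full frame carries the "unknown" mass. Values are made up.
    m1 = {frozenset({"relevant"}): 0.6,
          frozenset({"relevant", "irrelevant"}): 0.4}
    m2 = {frozenset({"relevant"}): 0.5,
          frozenset({"irrelevant"}): 0.2,
          frozenset({"relevant", "irrelevant"}): 0.3}

    def combine(ma, mb):
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(ma.items(), mb.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
        # Dempster's rule: renormalize by the non-conflicting mass.
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    for hypothesis, mass in combine(m1, m2).items():
        print(set(hypothesis), round(mass, 3))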

Journal ArticleDOI
TL;DR: A text-mining framework is proposed in which subsystems of a classification system are treated as constituents of a knowledge discovery process for text corpora, and whether there exists a synergic relation between systems for classification and those for summarization by way of composing those subsystems is explored.
Abstract: In view of the exponential growth of online document corpora, even perfect retrieval will fetch too much material for a user to cope with. One way to reduce this problem is automatic domain-specific summarization tailored to user's needs, which is a kind of high-level data cleaning. This requires some method of discovering classes of similar items that may be grouped into predetermined domains. We explore whether there exists a synergic relation between systems for classification and those for summarization by way of composing those subsystems. In other words, we examine whether prior summarization will increase the performance of the classifier system and vice versa. In both cases, the answer is affirmative, as we show in this paper. We propose a text-mining framework in which these subsystems are treated as constituents of a knowledge discovery process for text corpora.

Journal ArticleDOI
TL;DR: A hybrid framework for identifying patterns from databases or multi-databases that provides a highly flexible and robust data-mining platform; the resulting systems demonstrate emergent behaviors, although the framework does not improve the performance of individual KDD techniques.
Abstract: While knowledge discovery in databases (KDD) is defined as an iterative sequence of the following steps: data pre-processing, data mining, and post data mining, a significant amount of research in data mining has been done, resulting in a variety of algorithms and techniques for each step. However, a single data-mining technique has not been proven appropriate for every domain and data set. Instead, several techniques may need to be integrated into hybrid systems and used cooperatively during a particular data-mining operation. That is, hybrid solutions are crucial for the success of data mining. This paper presents a hybrid framework for identifying patterns from databases or multi-databases. The framework integrates these techniques for mining tasks from an agent point of view. Based on the experiments conducted, putting different KDD techniques together into the agent-based architecture enables them to be used cooperatively when needed. The proposed framework provides a highly flexible and robust data-mining platform, and the resulting systems demonstrate emergent behaviors, although the framework does not improve the performance of individual KDD techniques.

Journal ArticleDOI
TL;DR: This paper describes an agent-based system implementing an original consumer-based methodology for product penetration strategy selection in real-world situations; it has elementary agents based on a generic reusable architecture and complex agents considered as an agent organization created dynamically in a hierarchical way.
Abstract: This paper describes an agent-based system implementing an original consumer-based methodology for product penetration strategy selection in real-world situations. Agents are simultaneously considered according to two different levels: a functional and a structural level. In the functional level, we have three types of agents: task agents, information agents, and interface agents, responsible respectively for task fulfillment through cooperation, information gathering, and mediation between users and artificial agents. In the structural level, we have elementary agents based on a generic reusable architecture and complex agents considered as an agent organization created dynamically in a hierarchical way.

Journal ArticleDOI
TL;DR: Simulation results show that the neural network controller can act as an experienced pilot and guide the aircraft to a safe landing in severe wind disturbance environments without using the gain scheduling technique.
Abstract: This paper presents an intelligent automatic landing system that uses a time delay neural network controller and a linearized inverse aircraft model to improve the performance of conventional automatic landing systems. The automatic landing system of an airplane is enabled only under limited conditions. If severe wind disturbances are encountered, the pilot must handle the aircraft due to the limits of the automatic landing system. In this study, a learning-through-time process is used in the controller training. Simulation results show that the neural network controller can act as an experienced pilot and guide the aircraft to a safe landing in severe wind disturbance environments without using the gain scheduling technique.
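A sketch of the tapped-delay-line input that makes such a controller "time delay": the network sees the current and the d most recent samples of each signal. The toy signal, the imitation target, and the scikit-learn regressor below are stand-ins, not the paper's aircraft model or training scheme:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def tapped_delay(signal, delays=3):
        """Stack [x(t), x(t-1), ..., x(t-delays)] as one input row per time step."""
        rows = [signal[delays - d: len(signal) - d] for d in range(delays + 1)]
        return np.column_stack(rows)

    t = np.arange(500) * 0.02
    rng = np.random.default_rng(0)
    glide_slope_error = np.sin(t) + 0.1 * rng.normal(size=t.size)   # toy tracking error
    pilot_command = 0.8 * glide_slope_error                         # toy target to imitate

    X = tapped_delay(glide_slope_error, delays=3)
    y = pilot_command[3:]
    controller = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0).fit(X, y)
    print("fit R^2:", round(controller.score(X, y), 3))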

Journal ArticleDOI
TL;DR: A specific model, called the cube model, has been designed so as to capture the extension of the natural set operators to a lattice on conjunctions of first order literals.
Abstract: The aim of this article is to describe a basic algebraic structure on conjunctions of literals. As far as knowledge representation is concerned, the comparison of different pieces of information is a pivotal question; generally, the classical set operators (inclusion, union, intersection, subtraction) are used at least as a metaphorical model. In many applications, the core problem is the representation of actual data or information for which the basic unit of knowledge to represent is a conjunction of properties (while traditionally, AI is devoted to solving models for which the basic unit is a disjunction of properties, i.e., clauses). A specific model, called the cube model, has been designed so as to capture the extension of the natural set operators to a lattice on conjunctions of first-order literals. This paper is organized as follows: after a description of the origin and the postulates of the model, i.e., a need for a formal structure for knowledge fusion, the cube model is described. Then applica...

Journal ArticleDOI
TL;DR: The proposed fuzzy information retrieval method is more intelligent and more flexible than existing methods because it can construct multi-relationship fuzzy concept networks automatically and can provide a contextual search capability that allows users to specify fuzzy contextual queries.
Abstract: Although the knowledge bases incorporated in existing information retrieval systems can enhance retrieval effectiveness, many of them are built by domain experts. It is obvious that the construction of such knowledge bases requires a large amount of human effort. In this paper, an intelligent fuzzy information retrieval system with an automatically constructed knowledge base is presented; the knowledge base is represented by a multi-relationship fuzzy concept network. The multi-relationship fuzzy concept network can describe four kinds of context-independent and context-dependent fuzzy relationships between concepts, i.e., the "fuzzy positive association," "fuzzy negative association," "fuzzy generalization," and "fuzzy specialization" relationships. The users of the fuzzy information retrieval system can submit a fuzzy contextual query which specifies the search context in the query formula. The fuzzy information retrieval system retrieves documents whose contents are rele...
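A tiny illustration of the fuzzy-relation machinery such a concept network rests on: association degrees between concepts stored as a matrix, with indirect relevance obtained by max-min composition. The concepts, degrees, and single relationship type are made up for illustration; the paper's network distinguishes four relationship types and handles context:

    import numpy as np

    concepts = ["solar", "energy", "heritage"]
    # Fuzzy association degrees between concepts (illustrative values).
    R = np.array([
        [1.0, 0.8, 0.1],
        [0.8, 1.0, 0.3],
        [0.1, 0.3, 1.0],
    ])

    def max_min_compose(A, B):
        # (A o B)[i, j] = max_k min(A[i, k], B[k, j])
        return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

    # Two-step (indirect) associations extend what a query term can reach.
    R2 = max_min_compose(R, R)
    query = np.array([1.0, 0.0, 0.0])                 # user asks about "solar"
    relevance = np.max(np.minimum(query[:, None], R2), axis=0)
    print(dict(zip(concepts, relevance.round(2))))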

Journal ArticleDOI
TL;DR: This work describes a method that combines a Bayesian feature selection approach with a clustering genetic algorithm to get classification rules in data-mining applications and the obtained results show that the proposed method is very promising.
Abstract: This work describes a method that combines a Bayesian feature selection approach with a clustering genetic algorithm to get classification rules in data-mining applications. A Bayesian network is generated from a data set and the Markov blanket of the class variable is applied to the feature subset selection task. The general rule extraction method is simple and consists of employing the clustering process in the examples of each class separately. In this way, clusters of similar examples are found for each class. These clusters can be viewed as subclasses and can, consequently, be modeled into logical rules. In this context, the problem of finding the optimal number of classification rules can be viewed as the problem of finding the best number of clusters. The Clustering Genetic Algorithm can find the best clustering in a data set, according to the Average Silhouette Width criterion, and it was applied to extract classification rules. The proposed methodology is illustrated by means of simulations in th...
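One piece of this pipeline sketched with standard tools: choosing the number of subclasses per class by the Average Silhouette Width. Plain k-means stands in here for the paper's Clustering Genetic Algorithm, and the data set and k range are illustrative:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.metrics import silhouette_score

    X, y = load_iris(return_X_y=True)

    def best_clustering(points, k_range=range(2, 6)):
        """Pick the k that maximizes the average silhouette width."""
        scored = []
        for k in k_range:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
            scored.append((silhouette_score(points, labels), k, labels))
        return max(scored, key=lambda s: s[0])

    # Cluster the examples of each class separately; each cluster becomes a
    # subclass that can later be described by a logical rule over the selected features.
    for cls in np.unique(y):
        score, k, _ = best_clustering(X[y == cls])
        print(f"class {cls}: {k} subclasses (silhouette {score:.2f})")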

Journal ArticleDOI
TL;DR: The WEDELMUSIC solution permits the creation of a virtuous mechanism which increases the amount of content owned by each mediateque and adds several new multimedia functionalities that can be built into the WEDELMUSIC objects, with limited effort, by using and integrating content of any format and media.
Abstract: Archives of mediateques, theaters, music schools, conservatories, universities, etc., are the most important sources of cultural heritage. Such institutions are interested in digitizing content to (i) improve the service towards their attendees, thus increasing the number of collections provided; (ii) add new multimedia and innovative functionalities to the service already provided with the available content; and (iii) save fragile materials otherwise subject to deterioration over time, such as tapes, disks, documents, etc. Publishers are interested in digitizing only content which guarantees a certain return on investment, and at the same time they are inclined to limit the distribution of the content located in archives, so as to control its exploitation and to preserve the content ownership. WEDELMUSIC has been defined to allow content providers to share and distribute "interactive music" while respecting the owners' rights. This permits the creation of a network for content-sharing mediateques that can...

Journal ArticleDOI
TL;DR: Experimental results demonstrate that application of this new data collecting technique can not only identify quality data, but can also efficiently reduce the amount of data that must be considered during mining.
Abstract: This paper presents a new means of selecting quality data for mining multiple data sources. Traditional data-mining strategies obtain necessary data from internal and external data sources and pool all the data into a huge homogeneous dataset for discovery. In contrast, our data-mining strategy identifies quality data from (internal and external) data sources for a mining task. A framework is advocated for generating quality data. Experimental results demonstrate that application of this new data collecting technique can not only identify quality data, but can also efficiently reduce the amount of data that must be considered during mining.

Journal ArticleDOI
TL;DR: An algorithm built to generate consistent information, quite similar to a CSP approach, is presented for a flood captured by aerial photographs and applied to the 1993 flood of the Aisne River in France.
Abstract: Spatial hydrology aims at defining hydrosystem behavior more precisely, by means of a detailed characterization of the environment (using remote sensing images, DEM, maps, GPS). This analysis is based on large quantities of localized data from various sources and which are, to different degrees, of a fuzzy and heterogeneous character. The management of the data set as a whole gives rise to specific AI problems (fusion, updating, consistency). After clarification of these issues, we apply our conclusions by using a detailed example on a flood captured by aerial photographs: we transform information derived from the images into water levels, for any point of the flood zone. In addition to the collection of the source data (water levels, structuring of the plan, and definition of the topological relations), we present an algorithm built to generate consistent information, which is quite similar to a CSP approach. Results are applied to the 1993 flood of the Aisne River in France.

Journal ArticleDOI
TL;DR: This paper focuses on the use of ANNs in the field of production scheduling, leading to a discussion of the various approaches, as well as the current research directions.
Abstract: In recent times, Artificial Neural Networks (ANNs) have been receiving increased attention as tools for business applications. In this paper, we focus on the use of ANNs in the field of production scheduling. The history of ANNs in production scheduling is outlined, leading to a discussion of the various approaches, as well as the current research directions. The paper concludes by sharing thoughts and estimations on the future prospects of ANNs in this area.

Journal ArticleDOI
TL;DR: This work is focused on determining provenance of travertine stones employed in the construction of some important monuments in Umbria (Italy) using two systems that use concepts and algorithms inherent to Artificial Intelligence: Kohonen self-organizing maps and fuzzy logic.
Abstract: This work is focused on determining the provenance of travertine stones employed in the construction of some important monuments in Umbria (Italy) using two systems that use concepts and algorithms inherent to Artificial Intelligence: Kohonen self-organizing maps and fuzzy logic. The two systems have been applied to travertine samples belonging to quarries known to have been sites of excavation since ancient times and to monuments. Tests on quarry samples show a good discriminative power of both methods in recognizing the exact provenance of most samples. The application of the systems to monument samples shows that most of the employed travertine stones were quarried from outcrops occurring in areas close to the towns where the monuments were erected. The results are in good agreement with historical data.
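A compact sketch of the Kohonen self-organizing map step used to group samples by composition; the grid size, learning schedule, and the random stand-in for the compositional data are placeholders, not the paper's settings:

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(size=(60, 5))       # stand-in for normalized element concentrations
    grid_w, grid_h, n_feat = 4, 4, samples.shape[1]
    weights = rng.normal(size=(grid_w * grid_h, n_feat))
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], dtype=float)

    for epoch in range(200):
        lr = 0.5 * np.exp(-epoch / 100)       # decaying learning rate
        radius = 2.0 * np.exp(-epoch / 100)   # decaying neighborhood radius
        for x in samples:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))        # best matching unit
            dist = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist / (2 * radius ** 2))                    # neighborhood function
            weights += lr * h[:, None] * (x - weights)

    # Samples mapping to the same (or neighboring) units are candidates for a
    # common provenance, e.g., the same quarry.
    bmus = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in samples]
    print(bmus[:10])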

Journal ArticleDOI
TL;DR: This work constructs sequential classifiers for predicting the users' next visits based on the current actions using association rule mining, and introduces a model compression method, which removes redundant association rules from the model.
Abstract: With millions of Web users visiting Web servers each day, the Web log contains valuable information about users’ browsing behavior. In this work, we construct sequential classifiers for predicting the users’ next visits based on the current actions using association rule mining. The domain feature of Web-log mining entails that we adopt a special kind of association rules we call latest-substring rules, which take into account the temporal information as well as the correlation information. Furthermore, when constructing the classification model, we adopt a pessimistic selection method for choosing among alternative predictions. To make such prediction models useful, especially for small devices with limited memory and bandwidth, we also introduce a model compression method, which removes redundant association rules from the model. We empirically show that the resulting prediction model performs very well.
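A rough sketch of latest-substring-style prediction: every suffix of the pages visited so far becomes the antecedent of a rule whose consequent is the page that followed, and at prediction time the longest matching suffix of the current session wins. The toy clickstreams are invented, and the paper's pessimistic rule selection and model compression steps are omitted:

    from collections import Counter, defaultdict

    sessions = [
        ["home", "products", "cart", "checkout"],
        ["home", "products", "specs", "cart"],
        ["home", "news", "products", "cart"],
    ]

    # Rule base: suffix of the visited prefix -> counts of the page seen next.
    rules = defaultdict(Counter)
    for s in sessions:
        for i in range(1, len(s)):
            prefix, nxt = s[:i], s[i]
            for j in range(len(prefix)):
                rules[tuple(prefix[j:])][nxt] += 1

    def predict_next(current):
        """Use the longest suffix of the current session that has a rule."""
        for j in range(len(current)):
            suffix = tuple(current[j:])
            if suffix in rules:
                return rules[suffix].most_common(1)[0][0]
        return None

    print(predict_next(["home", "products"]))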

Journal ArticleDOI
TL;DR: A hybrid neuro-symbolic problem-solving model is presented in which the aim is to forecast parameters of a complex and dynamic environment in an unsupervised way and can provide a more effective means of performing predictions than other connectionist or symbolic techniques.
Abstract: A hybrid neuro-symbolic problem-solving model is presented in which the aim is to forecast parameters of a complex and dynamic environment in an unsupervised way. In situations in which the rules that determine a system are unknown, the prediction of the parameter values that determine the characteristic behavior of the system can be a problematic task. In such a situation, it has been found that a hybrid case-based reasoning system can provide a more effective means of performing such predictions than other connectionist or symbolic techniques. The system employs a case-based reasoning model that incorporates a growing cell structures network, a radial basis function network, and a set of Sugeno fuzzy models to provide an accurate prediction. Each of these techniques is used at a different stage of the reasoning cycle of the case-based reasoning system to retrieve historical data, to adapt it to the present problem, and to review the proposed solution. This system has been used to predict the red tides t...

Journal ArticleDOI
TL;DR: Two approaches for neural network estimation of the biomass concentration X and the specific growth rate μ, aimed at the control of chemostat microbial cultivation, are proposed, and conclusions with respect to the practical applicability of the proposed estimators are made.
Abstract: Two approaches for neural network estimation of the biomass concentration X and the specific growth rate μ, aimed at the control of chemostat microbial cultivation, are proposed. Continuous growth of a Saccharomyces cerevisiae strain on a glucose-limited medium is considered as an example in order to evaluate the capability of the designed estimators to work independently, as well as in the framework of linearizing adaptive control systems of biomass and substrate concentration. The first approach assumes that two different estimators, NNX and NNμ, are designed for biomass and growth rate, respectively. The second one is based on the assumption that one estimator, NNXμ, is designed for both biomass and growth rate. In both cases the substrate concentration is assumed to be online measurable and the dilution rate, the main control variable, to be known. In the process of the estimator design, the necessity of incorporating past values of the substrate concentration is studied. The influence of the estimation error on th...