
Showing papers in "Applied Intelligence in 2004"


Journal ArticleDOI
TL;DR: An efficient solution representation for the JSSP in which the job task ordering constraints are easily encoded and both checking of the constraints and repair mechanism can be avoided, thus resulting in increased efficiency.
Abstract: In previous work, we developed three deadlock removal strategies for the job shop scheduling problem (JSSP) and proposed a hybridized genetic algorithm for it. While the genetic algorithm (GA) gave promising results, its performance depended greatly on the choice of deadlock removal strategies employed. This paper introduces a genetic algorithm based scheduling scheme that is deadlock free. This is achieved through the choice of chromosome representation and genetic operators. We propose an efficient solution representation for the JSSP in which the job task ordering constraints are easily encoded. Furthermore, a problem-specific crossover operator that ensures solutions generated through genetic evolution are all feasible is also proposed. Hence, both constraint checking and the repair mechanism can be avoided, resulting in increased efficiency. A mutation-like operator geared towards local search is also proposed, which further improves the solution quality. Lastly, a hybrid strategy using the genetic algorithm reinforced with a tabu search is developed. An empirical study is carried out to test the proposed strategies.
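The deadlock-free property rests entirely on the encoding and decoding scheme. One well-known encoding with the same guarantee (a sketch of the general idea, not necessarily the authors' exact representation) is the permutation-with-repetition chromosome, where job j appears once per operation and left-to-right decoding can never produce an infeasible schedule:

```python
import random

# Hypothetical 3-job x 2-machine instance: each job is a list of
# (machine, processing_time) operations that must run in order.
JOBS = {
    0: [(0, 3), (1, 2)],
    1: [(1, 4), (0, 1)],
    2: [(0, 2), (1, 3)],
}

def random_chromosome():
    # Permutation with repetition: job j appears len(JOBS[j]) times.
    genes = [j for j, ops in JOBS.items() for _ in ops]
    random.shuffle(genes)
    return genes

def decode(chromosome):
    """Left-to-right decoding: every chromosome maps to a feasible,
    deadlock-free schedule, so no constraint check or repair is needed."""
    next_op = {j: 0 for j in JOBS}      # next operation index per job
    job_ready = {j: 0 for j in JOBS}    # time each job becomes free
    machine_ready = {}                  # time each machine becomes free
    makespan = 0
    for j in chromosome:
        machine, duration = JOBS[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0))
        finish = start + duration
        job_ready[j] = machine_ready[machine] = finish
        next_op[j] += 1
        makespan = max(makespan, finish)
    return makespan

print(decode(random_chromosome()))
```

Because every gene ordering decodes to a valid schedule, crossover operators that merely rearrange genes preserve feasibility for free, which is exactly the property the abstract claims for its problem-specific crossover.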

131 citations


Journal ArticleDOI
TL;DR: A hybrid neuro-symbolic problem-solving model is presented in which the aim is to forecast parameters of a complex and dynamic environment in an unsupervised way and can provide a more effective means of performing predictions than other connectionist or symbolic techniques.
Abstract: A hybrid neuro-symbolic problem-solving model is presented in which the aim is to forecast parameters of a complex and dynamic environment in an unsupervised way. In situations in which the rules that determine a system are unknown, the prediction of the parameter values that determine the characteristic behaviour of the system can be a problematic task. In such a situation, it has been found that a hybrid case-based reasoning system can provide a more effective means of performing such predictions than other connectionist or symbolic techniques. The system employs a case-based reasoning model to wrap a growing cell structures network, a radial basis function network and a set of Sugeno fuzzy models to provide an accurate prediction. Each of these techniques is used at a different stage of the reasoning cycle of the case-based reasoning system to retrieve historical data, to adapt it to the present problem and to review the proposed solution. This system has been used to predict the red tides that appear in the coastal waters of the north west of the Iberian Peninsula. The results obtained from experiments, in which the system operated in a real environment, are presented.

92 citations


Journal ArticleDOI
TL;DR: The experimental results showed that the proposed multidimensional Gaussian noise modeling algorithm was very effective in generating extra data examples that can be used to train a neural network to make favorable decisions for the minority class and to have increased generalization capability.
Abstract: This paper describes the result of our study on neural learning to solve classification problems in which the data is unbalanced and noisy. We conducted the study on three different neural network architectures, multi-layered Back Propagation, Radial Basis Function, and Fuzzy ARTMAP, using three different training methods: duplicating minority class examples, the Snowball technique, and multidimensional Gaussian modeling of data noise. Three major issues are addressed: neural learning from unbalanced data examples, neural learning from noisy data, and making intentionally biased decisions. We argue that by properly generating extra training data examples around the noise densities, we can train a neural network that has a stronger generalization capability and better control of its classification error. In particular, we focus on problems that require a neural network to make favorable classifications for a particular class, such as classifying normal (pass)/abnormal (fail) vehicles in an assembly plant. In addition, we present three methods that quantitatively measure the noise level of a given data set. All experiments were conducted using data examples downloaded directly from test sites of an automobile assembly plant. The experimental results showed that the proposed multidimensional Gaussian noise modeling algorithm was very effective in generating extra data examples that can be used to train a neural network to make favorable decisions for the minority class and to have increased generalization capability.
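The core data-generation idea can be pictured as fitting a multivariate Gaussian to the minority class and sampling synthetic examples from it; the paper's actual noise-density estimation is more involved, so treat this as a simplified sketch:

```python
import numpy as np

def augment_minority(X_minority, n_new, rng=None):
    """Fit a multivariate Gaussian to the minority-class examples and
    sample synthetic points from it. A simplified stand-in for the
    paper's multidimensional Gaussian noise modeling."""
    rng = rng or np.random.default_rng(0)
    mean = X_minority.mean(axis=0)
    # Regularize the covariance so sampling stays well-conditioned.
    cov = np.cov(X_minority, rowvar=False) + 1e-6 * np.eye(X_minority.shape[1])
    return rng.multivariate_normal(mean, cov, size=n_new)

# Toy usage: 10 minority examples in 3 dimensions, 50 synthetic ones.
X_min = np.random.default_rng(1).normal(size=(10, 3))
X_extra = augment_minority(X_min, 50)
print(X_extra.shape)  # (50, 3)
```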

75 citations


Journal ArticleDOI
TL;DR: Some of the tasks in the four REs, namely Retrieve, Reuse, Revise and Retain, of the CBR cycle that have relevance as prospective candidates for soft computing applications are explained.
Abstract: Here we first describe the concepts, components and features of CBR. The feasibility and merits of using CBR for problem solving is then explained. This is followed by a description of the relevance of soft computing tools to CBR. In particular, some of the tasks in the four REs, namely Retrieve, Reuse, Revise and Retain, of the CBR cycle that have relevance as prospective candidates for soft computing applications are explained.

72 citations


Journal ArticleDOI
TL;DR: Experimental results show that a GA approach to simultaneous optimization of the CBR model outperforms other conventional approaches for financial forecasting.
Abstract: This paper presents a simultaneous optimization method for a case-based reasoning (CBR) system using a genetic algorithm (GA) for financial forecasting. Prior research proposed many hybrid models of CBR and the GA for selecting a relevant feature subset or optimizing feature weights. Most research used the GA to improve only part of the architectural factors of the CBR model. However, the performance of the CBR model may be enhanced when these factors are considered simultaneously. In this study, the GA simultaneously optimizes multiple factors of the CBR system. Experimental results show that a GA approach to simultaneous optimization of the CBR model outperforms other conventional approaches for financial forecasting.
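Simultaneous optimization means one chromosome carries several CBR design factors at once, so the GA searches the joint space rather than tuning each factor in isolation. A sketch of such an encoding; the specific factors shown here (feature weights plus the number of neighbors k) are an illustrative assumption, not necessarily the factors the paper optimizes:

```python
import random

N_FEATURES = 8

def random_individual():
    # One chromosome encodes feature weights and k together.
    return {
        "weights": [random.random() for _ in range(N_FEATURES)],
        "k": random.randint(1, 15),
    }

def crossover(a, b):
    # One-point crossover on the weights; k inherited from either parent.
    cut = random.randint(1, N_FEATURES - 1)
    return {
        "weights": a["weights"][:cut] + b["weights"][cut:],
        "k": random.choice([a["k"], b["k"]]),
    }

# Fitness would be the hold-out forecasting accuracy of the CBR system
# configured by the individual; that evaluation loop is omitted here.
print(crossover(random_individual(), random_individual()))
```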

66 citations


Journal ArticleDOI
TL;DR: This paper follows Khardon's approach but represents generalized policies in a different way using a concept language and shows that the concept language yields a better policy using a smaller set of examples and no background knowledge.
Abstract: In this paper we are concerned with the problem of learning how to solve planning problems in one domain given a number of solved instances. This problem is formulated as the problem of inferring a function that operates over all instances in the domain and maps states and goals into actions. We call such functions generalized policies and the question that we address is how to learn suitable representations of generalized policies from data. This question has been addressed recently by Roni Khardon (Technical Report TR-09-97, Harvard, 1997). Khardon represents generalized policies using an ordered list of existentially quantified rules that are inferred from a training set using a version of Rivest's learning algorithm (Machine Learning, vol. 2, no. 3, pp. 229–246, 1987). Here, we follow Khardon's approach but represent generalized policies in a different way using a concept language. We show through a number of experiments in the blocks-world that the concept language yields a better policy using a smaller set of examples and no background knowledge.

65 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that the accuracy of classification is maintained or even increased when the proposed method is applied for missing attribute value prediction.
Abstract: This paper proposes a grey-based nearest neighbor approach to accurately predict missing attribute values. First, grey relational analysis is employed to determine the nearest neighbors of an instance with missing attribute values. Accordingly, the known attribute values derived from these nearest neighbors are used to infer the missing values. Two datasets were used to demonstrate the performance of the proposed method. Experimental results show that our method outperforms both multiple imputation and mean substitution. Moreover, the proposed method was evaluated using five classification problems with incomplete data. Experimental results indicate that the accuracy of classification is maintained or even increased when the proposed method is applied for missing attribute value prediction.
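Grey relational analysis has a standard closed form (Deng's grey relational coefficient). The sketch below uses it to rank complete instances as neighbors of an incomplete one and impute from them; the distinguishing coefficient rho = 0.5 and the mean-of-neighbors imputation are conventional choices, not necessarily the paper's exact procedure:

```python
import numpy as np

def grey_relational_grades(target, candidates, rho=0.5):
    """Grey relational grade of each complete candidate instance to the
    target, computed over the target's known attributes only."""
    known = ~np.isnan(target)
    diffs = np.abs(candidates[:, known] - target[known])
    d_min, d_max = diffs.min(), diffs.max()
    coeff = (d_min + rho * d_max) / (diffs + rho * d_max)
    return coeff.mean(axis=1)  # grade = mean coefficient per candidate

# Toy usage: impute the missing third attribute from the 2 nearest neighbors.
X = np.array([[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [5.0, 6.0, 7.0]])
t = np.array([1.05, 2.05, np.nan])
grades = grey_relational_grades(t, X)
nearest = np.argsort(grades)[::-1][:2]   # highest grade = nearest neighbor
print(X[nearest, 2].mean())              # imputed value
```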

54 citations


Journal ArticleDOI
Shu Li
TL;DR: Parts of the equate-to-differentiate model are shown to be able to provide an alternative and seemingly better account of the prominence effect and it is suggested that the model allows understanding perplexing decision phenomena better than alternative models.
Abstract: This paper presents a model of how humans choose between mutually exclusive alternatives. The model is based on the observation that human decision makers are unable or unwilling to compute the overall worth of the offered alternatives. This approach models much human choice behavior as a process in which people seek to equate a less significant difference between alternatives on one dimension, thus leaving the greater one-dimensional difference to be differentiated as the determinant of the final choice. These aspects of the equate-to-differentiate model are shown to be able to provide an alternative and seemingly better account of the prominence effect. The model is also able to provide an explanation and prediction regarding the empirical violation of independence and transitivity axioms. It is suggested that the model allows understanding perplexing decision phenomena better than alternative models.

45 citations


Journal ArticleDOI
TL;DR: Results show that DLC algorithms achieve significant performance improvement over usual dispatching rules in complex real-time shop floor control problems for JIT production.
Abstract: This paper presents an approach suitable for Just-In-Time (JIT) production for a multi-objective scheduling problem in a dynamically changing shop floor environment. The proposed distributed learning and control (DLC) approach integrates part-driven distributed arrival time control (DATC) and machine-driven distributed reinforcement-learning-based control. With DATC, part controllers adjust their associated parts' arrival times to minimize due-date deviation. Within the restricted pattern of arrivals, machine controllers concurrently search for optimal dispatching policies. The machine control problem is modeled as a semi-Markov decision process (SMDP) and solved using Q-learning. The DLC algorithms are evaluated using simulation for two types of manufacturing systems: family scheduling and dynamic batch sizing. Results show that the DLC algorithms achieve significant performance improvement over usual dispatching rules in complex real-time shop floor control problems for JIT production.
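The machine-side learning is tabular Q-learning with dispatching rules as actions. The sketch below shows the textbook update; the state features, reward, and rule set are illustrative assumptions, and in the paper's SMDP setting the discount would additionally depend on the sojourn time between decisions:

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95
Q = defaultdict(float)  # (state, action) -> estimated value

def q_update(state, action, reward, next_state, actions):
    """One tabular Q-learning step."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def choose_dispatch_rule(state, actions, epsilon=0.1):
    # Epsilon-greedy over dispatching rules treated as actions.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

rules = ["SPT", "EDD", "FIFO"]   # hypothetical dispatching-rule actions
a = choose_dispatch_rule("queue_len_3", rules)
q_update("queue_len_3", a, reward=-2.0, next_state="queue_len_2", actions=rules)
```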

39 citations


Journal ArticleDOI
TL;DR: MBNR (Memory-Based Neural Reasoning) performs case-based reasoning with local feature weighting by a neural network; a learning algorithm trains the network to learn case-specific local weighting patterns for case-based reasoning.
Abstract: Our aim is to build an integrated learning framework of neural networks and case-based reasoning. The main idea is that feature weights for case-based reasoning can be evaluated by neural networks. In this paper, we propose MBNR (Memory-Based Neural Reasoning), a case-based reasoning method with local feature weighting by a neural network. In our method, the neural network guides the case-based reasoning by providing case-specific weights to the learning process. We developed a learning algorithm to train the neural network to learn the case-specific local weighting patterns for case-based reasoning. We demonstrate the performance of our learning system using four datasets.
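The retrieval step with case-specific local weights can be pictured as a weighted nearest-neighbor search where a network emits a weight vector per query. A sketch with a stand-in weight function in place of the paper's trained network:

```python
import numpy as np

def retrieve(query, case_base, weight_fn, k=3):
    """Weighted nearest-neighbor retrieval. weight_fn plays the role of
    the trained neural network that emits query-specific feature weights
    (here a stand-in, not the paper's trained model)."""
    w = weight_fn(query)                          # local weights for this query
    d = np.sqrt((((case_base - query) ** 2) * w).sum(axis=1))
    return np.argsort(d)[:k]                      # indices of the k nearest cases

cases = np.random.default_rng(0).normal(size=(100, 5))
uniform = lambda q: np.ones_like(q)               # stand-in for the network
print(retrieve(cases[0], cases, uniform))
```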

37 citations


Journal ArticleDOI
TL;DR: A rough self-organizing map (RSOM) with fuzzy discretization of feature space is described here and Superiority of this network in terms of quality of clusters, learning time and representation of data is demonstrated quantitatively.
Abstract: A rough self-organizing map (RSOM) with fuzzy discretization of feature space is described here. Discernibility reducts obtained using rough set theory are used to extract domain knowledge in an unsupervised framework. Reducts are then used to determine the initial weights of the network, which are further refined using competitive learning. Superiority of this network in terms of quality of clusters, learning time and representation of data is demonstrated quantitatively through experiments over the conventional SOM.

Journal ArticleDOI
TL;DR: It is shown that an information extraction system which is used for real world applications and different domains can be built using some autonomous, corporate components (agents) and that carefully selecting the right machine learning technique for the right task and selective sampling can be used to reduce the human effort required to annotate examples for building such systems.
Abstract: Information Extraction (IE) systems that can exploit the vast source of textual information that is the internet would provide a revolutionary step forward in terms of delivering large volumes of content cheaply and precisely, thus enabling a wide range of new knowledge-driven applications and services. However, despite this enormous potential, few IE systems have successfully made the transition from laboratory to commercial application. The reason may be a purely practical one: building usable, scalable IE systems requires bringing together a range of different technologies as well as providing clear and reproducible guidelines as to how to collectively configure and deploy those technologies. This paper is an attempt to address these issues. The paper focuses on two primary goals. Firstly, we show that an information extraction system used for real-world applications and different domains can be built using autonomous, corporate components (agents). Such a system has several advanced properties: clear separation of different extraction tasks and steps, portability to multiple application domains, trainability, extensibility, etc. Secondly, we show that machine learning and, in particular, learning in different ways and at different levels, can be used to build practical IE systems. We show that carefully selecting the right machine learning technique for the right task, together with selective sampling, can reduce the human effort required to annotate examples for building such systems.

Journal ArticleDOI
TL;DR: In this article, the authors proposed and investigated the use of Artificial Intelligence techniques for sensor fusion in order to improve the accuracy and reliability of the distance measure between a robot and an object in its work environment, based on measures obtained from different sensors.
Abstract: Mobile robots rely on sensor data to build a representation of their environment. However, sensors usually provide incomplete, inconsistent or inaccurate information. Sensor fusion has been successfully employed to enhance the accuracy of sensor measures. This work proposes and investigates the use of Artificial Intelligence techniques for sensor fusion. Its main goal is to improve the accuracy and reliability of the distance measured between a robot and an object in its work environment, based on measures obtained from different sensors. Several Machine Learning algorithms are investigated for fusing the sensor data. The best model generated by each algorithm is called an estimator. It is shown that the use of estimators based on Artificial Intelligence can significantly improve the performance achieved by each sensor alone. The Machine Learning algorithms employed have different characteristics, causing the estimators to behave differently in different situations. Aiming at an even more accurate and reliable behavior, the estimators are combined in committees. The results obtained suggest that this combination can further improve the reliability and accuracy of the distances measured by the individual sensors and the estimators used for sensor fusion.

Journal ArticleDOI
TL;DR: An active audition system for a humanoid robot focuses on improved sound source tracking by integrating audition, vision, and motor control, and adaptively cancels motor noises using motor control signals.
Abstract: Mobile robots capable of auditory perception usually adopt the "stop-perceive-act" principle to avoid sounds made during movement due to motor noise. Although this principle reduces the complexity of the problems involved in auditory processing for mobile robots, it restricts their auditory processing capabilities. In this paper, sound and visual tracking are investigated to compensate for each other's drawbacks in tracking objects and to attain robust object tracking. Visual tracking may be difficult in case of occlusion, while sound tracking may be ambiguous in localization due to the nature of auditory processing. For this purpose, we present an active audition system for a humanoid robot. The audition system of the highly intelligent humanoid requires localization of sound sources and identification of the meanings of sounds in the auditory scene. The active audition reported in this paper focuses on improved sound source tracking by integrating audition, vision, and motor control. Given multiple sound sources in the auditory scene, SIG, the humanoid, actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing the possible sound sources by vision. The system adaptively cancels motor noises using motor control signals. The experimental results demonstrate the effectiveness of sound and visual tracking.

Journal ArticleDOI
TL;DR: It is shown how UCTx is capable of dealing with another multi-agent system (Carrel, an Agent Mediated Institution for the Exchange of Human Tissues among Hospitals for Transplantation) in order to meet its own goals, acting as the representative of the hospital in the negotiation.
Abstract: We present a system called UCTx designed to model and automate some of the tasks performed by a Transplant Coordination Unit (UCTx) inside a Hospital. The aim of this work is to show how a multi-agent approach allows us to describe and implement the model, and how UCTx is capable of dealing with another multi-agent system (Carrel, an Agent Mediated Institution for the Exchange of Human Tissues among Hospitals for Transplantation) in order to meet its own goals, acting as the representative of the hospital in the negotiation. As an example we introduce the use of this Agency in the case of Cornea Transplantation.

Journal ArticleDOI
TL;DR: This paper describes an implementation of the most promising algorithm for soft evidential update, the big clique algorithm, together with examples of its use.
Abstract: Autonomous agents that communicate using probabilistic information and use Bayesian networks for knowledge representation need an update mechanism that goes beyond conditioning on the basis of evidence. In a related paper (M. Valtorta, Y.G. Kim, and J. Vomlel, International Journal of Approximate Reasoning, vol. 29, no. 1, pp. 71–106, 2002), we describe this mechanism, which we call soft evidential update, its properties, and algorithms to realize it. Here, we describe an implementation of the most promising such algorithm, the big clique algorithm, together with examples of its use.

Journal ArticleDOI
TL;DR: This paper describes a model for the establishment of cooperative information sharing among agents on teams formed dynamically for particular purposes within such organizations and argues that effective information sharing in the presence of such teams requires the active dissemination of descriptions of current and future information needs to both local teammates and to the larger organization.
Abstract: We are interested in developing models of and support for mixed-initiative human control of software agent teams, especially in the larger context of dynamic, real-world organizations. In this paper, we describe a model for the establishment of cooperative information sharing among agents on teams formed dynamically for particular purposes within such organizations. We argue that effective information sharing in the presence of such teams requires the active dissemination of descriptions of current and future information needs both to local teammates and to the larger organization. Only by this mechanism can one avoid having to make explicit at design time who will provide each piece of information. We consider how information sharing within the organization can be promoted not only for the immediate goals shared by a tightly coordinated team, but also for some of the likely information needs of the larger organization going forward. We illustrate the model by describing its application to a large-scale agent-based simulation of the US Military's disaster relief response to the devastation caused in Central America by Hurricane Mitch in 1998. The demonstration was developed in conjunction with a large group of researchers representing eight different institutions.

Journal ArticleDOI
TL;DR: It is found that the discriminative learners can attain the efficiency of HMM, and that after the transformations they can retain the same performance in spite of the severe dimension reduction.
Abstract: This paper examines the applicability of some learning techniques to the classification of phonemes. The methods tested were artificial neural nets (ANN), support vector machines (SVM) and Gaussian mixture modeling (GMM). We compare these methods with a traditional hidden Markov phoneme model (HMM), working with the linear prediction-based cepstral coefficient features (LPCC). We also tried to combine the learners with linear/nonlinear and unsupervised/supervised feature space transformation methods such as principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA), springy discriminant analysis (SDA) and their nonlinear kernel-based counterparts. We found that the discriminative learners can attain the efficiency of HMM, and that after the transformations they can retain the same performance in spite of the severe dimension reduction. The kernel-based transformations brought only marginal improvements compared to their linear counterparts.
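The transform-then-classify experiments follow a standard pattern: fit a (possibly supervised) projection, reduce the dimension sharply, and train a discriminative learner on the projected features. A sketch with scikit-learn, where PCA and a linear SVM stand in for the paper's transformations and learners; the dataset and component count are placeholders so the pipeline runs end to end, not the paper's LPCC phoneme data:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for LPCC phoneme features.
X, y = load_digits(return_X_y=True)

# Severe dimension reduction before a discriminative learner, mirroring
# the paper's transform-then-classify experiments.
pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="linear"))
print(cross_val_score(pipe, X, y, cv=5).mean())
```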

Journal ArticleDOI
TL;DR: A new technique to simulate polymer blends that overcomes the shortcomings in polymer system modeling is presented and a high degree of convergence not seen using the genetic algorithm alone is obtained.
Abstract: In this paper we present a new technique to simulate polymer blends that overcomes the shortcomings in polymer system modeling. This method has an inherent advantage in that the vast existing information on polymer systems forms a critical part in the design process. The stages in the design begin with selecting potential candidates for blending using Neural Networks. Generally the parent polymers of the blend need to have certain properties and if the blend is miscible then it will reflect the properties of the parents. Once this step is finished the entire problem is encoded into a genetic algorithm using various models as fitness functions. We select the lattice fluid model of Sanchez and Lacombe (J. Polym. Sci. Polym. Lett. Ed., vol. 15, p. 71, 1977), which allows for a compressible lattice. After reaching a steady-state with the genetic algorithm we transform the now stochastic problem that satisfies detailed balance and the condition of ergodicity to a Markov Chain of states. This is done by first creating a transition matrix, and then using it on the incidence vector obtained from the final populations of the genetic algorithm. The resulting vector is converted back into a population of individuals that can be searched to find the individuals with the best fitness values. A high degree of convergence not seen using the genetic algorithm alone is obtained. We check this method with known systems that are miscible and then use it to predict miscibility on several unknown systems.

Journal ArticleDOI
Ronald R. Yager
TL;DR: This work introduces a methodology for matching the target and cases which uses a hierarchical representation of the target object and a method for fusing the information provided by relevant retrieved cases.
Abstract: Our goal is to provide some tools, based on soft computing aggregation methods, useful in the two fundamental steps in case base reasoning, matching the target and the cases and fusing the information provided by the relevant cases. To aid in the first step we introduce a methodology for matching the target and cases which uses a hierarchical representation of the target object. We also introduce a method for fusing the information provided by relevant retrieved cases. This approach is based upon the nearest neighbor principle and uses the induced ordered weighted averaging operator as the basic aggregation operator. A procedure for learning the weights is described.
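The induced ordered weighted averaging (IOWA) operator itself is standard: the retrieved cases' solution values are reordered by an inducing variable, here similarity to the target, and then combined with the OWA weight vector. A minimal sketch; the weights shown are illustrative rather than learned as in the paper:

```python
import numpy as np

def iowa(values, similarities, weights):
    """Induced OWA: order the case values by decreasing similarity
    (the inducing variable), then take the weighted sum with the
    OWA weight vector."""
    order = np.argsort(similarities)[::-1]
    return float(np.dot(weights, np.asarray(values, dtype=float)[order]))

# Fuse the outcomes of 3 retrieved cases, trusting similar cases most.
vals = [10.0, 14.0, 9.0]          # solutions of the retrieved cases
sims = [0.9, 0.6, 0.8]            # similarity of each case to the target
w = np.array([0.5, 0.3, 0.2])     # OWA weights, summing to 1
print(iowa(vals, sims, w))        # 0.5*10.0 + 0.3*9.0 + 0.2*14.0
```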

Journal ArticleDOI
TL;DR: In this paper, a compositional verification method for multi-agent systems is applied that allows one to logically relate dynamic properties of the multi-agent system as a whole to dynamic properties of agents, and to logically relate properties of agents to properties of their subcomponents.
Abstract: Verification of multi-agent systems hardly occurs in design practice. One of the difficulties is that required properties for a multi-agent system usually refer to multi-agent behaviour, which has nontrivial dynamics. To constrain these multi-agent behavioural dynamics, a form of organisational structure is often used, for example, for negotiating agents, by following strict protocols. The claim is that these negotiation protocols entail a structured process that is manageable with respect to analysis, design and execution of such a multi-agent system. In this paper this is shown by a case study: verification of a multi-agent system for one-to-many negotiation in the domain of load balancing of electricity use. A compositional verification method for multi-agent systems is applied that allows one to (1) logically relate dynamic properties of the multi-agent system as a whole to dynamic properties of agents, and (2) logically relate dynamic properties of agents to properties of their subcomponents. Given that properties of these subcomponents can be verified by more standard methods, these logical relationships provide proofs of the dynamic properties of the multi-agent system as a whole.

Journal ArticleDOI
TL;DR: The paper proposes to use B-spline basis functions to approximate nonlinear sigmoidal functions and shows that this approximation fulfils the general requirements on the activation functions.
Abstract: This paper proposes a new way of implementing the nonlinear activation functions of feed-forward neural networks in digital hardware. The basic idea of this new realization is that the nonlinear functions can be implemented using a matrix-vector multiplication. Recently a new approach was proposed for the efficient realization of matrix-vector multipliers, and this approach can be applied to implementing nonlinear functions if these functions are approximated by simple basis functions. The paper proposes to use B-spline basis functions to approximate nonlinear sigmoidal functions, shows that this approximation fulfils the general requirements on activation functions, presents the details of the proposed hardware implementation, and gives a summary of an extensive study of the effects of B-spline nonlinear function realization on the size and trainability of feed-forward neural networks.
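The accuracy side of this claim is easy to probe numerically: fit a low-order B-spline to the sigmoid over its active range and measure the worst-case error. A sketch using SciPy; the knot count, spline order, and the [-8, 8] clamping range are assumptions for illustration, not the paper's hardware design:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Fit a cubic B-spline to the logistic sigmoid on [-8, 8]; outside this
# range the function is saturated and can simply be clamped.
x = np.linspace(-8, 8, 33)
sigmoid = 1.0 / (1.0 + np.exp(-x))
spline = make_interp_spline(x, sigmoid, k=3)

# Evaluating the spline is a dot product of fixed coefficients with
# basis-function values, i.e. a matrix-vector multiplication in hardware.
test = np.linspace(-8, 8, 1000)
err = np.max(np.abs(spline(test) - 1.0 / (1.0 + np.exp(-test))))
print(f"max approximation error: {err:.2e}")
```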

Journal ArticleDOI
TL;DR: Results indicate that the computer software architecture presented in this paper is suitable for effectively manipulating complex engineering systems characterized by relatively slow process dynamics like those of a slag foaming operation.
Abstract: Slag foaming is a steel-making process that has been shown to improve the efficiency of electric arc furnace plants. Unfortunately, slag foaming is a highly dynamic process that is difficult to control. This paper describes the development of an adaptive, intelligent control system for effectively manipulating the slag foaming process. The level-2 intelligent control system developed is based on three techniques from the field of computational intelligence (CI): (1) fuzzy logic, (2) genetic algorithms, and (3) neural networks. Results indicate that the computer software architecture presented in this paper is suitable for effectively manipulating complex engineering systems characterized by relatively slow process dynamics like those of a slag foaming operation.

Journal ArticleDOI
TL;DR: A new estimation to set the maximum bound on prediction accuracy is presented, based on the approximation of the Bayes a posteriori probability by feed-forward three-layer neural networks, and is used to obtain the best prediction accuracy for the correct classification probability of patient relapse after breast cancer surgery.
Abstract: The prediction of the clinical outcome of patients after breast cancer surgery plays an important role in medical tasks such as diagnosis and treatment planning. Survival estimations are currently performed by clinicians using non-numerical techniques. Artificial neural networks are shown to be a powerful tool for analyzing data sets where there are complicated nonlinear interactions between the input data and the information to be predicted. In this paper, a new estimation to set the maximum bound on prediction accuracy is presented, based on the approximation of the Bayes a posteriori probability by feed-forward three-layer neural networks. This result is applied to different patients' follow-up time intervals, in order to obtain the best prediction accuracy for the correct classification probability of patient relapse after breast cancer surgery using clinical-pathological data (tumor size, patient age, age at menarche, etc.), which were obtained from the Medical Oncology Service of the Hospital Clinico Universitario of Malaga, Spain. Different network topologies and learning parameters are investigated to obtain the best prediction accuracy. The results show that, after the training process, the final model is appropriate for making predictions about the relapse probability at different follow-up times.

Journal ArticleDOI
TL;DR: This paper presents an application of lazy learning algorithms, based on a k-nearest neighbour algorithm, in the domain of industrial processes described by a set of variables, each corresponding to a time series.
Abstract: This paper presents an application of lazy learning algorithms in the domain of industrial processes. These processes are described by a set of variables, each corresponding to a time series. Each variable plays a different role in the process, and some mutual influences can be discovered. A methodology to study the different variables and their roles in the process is described. This methodology allows structuring the study of the time series. The prediction methodology is based on a k-nearest neighbour algorithm. A complete study of the different parameters of this kind of algorithm is carried out, including data preprocessing, neighbour distance, and weighting strategies. An alternative to the Euclidean distance, called the shape distance, is presented; this distance is insensitive to scaling and translation. Alternative weighting strategies based on time series autocorrelation and partial autocorrelation are also presented. Experiments using autoregressive models, simulated data and real data obtained from an industrial process (wastewater treatment plants) are presented to show the feasibility of our approach.
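A distance insensitive to scaling and translation can be obtained by z-normalizing each series before comparing them; this standard construction may differ in detail from the paper's shape distance, so read it as a sketch of the idea:

```python
import numpy as np

def shape_distance(a, b):
    """Euclidean distance between z-normalized series: subtracting the
    mean removes translation, dividing by the standard deviation removes
    scaling. One standard way to realize a shape distance."""
    def znorm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / (s.std() + 1e-12)
    return float(np.linalg.norm(znorm(a) - znorm(b)))

base = np.sin(np.linspace(0, 6, 50))
shifted_scaled = 3.0 * base + 10.0               # same shape, new scale/offset
print(shape_distance(base, shifted_scaled))      # ~0: shapes match
print(shape_distance(base, np.cos(np.linspace(0, 6, 50))))  # clearly larger
```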

Journal ArticleDOI
TL;DR: A competitive coevolutionary algorithm that combines the strategies of neighborhood-based evolution, entry fee exchange tournament competition (EFE-TC) and localized elitism appears to promote a balanced evolution between host and parasite populations, which naturally leads them to sustain an evolutionary arms race.
Abstract: For an efficient competitive coevolutionary algorithm, it is important that competing populations be capable of maintaining a coevolutionary balance and hence continuing an evolutionary arms race that increases the levels of complexity. We propose a competitive coevolutionary algorithm that combines the strategies of neighborhood-based evolution, entry fee exchange tournament competition (EFE-TC) and localized elitism. An emphasis is placed on analyzing the effects of these strategies on the performance of competitive coevolutionary algorithms. We have tested the proposed algorithm on two adversarial problems with different characteristics: the sorting network and Nim game problems. The experimental results show that the interacting effects of the strategies appear to promote a balanced evolution between host and parasite populations, which naturally leads them to sustain an evolutionary arms race. Consequently, the proposed algorithm provides good quality solutions with little computation time.

Journal ArticleDOI
TL;DR: RPFP gives better performance than well-known eager approaches found in machine learning and statistics such as MARS, rule-based regression, and regression tree induction systems and it outperforms existing eager or lazy approaches on many domains when there are many missing values in the training data.
Abstract: A new instance-based learning method is presented for regression problems with high-dimensional data. As an instance-based approach, the conventional method, KNN, is very popular for classification. Although KNN performs well on classification tasks, it does not perform as well on regression problems. We have developed a new instance-based method, called Regression by Partitioning Feature Projections (RPFP) which is designed to meet the requirement for a lazy method that achieves high levels of accuracy on regression problems. RPFP gives better performance than well-known eager approaches found in machine learning and statistics such as MARS, rule-based regression, and regression tree induction systems. The most important property of RPFP is that it is a projection-based approach that can handle interactions. We show that it outperforms existing eager or lazy approaches on many domains when there are many missing values in the training data.

Journal ArticleDOI
TL;DR: This paper introduces an approach for diagnosis of VHDL hardware designs and makes use of model-based diagnosis which is a general diagnosis approach, and describes the derivation of specialized models that capture only some aspects of the program.
Abstract: Debugging is a time-consuming task. This holds especially for large programs that are usually written by different groups of programmers. An example of this observation is the hardware design domain. Nowadays hardware designs are written in special hardware description languages, e.g., VHDL, by groups working in different companies and places. Moreover, there is high pressure to complete the system in time and with very high quality because of the huge costs of correcting a bug after manufacturing the circuit. In order to decrease debugging time we introduce an approach for the diagnosis of VHDL hardware designs and present first empirical results. In contrast to other debugging approaches we make use of model-based diagnosis, which is a general diagnosis approach. The models we use are logical descriptions of the syntax and semantics of a VHDL program that can be automatically derived from the program at compile time. The main part of this paper describes a general model and the derivation of specialized models that capture only some aspects of the program. The specialized models should be used in a specific debugging situation where they deliver the most appropriate solution in reasonable time. In order to select such a model we propose the use of a probability-based selection strategy. For example, larger programs should be debugged using a model that distinguishes only concurrent VHDL statements, not sequential statements. As a result of multi-model reasoning in this domain we expect performance gains, allowing larger designs to be debugged in reasonable time, with more expressive diagnosis results.

Journal ArticleDOI
TL;DR: The EA outperforms a traditional method suitable for solving linear problems, and is then extended to nonlinear problems for which there is no effective solution methodology.
Abstract: Determining the characteristics of wind gusts that result in the critical loading of an aircraft structure is an extremely complex problem. On the other hand, identifying these "worst-case gusts" is terribly important for aircraft designers because they need to ensure that aircraft are designed to withstand the dynamic loads associated with these turbulent gusts. In this paper an evolutionary algorithm (EA) is shown to be a feasible approach to solving this problem. The EA outperforms a traditional method suitable for solving linear problems, and is then extended to nonlinear problems for which there is no effective solution methodology.

Journal ArticleDOI
TL;DR: A genetic method is presented for analyzing data from the Spanish electricity market, the so-called "electrical pool", with the eventual objective of determining the individual supply curves of the competitive agents.
Abstract: Since 1998 the price of electrical energy in Spain has not been regulated by the government, but determined by the supply from the generators in a competitive market, the so-called "electrical pool". A genetic method for analyzing data from this new market is presented in this paper. The eventual objective is to determine the individual supply curves of the competitive agents. Adopting the point of view of game theory, different genetic algorithm configurations using coevolutionary and non-coevolutionary strategies combined with scalar and multi-objective fitness are compared. The results obtained are a first step toward inducing from data the optimal individual strategies in the Spanish electrical market under the assumption of perfect oligopolistic behavior.