
Showing papers on "Soft computing published in 2012"


Journal ArticleDOI
01 Feb 2012
TL;DR: This study develops a methodology for effective stock selection using support vector regression and genetic algorithms; the authors expect this hybrid GA-SVR methodology to advance research in soft computing for finance and to provide an effective solution to stock selection in practice.
Abstract: In the areas of investment research and applications, feasible quantitative models include methodologies stemming from soft computing for prediction of financial time series, multi-objective optimization of investment return and risk reduction, as well as selection of investment instruments for portfolio management based on asset ranking using a variety of input variables and historical data, etc. Among all these, stock selection has long been identified as a challenging and important task. This line of research is highly contingent upon reliable stock ranking for successful portfolio construction. Recent advances in machine learning and data mining are leading to significant opportunities to solve these problems more effectively. In this study, we aim at developing a methodology for effective stock selection using support vector regression (SVR) as well as genetic algorithms (GAs). We first employ the SVR method to generate surrogates for actual stock returns that in turn serve to provide reliable rankings of stocks. Top-ranked stocks can thus be selected to form a portfolio. On top of this model, the GA is employed for the optimization of model parameters, and feature selection to acquire optimal subsets of input variables to the SVR model. We will show that the investment returns provided by our proposed methodology significantly outperform the benchmark. Based upon these promising results, we expect this hybrid GA-SVR methodology to advance the research in soft computing for finance and provide an effective solution to stock selection in practice.
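To make the pipeline concrete, here is a minimal sketch of the general GA-over-SVR idea, assuming scikit-learn's SVR and a toy truncation-selection GA over a binary feature mask; the synthetic data, fitness function, and GA settings are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical GA-SVR stock-ranking sketch (not the paper's exact code).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                             # 12 toy input factors
y = X @ rng.normal(size=12) + 0.1 * rng.normal(size=200)   # stand-in "returns"

def fitness(mask):
    """Score a binary feature mask by cross-validated SVR fit quality."""
    if not mask.any():
        return -np.inf
    return cross_val_score(SVR(C=1.0, epsilon=0.1), X[:, mask], y,
                           cv=3, scoring="r2").mean()

# Tiny GA: keep the best half of the population, mutate copies of survivors.
pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = parents[rng.integers(0, 10, size=10)].copy()
    flips = rng.random(children.shape) < 0.1               # bit-flip mutation
    children[flips] = ~children[flips]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
# Predicted returns serve as surrogates for actual returns when ranking stocks.
ranking = SVR().fit(X[:, best], y).predict(X[:, best]).argsort()[::-1]
print("top-5 ranked stocks:", ranking[:5])
```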

191 citations


Journal ArticleDOI
TL;DR: A soft computing based load balancing approach is proposed in which a local optimization method, stochastic hill climbing, is used to allocate incoming jobs to servers or virtual machines (VMs).
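For illustration, a minimal stochastic hill climbing allocator might look like the following sketch; the job lengths, number of VMs, and makespan objective are assumptions for the example, not details taken from the paper.

```python
# Illustrative stochastic hill climbing for job-to-VM allocation (a sketch).
import random

jobs = [4, 7, 2, 9, 5, 3, 8, 6]    # hypothetical job lengths
n_vms = 3

def makespan(assign):
    """Load of the busiest VM under a given job-to-VM assignment."""
    loads = [0] * n_vms
    for job, vm in zip(jobs, assign):
        loads[vm] += job
    return max(loads)

assign = [random.randrange(n_vms) for _ in jobs]    # random initial allocation
for _ in range(1000):
    j = random.randrange(len(jobs))                 # pick a random job...
    candidate = assign.copy()
    candidate[j] = random.randrange(n_vms)          # ...and try moving it
    if makespan(candidate) <= makespan(assign):     # accept non-worsening moves
        assign = candidate

print("allocation:", assign, "makespan:", makespan(assign))
```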

190 citations


Journal ArticleDOI
TL;DR: In this article, a survey of assembly sequence planning and assembly line balancing problems using various soft computing approaches is presented, including genetic algorithms, ant colony optimization, and particle swarm optimization.
Abstract: Assembly optimisation activities occur across the development and production stages of manufacturing goods. Assembly Sequence Planning (ASP) and Assembly Line Balancing (ALB) problems are among these assembly optimisation activities, and both are classified as NP-hard. Several soft computing approaches using different techniques have been developed to solve ASP and ALB. Although these approaches do not guarantee the optimum solution, they have been successfully applied in many ASP and ALB optimisation works. This paper reports a survey of research in ASP and ALB using soft computing approaches over the past 10 years. To be more specific, only the Simple Assembly Line Balancing Problem (SALBP) is considered for ALB. The survey shows that the three soft computing algorithms most frequently used to solve ASP and ALB are Genetic Algorithms, Ant Colony Optimisation and Particle Swarm Optimisation. Meanwhile, research in ASP and ALB is also progressing to the next level through the integration of assembly optimisation activities across product development stages.

185 citations


Journal ArticleDOI
TL;DR: An efficient rough feature selection algorithm for large-scale data sets, inspired by multi-granulation, is proposed; it yields a feature subset (an approximate reduct) in a much smaller amount of time.

168 citations


Book
13 Nov 2012
TL;DR: This book considers the principal constituents of soft computing, namely fuzzy logic, neurocomputing, genetic computing, and probabilistic reasoning, the relations between them, and their fusion in industrial applications, and how the combination can be used to achieve a high Machine Intelligence Quotient.
Abstract: Soft computing is a consortium of computing methodologies that provide a foundation for the conception, design, and deployment of intelligent systems and aims to formalize the human ability to make rational decisions in an environment of uncertainty and imprecision. This book is based on a NATO Advanced Study Institute held in 1996 on soft computing and its applications. The distinguished contributors consider the principal constituents of soft computing, namely fuzzy logic, neurocomputing, genetic computing, and probabilistic reasoning, the relations between them, and their fusion in industrial applications. Two areas emphasized in the book are how to achieve a synergistic combination of the main constituents of soft computing and how the combination can be used to achieve a high Machine Intelligence Quotient.

125 citations


Proceedings ArticleDOI
01 Dec 2012
TL;DR: In this paper, the authors present a review of electricity demand forecasting techniques, covering the various types of methodologies and models in the literature; forecasts can be broadly divided into three categories: short-term forecasts, which usually range from one hour to one week, medium-term forecasts, from a week to a year, and long-term forecasts, which extend beyond a year.
Abstract: Electricity demand forecasts are extremely important for energy suppliers and other participants in electric energy generation, transmission, distribution and markets, and accurate models for electric power load forecasting are essential to the operation and planning of a utility company. This paper presents a review of electricity demand forecasting techniques, covering the various types of methodologies and models in the literature. Load forecasting can be broadly divided into three categories: short-term forecasts, which usually range from one hour to one week; medium-term forecasts, which usually range from a week to a year; and long-term forecasts, which are longer than a year. Based on the studies reviewed, load forecasting techniques may be grouped into three major categories: traditional forecasting techniques, modified traditional techniques, and soft computing techniques.

122 citations


Book
12 Dec 2012
TL;DR: This is the first book focused on clustering with a particular emphasis on symmetry-based measures of similarity and metaheuristic approaches, and the aim is to find a suitable grouping of the input data set so that some criteria are optimized.
Abstract: Clustering is an important unsupervised classification technique where data points are grouped such that points that are similar in some sense belong to the same cluster. Cluster analysis is a complex problem as a variety of similarity and dissimilarity measures exist in the literature. This is the first book focused on clustering with a particular emphasis on symmetry-based measures of similarity and metaheuristic approaches. The aim is to find a suitable grouping of the input data set so that some criteria are optimized, and using this the authors frame the clustering problem as an optimization one where the objectives to be optimized may represent different characteristics such as compactness, symmetrical compactness, separation between clusters, or connectivity within a cluster. They explain the techniques in detail and outline many detailed applications in data mining, remote sensing and brain imaging, gene expression data analysis, and face detection. The book will be useful to graduate students and researchers in computer science, electrical engineering, system science, and information technology, both as a text and as a reference book. It will also be useful to researchers and practitioners in industry working on pattern recognition, data mining, soft computing, metaheuristics, bioinformatics, remote sensing, and brain imaging.
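As a flavor of what a symmetry-based similarity measure looks like, the following sketch implements a point-symmetry distance in the spirit of the measures the book emphasizes: a point is close to a cluster center if its mirror image about that center lands near actual data points. The k-nearest-neighbor scaling and the toy data are illustrative assumptions, not the book's exact definition.

```python
# Sketch of a point-symmetry distance for symmetry-based clustering.
import numpy as np

def ps_distance(x, c, data, knear=2):
    """Symmetry of x about center c: how close the mirrored point 2c - x
    lies to actual data points, scaled by the Euclidean distance to c."""
    mirrored = 2 * c - x
    d = np.linalg.norm(data - mirrored, axis=1)
    sym = np.sort(d)[:knear].mean()        # mean distance to k nearest points
    return sym * np.linalg.norm(x - c)

data = np.array([[0, 0], [2, 0], [1, 1], [1, -1],
                 [5, 5], [7, 5], [6, 6], [6, 4]])
center = data[:4].mean(axis=0)             # center of the first toy cluster
print(ps_distance(data[0], center, data))  # small value = highly symmetric
```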

121 citations


Journal ArticleDOI
TL;DR: The predicted structural response by ANNs can be used in the PBD framework when dynamic analyses are performed, aiming at reducing the excessive computational cost.

115 citations


Journal ArticleDOI
TL;DR: A novel DSP called the learning-effect DSP (LDSP) is proposed by considering the general effects of learning in DSP, and a modified simplified swarm optimization (SSO) method developed by revising the most recently published variants of SSO is proposed to solve this new problem.

97 citations


Journal ArticleDOI
TL;DR: The proposed methodology aims to bring methodological support to scenario-based decision making in scenario analysis by combining Delphi method, soft computing (fuzzy cognitive maps) and multicriteria (TOPSIS) techniques.
Abstract: Highlights: (1) The proposed methodology is a step forward with regard to the classic tools used in scenario analysis. (2) The approach aims to assess and rank the scenarios as a whole. (3) The proposal combines the Delphi method, soft computing (fuzzy cognitive maps) and multicriteria (TOPSIS) techniques. Scenarios describe events and situations that could occur in the future real world. Policy makers use scenario methods as a tool to build landscapes of possible futures at a national level. Based on these future visions, policy and decision makers are able to explore different courses of action. In recent years, the number of scenario methods and applications has been increasing, as academics and practitioners take a growing interest in them. In spite of the success of scenario methods, scenario-based decision making is still not a fully structured process. The proposed methodology aims to bring methodological support to scenario-based decision making in scenario analysis. The originality of the proposed approach with respect to others is that it aims to assess and rank the scenarios as a whole. Traditional approaches consider the future impact of each present entity in isolation; this assumption is a simplification of a more complex reality in which different entities interact with each other. The model the authors propose allows decision and policy makers to measure the impact of entity interactions. To reach this aim, the proposal combines the Delphi method, soft computing (fuzzy cognitive maps) and multicriteria (TOPSIS) techniques. In addition, a numerical example is developed to illustrate the proposal.
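Of the three combined techniques, TOPSIS is the most mechanical, so a minimal sketch of the TOPSIS ranking step is given below. The scenario scores and weights are invented for illustration, and the Delphi and fuzzy cognitive map stages of the methodology are not reproduced here.

```python
# Minimal TOPSIS ranking sketch (standard method, toy inputs).
import numpy as np

scores = np.array([[7, 9, 9],    # scenario A rated on 3 criteria
                   [8, 7, 8],    # scenario B
                   [9, 6, 7]])   # scenario C
weights = np.array([0.5, 0.3, 0.2])

norm = scores / np.linalg.norm(scores, axis=0)   # vector-normalize each criterion
v = norm * weights
ideal, anti = v.max(axis=0), v.min(axis=0)       # all criteria treated as benefits
d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal solution
d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)              # 1 = best, 0 = worst
print("ranking (best first):", closeness.argsort()[::-1])
```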

95 citations


Journal ArticleDOI
TL;DR: A more flexible fuzzy decision support system for the redistribution process in bike-sharing systems (BSS) that minimizes redistribution costs for bike-sharing companies, determining the optimal bike repositioning flows, distribution patterns, and time intervals between relocation operations, with the objective of a high level of user satisfaction.

Journal ArticleDOI
TL;DR: The development of models using Artificial Neural Networks with back propagation and Levenberg-Marquardt algorithms, radial basis functions, Fuzzy Logic, and decision tree algorithms such as M5 and REPTree for predicting the suspended sediment concentration at Kasol, upstream of the Bhakra reservoir in northern India, is presented.
Abstract: The prediction of the sediment loading generated within a watershed is an important input in the design and management of water resources projects. High variability of hydro-climatic factors with sediment generation makes the modelling of the sediment process cumbersome and tedious. The methods for the estimation of sediment concentration based on the properties of flow and sediment have limitations attributed to the simplification of important parameters and boundary conditions. Under such circumstances, soft computing approaches have proven to be an efficient tool in modelling the sediment concentration. The focus of this paper is to present the development of models using Artificial Neural Network (ANN) with back propagation and Levenberg-Marquardt algorithms, radial basis function (RBF), Fuzzy Logic, and decision tree algorithms such as M5 and REPTree for predicting the suspended sediment concentration at Kasol, upstream of the Bhakra reservoir, located in the Sutlej basin in northern India. The input vector to the various models using different algorithms was derived considering the statistical properties such as auto-correlation function, partial auto-correlation, and cross-correlation function of the time series. It was found that the M5 model performed well compared to other soft computing techniques such as ANN, fuzzy logic, radial basis function, and REPTree investigated in this study, and results of the M5 model indicate that all ranges of sediment concentration values were simulated fairly well. This study also suggests that M5 model trees, which are analogous to piecewise linear functions, have certain advantages over other soft computing techniques because they offer more insight into the generated model, are acceptable to decision makers, and always converge. Further, the M5 model tree offers explicit expressions for use by field engineers. DOI: 10.1061/(ASCE)HE.1943-5584.0000445. © 2012 American Society of Civil Engineers. CE Database subject headings: Suspended sediment; Neural networks; Fuzzy sets; Reservoirs. Author keywords: Suspended sediment concentration; Neural networks; Fuzzy Logic; M5; REPTree; Bhakra reservoir.

Journal ArticleDOI
TL;DR: The proposed soft computing method can reliably estimate the PCI and can be used in a pavement management system (PMS) with simple and accessible spreadsheet software; the results showed that the ANN- and GP-based predicted values are in good agreement with the field-measured data.
Abstract: The pavement condition index (PCI) is a widely used numerical index for the evaluation of the structural integrity and operational condition of pavements. Estimation of the PCI is based on the results of a visual inspection in which the type, severity, and quantity of distresses are identified. The purpose of this study is to develop an alternative approach for forecasting the PCI using optimization techniques, including artificial neural networks (ANN) and genetic programming (GP). The proposed soft computing method can reliably estimate the PCI and can be used in a pavement management system (PMS) with simple and accessible spreadsheet software. A database composed of the PCI results of more than 1,250 km of highways in Iran was used to develop the models. The results showed that the ANN- and GP-based predicted values are in good agreement with the field-measured data. In addition, the ANN-based model was more precise than the GP-based model. For more straightforward applications, a computer program was developed based on the results obtained.

Journal ArticleDOI
TL;DR: This work uses a novel rule-based classifier design method, constructed by using improved simplified swarm optimization (SSO), to mine a thyroid gland dataset from UCI databases to improve solution quality and evaluate the classification performance of the proposed improved SSO.

Journal ArticleDOI
TL;DR: A detailed comparative analysis of the Hebbian-like learning algorithms applied to train Fuzzy Cognitive Maps (FCMs) operating as pattern classifiers, is presented in this paper.
Abstract: A detailed comparative analysis of Hebbian-like learning algorithms applied to train Fuzzy Cognitive Maps (FCMs) operating as pattern classifiers is presented in this paper. These algorithms aim to find appropriate weights between the concepts of the FCM classifier so that it equilibrates to a desired state (class mapping). For this purpose, six different types of Hebbian learning algorithms from the literature have been selected and studied in this work. Along with the theoretical description of these algorithms and the analysis of their performance in classifying known patterns, a sensitivity analysis of the applied classification scheme with respect to some configuration parameters has been carried out. It is worth noting that the algorithms are studied in a comparative fashion, under common configurations, on several benchmark pattern classification datasets, yielding useful conclusions about their training capabilities.
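A compact sketch of the underlying machinery: FCM state updates through a sigmoid transfer function, combined with a nonlinear Hebbian-style weight adjustment in the spirit of (but not identical to) the variants the paper compares. The concept values, learning rate, and exact update form are illustrative assumptions.

```python
# Sketch of FCM inference with a simplified nonlinear Hebbian-like update.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

n = 4                                     # number of concepts
rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, size=(n, n))       # causal weights between concepts
np.fill_diagonal(W, 0.0)                  # concepts do not influence themselves
A = np.array([0.4, 0.7, 0.2, 0.5])        # initial concept activation levels
eta = 0.05                                # learning rate

for step in range(50):
    A_new = sigmoid(A + A @ W)            # standard FCM state update rule
    # Hebbian-style adjustment: reinforce w[i, j] when concept i is active and
    # concept j responds, with a decay term that keeps weights bounded.
    W += eta * (np.outer(A, A_new) - (A ** 2)[:, None] * W)
    np.fill_diagonal(W, 0.0)
    A = A_new

print("equilibrium activations:", np.round(A, 3))
```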

Journal ArticleDOI
TL;DR: The paper shows the possibilities of generalizing two-class classification into multi-class classification by means of a fuzzy inference system and compares the proposed combination methods with ECOC and two variations of decision templates, based on Euclidean and symmetric distance.

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter describes how a core concept of rough sets, the lower and upper approximation of a set, can be used in clustering and is shown to be useful for representing groups of highway sections, web users, and supermarket customers.
Abstract: Clustering algorithms are probably the most commonly used methods in data mining. Applications can be found in virtually any domain; prominent areas of application are e.g. bioinformatics, engineering and marketing besides many others. In many applications the classic k-means clustering algorithm is applied. Its fuzzy version, Bezdek’s fuzzy c-means has also gained tremendous attention. Another soft computing k-means algorithm based on rough set has been recently introduced by Lingras. This chapter describes how a core concept of rough sets, the lower and upper approximation of a set, can be used in clustering. Rough clusters are shown to be useful for representing groups of highway sections, web users, and supermarket customers.
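The following sketch shows a Lingras-style rough k-means in which points nearly equidistant from two centroids enter only the upper approximations (the boundary region), while unambiguous points fall in a lower approximation; centroids are updated as a weighted mix of the two regions. The threshold `eps`, the weights, and the toy data are illustrative choices, not the chapter's exact parameters.

```python
# Sketch of rough k-means with lower approximations and boundary regions.
import numpy as np

def rough_kmeans(X, k=2, eps=1.3, w_low=0.7, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d_min = d.min(axis=1)
        # a point is ambiguous if a second centroid is nearly as close
        near = d <= eps * (d_min[:, None] + 1e-12)
        ambiguous = near.sum(axis=1) > 1
        for i in range(k):
            in_lower = (d.argmin(axis=1) == i) & ~ambiguous  # lower approximation
            in_bound = near[:, i] & ambiguous                # boundary region
            if in_lower.any() and in_bound.any():
                centers[i] = (w_low * X[in_lower].mean(axis=0)
                              + (1 - w_low) * X[in_bound].mean(axis=0))
            elif in_lower.any():
                centers[i] = X[in_lower].mean(axis=0)
            elif in_bound.any():
                centers[i] = X[in_bound].mean(axis=0)
    return centers, ambiguous

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
centers, boundary = rough_kmeans(X)
print(centers, "\npoints in boundary region:", boundary.sum())
```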

Journal ArticleDOI
TL;DR: The main idea of this work is to enrich data with a qualitative representation of the underlying context by means of Fuzzy Logic, in order to automatically recognize the context and consequently find the right set of healthcare services among those available.

Journal ArticleDOI
TL;DR: A simple direct solution is proposed that significantly reduces the inherent difficulties of finding a solution while ensuring an optimal one; the principle of optimality is effectively used to make the proposed technique more computationally efficient and useful.
Abstract: Much research has been focused on developing optimization techniques, from classical methods to nontraditional soft computing techniques, to solve the distribution system planning problem. Most of these methods preserve the distinctions and niceties of the problem but depend on complex search algorithms with many convergence-related issues that require more time to reach a firm conclusion at the planning stage. This paper proposes a simple direct solution that significantly reduces the inherent difficulties of finding a solution while ensuring an optimal one. Moreover, the principle of optimality is effectively used to make the proposed technique more computationally efficient and useful. The effectiveness of the developed planning technique has been verified on different test cases.

Journal ArticleDOI
01 Dec 2012
TL;DR: An innovative soft computing methodology, the Fuzzy Grey Cognitive Map (FGCM), is integrated into traditional reliability analysis; the combination shows potential benefits for assisting electric power system decision-makers in supplying their customers with electrical energy with a high degree of reliability.
Abstract: Current industrial equipment has become larger and more complex, and conventional reliability techniques cannot adequately support its functional assessment. This paper integrates an innovative soft computing methodology, the Fuzzy Grey Cognitive Map (FGCM), into a traditional reliability analysis to gain better knowledge of the system under study. FGCMs are used for evaluating, modelling and aiding decision-making by examining causal relations among relevant domain concepts. The proposed procedure is illustrated with a reliability analysis of a transformer active part. Twenty failure causes in the transformer's active part are identified and assessed, and six failure scenarios are simulated. The results reveal the potential of combining FGCMs with failure analysis for complex systems, and the benefits the methodology could provide in assisting electric power system decision-makers to supply their customers with electrical energy with a high degree of reliability.

Journal ArticleDOI
TL;DR: This study proposes a model-based robust fault detection and isolation (RFDI) method with hybrid structure that was tested on a single-shaft industrial gas turbine prototype model and has been evaluated based on the gas turbine data.

Journal ArticleDOI
01 Oct 2012
TL;DR: Rough clustering offers strong tools to detect changing data structures; the proposed algorithm has been applied to synthetic as well as real-world data sets, where it provides new insights regarding the underlying dynamic phenomena.
Abstract: Dynamic data mining has gained increasing attention in the last decade. It addresses changing data structures, which can be observed in many real-life applications, e.g. the buying behavior of customers. As opposed to classical, i.e. static, data mining, where the challenge is to discover patterns inherent in given data sets, in dynamic data mining the challenge is to understand - and in some cases even predict - how such patterns will change over time. Since changes in general lead to uncertainty, appropriate approaches for uncertainty modeling are needed in order to capture, model, and predict the respective phenomena in dynamic environments. As a consequence, the combination of dynamic data mining and soft computing is a very promising research area. The proposed algorithm consists of a dynamic clustering cycle in which the data set is refreshed from time to time. Within this cycle, criteria check whether the newly arrived data are structurally different from the data already analyzed. If so, appropriate actions are triggered, in particular an update of the initial settings of the cluster algorithm. As we show, rough clustering offers strong tools to detect such changing data structures. The proposed dynamic rough clustering algorithm has been evaluated on synthetic as well as real-world data sets, where it provides new insights regarding the underlying dynamic phenomena.

Journal ArticleDOI
TL;DR: A new hybrid model is proposed that combines artificial neural networks with fuzzy logic in order to benefit from the unique advantages of both - the uncertainty handling of fuzzy logic and the classification power of artificial neural networks (ANNs) - to construct an efficient and accurate hybrid classifier for situations where little data is available.
Abstract: Classification is an important data mining task that is widely used in several different real-world applications. In microarray analysis, classification techniques are applied in order to discriminate diseases or to predict outcomes based on gene expression patterns, and perhaps even to identify the best treatment for a given genetic signature. The most important challenge in gene expression data analysis lies in how to deal with its unique "high dimension, small sample" characteristic, which makes many traditional classification techniques inapplicable or inefficient; hence, more dedicated techniques are needed in order to approach this problem. Fuzzy logic has recently been shown to be a powerful and suitable soft computing tool for handling complex problems under incomplete data conditions. In this paper, a new hybrid model is proposed that combines artificial neural networks with fuzzy logic in order to benefit from the unique advantages of both - the uncertainty handling of fuzzy logic and the classification power of artificial neural networks (ANNs) - to construct an efficient and accurate hybrid classifier for situations where little data is available. Because it uses fuzzy rather than crisp parameters, the proposed model needs a smaller training data set than traditional non-fuzzy neural networks, or, with the same training sample, can learn better and hence yield more accurate results. In addition to the theoretical case for fuzzy logic, empirical results on gene expression classification indicate that the proposed model achieves effectively improved classification accuracy in comparison with traditional artificial neural networks (ANNs) and also some other well-known statistical and intelligent classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), the K-nearest neighbor (KNN) classifier, and support vector machines (SVMs). Therefore, the proposed model can be applied as an appropriate alternative approach for solving problems with scant data, such as gene expression data classification, specifically when higher classification accuracy is needed.

Journal ArticleDOI
TL;DR: An SA-based hybrid method called entropy-based SA (EbSA) is developed for performance optimization of BS, which is used to detect and track objects in videos; it is shown that the proposed method is preferable to the regular BS method.
Abstract: In computer vision, moving object detection and tracking methods are the most important preliminary steps for higher-level video analysis applications. In this frame, the background subtraction (BS) method is a well-known method in video processing based on frame differencing. The basic idea is to subtract the current frame from a background image and to classify each pixel as either foreground or background by comparing the difference with a threshold. The moving object is thus detected and tracked by frame differencing and by learning an updated background model. Simulated annealing (SA), in turn, is an optimization technique for soft computing in the artificial intelligence area. The p-median problem is a basic model of discrete location theory in operational research (OR) and is an NP-hard combinatorial optimization problem: the aim is to find p facility locations that minimize the total weighted distance between demand points (nodes) and the facilities closest to them. The SA method is used to solve the p-median problem as a probabilistic metaheuristic. In this paper, an SA-based hybrid method called entropy-based SA (EbSA) is developed for performance optimization of BS, which is used to detect and track objects in videos. The SA modification to the BS method (SA-BS) is proposed to determine the optimal threshold for foreground-background (i.e., bi-level) segmentation and to learn the background model for object detection. At these segmentation and learning stages, all of the optimization problems considered in this study are cast as p-median problems. The performance of the SA-BS and regular BS methods is measured on four video clips, and the results are evaluated quantitatively. The obtained performance results and statistical analysis (i.e., the Wilcoxon median test) show that the proposed method is preferable to the regular BS method. The contribution of this study is also discussed.
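Since the optimization problems here are cast as p-median problems solved by SA, a minimal sketch of simulated annealing on a toy p-median instance follows. The instance, cooling schedule, and neighborhood move are invented for illustration and do not include the paper's entropy-based modification.

```python
# Sketch of simulated annealing for a toy p-median instance.
import math, random

random.seed(0)
points = [(random.random(), random.random()) for _ in range(40)]  # demand nodes
p = 3

def cost(medians):
    """Total distance from every demand point to its closest chosen median."""
    return sum(min(math.dist(pt, points[m]) for m in medians) for pt in points)

current = random.sample(range(len(points)), p)
best, T = list(current), 1.0
while T > 1e-3:
    neighbor = list(current)
    neighbor[random.randrange(p)] = random.randrange(len(points))  # move one median
    delta = cost(neighbor) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / T):  # Metropolis rule
        current = neighbor
        if cost(current) < cost(best):
            best = list(current)
    T *= 0.995                                               # geometric cooling

print("best medians:", sorted(best), "cost:", round(cost(best), 3))
```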

Journal ArticleDOI
01 Jul 2012
TL;DR: This paper presents an outline of a fuzzy ontology as an enhanced version of classical ontology and demonstrates some advantages for practical decision making.
Abstract: Knowledge mobilisation is a transition from the prevailing knowledge management technology that has been widely used in industry for the last 20 years to a new methodology with innovative methods for knowledge representation, formation and development, and for knowledge retrieval and distribution. Knowledge mobilisation aims at coming to terms with some of the problems of knowledge management while at the same time introducing new theory, new methods and new technology. More precisely, this paper presents an outline of a fuzzy ontology as an enhanced version of classical ontology and demonstrates some of its advantages for practical decision making. We show that a number of soft computing techniques, e.g. aggregation functions and interval-valued fuzzy numbers, support effective and practical decision making on the basis of the fuzzy ontology. We demonstrate the knowledge mobilisation methods with the construction of a support system for finding the best available wine for a number of wine-drinking occasions, using a fuzzy wine ontology and fuzzy reasoning methods; the support system has been implemented for a Nokia N900 smart phone.

Journal ArticleDOI
TL;DR: This paper surveys the relationship between rough sets and other components of soft computing, analysing how this hybridization helps improve system performance.

Journal ArticleDOI
TL;DR: The ANFIS model for prediction of permeability coefficient revealed the most reliable prediction when compared with the ANN models, and the use of soft computing techniques will provide new approaches and methodologies in prediction of some parameters in soil mechanics.
Abstract: Correlations have been significant since the earliest days of soil mechanics; in some cases they are essential, because a quantity is difficult to measure directly, and in other cases it is desirable to corroborate results obtained from other tests. Soft computing techniques are now being used as an alternative statistical tool, and in recent years techniques such as artificial neural networks, fuzzy inference systems, genetic algorithms, and their hybrids have been employed to develop predictive models for estimating the needed parameters. Determination of the permeability coefficient (k) of soils is very important for characterising hydraulic conductivity, and is difficult, expensive, time-consuming, and involves destructive tests. In this paper, the use of soft computing techniques such as ANNs (MLP, RBF, etc.) and ANFIS (adaptive neuro-fuzzy inference system) for predicting the permeability of coarse-grained soils is described and compared. All of the constructed soft computing models exhibited high performance in predicting k. ANN models with three inputs and one output were applied successfully and gave reliable predictions; all four ANN algorithms showed almost the same prediction capability, though the accuracy of the MLP was somewhat higher than that of the RBF models. The ANFIS model gave the most reliable predictions of the permeability coefficient when compared with the ANN models, and the use of soft computing techniques will provide new approaches and methodologies for predicting some parameters in soil mechanics.

Journal ArticleDOI
01 May 2012
TL;DR: It is observed that both GEP and SVM equally outperform the other two classifiers (PSVM and Wavelet-GEP) considered in the present study.
Abstract: Fault detection and isolation in rotating machinery is very important from an industrial viewpoint, as it can help in maintenance activities and significantly reduce the down-time of the machine, resulting in major cost savings. Traditional methods have been found to be not very accurate, and soft computing based methods are now being increasingly employed for the purpose. The proposed method is based on a genetic programming technique known as gene expression programming (GEP), a relatively new member of the genetic programming family. The main objective of this paper is to compare the classification accuracy of the proposed evolutionary computing based method with other pattern classification approaches such as the support vector machine (SVM), Wavelet-GEP, and the proximal support vector machine (PSVM). For this purpose, six states, viz. normal, bearing fault, impeller fault, seal fault, combined impeller and bearing fault, and cavitation, are simulated on a centrifugal pump. A decision tree algorithm is used to select the features. The results obtained using GEP are compared with the performance of Wavelet-GEP, SVM and PSVM based classifiers. It is observed that GEP and SVM both equally outperform the other two classifiers (PSVM and Wavelet-GEP) considered in the present study.

Journal ArticleDOI
01 Feb 2012
TL;DR: Three models - a linear model, a generalized regression neural network (GRNN) and an adaptive network based fuzzy inference system (ANFIS) - to estimate the train station parking (TSP) error in urban rail transit are presented.
Abstract: This paper presents three models - a linear model, a generalized regression neural network (GRNN) and an adaptive network based fuzzy inference system (ANFIS) - for estimating the train station parking (TSP) error in urban rail transit. We also develop some statistical indices to evaluate the reliability of keeping parking errors within a certain range. By comparing modeling errors, the subtractive clustering method, rather than the grid partition method, is chosen to generate the initial fuzzy system for ANFIS. Collected TSP data from two railway stations are then used to identify the parameters of the three proposed models. All three models can keep the average parking error within an acceptable range, and tuning the parameters of the models is effective in dynamically reducing parking errors. Experiments at the two stations indicate that, among the three models, (1) the linear model ranks third in training and second in testing, but nevertheless meets the required reliability for both stations; (2) the GRNN based model achieves the best performance in training but the poorest in testing due to overfitting, and thus fails to meet the required reliability for the two stations; (3) the ANFIS based model performs better than the linear model in both training and testing. After analyzing parking error characteristics and developing a parking strategy, we confirm the effectiveness of the proposed ANFIS model in a real-world application.
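For reference, a GRNN in Specht's formulation is simply a Gaussian-kernel-weighted average of training targets, as in the sketch below. The synthetic stand-in data and bandwidth are assumptions; note that a too-small bandwidth memorizes the training set, which is exactly the kind of overfitting the paper reports for the GRNN in testing.

```python
# Minimal GRNN sketch: prediction as a kernel-weighted average of targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # squared distances between every query and every training sample
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))          # radial basis weights
    return (w @ y_train) / w.sum(axis=1)        # weighted average of targets

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))           # hypothetical input features
y = 0.3 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=100)
Xq = rng.uniform(-1, 1, size=(5, 2))
print(grnn_predict(X, y, Xq))
```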

BookDOI
21 Feb 2012
TL;DR: Rough Set Theory, introduced by Pawlak in the early 1980s, has become an important part of soft computing within the last 25 years, but much of the focus has been on the theoretical understanding.
Abstract: Rough Set Theory, introduced by Pawlak in the early 1980s, has become an important part of soft computing within the last 25 years. However, much of the focus has been on the theoretical understanding of Rough Sets, and a survey of Rough Sets and their applications within business and industry has been much desired. Rough Sets: Selected Methods and Applications in Management and Engineering provides context to Rough Set theory, with each chapter exploring a real-world application of Rough Sets. The book is relevant to managers striving to improve their businesses, industry researchers looking to improve the efficiency of their solutions, and university researchers wanting to apply Rough Sets to real-world problems.