
Showing papers on "Soft computing published in 2014"


Journal ArticleDOI
01 May 2014-Energy
TL;DR: The proposed adaptive intelligent energy management system can learn while running and make proper adjustments during operation, and it is shown that the proposed intelligent energy management system improves on the performance of other existing systems.

161 citations


Journal ArticleDOI
TL;DR: An extended rough set model, called composite rough sets, is presented, and a novel matrix-based method for fast updating of approximations in dynamic composite information systems is proposed.

160 citations


Journal ArticleDOI
TL;DR: The proposed cooperative Game-based Fuzzy Q-learning (G-FQL) model implements cooperative defense counter-attack scenarios for the sink node and the base station to operate as rational decision-maker players through a game theory strategy, and yields a greater improvement than existing machine learning methods.

155 citations


Patent
18 Mar 2014
TL;DR: In this paper, new algorithms, methods, and systems for artificial intelligence, soft computing, and deep learning/recognition for image recognition have been proposed, e.g., for action, gesture, emotion, expression, biometrics, fingerprint, facial, OCR (text), background, relationship, position, pattern, and object.
Abstract: Specification covers new algorithms, methods, and systems for artificial intelligence, soft computing, and deep learning/recognition, e.g., image recognition (e.g., for action, gesture, emotion, expression, biometrics, fingerprint, facial, OCR (text), background, relationship, position, pattern, and object), Big Data analytics, machine learning, training schemes, crowd-sourcing (experts), feature space, clustering, classification, SVM, similarity measures, modified Boltzmann Machines, optimization, search engine, ranking, question-answering system, soft (fuzzy or unsharp) boundaries/impreciseness/ambiguities/fuzziness in language, Natural Language Processing (NLP), Computing-with-Words (CWW), parsing, machine translation, sound and speech recognition, video search and analysis (e.g., tracking), image annotation, geometrical abstraction, image correction, semantic web, context analysis, data reliability, Z-number, Z-Web, Z-factor, rules engine, control system, autonomous vehicle, self-diagnosis and self-repair robots, system diagnosis, medical diagnosis, biomedicine, data mining, event prediction, financial forecasting, economics, risk assessment, e-mail management, database management, indexing and join operation, memory management, data compression, event-centric social network, and Image Ad Network.

110 citations


Journal ArticleDOI
TL;DR: The proposed cooperative-based fuzzy artificial immune system (Co-FAIS) improves detection accuracy and successful defense rate performance against attacks compared to conventional empirical methods.

102 citations


Journal ArticleDOI
TL;DR: A residual-based approach for fault detection at rolling mills based on data-driven soft computing techniques that transforms the original measurement signals into a model space by identifying the multi-dimensional relationships contained in the system.

85 citations


BookDOI
16 Jul 2014
TL;DR: These three volumes (CCIS 442, 443, 444) constitute the proceedings of the 15th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2014, held in Montpellier, France, July 15-19, 2014.
Abstract: These three volumes (CCIS 442, 443, 444) constitute the proceedings of the 15th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2014, held in Montpellier, France, July 15-19, 2014. The 180 revised full papers presented together with five invited talks were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on uncertainty and imprecision on the web of data; decision support and uncertainty management in agri-environment; fuzzy implications; clustering; fuzzy measures and integrals; non-classical logics; data analysis; real-world applications; aggregation; probabilistic networks; recommendation systems and social networks; fuzzy systems; fuzzy logic in boolean framework; management of uncertainty in social networks; from different to same, from imitation to analogy; soft computing and sensory analysis; database systems; fuzzy set theory; measurement and sensory information; aggregation; formal methods for vagueness and uncertainty in a many-valued realm; graduality; preferences; uncertainty management in machine learning; philosophy and history of soft computing; soft computing and sensory analysis; similarity analysis; fuzzy logic, formal concept analysis and rough set; intelligent databases and information systems; theory of evidence; aggregation functions; big data - the role of fuzzy methods; imprecise probabilities: from foundations to applications; multinomial logistic regression on Markov chains for crop rotation modelling; intelligent measurement and control for nonlinear systems.

77 citations


Journal ArticleDOI
01 Sep 2014
TL;DR: This paper provides a comprehensive overview of the published work on SC applications in different mining areas and comments on the use of SC in several mining problems and on possible future applications of advanced SC technologies.
Abstract: Soft computing (SC) is a field of computer science that resembles the processes of the human brain. While conventional hard computing runs on crisp values and binary numbers, SC uses soft values and fuzzy sets. In fact, SC technology is capable of addressing imprecision and uncertainty. The application of SC techniques in the mining industry is fairly extensive and covers a considerable number of applications. This paper provides a comprehensive overview of the published work on SC applications in different mining areas. A brief introduction to mining and the general field of SC applications is presented in the first section of the paper. The second section comprises four review chapters. Mining method selection, equipment selection problems and their applications in SC technologies are presented in chapters one and two. Chapter three discusses rock mechanics-related subjects and some representative SC applications in this field. The last chapter presents rock blasting related SC applications, including blast design and hazards. The final section of the paper comments on the use of SC in several mining problems and on possible future applications of advanced SC technologies.

77 citations


Journal ArticleDOI
TL;DR: Predictive modeling of groundwater level fluctuations is essential for groundwater resource development and management, and appropriate care should be taken in selecting a suitable methodology for modeling complex and noisy hydrological time series.
Abstract: Predictive modeling of hydrological time series is essential for groundwater resource development and management. Here, we examined the comparative merits and demerits of three modern soft computing techniques, namely, artificial neural networks (ANN) optimized by scaled conjugate gradient (SCG) (ANN.SCG), Bayesian neural networks (BNN) optimized by SCG (BNN.SCG) with evidence approximation, and the adaptive neuro-fuzzy inference system (ANFIS), in the predictive modeling of groundwater level fluctuations. As a first step of our analysis, a sensitivity analysis was carried out using an automatic relevance determination scheme to examine the relative influence of each of the hydro-meteorological attributes on groundwater level fluctuations. Secondly, stability was studied by perturbing the underlying data sets with different levels of correlated red noise. Finally, guided by the ensuing theoretical experiments, the above techniques were applied to model the groundwater level fluctuation time series of six wells from a hard rock area of Dindigul in Southern India. We used four standard quantitative statistical measures to compare the robustness of the different models: (1) root mean square error, (2) reduction of error, (3) index of agreement (IA), and (4) Pearson's correlation coefficient (R). Based on the above analyses, we found that the ANFIS model performed better in modeling noise-free data than the BNN.SCG and ANN.SCG models. However, when modeling hydrological time series correlated with a significant amount of red noise, the BNN.SCG models performed better than both the ANFIS and ANN.SCG models. Hence, appropriate care should be taken in selecting a suitable methodology for modeling complex and noisy hydrological time series.
These results may be used to constrain models of groundwater level fluctuations, which would in turn facilitate the development and implementation of more effective and sustainable groundwater management and planning strategies in semi-arid hard rock areas such as Dindigul, Southern India.
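The four quantitative measures named above are standard and easy to reproduce. A minimal sketch in plain Python follows; the "reduction of error" is taken here in its usual Nash-Sutcliffe-like form, which is an assumption on my part, not a definition confirmed by the paper:

```python
import math

def rmse(obs, pred):
    # (1) Root mean square error.
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def reduction_of_error(obs, pred):
    # (2) RE = 1 - SSE / variance about the observed mean (assumed form).
    mean_o = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - sse / sst

def index_of_agreement(obs, pred):
    # (3) Willmott's IA in [0, 1]; 1 means perfect agreement.
    mean_o = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean_o) + abs(o - mean_o)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den

def pearson_r(obs, pred):
    # (4) Pearson's correlation coefficient.
    mean_o = sum(obs) / len(obs)
    mean_p = sum(pred) / len(pred)
    cov = sum((o - mean_o) * (p - mean_p) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mean_o) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mean_p) ** 2 for p in pred))
    return cov / (so * sp)

obs = [10.2, 9.8, 9.5, 9.9, 10.4]    # illustrative water levels, not the paper's data
pred = [10.0, 9.9, 9.6, 9.7, 10.3]
print(round(rmse(obs, pred), 3), round(index_of_agreement(obs, pred), 3))
```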

77 citations


Journal ArticleDOI
TL;DR: This paper highlights previous research and the methods that have been used in the surface reconstruction field, where suitable methods are chosen based on the data used.
Abstract: Surface reconstruction means retrieving data by scanning an object with a device such as a laser scanner and reconstructing it on a computer to regain a soft copy of the data for that particular object. It is a reverse process and is very useful, especially when the original data for the object is missing and no backup was made. Hence, the data can be recollected and stored for future purposes. The data can be in the form of structured or unstructured points. The accuracy of the reconstructed result is a concern because, if the result is incorrect, it will not be exactly the same as the original shape of the object. Therefore, suitable methods should be chosen based on the data used. Soft computing methods have also been used in the reconstruction field. This paper highlights the previous research and methods that have been used in the surface reconstruction field.

69 citations


Journal ArticleDOI
TL;DR: An optimization algorithm, based on particle swarm optimization, is designed and presented in the current contribution, aiming at the efficient solution of UTRP, and the performance of the proposed soft computing algorithm is quite competitive compared to existing techniques.

Journal ArticleDOI
01 Dec 2014
TL;DR: A collection of several benchmark problems in nonlinear control and system identification, which are presented in a standardized format and range from component to plant level problems and originate mainly from the areas of mechatronics/drives and process systems.
Abstract: Highlights: (1) Collection of 13 benchmark problems described in detail in a standardized way. (2) General assessment criteria as well as problem-specific tests specified. (3) Benchmarks span from simple artificial systems to complex entire industrial plants. (4) Many domains covered, incl. drives, mechatronics, chemical plants, wind turbines. (5) Examples of use in the Soft Computing community are provided for each problem. Using benchmark problems to demonstrate and compare novel methods to the work of others could be more widely adopted by the Soft Computing community. This article contains a collection of several benchmark problems in nonlinear control and system identification, which are presented in a standardized format. Each problem is augmented by examples where it has been adopted for comparison. The selected examples range from component to plant level problems and originate mainly from the areas of mechatronics/drives and process systems. The authors hope that this overview contributes to a better adoption of benchmarking in method development, test and demonstration.

Journal ArticleDOI
TL;DR: Development and application of soft computing techniques such as rough sets, grey sets, fuzzy systems, genetic algorithms, support vector machines, Bayesian networks, and hybrid models are discussed as a future research direction.
Abstract: Information security risk analysis is becoming an increasingly essential component of an organization's operations. Traditional information security risk analysis relies on quantitative and qualitative analysis methods, each of which has advantages for information risk analysis; the analytic hierarchy process (AHP) has also been widely used in security assessment. A future research direction may be the development and application of soft computing techniques such as rough sets, grey sets, fuzzy systems, genetic algorithms, support vector machines, Bayesian networks, and hybrid models, where a hybrid model is developed by integrating two or more existing models. Practical advice for evaluating information security risk is discussed; the approach combines AHP with the fuzzy comprehensive evaluation method.

Journal ArticleDOI
TL;DR: Enhanced predictive accuracy and capability of generalization can be achieved using the SVR approach compared to other soft computing methodologies.

Journal ArticleDOI
TL;DR: This study aims to present a new approach for revealing the causal relationship between values of attributes in an information system; it compares soft attribute analysis with rough attribute analysis and also relates it to soft association rules mining.
Abstract: Soft set theory provides a parameterized treatment of uncertainty, which is closely related to soft computing models like fuzzy sets and rough sets. Based on soft sets and logical formulas over them, this study aims to present a new approach for revealing the causal relationship between values of attributes in an information system. The main procedure of our new method is as follows: First, we choose the attributes to be analyzed and construct some partition soft sets from a given information system. Then we compute the extended union of the obtained partition soft sets, which results in a covering soft set. Further, we transform the obtained covering soft set into a decision soft set and consider logical formulas over it. Next, we calculate various types of soft truth degrees of elementary soft implications. Finally, we can rank attribute values and plot some illustrative graphs, which helps us extract useful knowledge from the given information system. We use several examples, including a classical example given by Pawlak and a practical application concerning IT applying features analysis, to illustrate the newly proposed method and related concepts. In addition, we compare soft attribute analysis with rough attribute analysis and also relate it to soft association rules mining.
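The partition/covering construction at the core of the procedure can be sketched in a few lines. The representation below (a soft set as a dict from parameters to subsets of the universe) and all names are illustrative assumptions, not the paper's notation:

```python
# A toy information system: objects -> attribute values.
table = {
    "o1": {"colour": "red",  "size": "big"},
    "o2": {"colour": "red",  "size": "small"},
    "o3": {"colour": "blue", "size": "big"},
}

def partition_soft_set(table, attribute):
    # One parameter per attribute value; its image is the block of objects
    # sharing that value, so the images partition the universe.
    soft = {}
    for obj, row in table.items():
        soft.setdefault((attribute, row[attribute]), set()).add(obj)
    return soft

def extended_union(*soft_sets):
    # Extended union keeps every parameter; images of shared parameters merge.
    # Applied to partition soft sets it yields a covering soft set.
    out = {}
    for s in soft_sets:
        for param, image in s.items():
            out[param] = out.get(param, set()) | image
    return out

covering = extended_union(partition_soft_set(table, "colour"),
                          partition_soft_set(table, "size"))
print(covering)
```

The decision soft set, soft implications, and soft truth degrees of the actual method would be built on top of this covering structure.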

Dissertation
01 Jan 2014
TL;DR: A Fuzzy Expert System (FES) is proposed for student academic performance evaluation based on fuzzy logic techniques; the work introduces the principles behind fuzzy logic and illustrates how these principles could be applied by educators to evaluate student academic performance.

Book
26 Jan 2014
TL;DR: This book covers direct and inverse problems, theoretical and computational tools (computational mechanics, computational and structural optimization, and selected soft computing tools), and applications to inverse problems in statics and dynamics.
Abstract: Preface. Part I: Introduction. Problem Description. 1. Direct and Inverse Problems. Part II: Theoretical and Computational Tools. 2. Computational Mechanics. 3. Computational and Structural Optimization. 4. Selected Soft Computing Tools. Part III: Applications to Inverse Problems. 5. Static Problems. 6. Steady-State Dynamics. 7. Transient Dynamics. References.

Journal ArticleDOI
TL;DR: An Interval-Valued Fuzzy Preference Selection Index (IVF-PSI) method aiming at solving complex decision making problems, in which the performance ratings of candidates are described by using the concept of the IVFSs.
Abstract: Multiple Attributes Decision Making (MADM) is the process of finding the best candidate and involves the evaluation and selection among a finite number of potential candidates to solve real-life complex decision problems. In classical MADM methods, the relative importance of the conflicting criteria and performance ratings of candidates are determined precisely. However, in real-world systems related to human resource management, decision making problems are often uncertain or vague, and because of the lack of information, the future state of these systems cannot be known completely. Moreover, if decision makers cannot reach an agreement on the method of defining linguistic variables based on the traditional fuzzy sets, the Interval-Valued Fuzzy Sets (IVFSs) theory can provide a more accurate and practical modeling. This paper presents an Interval-Valued Fuzzy Preference Selection Index (IVF-PSI) method aiming at solving complex decision making problems, in which the performance ratings of candidates are described by using the concept of the IVFSs. Finally, the executive procedure of the proposed IVF-PSI method is illustrated by applying it to the expatriate selection process from the viewpoint of human resource managers.

Journal ArticleDOI
01 Jan 2014
TL;DR: Comparisons show that a TSK fuzzy rule-based system outperformed the other approaches in terms of prediction accuracy, and it is shown that, although these methods have only recently been applied to airport problems, they present promising features for such problems.
Abstract: The predicted growth in air transportation and the ambitious goal of the European Commission to have on-time performance of flights within 1 min make efficient and predictable ground operations at airports indispensable. Accurately predicting taxi times of arrivals and departures serves as an important key task for runway sequencing, gate assignment and ground movement itself. This research tests different statistical regression approaches as well as various regression methods which fall into the realm of soft computing to more accurately predict taxi times. Historic data from two major European airports is utilised for cross-validation. Detailed comparisons show that a TSK fuzzy rule-based system outperformed the other approaches in terms of prediction accuracy. Insights from this approach are then presented, focusing on the analysis of taxi-in times, which is rarely discussed in the literature. The aim of this research is to unleash the power of soft computing methods, in particular fuzzy rule-based systems, for taxi time prediction problems. Moreover, we aim to show that, although these methods have only recently been applied to airport problems, they present promising features for such problems.


Journal ArticleDOI
TL;DR: A hybrid model is proposed in which the software projects are divided into several clusters based on key attributes and the promising results showed that the proposed localization can considerably improve the accuracy of estimates.
Abstract: Work on the estimation of software development effort has mostly centered on the accuracy of estimates obtained by dealing with heterogeneous datasets, regardless of the fact that software projects are inherently complex and uncertain. In particular, Analogy Based Estimation (ABE), a widely accepted estimation method, suffers a great deal from the problem of inconsistent and non-normal datasets, because it is a comparison-based method and the quality of comparisons strongly depends on the consistency of projects. In order to overcome this problem, prior studies have suggested the use of weighting methods, outlier elimination techniques and various types of soft computing methods. Although the proposed methods have reduced the complexity and uncertainty of projects, the results are still not convincing, and the methods are limited to special domains of software projects, which makes generalization impossible. Localization of the comparison and weighting processes through clustering of projects is the main idea behind this paper. A hybrid model is proposed in which the software projects are divided into several clusters based on key attributes (development type, organization type and development platform). A combination of ABE and the Particle Swarm Optimization (PSO) algorithm is used to design a weighting system in which the project attributes of different clusters are given different weights. Instead of comparing a new project with all the historical projects, it is only compared with the projects located in the related clusters based on the common attributes. The proposed method was evaluated on three real datasets that include a total of 505 software projects. The performance of the proposed model was compared with other well-known estimation methods, and the promising results showed that the proposed localization can considerably improve the accuracy of estimates.
Besides the increase in accuracy, the results also certified that the proposed method is flexible enough to be used in a wide range of software projects.
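The localization idea is easy to sketch: a new project is compared only with historical projects sharing its key categorical attributes, using attribute weights that the paper tunes per cluster with PSO. In the sketch below the weights are simply fixed, and all field names and values are invented for illustration:

```python
import math

# Invented historical projects; "effort" is in person-hours.
projects = [
    {"type": "new", "platform": "web", "size": 10.0, "team": 4, "effort": 520.0},
    {"type": "new", "platform": "web", "size": 12.0, "team": 5, "effort": 610.0},
    {"type": "enh", "platform": "web", "size": 11.0, "team": 4, "effort": 300.0},
]

def cluster_key(p):
    # Clustering by key categorical attributes (cf. development type,
    # organization type and development platform in the paper).
    return (p["type"], p["platform"])

def similarity(a, b, weights):
    # Inverse weighted Euclidean distance over numeric attributes.
    d = math.sqrt(sum(w * (a[k] - b[k]) ** 2 for k, w in weights.items()))
    return 1.0 / (1.0 + d)

def estimate(new_project, projects, weights, k=2):
    # Compare only within the new project's own cluster, then average the
    # efforts of the k most similar analogies.
    pool = [p for p in projects if cluster_key(p) == cluster_key(new_project)]
    pool.sort(key=lambda p: similarity(new_project, p, weights), reverse=True)
    analogies = pool[:k]
    return sum(p["effort"] for p in analogies) / len(analogies)

weights = {"size": 1.0, "team": 0.5}   # in the paper, a PSO searches these per cluster
new_p = {"type": "new", "platform": "web", "size": 11.0, "team": 4}
print(estimate(new_p, projects, weights))
```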

Journal ArticleDOI
TL;DR: A detailed review of various modeling methods for mode choice analysis and the bottlenecks associated with them is carried out, with particular emphasis on statistical mode choice models such as multinomial logit and probit models, as well as recent advanced soft computing techniques that are employed for modal split analysis.
Abstract: Mode choice is one of the most vital stages in the transportation planning process, and it has a direct impact on policy-making decisions. Mode choice models deal very closely with human choice-making behaviour and thus continue to attract researchers to further explore the commuter's choice-making process. The objective of this study is to carry out a detailed review of various modeling methods for mode choice analysis and the bottlenecks associated with them. The factors that affect the psyche of travelers are discussed; further, the various types of data required and their methods of collection are briefly described. This paper particularly emphasizes statistical mode choice models, such as multinomial logit and probit models, as well as recent advanced soft computing techniques, such as Artificial Neural Network (ANN) models and fuzzy approach models, that are employed for modal split analysis. Comparative analyses are made among the various techniques used by researchers in the literature for modeling complex mode choice behaviour, and the need for future hybrid soft computing models is discussed.

01 Jul 2014
Abstract: http://dx.doi.org/10.1016/j.enconman.2014.02.055 (Energy Conversion and Management, Elsevier, 2014).

Journal ArticleDOI
01 Feb 2014
TL;DR: This paper proposes two graph embedding algorithms based on the Granular Computing paradigm, which are engineered as key procedures of a general-purpose graph classification system.
Abstract: Research on Graph-based pattern recognition and Soft Computing systems has attracted many scientists and engineers in several different contexts. This fact is motivated by the reason that graphs are general structures able to encode both topological and semantic information in data. While the data modeling properties of graphs are of indisputable power, there are still different concerns about the best way to compute similarity functions in an effective and efficient manner. To this end, suited transformation procedures are usually conceived to address the well-known Inexact Graph Matching problem in an explicit embedding space. In this paper, we propose two graph embedding algorithms based on the Granular Computing paradigm, which are engineered as key procedures of a general-purpose graph classification system. Tests have been conducted on benchmarking datasets relying on both synthetic and real-world data, achieving competitive results in terms of test set classification accuracy.

Book ChapterDOI
01 Jan 2014
TL;DR: A novel ant colony based algorithm to balance the load by searching for under-loaded nodes is proposed; it outperformed traditional approaches like First Come First Serve (FCFS), a local search algorithm, Stochastic Hill Climbing (SHC), another soft computing approach, the Genetic Algorithm (GA), and some existing ant colony based strategies.
Abstract: Cloud computing introduces a new consumption and delivery model for internet-based services and protocols. It provides large-scale computing infrastructure priced by usage, and offers infrastructure services in a very flexible manner that scales up and down according to user demand. Meeting QoS requirements and satisfying end users' demands for resources in time is one of the main goals of a cloud service provider. For this reason, selecting a proper node that can complete an end user's task with the required QoS is a genuinely challenging job. In cloud computing, distributing a dynamic workload evenly across multiple nodes in a distributed environment is called load balancing. Load balancing can be treated as an optimization problem, and the strategy should adapt to changing needs. This paper proposes a novel ant colony based algorithm to balance the load by searching for under-loaded nodes. The proposed load balancing strategy has been simulated using CloudAnalyst. Experimental results for a typical sample application outperformed traditional approaches like First Come First Serve (FCFS), a local search algorithm, Stochastic Hill Climbing (SHC), another soft computing approach, the Genetic Algorithm (GA), and some existing ant colony based strategies.
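A heavily simplified sketch of the pheromone mechanism behind such schemes: lightly loaded nodes accumulate the strongest trail and therefore attract the next task. The ant walk is collapsed into a deterministic update here, so this shows the flavour of the idea rather than the paper's algorithm; node names, loads, and the evaporation rate are invented:

```python
nodes = {"n1": 0.9, "n2": 0.3, "n3": 0.6}   # node -> current load in [0, 1]
pheromone = {n: 1.0 for n in nodes}
RHO = 0.5                                    # evaporation rate

def deposit():
    # Evaporate old trails, then reinforce each node by its spare capacity,
    # so under-loaded nodes accumulate the strongest trail over time.
    for n in pheromone:
        pheromone[n] = (1.0 - RHO) * pheromone[n] + (1.0 - nodes[n])

def pick_node():
    # The next task goes to the node with the strongest trail.
    return max(pheromone, key=pheromone.get)

for _ in range(20):
    deposit()
print(pick_node())   # the least loaded node wins
```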

Journal ArticleDOI
TL;DR: A novel technique is presented for analyzing the behavior of an industrial system by utilizing vague, imprecise, and uncertain data; an artificial bee colony is used for determining the corresponding membership functions.
Abstract: The purpose of this paper is to present a novel technique for analyzing the behavior of an industrial system by utilizing vague, imprecise, and uncertain data. Two important tools, the traditional Lambda-Tau method and the artificial bee colony algorithm, are combined into a technique named the artificial bee colony based Lambda-Tau (ABCBLT) technique. In real-life situations, data collected from various resources contain a large amount of uncertainty due to human error, and hence it is not easy to analyze the behavior of such systems to a desired accuracy; even when the behavior of these systems can be calculated, it carries a high degree of uncertainty. To handle this situation, fuzzy set theory has been used in the analysis, and an artificial bee colony has been used for determining the corresponding membership functions. To strengthen the analysis, various reliability parameters, which affect system performance directly, have been computed in the form of fuzzy membership functions. Sensitivity and performance analyses have also been carried out, and the computed results are compared with those of existing techniques. A butter-oil processing plant, a complex repairable industrial system, is used to demonstrate the approach.

Journal ArticleDOI
TL;DR: The proposed fuzzy grey group compromise ranking method helps the decision-makers to evaluate the gaps that have not been reduced or improved for the alternatives under uncertainty in order to obtain the greatest benefit.
Abstract: The aim of this article is to present a new fuzzy grey multi-criteria decision-making method to handle evaluation and selection problems by a group of decision-makers under the condition of uncertain information. By a combination of the concept of compromise solution and the grey relational model, a new multi-criteria analysis is developed for real-life situations. Linguistic terms characterised by trapezoidal fuzzy numbers are first utilised to provide weights of selected criteria and to denote the performance rating of alternatives with respect to the conflicting criteria. Then, a grey relational analysis is introduced to investigate the extent of connections between two alternatives by the use of an effective fuzzy distance measurement under the group decision-making process. Finally, a new ranking index is extended to obtain a compromise solution and to determine the best alternative in order to solve complex decision problems. The proposed fuzzy grey group compromise ranking method helps the decision-makers to evaluate the gaps that have not been reduced or improved for the alternatives under uncertainty in order to obtain the greatest benefit.
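For intuition, classical grey relational analysis on crisp scores looks as follows. The paper works with trapezoidal fuzzy numbers and a fuzzy distance measure instead, so this sketch only shows the relational coefficient and grade on invented crisp data (ζ = 0.5 is the customary distinguishing coefficient):

```python
# Invented decision matrix: one ideal reference sequence, two alternatives.
reference = [0.9, 0.8, 1.0]               # ideal performance per criterion
alternatives = {
    "A1": [0.7, 0.8, 0.9],
    "A2": [0.9, 0.6, 0.8],
}
ZETA = 0.5                                 # distinguishing coefficient

def grey_relational_grade(seq, ref, weights=None):
    # Grey relational coefficient per criterion, then a weighted average.
    deltas = [abs(r - s) for r, s in zip(ref, seq)]
    all_deltas = [abs(r - s) for alt in alternatives.values()
                  for r, s in zip(reference, alt)]
    dmin, dmax = min(all_deltas), max(all_deltas)
    coeffs = [(dmin + ZETA * dmax) / (d + ZETA * dmax) for d in deltas]
    weights = weights or [1.0 / len(coeffs)] * len(coeffs)
    return sum(w * c for w, c in zip(weights, coeffs))

ranking = sorted(alternatives,
                 key=lambda a: grey_relational_grade(alternatives[a], reference),
                 reverse=True)
print(ranking)
```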

Journal ArticleDOI
TL;DR: The real-time experimental results are verified against simulation results, showing that ANFIS consistently performs better in navigating the mobile robot safely through terrain populated by a variety of obstacles.
Abstract: Intelligent soft computing techniques such as fuzzy inference systems (FIS), artificial neural networks (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) are proven to be efficient and suitable when applied to a variety of engineering systems. This paper investigates the application of an adaptive neuro-fuzzy inference system (ANFIS) to path generation and obstacle avoidance for an autonomous mobile robot in a real-world environment. ANFIS takes advantage of both the learning capability of an artificial neural network and the reasoning ability of a fuzzy inference system. In the present design model, different sensor-based information, namely front obstacle distance (FOD), right obstacle distance (ROD), left obstacle distance (LOD) and target angle (TA), is given as input to the adaptive fuzzy controller, and the output from the controller is the steering angle (SA) for the mobile robot. Using the ANFIS toolbox, the mean squared error (MSE) obtained for the training data set in the current paper is 0.031. The real-time experimental results are also verified against simulation results, showing that ANFIS consistently performs better in navigating the mobile robot safely through terrain populated by a variety of obstacles.
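A toy zero-order Sugeno controller with the same input/output names (FOD, ROD, LOD, TA → SA) gives a feel for the rule structure; the membership functions, rules, and numeric choices below are illustrative assumptions, not the trained ANFIS:

```python
def near(d, spread=1.0):
    # Degree to which an obstacle distance (in metres, assumed) counts as "near".
    return max(0.0, 1.0 - d / spread)

def steering_angle(fod, rod, lod, ta):
    # Three hand-written rules, weighted-average defuzzification
    # (zero-order Sugeno style). Positive angles mean "turn left".
    rules = [
        (near(fod) * (1 - near(lod)),  45.0),   # front blocked, left free -> turn left
        (near(fod) * (1 - near(rod)), -45.0),   # front blocked, right free -> turn right
        ((1 - near(fod)),              ta),     # front free -> head toward the target
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(steering_angle(fod=2.0, rod=2.0, lod=2.0, ta=10.0))   # open space: follow TA
```

In the paper, the shapes and consequents of such rules are learned from data rather than hand-written, which is exactly what ANFIS adds over a fixed fuzzy controller.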

Journal ArticleDOI
TL;DR: A model based on rough fuzzy sets to extract spatial fuzzy decision rules from spatial data that simultaneously have two types of uncertainties, roughness and fuzziness is presented.
Abstract: With the development of data mining and soft computing techniques, it becomes possible to automatically mine knowledge from spatial data. Spatial rule extraction from spatial data with uncertainty is an important issue in spatial data mining. Rough set theory is an effective tool for rule extraction from data with roughness. In our previous studies, the rough set method has been successfully used in the analysis of social and environmental causes of neural tube birth defects. However, both roughness and fuzziness may co-exist in spatial data because of the complexity of the object and the subjective limitations of human knowledge. The situation of fuzzy decisions, which is often encountered in spatial data, is beyond the capability of classical rough set theory. This paper presents a model based on rough fuzzy sets to extract spatial fuzzy decision rules from spatial data that simultaneously have two types of uncertainty, roughness and fuzziness. Fuzzy entropy and fuzzy cross entropy are used to measure the accuracy of the fuzzy decisions made on unseen objects using the extracted rules. An example concerning neural tube birth defects is given in this paper. The identification result from the rough fuzzy sets based model was compared with those from two classical rule extraction methods and three commonly used fuzzy set based rule extraction models. The comparison results confirm that the established rule extraction model is effective in dealing with spatial data that have roughness and fuzziness simultaneously.
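For reference, the De Luca-Termini form of fuzzy entropy is one common choice for such fuzziness measures; whether the paper uses exactly this form (and its cross-entropy analogue) is an assumption:

```python
import math

# De Luca-Termini fuzzy entropy of a fuzzy set, given its membership degrees.
def fuzzy_entropy(memberships):
    def h(u):
        if u <= 0.0 or u >= 1.0:
            return 0.0          # crisp membership carries no fuzziness
        return -(u * math.log2(u) + (1.0 - u) * math.log2(1.0 - u))
    return sum(h(u) for u in memberships) / len(memberships)

print(fuzzy_entropy([0.0, 1.0, 1.0]))   # crisp set: entropy 0
print(fuzzy_entropy([0.5, 0.5]))        # maximally fuzzy: entropy 1
```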

Journal ArticleDOI
TL;DR: The main advantage of the proposed systems is the elimination of the random selection of network weights and biases, resulting in increased efficiency of the systems.
Abstract: This paper presents two innovative evolutionary-neural systems based on feed-forward and recurrent neural networks used for quantitative analysis. These systems have been applied for approximation of phenol concentration. Their performance was compared against the conventional methods of artificial intelligence (artificial neural networks, fuzzy logic and genetic algorithms). The proposed systems are a combination of data preprocessing methods, genetic algorithms and the Levenberg–Marquardt (LM) algorithm used for learning feed forward and recurrent neural networks. The initial weights and biases of neural networks chosen by the use of a genetic algorithm are then tuned with an LM algorithm. The evaluation is made on the basis of accuracy and complexity criteria. The main advantage of proposed systems is the elimination of random selection of the network weights and biases, resulting in increased efficiency of the systems.
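The hybrid scheme, a genetic algorithm choosing initial weights followed by gradient-based fine-tuning, can be sketched on a toy one-neuron problem. Plain gradient descent stands in for the paper's Levenberg-Marquardt step, and all hyperparameters and the target function (y = 2x + 1) are arbitrary choices of mine:

```python
import random

random.seed(1)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]     # y = 2x + 1

def loss(w, b):
    # Mean squared error of the one-neuron "network" y = w*x + b.
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def ga_init(pop=30, gens=40):
    # Elitist GA: keep the better half, refill with Gaussian mutants of it.
    population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: loss(*ind))
        parents = population[:pop // 2]
        children = [(w + random.gauss(0, 0.3), b + random.gauss(0, 0.3))
                    for w, b in parents]
        population = parents + children
    return min(population, key=lambda ind: loss(*ind))

def tune(w, b, lr=0.05, steps=300):
    # Gradient descent fine-tuning (stand-in for Levenberg-Marquardt).
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w, b = w - lr * gw, b - lr * gb
    return w, b

w, b = tune(*ga_init())
print(round(w, 2), round(b, 2))
```

The design point the paper makes survives even in this toy: the GA replaces the random weight initialization, and the gradient-based stage only polishes an already reasonable starting point.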