
Showing papers in "Applied Intelligence in 2015"


Journal ArticleDOI
TL;DR: The statistical results prove that the GWO algorithm is able to provide very competitive results in terms of improved local optima avoidance, and that the proposed trainer achieves a high level of accuracy in classification and approximation.
Abstract: This paper employs the recently proposed Grey Wolf Optimizer (GWO) for training a Multi-Layer Perceptron (MLP) for the first time. Eight standard datasets, including five classification and three function-approximation datasets, are utilized to benchmark the performance of the proposed method. For verification, the results are compared with some of the most well-known evolutionary trainers: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Evolution Strategy (ES), and Population-based Incremental Learning (PBIL). The statistical results prove that the GWO algorithm is able to provide very competitive results in terms of improved local optima avoidance. The results also demonstrate a high level of accuracy in classification and approximation by the proposed trainer.
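
As a rough illustration of how an evolutionary trainer of this kind is typically set up (this is not the paper's code), the sketch below flattens the MLP weights into a single position vector and scores each wolf by mean squared error; the gwo_optimize driver in the last line is a hypothetical placeholder for any GWO implementation.

import numpy as np

def unpack(vec, n_in, n_hidden, n_out):
    # Split one flat position vector into MLP weight matrices and biases.
    i = 0
    W1 = vec[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = vec[i:i + n_hidden]; i += n_hidden
    W2 = vec[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def mse_fitness(vec, X, y, n_hidden):
    # Fitness of one wolf: mean squared error of the MLP it encodes.
    W1, b1, W2, b2 = unpack(vec, X.shape[1], n_hidden, y.shape[1])
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden layer
    out = hidden @ W2 + b2
    return np.mean((out - y) ** 2)

# best = gwo_optimize(lambda v: mse_fitness(v, X, y, n_hidden=10), dim=...)  # hypothetical GWO driver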

529 citations


Journal ArticleDOI
TL;DR: A connectionist-hidden Markov model (HMM) system for noise-robust AVSR is introduced and it is demonstrated that approximately 65 % word recognition rate gain is attained with denoised MFCCs under 10 dB signal-to-noise-ratio (SNR) for the audio signal input.
Abstract: Audio-visual speech recognition (AVSR) system is thought to be one of the most promising solutions for reliable speech recognition, particularly when the audio is corrupted by noise. However, cautious selection of sensory features is crucial for attaining high recognition performance. In the machine-learning community, deep learning approaches have recently attracted increasing attention because deep neural networks can effectively extract robust latent features that enable various recognition algorithms to demonstrate revolutionary generalization capabilities under diverse application conditions. This study introduces a connectionist-hidden Markov model (HMM) system for noise-robust AVSR. First, a deep denoising autoencoder is utilized for acquiring noise-robust audio features. By preparing the training data for the network with pairs of consecutive multiple steps of deteriorated audio features and the corresponding clean features, the network is trained to output denoised audio features from the corresponding features deteriorated by noise. Second, a convolutional neural network (CNN) is utilized to extract visual features from raw mouth area images. By preparing the training data for the CNN as pairs of raw images and the corresponding phoneme label outputs, the network is trained to predict phoneme labels from the corresponding mouth area input images. Finally, a multi-stream HMM (MSHMM) is applied for integrating the acquired audio and visual HMMs independently trained with the respective features. By comparing the cases when normal and denoised mel-frequency cepstral coefficients (MFCCs) are utilized as audio features to the HMM, our unimodal isolated word recognition results demonstrate that approximately 65 % word recognition rate gain is attained with denoised MFCCs under 10 dB signal-to-noise-ratio (SNR) for the audio signal input. Moreover, our multimodal isolated word recognition results utilizing MSHMM with denoised MFCCs and acquired visual features demonstrate that an additional word recognition rate gain is attained for the SNR conditions below 10 dB.
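
The denoising-autoencoder part of this pipeline can be sketched in a few lines of PyTorch; the layer sizes, activations and training loop below are illustrative assumptions, not the architecture used in the paper.

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    # Maps noise-corrupted MFCC frames to their clean counterparts.
    def __init__(self, n_feat):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_feat, 128), nn.ReLU(),
                                     nn.Linear(128, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                     nn.Linear(128, n_feat))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, noisy, clean):
    # Training pairs: deteriorated audio features in, corresponding clean features out.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    optimizer.step()
    return loss.item()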

493 citations


Journal ArticleDOI
TL;DR: A new theory called generalized evidence theory (GET) is proposed, which addresses conflict management in an open world, where the frame of discernment is incomplete because of uncertainty and incomplete knowledge.
Abstract: Dempster-Shafer evidence theory is an efficient tool in knowledge reasoning and decision-making under uncertain environments. Conflict management is an open issue in Dempster-Shafer evidence theory. In past decades, a large amount of research has been conducted on this issue. In this paper, we propose a new theory called generalized evidence theory (GET). In comparison with classical evidence theory, GET addresses conflict management in an open world, where the frame of discernment is incomplete because of uncertainty and incomplete knowledge. Within the presented GET, we define a novel concept called generalized basic probability assignment (GBPA) to model uncertain information, and provide a generalized combination rule (GCR) for the combination of GBPAs, and build a generalized conflict model to measure conflict among evidences. Conflicting evidence can be effectively handled using the GET framework. We present many numerical examples that demonstrate that the proposed GET can explain and deal with conflicting evidence more reasonably than existing methods.
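
The generalized combination rule itself is not reproduced here; for orientation, the classical Dempster rule that GET extends can be sketched over frozensets, the key difference being that a GBPA may also assign mass to the empty set when the frame of discernment is incomplete.

from itertools import product

def dempster_combine(m1, m2):
    # Classical Dempster's rule over basic probability assignments keyed by frozensets.
    # GET's generalized rule additionally allows mass on the empty set (open world);
    # that extension is not reproduced in this sketch.
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                      # mass falling on conflicting evidence
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('a'): 0.5, frozenset('b'): 0.5}
print(dempster_combine(m1, m2))   # conflict K = 0.3 is redistributed by normalization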

254 citations


Journal ArticleDOI
TL;DR: Results show how design guidelines developed for low-dimensional implementations can become unsuitable for high-dimensional search spaces as the volume of the search space grows exponentially with dimensionality.
Abstract: The existence of the curse of dimensionality is well known, and its general effects are well acknowledged. However, and perhaps due to this colloquial understanding, specific measurements on the curse of dimensionality and its effects are not as extensive. In continuous domains, the volume of the search space grows exponentially with dimensionality. Conversely, the number of function evaluations budgeted to explore this search space usually grows only linearly. The divergence of these growth rates has important effects on the parameters used in particle swarm optimization and differential evolution as dimensionality increases. New experiments focus on the effects of population size and key changes to the search characteristics of these popular metaheuristics when population size is less than the dimensionality of the search space. Results show how design guidelines developed for low-dimensional implementations can become unsuitable for high-dimensional search spaces.
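
The growth-rate mismatch the abstract describes can be made concrete with a toy calculation; the linear budget of 10*d evaluations is an illustrative assumption.

# Halving each axis splits a continuous search space into 2^d cells, so the number
# of cells grows exponentially while an (assumed) linear evaluation budget does not.
for d in (2, 10, 30, 100):
    budget = 10 * d          # assumed linear budget of function evaluations
    cells = 2.0 ** d
    print(f"d={d:3d}  budget={budget:5d}  cells={cells:.3e}  budget/cells={budget / cells:.3e}")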

135 citations


Journal ArticleDOI
TL;DR: Experimental results show that feature relevance has a non-negligible influence on missing data estimation based on grey theory, that the method is superior to the other four estimation strategies, and that classification bias can be significantly reduced by using the approach in classification tasks.
Abstract: Treatment of missing data has become increasingly significant in scientific research and engineering applications. The classic imputation strategy based on the K nearest neighbours (KNN) has been widely used to solve this problem. However, former studies do not give much attention to feature relevance, which has a significant impact on the selection of nearest neighbours. As a result, biased results may appear in similarity measurements. In this paper, we propose a novel method to impute missing data, named the feature weighted grey KNN (FWGKNN) imputation algorithm. This approach employs mutual information (MI) to measure feature relevance. We present an experimental evaluation on five UCI datasets under three missingness mechanisms with various missing rates. Experimental results show that feature relevance has a non-negligible influence on missing data estimation based on grey theory, and our method is considered superior to the other four estimation strategies. Moreover, the classification bias can be significantly reduced by using our approach in classification tasks.
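
A simplified sketch of feature-weighted nearest-neighbour imputation is given below; the mutual-information weights are assumed to be supplied, and the grey relational grade used in the paper is replaced here by a plain weighted Euclidean distance.

import numpy as np

def weighted_knn_impute(X, weights, k=5):
    # Fill NaNs using the k nearest complete rows under a feature-weighted distance.
    # `weights` would come from mutual information in FWGKNN; the grey relational
    # grade of the paper is simplified to a weighted Euclidean distance here.
    X = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        row = X[i]
        obs = ~np.isnan(row)
        d = np.sqrt(((complete[:, obs] - row[obs]) ** 2 * weights[obs]).sum(axis=1))
        neighbours = complete[np.argsort(d)[:k]]
        row[~obs] = neighbours[:, ~obs].mean(axis=0)   # impute by the neighbours' mean
    return X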

110 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the proposed similarity measure is capable of discriminating the difference between patterns, and satisfies the properties of the axiomatic definition for similarity measures.
Abstract: The intuitionistic fuzzy set, as a generalization of Zadeh's fuzzy set, can express and process uncertainty much better by introducing a hesitation degree. Similarity measures between intuitionistic fuzzy sets (IFSs) are used to indicate the degree of similarity between the information carried by IFSs. Although several similarity measures for intuitionistic fuzzy sets have been proposed in previous studies, some of them do not satisfy the axioms of similarity or produce counter-intuitive cases. In this paper, we first review several widely used similarity measures and then propose a new similarity measure. Reflecting the consistency of two IFSs, the proposed similarity measure is defined by direct operations on the membership function, non-membership function, hesitation function and the upper bound of the membership function of the two IFSs, rather than on a distance measure or the relationship between the membership and non-membership functions. It is proven that the proposed similarity measure satisfies the properties of the axiomatic definition of similarity measures. Comparison between the previous similarity measures and the proposed similarity measure indicates that the proposed similarity measure does not produce any counter-intuitive cases. Moreover, it is demonstrated that the proposed similarity measure is capable of discriminating the difference between patterns.

81 citations


Journal ArticleDOI
TL;DR: An optimized classification algorithm by BP neural network based on PLS and HCA (PLS-HCA-BP algorithm) is proposed, aimed at improving the operating efficiency and classification precision so as to provide a more reliable and more convenient tool for complex pattern classification systems.
Abstract: Due to some correlative or repetitive factors between features or samples with high dimension and large amount of sample data, when traditional back-propagation (BP) neural network is used to solve this classification problem, it will present a series of problems such as network structural redundancy, low learning efficiency, occupation of storage space, consumption of computing time, and so on. All of these problems will restrict the operating efficiency and classification precision of neural network. To avoid them, partial least squares (PLS) algorithm is used to reduce the feature dimension of original data into low-dimensional data as the input of BP neural network, so that it can simplify the structure and accelerate convergence, thus improving the training speed and operating efficiency. In order to improve the classification precision of BP neural network by using hierarchical cluster analysis (HCA), similar samples are put into a sub-class, and some different sub-classes can be obtained. For each sub-class, a different training session can be conducted to find a corresponding precision BP neural network model, and the simulation samples of different sub-classes can be recognized by the corresponding network model. In this paper, the theories of PLS and HCA are combined together with the property of BP neural network, and an optimized classification algorithm by BP neural network based on PLS and HCA (PLS-HCA-BP algorithm) is proposed. The new algorithm is aimed at improving the operating efficiency and classification precision so as to provide a more reliable and more convenient tool for complex pattern classification systems. Three experiments and comparisons with four other algorithms are carried out to verify the superiority of the proposed algorithm, and the results indicate a good picture of the PLS-HCA-BP algorithm, which is worthy of further promotion.
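
A compressed sketch of the three-stage pipeline, using scikit-learn stand-ins (PLSRegression, AgglomerativeClustering, MLPClassifier); component counts, cluster numbers and network sizes are illustrative assumptions, class labels are treated as PLS regression targets as a simplification, and each sub-class is assumed to contain samples from more than one class.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.cluster import AgglomerativeClustering
from sklearn.neural_network import MLPClassifier

def fit_pls_hca_bp(X, y, n_components=5, n_subclasses=3):
    # 1) PLS reduces the feature dimension, 2) hierarchical clustering forms sub-classes,
    # 3) a separate small BP network is trained for each sub-class.
    pls = PLSRegression(n_components=n_components).fit(X, y)
    Z = pls.transform(X)
    labels = AgglomerativeClustering(n_clusters=n_subclasses).fit_predict(Z)
    centroids = np.array([Z[labels == c].mean(axis=0) for c in range(n_subclasses)])
    nets = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(Z[labels == c], y[labels == c])
            for c in range(n_subclasses)]
    return pls, centroids, nets

def predict_pls_hca_bp(model, X_new):
    # Route each sample to the nearest sub-class centroid and use that sub-class's network.
    pls, centroids, nets = model
    Z = pls.transform(X_new)
    nearest = np.argmin(((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2), axis=1)
    return np.array([nets[c].predict(z[None, :])[0] for c, z in zip(nearest, Z)])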

73 citations


Journal ArticleDOI
TL;DR: A tree structure constructed with a single database scan, named HUPID-Tree (High Utility Patterns in Incremental Databases Tree), and a restructuring method with a novel data structure called TIList (Tail-node Information List) are suggested in order to process incremental databases more efficiently; the experimental results show that the proposed algorithm processes real datasets more efficiently than previous ones.
Abstract: Pattern mining is a data mining technique used for discovering significant patterns and has been applied to various applications such as disease analysis in medical databases and decision making in business. Frequent pattern mining based on item frequencies is the most fundamental topic in the pattern mining field. However, it is difficult to discover the important patterns on the basis of frequencies alone, since characteristics of real-world databases such as the relative importance of items and non-binary transactions are not reflected. In this regard, utility pattern mining has been considered an emergent research topic that deals with these characteristics. Meanwhile, in real-world applications, data newly generated by continuous operation, or data from other databases used for integration analysis, can be gradually added to the current database. To efficiently deal with both existing and new data as one database, it is necessary to reflect the added data in previous analysis results without analyzing the whole database again. In this paper, we propose an algorithm called HUPID-Growth (High Utility Patterns in Incremental Databases Growth) for mining high utility patterns in incremental databases. Moreover, we suggest a tree structure constructed with a single database scan named HUPID-Tree (High Utility Patterns in Incremental Databases Tree), and a restructuring method with a novel data structure called TIList (Tail-node Information List) in order to process incremental databases more efficiently. We conduct various experiments for performance evaluation against state-of-the-art algorithms. The experimental results show that the proposed algorithm processes real datasets more efficiently than previous ones.

71 citations


Journal ArticleDOI
TL;DR: A GA-based framework with two optimization algorithms is proposed for data sanitization of PPDM and a novel evaluation function with three concerned factors is designed to find the appropriate transactions to be deleted in order to hide sensitive itemsets.
Abstract: Data mining technology is used to extract useful knowledge from very large datasets, but the process of data collection and data dissemination may result in an inherent threat to privacy. Some sensitive or private information concerning individuals, businesses and organizations has to be suppressed before it is shared or published. Privacy-preserving data mining (PPDM) has become an important issue in recent years. In the past, many heuristic approaches were developed to sanitize databases for the purpose of hiding sensitive information in PPDM, but data sanitization of PPDM is considered to be an NP-hard problem. It is critical to find the balance between privacy protection for hiding sensitive information and maintaining the discovery of knowledge, or even reducing artificial knowledge in the sanitization process. In this paper, a GA-based framework with two optimization algorithms is proposed for data sanitization. A novel evaluation function with three concerned factors is designed to find the appropriate transactions to be deleted in order to hide sensitive itemsets. Experiments are then conducted to evaluate the performance of the proposed GA-based algorithms with regard to different factors such as the execution time, the number of hiding failures, the number of missing itemsets, the number of artificial itemsets, and database dissimilarity.
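
The kind of three-factor evaluation function the abstract refers to can be sketched as below; the weights and the representation of a chromosome as a set of transaction indices to delete are illustrative assumptions, not the paper's exact formulation.

def sanitization_fitness(deleted_ids, db, sensitive, nonsensitive, min_sup,
                         w1=1.0, w2=1.0, w3=1.0):
    # Score one chromosome (transactions chosen for deletion) by the three side effects:
    # hiding failures, missing itemsets and artificial itemsets.  Lower is better.
    sanitized = [t for i, t in enumerate(db) if i not in deleted_ids]
    def support(itemset, database):
        return sum(itemset <= t for t in database) / max(len(database), 1)
    hiding_failures = sum(support(s, sanitized) >= min_sup for s in sensitive)
    missing = sum(support(s, db) >= min_sup and support(s, sanitized) < min_sup
                  for s in nonsensitive)
    artificial = sum(support(s, db) < min_sup and support(s, sanitized) >= min_sup
                     for s in nonsensitive)
    return w1 * hiding_failures + w2 * missing + w3 * artificial

# db and the itemsets are frozensets of items, e.g. db = [frozenset('abc'), frozenset('bc')].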

68 citations


Journal ArticleDOI
TL;DR: Experiments show that the service composition model with the time attenuation function can make the quality of service more consistent with the current characteristics of services, and it can obtain a near-optimal solution within a short period of time.
Abstract: The widespread application of cloud computing creates massive application services on the Internet, which poses a new challenge for the models and algorithms of cloud service composition. This paper proposes a new method for cloud service composition. A time attenuation function is added into the service composition model, and service composition is formalized as a nonlinear integer programming problem. Moreover, the Discrete Gbest-guided Artificial Bee Colony (DGABC) algorithm is proposed, which simulates the search for the optimal service composition solution through the exploration of bees for food. Experiments show that the service composition model with the time attenuation function can make the quality of service more consistent with the current characteristics of services. Compared with other algorithms, the DGABC algorithm has advantages in terms of solution quality and efficiency, especially for large-scale data, and it can obtain a near-optimal solution within a short period of time.

65 citations


Journal ArticleDOI
TL;DR: To enhance the searching ability of the DE algorithm, the proposed method categorizes the population into two parts to process different types of mutation operators and self-adapting control parameters embedded in the proposed algorithm framework.
Abstract: The differential evolution (DE) algorithm is a notably powerful evolutionary algorithm that has been applied in many areas. Therefore, the question of how to improve the algorithm's performance has attracted considerable attention from researchers. The mutation operator largely impacts the performance of the DE algorithm. The control parameters also have a significant influence on performance; however, it is not an easy task to set suitable control parameters for DE. One good approach is to consider the mutation operator and control parameters simultaneously. Thus, this paper proposes a new DE algorithm with a hybrid mutation operator and self-adapting control parameters. To enhance the searching ability of the DE algorithm, the proposed method categorizes the population into two parts that use different types of mutation operators, with self-adapting control parameters embedded in the proposed algorithm framework. Two famous benchmark sets (including 46 functions) are used to evaluate the performance of the proposed algorithm, and comparisons with various other DE variants previously reported in the literature have also been conducted. Experimental results and statistical analysis indicate that the proposed algorithm performs well on these functions.
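
One generation of a DE variant in this spirit might look like the sketch below: half the population uses DE/rand/1, the other half DE/best/1, and each individual's F and CR are occasionally re-sampled (a jDE-style rule). The exact grouping and adaptation scheme of the paper is not reproduced.

import numpy as np

def de_generation(pop, fitness, F, CR, rng):
    # Produce a trial population; selection against the parents happens after the
    # trials are evaluated, as in standard DE.
    n, d = pop.shape
    best = pop[np.argmin(fitness)]
    trials, F, CR = pop.copy(), F.copy(), CR.copy()
    for i in range(n):
        if rng.random() < 0.1:                      # self-adaptation of control parameters
            F[i] = 0.1 + 0.9 * rng.random()
            CR[i] = rng.random()
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        if i < n // 2:                              # first group: DE/rand/1 mutation
            mutant = pop[r1] + F[i] * (pop[r2] - pop[r3])
        else:                                       # second group: DE/best/1 mutation
            mutant = best + F[i] * (pop[r1] - pop[r2])
        cross = rng.random(d) < CR[i]
        cross[rng.integers(d)] = True               # at least one gene comes from the mutant
        trials[i] = np.where(cross, mutant, pop[i])
    return trials, F, CR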

Journal ArticleDOI
TL;DR: This work presents a method for link prediction in dynamic networks by integrating temporal information, community structure, and node centrality in the network, and achieves higher quality results than traditional methods.
Abstract: Link prediction in social networks has attracted increasing attention in various fields such as sociology, anthropology, information science, and computer science. Most existing methods adopt a static graph representation to predict new links. However, these methods lose some important topological information of dynamic networks. In this work, we present a method for link prediction in dynamic networks by integrating temporal information, community structure, and node centrality in the network. Information on all of these aspects is highly beneficial in predicting potential links in social networks. Temporal information offers link occurrence behavior in the dynamic network, while community clustering shows how strong the connection between two individual nodes is, based on whether they share the same community. The centrality of a node, which measures its relative importance within a network, is highly related to future links in social networks. We predict a node's future importance by eigenvector centrality, and use this for link prediction. Merging the topological information, including community structure and centrality, with temporal information generates a more realistic model for link prediction in dynamic networks. Experimental results on real datasets show that our method based on the integrated time model can predict future links efficiently in temporal social networks, and achieves higher quality results than traditional methods.
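
A toy scoring function combining the three signals could look like the following networkx-based sketch; the additive weighting is an illustrative assumption rather than the paper's model, and node labels are assumed to be comparable.

import networkx as nx

def link_scores(snapshots, communities, decay=0.5):
    # snapshots: list of nx.Graph ordered in time; communities: dict node -> community id.
    latest = snapshots[-1]
    centrality = nx.eigenvector_centrality(latest, max_iter=1000)
    scores = {}
    for u in latest.nodes:
        for v in latest.nodes:
            if u >= v or latest.has_edge(u, v):
                continue
            # Temporal signal: recent snapshots where u and v already share a neighbour count more.
            temporal = sum(decay ** (len(snapshots) - 1 - t)
                           for t, g in enumerate(snapshots)
                           if u in g and v in g and any(True for _ in nx.common_neighbors(g, u, v)))
            same_comm = 1.0 if communities.get(u) == communities.get(v) else 0.0
            scores[(u, v)] = temporal + same_comm + centrality[u] * centrality[v]
    return scores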

Journal ArticleDOI
TL;DR: This paper proposes an evolutionary algorithm for mining rare class association rules when gathering student usage data from a Moodle system, and analyzes how the use of different parameters of the algorithm determines the rule characteristics.
Abstract: Association rule mining, an important data mining technique, has largely focused on the extraction of frequent patterns. Nevertheless, in some application domains it is interesting to discover patterns that do not frequently occur, even when they are strongly related. More specifically, this type of relation can be very appropriate in e-learning domains due to their intrinsically imbalanced nature. In these domains, the aim is to discover a small but interesting and useful set of rules that could barely be extracted by traditional algorithms founded on exhaustive search-based techniques. In this paper, we propose an evolutionary algorithm for mining rare class association rules when gathering student usage data from a Moodle system. We analyse how the use of different parameters of the algorithm determines the rule characteristics, and provide some illustrative examples to show their interpretability and usefulness in e-learning environments. We also compare our approach to other existing algorithms for mining both rare and frequent association rules. Finally, an analysis of the rules mined is presented, which allows information about students' unusual behaviour regarding the achievement of bad or good marks to be discovered.

Journal ArticleDOI
TL;DR: A new extraction and opinion mining system based on a type-2 fuzzy ontology called T2FOBOMIE is proposed, which retrieves targeted hotel reviews and extracts feature opinions from reviews using a fuzzy domain ontology.
Abstract: The volume of traveling websites is rapidly increasing, which makes relevant information extraction more challenging. Several fuzzy ontology-based systems have been proposed to decrease the manual work of full-text query search engines and opinion mining. However, most search engines are keyword-based, and available full-text search engine systems are still imperfect at extracting precise information from different types of user queries. In opinion mining, travelers do not state their hotel opinions in full but express individual feature opinions in reviews. Hotel reviews contain numerous uncertainties, and most feature opinions are based on complex linguistic wording (small, big, very good and very bad). Available ontology-based systems cannot extract such blurred information from reviews to provide better solutions. To solve these problems, this paper proposes a new extraction and opinion mining system based on a type-2 fuzzy ontology, called T2FOBOMIE. The system reformulates the user's full-text query to extract the user requirement and converts it into the format of a proper classical full-text search engine query. The proposed system retrieves targeted hotel reviews and extracts feature opinions from reviews using a fuzzy domain ontology. The fuzzy domain ontology, user information and hotel information are integrated to form a type-2 fuzzy merged ontology for retrieving feature polarity and individual hotel polarity. The Protege OWL-2 (Ontology Web Language) tool is used to develop the type-2 fuzzy ontology. A series of experiments was designed and demonstrated that T2FOBOMIE is highly effective for analyzing reviews and for accurate opinion mining.

Journal ArticleDOI
TL;DR: The applications reveal that FTS-N produces more accurate forecasts for the 11 real-world time-series data sets; the proposed hybrid method has a network structure and is called a fuzzy-time-series network (FTS-N).
Abstract: Non-probabilistic forecasting methods are commonly used in various scientific fields. Fuzzy-time-series methods are well-known non-probabilistic and nonlinear forecasting methods. Although these methods can produce accurate forecasts, linear autoregressive models can produce forecasts that are more accurate than those produced by fuzzy-time-series methods for some real-world time series. It is well known that hybrid forecasting methods are useful techniques for forecasting time series and that they have the capabilities of their components. In this study, a new hybrid forecasting method is proposed. The components of the new hybrid method are a high-order fuzzy-time-series forecasting model and autoregressive model. The new hybrid forecasting method has a network structure and is called a fuzzy-time-series network (FTS-N). The fuzzy c-means method is used for the fuzzification of time series in FTS-N, which is trained by particle swarm optimization. Istanbul Stock Exchange daily data sets from 2009 to 2013 and the Taiwan Stock Exchange Capitalization Weighted Stock Index data sets from 1999 to 2004 were used to evaluate the performance of FTS-N. The applications reveal that FTS-N produces more accurate forecasts for the 11 real-world time-series data sets.

Journal ArticleDOI
TL;DR: A novel approach that is based on artificial bee colony algorithm (ABC) to address dynamic task assignment problems in multi-agent cooperative systems and shows that ABC improves these two criteria significantly with respect to the other approaches.
Abstract: The task assignment problem is an important topic in multi-agent systems research. Distributed real-time systems must accommodate a number of communication tasks, and the difficulty in building such systems lies in task assignment (i.e., where to place the tasks). This paper presents a novel approach based on the artificial bee colony (ABC) algorithm to address dynamic task assignment problems in multi-agent cooperative systems. The initial bee population (solution) is constructed by the initial task assignment algorithm through a greedy heuristic. Each bee is formed by the number of tasks and agents, and the number of employed bees is equal to the number of onlooker bees. After being generated, the solution is improved through a local search process called greedy selection. This process is implemented by onlooker and employed bees. In greedy selection, if the fitness value of the candidate source is greater than that of the current source, the bee forgets the current source and memorizes the new candidate source. Experiments are performed with two test suites (TIGs representing real-life tree and Fork-Join problems, and randomly generated TIGs). Results are compared with other nature-inspired approaches, such as genetic and particle swarm optimization algorithms, in terms of CPU time and communication cost. The findings show that ABC improves these two criteria significantly with respect to the other approaches.

Journal ArticleDOI
TL;DR: A Swarm Intelligence approach is proposed for the optimal scheduling of traffic lights timing programs in metropolitan areas, so that the traffic flow of vehicles can be improved with the final goal of reducing their fuel consumption and gas emissions.
Abstract: Nowadays, the increasing levels of polluting emissions and fuel consumption of road traffic in modern cities directly affect air quality, the city economy, and especially the health of citizens. Therefore, improving the efficiency of the traffic flow is a mandatory task in order to mitigate such critical problems. In this article, a Swarm Intelligence approach is proposed for the optimal scheduling of traffic lights timing programs in metropolitan areas. By doing so, the traffic flow of vehicles can be improved, with the final goal of reducing their fuel consumption and gas emissions (CO and NOx). In this work we optimize the traffic lights timing programs and analyze their effect on pollution by following the standard HBEFA as the traffic emission model. Specifically, we focus on two large and heterogeneous urban scenarios located in the cities of Malaga and Seville (in Spain). When compared to the traffic lights timing programs designed by experts, which are close to real ones, the proposed strategy obtains significant reductions in terms of the emission rates (23.3 % CO and 29.3 % NOx) and the total fuel consumption.

Journal ArticleDOI
TL;DR: A cat chaotic mapping is introduced into the population initialization and iterative stages of the original GSA, forming a new algorithm called CCMGSA, which is employed to optimize BP neural networks and shows better performance in terms of the convergence rate and avoidance of local minima.
Abstract: This paper proposes a novel image segmentation method based on a BP neural network, which is optimized by an enhanced Gravitational Search Algorithm (GSA). GSA is a novel heuristic optimization algorithm based on the law of gravity and mass interactions. It has been proven that the GSA has a good ability to search for the global optimum, but it suffers from premature convergence due to the rapid reduction of diversity. This work introduces a cat chaotic mapping into the population initialization and iterative stages of the original GSA, which forms a new algorithm called CCMGSA. The CCMGSA is then employed to optimize BP neural networks, which yields a combined method called CCMGSA-BP that we use for image segmentation. To verify the efficiency of this method, visual and performance experiments are conducted. The visual results using our proposed method are compared with those of other segmentation methods, including an improved k-means clustering algorithm (I-K-means), a hybrid region merging method (H-Region-merging), and manual segmentation. The comparison results show that the proposed method can obtain good segmentation results on grayscale images with specific characteristics. We also compare the performance of our proposed method with those of IGSA-BP, CLPSO-BP and RGA-BP for image segmentation. The results indicate that CCMGSA-BP shows better performance in terms of the convergence rate and avoidance of local minima.
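
Assuming the "cat chaotic mapping" refers to the Arnold cat map, chaotic population initialization can be sketched as follows; the bounds and seed values are illustrative.

import numpy as np

def cat_map_sequence(n, x0=0.31, y0=0.47):
    # Arnold cat map: x' = (x + y) mod 1, y' = (x + 2y) mod 1.
    xs = np.empty(n)
    x, y = x0, y0
    for k in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        xs[k] = x
    return xs

def chaotic_init(pop_size, dim, lower, upper):
    # Map the chaotic sequence into the search bounds to initialize the population.
    seq = cat_map_sequence(pop_size * dim).reshape(pop_size, dim)
    return lower + seq * (upper - lower)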

Journal ArticleDOI
TL;DR: A systematized presentation and a terminology for observations in a Bayesian network focusing on the three main concepts of uncertain evidence, namely likelihood evidence and fixed and not-fixed probabilistic evidence, using a review of previous literature.
Abstract: This paper proposes a systematized presentation and a terminology for observations in a Bayesian network. It focuses on the three main concepts of uncertain evidence, namely likelihood evidence and fixed and not-fixed probabilistic evidence, using a review of previous literature. A probabilistic finding on a variable is specified by a local probability distribution and replaces any former belief in that variable. It is said to be fixed or not fixed depending on whether it has to be kept unchanged after the arrival of observations on other variables. Fixed probabilistic evidence is defined by Valtorta et al. (J Approx Reason 29(1):71-106, 2002) under the name soft evidence, whereas the concept of not-fixed probabilistic evidence has been discussed by Chan and Darwiche (Artif Intell 163(1):67-90, 2005). Both concepts have to be clearly distinguished from likelihood evidence defined by Pearl (1988), also called virtual evidence, for which evidence is specified as a likelihood ratio that often represents the unreliability of the evidence. Since these three concepts of uncertain evidence are not widely understood, and the terms used to describe them are not well established, most Bayesian network engines do not offer well-defined propagation functions to handle them. Firstly, we present a review of uncertain evidence and the proposed terminology, definitions and concepts related to the use of uncertain evidence in Bayesian networks. Then we describe updating algorithms for the propagation of uncertain evidence. Finally, we propose several results where the use of fixed or not-fixed probabilistic evidence is required.
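
The distinction can be illustrated numerically: a likelihood ratio is multiplied into the prior, whereas a probabilistic finding replaces the marginal and propagates to other variables via Jeffrey's rule. The numbers below are purely illustrative.

import numpy as np

p_x = np.array([0.7, 0.3])               # prior P(X) over states {x0, x1}
p_y_given_x = np.array([[0.9, 0.1],      # P(Y | x0)
                        [0.2, 0.8]])     # P(Y | x1)

# Likelihood (virtual) evidence: an unnormalized ratio L(x) multiplied into the prior.
likelihood = np.array([0.8, 0.4])
post_x = p_x * likelihood
post_x /= post_x.sum()

# Probabilistic evidence: the specified distribution Q(X) replaces the former belief in X,
# and beliefs on Y follow Jeffrey's rule: P'(y) = sum_x P(y|x) Q(x).
q_x = np.array([0.2, 0.8])
p_y_new = p_y_given_x.T @ q_x

print(post_x, p_y_new)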

Journal ArticleDOI
TL;DR: This paper explores a model-based approach for recommendation in social networks which employs matrix factorization techniques and incorporates the mechanism of temporal information and trust relations into the model.
Abstract: All types of recommender systems have been thoroughly explored and developed in industry and academia with the advent of online social networks. However, current studies ignore the trust relationships among users and the time sequence among items, which may affect the quality of recommendations. Three crucial challenges of recommender systems are prediction quality, scalability, and data sparsity. In this paper, we explore a model-based approach for recommendation in social networks which employs matrix factorization techniques. Advancing previous work, we incorporate the mechanism of temporal information and trust relations into the model. Specifically, our method utilizes a shared latent feature space to constrain the objective function, and considers the influence of time and user trust relations simultaneously. Experimental results on a public domain dataset show that our approach performs better than state-of-the-art methods, particularly for cold-start users. Moreover, the complexity analysis indicates that our approach can be easily extended to large datasets.

Journal ArticleDOI
TL;DR: A novel methodology based on fuzzy set theory and analytic network process (FEANP) is developed to address both the uncertain information involved and the interrelationships among the attributes in the supplier selection scenario.
Abstract: With increasing globalization, supplier selection has become more important than ever before. In the process of determining the best supplier, expert judgements might be vague or incomplete due to the inherent uncertainty and imprecision of their perception. In addition, the sub-criteria are related to each other in the selection of the right supplier. In this paper, a novel methodology based on fuzzy set theory and the analytic network process (FEANP) is developed to address both the uncertain information involved and the interrelationships among the attributes. The paper concludes with a case study describing the implementation of this model for a real-world supplier selection scenario. We demonstrate the efficiency of the proposed model by comparing it with an existing method.

Journal ArticleDOI
TL;DR: This paper proposes a dissimilarity-based method that greatly improves the performance of imbalance learning, and outperforms the other solutions with all given classification algorithms.
Abstract: Class imbalances have been reported to compromise the performance of most standard classifiers, such as Naive Bayes, Decision Trees and Neural Networks. Aiming to solve this problem, various solutions have been explored, mainly via balancing the skewed class distribution or improving the existing classification algorithms. However, these methods pay more attention to the imbalanced distribution, ignoring the discriminative ability of features in the context of class imbalance data. In this perspective, a dissimilarity-based method is proposed to deal with the classification of imbalanced data. Our proposed method first removes the useless and redundant features by feature selection from the given data set; then it extracts representative instances from the reduced data as prototypes; finally, it projects the reduced data into a dissimilarity space by constructing new features, and builds the classification model with data in the dissimilarity space. Extensive experiments over 24 benchmark class imbalance data sets show that, compared with seven other imbalance data tackling solutions, our proposed method greatly improves the performance of imbalance learning, and outperforms the other solutions with all given classification algorithms.
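
The projection step can be sketched directly: each sample is re-described by its distances to a handful of prototypes, and any standard classifier is then trained on those new features. The random prototype selection below is only a stand-in for the paper's prototype extraction procedure.

import numpy as np

def to_dissimilarity_space(X, prototypes):
    # New feature j of sample i = Euclidean distance from sample i to prototype j.
    return np.sqrt(((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
prototypes = X[rng.choice(100, 5, replace=False)]   # stand-in for prototype selection
D = to_dissimilarity_space(X, prototypes)           # shape (100, 5); train any classifier on D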

Journal ArticleDOI
TL;DR: This study proposes a solution to provide online complete coverage through a boustrophedon and backtracking mechanism called the BoB algorithm and designs robots in the system according to a market-based approach.
Abstract: Online complete coverage is required in many applications, such as floor cleaning, lawn mowing, mine hunting, and harvesting, and can be performed by single- or multi-robot systems. Motivated by the efficiency and robustness of multi-robot systems, this study proposes a solution that provides online complete coverage through a boustrophedon and backtracking mechanism called the BoB algorithm. The robots in the system are designed according to a market-based approach. Without a central supervisor, the robots use only local interactions to coordinate and simultaneously construct non-overlapping regions in an incremental manner via boustrophedon motion. To achieve complete coverage, that is, the union of all covered regions over the entire accessible area of the workspace, each robot is equipped with an intelligent backtracking mechanism based on a proposed greedy A* search (GA*) to move to the closest unvisited region. The robots complete the coverage task when no more backtracking points are detected. Computer simulations show that the BoB approach is efficient in terms of the coverage rate, the length of the coverage path, and the balance of the workload distribution of robots.

Journal ArticleDOI
TL;DR: The statistical results indicate that the proposed dynamic multi-objective optimization evolutionary algorithm has better convergence speed and diversity, and that it is very promising for dealing with dynamic environments.
Abstract: A novel dynamic multi-objective optimization evolutionary algorithm is proposed in this paper to track the Pareto-optimal set of time-changing multi-objective optimization problems. In the proposed algorithm, to initialize the new population when a change is detected, a modified prediction model utilizing the historical optimal sets obtained at the last two time steps is adopted. Meanwhile, to improve both convergence and diversity, a self-adaptive differential evolution crossover operator is used. We conducted two experiments: the first compares the proposed algorithm with three other dynamic multi-objective evolutionary algorithms, and the second investigates the performance of the two proposed operators. The statistical results indicate that the proposed algorithm has better convergence speed and diversity and is very promising for dealing with dynamic environments.

Journal ArticleDOI
TL;DR: This paper brings together the Ant Colony approach with a realistic fire dynamics simulator, and shows that the proposed solution is not only able to outperform comparable alternatives in static and dynamic environments, but also in environments with realistic spreading of fire and smoke causing fatalities.
Abstract: An emergency requiring evacuation is a chaotic event, filled with uncertainties both for the people affected and for rescuers. The evacuees are often left to themselves to navigate to the escape area. The chaos increases when predefined escape routes are blocked by a hazard and there is a need to re-think which escape route is safest. This paper addresses automatically finding the safest escape routes in emergency situations in large buildings or ships with imperfect knowledge of the hazards. The proposed solution, based on Ant Colony Optimisation, suggests a near-optimal escape plan for every affected person, considering the dynamic spread of fires, movability impairments caused by the hazards, and faulty, unreliable data. Special focus in this paper is on empirical tests of the proposed algorithms. This paper brings together the Ant Colony approach with a realistic fire dynamics simulator, and shows that the proposed solution is able to outperform comparable alternatives not only in static and dynamic environments, but also in environments with realistic spreading of fire and smoke causing fatalities. The solution is intended for use both by individuals, for example from an evacuee's personal smartphone, and by emergency personnel trying to assist large groups from remote locations.

Journal ArticleDOI
TL;DR: A novel spatio-temporal probabilistic model that integrates crowd and hazard dynamics, using ship- and building fire as proof-of-concept scenarios, and opens up for novel in situ threat mapping and evacuation planning under uncertainty, with applications to emergency response.
Abstract: Managing the uncertainties that arise in disasters, such as a ship or building fire, can be extremely challenging. Previous work has typically focused on modeling crowd behavior or hazard dynamics, or has targeted fully known environments. However, when a disaster strikes, uncertainties about the nature, extent and further development of the hazard are the rule rather than the exception. Additionally, crowd and hazard dynamics are both intertwined and uncertain, making evacuation planning extremely difficult. To address this challenge, we propose a novel spatio-temporal probabilistic model that integrates crowd and hazard dynamics, using ship and building fires as proof-of-concept scenarios. The model is realized as a dynamic Bayesian network (DBN), supporting distinct kinds of crowd evacuation behavior, and is based on studies of physical fire models, crowd psychology models, and corresponding flow models. Simulation results demonstrate that the DBN model allows us to track and forecast the movement of people until they escape, as the hazard develops from time step to time step. Our scheme thus opens up for novel in situ threat mapping and evacuation planning under uncertainty, with applications to emergency response.

Journal ArticleDOI
TL;DR: To detect GR-based outliers, an outlier detection algorithm called ODGrCR is proposed from the perspectives of granular computing (GrC) and rough set theory, and the experimental results show that the algorithm is effective for outlier detection.
Abstract: In recent years, outlier detection has attracted considerable attention. The identification of outliers is important for many applications, including those related to intrusion detection, credit card fraud, criminal activity in electronic commerce, medical diagnosis and anti-terrorism. Various outlier detection methods have been proposed for solving problems in different domains. In this paper, a new outlier detection method is proposed from the perspectives of granular computing (GrC) and rough set theory. First, we give a definition of outliers called GR (GrC and rough sets)-based outliers. Second, to detect GR-based outliers, an outlier detection algorithm called ODGrCR is proposed. Third, the effectiveness of ODGrCR is evaluated by using a number of real data sets. The experimental results show that our algorithm is effective for outlier detection. In particular, our algorithm takes much less running time than other outlier detection methods.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that CBBO is able to achieve better results than other evolutionary algorithms on most of the benchmark functions.
Abstract: With its unique migration operator and mutation operator, Biogeography-Based Optimization (BBO), which simulates migration of species in natural biogeography, is different from existing evolutionary algorithms, but it has shortcomings such as poor convergence precision and slow convergence speed when it is applied to solve complex optimization problems. Therefore, we put forward a Cooperative Coevolutionary Biogeography-Based Optimizer (CBBO) in this paper. In CBBO, the whole population is divided into multiple sub-populations first, and then each subpopulation is evolved with an improved BBO separately. The fitness evaluation of habitats of a subpopulation is conducted by constructing context vectors with selected habitats from other sub-populations. Our CBBO tests are based on 13 benchmark functions and are also compared with several other evolutionary algorithms. Experimental results demonstrate that CBBO is able to achieve better results than other evolutionary algorithms on most of the benchmark functions.

Journal ArticleDOI
Yitian Xu, Xianli Pan, Zhijian Zhou, Zhiji Yang, Yuqun Zhang
TL;DR: This paper applies the prior structural information of data into the LS-TSVM to build a better classifier, called the structural least square twin support vector machine (S-LSTSVM), which costs less time by solving two systems of linear equations compared with other existing methods based on structural information.
Abstract: The least square twin support vector machine (LS-TSVM) obtains two non-parallel hyperplanes by directly solving two systems of linear equations instead of two quadratic programming problems (QPPs) as in the conventional twin support vector machine (TSVM), which makes the computational speed of LS-TSVM faster than that of the TSVM. However, LS-TSVM ignores the structural information of data, which may contain some vital prior domain knowledge for training a classifier. In this paper, we apply the prior structural information of data to the LS-TSVM to build a better classifier, called the structural least square twin support vector machine (S-LSTSVM). Since it incorporates the data distribution information into the model, S-LSTSVM has good generalization performance. Furthermore, S-LSTSVM costs less time by solving two systems of linear equations compared with other existing methods based on structural information. Experimental results on twelve benchmark datasets demonstrate that our S-LSTSVM performs well. Finally, we apply it to Alzheimer's disease diagnosis to further demonstrate the advantage of our algorithm.

Journal ArticleDOI
TL;DR: The improved FA employs two strategies to enhance the search ability and avoid the premature convergence usually suffered by the standard FA: one is based on the distance information among the fireflies and adjusts the light absorption coefficient adaptively.
Abstract: The economic dispatch (ED) problem exhibits highly nonlinear characteristics, such as prohibited operating zones, ramp rate limits, and non-smooth properties. Due to these nonlinear characteristics, it is hard to achieve the expected solution with classical methods. To overcome this difficulty, this paper proposes an improved firefly algorithm (FA) to solve the ED problem. The improved FA employs two strategies to enhance the search ability and avoid the premature convergence usually suffered by the standard FA. The first is based on the distance information among the fireflies and adjusts the light absorption coefficient adaptively. The other is a decreasing strategy for the randomization parameter. Additionally, a crossover operation is employed to create potential solutions with high diversity. These designs enhance the search ability and performance of FA, as demonstrated on six benchmark functions. To validate the proposed algorithm, we also use three different systems to demonstrate its efficiency and feasibility in solving the ED problem. The experimental results show that the proposed FA method is capable of achieving higher quality solutions to ED problems.