scispace - formally typeset

Showing papers in "Artificial Intelligence Review in 2014"


Journal ArticleDOI
TL;DR: This work presents a comprehensive survey of the advances with ABC and its applications; it is hoped that the survey will be beneficial for researchers studying SI, and the ABC algorithm in particular.
Abstract: Swarm intelligence (SI) is briefly defined as the collective behaviour of decentralized, self-organized swarms. Well-known examples of such swarms are bird flocks, fish schools, and colonies of social insects such as termites, ants and bees. In the 1990s, two approaches in particular, one based on ant colonies and one on fish schooling/bird flocking, attracted great interest from researchers. Although the self-organization features required by SI are strongly and clearly present in honey bee colonies, researchers only recently, from the beginning of the 2000s, began drawing on the behaviour of these swarm systems to devise new intelligent approaches. Over the past decade, several algorithms have been developed based on different intelligent behaviours of honey bee swarms. Among them, the artificial bee colony (ABC) algorithm is the one that has so far been most widely studied and applied to real-world problems, and the number of researchers interested in the ABC algorithm is growing rapidly. This work presents a comprehensive survey of the advances with ABC and its applications. It is hoped that this survey will be beneficial for researchers studying SI, and the ABC algorithm in particular.

1,645 citations
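The employed/onlooker/scout cycle that the surveyed ABC literature builds on can be sketched in a few lines. The following is a minimal illustrative implementation of the basic scheme, not the code of any surveyed variant; the function name, parameters, and the assumption of a nonnegative objective are choices made here.

```python
import numpy as np

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony (ABC) sketch for continuous minimization.

    f      : objective to minimize (assumed nonnegative for the onlooker probabilities)
    bounds : (low, high) box constraints applied to every dimension
    limit  : trials before a stagnant food source is abandoned (scout phase)
    """
    rng = np.random.default_rng(seed)
    low, high = bounds
    foods = rng.uniform(low, high, (n_food, dim))   # food sources = candidate solutions
    fits = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        # v_ij = x_ij + phi * (x_ij - x_kj): perturb one random dimension
        # toward/away from a random partner source k
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.integers(dim)
        v = foods[i].copy()
        v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        v = np.clip(v, low, high)
        fv = f(v)
        if fv < fits[i]:                 # greedy selection
            foods[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):          # employed bee phase
            try_neighbour(i)
        # onlooker phase: sources chosen in proportion to fitness
        probs = 1.0 / (1.0 + fits)
        probs /= probs.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):
            try_neighbour(i)
        # scout phase: abandon exhausted sources
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(low, high, dim)
            fits[i] = f(foods[i])
            trials[i] = 0

    best = np.argmin(fits)
    return foods[best], fits[best]
```

On a smooth test function such as the 2-D sphere, this sketch converges to a near-zero optimum within a few hundred cycles.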


Journal ArticleDOI
TL;DR: This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods, and its applications.
Abstract: AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. The algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, fish always try to maintain their colonies and accordingly demonstrate intelligent behavior: searching for food, migration and dealing with dangers all happen in a social form, and interactions among all fish in a group result in intelligent social behavior. The algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes its evolution along with all improvements, its combination with various methods, and its applications. Many optimization methods have an affinity with this method, and combining them can improve its performance. Its disadvantages include high time complexity, a lack of balance between global and local search, and a failure to benefit from the experiences of group members for subsequent movements.

333 citations


Journal ArticleDOI
TL;DR: A categorisation of the ME literature is presented based on implicit problem space partitioning using a tacit competitive process between the experts: the first group is called the mixture of implicitly localised experts (MILE), and the second is called the mixture of explicitly localised experts (MELE), as it uses pre-specified clusters.
Abstract: Mixture of experts (ME) is one of the most popular and interesting combining methods, with great potential to improve performance in machine learning. ME is based on the divide-and-conquer principle, in which the problem space is divided among a few neural network experts supervised by a gating network. Earlier work on ME developed different strategies to divide the problem space among the experts. To survey and analyse these methods more clearly, we present a categorisation of the ME literature based on this difference. Various ME implementations are classified into two groups, according to the partitioning strategy used and both how and when the gating network is involved in the partitioning and combining procedures. In the first group, the conventional ME and its extensions stochastically partition the problem space into a number of subspaces using a specially employed error function, and experts become specialised in each subspace. In the second group, the problem space is explicitly partitioned by a clustering method before the experts' training process starts, and each expert is then assigned to one of these subspaces. Because the first group partitions the problem space implicitly, through a tacit competitive process between the experts, we call it the mixture of implicitly localised experts (MILE); the second group, which uses pre-specified clusters, is called the mixture of explicitly localised experts (MELE). The properties of both groups are investigated in comparison with each other. The investigation of MILE versus MELE, discussing the advantages and disadvantages of each group, shows that the two approaches have complementary features. Moreover, the features of the ME method are compared with other popular combining methods, including boosting and negative correlation learning. As the investigated methods have complementary strengths and limitations, previous research that attempted to combine their features in integrated approaches is reviewed, and some suggestions are proposed for future research directions.

325 citations
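A tiny, self-contained sketch of the "implicitly localised" (MILE) flavour described above: linear experts and a softmax gating network trained jointly by gradient descent, so the gate partitions the input space on its own rather than from a pre-clustering step. All names and hyperparameters here are illustrative assumptions, not taken from the surveyed papers.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class MixtureOfExperts:
    """Mixture of linear experts with a softmax gate, trained jointly on
    squared error; the gate learns an implicit partition of the input space."""

    def __init__(self, n_experts, dim, lr=0.2, seed=0):
        rng = np.random.default_rng(seed)
        self.We = rng.normal(0, 0.1, (n_experts, dim + 1))  # expert weights (+bias)
        self.Wg = rng.normal(0, 0.1, (n_experts, dim + 1))  # gate weights (+bias)
        self.lr = lr

    def _design(self, X):
        return np.hstack([X, np.ones((len(X), 1))])

    def predict(self, X):
        Xb = self._design(X)
        experts = Xb @ self.We.T            # (n, K) per-expert outputs
        gates = softmax(Xb @ self.Wg.T)     # (n, K) mixing weights
        return (gates * experts).sum(axis=1), experts, gates

    def fit(self, X, y, epochs=2000):
        Xb = self._design(X)
        for _ in range(epochs):
            yhat, experts, gates = self.predict(X)
            err = yhat - y
            # gradients of 0.5*mean(err^2): dy/d(expert_k) = g_k,
            # dy/d(gate logit_k) = g_k * (e_k - yhat)
            d_exp = gates * err[:, None]
            self.We -= self.lr * d_exp.T @ Xb / len(X)
            d_gate = gates * (experts - yhat[:, None]) * err[:, None]
            self.Wg -= self.lr * d_gate.T @ Xb / len(X)
        return self
```

On a piecewise-linear target such as y = |x|, two linear experts with a learned gate fit substantially better than the initial model, illustrating the divide-and-conquer idea.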


Journal ArticleDOI
TL;DR: Various attack types are described, new dimensions for attack classification are introduced, and detailed descriptions of the proposed detection and robust recommendation algorithms are given.
Abstract: Online vendors employ collaborative filtering algorithms to provide recommendations to their customers and thereby increase their sales and profits. Although recommendation schemes are successful in e-commerce sites, they are vulnerable to shilling or profile injection attacks. On the one hand, online shopping sites utilize collaborative filtering schemes to enhance their competitive edge over other companies. On the other hand, malicious users and/or competing vendors might insert fake profiles into the user-item matrices so as to bias the predicted ratings in their favour. In the past decade, various studies have scrutinized different shilling attack strategies, profile injection attack types, shilling attack detection schemes, and robust algorithms proposed to overcome such attacks, and have evaluated them with respect to accuracy, cost/benefit, and overall performance. Due to their popularity and importance, we survey shilling attacks in collaborative filtering algorithms. Giving an overall picture of the various shilling attack types by introducing new classification attributes is imperative for further research. Explaining shilling attack detection schemes and the robust algorithms proposed so far in detail might open a lead to develop new detection schemes and further enhance robust algorithms, or even propose new ones. Thus, we describe various attack types and introduce new dimensions for attack classification. Detailed descriptions of the proposed detection and robust recommendation algorithms are given. Moreover, we briefly explain evaluation of the proposed schemes. We conclude the paper by discussing various open questions.

273 citations
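To make profile injection concrete, here is a hedged sketch of one classic attack model from this literature, the "random attack" in its push variant: each fake profile rates the target item at the maximum and fills a random subset of other items with ratings drawn around the system mean. The function and its parameters are illustrative assumptions, not the attack generator of any surveyed paper.

```python
import numpy as np

def random_attack_profiles(R, target_item, n_profiles=5, filler_frac=0.1,
                           r_max=5.0, seed=0):
    """Append 'random attack' shilling profiles (push variant) to a
    user-item rating matrix R, where NaN marks unrated cells.

    Each fake profile rates the target item r_max and a random filler set
    with ratings drawn around the overall mean of R. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    n_items = R.shape[1]
    mu, sigma = np.nanmean(R), np.nanstd(R)
    n_filler = max(1, int(filler_frac * n_items))
    profiles = np.full((n_profiles, n_items), np.nan)
    for p in profiles:
        fillers = rng.choice([i for i in range(n_items) if i != target_item],
                             size=n_filler, replace=False)
        # filler ratings mimic the global rating distribution
        p[fillers] = np.clip(rng.normal(mu, sigma, n_filler), 1.0, r_max)
        p[target_item] = r_max          # push the target toward the max rating
    return np.vstack([R, profiles])
```

Detection schemes in the survey exploit exactly the statistical signature such generators leave behind (e.g. filler ratings clustered around the mean with an extreme target rating).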


Journal ArticleDOI
TL;DR: This paper makes an exhaustive survey of the various applications of quantum-inspired computational intelligence (QCI) techniques proposed to date and presents an overview of applications of QCI in solving various problems in engineering.
Abstract: This paper makes an exhaustive survey of the various applications of quantum-inspired computational intelligence (QCI) techniques proposed to date. Definitions, categorization and motivation for QCI techniques are stated clearly. Major drawbacks and challenges are discussed. The significance of this work is that it presents an overview of applications of QCI in solving various problems in engineering, which will be very useful for researchers in quantum computing exploring this young and upcoming discipline.

126 citations


Journal ArticleDOI
TL;DR: This paper presents a survey of research on human behavior analysis, with the scope of analyzing the capabilities of state-of-the-art methodologies and a special focus on semantically enhanced analysis.
Abstract: With increasing crime rates in today's world, there is a corresponding awareness of the necessity of detecting abnormal activity. Automation of abnormal human behavior analysis can play a significant role in security by decreasing the time taken to thwart unwanted events, picking them up at the suspicion stage itself. With advances in technology, surveillance systems can become more automated than manual. Human behavior analysis, although crucial, is highly challenging. Tracking and recognizing objects and human motion from surveillance videos, followed by automatic summarization of their content, has become a hot topic of research. Many researchers have contributed to the field of automated video surveillance through detection, classification and tracking algorithms. Earlier research work is insufficient for comprehensive analysis of human behavior. With the introduction of semantics, the context of a surveillance domain may be established. Such semantics may extend surveillance systems to perform event-based behavior analysis relevant to the domain. This paper presents a survey of research on human behavior analysis, with the scope of analyzing the capabilities of state-of-the-art methodologies and a special focus on semantically enhanced analysis.

109 citations


Journal ArticleDOI
TL;DR: This comprehensive article covers various neural network based models for software effort estimation as presented by various researchers, spanning a range of features used for effort prediction.
Abstract: Prediction of software development effort is a key task for the effective management of any software industry. The accuracy and reliability of prediction mechanisms are also important. Neural network based models are competitive with traditional regression and statistical models for software effort estimation. This comprehensive article covers various neural network based models for software effort estimation as presented by various researchers. The review of twenty-one articles covers a range of features used for effort prediction. This survey aims to support research on effort prediction and to emphasize the capabilities of neural network based models in effort prediction.

107 citations


Journal ArticleDOI
TL;DR: The main objective is to explore the utility of a neural network-based approach to the recognition of hand gestures in human–computer interaction based on shape analysis.
Abstract: This paper presents a novel technique for hand gesture recognition in human–computer interaction based on shape analysis. The main objective of this effort is to explore the utility of a neural network-based approach to the recognition of hand gestures. A multi-layer perceptron neural network is built for classification using the back-propagation learning algorithm. The goal of static hand gesture recognition is to classify the given hand gesture data, represented by some features, into a predefined finite number of gesture classes. The proposed system presents a recognition algorithm to recognize a set of six specific static hand gestures, namely: Open, Close, Cut, Paste, Maximize, and Minimize. The hand gesture image passes through three stages: preprocessing, feature extraction, and classification. In the preprocessing stage, some operations are applied to extract the hand gesture from its background and prepare the hand gesture image for the feature extraction stage. In the first method, the hand contour is used as a feature, which handles scaling and translation problems (in some cases). The complex moments algorithm, however, is used to describe the hand gesture and handle the rotation problem in addition to scaling and translation. Both feature sets feed a multi-layer neural network classifier trained with the back-propagation learning algorithm. The results show that the first method achieves a recognition rate of 70.83%, while the second method, proposed in this article, achieves a better recognition rate of 86.38%.

100 citations


Journal ArticleDOI
TL;DR: An approach to decision making problems based on the soft fuzzy rough set model is given, by analyzing the limitations and advantages of the existing literature.
Abstract: Recently, the theory and applications of soft sets have attracted the attention of many scholars in various areas. In particular, many authors have developed theories that combine the soft set with other mathematical theories. In this paper, we propose a new concept of soft fuzzy rough set by combining the fuzzy soft set with the traditional fuzzy rough set. The soft fuzzy rough lower and upper approximation operators of any fuzzy subset in the parameter set are defined via the concept of the pseudo fuzzy binary relation (or pseudo fuzzy soft set) established in this paper. Meanwhile, several deformations of the soft fuzzy rough lower and upper approximations are also presented. Furthermore, we discuss some basic properties of the approximation operators in detail. Subsequently, we give an approach to decision making problems based on the soft fuzzy rough set model by analyzing the limitations and advantages of the existing literature. The decision steps and the algorithm of the decision method are also given. The proposed approach can obtain an objective decision result using only the data information owned by the decision problem. Finally, the validity of the decision methods is tested by an applied example.

98 citations


Journal ArticleDOI
TL;DR: This paper clusters, summarizes, interprets and evaluates neural networks in document image preprocessing, and highlights the importance of learning algorithms in neural network training and testing for preprocessing.
Abstract: Neural networks are popular in the research community due to their generalization abilities. They have been successfully implemented in biometrics, feature selection, object tracking, document image preprocessing and classification. This paper specifically clusters, summarizes, interprets and evaluates neural networks in document image preprocessing. The importance of the learning algorithms in neural network training and testing for preprocessing is also highlighted. Finally, a critical analysis of the reviewed approaches is given and future research guidelines in the field are suggested.

97 citations


Journal ArticleDOI
TL;DR: This study proposes non-adaptive and adaptive resampling schemes for the integration of multiple independent and dependent clusterings, and investigates the effectiveness of bagging techniques, comparing the efficacy of sampling with and without replacement, in conjunction with several consensus algorithms.
Abstract: Clustering ensembles combine multiple partitions of data into a single clustering solution of better quality. Inspired by the success of supervised bagging and boosting algorithms, we propose non-adaptive and adaptive resampling schemes for the integration of multiple independent and dependent clusterings. We investigate the effectiveness of bagging techniques, comparing the efficacy of sampling with and without replacement, in conjunction with several consensus algorithms. In our adaptive approach, individual partitions in the ensemble are sequentially generated by clustering specially selected subsamples of the given dataset. The sampling probability for each data point dynamically depends on the consistency of its previous assignments in the ensemble. New subsamples are then drawn to increasingly focus on the problematic regions of the input feature space. A measure of data point clustering consistency is therefore defined to guide this adaptation. Experimental results show improved stability and accuracy for clustering structures obtained via bootstrapping, subsampling, and adaptive techniques. A meaningful consensus partition for an entire set of data points emerges from multiple clusterings of bootstraps and subsamples. Subsamples of small size can reduce computational cost and measurement complexity for many unsupervised data mining tasks with distributed sources of data. This empirical study also compares the performance of adaptive and non-adaptive clustering ensembles using different consensus functions on a number of datasets. By focusing attention on the data points with the least consistent clustering assignments, one can better approximate the inter-cluster boundaries, or at least create diversity in boundaries, improving clustering accuracy and convergence speed as a function of the number of partitions in the ensemble.
The comparison of adaptive and non-adaptive approaches is a new avenue for research, and this study helps to pave the way for the useful application of distributed data mining methods.
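The adaptive loop described above can be sketched compactly: each partition clusters a subsample drawn with point probabilities that grow for points whose pairwise co-associations so far are least consistent (closest to the maximally ambiguous rate 0.5). This is an illustrative reading of the scheme, not the authors' implementation; the consistency measure and all parameter names are assumptions made here.

```python
import numpy as np

def kmeans(X, k, rng, iters=50):
    # plain Lloyd's algorithm; returns hard labels
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return lab

def adaptive_ensemble(X, k=2, n_partitions=10, frac=0.7, seed=0):
    """Adaptive-resampling clustering ensemble sketch.

    Returns the co-association rate matrix (fraction of partitions in which
    each pair of points shared a cluster, among partitions sampling both).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    co = np.zeros((n, n))      # co-association counts
    seen = np.zeros((n, n))    # how often a pair was subsampled together
    probs = np.full(n, 1.0 / n)
    for _ in range(n_partitions):
        idx = rng.choice(n, size=int(frac * n), replace=False, p=probs)
        lab = kmeans(X[idx], k, rng)
        same = (lab[:, None] == lab[None, :]).astype(float)
        co[np.ix_(idx, idx)] += same
        seen[np.ix_(idx, idx)] += 1.0
        # consistency of a point = distance of its observed co-association
        # rates from the ambiguous value 0.5; inconsistent points get
        # higher sampling probability in the next partition
        rate = np.divide(co, seen, out=np.full_like(co, 0.5), where=seen > 0)
        consistency = np.abs(rate - 0.5).mean(axis=1) * 2.0
        weights = 1.0 - consistency + 1e-3
        probs = weights / weights.sum()
    return np.divide(co, seen, out=np.zeros_like(co), where=seen > 0)
```

A consensus function (e.g. thresholding or hierarchical clustering of this co-association matrix) would then produce the final partition.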

Journal ArticleDOI
TL;DR: This paper reviews the research progress of TWSVM, analyzing its basic theory and algorithmic ideas, describing learning models and specific applications in recent years, and outlining research and development prospects.
Abstract: Twin support vector machines (TWSVM) is based on the idea of proximal SVM based on generalized eigenvalues (GEPSVM), and determines two nonparallel planes by solving two related SVM-type problems, so that its computing cost in the training phase is 1/4 that of a standard SVM. In addition to keeping the superior characteristics of GEPSVM, TWSVM significantly outperforms GEPSVM in classification performance. Unlike GEPSVM, however, the method requires the solution of two smaller quadratic programming problems. This paper mainly reviews the research progress of TWSVM. Firstly, it analyzes the basic theory and the algorithmic ideas of TWSVM; it then traces the research progress of TWSVM in recent years, including learning models and specific applications; finally, it points out research and development prospects.
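The "two smaller quadratic programming problems" mentioned above have a standard form in the TWSVM literature. The following is that commonly cited formulation, written in the usual notation (which may differ from the surveyed paper): A holds the class +1 samples row-wise, B the class −1 samples, e₁ and e₂ are all-ones vectors, and c₁, c₂ are penalty parameters.

```latex
\begin{aligned}
(\mathrm{TWSVM1})\quad
  &\min_{w^{(1)},\,b^{(1)},\,\xi}\;
    \tfrac{1}{2}\,\bigl\|A w^{(1)} + e_1 b^{(1)}\bigr\|^2 + c_1\, e_2^{\top}\xi \\
  &\;\text{s.t.}\quad -\bigl(B w^{(1)} + e_2 b^{(1)}\bigr) + \xi \ge e_2,\qquad \xi \ge 0,\\[4pt]
(\mathrm{TWSVM2})\quad
  &\min_{w^{(2)},\,b^{(2)},\,\eta}\;
    \tfrac{1}{2}\,\bigl\|B w^{(2)} + e_2 b^{(2)}\bigr\|^2 + c_2\, e_1^{\top}\eta \\
  &\;\text{s.t.}\quad \bigl(A w^{(2)} + e_1 b^{(2)}\bigr) + \eta \ge e_1,\qquad \eta \ge 0.
\end{aligned}
```

Each problem keeps one class close to its own plane while pushing the other class at least unit distance away; since each QP's constraints involve only one class's samples, the two problems are each smaller than the single standard-SVM QP, which is the source of the training-cost advantage. A new point is assigned to the class whose plane it lies nearer.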

Journal ArticleDOI
TL;DR: In this paper, the authors describe basic feature selection issues in the preprocessing step and discuss the importance of selecting the relevant and necessary features in the preprocessing step to improve learning accuracy and training speed.
Abstract: A lot of candidate features are usually provided to a learning algorithm for producing a complete characterization of the classification task. However, it is often the case that the majority of the candidate features are irrelevant or redundant to the learning task, which will deteriorate the performance of the employed learning algorithm and lead to the problem of overfitting. The learning accuracy and training speed may be significantly deteriorated by these superfluous features. So it is of fundamental importance to select the relevant and necessary features in the preprocessing step. This paper describes basic feature selection issues
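The simplest family of preprocessing-step selectors the abstract alludes to is the filter approach: score each feature independently of any learner, then keep the top-ranked ones. Here is a minimal correlation-based filter as an illustration; the function name and scoring choice are assumptions made for this sketch.

```python
import numpy as np

def rank_features(X, y, top_k=None):
    """Filter-style feature selection sketch: rank the columns of X by the
    absolute Pearson correlation with the target y. Irrelevant (noise)
    columns should fall to the bottom of the ranking."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0, 1.0, denom)
    order = np.argsort(-scores)             # best feature first
    return order if top_k is None else order[:top_k]
```

Filters like this are fast but univariate; they can miss redundancy between features, which is why the literature also develops wrapper and embedded selectors.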

Journal ArticleDOI
TL;DR: A novel approach for evaluating job applicants in online recruitment systems, using machine learning algorithms to solve the candidate ranking problem and performing semantic matching techniques, which was found to perform consistently compared to human recruiters and can be trusted for the automation of applicant ranking and personality mining.
Abstract: In this work we present a novel approach for evaluating job applicants in online recruitment systems, using machine learning algorithms to solve the candidate ranking problem and performing semantic matching techniques. An application of our approach is implemented in the form of a prototype system, whose functionality is showcased and evaluated in a real-world recruitment scenario. The proposed system extracts a set of objective criteria from the applicants' LinkedIn profile, and compares them semantically to the job's prerequisites. It also infers their personality characteristics using linguistic analysis on their blog posts. Our system was found to perform consistently compared to human recruiters, thus it can be trusted for the automation of applicant ranking and personality mining.

Journal ArticleDOI
TL;DR: A review of the Gaussian Mixture Model based segmentation algorithms for brain MRI images is presented, along with their comparative evaluations based on reported results.
Abstract: Image segmentation is at a preliminary stage of inclusion in diagnosis tools, and the accurate segmentation of brain MRI images is crucial for a correct diagnosis by these tools. Due to inhomogeneity, low contrast, noise and the mismatch between content and semantics, brain MRI image segmentation is a challenging job. A review of the Gaussian Mixture Model based segmentation algorithms for brain MRI images is presented. The review covers the segmentation algorithms and their comparative evaluations based on reported results.
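The common core of the GMM-based segmenters surveyed here is EM on pixel intensities: intensities are modelled as a mixture of Gaussians (one per tissue class) and each pixel is assigned to its most probable component. A minimal 1-D sketch of that building block follows; the deterministic percentile initialization and all names are choices made for this illustration, not any surveyed algorithm.

```python
import numpy as np

def gmm_em_1d(x, k=2, iters=100):
    """Fit a 1-D Gaussian mixture by EM and return (means, variances, labels)."""
    # deterministic spread-out initialization keeps the sketch reproducible
    mu = np.percentile(x, np.linspace(10, 90, k)).astype(float)
    var = np.full(k, x.var() / k + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each intensity
        d = (x[:, None] - mu[None, :]) ** 2
        logp = -0.5 * (np.log(2 * np.pi * var) + d / var) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-6
    # MAP labels from the fitted mixture
    d = (x[:, None] - mu[None, :]) ** 2
    labels = (d / var + np.log(var) - 2 * np.log(pi)).argmin(axis=1)
    return mu, var, labels
```

The brain-MRI algorithms in the review extend this core with spatial priors, bias-field correction, and multi-channel intensities.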

Journal ArticleDOI
TL;DR: This paper is the first survey that focuses on touched character segmentation; it provides segmentation rates and descriptions of the test data for the approaches discussed, and examines the main trends in the field of touched character segmentation.
Abstract: Character segmentation is a challenging problem in the field of optical character recognition. The presence of touched characters makes this problem more difficult. The goal of this paper is to present the major concepts and progress in the domain of off-line cursive touched character segmentation. Accordingly, two broad classes of technique are identified: methods that perform explicit character segmentation and methods that perform implicit character segmentation. The basic methods used by each class of technique are presented, and the contributions of individual algorithms within each class are discussed. This is the first survey that focuses on touched character segmentation and provides segmentation rates and descriptions of the test data for the approaches discussed. Finally, the main trends in the field of touched character segmentation are examined, important contributions are presented and future directions are suggested.

Journal ArticleDOI
TL;DR: This paper introduces and classifies the main attacks on open MASs, surveys and analyses various security techniques in the literature, and suggests which security technique is an appropriate countermeasure for which classes of attack.
Abstract: Open multi-agent systems (MASs) have growing popularity in the multi-agent systems community and are predicted to have many applications in the future, as large scale distributed systems become more widespread. A major practical limitation to open MASs is security, because the openness of such systems negates many traditional security solutions. In this paper we introduce and classify the main attacks on open MASs. We then survey and analyse various security techniques in the literature and categorise them under prevention and detection approaches. Finally, we suggest which security technique is an appropriate countermeasure for which classes of attack.

Journal ArticleDOI
TL;DR: This paper highlights the previous research and methods that have been used in the surface reconstruction field, where suitable methods are chosen based on the data used.
Abstract: Surface reconstruction means retrieving data by scanning an object with a device such as a laser scanner and reconstructing it on a computer to regain a soft copy of the data for that particular object. It is a reverse process and is very useful, especially when the original data for the object is missing and no backup was made. By reconstructing the surface, the data can be recollected and stored for future purposes. The data can be in the form of structured or unstructured points. The accuracy of the reconstructed result matters, because an incorrect result will not match the original shape of the object. Therefore, suitable methods should be chosen based on the data used. Soft computing methods have also been used in the reconstruction field. This paper highlights the previous research and methods that have been used in the surface reconstruction field.

Journal ArticleDOI
TL;DR: All existing approaches to event detection, video summarization based on video streams and the application of text sources in event detection are surveyed, and different computer vision approaches are discussed and compared.
Abstract: This paper presents a state-of-the-art review of feature extraction for soccer video summarization research. All existing approaches to event detection, video summarization based on video streams and the application of text sources in event detection are surveyed. Regarding the current challenges for automatic and real-time provision of summary videos, different computer vision approaches are discussed and compared. Audio and video feature extraction methods and their combination with textual methods are investigated. Available commercial products are presented to better clarify the boundaries in this domain, and future directions for the improvement of existing systems are suggested.

Journal ArticleDOI
TL;DR: A review of the FCM based segmentation algorithms for brain MRI images is presented and their comparative evaluations based on reported results and the result of experiments for neighborhood based extensions for FCM.
Abstract: Brain image segmentation is one of the most important parts of clinical diagnostic tools. Fuzzy c-means (FCM) is one of the most popular clustering based segmentation methods. In this paper, a review of the FCM based segmentation algorithms for brain MRI images is presented. The review covers the FCM based segmentation algorithms, their comparative evaluations based on reported results, and the results of experiments on neighborhood based extensions of FCM.
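The baseline that the reviewed extensions build on is standard FCM: soft memberships u_ik with a fuzzifier m, alternating updates of centroids and memberships until the memberships stop changing. A minimal sketch of that baseline follows; the neighborhood-based extensions in the review add spatial terms to these updates, which this sketch does not include.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    """Standard fuzzy c-means: returns (centroids C, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)       # rows are soft memberships
    C = None
    for _ in range(iters):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        ratio = d[:, :, None] / d[:, None, :]
        Unew = 1.0 / (ratio ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(Unew - U).max() < eps:
            U = Unew
            break
        U = Unew
    return C, U
```

Hard segmentation labels are obtained as `U.argmax(axis=1)`; the fuzzifier m controls how soft the boundaries between tissue classes remain.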

Journal ArticleDOI
TL;DR: The current issues in information security are discussed, the benefits of artificially trained techniques in the security process are described, and limitations of the techniques are discussed to identify the factors to be taken into account for efficient performance.
Abstract: The information security paradigm is under constant threat, particularly in enterprises. The extension of the World Wide Web and the rapid expansion in the size and types of documents involved in enterprises have generated many challenges. Extensive research has been conducted to determine effective solutions for detection and response, but there is still room for improvement. Factors that hinder the development of accurate detection and response techniques have been linked to the amount of data processing involved, the number of protocols and applications running, and variation in users' requirements and responses. This paper is aimed at discussing the current issues in artificial intelligence (AI) techniques that could help in developing better threat detection algorithms to secure information in enterprises. It is also observed that current information security techniques in enterprises have shown an inclination towards AI. Conventional techniques for detection and response mostly require human effort to extract characteristics of malicious intent, investigate and analyze abnormal behaviors, and later encode the derived results into the detection algorithm. Instead, AI can provide a direct solution to these requirements with minimal human input. We have made an effort in this paper to discuss the current issues in information security and describe the benefits of artificially trained techniques in the security process. We have also carried out a survey of current AI techniques for intrusion detection systems (IDS). Limitations of the techniques are discussed to identify the factors to be taken into account for efficient performance. Lastly, we have provided a possible research direction in this domain.

Journal ArticleDOI
TL;DR: Felder–Silverman learning style model is identified as the suitable model for E-learning and the use of Fuzzy rules to handle uncertainty in learning style prediction is suggested so that it can enhance the performance of the E- learning system.
Abstract: The performance of the learners in E-learning environments is greatly influenced by the nature of the posted E-learning contents. In such a scenario, the performance of the learners can be enhanced by posting suitable E-learning contents to the learners based on their learning styles. Hence, it is essential to have clear knowledge about various learning styles in order to predict the learning styles of different learners in E-learning environments. However, predicting learning styles requires complete knowledge about the learners' past and present characteristics. Since the knowledge available about learners is uncertain, this can be addressed through the use of fuzzy rules, which handle uncertainty effectively. The core objective of this survey paper is to outline the working of the existing learning style models and the metrics used to evaluate them. Based on the available models, this paper identifies the Felder–Silverman learning style model as the suitable model for E-learning and suggests the use of fuzzy rules to handle uncertainty in learning style prediction, so as to enhance the performance of the E-learning system.

Journal ArticleDOI
TL;DR: This paper surveys the major advances in GA, particularly in relation to the class of structured population GAs, where better exploration and exploitation of the search space is accomplished by controlling interactions among individuals in the population pool.
Abstract: The Genetic Algorithm (GA) has been one of the most studied topics in evolutionary algorithm literature. Mimicking natural processes of inheritance, mutation, natural selection and genetic operators, GAs have been successful in solving various optimization problems. However, standard GA is often criticized as being too biased in candidate solutions due to genetic drift in search. As a result, GAs sometimes converge on premature solutions. In this paper, we survey the major advances in GA, particularly in relation to the class of structured population GAs, where better exploration and exploitation of the search space is accomplished by controlling interactions among individuals in the population pool. They can be classified as spatial segregation, spatial distance and heterogeneous population. Additionally, secondary factors such as aging, social behaviour, and so forth further guide and shape the reproduction process. Restricting randomness in reproduction has been seen to have positive effects on GAs. It is our hope that by reviewing the many existing algorithms, we shall see even better algorithms being developed.
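One of the structured-population families the survey covers, the spatial-segregation (cellular) GA, can be sketched briefly: individuals live on a toroidal grid and each cell mates only with its best local neighbour, which slows takeover by a single good solution and preserves diversity. The grid topology, operators, and parameters below are illustrative assumptions, not any specific surveyed algorithm.

```python
import numpy as np

def cellular_ga(fitness, n_bits, grid=6, gens=80, p_mut=0.02, seed=0):
    """Cellular GA sketch on binary strings: toroidal grid, von Neumann
    neighbourhood mating, uniform crossover, bit-flip mutation, and
    local elitist replacement. Returns (best individual, best fitness)."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, (grid, grid, n_bits))
    fit = np.array([[fitness(pop[i, j]) for j in range(grid)]
                    for i in range(grid)], dtype=float)
    for _ in range(gens):
        new_pop, new_fit = pop.copy(), fit.copy()
        for i in range(grid):
            for j in range(grid):
                # mate with the best of the four wrap-around neighbours
                nbrs = [((i - 1) % grid, j), ((i + 1) % grid, j),
                        (i, (j - 1) % grid), (i, (j + 1) % grid)]
                mate = max(nbrs, key=lambda ij: fit[ij])
                # uniform crossover followed by bit-flip mutation
                mask = rng.random(n_bits) < 0.5
                child = np.where(mask, pop[i, j], pop[mate])
                child = child ^ (rng.random(n_bits) < p_mut)
                cf = fitness(child)
                if cf >= fit[i, j]:          # local elitist replacement
                    new_pop[i, j], new_fit[i, j] = child, cf
        pop, fit = new_pop, new_fit
    best = np.unravel_index(fit.argmax(), fit.shape)
    return pop[best], fit[best]
```

Because selection pressure acts only within small overlapping neighbourhoods, good genes spread in waves across the grid instead of instantly dominating the whole population, which is exactly the diversity-preserving effect the survey attributes to structured populations.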

Journal ArticleDOI
TL;DR: This paper summarizes the major techniques used in machine translation from Arabic into English, and discusses their strengths and weaknesses.
Abstract: Although no machine translation technique fully meets human requirements, and each technique demonstrates its own advantages and disadvantages, finding a quick and efficient translation mechanism has become an urgent necessity, due to the differences between the languages spoken in the world's communities and the vast development that has occurred worldwide. Thus, the purpose of this paper is to shed light on some of the machine translation techniques available in the literature, to encourage researchers to study them. We discuss some of the linguistic characteristics of the Arabic language. Features of Arabic that are related to machine translation are discussed in detail, along with possible difficulties that they might present. This paper summarizes the major techniques used in machine translation from Arabic into English, and discusses their strengths and weaknesses.

Journal ArticleDOI
TL;DR: Two illustrative examples (new graduates’ industry sector preferences and supermarket location selection) with crisp and fuzzy values are modeled and solved in the present study.
Abstract: In this paper, a multiple attribute decision making technique is explained in detail and its existing applications are reviewed and analyzed for the first time in the literature. The technique originated in combinatorial mathematics; it is based on graph theory and matrix algebra and has desirable properties for modeling and solving complex decision making problems, such as the ability to model interactions among criteria and to generate hierarchical models. In order to enable a better understanding of the technique, two illustrative examples (new graduates' industry sector preferences and supermarket location selection) with crisp and fuzzy values are also modeled and solved in the present study.
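Graph-theoretic matrix techniques of this family typically score each alternative by the permanent of an attribute matrix (the determinant without alternating signs). The following sketch is a hypothetical illustration with made-up numbers, not the paper's own example: the diagonal holds normalized attribute scores and off-diagonal entries hold relative importances of attribute pairs.

```python
from itertools import permutations

def permanent(m):
    # Permanent of a square matrix: like the determinant but with all terms
    # added. O(n!) enumeration is fine for the small matrices of decision
    # problems with a handful of attributes.
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += prod
    return total

# Hypothetical alternative with 3 attributes: diagonal = attribute scores in
# [0, 1]; m[i][j] = relative importance of attribute i over j, with the
# convention m[j][i] = 1 - m[i][j].
matrix = [
    [0.8, 0.6, 0.7],
    [0.4, 0.5, 0.6],
    [0.3, 0.4, 0.9],
]
print(round(permanent(matrix), 4))  # → 1.093
```

A larger permanent indicates a better alternative, since every term aggregates both attribute values and their mutual importances.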

Journal ArticleDOI
TL;DR: A critical evaluation on the use of Artificial Intelligence (AI) based techniques in ODR is presented, which takes on some of the problems identified in the current state of the art in linking ODR and AI.
Abstract: Litigation in court is still the main dispute resolution mode. However, given the amount and characteristics of the new disputes, mostly arising out of electronic contracting, courts are becoming slower and outdated. Online Dispute Resolution (ODR) recently emerged as a set of tools and techniques, supported by technology, aimed at facilitating conflict resolution. In this paper we present a critical evaluation of the use of Artificial Intelligence (AI) based techniques in ODR. In order to fulfill this goal, we analyze a set of commercial providers (in this case twenty-four) and some research projects (in this circumstance six). Supported by the results so far achieved, a new approach to deal with the problem of ODR is proposed, in which we take on some of the problems identified in the current state of the art in linking ODR and AI.

Journal ArticleDOI
TL;DR: A new approach for the classification of neural network hardware is presented, which takes into account most consensual elements on the one hand and, on the other, the evolution of integrated circuit design technology and design techniques.
Abstract: The aim of this paper is to propose a new classification approach for artificial neural network hardware. Our motivation behind this work is justified by the following two arguments: first, during the last two decades many approaches have been proposed for the classification of neural network hardware, yet at present there is no clear consensus on classification criteria and performance measures. Second, with the evolution of microelectronic technology and of design tools and techniques, new artificial neural network (ANN) implementations have been proposed, but they are not taken into consideration in the existing classification approaches for ANN hardware. In this paper, we propose a new approach for the classification of neural network hardware. The paper is organized in three parts: in the first part we review most of the existing approaches proposed in the literature during the period 1990–2010 and show the advantages and disadvantages of each one. In the second part, we propose a new classification approach that takes into account most consensual elements on the one hand and, on the other, the evolution of integrated circuit design technology and design techniques. In the third part, we review examples of neural hardware achievements from industrial, academic and research institutions. According to our classification approach, these achievements range from standard chips to VLSI ASICs, FPGAs and embedded systems on chip. Finally, we enumerate design issues that remain open. This could help to give new directions for future research work.

Journal ArticleDOI
TL;DR: The state-of-the-art methods for mining emerging patterns are reviewed and classified under different taxonomies, and new trends are identified, to help researchers and users select the appropriate algorithm for a given application.
Abstract: Obtaining an accurate class prediction for a query object is an important component of supervised classification. However, it can also be important to understand the classification in terms of the application domain, especially when the prediction disagrees with the expected result. Many accurate classifiers are unable to explain their classification results in terms understandable by an application expert. Classifiers based on emerging patterns, on the other hand, are accurate and easy to understand. The goal of this article is to review the state-of-the-art methods for mining emerging patterns, classify them under different taxonomies, and identify new trends. In this survey, we present the most important emerging pattern miners, categorizing them on the basis of the mining paradigm, the use of discretization, and the stage at which the mining occurs. We provide detailed descriptions of the mining paradigms with their pros and cons, which helps researchers and users to select the appropriate algorithm for a given application.
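The core measure behind emerging patterns can be sketched briefly. The toy transactions below are illustrative assumptions: a pattern's growth rate is its support in one class divided by its support in the other, and patterns whose growth rate exceeds a threshold are "emerging" and directly explain a classification.

```python
def support(pattern, transactions):
    # Fraction of transactions that contain every item of the pattern.
    hits = sum(1 for t in transactions if pattern <= t)
    return hits / len(transactions)

def growth_rate(pattern, positives, negatives):
    s_pos = support(pattern, positives)
    s_neg = support(pattern, negatives)
    if s_neg == 0:
        # A "jumping" emerging pattern: present in one class, absent in the other.
        return float("inf") if s_pos > 0 else 0.0
    return s_pos / s_neg

# Hypothetical itemset transactions for two classes.
positives = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"a", "b"}]
negatives = [{"b"}, {"b", "c"}, {"a"}, {"c"}]

print(growth_rate({"a"}, positives, negatives))       # → 4.0
print(growth_rate({"a", "b"}, positives, negatives))  # → inf
```

The transparency the abstract praises comes from exactly this form: each mined pattern is a readable itemset whose class bias is quantified by a single ratio.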

Journal ArticleDOI
TL;DR: Automatic fire detection research using intelligent techniques from 2000 to 2010 is reviewed and classified into four categories: fire detectors, false-alarm reduction systems, fire data analysis and fire predictors.
Abstract: An automatic fire detection system is a system capable of assessing environmental factors and their effects on the environment, and of predicting the occurrence of fire in its early stages or even before the outbreak. There are two perspectives in fire detection: fire detection in forests or jungles, and fire detection in occupied or residential areas. Automatic fire detection has attracted increased attention due to its importance in decreasing fire damage. Many studies have considered appropriate techniques for early fire detection. In recent years, researchers have studied technical developments in this field aimed at exploiting wireless communication networks, detection systems and the design of fire prediction systems. In this paper, automatic fire detection research using intelligent techniques from 2000 to 2010 is reviewed. We classify the research into four categories: fire detectors, false-alarm reduction systems, fire data analysis and fire predictors. We also classify the intelligent techniques outlined in the research for each category.

Journal ArticleDOI
TL;DR: A wide variety of benchmark weights selection techniques for linear combination of multiple forecasts in terms of their prediction accuracies are evaluated, demonstrating the relative strengths and weaknesses of various benchmark linear combination techniques.
Abstract: Time series modeling and forecasting are essential in many domains of science and engineering. Extensive work in the literature suggests that combining the outputs of different forecasting methods substantially increases overall accuracy and reduces the risk of model selection. The most popular method of forecast combination is the weighted averaging of the constituent forecasts, whose effectiveness depends solely on an appropriate selection of the combining weights. In this paper, we comprehensively evaluate a wide variety of benchmark weight selection techniques for the linear combination of multiple forecasts in terms of their prediction accuracies. Nine real-world time series from different domains and five individual forecasting methods are used in our empirical work. A robust scheme is also suggested for fairly ranking the combination methods on the basis of their forecasting performance. Our study precisely demonstrates the relative strengths and weaknesses of various benchmark linear combination techniques, which can be of much practical importance.
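One classic benchmark in this family is error-based weighting, where each model's weight is inversely proportional to its past forecast error. The numbers below are illustrative assumptions, not the paper's data; the sketch only shows the mechanics of the linear combination.

```python
def inverse_error_weights(errors):
    # Weight each model by the reciprocal of its historical error,
    # normalized so the weights sum to 1.
    inv = [1.0 / e for e in errors]
    s = sum(inv)
    return [v / s for v in inv]

def combine(forecasts, weights):
    # Linear combination: the weighted average of the constituent forecasts.
    return sum(w * f for w, f in zip(weights, forecasts))

past_mse = [4.0, 1.0, 2.0]       # hypothetical historical errors of 3 models
weights = inverse_error_weights(past_mse)
print([round(w, 4) for w in weights])  # → [0.1429, 0.5714, 0.2857]

forecasts = [102.0, 98.0, 100.0]       # the models' next-step forecasts
print(round(combine(forecasts, weights), 4))  # → 99.1429
```

The combined forecast is pulled toward the historically most accurate model while still hedging across the others, which is the risk-reduction effect the abstract refers to.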