
Showing papers in "International Journal of Computational Intelligence Research in 2006"


Journal ArticleDOI
TL;DR: This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature, and includes a classification of the approaches, and identifies the main features of each proposal.
Abstract: The success of the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) has motivated researchers to extend the use of this bio-inspired technique to other areas. One of them is multi-objective optimization. Despite the fact that the first proposal of a Multi-Objective Particle Swarm Optimizer (MOPSO) is over six years old, a considerable number of other algorithms have been proposed since then. This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature. As part of this review, we include a classification of the approaches, and we identify the main features of each proposal. In the last part of the paper, we list some of the topics within this field that we consider as promising areas of future research.
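
A core ingredient in many of the MOPSOs such reviews cover is an external archive of non-dominated solutions from which guides for the swarm are drawn. A minimal, illustrative sketch of that archive update (individual MOPSOs differ in how the archive is bounded and pruned):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, candidate):
    """Insert a candidate into the external archive of non-dominated solutions,
    discarding it if dominated and evicting any members it dominates."""
    if any(dominates(m, candidate) for m in archive):
        return archive  # candidate is dominated: archive unchanged
    return [m for m in archive if not dominates(candidate, m)] + [candidate]
```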

1,314 citations



Journal ArticleDOI
TL;DR: The problem of unsupervised feature selection and its formulation as a multiobjective optimization problem are investigated, and an algorithmic framework encompassing both wrapper and filter methods of feature selection is used.
Abstract: In this paper, the problem of unsupervised feature selection and its formulation as a multiobjective optimization problem are investigated. Two existing multiobjective methods from the literature are revisited and used as the basis for an algorithmic framework, encompassing both wrapper and filter methods of feature selection. A number of alternative algorithms implemented within this framework are then evaluated using an extensive data test suite; the main effect investigated is that of the choice of a primary objective function (a secondary objective function is used only to militate against an inherent cardinality bias affecting all methods of feature subset evaluation). Particular attention is paid in the study to high-dimensional data sets in which the number of features is much larger than the number of samples.
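
As an illustration of such a bi-objective formulation, a candidate feature subset can be scored by a primary quality objective plus its cardinality as the bias-correcting secondary objective. A minimal sketch, assuming a filter-style clustering criterion as the primary objective (an illustrative choice, not necessarily the paper's):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evaluate_subset(X, mask, k=3):
    """Return (primary, secondary) objectives for a boolean feature mask,
    both to be minimized. Primary: negated clustering quality on the
    selected features. Secondary: subset cardinality, used only to counter
    the cardinality bias noted in the abstract."""
    Xs = X[:, mask]
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(Xs)
    return -silhouette_score(Xs, labels), int(mask.sum())
```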

114 citations


Journal ArticleDOI
TL;DR: A general indicator model that can handle any type of distribution representing the uncertainty, allows different distributions for different solutions, and does not assume a 'true' objective vector per solution, but in general regards a solution to be inherently associated with an unknown probability distribution in the objective space.
Abstract: Real-world optimization problems are often subject to uncertainties caused by, e.g., missing information in the problem domain or stochastic models. These uncertainties can take different forms in terms of distribution, bounds, and central tendency. In the multiobjective context, some approaches have been proposed to take uncertainties into account within the optimization process. Most of them are based on a stochastic extension of Pareto dominance that is combined with standard, non-stochastic diversity preservation mechanisms. Furthermore, it is often assumed that the shape of the underlying probability distribution is known and that for each solution there is a 'true' objective value per dimension which is disturbed by noise. In this paper, we consider a slightly different scenario where the optimization goal is specified in terms of a quality indicator—a real-valued function that induces a total preorder on the set of Pareto set approximations. We propose a general indicator model that can handle any type of distribution representing the uncertainty, allows different distributions for different solutions, and does not assume a 'true' objective vector per solution, but in general regards a solution to be inherently associated with an unknown probability distribution in the objective space. To this end, several variants of an evolutionary algorithm for a specific quality indicator, namely the ε-indicator, are suggested and empirically investigated. The comparison to existing techniques such as averaging or probabilistic dominance ranking indicates that the proposed approach is especially useful for high-dimensional objective spaces. Moreover, we introduce a general methodology to visualize and analyze Pareto set approximations in the presence of uncertainty which extends the concept of attainment functions.
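
For reference, the ε-indicator named in the abstract has a compact closed form on crisp (noise-free) Pareto set approximations; the paper's contribution is handling distributions instead, which this sketch does not attempt:

```python
import numpy as np

def additive_eps_indicator(A, B):
    """Additive epsilon-indicator I_eps+(A, B) for minimization: the smallest
    shift eps such that every point of B is weakly dominated by some point of
    A translated by eps in each objective. A, B: (n_points, n_obj) arrays."""
    return max(min(np.max(a - b) for a in A) for b in B)
```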

85 citations


Journal ArticleDOI
TL;DR: It is found that the detection system based on SVM needs less a priori knowledge than other methods and can shorten the training time under the same detection performance condition.
Abstract: A novel method based on support vector machines (SVM) is proposed for detecting computer viruses. By utilizing SVM, the generalizing ability of the virus detection system remains good even when the sample dataset size is small. First, the research progress in computer virus detection is reviewed and the SVM classification algorithm is introduced. Then the model of an SVM-based virus detection system and its virus detection engine are presented. An experiment using system API function call traces is given to illustrate the performance of this model. Finally, a comparison of the detection ability of this method and other methods is given. It is found that the detection system based on SVM needs less a priori knowledge than other methods and can shorten the training time under the same detection performance condition.
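
A minimal sketch of this kind of pipeline, assuming bag-of-API-call n-gram features and an off-the-shelf SVM (the data, kernel, and feature choices below are illustrative assumptions, not the paper's reported setup):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical training data: each sample is one program's API call trace.
traces = [
    "CreateFileA ReadFile CloseHandle",             # benign
    "RegOpenKeyA RegSetValueA CreateRemoteThread",  # viral
]
labels = [0, 1]

# Bag of API-call n-grams feeding an SVM; the kernel and n-gram range are
# illustrative, not the paper's configuration.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), SVC(kernel="rbf"))
model.fit(traces, labels)
print(model.predict(["CreateFileA WriteFile CreateRemoteThread"]))
```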

32 citations


Journal ArticleDOI
TL;DR: Empirical results show that the combination of the min-conflicts heuristic with tabu search can be used to solve the problem of automatic generation of rotating workforce schedules very effectively.
Abstract: Rotating workforce scheduling appears in different forms in a broad range of workplaces, such as industrial plants, call centers, public transportation, and airline companies. It is of high practical relevance to find workforce schedules that fulfill the ergonomic criteria of the employees and reduce costs for the organization. In this paper we propose new heuristic methods for automatic generation of rotating workforce schedules. To improve the quality of each heuristic method alone, we further propose the hybridization of these methods. The following methods are proposed: (1) a Tabu Search (TS) based algorithm, (2) a heuristic method based on the min-conflicts heuristic (MC), (3) a method that includes in the tabu search algorithm the min-conflicts heuristic (TS-MC) and random walk (TS-RW), and (4) a method that includes in the min-conflicts heuristic the tabu mechanism (MC-T), random walk (MC-RW), and both the tabu mechanism and the random walk (MC-T-RW). The appropriate neighborhood structure, tabu mechanism, and fitness function, based on the specifics of the problem, are proposed. The proposed methods are implemented and experimentally evaluated on benchmark examples from the literature and on real-life test problems collected from a broad range of organizations. Empirical results show that the combination of the min-conflicts heuristic with tabu search can be used to solve this problem very effectively. The hybrid methods improve the performance of the commercial system for generation of rotating workforce schedules and are currently in the process of being included in a commercial package for automatic generation of rotating workforce schedules.
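
A minimal sketch of the MC-T idea, a min-conflicts repair loop guarded by a tabu list (illustrative only; the authors' neighborhood structure and fitness function are problem-specific):

```python
def min_conflicts_with_tabu(init, conflicts, neighbors,
                            max_iters=10000, tabu_len=20):
    """Sketch of min-conflicts with a tabu mechanism, not the authors' exact
    implementation. conflicts(s) counts violated constraints; neighbors(s)
    yields (move_key, candidate_schedule) pairs."""
    current, tabu = init, []
    for _ in range(max_iters):
        if conflicts(current) == 0:
            return current  # feasible schedule found
        candidates = [(conflicts(c), key, c)
                      for key, c in neighbors(current) if key not in tabu]
        if not candidates:          # every move is tabu: forget the oldest
            if tabu:
                tabu.pop(0)
            continue
        _, key, current = min(candidates, key=lambda t: t[0])
        tabu.append(key)            # forbid undoing this move for a while
        if len(tabu) > tabu_len:
            tabu.pop(0)
    return current
```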

30 citations


Journal ArticleDOI
TL;DR: Experimental results on the proposed image enhancement technique demonstrate a strong capability to improve the performance of a convolutional face finder compared to histogram equalization and multiscale Retinex with color restoration, without compromising the false alarm rate.
Abstract: A robust and efficient image enhancement technique has been developed to improve the visual quality of digital images that exhibit dark shadows due to the limited dynamic ranges of imaging and display devices, which are incapable of handling high dynamic range scenes. The proposed technique processes images in two separate steps: dynamic range compression and local contrast enhancement. Dynamic range compression is a neighborhood-dependent intensity transformation which is able to enhance the luminance in dark shadows while keeping the overall tonality consistent with that of the input image. The image visibility can be largely and properly improved without creating unnatural rendition in this manner. A neighborhood-dependent local contrast enhancement method is used to enhance the image's contrast following the dynamic range compression. Experimental results on the proposed image enhancement technique demonstrate a strong capability to improve the performance of a convolutional face finder compared to histogram equalization and multiscale Retinex with color restoration, without compromising the false alarm rate.
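
A minimal sketch of the two-step structure, assuming a simple power-law curve for the compression and a Gaussian surround for the local contrast step (both curve shapes are illustrative; the paper's transfer functions are not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, gamma=0.5, contrast=0.4, sigma=5.0):
    """Two-step enhancement in the spirit of the abstract.
    img: RGB float array scaled to [0, 1]."""
    lum = img.mean(axis=2) + 1e-6          # crude luminance estimate
    compressed = lum ** gamma              # 1) dynamic range compression
    surround = gaussian_filter(compressed, sigma=sigma)
    # 2) neighborhood-dependent local contrast enhancement
    enhanced = compressed * (compressed / (surround + 1e-6)) ** contrast
    return np.clip(img * (enhanced / lum)[..., None], 0.0, 1.0)
```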

29 citations



Journal ArticleDOI
TL;DR: A diagnostic process to detect solder joint defects on Printed Circuit Boards assembled in Surface Mounting Technology is introduced; the results prove that the overall classifier is the best compromise in terms of recognition rate and time required for the diagnosis with respect to the single classifiers.
Abstract: This paper introduces a diagnostic process to detect solder joint defects on Printed Circuit Boards assembled in Surface Mounting Technology. The diagnosis is accomplished by a neural network system which processes the images of the solder joints of the integrated circuits mounted on the board. The board images are acquired and then preprocessed to extract the regions of interest for the diagnosis, which are the solder joints of the integrated circuits. Five different levels of solder quality with respect to the amount of solder paste have been defined. Two feature vectors have been extracted from each region of interest, the “geometric” feature vector and the “wavelet” feature vector. Both vectors feed the neural network system, constituted by two Multi-Layer Perceptron neural networks and a Learning Vector Quantization network, for the classification. The experimental results are devoted to comparing the performances of a Multi-Layer Perceptron network, of a Learning Vector Quantization network, and of the overall neural network system, considering both geometric and wavelet features. The results prove that the overall classifier is the best compromise in terms of recognition rate and time required for the diagnosis with respect to the single classifiers.

28 citations


Journal ArticleDOI
TL;DR: The approach and the GGA are applied to the radio link frequency allocation problem (RLFAP) and to randomly generated binary CSPs; the approach lets the agents compute their own GA parameters.
Abstract: DGA is a new multi-agent approach which addresses additive Constraint Satisfaction Problems (∑CSPs). This approach is inspired by the guided genetic algorithm (GGA) and by the dynamic distributed double guided genetic algorithm for Max_CSPs. It consists of dynamically created agents cooperating to solve problems, with each agent performing its own GA. First, our approach is enhanced by many new parameters, which allow not only diversification but also escape from local optima. Second, the GAs performed by the agents will no longer be the same. This is inspired by neo-Darwinist theory and the laws of nature. In fact, our approach lets the agents compute their own GA parameters. In order to show the advantages of DGA, the approach and the GGA are applied to the radio link frequency allocation problem (RLFAP) and to randomly generated binary CSPs.

28 citations


Journal ArticleDOI
TL;DR: The genetic artificial immune system (GAIS) is presented, which evolves non-self detectors and determines their state using a life counter function, and is applied to different classification problems.
Abstract: The natural immune system (NIS) protects the body against unwanted foreign material (non-self cells) that could damage the body (self cells). The NIS can be modeled as an artificial immune system (AIS) to detect any non-self patterns in a non-biological environment. Detectors in the NIS can change from their initial mature status to memory status or to annihilated status. A memory detector is a detector that frequently detects non-self cells and is a general detector for a subset of non-self cells. The NIS uses these memory detectors for a faster response to non-self cells. The purpose of this paper is to present the genetic artificial immune system (GAIS), which evolves these non-self detectors and determines their state using a life counter function. Only detectors with mature or memory status are used to detect non-self. Thus, the number of detectors is dynamically determined by the life counter function. In the paper GAIS is applied to different classification problems.
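
A minimal sketch of the life counter idea (the gain, decay, and threshold values are illustrative assumptions, not the paper's):

```python
def update_detector(detector, detected_nonself, gain=5, decay=1, memory_at=200):
    """Life-counter sketch: a detector's counter rises when it detects
    non-self and decays otherwise; its status moves between the mature,
    memory, and annihilated states named in the abstract."""
    detector["life"] += gain if detected_nonself else -decay
    if detector["life"] <= 0:
        detector["status"] = "annihilated"   # dropped from the detector set
    elif detector["life"] >= memory_at:
        detector["status"] = "memory"        # general, fast-response detector
    elif detector["status"] != "memory":
        detector["status"] = "mature"
    return detector
```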

Journal ArticleDOI
TL;DR: A globally convergent MCA algorithm that can extract multiple minor components sequentially is proposed via the deterministic discrete time (DDT) method.
Abstract: Extracting multiple minor components from the input signal is quite useful for many practical applications. In this paper, a globally convergent MCA algorithm that can extract multiple minor components sequentially is proposed. Convergence of this MCA algorithm is analyzed via the deterministic discrete time (DDT) method. Sufficient conditions are obtained to guarantee the convergence of this MCA algorithm. Simulations are carried out to further illustrate the theoretical results achieved.
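
A minimal sketch of sequential minor component extraction via Rayleigh-quotient descent with deflation; this illustrates the "sequential extraction" idea for a known covariance matrix and is not the paper's DDT-analyzed update rule:

```python
import numpy as np

def minor_components(C, n_components, eta=0.01, iters=5000, seed=0):
    """Sequentially extract minor components (smallest-eigenvalue directions)
    of a covariance matrix C."""
    rng = np.random.default_rng(seed)
    comps = []
    for _ in range(n_components):
        w = rng.standard_normal(C.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            w -= eta * (C @ w - (w @ C @ w) * w)  # descend Rayleigh quotient
            w /= np.linalg.norm(w)                # renormalize for stability
        comps.append(w)
        C = C + np.trace(C) * np.outer(w, w)      # deflate: lift this direction
    return np.array(comps)
```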

Journal ArticleDOI
TL;DR: Experimental results show that the 2PGA outperforms the SPGA very reliably without increasing the number of fitness function evaluations.
Abstract: In this paper, an analysis of a hierarchical two-population genetic algorithm (2PGA) is presented. Our hierarchical 2PGA consists of two populations, each composed of similarly fit chromosomes. The smaller population, i.e. the elite population, consists of the best chromosomes, whereas the larger population contains less fit chromosomes. The populations have different characteristics, such as size and mutation probability, based on the fitness of the chromosomes in these populations. The performance of our 2PGA is compared to that of a single-population genetic algorithm (SPGA). Because the 2PGA has multiple parameters, the significance and the effect of the parameters are also studied. Experimental results show that the 2PGA outperforms the SPGA very reliably without increasing the number of fitness function evaluations.
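
A minimal sketch of the two-population partition (the elite fraction and per-population mutation rates are illustrative, not the paper's settings):

```python
def split_populations(pop, fitness, elite_frac=0.2):
    """Partition chromosomes into a small elite population and a larger main
    population by fitness, each paired with its own mutation probability."""
    ranked = sorted(pop, key=fitness)                  # minimization
    cut = max(1, int(elite_frac * len(ranked)))
    elite, main = ranked[:cut], ranked[cut:]
    return (elite, 0.01), (main, 0.10)                 # (population, mutation p)
```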

Journal ArticleDOI
TL;DR: A novel architecture for performing color image enhancement using a machine learning algorithm called Ratio Rule is proposed in this paper, and it is observed that the system with parallel pipelined architectures is able to achieve 147.3 million outputs per second (MOPS), or equivalently 57.9 billion operations per second, on Xilinx's Virtex II XC2V2000-4ff896 FPGA.
Abstract: A novel architecture for performing color image enhancement using a machine learning algorithm called Ratio Rule is proposed in this paper. The approach promotes log-domain computation to eliminate all multiplications and divisions, utilizing approximation techniques for efficient estimation of the log2 and inverse-log2. A new quadrant-symmetric architecture is also presented to provide a very high throughput rate for the homomorphic filters which are part of the pixel intensity enhancement across RGB components in the system. The pipelined design of the filter features flexibility in reloading a wide range of kernels for different frequency responses. A new approach for the design of the uniform filters is also presented to reduce the processing element arrays (PEAs) from W PEAs to 2 PEAs for a W×W window. This new concept is applied to assist in training the synaptic weights of the neural network for color balancing, to restore the intensity-enhanced image to the natural color of the original image. It is observed that the system with parallel pipelined architectures is able to achieve 147.3 million outputs per second (MOPS), or equivalently 57.9 billion operations per second, on Xilinx's Virtex II XC2V2000-4ff896 FPGA at a clock frequency of 147.3 MHz.
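
The log-domain trick rests on cheap log2/inverse-log2 estimates that turn multiplications into additions. A software sketch of one classic such approximation (Mitchell's method; whether the paper uses exactly this scheme is an assumption):

```python
def log2_approx(x: int) -> float:
    """Mitchell-style log2 approximation: the integer part comes from the
    leading-one position, the remaining bits are used directly as the
    fractional part (piecewise-linear, worst-case error about 0.086)."""
    assert x > 0
    k = x.bit_length() - 1          # index of the leading one
    return k + (x - (1 << k)) / (1 << k)

# In the log domain a multiplication becomes an addition:
# x * y ≈ 2 ** (log2_approx(x) + log2_approx(y))
```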

Journal ArticleDOI
TL;DR: A classification approach based on evolutionary neural networks (CABEN) is presented, which establishes classifiers as a group of three-layer feed-forward neural networks and achieves better classification precision than Bayesian classifiers and decision trees.
Abstract: Classification is important in data mining and machine learning. In this paper, a classification approach based on evolutionary neural networks (CABEN) is presented, which establishes classifiers as a group of three-layer feed-forward neural networks. The neural networks are trained by an improved algorithm synthesizing a modified Evolutionary Strategy and the Levenberg-Marquardt optimization method. The class label of the identifying data is first evaluated by each neural network, and the final classification result is obtained according to the absolute-majority-voting rule. Experimental results show that the algorithm is effective for classification and achieves better classification precision than Bayesian classifiers and decision trees, especially for complex classification problems with many classes.
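
A minimal sketch of the absolute-majority-voting rule named in the abstract (treating "no absolute majority" as a rejection is an assumption, not stated in the abstract):

```python
from collections import Counter

def absolute_majority_vote(predictions):
    """Combine per-network class labels: return the class chosen by more
    than half of the networks, else None."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label if votes > len(predictions) / 2 else None

# absolute_majority_vote(["A", "A", "B"]) -> "A"
# absolute_majority_vote(["A", "B", "C"]) -> None
```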

Journal ArticleDOI
TL;DR: The results indicate that it is possible to identify representative web pages in a web site and, in this way, improve the site's text content.
Abstract: We introduce a method for improving web site content through the identification of its most representative web pages. The process begins with the transformation of the web page text content into feature vectors using the vector space model for documents. Next, a Self-Organizing Feature Map (SOFM) receives these vectors as input, generating a set of clusters whose centroids contain the most representative text content for a topic in the site. In the web pages' vectorial representation, the text content is transformed into a set of numeric values. Then, by operation of the SOFM, the clusters' contents are vectors whose relation with the web site pages is not clear. By applying a Reverse Cluster Analysis (RCA), it is possible to identify which pages are represented in each cluster. The RCA consists in comparing the vectors in each cluster with the pages' vector representations. Next, the pages whose vectorial representation is near the cluster's centroid are extracted. This approach was tested on a real web site in order to show its effectiveness. The results indicate that it is possible to identify representative web pages in a web site and, in this way, improve the site's text content.
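
A minimal sketch of the RCA step, assuming cosine similarity between page vectors and SOFM centroids (the similarity measure and top_k are illustrative choices):

```python
import numpy as np

def reverse_cluster_analysis(page_vectors, centroids, top_k=3):
    """For each SOFM centroid, recover the top_k pages whose vector-space
    representations lie nearest to it. Returns page indices per cluster."""
    P = page_vectors / np.linalg.norm(page_vectors, axis=1, keepdims=True)
    C = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return np.argsort(-(C @ P.T), axis=1)[:, :top_k]
```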

Journal ArticleDOI
TL;DR: A combination of Multi-Objective Evolutionary Algorithms (MOEAs) with Symbolic Techniques (STs), such as SAT solvers known from digital hardware verification, is proposed to solve Multi-objective Combinatorial Optimization Problems (MCOPs).
Abstract: Solving Multi-objective Combinatorial Optimization Problems (MCOPs) is often a twofold problem: Firstly, the feasible region has to be identified in order to, secondly, improve the set of non-dominated solutions. In particular, problems where the construction of a single feasible solution is NP-complete are most challenging. In the present paper, we will propose a combination of Multi-Objective Evolutionary Algorithms (MOEAs) with Symbolic Techniques (STs) to solve this problem. Different Symbolic Techniques, such as Binary Decision Diagrams (BDDs), Multi-valued Decision Diagrams (MDDs), and SAT solvers as known from digital hardware verification will be considered in our methodology. Experimental results from the area of automatic design space exploration of embedded systems illustrate the benefits of our proposed approach. As a key result, the integration of STs in MOEAs is particularly useful in the presence of large search spaces containing only few feasible solutions.

Journal ArticleDOI
TL;DR: A hybrid evolutionary approach combining a steady-state genetic algorithm and a greedy heuristic for the maximum weight clique problem is proposed; the genetic algorithm generates cliques that are then extended into maximum weight cliques by the heuristic.
Abstract: In this paper we propose a hybrid evolutionary approach combining a steady-state genetic algorithm and a greedy heuristic for the maximum weight clique problem. The genetic algorithm generates cliques that are then extended into maximum weight cliques by the heuristic. Tests on a variety of benchmark problem instances demonstrate the effectiveness of our approach.
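
A minimal sketch of the greedy extension step (illustrative; assumes a non-empty starting clique and a dict-of-neighbour-sets graph representation):

```python
def greedy_extend(clique, graph, weight):
    """Greedily extend a GA-produced clique into a maximal one by repeatedly
    adding the heaviest compatible vertex.
    graph: dict mapping each vertex to its set of neighbours."""
    clique = set(clique)
    candidates = set.intersection(*(graph[v] for v in clique)) - clique
    while candidates:
        v = max(candidates, key=weight)   # heaviest admissible vertex
        clique.add(v)
        candidates &= graph[v]            # remaining picks must neighbour v
    return clique
```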

Journal ArticleDOI
TL;DR: The results demonstrate the great potential of using computational intelligence techniques, as an alternative to discriminant analysis, in addressing important economics problems such as bankruptcy prediction.
Abstract: In this paper we apply several computational intelligence techniques to the problem of bankruptcy prediction of medium-sized private companies. Financial data was obtained from Diana, a large database containing financial statements of French companies. Classification accuracy is evaluated for Linear Genetic Programs (LGPs), Classification and Regression Trees (CART), TreeNet, Random Forests, a Multilayer Perceptron (using Backpropagation), Hidden Layer Learning Vector Quantization, and several gradient descent methods, conjugate gradient methods, the Levenberg-Marquardt algorithm (LM), the Resilient Backpropagation algorithm (Rprop), and the One Step Secant method. We analyze two datasets, one balanced and the other unbalanced. TreeNet has the best accuracy on the unbalanced dataset and LGPs perform best on the balanced dataset. Scaled Conjugate Gradient performs best among the neural network training functions used for the balanced dataset, and Resilient Backpropagation performs best among the training functions used for the unbalanced dataset. Our results demonstrate the great potential of using computational intelligence techniques, as an alternative to discriminant analysis, in addressing important economics problems such as bankruptcy prediction.

Journal ArticleDOI
TL;DR: Experimental results show that the ANN approach is a promising method for forecasting successful ERP implementation, and this study investigated the usefulness of the ANN model in forecasting success when implementing Enterprise Resource Planning (ERP) systems.
Abstract: Artificial Neural Network (ANN) is widely used in business forecasting. ANN is a powerful forecasting tool. It is suitable for solving complex problems. Recently, ANN has been applied to a wide variety of business decision-making problems, such as bankruptcy forecasting, customer churning prediction, stock price forecasting, business process innovations, and systems development. In this study, we investigated the usefulness of the ANN model in forecasting success when implementing Enterprise Resource Planning (ERP) systems. We compared the performance of three different models: ANN, Multivariable Discriminant Analysis (MDA), and Case-based Reasoning (CBR). Experimental results show that the ANN approach is a promising method for forecasting successful ERP implementation.


Journal ArticleDOI
TL;DR: An evolutionary algorithm has been developed for the design of a diesel engine combustion chamber in order to fulfill present-day and future regulations on pollutant emissions and greenhouse gases; the Pareto optimality criterion was applied.
Abstract: An evolutionary algorithm has been developed for the design of a diesel engine combustion chamber in order to fulfill present-day and future regulations on pollutant emissions and greenhouse gases. The competitive goals to be achieved in engine optimization are the reduction of emission levels (soot, NOx and HC) and the improvement of specific fuel consumption. They have been taken into account by using a multi-objective approach implemented in an optimization tool called HiPerGEO, which is characterized by a very small population and a reinitialization mechanism, combined with an external memory to store non-dominated solutions. The method was applied to the design of the combustion chamber profile and numerical simulations were performed with a modified version of the KIVA3V code to evaluate the fitness values of the solutions. The chamber profile was defined according to five geometrical parameters used as inputs to the optimization method. The output of the simulations in terms of emissions and IMEP was used to define four different objective functions. The search for the optimum was performed by applying the Pareto optimality criterion so that it is not bound to arbitrary weights assigned to each objective. At the end of the simulation, the user can choose from the final Pareto set the best compromise solution for different applications. The method allows the optimization with respect to different engine operating conditions, i.e. load and speed values. In the present investigation, four operating modes were considered and weights were assigned to them according to their importance in the reduction of emissions and fuel consumption. The use of a 3D simulation code to simulate the behavior of the engine with respect to four operating modes is a very time-expensive approach. To reduce the required computational time, which is prohibitive on a sequential machine, grid technologies were implemented in a grid portal named DESGrid.


Journal ArticleDOI
TL;DR: This study proposes an approach that combines Neural Network, clustering, and valuation technology to deal with the problem of long-period valuation; the data set comprises the MSCI 50 from the Taiwan stock market, and NOPLAT growth and company ROIC are the key factors for segregating data by SOM.
Abstract: Valuation technology is particularly applicable for investors who focus on long-run rather than short-term returns. Notably, a firm must have existed long enough for an accurate valuation to be possible. This study proposes an approach that combines Neural Network, clustering, and valuation technology to deal with the problem of long-period valuation. The data set comprises the MSCI 50 from the Taiwan stock market, and NOPLAT growth and company ROIC are the key factors for segregating data by SOM. A total of 10 elements from the Pro Forma and McKinsey DCF model are input to the back-propagation neural network. The market value is the output, and the shares outstanding then determine a reasonable stock price. An important feature is that the approach shortens the prediction period to one quarter. Investors can rely on the share value differences to form their medium-term investment strategy.



Journal ArticleDOI
TL;DR: A population-based algorithm for solving permutational optimization problems is presented; it can be applied to two classical strongly NP-hard scheduling problems: the single machine total weighted tardiness problem and the flow shop problem with goal function Cmax.
Abstract: In this paper we present a population-based algorithm for solving permutational optimization problems. It consists in testing feasible solutions which are local minima. This method is based on the following observation: if the same elements appear in some positions in several permutations which are local minima, then one can suppose that these elements can be in the same positions in the optimal solution. The presented properties and ideas can be applied to two classical strongly NP-hard scheduling problems: 1. the single machine total weighted tardiness problem, 2. the flow shop problem with goal function Cmax. Computational experiments on the benchmark instances from the OR-Library (3) are presented and compared with the results yielded by the best algorithms discussed in the literature. These results show that the proposed algorithm allows us to obtain the best known results for the benchmarks in a short time.
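
The observation at the heart of the method can be expressed compactly; a minimal sketch of extracting the common (and thus presumably fixable) positions from a sample of local minima:

```python
def common_positions(local_minima):
    """Positions where all sampled locally optimal permutations agree; by
    the paper's observation, these elements are good candidates to occupy
    the same positions in the optimal solution."""
    first, rest = local_minima[0], local_minima[1:]
    return {i: e for i, e in enumerate(first)
            if all(perm[i] == e for perm in rest)}

# common_positions([[3, 1, 2, 0], [3, 2, 1, 0], [3, 1, 2, 0]]) -> {0: 3, 3: 0}
```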



Journal ArticleDOI
TL;DR: An optimisation model based on differential equations is developed to explore and exploit the inherent optimal operation process of EP; the model is based on the characteristics of population evolution and uses two performance measures.
Abstract: Evolutionary algorithms are robust and adaptive. They have found a wide variety of applications solving optimisation and search problems. As one of the mainstream algorithms in evolutionary computation, evolutionary programming (EP) mainly uses real-valued parameters. This makes it very attractive for many engineering optimisation applications. In addition to these evolutionary characteristics, the more in-depth dynamic optimisation mechanisms of EP are investigated in this paper. An optimisation model based on differential equations is developed to explore and exploit the inherent optimal operation process of EP. The proposed model is based on the characteristics of population evolution and uses two performance measures: (i) the Population On-line Performance measure and (ii) the Population Off-line Performance measure. These two measures are used to quantify the dynamic population optimisation of EP and to form the foundation for the construction of the differential-equation-based optimisation model. The model is supported by rigorous theoretical and numerical analysis. A number of important conclusions and observations are presented in accordance with the analytical results.
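
For reference, the classical on-line and off-line performance measures (after De Jong) take the following form for a maximization problem with mean population fitness $\bar{f}(t)$ and best individual fitness $f_{\mathrm{best}}(t)$ at generation $t$; the paper's population-level definitions may differ in detail:

```latex
% On-line: average of the mean population fitness over all generations so far.
X_{\mathrm{on}}(T) = \frac{1}{T}\sum_{t=1}^{T}\bar{f}(t)
% Off-line: average of the best-so-far fitness values.
X_{\mathrm{off}}(T) = \frac{1}{T}\sum_{t=1}^{T} f^{*}(t),
\qquad f^{*}(t) = \max_{\tau \le t} f_{\mathrm{best}}(\tau)
```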