
Showing papers presented at "Intelligent Systems Design and Applications in 2010"


Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper proposes an adaptive resource allocation algorithm that adjusts the allocation based on updated information about the actual task executions, and shows that it performs particularly well when resource contention is fierce.
Abstract: In cloud computing, computational resources are provided to remote users in the form of leases. A cloud user can request multiple cloud services simultaneously, in which case parallel processing in the cloud system can improve performance. Applying parallel processing in cloud computing requires a mechanism to allocate resources and schedule the execution order of tasks. Furthermore, a resource allocation mechanism with preemptable task execution can increase the utilization of clouds. In this paper, we propose an adaptive resource allocation algorithm for cloud systems with preemptable tasks. Our algorithms adjust the resource allocation adaptively based on updated information about the actual task executions. The experimental results show that our algorithms perform particularly well in situations where resource contention is fierce.

131 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: The experimental results show that the proposed intrusion detection system is able to speed up the process of intrusion detection and to minimize the memory space and CPU time cost.
Abstract: An Intrusion Detection System (IDS) is an important and necessary component in ensuring network security and protecting network resources and infrastructures. In this paper, we introduce an intrusion detection system that uses Principal Component Analysis (PCA) with Support Vector Machines (SVMs) to select the optimum feature subset. We verify the effectiveness and feasibility of the proposed IDS through several experiments on the NSL-KDD dataset. A reduction process is used to decrease the number of features and thereby the complexity of the system. The experimental results show that the proposed system is able to speed up the process of intrusion detection and to minimize the memory space and CPU time cost.
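The PCA feature-reduction step described in this abstract can be sketched as follows. This is an illustrative outline, not the authors' implementation; the SVM that would then be trained on the reduced features is omitted, and the data here is random stand-in data with 41 features as in NSL-KDD.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature
    cov = np.cov(Xc, rowvar=False)              # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues, ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors as columns
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 41))   # stand-in for 41 NSL-KDD features
Z = pca_reduce(X, 8)
print(Z.shape)                   # (200, 8)
```

A classifier trained on `Z` instead of `X` sees far fewer dimensions, which is the source of the reported memory and CPU savings.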

130 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: Compares the performance of several classifiers provided in WEKA, such as Bayes, decision tree and classification rule learners, in classifying students' learning styles, and shows that the tree classifiers achieve accuracy above 91%.
Abstract: This paper compares the performance of several classifiers provided in WEKA, such as Bayes, decision tree and classification rule learners, in classifying students' learning styles. The students' preferences and behavior while using an e-learning system were observed and analyzed, and twenty attributes were selected to map onto the Felder-Silverman learning style model. There are four learning dimensions in the Felder-Silverman model, and this research combines the dimensions to map the students' characteristics into sixteen learning styles. A 10-fold cross-validation was used to evaluate the classifiers. Among the parameters observed are classification accuracy, the Kappa statistic, training errors and the time taken to build the model. The experiments showed that the tree classifiers achieve accuracy above 91%. The sizes of the trees and the numbers of leaves among the tree classifier techniques were also observed.
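The 10-fold cross-validation split and the Kappa statistic used in this evaluation are standard; WEKA computes both internally. The helpers below are illustrative stand-ins, not the paper's code.

```python
import random
from collections import Counter

def kfold_indices(n, k=10, seed=0):
    """Split indices 0..n-1 into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cohen_kappa(y_true, y_pred):
    """Agreement corrected for chance agreement."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed agreement
    ct, cp = Counter(y_true), Counter(y_pred)
    pe = sum(ct[c] * cp[c] for c in ct) / n ** 2           # chance agreement
    return (po - pe) / (1 - pe)

folds = kfold_indices(40, k=10)
print([len(f) for f in folds])                    # ten folds of 4 indices each
print(cohen_kappa([0, 0, 1, 1], [0, 0, 1, 0]))    # 0.5
```

Each fold serves once as the test set while the classifier is trained on the other nine, and the Kappa statistic discounts the accuracy a majority-class guesser would get by chance.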

76 citations



Proceedings ArticleDOI
01 Nov 2010
TL;DR: A classification model for the cognitive level of question items in examinations based on Bloom's taxonomy is proposed and it is illustrated that document frequency is the most effective feature reduction method because it maintains the classification precision while enhancing the convergence speed.
Abstract: We propose a classification model for the cognitive level of question items in examinations based on Bloom's taxonomy. The model implements the artificial neural network approach, which is trained using the scaled conjugate gradient learning algorithm. Several data preprocessing techniques such as word extraction, stop word removal, stemming, and vector representation are applied to a feature set and then the content of a question item is transformed into a numeric form called a feature vector. Because of the poor scalability of neural networks on high-dimension input spaces, several feature reduction methods were investigated to reduce the dimensionality of the feature space. The experimental results indicate that the proposed model can enhance the convergence speed. The results also illustrate that document frequency is the most effective feature reduction method because it maintains the classification precision while enhancing the convergence speed.
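Document frequency as a feature reduction method keeps only the terms that appear in the most documents. The sketch below is an illustrative stand-in for the authors' preprocessing pipeline (word extraction, stop-word removal and stemming are omitted, and the example documents are hypothetical).

```python
from collections import Counter

def df_select(docs, n_keep):
    """Keep the n_keep terms with the highest document frequency."""
    df = Counter()
    for doc in docs:
        df.update(set(doc.lower().split()))   # count each term once per document
    return [term for term, _ in df.most_common(n_keep)]

def to_vector(doc, vocab):
    """Binary bag-of-words feature vector over the reduced vocabulary."""
    terms = set(doc.lower().split())
    return [1 if t in terms else 0 for t in vocab]

docs = ["define the term algorithm",
        "explain why the algorithm terminates",
        "list the steps of the sorting algorithm"]
vocab = df_select(docs, 2)
print(sorted(vocab))                    # ['algorithm', 'the']
print(sum(to_vector(docs[1], vocab)))   # 2: both kept terms occur in doc 1
```

Shrinking the vocabulary this way shrinks the feature vectors fed to the neural network, which is what speeds up its convergence.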

39 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: An image segmentation scheme to segment 3D brain tumors from MRI images through clustering, using the K-means algorithm in conjunction with a connected component labeling algorithm to link similar clustered objects across all 2D slices.
Abstract: In recent years, human brain segmentation in three-dimensional magnetic resonance imaging (MRI) has gained a lot of importance in the field of biomedical image processing, since it is the main stage of automatic brain disease diagnosis. In this paper, we propose an image segmentation scheme to segment 3D brain tumors from MRI images through clustering. The clustering is achieved using the K-means algorithm in conjunction with a connected component labeling algorithm to link similar clustered objects across all 2D slices and then obtain the 3D segmented tissue using a patch object rendering process.
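The connected component labeling step can be sketched on a single binary 2D slice. The paper links clustered objects across all slices in 3D, so this 4-connected 2D flood fill is only an illustration of the labeling idea, not the authors' code.

```python
from collections import deque

def label_components(grid):
    """4-connected component labeling of a binary 2D grid via flood fill."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not labels[y][x]:
                current += 1                      # start a new component
                q = deque([(y, x)])
                labels[y][x] = current
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
labels, n = label_components(grid)
print(n)   # 3 connected components
```

In the 3D setting the same idea runs with 6-connectivity (up/down/left/right plus the adjacent slices), which is how clustered tumor pixels in neighbouring slices get linked into one 3D object.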

39 citations


Proceedings Article
28 Jun 2010
TL;DR: The analysis of multifunctionality of urban agriculture crossed with internal and external diagnosis of sustainability is an important tool to help urban decision-makers to consider agriculture in urban planning.
Abstract: The social, economic and environmental conditions of Urban Agriculture differ greatly between industrialized and developing countries, and some research models and operational tools may be similarly questioned by this agriculture. In three case studies, in Senegal, France and Madagascar, where agro-economic surveys were done in one-off or long-term research programs, we question the renewal of the representation of farming systems' diversity, the duality of the concepts of sustainability, and the analysis of the multifunctionality of Urban Agriculture according to different actors. The global activity system must be taken into account, given the frequency of external activities, to understand the agricultural production system itself. External sustainability, i.e. the vision urban planners and residents have of Urban Agriculture compared to other urban uses of land, is as important as the classical internal sustainability of farming systems in diagnosing the future of urban agriculture. The place urban planners give to agriculture in urban projects depends on the functions they or the inhabitants attribute to it. Thus, research must analyse these functions and clarify the hierarchy of functions made by the different stakeholders. The analysis of the multifunctionality of urban agriculture, crossed with internal and external diagnoses of sustainability, is an important tool to help urban decision-makers consider agriculture in urban planning.

38 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: A hand gesture detection and recognition system for Ethiopian Sign Language (ESL) is proposed; a Gabor Filter together with Principal Component Analysis (PCA) extracts features, and an Artificial Neural Network (ANN) recognizes the ESL signs from the extracted features and translates them into Amharic voice.
Abstract: Pattern recognition is a very challenging multidisciplinary research area attracting researchers and practitioners. Gesture recognition is a specialized pattern recognition task with the goal of interpreting human gestures via mathematical models. One application of gesture recognition is sign language recognition, sign language being the basic means of communication among deaf people. Since there is a lack of proficient sign language teachers at schools for the deaf, the teaching and learning process remains affected. A system is therefore required to overcome the communication barriers facing the deaf community. In this paper, a hand gesture detection and recognition system for Ethiopian Sign Language (ESL) is proposed. A Gabor Filter (GF) together with Principal Component Analysis (PCA) is used to extract features from digital images of hand gestures, while an Artificial Neural Network (ANN) is used to recognize the ESL signs from the extracted features and to translate them into Amharic voice. The experimental results show that the system achieves a recognition rate of 98.53%.

37 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: The result clearly showed that the best performance was obtained by the cluster network, followed by SOM and k-means, their predicting accuracy ranging from 62% to 78%.
Abstract: This paper aims to assess the effectiveness of three different clustering algorithms used to detect breast cancer recurrent events. The performance of a classical k-means algorithm is compared with a much more sophisticated Self-Organizing Map (SOM-Kohonen network) and a cluster network closely related to both k-means and SOM. The three clustering algorithms have been applied to a concrete breast cancer dataset, and the results clearly showed that the best performance was obtained by the cluster network, followed by SOM and k-means, their predicting accuracy ranging from 62% to 78%. Based on the patients' segmentation regarding the occurrence of recurrent events, new patients may be labeled according to their medical characteristics as developing recurrent events or not, thus supporting health professionals in making informed decisions.

37 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: The experimental results show the proposed method can robustly track a moving hand in real-life scenarios at 45 fps with an average tracking rate of 97.83%.
Abstract: This paper introduces a hand tracking system for unconstrained environments. The system consists of two main stages: initialization and tracking. In initialization, the hand region is first detected by combining motion and skin color pixels. A region of interest (ROI) is then created around the detected hand region. In the tracking stage, skin and motion pixels are scanned around the top, left and right corners of the ROI to detect the moving hand in consecutive video frames. These pixels are used to measure the ROI position and are fed into the measurement update of an Adaptive Kalman Filter (AKF). The process noise covariance and measurement noise covariance of the AKF are adjusted adaptively by applying a weighting factor based on an acceleration threshold value. The experimental results show the proposed method can robustly track a moving hand in real-life scenarios at 45 fps with an average tracking rate of 97.83%.
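The adaptive-Kalman-filter idea can be sketched in one dimension. This is a simplified illustration, not the authors' tracker: the paper tracks a 2D ROI and uses its own weighting scheme, whereas here position alone is measured and the threshold and weighting factor are assumed values.

```python
import numpy as np

def adaptive_kalman_track(zs, dt=1.0, accel_thresh=2.0):
    """1-D constant-velocity Kalman filter; the measurement noise is
    down-weighted when the apparent acceleration exceeds a threshold,
    so abrupt motion is followed more aggressively."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # only position is measured
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    Q, R = 0.01 * np.eye(2), 1.0
    prev_z, prev_v = zs[0], 0.0
    est = []
    for z in zs:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        meas_v = (z - prev_z) / dt           # crude velocity from measurements
        accel = abs(meas_v - prev_v) / dt    # apparent acceleration
        w = 10.0 if accel > accel_thresh else 1.0
        S = H @ P @ H.T + R / w              # innovation covariance
        K = (P @ H.T) / S                    # Kalman gain
        x = x + (K * (z - (H @ x)[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
        prev_z, prev_v = z, meas_v
        est.append(x[0])
    return est

zs = [float(t) + 0.1 * (-1) ** t for t in range(20)]  # noisy linear motion
est = adaptive_kalman_track(zs)
print(round(est[-1], 1))   # near the true final position of 19.0
```

Dividing R by the weighting factor makes the filter trust fresh measurements more during abrupt motion, which is the adaptive behaviour the abstract describes.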

35 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: A novel representation of recurrent artificial neural networks is proposed for non-linear Markovian and non-Markovian control problems, and results demonstrate that the network can generate neural architectures and parameters that solve these problems in substantially fewer evaluations than earlier neuroevolutionary techniques.
Abstract: A novel representation of recurrent artificial neural networks is proposed for non-linear Markovian and non-Markovian control problems. The network architecture is inspired by Cartesian Genetic Programming. The neural network attributes, namely weights, topology and functions, are encoded using Cartesian Genetic Programming. The proposed algorithm is applied to the standard benchmark control problem of double pole balancing, for both the Markovian and non-Markovian cases. Results demonstrate that the network can generate neural architectures and parameters that solve these problems in substantially fewer evaluations than earlier neuroevolutionary techniques. The power of the Recurrent Cartesian Genetic Programming Artificial Neural Network (RCGPANN) lies in its representation, which leads to a thorough evolutionary search producing generalized networks.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This article formulates the problem of fuser design as an optimization task and uses a neural approach as its solver; a taxonomy of such fusers is proposed and the main features of some of them are presented.
Abstract: Combining pattern classifiers is a promising direction in designing effective classifier systems. There are several approaches to collective decision-making; among them, voting methods, where the decision is a combination of the individual classifiers' outputs, are quite popular. This article focuses on the problem of designing a fuser which uses the continuous outputs of individual classifiers to make a decision. We formulate the problem of fuser design as an optimization task and use a neural approach as its solver. We propose a taxonomy of such fusers, and the main features of some of them are presented. The results of computer experiments carried out on benchmark datasets confirm the quality of the proposed concept.
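A fuser over continuous classifier outputs can be sketched as a weighted sum of class supports. The paper optimizes the weights with a neural approach; in this illustrative stand-in they are simply supplied by hand, and the support values are made up.

```python
def fuse(supports, weights):
    """Weighted fusion of continuous classifier outputs (class supports):
    the fused decision is the class with the largest weighted support."""
    n_classes = len(supports[0])
    fused = [sum(w * s[c] for w, s in zip(weights, supports))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

# three classifiers, two classes: one support vector per classifier
supports = [[0.60, 0.40], [0.30, 0.70], [0.45, 0.55]]
print(fuse(supports, [1/3, 1/3, 1/3]))   # 1: plain averaging picks class 1
print(fuse(supports, [0.8, 0.1, 0.1]))   # 0: trusting classifier 0 flips it
```

The two calls show why the weights matter: the fused decision changes with them, which is exactly what makes fuser design an optimization problem.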

Book ChapterDOI
01 Nov 2010
TL;DR: It is demonstrated that novelty search significantly outperforms fitness-based search in a deceptive barrier-avoidance task but does not provide an advantage in a swimming task, where a large unconstrained behavior space inhibits its efficiency.
Abstract: Evolutionary algorithms are a frequently used technique for designing the morphology and controller of a robot. However, a significant challenge for evolutionary algorithms is premature convergence to local optima. The recently proposed Novelty Search algorithm introduces the radical idea that premature convergence to local optima can be avoided by ignoring the original objective and searching for any novel behaviors instead. In this paper, we apply novelty search to the problem of body-brain co-evolution. We demonstrate that novelty search significantly outperforms fitness-based search in a deceptive barrier-avoidance task but does not provide an advantage in a swimming task, where a large unconstrained behavior space inhibits its efficiency. Thus, we show that the advantages of novelty search previously demonstrated in other domains can also be exploited in the more complex domain of body-brain co-evolution, provided that the task is deceptive and the behavior space is constrained.
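The novelty score at the heart of novelty search is typically the mean distance to the k nearest neighbours in behaviour space, computed against an archive of past behaviours; individuals are then selected by novelty instead of fitness. A minimal sketch (not the authors' code, and the behaviour vectors are made up):

```python
def novelty(behavior, archive, k=3):
    """Mean Euclidean distance to the k nearest neighbours in the archive."""
    dists = sorted(sum((a - b) ** 2 for a, b in zip(behavior, other)) ** 0.5
                   for other in archive)
    return sum(dists[:k]) / k

# archive of behaviour descriptors seen so far (e.g. final robot positions)
archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
# a behaviour far from everything in the archive scores as more novel
print(novelty((0.5, 0.5), archive) < novelty((9.0, 9.0), archive))  # True
```

Because selection pressure rewards unexplored behaviours rather than objective progress, the search cannot stall on a deceptive local optimum of the fitness function.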

Proceedings ArticleDOI
01 Nov 2010
TL;DR: Experimental results demonstrated that the GA-NN approach is able to produce better performance with fewer input features than the standard NN approach.
Abstract: This paper describes a method using image processing and a genetic algorithm-neural network (GA-NN) for automated Mycobacterium tuberculosis detection in tissues. The proposed method can be used to assist pathologists in tuberculosis (TB) diagnosis from tissue sections and to replace the conventional manual screening process, which is time-consuming and labour-intensive. The approach consists of image segmentation, feature extraction and identification. It uses Ziehl-Neelsen stained tissue slide images, which are acquired using a digital camera attached to a light microscope. To separate the tubercle bacilli from the background, moving k-means clustering using C-Y colour information is applied. Then, seven Hu moment invariants are extracted as features to represent the bacilli. Finally, based on the input features, a GA-NN approach is used to classify the bacilli into two classes: ‘true TB’ and ‘possible TB’. In this study, a genetic algorithm (GA) is applied to select significant input features for the neural network (NN). Experimental results demonstrated that the GA-NN approach is able to produce better performance with fewer input features than the standard NN approach.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: Artificial Neural Networks were employed to improve the prediction of the linear model, taking advantage of their nonlinear modeling capability, to overcome this limitation in real-data contexts.
Abstract: Linear regression and classification techniques are very common in statistical data analysis, but they are often able to extract only linear models from data, which can be a limitation in real-data contexts. The aim of this study is to build an innovative procedure to overcome this defect. Initially, a multiple linear regression analysis using the best-subset algorithm was performed to determine the variables that best predict the dependent variable. Based on the same selected variables, Artificial Neural Networks were employed to improve the prediction of the linear model, taking advantage of their nonlinear modeling capability. The linear and nonlinear models were compared on their classification (ROC curves) and prediction (cross-validation) tasks: the nonlinear model fitted the data better (36% vs. 10% variance explained for the nonlinear and linear models, respectively) and provided more reliable accuracy and misclassification rates (70% and 30% vs. 66% and 34%, respectively).

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This work presents a case study of a similarity relation based on a global similarity function between two objects; the function combines weights for each feature with local functions that calculate how similar the values of a given feature are.
Abstract: In this paper we propose a method to build similarity relations within extended Rough Set Theory. Similarity is estimated using ideas from granular computing and case-based reasoning. A new measure is introduced in order to compute the quality of the similarity relation. This work presents a case study of a similarity relation based on a global similarity function between two objects; this function combines weights for each feature with local functions that calculate how similar the values of a given feature are. The approach was tested on the function approximation problem, and promising results were obtained in several experiments.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: Experimental results show that BB-BC can produce good-quality solutions, comparable to some of the methods applied in the literature (with regard to Socha's benchmark instances).
Abstract: In this study, we present the Big Bang-Big Crunch (BB-BC) method for solving the post-enrolment course timetabling problem. The method is derived from one of the theories of the evolution of the universe in physics and astronomy. The BB-BC theory involves two phases: Big Bang and Big Crunch. The Big Bang phase feeds the Big Crunch phase with many inputs, and the Big Crunch phase is the shrinking destiny of the universe into a singularity. Generally, the BB-BC algorithm generates a population of random initial solutions in the Big Bang phase and shrinks those solutions to a single good-quality solution represented by a centre of mass. Experimental results show that BB-BC can produce good-quality solutions, comparable to some of the methods applied in the literature (with regard to Socha's benchmark instances).
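The bang/crunch loop described above can be sketched on a toy one-dimensional minimization. The inverse-fitness weighting and the shrinking bang radius below are common choices in BB-BC descriptions, not necessarily the exact ones used for the timetabling problem.

```python
import random

def bbbc_minimize(f, lo, hi, pop=50, iters=40, seed=1):
    """Big Bang-Big Crunch: a random 'bang' of candidates around the current
    centre, then a fitness-weighted 'crunch' back to a new centre of mass."""
    rng = random.Random(seed)
    centre = rng.uniform(lo, hi)
    for t in range(1, iters + 1):
        radius = (hi - lo) / t                      # bang radius shrinks over time
        xs = [min(hi, max(lo, centre + rng.uniform(-radius, radius)))
              for _ in range(pop)]
        ws = [1.0 / (f(x) + 1e-12) for x in xs]     # better fitness -> more mass
        centre = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    return centre

best = bbbc_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(round(best, 2))   # close to the optimum at 3.0
```

For timetabling, each candidate would be an assignment of events to rooms and timeslots rather than a real number, but the same bang/crunch alternation drives the search.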

Proceedings ArticleDOI
01 Nov 2010
TL;DR: The new proposed adaptive mathematical morphological filter gives promising results for significantly suppressing speckle noise and preserving the potential targets in Synthetic Aperture Radar images.
Abstract: Speckle noise is one of the most critical disturbances degrading the quality of coherent Synthetic Aperture Radar (SAR) images. Before using SAR images in automatic target detection and recognition, the first step is to reduce the effect of speckle noise. Several adaptive and non-adaptive filters are widely used for despeckling SAR images. In this paper, an adaptive mathematical morphological filter is proposed to reduce the speckle noise in SAR images. The new filter's performance is compared with a number of despeckling filters with different parameters. For performance measurement, several parameters were evaluated to test the filter's ability to attenuate speckle noise while keeping target information. The experimental results show that the proposed morphological filter gives promising results, significantly suppressing speckle noise while preserving potential targets.
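A basic (non-adaptive) morphological despeckling operation, grey-level opening, can be sketched as follows; the paper's filter is adaptive, so this only illustrates the underlying operation on a tiny made-up image.

```python
def window_op(img, op):
    """Apply min (erosion) or max (dilation) over each 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = op(vals)
    return out

def opening(img):
    """Grey-level opening = erosion then dilation; flattens bright speckle."""
    return window_op(window_op(img, min), max)

img = [[10, 10, 10, 10],
       [10, 90, 10, 10],   # isolated bright speckle at (1, 1)
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
print(opening(img)[1][1])  # 10: the speckle spike is suppressed
```

Isolated bright pixels vanish because erosion removes anything smaller than the structuring element, while extended bright regions (potential targets) survive the subsequent dilation.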

Proceedings ArticleDOI
01 Nov 2010
TL;DR: The class of weighted Kemeny distances on weak orders is introduced for taking into account where the disagreements occur, and the properties of the associated consensus measures are analyzed.
Abstract: In this paper we analyze the consensus in groups of decision makers that rank alternatives by means of weak orders. We have introduced the class of weighted Kemeny distances on weak orders for taking into account where the disagreements occur, and we have analyzed the properties of the associated consensus measures.
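One plausible instantiation of a weighted Kemeny distance weights each pairwise disagreement by the best (smallest) rank involved, so disagreements near the top of the rankings cost more. The weighting scheme below is an assumption for illustration, not necessarily the one defined in the paper.

```python
def sgn(a, b):
    return (a > b) - (a < b)

def weighted_kemeny(r1, r2, w):
    """Pairwise-disagreement distance between two weak orders given as rank
    vectors; each disagreeing pair is weighted by the best rank involved."""
    n = len(r1)
    d = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            diff = abs(sgn(r1[i], r1[j]) - sgn(r2[i], r2[j]))
            d += w[min(r1[i], r1[j], r2[i], r2[j]) - 1] * diff / 2
    return d

w = [3.0, 2.0, 1.0, 1.0]         # top positions weigh more
a = [1, 2, 3, 4]                 # rank of each alternative
b = [2, 1, 3, 4]                 # same order with a swap at the top
c = [1, 2, 4, 3]                 # same order with a swap at the bottom
print(weighted_kemeny(a, b, w))  # 3.0
print(weighted_kemeny(a, c, w))  # 1.0
```

The two swaps flip exactly one pair each, yet the top-rank swap costs three times as much, which is the point of weighting: the consensus measure can distinguish *where* decision makers disagree.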

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper shows how this objective can be achieved by collecting data during the interaction of the user with the mobile device and using this context history to personalize the resource recommender by a genetic algorithm.
Abstract: Situation awareness is a promising approach to recommend to a mobile user the most suitable resources for a specific situation. However, determining the correct user situation is not a simple task since users have different habits that may affect the way in which the situations arise. Thus, an appropriate tuning aimed at adapting the situation recognizer to the specific user is desirable to make a resource recommender more effective. In this paper, we show how this objective can be achieved by collecting data during the interaction of the user with the mobile device and using this context history to personalize the resource recommender by a genetic algorithm. To describe our approach, we adopt a recently proposed resource recommender which exploits fuzzy linguistic variables to manage the inherent vagueness of some contextual parameters. Experimental results on a real business case show that the responsiveness and modeling capabilities of the recommender increase, thus validating the proposed approach.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper proposes an algorithm that first learns a description mixture for the first video frames and then uses these results as a starting point for the analysis of subsequent frames, applying it to a video sequence and showing its effectiveness for real-time tracking of multiple moving objects.
Abstract: Gaussian mixture models are widely used for video segmentation. However, the main difficulty lies in choosing the best model complexity. Highly complex models can describe the scene accurately, but they come with high computational requirements; low-complexity models promote segmentation speed, with the drawback of a less exhaustive description. In this paper we propose an algorithm that first learns a description mixture for the first video frames and then uses these results as a starting point for the analysis of the subsequent frames. We apply it to a video sequence and show its effectiveness for real-time tracking of multiple moving objects. Moreover, we integrate this procedure into a foreground/background subtraction statistical framework. We compare our procedure against state-of-the-art alternatives and show both its initialization efficacy and its improved segmentation performance.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: The use of GP is explored to develop a software cost estimation model utilizing both the developed lines of code and the methodology used during development, and an application estimating the effort for some NASA software projects is introduced.
Abstract: There is still an urgent need for a mathematical model that can provide an accurate relationship between software project effort/cost and the cost drivers. A powerful algorithm that can optimize such a relationship by developing a mathematical relationship between model variables is urgently needed. In this paper, we explore the use of Genetic Programming (GP) to develop a software cost estimation model utilizing both the developed lines of code and the methodology used during development. An application estimating the effort for some NASA software projects is introduced. The performance of the developed GP-based model was tested and compared to known models in the literature. The developed GP model was able to provide good estimation capabilities compared to the other models.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: A Genetic Programming-based feature extraction method driven by Rough Set Theory is presented to help visualize the data in a bidimensional graph, to better understand how the presence of overlapping and data fractures affect classification performance.
Abstract: The classification of imbalanced data is a well-studied topic in data mining. However, there is still a lack of understanding of the factors that make the problem difficult. In this work, we study the two main reasons that make the classification of imbalanced datasets complex: overlapping and data fracture. We present a Genetic Programming-based feature extraction method driven by Rough Set Theory to help visualize the data in a bidimensional graph, to better understand how the presence of overlapping and data fractures affect classification performance.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: The main goals in the design of the protocol were to reduce the routing overhead, response time and end-to-end delay, and to increase performance.
Abstract: In this paper, we present a modified on-demand routing algorithm for mobile ad-hoc networks (MANETs). The proposed algorithm is based on both the standard Ad-hoc On-demand Distance Vector (AODV) protocol and ant colony optimization. The modified routing protocol is highly adaptive, efficient and scalable. The main goals in the design of the protocol were to reduce the routing overhead, response time and end-to-end delay, and to increase performance. We refer to the new modified protocol as the Multi-Route AODV Ant routing algorithm (MRAA).

Proceedings ArticleDOI
01 Nov 2010
TL;DR: A comparison between variations of gradient descent and Genetic Algorithm (GA) based ANN training, with special emphasis on the developed algorithm and comparison methodology, shows that the overall classification error percentage of the GA family is slightly better than that of gradient descent on the cancer dataset.
Abstract: One of the major issues in Artificial Neural Network (ANN) design is the proper adjustment of the network weights. There have been a number of studies comparing the performance of evolutionary and gradient-based ANN learning, but their results sometimes conflict with each other even though the same standard datasets were used. Motivated by this finding, the main objective of this paper is to make another comparison between variations of gradient descent and Genetic Algorithm (GA) based ANN training, with special emphasis on the developed algorithm and comparison methodology. In addition, the effect of the crossover operation on GA training is investigated. The comparison is done using the cancer and diabetes benchmark datasets. The results show that the overall classification error percentage of the GA family is slightly better than that of gradient descent on the cancer dataset. On the other hand, gradient descent is much better than GA on diabetes.
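GA-based weight adjustment can be sketched with a mutation-only, elitist GA on a tiny network; the paper also studies crossover, which is omitted here, and all population sizes, rates and the XOR task below are illustrative choices rather than the paper's setup.

```python
import math
import random

XOR = [(0, 0, -1), (0, 1, 1), (1, 0, 1), (1, 1, -1)]   # toy training set

def net(ws, x1, x2):
    """Tiny 2-2-1 tanh network; ws holds its 9 weights and biases."""
    h1 = math.tanh(ws[0] * x1 + ws[1] * x2 + ws[2])
    h2 = math.tanh(ws[3] * x1 + ws[4] * x2 + ws[5])
    return math.tanh(ws[6] * h1 + ws[7] * h2 + ws[8])

def mse(ws):
    return sum((net(ws, a, b) - t) ** 2 for a, b, t in XOR) / len(XOR)

def ga_train(pop=30, gens=60, seed=3):
    """Evolve weight vectors: keep the best third, mutate them to refill."""
    rng = random.Random(seed)
    P = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=mse)
        elite = P[: pop // 3]                 # elitism: best never lost
        P = elite + [[w + rng.gauss(0, 0.3) for w in rng.choice(elite)]
                     for _ in range(pop - len(elite))]
    return min(P, key=mse)

best = ga_train()
print(round(mse(best), 3))   # well below a random network's error
```

Unlike gradient descent, the GA needs no error derivatives, which is why the two families can be compared head-to-head purely on the resulting classification error.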

Proceedings Article
28 Jun 2010
TL;DR: In this paper, the authors investigate the appropriate approaches for the development and extension of organic farming, give an overview of the process of organic farming development in the cases of West Java and North Sumatra, and investigate the importance of joint marketing of organic produce.
Abstract: By increasing rice production significantly, the green revolution has been the most remarkable technology in Asian countries. However, it has also had negative impacts on human health and the environment, such as pesticide residues and land degradation. Entering the 21st century, people's awareness of the environment and nature has increased, and a “back to nature” lifestyle has emerged. Therefore, organic farming, which does not use chemo-synthetic inputs, has become one of the alternatives; by maintaining harmony with nature, it can be a means to achieve sustainable agriculture. The objectives of this paper are as follows: (1) to investigate the appropriate approaches for the development and extension of organic farming, (2) to give an overview of the process of organic farming development in the cases of West Java and North Sumatra, and (3) to investigate the importance of joint marketing of organic produce. A survey was conducted in August 2007 in North Sumatra and from May to June 2008 in West Java. Based on the study, environmentally friendly organic farming can contribute to higher farmers' incomes, and farmers can easily convert to organic farming as it is profitable to do so. However, it is not only a matter of production but also a matter of marketing or selling the organic produce. Hence, the joint marketing practice that has been implemented by farmer groups in West Java and North Sumatra can be one of the ways to ensure the viable marketing of organic produce.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This work proposes a decision support technology to minimize risks while choosing among competitive investment projects, providing two stages of investment projects' evaluation using two fuzzy-statistical methods.
Abstract: This work proposes a decision support technology to minimize risks when choosing among competing investment projects. The technology combines two fuzzy-statistical methods, providing two stages of investment project evaluation. At the first stage, a preliminary selection of projects with small risks is made on the basis of the expertons method [2],[3]. The second stage makes more precise decisions using the method of possibilistic discrimination analysis, a new method that generalizes fuzzy discrimination analysis [6]. The method is applied to the relatively small number of projects selected during the first stage in order to compare them and sort out high-quality projects; for the latter, recommendations to provide credit are made. The article provides calculation examples that illustrate the operation of the proposed technology.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: The results confirm that the ELM classifier is a promising candidate for improving Accuracy and Minimum Sensitivity.
Abstract: This paper studies the suitability of Extreme Learning Machines (ELM) for resolving bioinformatic and biomedical classification problems. In order to test their overall performance, an experimental study is presented based on five gene microarray datasets found in bioinformatic and biomedical domains. The Fast Correlation-Based Filter (FCBF) was applied in order to identify salient expression genes among the thousands of genes in microarray data that can directly contribute to determining the class membership of each pattern. The results confirm that the ELM classifier is a promising candidate for improving Accuracy and Minimum Sensitivity.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: A simple method based on a manually selected list of 65 pharmaceutical discriminating words is used to label spam training tweets, and preliminary experimental results show that the J48 decision tree classifier outperforms the Naive Bayes algorithm.
Abstract: This paper presents a method of applying text mining techniques and data mining tools for pharmaceutical spam detection on Twitter data. A simple method based on a manually selected list of 65 pharmaceutical discriminating words is used to label spam training tweets. Preliminary experimental results show that the J48 decision tree classifier outperforms the Naive Bayes algorithm.
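The word-list labeling step can be sketched directly; the five words below are a hypothetical miniature of the paper's 65-word list, used here only to show the mechanism.

```python
# Hypothetical miniature of the paper's 65-word discriminating list.
PHARMA_WORDS = {"viagra", "cialis", "pharmacy", "pills", "prescription"}

def label_tweet(text):
    """Label a tweet as spam when it contains any discriminating word."""
    tokens = set(text.lower().split())
    return "spam" if tokens & PHARMA_WORDS else "ham"

print(label_tweet("Cheap pills no prescription needed!!!"))  # spam
print(label_tweet("Great talk on data mining today"))        # ham
```

Tweets labeled this way form the training set on which the J48 and Naive Bayes classifiers are then compared.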

Proceedings ArticleDOI
01 Nov 2010
TL;DR: Results show that some methods are able to attain success rates close to 70% when compared to an expert neurologist, and these are promising results that, together with an analysis via Self-Organizing Maps and ant-based clustering, may help to improve the feature extraction and contribute to a better representation of the different classes' characteristics.
Abstract: This paper reports the investigations and experimental procedures conducted in designing an automatic sleep classification tool based only on features extracted with wavelets from EEG, EMG and EOG (electroencephalogram, electromyogram and electrooculogram) signals, without any visual aid or context-based evaluation. Real data collected from infants was processed and classified by several traditional and bio-inspired heuristics. Preliminary results show that some methods can attain success rates close to 70% when compared to an expert neurologist. Although still not sufficient to implement a reliable sleep classifier, these are promising results that, together with an analysis via Self-Organizing Maps and ant-based clustering, may help to improve the feature extraction and contribute to a better representation of the different classes' characteristics.