
Showing papers in "IEEE Computational Intelligence Magazine in 2015"


Journal ArticleDOI
TL;DR: In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.
Abstract: The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.

640 citations


Journal ArticleDOI
TL;DR: The general architecture of locally connected ELM is studied, showing that: 1) ELM theories are naturally valid for local connections, thus introducing local receptive fields to the input layer; 2) each hidden node in ELM can be a combination of several hidden nodes (a subnetwork), which is also consistent with ELM theory.
Abstract: Extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks (SLFNs), provides efficient unified learning solutions for feature learning, clustering, regression, and classification. Contrary to the common understanding that the hidden neurons of neural networks must be iteratively adjusted during the training stage, ELM theories show that hidden neurons are important but need not be iteratively tuned. In fact, all the parameters of the hidden nodes can be independent of the training samples and randomly generated according to any continuous probability distribution, and the resulting ELM networks still satisfy the universal approximation and classification capabilities. The fully connected ELM architecture has been extensively studied; ELM with local connections, however, has not yet attracted much research attention. This paper studies the general architecture of locally connected ELM, showing that: 1) ELM theories are naturally valid for local connections, thus introducing local receptive fields to the input layer; and 2) each hidden node in ELM can be a combination of several hidden nodes (a subnetwork), which is also consistent with ELM theories. ELM theories may thus shed light on research into different local receptive fields, including true biological receptive fields whose exact shapes and formulas are still unknown. As a specific example of such general architectures, random convolutional nodes and a pooling structure are implemented in this paper. Experimental results on the NORB dataset, a benchmark for object recognition, show that compared with conventional deep learning solutions, the proposed local receptive fields based ELM (ELM-LRF) reduces the error rate from 6.5% to 2.7% and increases the learning speed by up to 200 times.

321 citations
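The core ELM recipe that the abstract builds on — random, untuned hidden nodes plus an analytically solved output layer — can be sketched in a few lines. The layer sizes, the tanh activation, and the XOR toy data below are illustrative choices, not taken from the paper:

```python
import numpy as np

def elm_train(X, Y, n_hidden=20, seed=0):
    """Basic fully connected ELM: hidden parameters are random and never
    tuned; only the output weights are solved, by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer outputs
    beta = np.linalg.pinv(H) @ Y                 # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy check: a random-hidden-layer network fits XOR via the pseudoinverse
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta)
```

Because the hidden layer is never touched after initialization, training cost is a single least-squares solve, which is where the reported speed-ups over iterative deep learning come from.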


Journal ArticleDOI
TL;DR: An algorithm that assigns contextual polarity to concepts in text and flows this polarity through the dependency arcs in order to assign a final polarity label to each sentence is presented, which enables a more efficient transformation of unstructured social data into structured information, readily interpretable by machines.
Abstract: Emulating the human brain is one of the core challenges of computational intelligence, which entails many key problems of artificial intelligence, including understanding human language, reasoning, and emotions. In this work, computational intelligence techniques are combined with common-sense computing and linguistics to analyze sentiment data flows, i.e., to automatically decode how humans express emotions and opinions via natural language. The increasing availability of social data is extremely beneficial for tasks such as branding, product positioning, corporate reputation management, and social media marketing. The elicitation of useful information from this huge amount of unstructured data, however, remains an open challenge. Although such data are easily accessible to humans, they are not suitable for automatic processing: machines are still unable to effectively and dynamically interpret the meaning associated with natural language text in very large, heterogeneous, noisy, and ambiguous environments such as the Web. We present a novel methodology that goes beyond mere word-level analysis of text and enables a more efficient transformation of unstructured social data into structured information, readily interpretable by machines. In particular, we describe a novel paradigm for real-time concept-level sentiment analysis that blends computational intelligence, linguistics, and common-sense computing in order to improve the accuracy of computationally expensive tasks such as polarity detection from big social data. The main novelty of the paper consists in an algorithm that assigns contextual polarity to concepts in text and flows this polarity through the dependency arcs in order to assign a final polarity label to each sentence. Analyzing how sentiment flows from concept to concept through dependency relations allows for a better understanding of the contextual role of each concept in text, to achieve a dynamic polarity inference that outperforms state-of-the-art statistical methods in terms of both accuracy and training time.

129 citations
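The paper's concept-level algorithm is far richer than this, but the basic idea of flowing polarity along dependency arcs can be caricatured as below. The arc format, the negation-flip rule, and the example polarities are all invented for illustration:

```python
def sentence_polarity(concept_polarity, dep_arcs):
    """Toy concept-level polarity flow: each dependency arc passes the
    dependent's polarity to its head; a 'neg' arc flips the head's
    polarity. The sentence label is the sign of the accumulated score."""
    scores = dict(concept_polarity)
    for head, dependent, relation in dep_arcs:
        if relation == "neg":
            scores[head] = -scores.get(head, 0.0)  # negation inverts polarity
        else:
            scores[head] = scores.get(head, 0.0) + scores.get(dependent, 0.0)
    total = sum(scores.values())
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

# "this phone is not bad": 'bad' alone is negative, but the neg arc flips it
label = sentence_polarity({"bad": -1.0}, [("bad", "not", "neg")])
```

The point of operating on arcs rather than a bag of words is visible even in this toy: the same lexicon entry for "bad" yields opposite sentence labels depending on its dependency context.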


Journal ArticleDOI
Yi Zuo1, Maoguo Gong1, Jiulin Zeng1, Lijia Ma1, Licheng Jiao1 
TL;DR: The task of personalized recommendation is modeled as a multi-objective optimization problem and a multiobjective recommendation model is proposed that maximizes two conflicting performance metrics termed as accuracy and diversity.
Abstract: Traditional recommendation techniques in recommender systems mainly focus on improving recommendation accuracy. However, personalized recommendation, which considers the multiple needs of users and can make both accurate and diverse recommendations, is more suitable for modern recommender systems. In this paper, the task of personalized recommendation is modeled as a multi-objective optimization problem, and a multi-objective recommendation model is proposed that maximizes two conflicting performance metrics, accuracy and diversity. Accuracy is evaluated by the probabilistic spreading method, while diversity is measured by recommendation coverage. The proposed MOEA-based recommendation method can simultaneously provide multiple recommendations for multiple users in a single run. Our experimental results demonstrate the effectiveness of the proposed algorithm, and comparison experiments indicate that it can make recommendations that are more diverse yet still accurate.

105 citations
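At the heart of any such multi-objective formulation is Pareto dominance between the two metrics. A minimal non-dominated filter — not the paper's MOEA, just the selection criterion such an algorithm optimizes toward — looks like:

```python
def pareto_front(points):
    """Keep the non-dominated (accuracy, diversity) pairs: a point is
    dominated when some other point is at least as good in both
    objectives (and is a different point)."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# three trade-off solutions survive; (0.4, 0.4) is dominated by (0.5, 0.5)
recs = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8), (0.4, 0.4)]
front = pareto_front(recs)
```

Returning the whole front in one run, rather than a single compromise solution, is what lets an MOEA hand different users differently balanced recommendation lists.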


Journal ArticleDOI
TL;DR: In this paper, the extended nearest neighbor (ENN) method is proposed to predict input patterns according to the maximum gain of intra-class coherence; it considers not only which samples are the nearest neighbors of the test sample, but also which samples count the test sample among their own nearest neighbors.
Abstract: This article introduces a new supervised classification method - the extended nearest neighbor (ENN) - that predicts input patterns according to the maximum gain of intra-class coherence. Unlike the classic k-nearest neighbor (KNN) method, in which only the nearest neighbors of a test sample are used to estimate a group membership, the ENN method makes a prediction in a "two-way communication" style: it considers not only who are the nearest neighbors of the test sample, but also who consider the test sample as their nearest neighbors. By exploiting the generalized class-wise statistics from all training data by iteratively assuming all the possible class memberships of a test sample, the ENN is able to learn from the global distribution, therefore improving pattern recognition performance and providing a powerful technique for a wide range of data analysis applications.

101 citations
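A direct (if naive, O(n²)-per-query) reading of the ENN rule can be coded as follows. The variable names and the exact normalization of the coherence statistic are our guesses at the paper's generalized class-wise statistic, so treat this as a sketch of the idea rather than the published algorithm:

```python
import numpy as np

def enn_predict(X, y, x_new, k=3):
    """ENN sketch: try each candidate label for x_new, recompute the
    class-wise k-NN coherence statistic over ALL samples, and pick the
    label that maximizes total intra-class coherence."""
    classes = np.unique(y)
    best_label, best_score = None, -np.inf
    for c in classes:
        Xa = np.vstack([X, x_new])      # add the test sample ...
        ya = np.append(y, c)            # ... with the assumed label c
        score = 0.0
        for cls in classes:
            idx = np.where(ya == cls)[0]
            coh = 0.0
            for i in idx:
                d = np.linalg.norm(Xa - Xa[i], axis=1)
                d[i] = np.inf                    # exclude the point itself
                nn = np.argsort(d)[:k]          # its k nearest neighbors
                coh += np.mean(ya[nn] == cls)   # fraction of same-class neighbors
            score += coh / len(idx)             # class-wise coherence statistic
        if score > best_score:
            best_label, best_score = c, score
    return best_label
```

Unlike plain k-NN, a wrong candidate label is penalized because the misplaced test sample drags down the coherence of the class it was assumed to join — the "two-way communication" the abstract describes.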


Journal ArticleDOI
TL;DR: The current survey highlights and classifies key research questions, the current state of the art, and open problems in cloud computing, and calls for new optimization models and solutions.
Abstract: Cloud computing is significantly reshaping the computing industry. Individuals and small organizations can benefit from using state-of-the-art services and infrastructure, while large companies are attracted by the flexibility and the speed with which they can obtain the services. Service providers compete to offer the most attractive conditions at the lowest prices. However, the environmental impact and legal aspects of cloud solutions pose additional challenges. Indeed, the new cloud-related techniques for resource virtualization and sharing and the corresponding service level agreements call for new optimization models and solutions. It is important for computational intelligence researchers to understand the novelties introduced by cloud computing. The current survey highlights and classifies key research questions, the current state of the art, and open problems.

94 citations


Journal ArticleDOI
TL;DR: The purpose of the paper is to compare the methods behind these CULMs, highlighting their features using concepts of vector spaces (i.e. basis functions and projections), which are easy to understand by the computational intelligence community.
Abstract: This paper surveys, in a tutorial fashion, the recent history of universal learning machines, starting with the multilayer perceptron. The big push in recent years has been on the design of universal learning machines using optimization methods that are linear in the parameters, such as the Echo State Network, the Extreme Learning Machine, and the Kernel Adaptive Filter. We call this class of learning machines convex universal learning machines, or CULMs. The purpose of the paper is to compare the methods behind these CULMs, highlighting their features using concepts of vector spaces (i.e., basis functions and projections) that are familiar to the computational intelligence community. We illustrate how two of the CULMs behave in a simple example, and we conclude that it is indeed practical to create universal mappers with convex adaptation, which is an improvement over backpropagation.

65 citations


Journal ArticleDOI
TL;DR: This special issue is dedicated to new trends in learning in the field of computational intelligence, where emerging techniques such as extreme learning machines (ELMs) and related fast solutions shed some light on how to deal effectively with computational bottlenecks.
Abstract: The articles in this special issue are dedicated to new trends in learning in the field of computational intelligence. Over the past few decades, conventional computational intelligence techniques have faced severe bottlenecks in algorithmic learning. Particularly in the areas of big-data computation, brain science, cognition, and reasoning, intensive human intervention and time-consuming trial-and-error effort are almost inevitable before any meaningful observations can be obtained. The recent development of emerging computational intelligence techniques, such as extreme learning machines (ELMs) and other fast solutions, sheds some light on how to deal effectively with these computational bottlenecks.

44 citations


Journal ArticleDOI
TL;DR: This paper describes the recognition of image patterns based on novel representation learning techniques by considering higher-level (meta-)representations of numerical data in a mathematical lattice of (Type-1) Intervals' Numbers, where an IN represents a distribution of image features including orthogonal moments.
Abstract: This paper describes the recognition of image patterns based on novel representation learning techniques that consider higher-level (meta-)representations of numerical data in a mathematical lattice. In particular, the interest here focuses on lattices of (Type-1) Intervals' Numbers (INs), where an IN represents a distribution of image features including orthogonal moments. A neural classifier, namely fuzzy lattice reasoning (flr) fuzzy-ARTMAP (FAM), or flrFAM for short, is described for learning distributions of INs; hence, Type-2 INs emerge. Four benchmark image pattern recognition applications are demonstrated. The results obtained by the proposed techniques compare well with those obtained by alternative methods from the literature. Furthermore, due to the isomorphism between the lattice of INs and the lattice of fuzzy numbers, the proposed techniques are straightforwardly applicable to Type-1 and/or Type-2 fuzzy systems. The far-reaching potential for deep learning in big data applications is also discussed.

43 citations


Journal ArticleDOI
TL;DR: A new kind of broker for cloud computing, whose business relies on outsourcing virtual machines (VMs) to its customers on an on-demand basis, at lower prices than those of the cloud providers.
Abstract: This article introduces a new kind of broker for cloud computing, whose business relies on outsourcing virtual machines (VMs) to its customers. More specifically, the broker owns a number of reserved instances of different VMs from several cloud providers and offers them to its customers on an on-demand basis, at lower prices than those of the cloud providers. The essence of the business resides in the large difference in price between on-demand and reserved VMs. We define the Virtual Machine Planning Problem, an optimization problem whose goal is to maximize the profit of the broker. We also propose a number of efficient smart heuristics (seven two-phase list scheduling heuristics and a reordering local search) that allocate a set of VM requests from customers to the available pre-booked ones so as to maximize the broker's earnings. We perform an experimental evaluation to analyze the profit and quality-of-service metrics of the resulting plans on a set of 400 problem instances that account for realistic workloads and scenarios, using real data from cloud providers.

34 citations


Journal ArticleDOI
TL;DR: Experiments show that T-WELMs achieve much better classification results and are at the same time faster in terms of both training time and further classification than both ELM models and other state-of-the-art methods in the field.
Abstract: Machine learning methods are becoming more and more popular in the field of computer-aided drug design. The specific data characteristic, including sparse, binary representation as well as noisy, imbalanced datasets, presents a challenging binary classification problem. Currently, two of the most successful models in such tasks are the Support Vector Machine (SVM) and Random Forest (RF). In this paper, we introduce a Weighted Tanimoto Extreme Learning Machine (T-WELM), an extremely simple and fast method for predicting chemical compound biological activity and possibly other data with discrete, binary representation. We show some theoretical properties of the proposed model including the ability to learn arbitrary sets of examples. Further analysis shows numerous advantages of T-WELM over SVMs, RFs and traditional Extreme Learning Machines (ELM) in this particular task. Experiments performed on 40 large datasets of thousands of chemical compounds show that T-WELMs achieve much better classification results and are at the same time faster in terms of both training time and further classification than both ELM models and other state-of-the-art methods in the field.
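A stripped-down sketch of the Tanimoto-kernel ELM idea — training fingerprints as hidden nodes, class-weighted least squares for the output layer. The node-selection and weighting scheme here are our simplifications for illustration, not the paper's exact T-WELM:

```python
import numpy as np

def tanimoto(A, B):
    """Pairwise Tanimoto similarity between rows of binary matrices."""
    inter = A @ B.T
    a = A.sum(axis=1)[:, None]
    b = B.sum(axis=1)[None, :]
    return inter / (a + b - inter + 1e-12)

def twelm_train(X, y, n_hidden=4, seed=0):
    """Hidden layer = Tanimoto similarity to randomly chosen training
    fingerprints; output weights solved by class-weighted least squares."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_hidden, replace=False)]
    H = tanimoto(X, centers)
    w = np.where(y == 1, 1.0 / (y == 1).sum(), 1.0 / (y == 0).sum())  # balance classes
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(H * sw[:, None], sw * (2.0 * y - 1.0), rcond=None)
    return centers, beta

def twelm_predict(X, centers, beta):
    return (tanimoto(X, centers) @ beta > 0).astype(int)

# toy fingerprints: actives set bits in the first half, inactives in the second
X = np.array([[1,1,0,0,0,0], [1,1,1,0,0,0], [0,1,1,0,0,0], [1,0,1,0,0,0],
              [0,0,0,1,1,0], [0,0,0,1,1,1], [0,0,0,0,1,1], [0,0,0,1,0,1]], float)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
centers, beta = twelm_train(X, y, n_hidden=8)
acc = (twelm_predict(X, centers, beta) == y).mean()
```

Because the Tanimoto similarity is computed directly on sparse binary fingerprints, no dense feature embedding is needed, which is what makes this style of model cheap on compound data.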

Journal ArticleDOI
TL;DR: Combining the two paradigms of Hebbian and LMS creates a new unsupervised learning algorithm that has practical engineering applications and provides insight into learning in living neural networks.
Abstract: Hebbian learning is widely accepted in the fields of psychology, neurology, and neurobiology. It is one of the fundamental premises of neuroscience. The LMS (least mean square) algorithm of Widrow and Hoff is the world's most widely used adaptive algorithm, fundamental in the fields of signal processing, control systems, pattern recognition, and artificial neural networks. These are very different learning paradigms. Hebbian learning is unsupervised. LMS learning is supervised. However, a form of LMS can be constructed to perform unsupervised learning and, as such, LMS can be used in a natural way to implement Hebbian learning. Combining the two paradigms creates a new unsupervised learning algorithm that has practical engineering applications and provides insight into learning in living neural networks. A fundamental question is, how does learning take place in living neural networks? "Nature's little secret," the learning algorithm practiced by nature at the neuron and synapse level, may well be the Hebbian-LMS algorithm.
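One way an LMS-style rule becomes unsupervised is to let the neuron manufacture its own error signal from its output, e = sgm(s) − γs, instead of taking it from a teacher. The sketch below follows that idea; the tanh sigmoid, the values of μ and γ, and the toy input are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def hebbian_lms_step(w, x, mu=0.1, gamma=0.5):
    """One unsupervised Hebbian-LMS-style update (sketch): the 'error' is
    built from the neuron's own output, sigmoid(s) - gamma*s, so no
    external teacher is needed, yet the weight increment is classic LMS."""
    s = w @ x                      # the neuron's summed input
    e = np.tanh(s) - gamma * s     # self-generated error signal
    return w + 2 * mu * e * x      # LMS-shaped weight update

# repeated presentation of one pattern drives the self-error toward zero
rng = np.random.default_rng(1)
w = rng.normal(size=4) * 0.1
x = np.array([1.0, -1.0, 0.5, 0.5])
for _ in range(200):
    w = hebbian_lms_step(w, x)
e_final = abs(np.tanh(w @ x) - 0.5 * (w @ x))
```

The update is Hebbian in character — it depends only on the input and the neuron's own activity — but the equilibrium it settles to is defined by an LMS-style error going to zero.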

Journal ArticleDOI
TL;DR: A Semi-Random Projection (SRP) framework is proposed, which takes the merit of random feature sampling of RP, but employs learning mechanism in the determination of the transformation matrix, and is applied to ELM to derive Partially Connected ELM (PC-ELM).
Abstract: Random Projection (RP) is a popular technique for dimensionality reduction because of its high computational efficiency. However, RP may not yield a highly discriminative low-dimensional space with the best pattern classification performance, since the random transformation matrix of RP is independent of the data. In this paper, we propose a Semi-Random Projection (SRP) framework, which keeps the random feature sampling of RP but employs a learning mechanism to determine the transformation matrix. One advantage of SRP is that it achieves a good balance between computational complexity and classification accuracy. Another is that multiple SRP modules can be stacked to form a deep learning architecture for compact and robust feature learning. In addition, based on the insight into the relationship between RP and the Extreme Learning Machine (ELM), SRP is applied to ELM to derive the Partially Connected ELM (PC-ELM). The hidden nodes of PC-ELM are more discriminative, and hence a smaller number of nodes is needed. Experiments on two real-world text corpora, i.e., 20 Newsgroups and Farms Ads, verify the effectiveness and efficiency of the proposed SRP. Experimental results also show that PC-ELM outperforms ELM for high-dimensional data.
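The split the abstract describes — random choice of which features each output dimension sees, learned choice of its weights — can be sketched as below. The ridge-regression-toward-labels step is our stand-in for whatever supervised criterion the paper actually optimizes:

```python
import numpy as np

def srp_transform(X, y, n_dims=5, subset_frac=0.5, lam=1e-2, seed=0):
    """Semi-Random Projection sketch: each output dimension, like RP,
    sees only a random subset of input features, but its weights are
    learned (here, ridge regression toward +/-1 labels), not random."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    m = max(1, int(subset_frac * d))
    T = np.zeros((d, n_dims))
    t = 2.0 * y - 1.0                          # +/-1 targets
    for j in range(n_dims):
        idx = rng.choice(d, m, replace=False)  # the 'random' part of SRP
        Xs = X[:, idx]
        w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(m), Xs.T @ t)  # the learned part
        T[idx, j] = w
    return T

# toy data: every feature carries the class signal, so each learned
# column should separate the two classes
rng = np.random.default_rng(42)
y = np.array([0] * 5 + [1] * 5)
X = y[:, None] + 0.1 * rng.normal(size=(10, 4))
Z = X @ srp_transform(X, y)
gap = Z[y == 1].sum(axis=1).mean() - Z[y == 0].sum(axis=1).mean()
```

Each column of the transformation is cheap to fit because it touches only a feature subset, which is the computational-cost/accuracy balance the abstract claims for SRP.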

Journal ArticleDOI
TL;DR: In this paper, the benefits of type-2 Fuzzy logic systems (FLSs) for the efforts to realize Ambient Intelligent Environments (AIEs) are discussed.
Abstract: Ambient Intelligence (AmI) is an emerging vision that aims to realize intelligent environments which are sensitive and responsive to users' needs and behaviors. This paper offers insight into the benefits that type-2 Fuzzy Logic Systems (FLSs) can provide in the effort to realize Ambient Intelligent Environments (AIEs). We introduce research results from the Scaleup project showing different type-2 FLS-based applications in AIEs, including intelligent machine vision systems, blending real and virtual realities over dispersed geographical areas, and allowing natural communication between the AIE and humans.

Journal ArticleDOI
TL;DR: The methodology is robust to noise and can learn abstract target categories; website classification accuracy surpasses 97% for the most important categories considered in this study.
Abstract: This paper presents a comprehensive methodology for general large-scale image-based classification tasks. It addresses the Big Data challenge in arbitrary image classification and more specifically, filtering of millions of websites with abstract target classes and high levels of label noise. Our approach uses local image features and their color descriptors to build image representations with the help of a modified k-NN algorithm. Image representations are refined into image and website class predictions by a two-stage classifier method suitable for a very large-scale real dataset. A modification of an Extreme Learning Machine is found to be a suitable classifier technique. The methodology is robust to noise and can learn abstract target categories; website classification accuracy surpasses 97% for the most important categories considered in this study.

Journal ArticleDOI
Yueming Wang1, Minlong Lu1, Zhaohui Wu1, Liwen Tian1, Kedi Xu1, Xiaoxiang Zheng1, Gang Pan1 
TL;DR: Inspired by the fact that humans usually give a series of stimuli to a rat robot, a closed-loop model is developed that issues a stimulus sequence automatically according to the state of the rat and the objects in front of it until the rat completes the motion successfully.
Abstract: A rat robot is a type of animal robot in which an animal is connected to a machine system via a brain-computer interface. Electrical stimuli generated by the machine system are delivered to the animal's brain to control its behavior. The sensory capacity and flexible motion ability of rat robots highlight their potential advantages over mechanical robots. However, most existing rat robots require a human to observe the environmental layout and guide navigation, which limits their applications. This work incorporates object detection algorithms into a rat robot system to enable it to find 'human-interesting' objects and then use these cues to guide its behavior and perform automatic navigation. A miniature camera mounted on the rat's back captures the scene in front of the rat. The video is transferred via a wireless module to a computer, where object detection/identification algorithms allow objects of interest to be found. Next, we make the rat robot perform a specific motion automatically in response to a detected object, such as turning left. A single stimulus does not allow the rat to perform a motion successfully. Inspired by the fact that humans usually give a series of stimuli to a rat robot, we develop a closed-loop model that issues a stimulus sequence automatically, according to the state of the rat and the objects in front of it, until the rat completes the motion successfully. Thus the rat robot, which we refer to as a rat cyborg, is able to move according to the detected objects without the need for manual operation. The object detection methods and the closed-loop stimulation model are evaluated in experiments, which demonstrate that our rat cyborg can accomplish human-specified navigation automatically.

Journal ArticleDOI
TL;DR: The articles in this special section examine new trends and developments in computational intelligence programs and applications.
Abstract: The articles in this special section examine new trends and developments in computational intelligence programs and applications.

Journal ArticleDOI
TL;DR: It is argued that the Bring Your Own Learner model signals a design shift in cloud-based machine learning infrastructure because it is capable of executing anyone's supervised machine learning algorithm.
Abstract: We introduce FCUBE, a cloud-based framework that enables machine learning researchers to contribute their learners to its community-shared repository. FCUBE exploits data parallelism, in lieu of algorithmic parallelization, to let its users tackle large data problems efficiently and automatically. It passes random subsets of data, generated via resampling, to multiple learners that it executes simultaneously, and then combines their model predictions with a simple fusion technique. It is an example of what we have named the Bring Your Own Learner model: it allows multiple machine learning researchers to contribute algorithms in a plug-and-play style. We contend that the Bring Your Own Learner model signals a design shift in cloud-based machine learning infrastructure because it is capable of executing anyone's supervised machine learning algorithm. We demonstrate FCUBE executing five different learners, contributed by three different machine learning groups, on a 100-node deployment on Amazon EC2, collectively solving a publicly available classification problem trained with 11 million exemplars from the Higgs dataset.
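The data-parallel "resample, train contributed learners, fuse" loop can be sketched as follows. The naive decision stump and the numeric toy data are hypothetical stand-ins for the contributed supervised learners the paper actually runs:

```python
import random

def fcube_ensemble(train, learner_factories, n_subsets=5, frac=0.6, seed=0):
    """FCUBE-style data parallelism sketch: each contributed learner is
    trained on a random resample of the data, and the trained models'
    predictions are fused with a simple majority vote."""
    rng = random.Random(seed)
    models = []
    for i in range(n_subsets):
        k = max(1, int(frac * len(train)))
        subset = [rng.choice(train) for _ in range(k)]        # resampled data
        fit = learner_factories[i % len(learner_factories)]   # plug-and-play learner
        models.append(fit(subset))
    def predict(x):
        votes = [m(x) for m in models]
        return max(set(votes), key=votes.count)               # majority-vote fusion
    return predict

# hypothetical contributed learner: a naive stump thresholding at the subset mean
def stump(subset):
    t = sum(x for x, _ in subset) / len(subset)
    return lambda x: int(x > t)

train = [(i / 9, int(i / 9 > 0.5)) for i in range(10)]
predict = fcube_ensemble(train, [stump])
```

The framework never needs to know how `stump` works internally — any callable that maps a data subset to a prediction function slots in, which is the "Bring Your Own Learner" point.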

Journal ArticleDOI
TL;DR: A novel ant colony optimization and tabu list approach for the discovery of gene-gene interactions in genome-wide association study data is proposed and tested on a number of diseases drawn from the large established database, the Wellcome Trust Case Control Consortium.
Abstract: In this paper, a novel ant colony optimization and tabu list approach for the discovery of gene-gene interactions in genome-wide association study data is proposed. The method is tested on a number of diseases drawn from a large established database, the Wellcome Trust Case Control Consortium, which contains hundreds of thousands of small DNA changes known as single nucleotide polymorphisms. To analyze full-scale genome-wide association study data, the standard ant colony optimization algorithm has been adapted to include tournament path selection, a subset-based approach, and a tabu list. These modifications, in addition to the use of a statistical test of significance of single nucleotide polymorphism interactions as a fitness function, greatly increase execution speed and permit the discovery of combinations of single nucleotide polymorphisms that can discriminate between cases and controls. The methodology is applied to several large-scale genome-wide association study disease datasets, namely inflammatory bowel disease, rheumatoid arthritis, type I diabetes and type II diabetes, to discover putative gene-gene interactions in reasonable time on modest hardware.

Journal ArticleDOI
TL;DR: PCC technology was developed and deployed for all Grand Slam tennis events and several major golf tournaments from 2013 to the present, decreasing wasted computing consumption by over 50%.
Abstract: Major golf and Grand Slam tennis tournaments such as the Australian Open, The Masters, Roland Garros, United States Golf Association (USGA) events, Wimbledon, and the United States Tennis Association (USTA) US Open provide real-time and historical sporting information to immerse a global fan base in the action. Each tournament provides real-time content, including streaming video, game statistics, scores, images, schedules of play, and text. Because of the games' popularity, some web servers are heavily visited while others are not; a method is therefore needed to autonomously provision servers so as to provide a smooth user experience. Predictive Cloud Computing (PCC) has been developed to provide smart allocation/deallocation of servers by combining ensembles of forecasts with predictive modeling to determine the future origin demand for web site content. PCC distributes processing through analytical pipelines that correlate streaming data, such as scores, media schedules, and player brackets, with a future-simulated tournament state to measure predicted demand spikes for content. Social data streamed from Twitter provides social sentiment and popularity features used within the predictive modeling, while data at rest, such as machine logs and web content, provides additional features for forecasting. While the duration of each tournament varies, the number of origin website requests ranges from 29,000 to 110,000 hits per minute. The PCC technology was developed and deployed for all Grand Slam tennis events and several major golf tournaments from 2013 to the present, and has decreased wasted computing consumption by over 50%. We propose a novel forecasting ensemble that includes residual, vector, historical, partial, adjusted, cubic, and quadratic forecasters, and we present several predictive models based on multiple regression as inputs to several of these forecasters. We conclude by empirically demonstrating that the predictive cloud technology is able to forecast the computing load on origin web servers for professional golf and tennis tournaments.

Journal ArticleDOI
TL;DR: The proposed missing-entry estimation method is more accurate and robust than existing algorithms, and experiments on several widely used image sequences demonstrate its effectiveness and feasibility.
Abstract: Missing data is a frequently encountered problem for structure-from-motion (SFM) where the 3D structure of an object is estimated based on 2D images. In this paper, an effective approach is proposed to deal with the missing-data estimation problem for small-size image sequences. In the proposed method, a set of sub-sequences is first extracted. Each sub-sequence is composed of the frame to be estimated and a part of the original sequence. In order to obtain diversified estimations, multiple weaker estimators are constructed by means of the column-space-fitting (CSF) algorithm. The various sub-sequences are in turn used as the inputs to the algorithm. As the non-missing entries are known, the estimation errors of these entries are computed so as to select weaker estimators with better estimation performances. Furthermore, a linear programming based weighting model is established to compute the weights for the selected weaker estimators. After the weighting coefficients are obtained, a linear weighting estimation which is used as the final estimation of the missing entries is computed based on the outputs of the weaker estimators. By applying the strategies of weaker-estimator selection and the linear programming weighting model, the proposed missing entry estimation method is more accurate and robust than the existing algorithms. Experimental results on several widely used image sequences demonstrate the effectiveness and feasibility of the proposed algorithm.

Journal ArticleDOI
TL;DR: The extendibility of Linear General Type-2 (LGT2) Fuzzy Logic based CWWs Framework is demonstrated to create an advanced real-world application, which integrates a semi-autonomous, safe and energy efficient electric hob.
Abstract: Ambient Intelligence (AmI) is a multidisciplinary paradigm that positively alters the relationship between humans and technology. In home environments, the AmI vision encompasses home automation, communication, entertainment, working, and learning. In the area of communication, AmI still needs better mechanisms for human-computer interaction: natural human-computer interaction requires systems capable of modelling words and computing with them. For this purpose, the paradigm of Computing With Words (CWWs) can be employed to mimic human-like communication in Ambient Intelligent Environments (AIEs). This paper demonstrates the extendibility of the Linear General Type-2 (LGT2) Fuzzy Logic based CWWs Framework to create an advanced real-world application integrating a semi-autonomous, safe, and energy-efficient electric hob. The motivation of this work is twofold: 1) there is a need to develop transparent human-computer communication rather than embedding obtrusive tablets and computing equipment throughout our surroundings, and 2) one of the most hazardous and energy-consuming household devices, the electric hob, lacks competent levels of intelligence and energy efficiency. The proposed Ambient Intelligent Food Preparation System (AIFPS) can increase user comfort, facilitate food preparation, minimize energy consumption, and be a useful tool for the elderly and for people with major disabilities including vision impairment. The results of real-world experiments with various lay users in the intelligent flat (iSpace) show the success of AIFPS in providing up to 55.43% improved natural interaction (compared to the Interval Type-2 based CWWs Framework) while achieving semi-autonomous, safe, and energy-efficient cooking that can save between 11.5% and 35.2% of energy.

Journal ArticleDOI
TL;DR: This special issue is dedicated to computational intelligence for cloud computing, an emerging technology that gives users access to large-scale, efficient, and highly reliable computing systems, paid for according to their needs.
Abstract: The articles in this special issue are dedicated to computational intelligence for cloud computing. Cloud computing is an emerging technology that allows users to access large-scale, efficient, and highly reliable computing systems while paying according to their needs. A cloud generally consists of a heterogeneous system that holds a large number of application programs and a large amount of data.

Journal ArticleDOI
TL;DR: The articles in this special section focus on the growing interest in biologically inspired learning, which refers to a wide range of learning techniques, motivated by biology, that try to mimic specific biological functions or behaviors.
Abstract: The articles in this special section focus on the growing interest in biologically inspired learning (BIL), which refers to a wide range of learning techniques, motivated by biology, that try to mimic specific biological functions or behaviors.