
Showing papers on "Benchmark (computing) published in 2016"


Journal ArticleDOI
TL;DR: A deep cascaded multi-task framework that exploits the inherent correlation between face detection and alignment to boost the performance of both tasks, achieving superior accuracy over state-of-the-art techniques on challenging face detection and alignment benchmarks.
Abstract: Face detection and alignment in unconstrained environments are challenging due to various poses, illuminations and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this paper, we propose a deep cascaded multi-task framework which exploits the inherent correlation between them to boost their performance. In particular, our framework adopts a cascaded structure with three stages of carefully designed deep convolutional networks that predict face and landmark locations in a coarse-to-fine manner. In addition, in the learning process, we propose a new online hard sample mining strategy that improves performance automatically without manual sample selection. Our method achieves superior accuracy over state-of-the-art techniques on the challenging FDDB and WIDER FACE benchmarks for face detection and the AFLW benchmark for face alignment, while keeping real-time performance.

1,982 citations
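
A note on the online hard sample mining mentioned above: the idea is to compute unreduced per-sample losses and backpropagate only the hardest fraction of each mini-batch. A minimal PyTorch sketch under assumed settings (the helper name and keep ratio are illustrative, not the authors' code):

```python
import torch

def hard_sample_mining_loss(losses: torch.Tensor, keep_ratio: float = 0.7) -> torch.Tensor:
    """Average only the hardest samples (largest losses) of a mini-batch.

    `losses` holds one unreduced loss per sample; only the top `keep_ratio`
    fraction contributes to the gradient, so easy samples are discarded
    automatically, with no manual sample selection.
    """
    n_keep = max(1, int(keep_ratio * losses.numel()))
    hard_losses, _ = torch.topk(losses, n_keep)  # largest losses first
    return hard_losses.mean()

# usage: per_sample = F.cross_entropy(logits, labels, reduction="none")
#        hard_sample_mining_loss(per_sample).backward()
```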


Journal ArticleDOI
TL;DR: The proposed algorithm is found to secure first rank for the ‘best’ and ‘mean’ solutions in the Friedman rank test for all 24 constrained benchmark problems.
Abstract: A simple yet powerful optimization algorithm is proposed in this paper for solving constrained and unconstrained optimization problems. The algorithm is based on the concept that the solution obtained for a given problem should move towards the best solution and away from the worst solution. It requires only the common control parameters and no algorithm-specific control parameters. The performance of the proposed algorithm is investigated by implementing it on 24 constrained benchmark functions with different characteristics from the Congress on Evolutionary Computation (CEC 2006), and the performance is compared with that of other well-known optimization algorithms. The results demonstrate the effectiveness of the proposed algorithm. Furthermore, statistical analysis of the experimental work is carried out using the Friedman rank test and the Holm-Sidak test. The proposed algorithm secures first rank for the ‘best’ and ‘mean’ solutions in the Friedman rank test for all 24 constrained benchmark problems. In addition to the constrained benchmark problems, the algorithm is also evaluated on 30 unconstrained benchmark problems taken from the literature, where its performance is again found to be better.

1,383 citations
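
A sketch of the move-toward-best, avoid-worst rule the abstract describes (this is the update used by Rao's Jaya algorithm; the NumPy function below, its names, and the bound handling are illustrative):

```python
import numpy as np

def jaya_step(pop, fitness, lo, hi, rng):
    """One generation of the move-toward-best / avoid-worst update.

    pop: (n, d) array of candidate solutions; fitness: (n,) array of
    objective values (lower is better). Only common control parameters
    (population size, iterations) are needed -- no algorithm-specific ones.
    """
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    new = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    return np.clip(new, lo, hi)  # keep candidates inside the search bounds
```

A greedy selection completes the generation: each candidate replaces its parent only if it improves the objective value.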


Book ChapterDOI
Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, Jianfeng Gao
08 Oct 2016
TL;DR: In this article, the authors proposed a benchmark task to recognize one million celebrities from their face images, using all face images of each individual that can be collected on the web as training data.
Abstract: In this paper, we design a benchmark task, and provide the associated datasets, for recognizing face images and linking them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, using all face images of each individual that can be collected on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide a concrete measurement set, an evaluation protocol, and training data. We also present our experimental setup in detail and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.

1,346 citations


Book ChapterDOI
08 Oct 2016
TL;DR: A new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photo-realistic UAV simulator that can be coupled with tracking methods and used to extend existing real-world datasets.
Abstract: In this paper, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking in terms of both tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground-truth annotations that easily extend existing real-world datasets. Both the benchmark and the simulator are made publicly available to the vision community on our website (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx) to further research in the area of object tracking from UAVs.

1,277 citations
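
Single-object tracking benchmarks of this kind are typically summarized with an overlap-based success curve: the fraction of frames in which the predicted box overlaps the ground truth above a threshold, swept over thresholds. A minimal sketch of that metric (generic code, not the authors' evaluation toolkit):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_curve(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Fraction of frames whose overlap exceeds each threshold."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return np.array([(overlaps > t).mean() for t in thresholds])
```

The area under this curve is the usual single-number ranking score.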


Posted Content
TL;DR: A new release of the MOTChallenge benchmark, which focuses on multiple people tracking, offers a significant increase in the number of labeled boxes, and also provides multiple object classes besides pedestrians and the level of visibility for every single object of interest.
Abstract: Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. Recently, a new benchmark for Multiple Object Tracking, MOTChallenge, was launched with the goal of collecting existing and new data and creating a framework for the standardized evaluation of multiple object tracking methods. The first release of the benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community. This paper accompanies a new release of the MOTChallenge benchmark. Unlike the initial release, all videos of MOT16 have been carefully annotated following a consistent protocol. Moreover, the new release not only offers a significant increase in the number of labeled boxes, but also provides multiple object classes besides pedestrians and the level of visibility for every single object of interest.

1,262 citations
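
The headline score on MOTChallenge leaderboards is MOTA from the CLEAR MOT metrics, which folds misses, false positives, and identity switches into one number. A minimal sketch of the formula:

```python
def mota(false_negatives, false_positives, id_switches, num_gt_boxes):
    """CLEAR MOT accuracy: 1 - (FN + FP + IDSW) / total ground-truth boxes.

    Counts are accumulated over all frames of a sequence; MOTA can be
    negative when a tracker makes more errors than there are objects.
    """
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_boxes
```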


Proceedings Article
19 Jun 2016
TL;DR: In this paper, the authors present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with high state and action dimensionality such as 3D humanoid locomotion, and tasks with partial observations.
Abstract: Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.

1,038 citations


Book ChapterDOI
08 Oct 2016
TL;DR: This work introduces a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description, and shows how to learn to do all three in a unified manner while preserving end-to-end differentiability.
Abstract: We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each one of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need of retraining.

878 citations


Posted Content
TL;DR: OpenAI Gym as mentioned in this paper is a toolkit for reinforcement learning research that includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms.
Abstract: OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.

690 citations
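
The common interface is the reset/step episode loop. A minimal random-agent episode using the classic four-tuple step API of the 2016-era releases (newer Gym/Gymnasium versions instead return separate terminated/truncated flags and a reset info dict):

```python
import gym

env = gym.make("CartPole-v0")           # any registered benchmark problem
observation = env.reset()               # start a new episode
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # random agent; plug a policy in here
    observation, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print("episode return:", total_reward)
```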


Proceedings ArticleDOI
TL;DR: In this article, a deep learning based approach for developing an efficient and flexible Network Intrusion Detection System (NIDS) for unforeseen and unpredictable attacks has been proposed, which uses Self-taught Learning (STL) on NSL-KDD, a benchmark dataset for network intrusion detection.
Abstract: A Network Intrusion Detection System (NIDS) helps system administrators detect network security breaches in their organizations. However, many challenges arise in developing a flexible and efficient NIDS for unforeseen and unpredictable attacks. We propose a deep learning based approach for developing such an efficient and flexible NIDS. We use Self-taught Learning (STL), a deep learning based technique, on NSL-KDD, a benchmark dataset for network intrusion detection. We present the performance of our approach and compare it with previous work. The compared metrics include accuracy, precision, recall, and f-measure values.

685 citations
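
Self-taught learning proceeds in two stages: unsupervised feature learning (typically a sparse autoencoder) on unlabeled records, then a supervised classifier on the learned representation. A hedged PyTorch sketch; the layer sizes are placeholders (NSL-KDD records are commonly one-hot encoded to roughly 122 inputs) and the sparsity penalty is omitted for brevity:

```python
import torch
import torch.nn as nn

n_in, n_hidden = 122, 64  # assumed dimensions, not the paper's exact choice
encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

def autoencoder_step(x_unlabeled):
    """Stage 1: learn features by reconstructing unlabeled traffic records."""
    opt.zero_grad()
    recon = decoder(encoder(x_unlabeled))
    loss = nn.functional.mse_loss(recon, x_unlabeled)  # add a sparsity term for a true sparse AE
    loss.backward()
    opt.step()

# Stage 2: a soft-max classifier on the frozen self-taught features,
# trained with cross-entropy on the labeled data (normal vs. attack).
classifier = nn.Linear(n_hidden, 2)

def classify(x):
    with torch.no_grad():
        h = encoder(x)  # frozen learned representation
    return classifier(h)
```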


Proceedings Article
19 Jun 2016
TL;DR: This paper proposes a quantizer design for fixed-point implementation of DCNs, formulates and solves an optimization problem to identify the optimal fixed-point bit-width allocation across DCN layers, and demonstrates that fine-tuning can further enhance the accuracy of fixed-point DCNs beyond that of the original floating-point model.
Abstract: In recent years, increasingly complex architectures for deep convolutional networks (DCNs) have been proposed to boost performance on image recognition tasks. However, the gains in performance have come at the cost of a substantial increase in computation and model storage resources. Fixed-point implementation of DCNs has the potential to alleviate some of these complexities and facilitate deployment on embedded hardware. In this paper, we propose a quantizer design for fixed-point implementation of DCNs. We formulate and solve an optimization problem to identify the optimal fixed-point bit-width allocation across DCN layers. Our experiments show that, in comparison to equal bit-width settings, fixed-point DCNs with optimized bit-width allocation offer a > 20% reduction in model size without any loss in accuracy on the CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed-point DCNs beyond that of the original floating-point model. In doing so, we report a new state-of-the-art fixed-point performance of 6.78% error rate on the CIFAR-10 benchmark.

619 citations
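
The basic fixed-point rounding step behind such quantizer designs is easy to sketch; the paper's contribution, choosing the per-layer bit allocation by solving an optimization problem, is not reproduced here, and the bit widths shown are arbitrary:

```python
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=6):
    """Quantize values to signed fixed point with `frac_bits` fractional bits.

    The step size is 2**-frac_bits and representable integers span a signed
    `total_bits`-bit range. Returns dequantized floats, i.e. the values the
    fixed-point hardware would effectively compute with.
    """
    step = 2.0 ** -frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x / step), qmin, qmax)
    return q * step

# Equal bit widths across layers are the baseline; the paper's optimizer
# instead assigns each layer its own (total_bits, frac_bits) budget.
```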


Posted Content
Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, Jianfeng Gao
TL;DR: A benchmark task to recognize one million celebrities from their face images, using all face images of each individual that can be collected on the web as training data, which could lead to one of the largest classification problems in computer vision.
Abstract: In this paper, we design a benchmark task, and provide the associated datasets, for recognizing face images and linking them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, using all face images of each individual that can be collected on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide a concrete measurement set, an evaluation protocol, and training data. We also present our experimental setup in detail and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.

Proceedings Article
01 Dec 2016
TL;DR: Two target-dependent long short-term memory models, in which target information is automatically taken into account, achieve state-of-the-art performance without using a syntactic parser or external sentiment lexicons.
Abstract: Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between the target word and its context words when building a learning system. In this paper, we develop two target-dependent long short-term memory (LSTM) models in which target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. Empirical results show that modeling sentence representation with a standard LSTM does not perform well. Incorporating target information into the LSTM can significantly boost classification accuracy. The target-dependent LSTM models achieve state-of-the-art performance without using a syntactic parser or external sentiment lexicons.
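
The target-dependent LSTM idea can be sketched compactly: one LSTM reads the left context up to and including the target, another reads the reversed right context down to the target, and the two final hidden states are concatenated for classification. A minimal PyTorch sketch with assumed dimensions (not the authors' code):

```python
import torch
import torch.nn as nn

class TDLSTM(nn.Module):
    """Sketch of a target-dependent LSTM sentiment classifier."""

    def __init__(self, vocab_size, emb_dim=100, hid=50, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm_l = nn.LSTM(emb_dim, hid, batch_first=True)  # left context + target
        self.lstm_r = nn.LSTM(emb_dim, hid, batch_first=True)  # reversed right context + target
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, left_ids, right_ids_reversed):
        _, (h_l, _) = self.lstm_l(self.emb(left_ids))
        _, (h_r, _) = self.lstm_r(self.emb(right_ids_reversed))
        # Both runs end at the target word, so the target steers both
        # halves of the sentence representation.
        return self.out(torch.cat([h_l[-1], h_r[-1]], dim=-1))
```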

Posted Content
TL;DR: In this article, the authors present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with high state and action dimensionality such as 3D humanoid locomotion, and tasks with partial observations.
Abstract: Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.

Journal Article
TL;DR: The MLR package provides a generic, object-oriented, and extensible framework for classification, regression, survival analysis and clustering for the R language and includes meta-algorithms and model selection techniques to improve and extend the functionality of basic learners with, e.g., hyperparameter tuning, feature selection, and ensemble construction.
Abstract: The MLR package provides a generic, object-oriented, and extensible framework for classification, regression, survival analysis and clustering for the R language. It provides a unified interface to more than 160 basic learners and includes meta-algorithms and model selection techniques to improve and extend the functionality of basic learners with, e.g., hyperparameter tuning, feature selection, and ensemble construction. Parallel high-performance computing is natively supported. The package targets practitioners who want to quickly apply machine learning algorithms, as well as researchers who want to implement, benchmark, and compare their new methods in a structured environment.

Journal ArticleDOI
TL;DR: This study uses the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art community detection algorithms and provides guidelines that help to choose the most adequate community detection algorithm for a given network.
Abstract: Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is in terms of accuracy and computing time remains an open question. Testing algorithms on real-world networks has certain restrictions that make the resulting insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify accuracy using complementary measures and also record the algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide practical techniques to determine the most suitable algorithm in most circumstances, based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithms' predictive power and the effective computing time.
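
A minimal version of such an experiment, using the LFR generator shipped with networkx and normalized mutual information against the planted partition (the generator parameters below are the library's documented example values, and greedy modularity merely stands in for the eight algorithms tested):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

# LFR benchmark graph; mu is the mixing parameter used as the
# difficulty indicator in the study above.
G = nx.LFR_benchmark_graph(250, tau1=3, tau2=1.5, mu=0.1,
                           average_degree=5, min_community=20, seed=10)

# Planted ground truth: each node carries its community as a node attribute.
truth = {v: min(G.nodes[v]["community"]) for v in G}  # label = smallest member id

# Detected partition (any community detection algorithm fits here).
found = {}
for label, comm in enumerate(greedy_modularity_communities(G)):
    for v in comm:
        found[v] = label

nodes = sorted(G)
nmi = normalized_mutual_info_score([truth[v] for v in nodes],
                                   [found[v] for v in nodes])
print(f"NMI vs planted partition: {nmi:.3f}")
```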

Journal ArticleDOI
TL;DR: A specific novel *L-PSO algorithm is proposed, using genetic evolution to breed promising exemplars for PSO, and under such guidance, the global search ability and search efficiency of PSO are both enhanced.
Abstract: Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GAs) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework for organically hybridizing PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of particles provides guidance to the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified, but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: The authors used natural language strings to automatically assemble neural networks from a collection of composable modules via reinforcement learning, with only (world, question, answer) triples as supervision, achieving state-of-the-art results on benchmark datasets in both visual and structured domains.
Abstract: We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural module network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.

Journal ArticleDOI
TL;DR: Data sets of benchmark interaction energies in noncovalent complexes are an important tool for quantifying the accuracy of computational methods used in this field, as well as for the development of new computational approaches, and their construction and accuracy are discussed.
Abstract: Data sets of benchmark interaction energies in noncovalent complexes are an important tool for quantifying the accuracy of computational methods used in this field, as well as for the development of new computational approaches. This review is intended as a guide to conscious use of these data sets. We discuss their construction and accuracy, list the data sets available in the literature, and demonstrate their application to validation and parametrization of quantum-mechanical computational methods. In practical model systems, the benchmark interaction energies are usually obtained using composite CCSD(T)/CBS schemes. To use these results as a benchmark, their accuracy should be estimated first. We analyze the errors of this methodology with respect to both the approximations involved and the basis set size. We list the most prominent data sets covering various aspects of the field, from general ones to sets focusing on specific types of interactions or systems. The benchmark data are then used to valida...
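
For orientation, one common composite CCSD(T)/CBS construction (shown only as an illustration; the review itself analyzes the specific variants and their errors) extrapolates the correlation energy from two basis sets with cardinal numbers X < Y and adds a CCSD(T)-minus-MP2 correction evaluated in a smaller basis:

```latex
E^{\mathrm{CBS}}_{\mathrm{corr}} =
  \frac{Y^{3} E^{(Y)}_{\mathrm{corr}} - X^{3} E^{(X)}_{\mathrm{corr}}}
       {Y^{3} - X^{3}},
\qquad
E^{\mathrm{CCSD(T)/CBS}} \approx
  E^{(Y)}_{\mathrm{HF}}
  + E^{\mathrm{CBS}}_{\mathrm{corr}}(\mathrm{MP2})
  + \bigl[ E^{\mathrm{CCSD(T)}} - E^{\mathrm{MP2}} \bigr]_{\text{small basis}}
```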

Posted Content
TL;DR: In this article, a novel deep network architecture is introduced that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description, in a unified manner while preserving end-to-end differentiability.
Abstract: We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each one of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need of retraining.

Journal ArticleDOI
TL;DR: COCO as discussed by the authors is an open source platform for comparing continuous optimizers in a black-box setting, which aims at automating the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent.
Abstract: We introduce COCO, an open source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims at automating, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology make it possible to benchmark deterministic and stochastic solvers for both single- and multiobjective optimization within the same framework. We present the rationales behind the (decade-long) development of the platform as a general proposition for guidelines towards better benchmarking. We detail fundamental concepts underlying COCO, such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime defined by the number of function calls as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.

Posted Content
TL;DR: An efficient, scalable feature extraction algorithm for time series, which filters the available features in an early stage of the machine learning pipeline with respect to their significance for the classification or regression task, while controlling the expected percentage of selected but irrelevant features.
Abstract: The all-relevant problem of feature selection is the identification of all strongly and weakly relevant attributes. This problem is especially hard to solve for time series classification and regression in industrial applications such as predictive maintenance or production line optimization, for which each label or regression target is associated with several time series and meta-information simultaneously. Here, we propose an efficient, scalable feature extraction algorithm for time series, which filters the available features in an early stage of the machine learning pipeline with respect to their significance for the classification or regression task, while controlling the expected percentage of selected but irrelevant features. The proposed algorithm combines established feature extraction methods with a feature importance filter. It has a low computational complexity, allows one to start on a problem with only limited domain knowledge available, can be trivially parallelized, is highly scalable, and is based on well-studied non-parametric hypothesis tests. We benchmark our proposed algorithm on all binary classification problems of the UCR time series classification archive, as well as on time series from a production line optimization project and simulated stochastic processes with an underlying qualitative change of dynamics.
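
The core of such a relevance filter can be sketched generically: score every extracted feature with a non-parametric test against the target, then apply the Benjamini-Hochberg procedure so that the expected share of irrelevant features among those selected stays below a chosen level. A sketch for a binary target (generic code, not the package's API):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def select_relevant_features(X, y, fdr=0.05):
    """Filter feature columns by significance for a binary target.

    X: (n_samples, n_features) matrix of extracted time-series features;
    y: binary labels. Each feature gets a Mann-Whitney U p-value, and the
    Benjamini-Hochberg procedure bounds the expected fraction of irrelevant
    features among those kept. Returns a boolean mask over the columns.
    """
    pvals = np.array([
        mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
        for j in range(X.shape[1])
    ])
    order = np.argsort(pvals)
    m = len(pvals)
    # Largest k with p_(k) <= (k / m) * fdr; keep the k smallest p-values.
    below = pvals[order] <= (np.arange(1, m + 1) / m) * fdr
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    return keep
```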

Proceedings ArticleDOI
25 Aug 2016
TL;DR: This paper presents an attempt to benchmark several state-of-the-art GPU-accelerated deep learning software tools, including Caffe, CNTK, TensorFlow, and Torch, and focuses on evaluating the running time performance of these tools with three popular types of neural networks on two representative CPU platforms and three representative GPU platforms.
Abstract: Deep learning has been shown as a successful machine learning method for a variety of tasks, and its popularity results in numerous open-source deep learning software tools coming to public. Training a deep network is usually a very time-consuming process. To address the huge computational challenge in deep learning, many tools exploit hardware features such as multi-core CPUs and many-core GPUs to shorten the training and inference time. However, different tools exhibit different features and running performance when they train different types of deep networks on different hardware platforms, making it difficult for end users to select an appropriate pair of software and hardware. In this paper, we present our attempt to benchmark several state-of-the-art GPU-accelerated deep learning software tools, including Caffe, CNTK, TensorFlow, and Torch. We focus on evaluating the running time performance (i.e., speed) of these tools with three popular types of neural networks on two representative CPU platforms and three representative GPU platforms. Our contribution is two-fold. First, for end users of deep learning software tools, our benchmarking results can serve as a reference to selecting appropriate hardware platforms and software tools. Second, for developers of deep learning software tools, our in-depth analysis points out possible future directions to further optimize the running performance.
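
At its core, this kind of speed benchmarking reduces to averaging wall-clock time per training step after a warm-up. A minimal framework-agnostic harness sketch (it assumes `step_fn` runs one forward+backward pass on a fixed mini-batch):

```python
import time

def time_per_batch(step_fn, n_warmup=10, n_iters=100):
    """Average wall-clock seconds per training step.

    Warm-up iterations are excluded so one-time costs (graph construction,
    kernel compilation, memory allocation) do not skew the average. For
    asynchronous GPU frameworks, synchronize the device inside step_fn or
    before reading the clock, or the timings will be misleading.
    """
    for _ in range(n_warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(n_iters):
        step_fn()
    return (time.perf_counter() - start) / n_iters
```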

Journal ArticleDOI
TL;DR: A novel physically inspired non-gradient algorithm for solving global optimization problems that mimics the evaporation of a tiny amount of water molecules on solid surfaces with different wettability, a phenomenon that can be studied by molecular dynamics simulations.

Journal ArticleDOI
TL;DR: This approach uses transfer learning as a tool to generate an effective initial population pool by reusing past experience to speed up the evolutionary process; at the same time, any population-based multiobjective algorithm can benefit from this integration without extensive modifications.
Abstract: One of the major distinguishing features of dynamic multiobjective optimization problems (DMOPs) is that the optimization objectives change over time, so tracking the varying Pareto-optimal front becomes a challenge. One promising solution is to reuse "experiences" to construct a prediction model via statistical machine learning approaches. However, most of the existing methods ignore the non-independent and identically distributed nature of the data used to construct the prediction model. In this paper, we propose an algorithmic framework, called Tr-DMOEA, which integrates transfer learning and population-based evolutionary algorithms for solving DMOPs. The approach uses transfer learning as a tool to reuse past experience and speed up the evolutionary process, and at the same time, any population-based multiobjective algorithm can benefit from this integration without extensive modifications. To verify this, we incorporate the proposed approach into the development of three well-known algorithms: nondominated sorting genetic algorithm II (NSGA-II), multiobjective particle swarm optimization (MOPSO), and the regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA). We then employ twelve benchmark functions to test these algorithms and compare them with chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed method in exploiting machine learning technology.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature, and demonstrate the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.

Journal ArticleDOI
TL;DR: A new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve dynamic multiobjective optimization problems and is capable of significantly improving dynamic optimization performance.
Abstract: Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
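
The prediction idea can be illustrated with a textbook linear Kalman filter per decision variable: after each environment change, the filter is corrected with the newly found optimum, and its prediction seeds part of the reinitialized population near the anticipated optimum. A minimal constant-velocity sketch (generic KF code under assumed noise levels, not the paper's exact model):

```python
import numpy as np

class KalmanPredictor:
    """Constant-velocity Kalman filter for one decision variable."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)                         # [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # one change-period step
        self.Q = q * np.eye(2)                       # process noise
        self.R = r                                   # observation noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]  # anticipated position of the optimum

    def update(self, observed_optimum):
        H = np.array([1.0, 0.0])            # we observe position only
        y = observed_optimum - H @ self.x   # innovation
        S = H @ self.P @ H + self.R
        K = self.P @ H / S                  # Kalman gain
        self.x = self.x + K * y
        self.P = self.P - np.outer(K, H @ self.P)
```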

Proceedings Article
19 Jun 2016
TL;DR: This paper proposes deep structured energy based models (DSEBMs), where the energy function is the output of a deterministic deep neural network with structure, and develops novel model architectures to integrate EBMs with different types of data such as static data, sequential data, and spatial data.
Abstract: In this paper, we attack the anomaly detection problem by directly modeling the data distribution with deep architectures. We propose deep structured energy based models (DSEBMs), where the energy function is the output of a deterministic deep neural network with structure. We develop novel model architectures to integrate EBMs with different types of data, such as static data, sequential data, and spatial data, and apply appropriate model architectures to adapt to the data structure. Our training algorithm is built upon the recent development of score matching (Hyvarinen, 2005), which connects an EBM with a regularized autoencoder, eliminating the need for complicated sampling methods. A statistically sound decision criterion for anomaly detection can be derived from the perspective of the energy landscape of the data distribution. We investigate two decision criteria for performing anomaly detection: the energy score and the reconstruction error. Extensive empirical studies on benchmark tasks demonstrate that our proposed model consistently matches or outperforms all competing methods.

Book ChapterDOI
01 Jan 2016
TL;DR: This chapter describes the teacher-student relationship that inspires the teaching-learning-based optimization (TLBO) algorithm, together with the algorithm's fundamentals and performance.
Abstract: This chapter introduces teaching-learning-based optimization (TLBO) algorithm and its elitist and non-dominated sorting multiobjective versions. Two examples of unconstrained and constrained benchmark functions and an example of a multiobjective constrained problem are presented to demonstrate the procedural steps of the algorithm.
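
The two procedural phases can be sketched directly from the textbook update rules: a teacher phase that shifts learners toward the best solution relative to the class mean, and a learner phase of pairwise interaction, each followed by greedy acceptance. A minimal NumPy sketch for minimization (names and structure are illustrative):

```python
import numpy as np

def tlbo_generation(pop, f, rng):
    """One teacher phase + learner phase of TLBO (minimization).

    pop: (n, d) array of learners; f: objective function. Like the
    algorithm it sketches, this needs no algorithm-specific parameters.
    """
    n, d = pop.shape
    fit = np.apply_along_axis(f, 1, pop)

    # Teacher phase: move the class toward the best learner.
    teacher = pop[np.argmin(fit)]
    tf = rng.integers(1, 3)  # teaching factor, randomly 1 or 2
    cand = pop + rng.random((n, d)) * (teacher - tf * pop.mean(axis=0))
    cand_fit = np.apply_along_axis(f, 1, cand)
    improved = cand_fit < fit
    pop[improved], fit[improved] = cand[improved], cand_fit[improved]

    # Learner phase: learn pairwise from a random classmate.
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        trial = pop[i] + rng.random(d) * step
        trial_fit = f(trial)
        if trial_fit < fit[i]:  # greedy acceptance
            pop[i], fit[i] = trial, trial_fit
    return pop, fit
```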

Proceedings ArticleDOI
22 May 2016
TL;DR: For a general multi-user mobile cloud computing system where each mobile user has multiple independent tasks, an efficient approximate solution is proposed using separable semidefinite relaxation, followed by recovery of the binary offloading decisions and optimal allocation of the communication resource.
Abstract: We consider a general multi-user mobile cloud computing system where each mobile user has multiple independent tasks. These mobile users share the communication resource while offloading tasks to the cloud. We aim to jointly optimize the offloading decisions of all users as well as the allocation of communication resource, to minimize the overall cost of energy, computation, and delay for all users. The optimization problem is formulated as a non-convex quadratically constrained quadratic program, which is NP-hard in general. An efficient approximate solution is proposed by using separable semidefinite relaxation, followed by recovery of the binary offloading decision and optimal allocation of the communication resource. For performance benchmark, we further propose a numerical lower bound of the minimum system cost. By comparison with this lower bound, our simulation results show that the proposed algorithm gives nearly optimal performance under various parameter settings.

Journal ArticleDOI
Laizhong Cui, Genghui Li, Qiuzhen Lin, Jianyong Chen, Nan Lu
TL;DR: A novel adaptive multiple sub-populations based DE algorithm, named MPADE, is designed in this paper, in which the parent population is split into three sub-populations based on fitness values and three novel DE strategies are then performed, each taking responsibility for either exploitation or exploration.