Open Access, Posted Content

Techniques for Automated Machine Learning

TLDR
This paper portrays AutoML as a bi-level optimization problem, where one problem is nested within another to search for the optimum in the search space, and reviews the current developments of AutoML in terms of three categories: automated feature engineering (AutoFE), automated model and hyperparameter learning (AutoMHL), and automated deep learning (AutoDL).
Abstract
Automated machine learning (AutoML) aims to find optimal machine learning solutions automatically given a machine learning problem. It could relieve data scientists of the burden of the multifarious manual tuning process and give domain experts access to off-the-shelf machine learning solutions without requiring extensive experience. In this paper, we review the current developments of AutoML in terms of three categories: automated feature engineering (AutoFE), automated model and hyperparameter learning (AutoMHL), and automated deep learning (AutoDL). State-of-the-art techniques adopted in the three categories are presented, including Bayesian optimization, reinforcement learning, evolutionary algorithms, and gradient-based approaches. We summarize popular AutoML frameworks and conclude with current open challenges of AutoML.
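To make the bi-level framing concrete, here is a minimal sketch (assuming scikit-learn, with plain random search standing in for the Bayesian, evolutionary, or RL strategies the abstract lists): the outer loop proposes hyperparameter configurations, the inner loop fits and validates a model under each.

# Minimal sketch of AutoMHL as a bi-level search: the outer loop proposes
# hyperparameters, the inner loop trains and validates a model with them.
# Random search stands in here for Bayesian optimization, RL, or evolution.
import random

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

search_space = {
    "n_estimators": [50, 100, 200],
    "max_depth": [2, 4, 8, None],
    "min_samples_split": [2, 5, 10],
}

def inner_objective(config):
    """Inner problem: fit the model under `config` and report validation score."""
    model = RandomForestClassifier(random_state=0, **config)
    return cross_val_score(model, X, y, cv=3).mean()

best_config, best_score = None, -float("inf")
for _ in range(20):  # outer problem: search the hyperparameter space
    config = {k: random.choice(v) for k, v in search_space.items()}
    score = inner_objective(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, round(best_score, 3))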


Citations

An Analysis of a Century of Development of American Geography: Based on a Statistical Analysis of Academic Papers in the Annals of the Association of American Geographers

TL;DR: In this paper, the authors trace a century of development of American geography through a statistical analysis of papers published in the Annals of the Association of American Geographers.

Chi-Square Distribution.

TL;DR: The shorthand X ∼ χ²(n) is used to indicate that the random variable X has the chi-square distribution with positive integer parameter n, which is known as the degrees of freedom.
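As a quick illustration of the X ∼ χ²(n) notation, a short SciPy check (the specific values are illustrative, not taken from the reference):

# Quick check of the X ~ chi2(n) notation with SciPy: n is the degrees of
# freedom, and the mean of a chi-square variable equals n.
from scipy.stats import chi2

n = 5                                   # degrees of freedom
samples = chi2.rvs(df=n, size=100_000, random_state=0)
print(samples.mean())                   # close to 5, matching E[X] = n
print(chi2.ppf(0.95, df=n))             # 95th percentile, about 11.07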
Proceedings Article

Towards Automated Neural Interaction Discovery for Click-Through Rate Prediction

TL;DR: This work proposes AutoCTR, an automated interaction architecture discovery framework for CTR prediction, which performs evolutionary architecture exploration with learning-to-rank guidance at the architecture level and achieves acceleration using low-fidelity models.
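A heavily hedged sketch of the evolutionary-exploration idea: mutate architecture configurations and keep the fittest, scoring candidates with a cheap low-fidelity proxy. The architecture fields and the placeholder score below are made up for illustration and are not AutoCTR's own.

# Hedged sketch of evolutionary architecture search with a low-fidelity proxy.
import random

SEARCH_SPACE = {
    "num_blocks": [1, 2, 3, 4],
    "block_type": ["mlp", "dot_product", "fm"],
    "hidden_units": [32, 64, 128],
}

def random_arch():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def low_fidelity_score(arch):
    # Placeholder for training a down-scaled model for a few steps; any cheap
    # proxy that roughly ranks architectures would slot in here.
    return random.random()

population = [random_arch() for _ in range(8)]
for generation in range(5):
    ranked = sorted(population, key=low_fidelity_score, reverse=True)
    parents = ranked[:4]                       # keep the fittest candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

print(population[0])                           # best candidate of the last round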
Posted Content

ASFGNN: Automated Separated-Federated Graph Neural Network

TL;DR: An Automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm is proposed, which decouples the training of GNN into two parts: the message passing part that is done by clients separately, and the loss computing part that is learnt by clients federally.
Journal Article

ASFGNN: Automated separated-federated graph neural network

TL;DR: Wang et al. propose an Automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm, which decouples the training of GNN into two parts: the message passing part that is done by clients separately, and the loss computing part that is learnt by clients jointly.
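A hedged NumPy sketch of the separated-federated split both entries describe, not the paper's actual protocol: message passing stays on each client, while loss-side parameters are updated from gradients averaged across clients.

# Separated part: each client aggregates its own graph; federated part: the
# server averages the clients' loss gradients (FedAvg-style). All names,
# shapes, and the simple mean-neighbor aggregation are illustrative.
import numpy as np

def local_message_passing(features, adjacency):
    """Client-side step: one round of mean-neighbor aggregation."""
    degree = adjacency.sum(axis=1, keepdims=True) + 1e-8
    return adjacency @ features / degree

def local_loss_gradient(embeddings, labels, weights):
    """Client-side gradient of a linear classifier on the local embeddings."""
    logits = embeddings @ weights
    probs = 1.0 / (1.0 + np.exp(-logits))
    return embeddings.T @ (probs - labels) / len(labels)

def federated_round(clients, weights, lr=0.1):
    """Server-side step: average the clients' gradients (the joint part)."""
    grads = []
    for feats, adj, labels in clients:
        emb = local_message_passing(feats, adj)   # separated: stays on-client
        grads.append(local_loss_gradient(emb, labels, weights))
    return weights - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
clients = [
    (rng.normal(size=(6, 4)), (rng.random((6, 6)) > 0.5).astype(float),
     rng.integers(0, 2, size=(6, 1)).astype(float))
    for _ in range(3)
]
weights = np.zeros((4, 1))
for _ in range(10):
    weights = federated_round(clients, weights)
print(weights.ravel())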
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously and won 1st place in the ILSVRC 2015 classification task.
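For reference, a minimal PyTorch residual block illustrating the F(x) + x shortcut the paper introduces; a simplified sketch, not the exact ResNet-50 block.

# The block learns a residual F(x) and outputs F(x) + x, which eases
# optimization of very deep networks.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut: F(x) + x

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 16, 32, 32])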
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Posted Content

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

TL;DR: This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
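A hedged PyTorch sketch of the depthwise separable convolution MobileNets are built from, with a width multiplier as one of the two global hyper-parameters the TL;DR mentions (the other scales input resolution); the helper below is illustrative, not the paper's exact layer.

# Depthwise 3x3 conv (one filter per input channel) followed by a pointwise
# 1x1 conv that mixes channels; the width multiplier thins every layer.
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, width_mult=1.0, stride=1):
    in_ch, out_ch = int(in_ch * width_mult), int(out_ch * width_mult)
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),      # depthwise
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),  # pointwise
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

layer = depthwise_separable(32, 64, width_mult=0.5)
print(layer(torch.randn(1, 16, 56, 56)).shape)   # torch.Size([1, 32, 56, 56])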