Open Access Proceedings Article

Self-paced curriculum learning

TL;DR
The missing link between CL and SPL is discovered, and a unified framework named self-paced curriculum learning (SPCL) is proposed, formulated as a concise optimization problem that takes into account both prior knowledge known before training and the learning progress during training.
Abstract
Curriculum learning (CL) and self-paced learning (SPL) represent a recently proposed learning regime inspired by the learning process of humans and animals, which gradually proceeds from easy to more complex samples in training. The two methods share a similar conceptual learning paradigm but differ in their specific learning schemes. In CL, the curriculum is predetermined by prior knowledge and remains fixed thereafter; this type of method therefore relies heavily on the quality of the prior knowledge while ignoring feedback about the learner. In SPL, the curriculum is dynamically determined to adjust to the learning pace of the learner. However, SPL is unable to incorporate prior knowledge, rendering it prone to overfitting. In this paper, we discover the missing link between CL and SPL, and propose a unified framework named self-paced curriculum learning (SPCL). SPCL is formulated as a concise optimization problem that takes into account both prior knowledge known before training and the learning progress during training. By analogy with human education, SPCL corresponds to an "instructor-student-collaborative" learning mode, as opposed to the "instructor-driven" mode in CL or the "student-driven" mode in SPL. Empirically, we show the advantage of SPCL on two tasks.
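
The "concise optimization problem" above is, in the paper, a weighted training loss minimized jointly over the model parameters and per-sample weights v, with a self-paced regularizer f(v; λ) and the constraint that v lie in a curriculum region Ψ encoding the prior knowledge. The following is a minimal NumPy sketch under simplifying assumptions: squared loss, the binary ("hard") regularizer, and a curriculum region reduced to per-sample thresholds that admit prior-flagged easy samples at a lower loss bar. All names and constants are illustrative, not the paper's.

```python
import numpy as np

def spcl_least_squares(X, y, easy_prior, lam=0.05, growth=1.3, n_rounds=10):
    """Alternate between fitting weights on the selected samples and
    reselecting samples, as in self-paced curriculum learning (sketch)."""
    n, d = X.shape
    v = np.ones(n)                                # start with all samples selected
    w = np.zeros(d)
    for _ in range(n_rounds):
        # Step 1: fix v, fit w by weighted least squares on selected samples.
        sw = np.sqrt(v)
        w, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        # Step 2: fix w, update v. With the binary regularizer the optimum is a
        # threshold rule; the (simplified) curriculum lowers the bar for samples
        # that prior knowledge marks as easy.
        losses = (X @ w - y) ** 2
        thresh = np.where(easy_prior, 1.5 * lam, lam)
        v = (losses < thresh).astype(float)
        lam *= growth                             # grow the pace: admit harder samples
    return w, v

# Toy usage: a noisy line with three corrupted labels; prior knowledge marks
# the first half of the samples as easy.
rng = np.random.default_rng(0)
X = np.column_stack([np.linspace(0.0, 1.0, 40), np.ones(40)])
y = 2.0 * X[:, 0] + 0.5 + 0.05 * rng.standard_normal(40)
y[-3:] += 5.0
w, v = spcl_least_squares(X, y, easy_prior=np.arange(40) < 20)
print(np.round(w, 2), int(v.sum()), "samples kept")   # corrupted samples stay out
```

Growing λ each round is the "pace": more samples enter training as the model matures, while the prior biases which samples are admitted first.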


Citations
Proceedings Article

Learning to Reweight Examples for Robust Deep Learning

TL;DR: This article proposes a meta-learning algorithm that learns to assign weights to training examples based on their gradient directions; the method can be easily implemented on any type of deep network, requires no additional hyperparameter tuning, and achieves impressive performance on class-imbalance and corrupted-label problems where only a small amount of clean validation data is available.
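
The intuition in this TL;DR fits in a few lines: a training example is useful to the extent that its gradient points the same way as the gradient on a small trusted validation set. The sketch below is a deliberately simplified, one-shot NumPy approximation for a linear model with squared loss; the paper's method instead obtains the weights from a one-step meta-gradient through the training update, and every name here is illustrative.

```python
import numpy as np

def reweight_by_alignment(X_tr, y_tr, X_val, y_val, w):
    """Weight each training example by the clipped dot product between its
    gradient and the mean gradient on a clean validation set (sketch)."""
    g_tr = 2.0 * (X_tr @ w - y_tr)[:, None] * X_tr          # per-example grads, (n, d)
    g_val = 2.0 * (X_val @ w - y_val) @ X_val / len(y_val)  # mean validation grad, (d,)
    eps = np.maximum(g_tr @ g_val, 0.0)                     # negative alignment -> 0
    return eps / eps.sum() if eps.sum() > 0 else np.full(len(y_tr), 1.0 / len(y_tr))

# The third label is corrupted, so its gradient opposes the validation
# gradient and its weight is driven to zero.
X_tr, y_tr = np.array([[1.0], [2.0], [3.0]]), np.array([1.0, 2.0, -3.0])
X_val, y_val = np.array([[1.5], [2.5]]), np.array([1.5, 2.5])
print(reweight_by_alignment(X_tr, y_tr, X_val, y_val, w=np.zeros(1)))  # [0.2 0.8 0. ]
```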
Journal Article

Cost-Effective Active Learning for Deep Image Classification

TL;DR: This paper proposes a novel active learning (AL) framework that incorporates deep convolutional neural networks into AL and is capable of incrementally building a competitive classifier with an optimal feature representation from a limited amount of labeled training instances.
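
Concretely, the selection step of such a framework can be sketched as: send the least-confident predictions to a human annotator and pseudo-label the most confident ones, so the labeling budget is spent only where the network is unsure. A hedged NumPy sketch; the function name, query size, and threshold are assumptions rather than the paper's exact settings.

```python
import numpy as np

def cost_effective_split(probs, n_query=2, conf_thresh=0.95):
    """Split unlabeled samples into oracle queries and pseudo-labeled ones."""
    conf = probs.max(axis=1)
    query_idx = np.argsort(conf)[:n_query]          # least confident -> human label
    pseudo_idx = np.where(conf >= conf_thresh)[0]   # most confident -> pseudo-label
    return query_idx, pseudo_idx, probs[pseudo_idx].argmax(axis=1)

probs = np.array([[0.55, 0.45], [0.99, 0.01], [0.60, 0.40], [0.97, 0.03]])
print(cost_effective_split(probs))  # queries samples 0 and 2; pseudo-labels 1 and 3
```

Such a framework then fine-tunes the network on the union of newly labeled and high-confidence pseudo-labeled samples and repeats, which is what makes the learning incremental.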
Journal Article

Deep Reinforcement Learning for Offloading and Resource Allocation in Vehicle Edge Computing and Networks

TL;DR: This paper explores a vehicle edge computing network architecture in which vehicles can act as mobile edge servers to provide computation services for nearby user equipments (UEs), and proposes a vehicle-assisted offloading scheme for UEs that accounts for the delay of the computation task.
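
In its simplest form, the delay consideration reduces to comparing local execution time against transmission-plus-edge execution time. The toy rule below is a generic illustration only, not the paper's scheme (which learns offloading and resource-allocation decisions with deep reinforcement learning); all parameter names are assumptions.

```python
def should_offload(task_bits, cycles_per_bit, f_local_hz, f_edge_hz, uplink_bps):
    """Offload iff uploading the task and computing at the edge beats local compute."""
    local_delay = task_bits * cycles_per_bit / f_local_hz
    edge_delay = task_bits / uplink_bps + task_bits * cycles_per_bit / f_edge_hz
    return edge_delay < local_delay

# An 8 Mb task: 0.4 s upload + 0.8 s edge compute beats 8 s of local compute.
print(should_offload(8e6, 1000, 1e9, 10e9, 20e6))  # -> True
```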
Journal Article

Multi-modal Curriculum Learning for Semi-supervised Image Classification

TL;DR: A well-organized propagation process leveraging multiple teachers and one learner enables the multi-modal curriculum learning (MMCL) strategy to outperform five state-of-the-art methods on eight popular image data sets.
References
More filters
Journal Article

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Proceedings Article

Curriculum learning

TL;DR: It is hypothesized that curriculum learning affects both the speed of convergence of the training process to a minimum and the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for the global optimization of non-convex functions).
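
The easy-to-hard regime evaluated there can be made concrete with a difficulty score fixed in advance and a staged schedule that enlarges the training pool, as in this minimal sketch (names and stage counts are illustrative):

```python
import numpy as np

def curriculum_stages(difficulty, n_stages=3):
    """Return index sets for each stage: easiest samples first, full set last."""
    order = np.argsort(difficulty)  # ascending difficulty
    n = len(order)
    return [order[: int(np.ceil(n * (k + 1) / n_stages))] for k in range(n_stages)]

for k, idx in enumerate(curriculum_stages(np.array([0.9, 0.1, 0.5, 0.3, 0.7]))):
    print(f"stage {k}: train on samples {idx.tolist()}")
```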
Journal Article

Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization

TL;DR: L-BFGS-B is a limited-memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables, intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems.
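
These Fortran subroutines are wrapped by common scientific stacks; SciPy, for instance, exposes them through scipy.optimize.minimize with method="L-BFGS-B". A small usage sketch with simple box bounds:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])

# The unconstrained minimizer (2, -1) is clipped to the feasible box [0, 1]^2.
res = minimize(f, x0=np.zeros(2), jac=grad, method="L-BFGS-B",
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x)  # -> approximately [1. 0.]
```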
Journal Article

Shape and motion from image streams under orthography: a factorization method

TL;DR: In this paper, the singular value decomposition (SVD) technique is used to factor the measurement matrix into two matrices that represent object shape and camera rotation, respectively, and two of the three translation components are computed in a preprocessing stage.
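
The factorization is compact enough to sketch: after subtracting per-row means (the translation), the 2F x P measurement matrix of P points tracked over F frames has rank 3 under orthography, so a truncated SVD splits it into motion and shape factors, up to a 3 x 3 affine ambiguity that the paper resolves with metric constraints. A minimal NumPy sketch on synthetic noise-free data:

```python
import numpy as np

rng = np.random.default_rng(1)
F, P = 6, 20
S = rng.standard_normal((3, P))          # ground-truth 3-D shape
M = rng.standard_normal((2 * F, 3))      # stacked per-frame camera rows
W = M @ S                                # ideal measurement matrix, rank 3

W_c = W - W.mean(axis=1, keepdims=True)  # remove translation
U, s, Vt = np.linalg.svd(W_c, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])        # motion factor
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3] # shape factor (affine ambiguity remains)

print(np.allclose(M_hat @ S_hat, W_c))   # -> True: rank-3 SVD reconstructs W_c
```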
Proceedings Article

Self-Paced Learning for Latent Variable Models

TL;DR: A novel, iterative self-paced learning algorithm in which each iteration simultaneously selects easy samples and learns a new parameter vector; it outperforms the state-of-the-art method for learning a latent structural SVM on four applications.
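
The per-iteration selection step described here has a closed form: with the hard regularizer used in the paper, a sample is selected exactly when its current loss falls below the pace threshold. Follow-up SPL variants soften the indicator into continuous weights; both rules are sketched below with illustrative names.

```python
import numpy as np

def spl_weights(losses, lam, soft=False):
    """Closed-form sample weights given current losses and pace parameter lam."""
    losses = np.asarray(losses, dtype=float)
    if soft:  # linear soft weighting, from later SPL variants (not this paper)
        return np.clip(1.0 - losses / lam, 0.0, 1.0)
    return (losses < lam).astype(float)   # hard rule: keep easy, drop hard

print(spl_weights([0.1, 0.4, 2.0], lam=0.5))             # [1. 1. 0.]
print(spl_weights([0.1, 0.4, 2.0], lam=0.5, soft=True))  # [0.8 0.2 0. ]
```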