Open Access Posted Content

Deep Learning for Functional Data Analysis with Adaptive Basis Layers

TLDR
In this article, each hidden unit of the proposed Basis Layer is itself a basis function, implemented as a micro neural network; the resulting architecture learns a parsimonious dimension reduction of functional inputs that retains only information relevant to the target rather than irrelevant variation in the input function.
Abstract
Despite their widespread success, the application of deep neural networks to functional data remains scarce today. The infinite dimensionality of functional data means standard learning algorithms can be applied only after appropriate dimension reduction, typically achieved via basis expansions. Currently, these bases are chosen a priori, without using information about the task at hand, and thus may not be effective for the designated task. We instead propose to learn these bases adaptively, in an end-to-end fashion. We introduce neural networks that employ a new Basis Layer whose hidden units are each basis functions themselves, implemented as micro neural networks. Our architecture learns to apply parsimonious dimension reduction to functional inputs, focusing only on information relevant to the target rather than on irrelevant variation in the input function. Across numerous classification and regression tasks with functional data, our method empirically outperforms other types of neural networks, and we prove that our approach is statistically consistent with low generalization error. Code is available at: \url{this https URL}.
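As a rough illustration of the idea in the abstract, here is a minimal NumPy sketch of a basis layer whose hidden units are micro networks. This is not the authors' implementation: the micro-network architecture, the untrained random weights, and the Riemann-sum approximation of the L2 inner product are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def micro_net(t, W1, b1, w2, b2):
    """A tiny MLP mapping time points t (shape (m,)) to basis values (shape (m,))."""
    h = np.tanh(np.outer(t, W1) + b1)   # (m, hidden)
    return h @ w2 + b2                  # (m,)

# K basis functions, each its own micro network (random, i.e. untrained, weights)
K, hidden = 4, 8
params = [(rng.normal(size=hidden), rng.normal(size=hidden),
           rng.normal(size=hidden), rng.normal()) for _ in range(K)]

def basis_layer(x_vals, t_grid):
    """Project a discretely observed function onto the learned bases via a
    Riemann-sum approximation of the L2 inner product <b_k, x>."""
    dt = t_grid[1] - t_grid[0]
    return np.array([np.sum(micro_net(t_grid, *p) * x_vals) * dt for p in params])

# Example: a function observed on 100 grid points is reduced to K scores,
# which a downstream network would then map to the prediction target.
t = np.linspace(0.0, 1.0, 100)
x = np.sin(2 * np.pi * t)
scores = basis_layer(x, t)
print(scores.shape)  # (4,)
```

In end-to-end training, the micro-network weights would be learned jointly with the downstream predictor, so the bases adapt to the task rather than being fixed a priori.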


References
Posted Content

Network In Network

TL;DR: With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
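The global average pooling step mentioned in this TL;DR can be sketched in a few lines of NumPy; the shapes and class count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each feature map to one confidence score by averaging over
    the spatial dimensions: (num_classes, H, W) -> (num_classes,)."""
    return feature_maps.mean(axis=(1, 2))

# In NIN, the last conv layer emits one feature map per class; pooling turns
# these maps directly into class scores, with no fully connected layer.
maps = np.zeros((10, 6, 6))   # 10 classes, 6x6 feature maps (illustrative)
maps[3] += 1.0                # make the class-3 map the strongest
scores = global_average_pool(maps)
print(scores.argmax())  # 3
```

Because pooling has no parameters, each class score is tied to one feature map, which is what makes the classification layer easier to interpret and less prone to overfitting.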
Journal ArticleDOI

Functional Data Analysis for Sparse Longitudinal Data

TL;DR: In this article, a nonparametric method is proposed to perform functional principal components analysis for sparse longitudinal data, where the repeated measurements occur at random times, with a random number of repetitions per subject, and are generated by an underlying smooth random (subject-specific) trajectory plus measurement errors.
Proceedings ArticleDOI

Deep learning and the information bottleneck principle

TL;DR: It is argued that the optimal architecture (the number of layers and the features/connections at each layer) is related to the bifurcation points of the information bottleneck tradeoff, namely the relevant compression of the input layer with respect to the output layer.
Book

Applied Functional Data Analysis: Methods and Case Studies

TL;DR: In this book, functional data analysis is presented through case studies, including bone shapes from a paleopathology study as indicators of arthritis, criminal careers from a criminal justice study, the Nondurable Goods Index, and reaction time distributions.
Book

Nonparametric functional data analysis : theory and practice

TL;DR: In this book, a well-adapted space for functional data is defined, a local weighting of functional variables is proposed, and functional nonparametric prediction methodologies are developed together with the relevant asymptotics.
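A classic instance of the functional nonparametric prediction this book covers is the Nadaraya-Watson estimator with a semi-metric between curves; the sketch below uses the discretized L2 distance and a Gaussian kernel, with simulated data and a bandwidth chosen purely for illustration.

```python
import numpy as np

def functional_nw(X_train, y_train, x_new, h):
    """Nadaraya-Watson prediction for a functional covariate: weight each
    training curve by a Gaussian kernel of its L2 distance to x_new."""
    d = np.sqrt(((X_train - x_new) ** 2).mean(axis=1))  # semi-metric to each curve
    w = np.exp(-0.5 * (d / h) ** 2)
    return (w @ y_train) / w.sum()

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 30)
X = rng.normal(size=(100, 1)) * np.sin(2 * np.pi * t)  # 100 simulated curves
y = X.max(axis=1)                                      # scalar response per curve
pred = functional_nw(X, y, X[0], h=0.1)
```

The choice of semi-metric (L2, derivative-based, or projection-based) and bandwidth drives the estimator's behavior, which is the local-weighting theme of the book.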