Paolo Frasconi

Researcher at University of Florence

Publications: 180
Citations: 15,516

Paolo Frasconi is an academic researcher at the University of Florence. His research focuses on artificial neural networks and recurrent neural networks. He has an h-index of 43 and has co-authored 178 publications receiving 13,184 citations. His previous affiliations include Università Campus Bio-Medico and Katholieke Universiteit Leuven.

Papers
Journal Article

Learning long-term dependencies with gradient descent is difficult

TL;DR: This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and it exposes a trade-off between efficient learning by gradient descent and latching onto information for long periods.
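
As a rough illustration of the mechanism (a minimal numpy toy of my own, not code from the paper): the gradient of the hidden state at time T with respect to the state at time 0 is a product of T per-step Jacobians, so its norm tends to shrink (or explode) geometrically with T.

```python
# Minimal sketch: in an RNN h_t = tanh(W h_{t-1}), the Jacobian
# d h_T / d h_0 is a product of T per-step Jacobians, so its norm
# decays roughly geometrically when W is contractive.
import numpy as np

rng = np.random.default_rng(0)
n = 20
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))  # small spectral radius
h = rng.normal(size=n)

J = np.eye(n)  # accumulated Jacobian d h_T / d h_0
for t in range(1, 51):
    h = np.tanh(W @ h)
    J = (np.diag(1.0 - h**2) @ W) @ J  # d tanh(x)/dx = 1 - tanh(x)^2
    if t % 10 == 0:
        print(f"T={t:3d}  ||dh_T/dh_0|| = {np.linalg.norm(J):.3e}")
```
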
Journal Article

Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning

TL;DR: SARIMA coupled with a Kalman filter is the most accurate model; however, the proposed seasonal support vector regressor turns out to be highly competitive when forecasting during the most congested periods.
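
A minimal sketch of the seasonal time-series side (the synthetic data and model orders below are my assumptions, not the paper's setup); note that statsmodels estimates SARIMAX via a Kalman-filter state-space representation:

```python
# Minimal sketch: fit a SARIMA model to synthetic seasonal "traffic"
# counts and forecast one seasonal cycle ahead.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
s = 24  # assumed seasonality: hourly counts with a daily cycle
t = np.arange(14 * s)
y = 100 + 30 * np.sin(2 * np.pi * t / s) + rng.normal(scale=5, size=t.size)

model = SARIMAX(y[:-s], order=(1, 0, 1), seasonal_order=(1, 0, 1, s))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=s)
print("MAE on the held-out day:", np.mean(np.abs(forecast - y[-s:])))
```
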
Journal Article

Exploiting the past and the future in protein secondary structure prediction.

TL;DR: A family of novel architectures that can learn to make predictions based on variable ranges of dependencies is introduced; these extend recurrent neural networks with non-causal bidirectional dynamics that capture both upstream and downstream information.
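
A minimal sketch of the bidirectional idea (shapes and wiring are illustrative assumptions, not the paper's architecture): a forward recurrence summarizes upstream context, a backward recurrence summarizes downstream context, and both feed the prediction at each position.

```python
# Minimal sketch: a bidirectional recurrent pass labels every position
# using both past (forward) and future (backward) context.
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h, n_classes = 12, 20, 16, 3  # e.g. 3 secondary-structure classes
x = rng.normal(size=(T, d_in))
Wf, Uf = rng.normal(size=(d_h, d_in)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1
Wb, Ub = rng.normal(size=(d_h, d_in)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1
V = rng.normal(size=(n_classes, 2 * d_h)) * 0.1

hf = np.zeros((T, d_h))  # forward (causal) states
hb = np.zeros((T, d_h))  # backward (non-causal) states
for t in range(T):
    hf[t] = np.tanh(Wf @ x[t] + Uf @ (hf[t - 1] if t > 0 else np.zeros(d_h)))
for t in reversed(range(T)):
    hb[t] = np.tanh(Wb @ x[t] + Ub @ (hb[t + 1] if t < T - 1 else np.zeros(d_h)))

logits = np.concatenate([hf, hb], axis=1) @ V.T  # shape (T, n_classes)
print("per-position class predictions:", logits.argmax(axis=1))
```
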
Journal Article

A general framework for adaptive processing of data structures

TL;DR: The framework described in this paper is an attempt to unify adaptive models such as artificial neural nets and belief nets for the problem of processing structured information, in which relations between data variables are expressed by directed acyclic graphs and both numerical and categorical values coexist.
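
A minimal sketch of recursive processing over a DAG (the toy graph and weight-sharing scheme are my own illustration, not the paper's formalism): each node's state is computed from its label and its children's states, visiting children before parents.

```python
# Minimal sketch: recursive state computation over a DAG.
import numpy as np

rng = np.random.default_rng(0)
d_label, d_state, max_children = 4, 8, 2
W = rng.normal(size=(d_state, d_label)) * 0.3
U = [rng.normal(size=(d_state, d_state)) * 0.3 for _ in range(max_children)]

# Toy DAG: node -> list of children (node 0 is the root, node 3 is shared).
children = {0: [1, 2], 1: [3], 2: [3], 3: []}
labels = {v: rng.normal(size=d_label) for v in children}

state = {}
# Node ids here happen to follow a topological order, so reverse-sorted
# iteration visits children first; a general DAG needs a topological sort.
for v in sorted(children, reverse=True):
    acc = W @ labels[v]
    for i, c in enumerate(children[v]):
        acc += U[i] @ state[c]  # position-dependent child weight
    state[v] = np.tanh(acc)
print("root representation:", state[0][:4])
```
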
Posted Content

Bilevel Programming for Hyperparameter Optimization and Meta-Learning

TL;DR: A framework based on bilevel programming that unifies gradient-based hyperparameter optimization and meta-learning is introduced, and it is shown that an approximate version of the bilevel problem can be solved by taking into explicit account the optimization dynamics for the inner objective.
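
A minimal scalar sketch of that idea (my own toy, not the paper's algorithm or code): unroll T gradient steps on the inner (training) objective, differentiate the outer (validation) objective through those steps to obtain a hypergradient, and descend on the hyperparameter.

```python
# Inner problem:  min_w (w - 1)^2 + lam * w^2   (lam is the hyperparameter)
# Outer problem:  min_lam (w_T(lam) - 0.5)^2    (w_T = result of T inner steps)
eta, T = 0.1, 50
lam = 3.0
for outer_step in range(300):
    w, dw_dlam = 0.0, 0.0  # forward-mode accumulation of dw/dlam
    for _ in range(T):
        # Inner update: w <- w - eta * d/dw [(w - 1)^2 + lam * w^2].
        # Differentiate the update rule itself before applying it (uses old w).
        dw_dlam = dw_dlam * (1 - 2 * eta * (1 + lam)) - 2 * eta * w
        w = w - eta * (2 * (w - 1) + 2 * lam * w)
    hypergrad = 2 * (w - 0.5) * dw_dlam  # chain rule through w_T
    lam = max(0.0, lam - 1.0 * hypergrad)  # outer gradient step
# Inner optimum is w* = 1/(1 + lam), so the outer optimum sits at lam = 1.
print(f"lam ~ {lam:.3f}, w_T ~ {w:.3f}")
```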