Aran Nayebi

Researcher at Stanford University

Publications: 47
Citations: 1330

Aran Nayebi is an academic researcher from Stanford University. The author has contributed to research in the topics of Artificial neural network and Convolutional neural network. The author has an h-index of 13 and has co-authored 42 publications receiving 858 citations. Previous affiliations of Aran Nayebi include Adobe Systems.

Papers
Proceedings Article

Deep Learning Models of the Retinal Response to Natural Scenes

TL;DR: In this paper, deep convolutional neural networks (CNNs) are used to model the retinal response to natural scenes; they capture responses nearly to within the variability of a cell's response and are markedly more accurate than linear-nonlinear (LN) models and generalized linear models (GLMs).
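For illustration, below is a minimal sketch of the two model classes being compared: an LN model and a small CNN, each mapping a spatiotemporal stimulus clip to predicted firing rates. All shapes, layer sizes, and cell counts are assumptions for the sketch, not the paper's architecture.

```python
# Minimal sketch (illustrative shapes, not the paper's architecture) of an
# LN model and a small CNN predicting retinal ganglion cell firing rates.
import torch
import torch.nn as nn

T, H, W = 40, 50, 50          # hypothetical stimulus clip: 40 frames of 50x50 pixels
N_CELLS = 10                  # hypothetical number of recorded ganglion cells

class LNModel(nn.Module):
    """Linear filter over the full clip followed by a pointwise nonlinearity."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(T * H * W, N_CELLS)
    def forward(self, x):                  # x: (batch, T, H, W)
        return nn.functional.softplus(self.linear(x.flatten(1)))

class RetinaCNN(nn.Module):
    """Two convolutional layers plus a linear readout, treating time as channels."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(T, 8, kernel_size=15), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=9), nn.ReLU(),
        )
        self.readout = nn.Linear(16 * 28 * 28, N_CELLS)
    def forward(self, x):                  # x: (batch, T, H, W)
        return nn.functional.softplus(self.readout(self.features(x).flatten(1)))

# Both models would be fit to recorded spike counts, e.g. with a Poisson loss.
stim = torch.randn(4, T, H, W)             # placeholder stimulus batch
rates = RetinaCNN()(stim)                  # predicted rates, shape (4, N_CELLS)
```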
Posted Content

CORnet: Modeling the Neural Mechanisms of Core Object Recognition

TL;DR: The current best ANN model derived from this approach (CORnet-S) is among the top models on Brain-Score, a composite benchmark for comparing models to the brain, but is simpler than other deep ANNs in terms of the number of convolutions performed along the longest path of information processing in the model.
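A rough sketch of the "shallow but recurrent" idea follows: a small stack of convolutions whose weights are reused across several time steps, so effective depth grows without lengthening the feedforward path. The layer sizes and number of time steps below are illustrative assumptions, not the CORnet-S specification.

```python
# Sketch of a recurrent "area" block: the same convolution is applied on every
# pass, so unrolling over time adds depth without adding parameters.
import torch
import torch.nn as nn

class RecurrentArea(nn.Module):
    def __init__(self, in_ch, out_ch, times=2):
        super().__init__()
        self.times = times
        self.input_conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.conv = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)
    def forward(self, x):
        x = self.input_conv(x)
        state = x
        for _ in range(self.times):        # same weights reused on every time step
            state = torch.relu(self.norm(self.conv(state)) + x)
        return state

# Stacking a few such areas keeps the longest feedforward path short even though
# the computation is unrolled over time (area names are only suggestive).
model = nn.Sequential(
    RecurrentArea(3, 32, times=1),    # "V1"-like stage
    RecurrentArea(32, 64, times=2),   # "V2"-like stage
    RecurrentArea(64, 128, times=2),  # "V4"-like stage
    RecurrentArea(128, 256, times=2), # "IT"-like stage
)
features = model(torch.randn(1, 3, 64, 64))
```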
Journal Article

Unsupervised neural network models of the ventral visual stream

TL;DR: This article shows that neural network models trained with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today's best supervised methods, and that the mapping of neural network hidden layers to cortical areas is neuroanatomically consistent across the ventral stream.
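The "neural prediction accuracy" comparison typically works by linearly mapping a model layer's activations to recorded responses and scoring held-out predictions. The sketch below shows that evaluation logic with placeholder data; the array shapes, ridge penalty, and summary statistic are assumptions, not the paper's exact protocol.

```python
# Sketch of a neural-predictivity evaluation: fit a regularized linear map from
# model-layer activations to recorded responses, then score held-out correlation.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

n_train, n_test, n_features, n_neurons = 800, 200, 512, 100
rng = np.random.default_rng(0)

# Placeholder arrays standing in for model activations and neural recordings.
feats_train = rng.standard_normal((n_train, n_features))
feats_test = rng.standard_normal((n_test, n_features))
resp_train = rng.standard_normal((n_train, n_neurons))
resp_test = rng.standard_normal((n_test, n_neurons))

mapping = Ridge(alpha=1.0).fit(feats_train, resp_train)
pred = mapping.predict(feats_test)

# Per-neuron correlation between predicted and held-out responses; the median
# (often noise-corrected) across neurons is a typical summary score.
scores = [pearsonr(pred[:, i], resp_test[:, i])[0] for i in range(n_neurons)]
print(f"median test correlation: {np.median(scores):.3f}")
```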
Posted Content

Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs

TL;DR: CORnet-S is established: a compact, shallow recurrent ANN with four anatomically mapped areas and recurrent connectivity, developed under the guidance of Brain-Score, a new large-scale composite of neural and behavioral benchmarks for quantifying the functional fidelity of models of the primate ventral visual stream.
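As a toy illustration of how a composite benchmark of this kind combines sub-scores, the snippet below averages ceiling-normalized scores across neural and behavioral benchmarks. The benchmark names and numbers are made up and do not come from Brain-Score.

```python
# Toy sketch: each benchmark yields a raw score normalized by its ceiling
# (e.g. a neural noise ceiling); the composite is the mean of normalized scores.
raw_scores = {"V4-neural": 0.55, "IT-neural": 0.52, "behavior": 0.40}
ceilings = {"V4-neural": 0.80, "IT-neural": 0.75, "behavior": 0.50}

normalized = {name: raw_scores[name] / ceilings[name] for name in raw_scores}
composite = sum(normalized.values()) / len(normalized)
print(normalized, f"composite: {composite:.3f}")
```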
Posted Content

Unsupervised Neural Network Models of the Ventral Visual Stream

TL;DR: It is found that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods.