
Nitish Srivastava

Researcher at Apple Inc.

Publications - 44
Citations - 47,724

Nitish Srivastava is an academic researcher at Apple Inc. The author has contributed to research on topics including Generative model and Boltzmann machine. The author has an h-index of 22 and has co-authored 41 publications receiving 40,184 citations. Previous affiliations of Nitish Srivastava include the Indian Institute of Technology Kanpur and Cornell University.

Papers
Proceedings Article

Accelerating Face Detection on Programmable SoC Using C-Based Synthesis

TL;DR: This paper presents a case study of accelerating face detection based on the Viola-Jones algorithm on a programmable SoC using a C-based HLS flow; the performance and quality of results are comparable to those of many traditional RTL implementations.
Posted Content

On the generalization of learning-based 3D reconstruction

TL;DR: It is found that three inductive biases impact performance: the spatial extent of the encoder, the use of the underlying geometry of the scene to describe point features, and the mechanism to aggregate information from multiple views.
Patent

Partially shared neural networks for multiple tasks

TL;DR: In this patent, a neural network is organized into layers corresponding to stages of inference, with a common portion shared by both inference tasks and separate first and second portions dedicated to the first and second tasks, respectively.
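
A minimal PyTorch sketch of the shared-trunk idea summarized above; the module names, layer sizes, and two-head layout are illustrative assumptions, not the patent's actual architecture:

```python
import torch
import torch.nn as nn

class PartiallySharedNet(nn.Module):
    """Illustrative two-task network: a common trunk feeds two task-specific heads."""

    def __init__(self, in_dim=64, hidden_dim=128, out_dim_a=10, out_dim_b=5):
        super().__init__()
        # Common portion: layers whose activations are reused by both tasks.
        self.common = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # First portion: layers dedicated to the first inference task.
        self.head_a = nn.Linear(hidden_dim, out_dim_a)
        # Second portion: layers dedicated to the second inference task.
        self.head_b = nn.Linear(hidden_dim, out_dim_b)

    def forward(self, x):
        shared = self.common(x)  # computed once, shared by both tasks
        return self.head_a(shared), self.head_b(shared)

# Usage: one forward pass yields outputs for both inference tasks.
net = PartiallySharedNet()
out_a, out_b = net(torch.randn(8, 64))
```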
Proceedings Article

Modeling Inter-individual Variability in Sugar Beet Populations

TL;DR: In this article, a mathematical framework is introduced to integrate the different sources of variability in plant growth models, based on the classical method of Taylor series expansion, which allows uncertainty to be propagated through the dynamic growth system and the approximate means and standard deviations of the model outputs to be computed.
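
A small sketch of first- and second-order Taylor-series propagation of a mean and standard deviation through a scalar function; the finite-difference derivatives and the toy growth curve below are illustrative assumptions, not the paper's sugar beet growth model:

```python
import numpy as np

def propagate_uncertainty(f, mu, sigma, eps=1e-5):
    """Approximate mean and std of f(X) for X with mean mu and std sigma,
    using a Taylor expansion of f around mu (generic sketch)."""
    # Finite-difference estimates of the first and second derivatives at mu.
    d1 = (f(mu + eps) - f(mu - eps)) / (2 * eps)
    d2 = (f(mu + eps) - 2 * f(mu) + f(mu - eps)) / eps**2
    mean_y = f(mu) + 0.5 * d2 * sigma**2  # second-order mean correction
    std_y = abs(d1) * sigma               # first-order standard deviation
    return mean_y, std_y

# Toy example: an individual "growth" response with parameter variability.
growth = lambda p: 10.0 * (1.0 - np.exp(-p))
print(propagate_uncertainty(growth, mu=1.2, sigma=0.2))
```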
Patent

Inspection neural network for assessing neural network reliability

TL;DR: In this patent, an inspection neural network (INN) is used to inspect data generated during an inference process of a primary neural network (PNN) and to produce an indication of reliability for an output generated by the PNN.
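
A hedged PyTorch sketch of the inspection idea: a small secondary network scores intermediate data produced by a primary network during inference. The specific layers, feature sizes, and sigmoid reliability score are assumptions made for illustration, not the patented design:

```python
import torch
import torch.nn as nn

class PrimaryNet(nn.Module):
    """Stand-in primary neural network (PNN); the real one could be any model."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.classifier = nn.Linear(64, 10)

    def forward(self, x):
        feats = self.backbone(x)  # intermediate data produced during inference
        return self.classifier(feats), feats

class InspectionNet(nn.Module):
    """Illustrative INN: maps the PNN's intermediate features to a reliability score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.score(feats)

pnn, inn = PrimaryNet(), InspectionNet()
x = torch.randn(4, 32)
logits, feats = pnn(x)             # PNN inference
reliability = inn(feats.detach())  # INN inspects inference-time data
```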