
Rohith Kuditipudi

Researcher at Duke University

Publications -  8
Citations -  243

Rohith Kuditipudi is an academic researcher from Duke University. The author has contributed to research in the topics Graph (abstract data type) and Artificial neural network. The author has an h-index of 6 and has co-authored 7 publications receiving 175 citations.

Papers
Posted Content

On the Opportunities and Risks of Foundation Models.

Rishi Bommasani, +113 more
- 16 Aug 2021 - 
TL;DR: The authors provide a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications.
Posted Content

Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets

TL;DR: This work gives mathematical explanations for mode connectivity of deep nets, assuming generic properties of well-trained deep nets (such as dropout stability and noise stability), and presents experiments to verify the theory.
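Mode connectivity here refers to distinct low-loss solutions being joined by a low-loss path in parameter space. As an illustrative, hypothetical companion to the abstract (not the paper's construction, which builds piecewise-linear paths through dropout-stable intermediates), a simple empirical check is to evaluate the loss along the straight line between two sets of trained weights; the PyTorch setup and names below are assumptions for the sketch.

```python
import torch

def interpolation_losses(model_a, model_b, loss_fn, data_loader, steps=11):
    """Evaluate the loss at evenly spaced points on the straight line
    between the weights of two trained models with identical architecture."""
    # Clone the state dicts so loading interpolated weights into model_a
    # does not overwrite the tensors we are interpolating between.
    state_a = {k: v.clone() for k, v in model_a.state_dict().items()}
    state_b = {k: v.clone() for k, v in model_b.state_dict().items()}
    losses = []
    for i in range(steps):
        alpha = i / (steps - 1)
        # Per-tensor convex combination: (1 - alpha) * A + alpha * B.
        mixed = {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}
        model_a.load_state_dict(mixed)
        model_a.eval()
        total, count = 0.0, 0
        with torch.no_grad():
            for x, y in data_loader:
                total += loss_fn(model_a(x), y).item() * x.size(0)
                count += x.size(0)
        losses.append(total / count)
    model_a.load_state_dict(state_a)  # restore the original weights
    return losses
```

A flat loss curve along the path is consistent with mode connectivity; a pronounced barrier between the endpoints is the behavior the paper's stability assumptions are meant to rule out along a (more general, piecewise-linear) path.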
Proceedings Article

Learning Two-Layer Neural Networks with Symmetric Inputs

TL;DR: A new algorithm is proposed for learning a two-layer neural network under a general class of input distributions; it is based on the method-of-moments framework and extends several results in tensor decomposition to avoid the complicated non-convex optimization of learning neural networks.
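The method-of-moments framework mentioned here recovers parameters from low-order statistics of the data rather than from gradient-based training. Purely as an illustrative sketch (the specific moments and recovery steps in the paper are more involved and exploit the symmetry of the input distribution), the hypothetical NumPy helper below estimates output-weighted input moments from samples of a synthetic two-layer network.

```python
import numpy as np

def output_weighted_moments(x, y):
    """Estimate low-order, output-weighted input moments from samples.

    x: (n, d) array of inputs; y: (n,) array of network outputs.
    Returns estimates of E[y], E[y x], and E[y x x^T] -- the kind of
    statistics a method-of-moments estimator recovers parameters from.
    """
    m0 = y.mean()                                                    # E[y]
    m1 = (y[:, None] * x).mean(axis=0)                               # E[y x], shape (d,)
    m2 = (y[:, None, None] * x[:, :, None] * x[:, None, :]).mean(axis=0)  # E[y x x^T], shape (d, d)
    return m0, m1, m2

# Example: moments of a small synthetic two-layer network with Gaussian (symmetric) inputs.
rng = np.random.default_rng(0)
W, a = rng.standard_normal((5, 10)), rng.standard_normal(5)
x = rng.standard_normal((100_000, 10))
y = np.tanh(x @ W.T) @ a          # two-layer net: y = a^T tanh(W x)
m0, m1, m2 = output_weighted_moments(x, y)
```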
Proceedings Article

Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets

TL;DR: In this article, the authors give mathematical explanations for mode connectivity in deep networks, assuming generic properties (such as dropout stability and noise stability) of well-trained deep nets, which have previously been identified as part of understanding the generalization properties of deep nets.
Posted Content

Learning Two-layer Neural Networks with Symmetric Inputs.

TL;DR: In this article, the authors propose a method for learning a two-layer neural network under a general class of input distributions; it is based on the method-of-moments framework and extends several results in tensor decomposition.