Cho-Jui Hsieh

Researcher at University of California, Los Angeles

Publications - 355
Citations - 29,087

Cho-Jui Hsieh is an academic researcher from the University of California, Los Angeles. The author has contributed to research on topics including Robustness (computer science) and Computer science. The author has an h-index of 60 and has co-authored 301 publications receiving 22,410 citations. Previous affiliations of Cho-Jui Hsieh include Amazon.com and the University of California, Davis.

Papers
Posted Content

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification.

TL;DR: In this article, a new bound-propagation-based method, called β-CROWN, is proposed that fully encodes per-neuron splits via optimizable parameters and can produce tighter bounds than typical LP verifiers with neuron split constraints.
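To give a concrete feel for bound propagation, here is a minimal sketch of plain interval bound propagation through a small ReLU network. It only illustrates propagating lower/upper bounds layer by layer; it is not the β-CROWN algorithm, which additionally encodes per-neuron split constraints via optimizable β parameters and uses tighter linear relaxations.

```python
# Plain interval bound propagation (IBP) through a toy ReLU network.
# Illustrative only; NOT the beta-CROWN method from the paper.
import numpy as np

def interval_affine(lb, ub, W, b):
    """Propagate an elementwise box [lb, ub] through x -> W @ x + b."""
    W_pos = np.clip(W, 0, None)   # positive part of the weights
    W_neg = np.clip(W, None, 0)   # negative part of the weights
    new_lb = W_pos @ lb + W_neg @ ub + b
    new_ub = W_pos @ ub + W_neg @ lb + b
    return new_lb, new_ub

def interval_relu(lb, ub):
    """ReLU is monotone, so the bounds pass through directly."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Toy 2-layer network and an L_inf input region of radius eps around x0.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
x0, eps = np.array([0.5, -0.2, 0.1]), 0.05

lb, ub = x0 - eps, x0 + eps
lb, ub = interval_affine(lb, ub, W1, b1)
lb, ub = interval_relu(lb, ub)
lb, ub = interval_affine(lb, ub, W2, b2)
print("output lower bounds:", lb)
print("output upper bounds:", ub)
```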
Posted Content

RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection

TL;DR: RandomRooms generates random scene layouts from objects in a synthetic CAD dataset and learns 3D scene representations by applying object-level contrastive learning to two random scenes generated from the same set of synthetic objects.
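The sketch below shows a generic object-level InfoNCE contrastive loss, assuming per-object features have already been extracted from two scenes assembled from the same object set (so object i in scene A matches object i in scene B). It illustrates the general technique only, not the RandomRooms implementation.

```python
# Generic object-level InfoNCE contrastive loss between matched objects
# from two scenes built from the same object set. Illustrative sketch only.
import numpy as np

def info_nce(feats_a, feats_b, temperature=0.1):
    """feats_a, feats_b: (num_objects, dim) features of matched objects."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # pairwise similarities
    labels = np.arange(len(a))                     # object i matches object i
    # cross-entropy of each row's softmax against the diagonal (positive pair)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

rng = np.random.default_rng(0)
scene_a = rng.normal(size=(8, 32))                 # stand-ins for object features
scene_b = scene_a + 0.1 * rng.normal(size=(8, 32))
print("contrastive loss:", info_nce(scene_a, scene_b))
```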
Posted Content

Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding

TL;DR: Leveraging optimal transport theory, a new framework, the Optimal Transport Classifier (OT-Classifier), is proposed, and an objective is derived that minimizes the discrepancy between the distribution of the true label and the distribution of the OT-Classifier's output.
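As background for the discrepancy being minimized, here is a generic sketch of the entropic-regularized Sinkhorn iteration for computing an optimal-transport cost between two discrete distributions. It only illustrates the OT discrepancy idea; the OT-Classifier objective, embedding architecture, and training procedure from the paper are not reproduced here.

```python
# Entropic-regularized Sinkhorn iteration for an approximate OT cost
# between two histograms. Generic illustration, not the paper's objective.
import numpy as np

def sinkhorn_cost(p, q, cost, reg=0.1, n_iters=200):
    """Approximate OT cost between histograms p and q under a cost matrix."""
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iters):         # alternating scaling updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    plan = np.diag(u) @ K @ np.diag(v)
    return float((plan * cost).sum())

# Two toy distributions over 5 classes with a simple |i - j| ground cost.
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
q = np.array([0.3, 0.3, 0.2, 0.1, 0.1])
cost = np.abs(np.subtract.outer(np.arange(5), np.arange(5))).astype(float)
print("approximate OT discrepancy:", sinkhorn_cost(p, q, cost))
```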
Posted Content

The Limit of the Batch Size.

TL;DR: For the first time, this paper scales the batch size on ImageNet to at least an order of magnitude larger than in all previous work, and provides detailed studies of the performance of many state-of-the-art optimization schemes in this setting.
Journal Article

Efficient Contextual Representation Learning With Continuous Outputs

TL;DR: This work revisits the design of the output layer and directly predicts the pre-trained embedding of the target word for a given context; it achieves a 4-fold speedup and eliminates 80% of trainable parameters while maintaining competitive performance on downstream tasks.
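A minimal sketch of the continuous-output idea follows: instead of a softmax over the full vocabulary, the model predicts a dense vector for the target word and is trained to match that word's fixed pre-trained embedding. The shapes and the L2 loss used here are illustrative assumptions, not the paper's exact setup.

```python
# Continuous-output layer sketch: regress onto a frozen pre-trained
# embedding of the target word instead of computing a vocabulary softmax.
# Shapes and loss choice are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim, hidden_dim = 10_000, 300, 512

# Frozen pre-trained word embeddings (the regression targets).
pretrained_emb = rng.normal(size=(vocab_size, emb_dim))

# Trainable projection from the context representation to embedding space.
W_out = rng.normal(size=(hidden_dim, emb_dim)) * 0.01

def continuous_output_loss(context_repr, target_ids):
    """context_repr: (batch, hidden_dim); target_ids: (batch,) word indices."""
    predicted = context_repr @ W_out                  # (batch, emb_dim)
    targets = pretrained_emb[target_ids]              # fixed embeddings
    return float(((predicted - targets) ** 2).mean())

batch_context = rng.normal(size=(4, hidden_dim))      # stand-in encoder output
batch_targets = rng.integers(0, vocab_size, size=4)
print("regression loss:", continuous_output_loss(batch_context, batch_targets))
```

Because the targets are fixed embeddings rather than one-hot labels, the output layer's size no longer grows with the vocabulary, which is where the parameter and speed savings described above come from.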