scispace - formally typeset

Cho-Jui Hsieh

Researcher at University of California, Los Angeles

Publications: 355
Citations: 29,087

Cho-Jui Hsieh is an academic researcher at the University of California, Los Angeles. He has contributed to research in topics including Robustness (computer science) and Computer science. He has an h-index of 60 and has co-authored 301 publications receiving 22,410 citations. Previous affiliations of Cho-Jui Hsieh include Amazon.com and the University of California, Davis.

Papers
Proceedings Article

Robustness Verification of Tree-based Models

TL;DR: Studies the robustness verification problem for tree-based models, including random forests (RF) and gradient boosted decision trees (GBDT); gives a simple linear-time algorithm for verifying a single tree and shows that, for tree ensembles, the verification problem can be cast as a max-clique problem on a multipartite boxicity graph.
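The linear-time single-tree case can be sketched as follows: under an L∞ perturbation of radius ε around an input x, a branch of the tree is reachable iff the perturbation box overlaps that branch's half-space, so one traversal collects all reachable leaves. This is a minimal illustration of the idea, not the paper's implementation; the tree encoding and function names are invented:

```python
# A decision-tree node is either a leaf label, or a tuple
# (feature, threshold, left_subtree, right_subtree) meaning
# "go left if x[feature] <= threshold, else go right".

def reachable_leaves(node, x, eps):
    """Return the set of leaf labels reachable within the eps-box around x.

    Each tree edge is checked once, so the cost is linear in tree size.
    """
    if not isinstance(node, tuple):        # leaf: node is its label
        return {node}
    feat, thr, left, right = node
    lo, hi = x[feat] - eps, x[feat] + eps  # interval for this feature
    leaves = set()
    if lo <= thr:                          # box intersects x[feat] <= thr
        leaves |= reachable_leaves(left, x, eps)
    if hi > thr:                           # box intersects x[feat] > thr
        leaves |= reachable_leaves(right, x, eps)
    return leaves

def is_robust(tree, x, eps, label):
    """The prediction is certified robust iff only the correct leaf label
    is reachable anywhere in the perturbation box."""
    return reachable_leaves(tree, x, eps) == {label}
```

For an ensemble, per-tree reachable leaves become the vertex groups of the multipartite graph mentioned in the TL;DR, which this sketch does not cover.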
Proceedings Article

Sparse Linear Programming via primal and dual augmented coordinate descent

TL;DR: Investigates a general LP algorithm based on combining the Augmented Lagrangian method with Coordinate Descent (AL-CD), achieving an iteration complexity of O((log(1/∊))²) with O(nnz(A)) cost per iteration; this yields a tractable alternative to standard LP solvers for large-scale problems with sparse solutions and nnz(A) ≪ mn.
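The AL-CD idea can be sketched on a tiny dense LP in standard form, min cᵀx s.t. Ax = b, x ≥ 0: the augmented Lagrangian subproblem in x is a non-negative quadratic program, solved here by cyclic coordinate descent, followed by a dual update on y. This is a toy illustration under those assumptions, not the paper's sparse implementation, and all names and parameters are invented:

```python
import numpy as np

def al_cd_lp(c, A, b, rho=1.0, outer=200, inner=50):
    """Toy AL-CD sketch for: min c^T x  s.t.  A x = b,  x >= 0.

    Inner loop: coordinate descent on the augmented Lagrangian
        c^T x + y^T (A x - b) + (rho/2) ||A x - b||^2  over x >= 0.
    Outer loop: multiplier update y <- y + rho (A x - b).
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    col_sq = (A * A).sum(axis=0)           # ||A_j||^2 per column
    for _ in range(outer):
        for _ in range(inner):
            r = A @ x - b                  # current equality residual
            for j in range(n):
                # Exact minimizer of the 1-D quadratic in x_j, projected to >= 0.
                g = c[j] + A[:, j] @ y + rho * (A[:, j] @ r)
                new = max(0.0, x[j] - g / (rho * col_sq[j]))
                r += A[:, j] * (new - x[j])
                x[j] = new
        y += rho * (A @ x - b)             # dual ascent on the multipliers
    return x
```

A dense residual update costs O(m) per coordinate here; the paper's point is that with sparse A the per-sweep cost drops to O(nnz(A)).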
Journal ArticleDOI

Using Side Information to Reliably Learn Low-Rank Matrices from Missing and Corrupted Observations

TL;DR: Proposes a general model that exploits side information to better learn low-rank matrices from missing and corrupted observations, and shows that the model further applies to several popular scenarios such as matrix completion and robust PCA.
Posted Content

Efficient Neural Interaction Function Search for Collaborative Filtering

TL;DR: Experimental results demonstrate that the proposed method is much more efficient than popular AutoML approaches, obtains much better prediction performance than state-of-the-art CF approaches, and discovers distinct interaction functions (IFCs) for different data sets and tasks.
Book ChapterDOI

Improved Adversarial Training via Learned Optimizer

TL;DR: Empirically demonstrates that the commonly used PGD attack may not be optimal for the inner maximization, and that an improved inner optimizer can lead to a more robust model.
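The PGD inner maximization that this TL;DR refers to can be sketched as follows: repeatedly step the perturbation δ in the sign of the loss gradient and project back onto the L∞ ball of radius ε. This is a minimal NumPy illustration with an invented function name, where `loss_grad` stands in for a model's loss gradient:

```python
import numpy as np

def pgd_inner_max(loss_grad, x, eps, alpha=0.01, steps=40):
    """Approximate  argmax_{||delta||_inf <= eps}  loss(x + delta)
    by projected (signed) gradient ascent on delta."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = loss_grad(x + delta)                       # gradient of the loss
        delta = np.clip(delta + alpha * np.sign(g),    # signed ascent step
                        -eps, eps)                     # project onto L-inf ball
    return x + delta
```

On a toy loss L(z) = z², whose gradient is 2z, starting from x = 0.5 with ε = 0.2, the perturbation saturates at +ε, returning 0.7. The paper's observation is that this fixed sign-step schedule need not be the best inner optimizer, motivating a learned one.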