Penghang Yin

Researcher at University of California, Los Angeles

Publications - 51
Citations - 1563

Penghang Yin is an academic researcher from the University of California, Los Angeles. He has contributed to research on topics including artificial neural networks and convex functions, has an h-index of 18, and has co-authored 49 publications receiving 1135 citations. His previous affiliations include the National University of Defense Technology and the State University of New York System.

Papers
Journal Article (DOI)

Minimization of $\ell_{1-2}$ for Compressed Sensing

TL;DR: A sparsity-oriented simulated annealing procedure with non-Gaussian random perturbations is proposed, and almost sure convergence of the combined algorithm (DCASA) to a global minimum is proved.
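For orientation, a minimal statement of the $\ell_{1-2}$ compressed-sensing model referenced in the title; the constrained and penalized forms below are the standard formulations, while the parameter $\lambda$ and the coupling with simulated annealing are as described in the paper itself:

```latex
% Standard \ell_{1-2} models for compressed sensing (constrained and penalized forms)
\min_{x \in \mathbb{R}^n} \ \|x\|_1 - \|x\|_2 \quad \text{subject to} \quad Ax = b,
\qquad
\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \big( \|x\|_1 - \|x\|_2 \big).
```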
Posted Content

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

TL;DR: In this paper, the authors consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data and show that a poor choice of STE leads to instability of the training algorithm near certain local minima.
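As a rough illustration of the straight-through estimator (STE) idea this paper analyzes, here is a minimal PyTorch-style sketch; the binarization rule and the clipped-identity surrogate derivative are illustrative assumptions, not the paper's exact construction:

```python
import torch

class BinaryActSTE(torch.autograd.Function):
    """Straight-through estimator sketch: the forward pass applies a hard
    binarization whose true derivative is zero almost everywhere, while the
    backward pass substitutes a surrogate (clipped-identity) derivative."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()  # hard 0/1 activation

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Surrogate gradient: pass the incoming gradient through only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()

# Usage: y = BinaryActSTE.apply(pre_activation)
```

The choice of surrogate is the crux: as the TL;DR notes, a poorly chosen STE can destabilize training near certain local minima.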
Journal Article (DOI)

Computing Sparse Representation in a Highly Coherent Dictionary Based on Difference of $L_1$ and $L_2$

TL;DR: It is found numerically that the DCA method outperforms many existing algorithms for other nonconvex metrics and consistently produces better results than $L_1$ minimization, especially when the sensing matrix is ill-conditioned.
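For readers unfamiliar with DCA, a sketch of the difference-of-convex linearization it relies on, assuming the penalized $\ell_{1-2}$ model shown earlier; at each iteration the concave part $-\|x\|_2$ is replaced by its linearization at the current iterate $x^k$ (assumed nonzero), leaving a convex $\ell_1$-regularized subproblem:

```latex
% One DCA iteration for \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda(\|x\|_1 - \|x\|_2)
x^{k+1} \in \arg\min_{x} \ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1
          - \lambda \left\langle \frac{x^k}{\|x^k\|_2},\, x \right\rangle .
```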
Journal Article (DOI)

SCRABBLE: single-cell RNA-seq imputation constrained by bulk RNA-seq data

TL;DR: It is demonstrated that SCRABBLE outperforms existing methods in recovering dropout events, capturing the true distribution of gene expression across cells, and preserving gene-gene and cell-cell relationships in the data.
Journal Article (DOI)

Ratio and difference of $l_1$ and $l_2$ norms and sparse representation with coherent dictionaries

TL;DR: The mathematical theory of the sparsity-promoting properties of the ratio metric is studied in the context of basis pursuit via over-complete dictionaries, and sequentially convex algorithms are introduced to illustrate how the ratio and difference penalties are computed to produce both stable and sparse solutions.
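To make the two nonconvex sparsity penalties concrete, a small NumPy sketch (with hypothetical helper names) comparing the ratio $\|x\|_1/\|x\|_2$ and the difference $\|x\|_1-\|x\|_2$; both attain their minimum on 1-sparse vectors and grow as the support spreads out:

```python
import numpy as np

def l1_over_l2(x):
    """Ratio penalty ||x||_1 / ||x||_2 (defined for x != 0)."""
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

def l1_minus_l2(x):
    """Difference penalty ||x||_1 - ||x||_2 (zero exactly on 1-sparse vectors)."""
    return np.linalg.norm(x, 1) - np.linalg.norm(x, 2)

sparse = np.array([3.0, 0.0, 0.0, 0.0])  # 1-sparse vector
dense  = np.array([1.0, 1.0, 1.0, 1.0])  # fully dense vector

print(l1_over_l2(sparse), l1_minus_l2(sparse))  # 1.0 0.0
print(l1_over_l2(dense),  l1_minus_l2(dense))   # 2.0 2.0
```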