Jie Chen

Researcher at Beijing Institute of Technology

Publications: 39
Citations: 2055

Jie Chen is an academic researcher from the Beijing Institute of Technology. The author has contributed to research on topics including phase retrieval and Lanczos resampling, has an h-index of 19, and has co-authored 35 publications receiving 1690 citations. Previous affiliations of Jie Chen include the University of Minnesota and Tongji University.

Papers
Journal ArticleDOI

Dense Subgraph Extraction with Application to Community Detection

TL;DR: This paper presents a method for identifying a set of dense subgraphs of a given sparse graph. The method accounts for the fact that not every node in the network needs to belong to a community, and it does not require specifying the number of clusters in advance.
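The paper's own algorithm is not reproduced here. As a generic illustration of dense-subgraph extraction that likewise leaves some nodes unassigned and needs no preset cluster count, the sketch below uses a different, classical technique: greedy peeling (Charikar's heuristic), which repeatedly removes the minimum-degree node and keeps the densest intermediate subgraph.

```python
import random

def densest_subgraph(adj):
    """Greedy peeling: repeatedly delete the minimum-degree node and
    remember the intermediate subgraph of highest density (edges per
    node). Returns (node set, density)."""
    g = {u: set(vs) for u, vs in adj.items()}
    edges = sum(len(vs) for vs in g.values()) // 2
    cur = set(g)
    best_nodes, best_density = set(cur), edges / len(cur)
    while len(cur) > 1:
        u = min(cur, key=lambda x: len(g[x]))  # minimum-degree node
        for w in g[u]:
            g[w].discard(u)
        edges -= len(g[u])
        cur.remove(u)
        density = edges / len(cur)
        if density > best_density:
            best_nodes, best_density = set(cur), density
    return best_nodes, best_density

# Toy example: a planted 10-clique inside a sparse random background graph.
random.seed(0)
n, clique = 50, range(10)
adj = {u: set() for u in range(n)}
for u in clique:
    for v in clique:
        if u != v:
            adj[u].add(v)
for u in range(n):
    for v in range(u + 1, n):
        if v not in adj[u] and random.random() < 0.05:
            adj[u].add(v)
            adj[v].add(u)

nodes, density = densest_subgraph(adj)
```

On this toy graph the peel recovers the planted clique, since the sparse background nodes are removed first; nodes outside the dense core are simply left out, with no cluster count supplied.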
Journal Article

Fast Approximate kNN Graph Construction for High Dimensional Data via Recursive Lanczos Bisection

TL;DR: Two divide-and-conquer methods are proposed for computing an approximate kNN graph in Θ(dn^t) time for high-dimensional data (large d), followed by an additional refinement step that improves the accuracy of the graph.
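The general divide-and-conquer shape can be sketched as follows. This is an assumption-laden toy, not the paper's method: the split here uses the dominant principal direction via SVD as a stand-in for the paper's Lanczos-based spectral bisection, and a fixed fractional overlap between the two halves plays the role of the glue that keeps neighbors straddling the cut.

```python
import numpy as np

def _knn_brute(X, idx, k, cand):
    """Exact kNN within the subset idx; merge candidate distances."""
    P = X[idx]
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    for a, i in enumerate(idx):
        for b in np.argsort(D[a])[1 : k + 1]:  # position 0 is the point itself
            cand[i][idx[b]] = D[a, b]

def _divide(X, idx, k, cand, leaf, overlap):
    if len(idx) <= leaf:
        _knn_brute(X, idx, k, cand)
        return
    # Split along the dominant principal direction (stand-in for the
    # paper's Lanczos bisection), keeping an overlap so that true
    # neighbors near the cut can still be found.
    P = X[idx] - X[idx].mean(axis=0)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    order = np.argsort(P @ Vt[0])
    half, ov = len(idx) // 2, int(overlap * len(idx) / 2)
    _divide(X, idx[order[: half + ov]], k, cand, leaf, overlap)
    _divide(X, idx[order[half - ov :]], k, cand, leaf, overlap)

def approx_knn(X, k, leaf=50, overlap=0.2):
    """Approximate kNN graph: node index -> list of k nearest candidates."""
    cand = {i: {} for i in range(len(X))}
    _divide(X, np.arange(len(X)), k, cand, leaf, overlap)
    return {i: sorted(nb, key=nb.get)[:k] for i, nb in cand.items()}

# Usage: 300 random points in 10 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
graph = approx_knn(X, k=5)
```

Brute force is only ever run on leaves of bounded size, which is where the subquadratic cost comes from; a larger overlap trades running time for accuracy.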
Journal ArticleDOI

Trace optimization and eigenproblems in dimension reduction methods

TL;DR: All of the eigenvalue problems solved in the context of explicit linear projections can be viewed as projected analogues of the nonlinear, implicit projections; kernels serve as a means of unifying the linear and nonlinear methods.
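The trace-optimization problem underlying these linear methods is max tr(VᵀAV) over orthonormal V, whose maximizer is given by the top eigenvectors of the symmetric matrix A. A minimal numerical illustration of that standard fact (not code from the paper):

```python
import numpy as np

# max over orthonormal V (n x d) of tr(V^T A V) is attained by the
# top-d eigenvectors of symmetric A, and equals the sum of the d
# largest eigenvalues.
rng = np.random.default_rng(0)
n, d = 50, 3
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                        # symmetric test matrix

eigvals, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order
V_opt = eigvecs[:, -d:]                  # top-d eigenvectors
t_opt = np.trace(V_opt.T @ A @ V_opt)    # = sum of the d largest eigenvalues

# Any other orthonormal V gives a smaller (or equal) trace.
Q, _ = np.linalg.qr(rng.standard_normal((n, d)))
t_rand = np.trace(Q.T @ A @ Q)
```

Swapping A for the matrices arising in PCA, LDA, or Laplacian eigenmaps recovers the respective methods, which is the unifying view the paper develops.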
Journal ArticleDOI

Fast Estimation of $tr(f(A))$ via Stochastic Lanczos Quadrature

TL;DR: An inexpensive method is proposed to estimate the trace of f(A) for cases where f is analytic inside a closed interval and A is a symmetric positive definite matrix. The method combines three key ingredients: the stochastic trace estimator, Gaussian quadrature, and the Lanczos algorithm.
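The combination of the three ingredients can be sketched as follows: Rademacher probe vectors give an unbiased trace estimate, and each quadratic form vᵀf(A)v is approximated by a Gauss quadrature rule whose nodes and weights come from the Lanczos tridiagonal matrix. A minimal sketch under those standard definitions, not the authors' implementation:

```python
import numpy as np

def slq_trace(A, f, num_probes=30, lanczos_steps=40, seed=0):
    """Estimate tr(f(A)) for symmetric positive definite A by combining
    a Rademacher stochastic trace estimator with Gauss quadrature rules
    obtained from the Lanczos algorithm."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        v /= np.linalg.norm(v)
        # Lanczos tridiagonalization started from the probe vector.
        alphas, betas = [], []
        q_prev, q, beta = np.zeros(n), v, 0.0
        for _ in range(lanczos_steps):
            w = A @ q - beta * q_prev
            alpha = float(q @ w)
            w -= alpha * q
            alphas.append(alpha)
            beta = float(np.linalg.norm(w))
            if beta < 1e-12:                  # invariant subspace found
                break
            betas.append(beta)
            q_prev, q = q, w / beta
        m = len(alphas)
        T = (np.diag(alphas)
             + np.diag(betas[: m - 1], 1)
             + np.diag(betas[: m - 1], -1))
        theta, U = np.linalg.eigh(T)          # Ritz values = quadrature nodes
        tau2 = U[0, :] ** 2                   # quadrature weights
        total += n * float(np.sum(tau2 * f(theta)))
    return total / num_probes

# Sanity check: tr(log(A)) = log det(A) for a well-conditioned SPD matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
approx = slq_trace(A, np.log)
exact = np.linalg.slogdet(A)[1]
```

Only matrix-vector products with A are needed, so the cost is a modest number of Lanczos steps per probe rather than any explicit computation of f(A).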
Journal ArticleDOI

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization

TL;DR: This paper presents a novel stochastic gradient descent (SGD) algorithm that provably trains any single-hidden-layer ReLU network to global optimality on linearly separable data, despite the presence of infinitely many bad local minima, maxima, and saddle points in general.
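To make the setting concrete, the sketch below trains a one-hidden-layer ReLU network with plain SGD on hinge loss over a linearly separable toy dataset. This is a generic illustration of the problem setup, not the modified SGD variant whose optimality the paper proves; the margin threshold, width, and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable 2-D data with a margin: label = sign of x[0].
X = rng.standard_normal((200, 2))
X = X[np.abs(X[:, 0]) > 0.3]
y = np.sign(X[:, 0])

h = 10                                    # hidden width
W = 0.1 * rng.standard_normal((h, 2))     # trainable first layer
v = np.tile([1.0, -1.0], h // 2)          # fixed +/-1 second layer

def predict(W, X):
    """Network output f(x) = v . relu(W x) for each row of X."""
    return np.maximum(W @ X.T, 0.0).T @ v

lr = 0.05
for epoch in range(200):
    for i in rng.permutation(len(X)):
        x, yi = X[i], y[i]
        z = W @ x
        if yi * (v @ np.maximum(z, 0.0)) < 1.0:       # hinge margin violated
            grad = -yi * (v * (z > 0.0))[:, None] * x  # subgradient wrt W
            W -= lr * grad

acc = np.mean(np.sign(predict(W, X)) == y)
```

Even this plain loop typically separates the toy data; the paper's contribution is proving that a suitable SGD variant always reaches a global optimum in this regime despite the many bad stationary points of the ReLU loss surface.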