Chi-Jen Lu
Researcher at Academia Sinica
Publications - 104
Citations - 2609
Chi-Jen Lu is an academic researcher from Academia Sinica. The author has contributed to research on topics including Randomness and Computer science. The author has an h-index of 23, and has co-authored 99 publications receiving 2387 citations. Previous affiliations of Chi-Jen Lu include National Taiwan University and National Chiao Tung University.
Papers
Journal ArticleDOI
A linear-time component-labeling algorithm using contour tracing technique
TL;DR: This paper presents a new linear-time algorithm that simultaneously labels connected components in binary images and traces their contours, producing both component labels and the sequential order of contour points, which is useful for many applications.
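The paper's linear-time bound comes from its contour-tracing technique; as a simpler illustration of labeling connected components of a binary image in time linear in the number of pixels, here is a BFS flood-fill sketch (not the paper's contour-tracing method, and it does not recover contour point order):

```python
from collections import deque

def label_components(img):
    """Label 4-connected components of a binary image (list of lists of 0/1).

    Illustrative BFS flood fill, O(pixels). The paper's algorithm instead
    traces component contours, achieving the same linear bound while also
    recording each contour's points in sequential order.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not labels[sy][sx]:
                next_label += 1                     # start a new component
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                while q:                            # flood the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```

Each pixel is enqueued at most once, which gives the linear running time.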
Proceedings Article
Online Optimization with Gradual Variations
Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, Shenghuo Zhu +7 more
TL;DR: It is shown that for linear and general smooth convex loss functions, an online algorithm modified from gradient descent can achieve a regret that scales only as the square root of the deviation between consecutive loss functions; as an application, a regret logarithmic in this variation is also obtained for the portfolio management problem.
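For context, here is a minimal sketch of standard online gradient descent on linear losses over the unit L2 ball. This is the baseline the paper modifies (its variant attains regret scaling with the cumulative deviation between consecutive losses rather than with the number of rounds); the function name and step size are illustrative choices, not from the paper:

```python
import numpy as np

def ogd_linear(loss_vectors, eta=0.1):
    """Online gradient descent on linear losses g_t over the unit L2 ball.

    At each round the learner plays x_t, suffers <g_t, x_t>, then takes a
    gradient step and projects back onto the ball. Standard analysis gives
    O(sqrt(T)) regret; the paper's modified algorithm replaces T by the
    deviation sum ||g_t - g_{t-1}||^2 across rounds.
    """
    x = np.zeros(len(loss_vectors[0]))
    total_loss = 0.0
    for g in loss_vectors:
        g = np.asarray(g, dtype=float)
        total_loss += float(np.dot(g, x))   # loss of the current play
        x = x - eta * g                     # gradient step
        norm = np.linalg.norm(x)
        if norm > 1.0:                      # project onto the unit ball
            x = x / norm
    return total_loss, x
```

When the loss vectors barely change between rounds, the iterates stabilize quickly, which is the intuition behind regret bounds that depend on the variation rather than on T.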
Proceedings ArticleDOI
Extractors: optimal up to constant factors
TL;DR: This paper provides the first explicit construction of extractors which are simultaneously optimal up to constant factors in both seed length and output length, and introduces new condensers that have constant seed length (and retain a constant fraction of the min-entropy in the random source).
Book ChapterDOI
Conditional Computational Entropy, or Toward Separating Pseudoentropy from Compressibility
TL;DR: This work obtains a separation between conditional HILL and Yao entropies and introduces a new, natural notion of unpredictability entropy, which implies conditional Yao entropy and thus allows known extraction and hardcore-bit results to be stated and used more generally.
Proceedings ArticleDOI
Tree Decomposition for Large-Scale SVM Problems
TL;DR: A method is proposed that uses a decision tree to decompose a given data space and trains SVMs on the decomposed regions; the decision tree is shown to have several merits for large-scale SVM training.
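The decompose-then-train idea can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the "tree" here is a single depth-1 stump splitting on one feature, and `train_svm` is a hypothetical callback standing in for any SVM solver applied to each region.

```python
def tree_decompose_and_train(X, y, feature, threshold, train_svm):
    """Split the data space with a depth-1 decision stump, then train one
    model per region (the paper uses a full decision tree and SVM solvers).

    train_svm(Xs, ys) -> predictor is a user-supplied training routine;
    the returned predict(x) routes a point to its region's model.
    """
    left = [(x, t) for x, t in zip(X, y) if x[feature] <= threshold]
    right = [(x, t) for x, t in zip(X, y) if x[feature] > threshold]
    models = {
        "left": train_svm(*zip(*left)) if left else None,
        "right": train_svm(*zip(*right)) if right else None,
    }

    def predict(x):
        side = "left" if x[feature] <= threshold else "right"
        model = models[side]
        return model(x) if model is not None else None

    return predict
```

Because each region's training set is much smaller than the whole, each per-region solve is far cheaper, which is one of the merits the paper attributes to the tree decomposition.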