
Meizhu Liu

Researcher at Yahoo!

Publications - 35
Citations - 916

Meizhu Liu is an academic researcher from Yahoo!. The author has contributed to research topics including Boosting (machine learning) and Divergence (statistics), has an h-index of 16, and has co-authored 34 publications receiving 826 citations. Previous affiliations of Meizhu Liu include the University of Florida and Princeton University.

Papers
Journal Article

Learning a Mahalanobis Distance-Based Dynamic Time Warping Measure for Multivariate Time Series Classification

TL;DR: A LogDet divergence-based metric learning model with triplet constraints is established, which can learn the Mahalanobis matrix with high precision and robustness.
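
The measure in this paper pairs dynamic time warping with a learned Mahalanobis distance between frames of the multivariate series. Below is a minimal sketch of that combination, assuming a precomputed positive semi-definite matrix M; the LogDet divergence-based metric learning with triplet constraints that produces M is not reproduced here, and the function name `mahalanobis_dtw` is illustrative.

```python
import numpy as np

def mahalanobis_dtw(X, Y, M):
    """DTW distance between multivariate series X (n x d) and Y (m x d),
    using the Mahalanobis distance sqrt((x - y)^T M (x - y)) as the
    local frame-to-frame cost. M is assumed positive semi-definite."""
    n, m = len(X), len(Y)
    # Local cost matrix of Mahalanobis distances between frames.
    cost = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            diff = X[i] - Y[j]
            cost[i, j] = np.sqrt(diff @ M @ diff)
    # Standard DTW dynamic program with the unit step pattern.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    return acc[n, m]
```

With M set to the identity matrix this reduces to ordinary Euclidean DTW, a useful sanity check before substituting a learned matrix.
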
Journal Article

Total Bregman Divergence and Its Applications to DTI Analysis

TL;DR: This paper presents a novel divergence dubbed the total Bregman divergence (TBD), which is intrinsically robust to outliers, a very desirable property in many applications. It derives a piecewise-smooth active contour model for segmentation of DT-MRI using the TBD and presents several comparative results on real data.
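
For reference, the total Bregman divergence scales the ordinary Bregman divergence d_f(x, y) = f(x) - f(y) - <x - y, grad f(y)> by a factor depending on the gradient at y, which is the source of its robustness to outliers. The snippet below is a small sketch of the definition for the squared-norm generator f(x) = ||x||^2 only; it is a schematic under that assumption, not the DT-MRI segmentation pipeline from the paper.

```python
import numpy as np

def bregman_sq(x, y):
    """Ordinary Bregman divergence for f(x) = ||x||^2,
    which reduces to the squared Euclidean distance."""
    return np.sum((x - y) ** 2)

def total_bregman_sq(x, y):
    """Total Bregman divergence for f(x) = ||x||^2: the ordinary divergence
    divided by sqrt(1 + ||grad f(y)||^2), with grad f(y) = 2y.
    The normalization damps the influence of points with large gradients."""
    return bregman_sq(x, y) / np.sqrt(1.0 + 4.0 * np.sum(y ** 2))
```
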
Journal Article

Shape Retrieval Using Hierarchical Total Bregman Soft Clustering

TL;DR: This paper considers the family of total Bregman divergences (tBDs) as an efficient and robust “distance” measure to quantify the dissimilarity between shapes, and proves that for any tBD, there exists a distribution which belongs to the lifted exponential family (lEF) of statistical distributions.
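
The soft clustering stage described here assigns each shape fractional memberships across cluster representatives according to a total Bregman divergence rather than a hard nearest-center rule. The snippet below shows one generic way to turn a divergence matrix into such soft assignments; the softmin form, the temperature parameter, and the name `soft_assignments` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_assignments(div, temperature=1.0):
    """Turn an (n_points x n_centers) matrix of total Bregman divergences
    into soft cluster memberships: smaller divergence -> larger weight.
    Each row of the returned matrix sums to one."""
    w = np.exp(-div / temperature)
    return w / w.sum(axis=1, keepdims=True)
```
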
Proceedings Article

Total Bregman divergence and its applications to shape retrieval

TL;DR: A novel divergence measure between any two points in R^n, or between two distribution functions, is presented. The associated orthogonal distance is used to redefine the Bregman class of divergences and to develop a new theory for estimating the center of a set of vectors as well as of probability distribution functions.
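
One consequence of this redefinition is that the robust center of a set of points (the t-center) admits a closed form: a weighted average whose weights damp points with large gradient norm. The sketch below illustrates this for the squared-norm generator f(x) = ||x||^2; it is an assumption-laden schematic of the idea rather than the shape-retrieval implementation.

```python
import numpy as np

def t_center_sq(points):
    """Robust centroid (t-center) of a set of vectors under the total
    Bregman divergence with generator f(x) = ||x||^2.
    Minimizing sum_i tBD(c, x_i) gives a weighted mean with weights
    w_i = 1 / sqrt(1 + ||grad f(x_i)||^2) = 1 / sqrt(1 + 4 ||x_i||^2)."""
    X = np.asarray(points, dtype=float)                     # shape (n, d)
    w = 1.0 / np.sqrt(1.0 + 4.0 * np.sum(X ** 2, axis=1))   # per-point weights
    return (w[:, None] * X).sum(axis=0) / w.sum()
```
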
Proceedings Article

Nationality Classification Using Name Embeddings

TL;DR: In this article, the authors exploit the phenomenon of homophily in communication patterns to learn name embeddings, a new representation that encodes gender, ethnicity, and nationality and is readily applicable to building classifiers and other systems.
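
Once name embeddings are available, the classification step described here is a standard supervised problem: represent each name by its embedding vector and fit a multi-class classifier over nationality labels. The sketch below assumes precomputed embeddings and labels stored in hypothetical files; the embeddings themselves, learned in the paper from homophily in communication patterns, are not reproduced, and the logistic-regression choice is only one reasonable option.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed inputs (hypothetical files): an (n_names x d) array of
# precomputed name embeddings and a length-n array of nationality labels.
embeddings = np.load("name_embeddings.npy")
nationalities = np.load("labels.npy", allow_pickle=True)

# Multinomial logistic regression as one simple classifier choice
# on top of the embedding representation.
clf = LogisticRegression(max_iter=1000)
clf.fit(embeddings, nationalities)

# Predict a nationality distribution for a new name's embedding.
probs = clf.predict_proba(embeddings[:1])
```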