
Showing papers by "Yangqing Jia published in 2009"


Journal ArticleDOI
TL;DR: A theoretical analysis of the globally optimal solution to the TR problem via the equivalent trace difference problem is given, and eigenvalue perturbation theory is used to derive an efficient algorithm based on the Newton-Raphson method.
Abstract: Dimensionality reduction is an important issue in many machine learning and pattern recognition applications, and the trace ratio (TR) problem is an optimization problem that arises in many dimensionality reduction algorithms. Conventionally, the solution is approximated via generalized eigenvalue decomposition because the original problem is difficult to solve directly. However, prior work has indicated that solving the problem directly is preferable to this approximation. In this brief, we give a theoretical analysis of the global optimum of the TR problem via the equivalent trace difference problem. Eigenvalue perturbation theory is then used to derive an efficient algorithm based on the Newton-Raphson method. We analyze the convergence and efficiency of our algorithm relative to prior methods, and support the analysis with extensive empirical results.
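The reduction the abstract describes can be sketched concretely: the trace ratio max tr(WᵀSp W)/tr(WᵀSl W) over orthonormal W is equivalent to finding the root λ* of the trace-difference function g(λ) = max_W tr(Wᵀ(Sp − λSl)W), and a Newton-Raphson step on g reduces to re-solving an ordinary eigenvalue problem. The sketch below is a minimal NumPy illustration of that iteration, not the paper's exact implementation; the function name and stopping rule are assumptions.

```python
import numpy as np

def trace_ratio(Sp, Sl, d, n_iter=50, tol=1e-10):
    """Maximize tr(W'Sp W) / tr(W'Sl W) over orthonormal n x d matrices W.

    Hedged sketch: solves the equivalent trace-difference problem
    g(lam) = max_W tr(W'(Sp - lam*Sl)W) by Newton-Raphson root finding.
    Since g'(lam) = -tr(W'Sl W), the Newton step simplifies to
    lam <- tr(W'Sp W) / tr(W'Sl W).
    """
    lam = 0.0
    W = None
    for _ in range(n_iter):
        # W spans the top-d eigenvectors of Sp - lam*Sl
        # (eigh returns eigenvalues in ascending order)
        _, vecs = np.linalg.eigh(Sp - lam * Sl)
        W = vecs[:, -d:]
        new_lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Sl @ W)
        if abs(new_lam - lam) < tol:
            lam = new_lam
            break
        lam = new_lam
    return W, lam
```

At convergence g(λ*) = 0, so the returned λ equals the trace ratio attained by the returned W.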

279 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel semi-supervised orthogonal discriminant analysis method that propagates label information from the labeled data to the unlabeled data through a specially designed label propagation scheme, so that the distribution of the unlabeled data can be exploited more effectively to learn a better subspace.
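The propagation step the TL;DR refers to can be illustrated with a standard graph-based label-propagation scheme (in the style of Zhou et al.'s closed form F = (I − αS)⁻¹Y); the paper's specially designed variant differs in detail, so treat this as a generic sketch with assumed function names and a Gaussian affinity graph.

```python
import numpy as np

def propagate_labels(X, y, alpha=0.99, sigma=1.0):
    """Generic label-propagation sketch (not the paper's exact scheme).

    Assumes y[i] = -1 marks unlabeled points and labels are 0..C-1.
    Labels diffuse over a Gaussian-affinity graph; each unlabeled
    point receives the class with the largest propagated mass.
    """
    # Pairwise squared distances -> Gaussian affinities, zero diagonal
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Wm = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(Wm, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    Dinv = 1.0 / np.sqrt(Wm.sum(1))
    S = Dinv[:, None] * Wm * Dinv[None, :]
    # One-hot matrix for labeled points only
    C = int(y.max()) + 1
    Y = np.zeros((len(y), C))
    for i, lab in enumerate(y):
        if lab >= 0:
            Y[i, lab] = 1.0
    # Closed-form propagation: F = (I - alpha*S)^{-1} Y
    F = np.linalg.solve(np.eye(len(y)) - alpha * S, Y)
    return F.argmax(1)
```

With labels propagated to all points, a discriminant subspace can then be learned as if the data were fully labeled, which is the role propagation plays in the paper's pipeline.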

152 citations


Journal ArticleDOI
TL;DR: Experimental results show that the new vehicle detection approach based on Markov chain Monte Carlo achieves a high detection rate, performs successful segmentation, and reduces the influence of vehicle occlusion.

43 citations


Proceedings Article
11 Jul 2009
TL;DR: This paper discusses how to enforce a temporal smoothness assumption using temporal regularizers defined in a structural way with respect to the Hilbert space, and derives an online algorithm that efficiently finds the closed-form solution for the classification functions.
Abstract: In this paper, we consider semi-supervised classification on evolutionary data, where the distribution of the data and the underlying concept we aim to learn change over time due to short-term noise and long-term drift, making a single aggregated classifier inapplicable for long-term classification. The drift is smooth if we take a localized view over the time dimension, which enables us to impose a temporal smoothness assumption on the learning algorithm. We first discuss how to enforce this assumption using temporal regularizers defined in a structural way with respect to the Hilbert space, and then derive an online algorithm that efficiently finds the closed-form solution to the classification functions. Experimental results on real-world evolutionary mailing-list data demonstrate that our algorithm outperforms classical semi-supervised learning algorithms in both algorithmic stability and classification accuracy.
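The idea of a temporal regularizer with a closed-form online update can be sketched in a simplified linear setting: at each time step, fit the current batch while penalizing the distance to the previous classifier. This is an assumed simplification of the paper's RKHS formulation, with illustrative names and parameters, not its actual algorithm.

```python
import numpy as np

def temporal_update(X, y, w_prev, lam=0.1, gamma=1.0):
    """One online step of temporally regularized least squares (sketch).

    Minimizes  ||X w - y||^2 + lam*||w||^2 + gamma*||w - w_prev||^2,
    so the new classifier fits the current batch while staying close to
    its predecessor (the temporal smoothness assumption). Setting the
    gradient to zero gives the closed form
        w = (X'X + (lam + gamma) I)^{-1} (X'y + gamma * w_prev).
    """
    d = X.shape[1]
    A = X.T @ X + (lam + gamma) * np.eye(d)
    b = X.T @ y + gamma * w_prev
    return np.linalg.solve(A, b)
```

Each step costs one d x d solve regardless of how long the stream runs, which is what makes this family of updates practical for online use.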

12 citations