
Tianbao Yang

Researcher at the University of Iowa

Publications - 278
Citations - 7349

Tianbao Yang is an academic researcher from the University of Iowa. The author has contributed to research in topics: Computer science & Convex optimization. The author has an h-index of 38 and has co-authored 247 publications receiving 5848 citations. Previous affiliations of Tianbao Yang include General Electric & Princeton University.

Papers
Proceedings Article

Combining link and content for community detection: a discriminative approach

TL;DR: A discriminative model combining link and content analysis for community detection from networked data, such as paper citation networks and the World Wide Web, is proposed; hidden variables are introduced to explicitly model the popularity of nodes.
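The general link-plus-content idea can be illustrated with a toy sketch (this is not the paper's discriminative model): combine a link similarity and a content similarity, then cluster the result. The function detect_communities, the mixing weight alpha, and the choice of spectral clustering are all illustrative assumptions.

```python
# Toy sketch: community detection from a weighted mix of link and
# content similarities. NOT the paper's discriminative model; purely
# an illustration of combining the two signals.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def detect_communities(adjacency, content, k, alpha=0.5):
    """adjacency: symmetric (n, n) link matrix; content: (n, d) node
    features; k: number of communities; alpha: link vs. content weight."""
    link_sim = adjacency / max(adjacency.max(), 1e-12)  # scale links to [0, 1]
    content_sim = np.clip(cosine_similarity(content), 0.0, None)
    combined = alpha * link_sim + (1.0 - alpha) * content_sim
    model = SpectralClustering(n_clusters=k, affinity="precomputed")
    return model.fit_predict(combined)
```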
Proceedings Article

Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison

TL;DR: It is shown that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nyström method can yield an impressively better generalization error bound than the random Fourier features based approach.
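The comparison is easy to try empirically with scikit-learn, which ships both approximations; the sketch below fits a linear classifier on each explicit feature map. The synthetic data and all hyperparameters are illustrative, not the paper's experimental setup.

```python
# Minimal empirical comparison of the Nystroem and random Fourier
# feature approximations of an RBF kernel on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

mappers = [("Nystroem", Nystroem(gamma=0.1, n_components=200, random_state=0)),
           ("Random Fourier", RBFSampler(gamma=0.1, n_components=200, random_state=0))]
for name, mapper in mappers:
    Z = mapper.fit_transform(X)  # explicit (approximate) feature map
    acc = cross_val_score(LogisticRegression(max_iter=1000), Z, y, cv=3).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```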
Journal Article

Detecting communities and their evolutions in dynamic social networks--a Bayesian approach

TL;DR: This paper proposes a dynamic stochastic block model for finding communities and their evolution in a dynamic social network; the model captures the evolution of communities by explicitly modeling the transition of community memberships for individual nodes in the network.
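The generative side of such a model is compact enough to sketch: community memberships drift through a transition matrix and each snapshot's edges are drawn from block probabilities. The sketch below only samples from an assumed dynamic stochastic block model; the paper's Bayesian inference of communities is omitted, and all parameter values are illustrative.

```python
# Generative sketch of a dynamic stochastic block model: memberships
# evolve via a Markov transition matrix; edges follow block probabilities.
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 50, 3, 4            # nodes, communities, time steps (illustrative)
stay = 0.9                    # chance a node keeps its community each step
off = (1.0 - stay) / (k - 1)
trans = np.full((k, k), off) + np.eye(k) * (stay - off)   # membership transitions
block = np.full((k, k), 0.05) + np.eye(k) * 0.25          # within > between edges

z = rng.integers(k, size=n)   # initial community memberships
snapshots = []
for t in range(T):
    probs = block[z[:, None], z[None, :]]            # pairwise edge probabilities
    A = (rng.random((n, n)) < probs).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                                      # undirected adjacency
    snapshots.append(A)
    z = np.array([rng.choice(k, p=trans[c]) for c in z])  # memberships drift
```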
Proceedings Article

Hetero-ConvLSTM: A Deep Learning Approach to Traffic Accident Prediction on Heterogeneous Spatio-Temporal Data

TL;DR: A Hetero-ConvLSTM framework is proposed, in which a few novel ideas are implemented on top of the basic ConvLSTM model, such as incorporating spatial graph features and a spatial model ensemble; the framework makes reasonably accurate predictions and significantly improves prediction accuracy over baseline approaches.
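The building block the framework extends is the ConvLSTM cell, sketched below in PyTorch from the standard formulation; the Hetero-ConvLSTM additions (spatial graph features, spatial model ensemble) are not reproduced here, and the class is an illustrative sketch rather than the paper's code.

```python
# Bare-bones ConvLSTM cell: an LSTM whose gates are computed with
# 2-D convolutions so hidden states stay spatial feature maps.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # one convolution yields all four gate pre-activations at once
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                              padding=kernel // 2)

    def forward(self, x, state):
        h, c = state                                  # hidden / cell feature maps
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)                 # cell-state update
        h = o * torch.tanh(c)
        return h, (h, c)
```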
Proceedings Article

Online Optimization with Gradual Variations

TL;DR: It is shown that for linear and general smooth convex loss functions, an online algorithm modified from the gradient descent algorithm can achieve a regret that scales only as the square root of the deviation; as an application, a logarithmic regret can also be obtained for the portfolio management problem.
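The core algorithmic idea can be sketched as optimistic online gradient descent: play a point computed with the previous gradient as a prediction, then update an auxiliary iterate with the observed gradient. The projection onto the unit ball, the step size eta, and the function names below are illustrative assumptions, not the paper's exact algorithm statement.

```python
# Sketch of gradient-prediction online descent: the previous gradient
# serves as a guess for the next one, which is what yields regret
# scaling with the deviation between consecutive gradients.
import numpy as np

def project(x, radius=1.0):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_ogd(grad_fn, dim, T, eta=0.1):
    z = np.zeros(dim)            # auxiliary iterate
    g_prev = np.zeros(dim)       # previous gradient, used as a prediction
    plays = []
    for t in range(T):
        x = project(z - eta * g_prev)   # play using the predicted gradient
        g = grad_fn(t, x)               # observe the actual gradient
        z = project(z - eta * g)        # update with the actual gradient
        g_prev = g
        plays.append(x)
    return plays
```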