
Lizhen Qu

Researcher at Monash University

Publications - 54
Citations - 2383

Lizhen Qu is an academic researcher at Monash University. The author has contributed to research in the topics of parsing and computer science, has an h-index of 16, and has co-authored 47 publications receiving 1,689 citations. Previous affiliations of Lizhen Qu include NICTA and the Max Planck Society.

Papers
Proceedings ArticleDOI

Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

TL;DR: In this article, a theoretically grounded approach to training deep neural networks, including recurrent networks, under class-dependent label noise is presented, together with two loss-correction procedures that are agnostic to both application domain and network architecture.
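The two correction procedures can be sketched in NumPy, assuming the class-dependent noise-transition matrix T is known (T[i, j] = probability that true class i is observed as class j). The function names and formulation here are an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def backward_corrected_loss(probs, noisy_label, T):
    """Backward correction: the vector of per-class cross-entropy losses
    is multiplied by T^{-1}, yielding an unbiased estimate of the clean loss."""
    per_class_ce = -np.log(np.clip(probs, 1e-12, 1.0))  # loss if each class were the true one
    return (np.linalg.inv(T) @ per_class_ce)[noisy_label]

def forward_corrected_loss(probs, noisy_label, T):
    """Forward correction: predictions are pushed through T, then the usual
    cross-entropy is taken against the (possibly noisy) observed label."""
    noisy_probs = T.T @ probs
    return -np.log(np.clip(noisy_probs[noisy_label], 1e-12, 1.0))
```

With T equal to the identity (no noise), both reduce to the standard cross-entropy loss.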
Proceedings Article

The Bag-of-Opinions Method for Review Rating Prediction from Sparse Text Patterns

TL;DR: Experiments show that the bag-of-opinions method outperforms prior state-of-the-art techniques for review rating prediction; opinion scores are learned via a constrained ridge regression algorithm.
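The ridge-regression step over opinion features can be sketched as follows; the opinion-extraction pipeline and the constraints are omitted, and the function names and feature layout are assumptions for illustration only.

```python
import numpy as np

def fit_opinion_scores(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y.
    Rows of X are reviews, columns are opinion-pattern features (counts),
    and y holds the observed star ratings."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict_rating(x, w):
    """A review's predicted rating aggregates the learned opinion scores."""
    return float(x @ w)
```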
Proceedings ArticleDOI

STransE: a novel embedding model of entities and relationships in knowledge bases

TL;DR: STransE combines insights from several previous link-prediction models into a new embedding model that represents each entity as a low-dimensional vector, and each relation by two matrices and a translation vector.
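The scoring function implied by that parameterization can be sketched directly: a triple (head, relation, tail) is scored by the distance between the relation-projected head plus the translation vector and the relation-projected tail, with lower scores meaning more plausible triples. The norm order is a per-relation choice in such models; this sketch assumes L1.

```python
import numpy as np

def stranse_score(h, t, r, W1, W2, norm_ord=1):
    """Score of a triple under STransE-style parameters:
    ||W_{r,1} h + r - W_{r,2} t||  (lower = more plausible).
    h, t: entity vectors; r: relation translation vector;
    W1, W2: the relation's two projection matrices."""
    return np.linalg.norm(W1 @ h + r - W2 @ t, ord=norm_ord)
```

With both matrices set to the identity, this reduces to the TransE score ||h + r - t||.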
Proceedings ArticleDOI

Timely YAGO: harvesting, querying, and visualizing temporal knowledge from Wikipedia

TL;DR: This paper introduces Timely YAGO, which extends the previously built knowledge base YAGO with temporal aspects. It extracts temporal facts from Wikipedia infoboxes, categories, and lists in articles, and integrates them into the Timely YAGO knowledge base.
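A temporal fact of the kind harvested here pairs a base subject-predicate-object triple with the interval during which it holds. A minimal illustrative representation, with class and field names that are assumptions rather than the paper's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalFact:
    """A base fact plus the time interval during which it is valid.
    Dates follow a YAGO-style format where '#' marks unknown digits."""
    subject: str
    predicate: str
    obj: str
    valid_from: str
    valid_until: str

# e.g. a fact extracted from a football player's infobox
fact = TemporalFact("Ronaldo", "playsFor", "Inter_Milan", "1997-##-##", "2002-##-##")
```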
Proceedings ArticleDOI

STransE: a novel embedding model of entities and relationships in knowledge bases

TL;DR: STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models, and can serve as a new baseline for the more complex models in the link prediction task.