scispace - formally typeset

Wray Buntine

Researcher at Monash University

Publications: 220
Citations: 9133

Wray Buntine is an academic researcher from Monash University. The author has contributed to research on topics including topic models and computer science. The author has an h-index of 40 and has co-authored 207 publications receiving 8302 citations. Previous affiliations of Wray Buntine include Deakin University and the University of California, Berkeley.

Papers
Book Chapter

Theory refinement on Bayesian networks

TL;DR: The problem of theory refinement under uncertainty is reviewed in the context of Bayesian statistics, a theory of belief revision, and reduced to an incremental learning task: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories, which can be interrogated by the domain expert and incrementally refined from data.
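The incremental-refinement loop described above can be illustrated with a deliberately simple example (this is an illustration of Bayesian belief revision in general, not the paper's algorithm): a conjugate Beta-Bernoulli belief primed with a hypothetical expert prior, then updated one observation at a time.

```python
# Minimal sketch of incremental Bayesian belief revision (illustrative only,
# not the refinement procedure from the paper).

def refine(prior, observation):
    """Update a Beta(alpha, beta) belief with one boolean observation."""
    alpha, beta = prior
    return (alpha + 1, beta) if observation else (alpha, beta + 1)

# Prime the system with a (hypothetical) expert-supplied prior belief...
belief = (2.0, 2.0)  # roughly even odds, weakly held

# ...then refine it incrementally as data arrives.
for obs in [True, True, False, True]:
    belief = refine(belief, obs)

# The maintained belief can be interrogated at any point, e.g. its mean.
alpha, beta = belief
posterior_mean = alpha / (alpha + beta)
```

Each update replaces the stored belief rather than reprocessing the whole dataset, which is the sense in which the learning task is incremental.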
Journal Article

Operations for learning with graphical models

TL;DR: In this article, a multidisciplinary review of empirical, statistical learning from a graphical model perspective is presented, including decomposition, differentiation, and manipulation of probability models from the exponential family.
Journal Article

A guide to the literature on learning probabilistic networks from data

TL;DR: This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks.
Book Chapter

Machine invention of first order predicates by inverting resolution

TL;DR: A mechanism for automatically inventing and generalising first-order Horn clause predicates is presented and implemented in a system called CIGOL, which uses incremental induction to augment incomplete clausal theories.
Proceedings Article

Improving LDA topic models for microblogs via tweet pooling and automatic labeling

TL;DR: This paper empirically establishes that a novel method of tweet pooling by hashtags leads to a vast improvement in a variety of measures for topic coherence across three diverse Twitter datasets in comparison to an unmodified LDA baseline and a range of pooling schemes.
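The pooling idea above, grouping short tweets that share a hashtag into one longer pseudo-document before fitting LDA, can be sketched as follows (the tweets, hashtag regex, and pool names are illustrative assumptions, not the paper's datasets or pipeline):

```python
import re
from collections import defaultdict

def pool_by_hashtag(tweets):
    """Group tweets into pseudo-documents keyed by hashtag (illustrative sketch)."""
    pools = defaultdict(list)
    for tweet in tweets:
        tags = re.findall(r"#(\w+)", tweet.lower())
        # Tweets with no hashtag fall into a catch-all pool.
        for tag in tags or ["_untagged"]:
            pools[tag].append(tweet)
    # Each pooled document is the concatenation of its member tweets.
    return {tag: " ".join(ts) for tag, ts in pools.items()}

tweets = [
    "Great keynote on topic models #nlp",
    "LDA struggles with short texts #nlp #lda",
    "Coffee break",
]
docs = pool_by_hashtag(tweets)
```

The resulting pooled documents, rather than the individual tweets, would then be fed to a standard LDA implementation, giving the model longer documents with more reliable word co-occurrence statistics.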