
Kilian Q. Weinberger

Researcher at Cornell University

Publications: 241
Citations: 71,535

Kilian Q. Weinberger is an academic researcher at Cornell University. His research spans topics including computer science and deep learning. He has an h-index of 76 and has co-authored 222 publications receiving 49,707 citations. Previous affiliations of Kilian Q. Weinberger include the University of Washington and Washington University in St. Louis.

Papers

Learning with single view co-training and marginalized dropout

TL;DR: This work focuses on using additional data, which is readily available or easily obtained but comes from a different distribution than the testing data, to aid learning. It introduces two strategies, manifested in five concrete ways, to cope with the difference between the training and testing distributions.
Proceedings Article
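
As a hedged illustration of the marginalized-dropout idea above: for a linear map trained with squared reconstruction loss, the expectation over all dropout corruptions has a closed form, so no corrupted copies ever need to be sampled. The sketch below is a minimal NumPy version of that closed form; the ridge term, shapes, and names are my assumptions, and this is not the paper's full set of five strategies.

```python
import numpy as np

def marginalized_dropout_map(X, p):
    """Closed-form linear map trained against *all* dropout corruptions.

    Solves min_W E[||X - W X_tilde||^2], where X_tilde drops each feature
    independently with probability p, by marginalizing the expectation
    analytically instead of sampling corrupted copies of the data.

    X : (d, n) data matrix, one column per example.
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])      # append a constant bias feature
    q = np.full(d + 1, 1.0 - p)               # survival probability per feature
    q[-1] = 1.0                                # the bias feature is never dropped

    S = Xb @ Xb.T                              # scatter matrix
    Q = S * np.outer(q, q)                     # E[X_tilde X_tilde^T], off-diagonal terms
    np.fill_diagonal(Q, np.diag(S) * q)        # diagonal entries scale by q only once
    P = S * q[np.newaxis, :]                   # E[X X_tilde^T]

    # Small ridge term (my addition) keeps Q invertible on small samples.
    W = P @ np.linalg.inv(Q + 1e-5 * np.eye(d + 1))
    return W

X = np.random.randn(20, 500)
W = marginalized_dropout_map(X, p=0.5)         # maps corrupted inputs to reconstructions
```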

Goal-oriented Euclidean heuristics with manifold learning

TL;DR: A goal-oriented manifold learning scheme is proposed that optimizes the Euclidean distance to goals in the embedding while maintaining admissibility and consistency; a state-heuristic enhancement technique is also proposed to reduce the gap between heuristic and true distances.
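
A minimal sketch of the general recipe the abstract describes, under assumptions of mine: embed states with classical MDS so that Euclidean gaps approximate shortest-path distances, then use distance-to-goal in the embedding as an A* heuristic. The paper's scheme additionally optimizes the embedding toward goal states and enforces admissibility and consistency, which plain MDS does not guarantee.

```python
import heapq
import numpy as np

def embed_states(D, dim=2):
    """Classical MDS: embed states so Euclidean gaps approximate the
    shortest-path distances in D (n x n)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                    # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]                # keep the top eigen-directions
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def astar(adj, X, start, goal):
    """A* search where h(s) = ||X[s] - X[goal]||, the Euclidean
    heuristic read off the learned embedding."""
    h = lambda s: np.linalg.norm(X[s] - X[goal])
    pq, dist = [(h(start), 0.0, start)], {start: 0.0}
    while pq:
        f, g, s = heapq.heappop(pq)
        if s == goal:
            return g
        for t, w in adj[s]:
            if g + w < dist.get(t, float("inf")):
                dist[t] = g + w
                heapq.heappush(pq, (g + w + h(t), g + w, t))

# Toy 4-cycle graph; all-pairs distances via Floyd-Warshall.
adj = {0: [(1, 1), (3, 1)], 1: [(0, 1), (2, 1)],
       2: [(1, 1), (3, 1)], 3: [(2, 1), (0, 1)]}
n = 4
D = np.full((n, n), np.inf); np.fill_diagonal(D, 0)
for s, nbrs in adj.items():
    for t, w in nbrs:
        D[s, t] = w
for k in range(n):
    D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])

X = embed_states(D)
print(astar(adj, X, 0, 2))                         # -> 2.0
```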

Collaborative spam filtering with the hashing trick

TL;DR: There is substantial deviation in users' notions of what constitutes spam and ham, and these realities make it extremely difficult to assemble a single, global spam classifier.
Posted Content
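
The "hashing trick" named in the title is standard feature hashing; below is a minimal sketch of how it supports collaborative yet personalized filtering, by hashing a global copy and a per-user copy of every token into one shared weight space. The dimension, hash choice, and function names here are illustrative assumptions.

```python
import numpy as np

def hashed_features(tokens, user, dim=2**20):
    """Feature hashing: tokens are hashed directly into a fixed-size
    vector, so no vocabulary dictionary is ever stored.

    Personalization: each token is additionally hashed together with
    the user id, giving every feature a user-specific variant in the
    same shared space; a single linear model then learns global and
    per-user spam signals jointly."""
    x = np.zeros(dim)
    for tok in tokens:
        for key in (tok, f"{user}_{tok}"):          # global + per-user feature
            # Python's hash() is a stand-in; production systems use a
            # stable hash such as MurmurHash.
            sign = 1.0 if hash("sign_" + key) % 2 == 0 else -1.0
            x[hash(key) % dim] += sign              # signed hashing keeps collision noise zero-mean
    return x

x = hashed_features("cheap pills now".split(), user="alice", dim=2**18)
```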

FastFusionNet: New State-of-the-Art for DAWNBench SQuAD.

TL;DR: FastFusionNet is introduced, an efficient variant of FusionNet that removes the expensive CoVe layers and substitutes the BiLSTMs with far more efficient SRU layers. It obtains state-of-the-art results on DAWNBench while achieving the lowest training and inference time on SQuAD to date.
Posted Content
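
For context, a hedged sketch of the SRU recurrence (Lei et al.) that replaces the BiLSTMs: every matrix product depends only on the input sequence, so it can be batched across all time steps at once, leaving only cheap element-wise work inside the time loop. This is not FastFusionNet itself; the shapes and names are simplified assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_layer(X, W, Wf, bf, Wr, br):
    """Simple Recurrent Unit (SRU) sketch.

    X : (T, d) input sequence; all weight matrices are (d, d) and
    biases (d,) for simplicity (real SRUs allow a different hidden size).
    """
    Xt = X @ W                                      # candidate states, computed in one shot
    F = sigmoid(X @ Wf + bf)                        # forget gates, also input-only
    R = sigmoid(X @ Wr + br)                        # reset/highway gates
    c = np.zeros(X.shape[1])
    H = np.empty_like(X)
    for t in range(X.shape[0]):                     # only element-wise ops remain sequential
        c = F[t] * c + (1.0 - F[t]) * Xt[t]
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]   # highway connection to the input
    return H
```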

Convolutional Networks with Dense Connectivity

TL;DR: The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially improves parameter efficiency.
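
A minimal sketch of the dense-connectivity pattern, assuming PyTorch and omitting DenseNet's bottleneck and transition layers: each layer consumes the concatenation of the block input and all earlier layers' feature maps, so features are reused rather than re-learned and gradients reach early layers directly.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: layer i sees the concatenation of the block
    input and the outputs of all preceding layers, each of which adds
    growth_rate new channels."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            ch = in_channels + i * growth_rate       # channels seen by layer i
            self.layers.append(nn.Sequential(        # BN-ReLU-Conv composite function
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # dense connectivity
            features.append(out)
        return torch.cat(features, dim=1)

x = torch.randn(1, 16, 32, 32)
y = DenseBlock(16, growth_rate=12, num_layers=4)(x)  # -> (1, 16 + 4*12, 32, 32)
```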