Marion Neumann
Researcher at Washington University in St. Louis
Publications - 25
Citations - 2112
Marion Neumann is an academic researcher at Washington University in St. Louis. The author has contributed to research on topics including graph kernels and graphs (as abstract data types), has an h-index of 12, and has co-authored 24 publications receiving 1,210 citations. Previous affiliations of Marion Neumann include the University of Washington and the Fraunhofer Society.
Papers
Proceedings Article
An End-to-End Deep Learning Architecture for Graph Classification
TL;DR: This paper designs a localized graph convolution model, shows its connection with two graph kernels, and proposes a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on graphs.
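The core of SortPooling is simple to illustrate: node feature rows are sorted by a consistent key and truncated or zero-padded to a fixed number of rows, so graphs of different sizes yield same-shaped inputs for a conventional network. The sketch below is illustrative only (sorting by the last feature channel, with made-up feature values and `k`), not the authors' implementation:

```python
# Minimal sketch of the SortPooling idea (illustrative, not the paper's
# code): sort node feature rows by their last channel in a consistent
# order, then truncate or zero-pad to exactly k rows so that graphs of
# different sizes map to a fixed-size representation.

def sort_pooling(node_features, k):
    """Sort rows by last feature value (descending), then truncate/pad to k rows."""
    width = len(node_features[0]) if node_features else 0
    rows = sorted(node_features, key=lambda row: row[-1], reverse=True)
    rows = rows[:k]                      # truncate if the graph has more than k nodes
    while len(rows) < k:                 # zero-pad if it has fewer than k nodes
        rows.append([0.0] * width)
    return rows

# A graph with 4 nodes and 2 feature channels, pooled to k = 3 rows.
pooled = sort_pooling([[0.1, 0.5], [0.2, 0.9], [0.3, 0.1], [0.4, 0.7]], k=3)
```

Because every graph now produces a `k × width` block, a standard 1-D convolution or dense layer can be applied downstream regardless of the original graph size.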
Posted Content
TUDataset: A collection of benchmark datasets for learning with graphs.
Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, Marion Neumann +5 more
TL;DR: TUDataset, a collection for graph classification and regression, is introduced; it comprises over 120 datasets of varying sizes from a wide range of applications and provides Python-based data loaders, kernel and graph neural network baseline implementations, and evaluation tools.
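TUDataset graphs are distributed in a simple flat-file format: a global edge list (e.g. `DS_A.txt`, one `row, col` pair per line) plus a graph indicator file mapping each node to its graph. The collection ships ready-made Python loaders; the snippet below is only a minimal, hand-rolled stand-in operating on in-memory strings to show how that format groups into per-graph edge lists:

```python
# Illustrative sketch of grouping the TUDataset flat-file format into
# graphs: edge lines look like "1, 2" (1-based global node ids), and the
# i-th indicator line gives the graph id of node i. Not the official loader.

def parse_tudataset(edge_lines, indicator_lines):
    """Group a global edge list into per-graph edge lists keyed by graph id."""
    node_to_graph = {i + 1: int(g) for i, g in enumerate(indicator_lines)}
    graphs = {}
    for line in edge_lines:
        u, v = (int(t) for t in line.split(","))
        graphs.setdefault(node_to_graph[u], []).append((u, v))
    return graphs

# Two tiny graphs: nodes 1-3 belong to graph 1, nodes 4-5 to graph 2.
edges = ["1, 2", "2, 3", "4, 5"]
indicator = ["1", "1", "1", "2", "2"]
graphs = parse_tudataset(edges, indicator)
```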
Journal ArticleDOI
Propagation kernels: efficient graph kernels from propagated information
TL;DR: It is shown that if the graphs at hand have a regular structure, this regularity can be exploited to scale the kernel computation to large databases of graphs with thousands of nodes; the resulting kernels can be considerably faster than state-of-the-art approaches without sacrificing predictive performance.
Book ChapterDOI
Efficient graph kernels by randomization
TL;DR: This paper explores the power of continuous node-level features for propagation-based graph kernels, and shows that propagation kernels utilizing locality-sensitive hashing reduce the runtime of existing graph kernels by several orders of magnitude.
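The locality-sensitive hashing step mentioned in the TL;DR can be sketched in a few lines: a continuous feature vector is projected onto a direction and discretized into an integer bin, so that nearby vectors tend to share a bin and can then be counted like discrete labels. The projection direction, bin width, and feature values below are made up for reproducibility (in practice the direction is drawn at random), and this is not the authors' code:

```python
# Illustrative sketch of the locality-sensitive hashing used by
# propagation kernels for continuous node features: hash a feature vector
# to an integer bin via floor((w.x + b) / r). Similar vectors land in the
# same bin with high probability, enabling fast discrete counting.

def lsh_bin(feature, direction, bin_width, offset):
    """Hash a feature vector to an integer bin: floor((w.x + b) / r)."""
    projection = sum(w * x for w, x in zip(direction, feature))
    return int((projection + offset) // bin_width)

# Fixed projection for reproducibility; normally sampled from a Gaussian.
direction, offset, bin_width = [0.5, -0.3, 0.8], 0.4, 0.25

a = [0.10, 0.20, 0.30]
b = [0.11, 0.19, 0.31]   # close to a -> same bin
c = [2.00, 0.00, 2.10]   # far from a -> different bin
bins = [lsh_bin(x, direction, bin_width, offset) for x in (a, b, c)]
```

Once features are binned, node "labels" become discrete hash values, which is what lets propagation kernels reuse cheap count-based comparisons instead of pairwise distance computations.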
Proceedings ArticleDOI
Stacked Gaussian Process Learning
TL;DR: Experimental results on real-world data from a market-relevant application show that stacked Gaussian process learning can significantly improve the prediction performance of a standard Gaussian process.
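The stacking idea can be sketched as a two-stage pipeline: a base Gaussian process is fit on the raw inputs, and a second GP then sees those inputs augmented with the base model's predictions. The sketch below makes this concrete under stated assumptions (an RBF kernel, a fixed noise level, and synthetic sine data, none of which come from the paper); it is a minimal illustration, not the paper's exact method:

```python
import numpy as np

# Illustrative sketch of "stacking" Gaussian processes: a second GP takes
# the first GP's predictions as an extra input feature. Kernel choice,
# noise level, and data are assumptions made for this example.

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Standard GP regression posterior mean."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(2 * X[:, 0])

# Stage 1: base GP on the raw inputs.
base_pred = gp_predict(X, y, X)

# Stage 2: stacked GP sees the original input plus the base prediction.
X_stacked = np.hstack([X, base_pred[:, None]])
stacked_pred = gp_predict(X_stacked, y, X_stacked)
```

The design choice worth noting is that the second GP's kernel now measures similarity in an augmented space, so points the base model treats alike are pulled closer together for the stacked model.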