
Chaitanya K. Joshi

Researcher at Nanyang Technological University

Publications: 20
Citations: 1287

Chaitanya K. Joshi is an academic researcher at Nanyang Technological University. He has contributed to research on topics including computer science and the Travelling Salesman Problem, has an h-index of 6, and has co-authored 11 publications receiving 454 citations. His previous affiliations include the Agency for Science, Technology and Research.

Papers
Posted Content

Benchmarking Graph Neural Networks

TL;DR: A reproducible GNN benchmarking framework is introduced that lets researchers conveniently add new models for arbitrary datasets, together with a principled investigation of recent Weisfeiler-Lehman GNNs (WL-GNNs) compared to message-passing graph convolutional networks (GCNs).
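As a point of reference for the GCN family the benchmark compares against WL-GNNs, here is a minimal sketch (not the paper's code) of one message-passing graph convolution step: each node averages its neighbours' features, including a self-loop, and applies a learned linear map. The shapes and the ReLU nonlinearity are illustrative assumptions.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One message-passing step: aggregate neighbour features
    (with self-loops), normalise by degree, then transform."""
    adj_hat = adj + np.eye(adj.shape[0])       # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)   # node degrees
    messages = adj_hat @ feats / deg           # mean aggregation
    return np.maximum(messages @ weight, 0.0)  # linear map + ReLU

# Tiny 3-node path graph: 0 - 1 - 2, with one-hot node features
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3)
rng = np.random.default_rng(0)
out = gcn_layer(adj, feats, rng.standard_normal((3, 4)))
print(out.shape)  # (3, 4): one hidden vector per node
```

Stacking several such layers gives each node a receptive field of its k-hop neighbourhood, which is the message-passing paradigm the benchmark contrasts with WL-GNNs operating on dense graph representations.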
Posted Content

An Efficient Graph Convolutional Network Technique for the Travelling Salesman Problem

TL;DR: This paper introduces a new learning-based approach for approximately solving the Travelling Salesman Problem on 2D Euclidean graphs using deep Graph Convolutional Networks to build efficient TSP graph representations and output tours in a non-autoregressive manner via highly parallelized beam search.
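The decoding step described above can be sketched as follows: assume the graph convolutional network has already produced an edge-probability "heatmap" for a small TSP instance; a beam search then extracts a high-probability tour from it in one shot rather than building it autoregressively. The heatmap below is hand-made for illustration, not the output of a trained model, and the beam search is a simplified stand-in for the paper's parallelized version.

```python
def beam_search_tour(heatmap, beam_width=3):
    """Return the tour (starting and ending at node 0) whose product
    of edge probabilities is highest among tours kept on the beam."""
    n = len(heatmap)
    beams = [([0], 1.0)]                       # (partial tour, score)
    for _ in range(n - 1):
        candidates = []
        for tour, score in beams:
            for nxt in range(n):
                if nxt not in tour:            # extend to unvisited nodes
                    candidates.append(
                        (tour + [nxt], score * heatmap[tour[-1]][nxt]))
        candidates.sort(key=lambda c: -c[1])
        beams = candidates[:beam_width]        # keep top-k partial tours
    # close the cycle back to the start node
    closed = [(t + [0], s * heatmap[t[-1]][0]) for t, s in beams]
    return max(closed, key=lambda c: c[1])[0]

# 4-city heatmap that strongly favours the tour 0 -> 1 -> 2 -> 3 -> 0
heatmap = [[0.0, 0.9, 0.05, 0.05],
           [0.05, 0.0, 0.9, 0.05],
           [0.05, 0.05, 0.0, 0.9],
           [0.9, 0.05, 0.05, 0.0]]
print(beam_search_tour(heatmap))  # [0, 1, 2, 3, 0]
```

Because every edge probability is available up front, all beam extensions at a given step are independent and can be evaluated in parallel, which is what makes the non-autoregressive formulation efficient.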
Posted Content

Personalization in Goal-Oriented Dialog.

TL;DR: This paper analyzes the shortcomings of an existing end-to-end dialog system based on Memory Networks and proposes modifications to the architecture which enable personalization, and investigates personalization in dialog as a multi-task learning problem.
Posted Content

Learning TSP Requires Rethinking Generalization

TL;DR: This paper identifies inductive biases, model architectures and learning algorithms that promote generalization to instances larger than those seen in training, revealing that extrapolating beyond training data requires rethinking the entire neural combinatorial optimization pipeline.