Nikhil Krishnaswamy
Researcher at Brandeis University
Publications: 58
Citations: 442
Nikhil Krishnaswamy is an academic researcher from Brandeis University. The author has contributed to research in the topics of computer science and gesture. The author has an h-index of 11 and has co-authored 38 publications receiving 318 citations. Previous affiliations of Nikhil Krishnaswamy include Colorado State University.
Papers
Proceedings Article
VoxML: A Visualization Modeling Language
TL;DR: VoxML is intended to overcome the limitations of existing 3D visual markup languages by allowing for the encoding of a broad range of semantic knowledge that can be exploited by a variety of systems and platforms, leading to multimodal simulations of real-world scenarios using conceptual objects that represent their semantic values.
Proceedings Article
VoxSim: A Visual Platform for Modeling Motion Language
TL;DR: This paper presents a working system that generates animations in real time which, over a test set, correlate with human conceptions of the events described; it discusses challenges encountered and describes the solutions implemented.
Journal ArticleDOI
Embodied Human Computer Interaction
TL;DR: This paper describes a simulation platform for building Embodied Human Computer Interactions (EHCI), VoxWorld, which enables multimodal dialogue systems that communicate through language, gesture, action, facial expressions, and gaze tracking, in the context of task-oriented interactions.
Posted Content
Multimodal Semantic Simulations of Linguistically Underspecified Motion Events
TL;DR: VoxSim uses a rich formal model of events and their participants to generate simulations that satisfy the minimal constraints entailed by an utterance and its minimal model, relying on real-world semantic knowledge of physical objects and motion events.
Journal ArticleDOI
Combining Deep Learning and Qualitative Spatial Reasoning to Learn Complex Structures from Sparse Examples with Noise
TL;DR: In this article, the authors present an approach for introducing new spatial structures to an AI agent, combining deep learning over qualitative spatial relations with various heuristic search algorithms. They verify that the agent has learned a concept by observing its virtual block-building activities, in which it ranks each potential subsequent action toward building the learned concept.