Kevin Desai

Researcher at University of Texas at San Antonio

Publications: 24
Citations: 142

Kevin Desai is an academic researcher from the University of Texas at San Antonio. The author has contributed to research in topics including computer science and autoencoders. The author has an h-index of 4 and has co-authored 14 publications receiving 65 citations. Previous affiliations of Kevin Desai include the University of Texas at Dallas.

Papers
Proceedings Article

Augmented reality-based exergames for rehabilitation

TL;DR: An augmented reality-based rehabilitation system built around four interactive, cognitive, and fun exergames (exercise + gaming). The system uses low-cost RGB-D cameras such as the Microsoft Kinect v2 to capture the person, extract them from the captured data, generate a 3D model, and immerse it in different interactive virtual environments.
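As a rough illustration of the person-extraction step described above, here is a minimal depth-band segmentation sketch in Python. The thresholds, array shapes, and function name are illustrative assumptions, not the paper's actual pipeline.

import numpy as np

def extract_person(rgb, depth, near_mm=500, far_mm=2500):
    """Keep only pixels inside a plausible player depth band.

    rgb   : (H, W, 3) uint8 color frame
    depth : (H, W) uint16 depth frame in millimeters (Kinect v2 style)
    """
    # NOTE: the depth band is an assumed example, not the paper's values
    mask = (depth > near_mm) & (depth < far_mm)   # pixels at player distance
    extracted = np.zeros_like(rgb)
    extracted[mask] = rgb[mask]                   # copy only player pixels
    return extracted, mask

In a full system the mask would feed the 3D model generation stage; here it simply blanks out everything except the captured person.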
Proceedings Article

Experiences with Multi-modal Collaborative Virtual Laboratory (MMCVL)

TL;DR: Results from the user study show the system to be fun and realistic, while remaining engaging and motivating for learning laboratory experiments.
Proceedings Article

Generalized Zero-Shot Learning Using Multimodal Variational Auto-Encoder With Semantic Concepts

TL;DR: A Multimodal Variational Auto-Encoder (M-VAE) that learns a shared latent space of image features and semantic embeddings, outperforming current state-of-the-art approaches for generalized zero-shot learning.
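As a sketch of the general idea (not the authors' exact architecture), the snippet below builds one small VAE per modality in PyTorch; layer sizes, feature dimensions, and the alignment strategy are illustrative assumptions.

import torch
import torch.nn as nn

class ModalityVAE(nn.Module):
    # One encoder/decoder pair per modality; matching pairs from the two
    # modalities can then be encouraged to share a latent space in training.
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

image_vae = ModalityVAE(in_dim=2048)  # e.g. CNN image features (assumed size)
attr_vae = ModalityVAE(in_dim=312)    # e.g. class attribute vectors (assumed size)

Aligning the two posteriors for matching image/attribute pairs (for example, penalizing the distance between their mu vectors) is one common way to obtain the shared latent space used for generalized zero-shot classification.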
Proceedings Article

Skeleton-based continuous extrinsic calibration of multiple RGB-D kinect cameras

TL;DR: Evaluations show that the skeleton-based approach calibrates multiple RGB-D Kinect cameras in a closed setup automatically, without any intervention, within a few seconds, and provides fast, accurate, and continuous calibration as long as a human is moving around in the captured scene.
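The geometric core of any such skeleton-based calibration is recovering the rigid transform between two cameras from corresponding 3D joints. The sketch below uses the standard SVD-based (Kabsch) least-squares alignment; it illustrates that step only and is not the paper's exact procedure.

import numpy as np

def rigid_transform(joints_a, joints_b):
    """Find R, t such that R @ p + t maps camera-A joints onto camera-B joints.

    joints_a, joints_b : (N, 3) arrays of the same skeleton joints
    observed by cameras A and B.
    """
    ca, cb = joints_a.mean(axis=0), joints_b.mean(axis=0)  # centroids
    H = (joints_a - ca).T @ (joints_b - cb)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

Re-running this on freshly detected joints as a person moves around is what allows the calibration to stay continuous and self-correcting.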
Proceedings Article

Cybersickness Prediction from Integrated HMD’s Sensors: A Multimodal Deep Fusion Approach using Eye-tracking and Head-tracking Data

TL;DR: In this paper, a deep fusion network was proposed to predict cybersickness severity from heterogeneous data readily available from the integrated HMD sensors, including eye-tracking and head-tracking data.
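A minimal PyTorch sketch of such a fusion network follows: one recurrent branch per sensor stream, concatenation, and a small classifier head. Input dimensions, layer sizes, and the number of severity classes are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, eye_dim=4, head_dim=6, hidden=32, classes=4):
        super().__init__()
        # One recurrent encoder per HMD sensor stream
        self.eye_rnn = nn.LSTM(eye_dim, hidden, batch_first=True)
        self.head_rnn = nn.LSTM(head_dim, hidden, batch_first=True)
        # Late fusion by concatenation, then a small classifier
        self.classifier = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, classes))

    def forward(self, eye_seq, head_seq):
        # eye_seq: (B, T, eye_dim), head_seq: (B, T, head_dim)
        _, (h_eye, _) = self.eye_rnn(eye_seq)
        _, (h_head, _) = self.head_rnn(head_seq)
        fused = torch.cat([h_eye[-1], h_head[-1]], dim=-1)
        return self.classifier(fused)  # cybersickness severity logits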