Ashish Jaiswal
Researcher at University of Texas at Arlington
Publications - 17
Citations - 879
Ashish Jaiswal is an academic researcher from the University of Texas at Arlington. The author has contributed to research in the topics of computer science and cognition, has an h-index of 5, and has co-authored 10 publications receiving 91 citations.
Papers
Posted Content
A Survey on Contrastive Self-supervised Learning
TL;DR: This paper provides an extensive review of self-supervised methods that follow the contrastive approach, explaining the pretext tasks commonly used in a contrastive learning setup and the different architectures that have been proposed so far.
Journal ArticleDOI
A Survey on Contrastive Self-Supervised Learning
TL;DR: In contrastive self-supervised learning, as discussed by the authors, embeddings of augmented versions of the same sample are pulled close to each other while embeddings of different samples are pushed apart, in order to learn representations useful for several downstream tasks.
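The pull-together/push-apart objective described above is commonly formalized as a normalized temperature-scaled cross-entropy (NT-Xent) loss, as popularized by methods such as SimCLR. A minimal NumPy sketch of that loss follows; the function name, array shapes, and temperature value are illustrative assumptions, not taken from the surveyed papers.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (N, d) embeddings of two augmented views of the same N
    samples. Row i of z1 and row i of z2 form a positive pair; every
    other row in the batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / temperature                         # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = len(z1)
    # Each row's positive partner: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each row's similarities against its positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()

# Usage: two views of the same batch should score lower loss than
# a batch paired with unrelated embeddings.
rng = np.random.default_rng(0)
view1 = rng.normal(size=(8, 16))
aligned = nt_xent_loss(view1, view1.copy())
random_pairing = nt_xent_loss(view1, rng.normal(size=(8, 16)))
```

In practice this loss is minimized with a deep encoder producing `z1` and `z2` from two random augmentations of each input; the sketch only shows the objective itself on fixed embeddings.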
Journal ArticleDOI
A Review of Extended Reality (XR) Technologies for Manufacturing Training
Sanika Doolani,Callen Wessels,Varun Kanal,Christos Sevastopoulos,Ashish Jaiswal,Harish Ram Nambiappan,Fillia Makedon +6 more
TL;DR: This paper reviews the current state of the art in the use of XR technologies for training personnel in the field of manufacturing and presents several key application domains where XR is currently applied, notably maintenance training and assembly tasks.
Proceedings ArticleDOI
HAND-REHA: dynamic hand gesture recognition for game-based wrist rehabilitation
TL;DR: A novel game-based system for wrist rehabilitation that automatically recognizes pre-defined hand gestures using a web camera to control an avatar in a three-dimensional maze-run game.
Proceedings ArticleDOI
A Multi-modal System to Assess Cognition in Children from their Physical Movements
Ashwin Ramesh Babu,Mohammad Zaki Zadeh,Ashish Jaiswal,Alexis Lueckenhoff,Maria Kyrarini,Fillia Makedon +5 more
TL;DR: A computer-vision-based assessment system that employs an attention-based fusion mechanism to combine multiple modalities, such as optical flow, human poses, and objects in the scene, to predict a child's actions; it outperforms other state-of-the-art approaches.