Shalini Ghosh
Researcher at Samsung
Publications - 24
Citations - 940
Shalini Ghosh is an academic researcher from Samsung. The author has contributed to research on topics including object detection and language models. The author has an h-index of 10 and has co-authored 24 publications receiving 560 citations. Previous affiliations of Shalini Ghosh include Amazon.com.
Papers
Posted Content
Contextual LSTM (CLSTM) models for Large scale NLP tasks.
TL;DR: Experimental results indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models on these tasks, demonstrating the significant benefit of using context appropriately in natural language (NL) tasks.
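The idea summarized above, feeding topic features alongside word features into an LSTM, can be sketched as a single recurrent step whose input is the concatenation of a word embedding and a topic vector. This is a minimal illustrative sketch, not the paper's implementation; all dimensions and names here are hypothetical.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One vanilla LSTM step; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = (1 / (1 + np.exp(-z[k * n:(k + 1) * n])) for k in range(3))
    g = np.tanh(z[3 * n:])
    c_new = f * c + i * g        # update cell state
    h_new = o * np.tanh(c_new)   # emit hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d_word, d_topic, d_hid = 8, 4, 6
word_emb = rng.normal(size=d_word)
topic_vec = rng.normal(size=d_topic)          # contextual (topic) feature
x = np.concatenate([word_emb, topic_vec])     # "contextual" LSTM input

W = rng.normal(size=(4 * d_hid, d_word + d_topic))
U = rng.normal(size=(4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)
h, c = np.zeros(d_hid), np.zeros(d_hid)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (6,)
```

The only change relative to a plain LSTM is the wider input: the recurrence itself is untouched, so topic context can be added to an existing model by enlarging the input projection.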
Proceedings ArticleDOI
Class-incremental Learning via Deep Model Consolidation
Junting Zhang, Jie Zhang, Shalini Ghosh, Dawei Li, Serafettin Tasci, Larry Heck, Heming Zhang, C.-C. Jay Kuo, +7 more
TL;DR: Deep Model Consolidation (DMC), as discussed by the authors, first trains a separate model only for the new classes, and then combines the two individual models trained on the two distinct sets of classes (old classes and new classes) via a novel double distillation training objective.
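The double distillation objective described above can be sketched as a regression of the consolidated model's logits toward the (mean-centered) logits of the two specialist models, each on its own class subset. This is a hedged sketch of the general idea, assuming an L2 loss on normalized logits; the function name and the exact normalization are illustrative, not taken from the paper.

```python
import numpy as np

def double_distillation_loss(logits_c, logits_old, logits_new):
    """Sketch of a double-distillation objective.

    logits_c:   consolidated model's logits over old + new classes
    logits_old: old-class specialist's logits (old classes only)
    logits_new: new-class specialist's logits (new classes only)
    """
    n_old = logits_old.shape[-1]
    # Mean-center each specialist's logits so only relative scores matter.
    target = np.concatenate([
        logits_old - logits_old.mean(-1, keepdims=True),
        logits_new - logits_new.mean(-1, keepdims=True),
    ], axis=-1)
    pred_old = logits_c[..., :n_old]
    pred_new = logits_c[..., n_old:]
    pred = np.concatenate([
        pred_old - pred_old.mean(-1, keepdims=True),
        pred_new - pred_new.mean(-1, keepdims=True),
    ], axis=-1)
    return float(np.mean((pred - target) ** 2))
```

In a training loop this loss would be computed on auxiliary (unlabeled) data, since the TL;DR notes the method works even without the original training data.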
Proceedings ArticleDOI
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded
Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh, +7 more
TL;DR: In this article, the alignment between human attention maps and gradient-based network importance is optimized to encourage deep networks to be sensitive to the same input regions as humans, improving visual grounding.
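The alignment described above can be sketched as a penalty between a gradient-based importance map and a human attention map over the same regions. HINT itself uses a ranking loss over region importances; this hypothetical sketch substitutes a simple L2 distance between normalized maps for illustration, and the function name is an assumption.

```python
import numpy as np

def attention_alignment_loss(grad_importance, human_attention, eps=1e-8):
    """Sketch: penalize mismatch between a network's gradient-based
    importance map and a human attention map (both flattened arrays).
    HINT uses a ranking loss; plain L2 on normalized maps shown here."""
    g = grad_importance / (np.linalg.norm(grad_importance) + eps)
    h = human_attention / (np.linalg.norm(human_attention) + eps)
    return float(np.sum((g - h) ** 2))

# Example: perfectly aligned maps incur zero penalty.
human = np.array([0.1, 0.7, 0.2])
print(attention_alignment_loss(human.copy(), human))  # 0.0
```

Adding such a term to the task loss pushes the network to attend to human-annotated regions, which is the grounding effect the TL;DR describes.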
Posted Content
Class-incremental Learning via Deep Model Consolidation
Junting Zhang, Jie Zhang, Shalini Ghosh, Dawei Li, Serafettin Tasci, Larry Heck, Heming Zhang, C.-C. Jay Kuo, +7 more
TL;DR: A class-incremental learning paradigm called Deep Model Consolidation (DMC), which works well even when the original training data is not available and demonstrates significantly better performance on image classification and object detection in the single-headed incremental learning setting.
Posted Content
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded.
Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh, +7 more
TL;DR: This work proposes a generic approach called Human Importance-aware Network Tuning (HINT), which effectively leverages human demonstrations to improve visual grounding and encourages deep networks to be sensitive to the same input regions as humans.