
Hamid Reza Vaezi Joze

Researcher at Microsoft

Publications: 25
Citations: 786

Hamid Reza Vaezi Joze is an academic researcher from Microsoft. He has contributed to research topics including Pixel and Real image, has an h-index of 12, and has co-authored 25 publications receiving 514 citations. His previous affiliations include Johns Hopkins University and Simon Fraser University.

Papers
Journal Article (DOI)

Exemplar-Based Color Constancy and Multiple Illumination

TL;DR: A technique is proposed to handle the multiple-illuminant situation; it is shown to perform very well on standard datasets compared to current color constancy algorithms, including when a model learned on one image dataset is applied to test images from a different dataset.
Posted Content

MMTM: Multimodal Transfer Module for CNN Fusion

TL;DR: A simple neural network module, the Multimodal Transfer Module (MMTM), leverages knowledge from multiple modalities in convolutional neural networks and improves the recognition accuracy of well-known multimodal networks.
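The summary above can be made concrete with a minimal NumPy sketch of squeeze-and-excitation-style multimodal gating in the spirit of MMTM. This is not the authors' implementation: the function name, weight shapes, and the 2·sigmoid gate scaling are assumptions made here for illustration (a real module would learn `Wz`, `Wa`, `Wb` end-to-end inside the two CNN streams).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mmtm_fuse(feat_a, feat_b, Wz, Wa, Wb):
    """Illustrative multimodal gating: feat_a is (Ca, H, W), feat_b is (Cb, H, W)."""
    # Squeeze: global average pooling over spatial dims -> one descriptor per channel
    sa = feat_a.mean(axis=(1, 2))                      # shape (Ca,)
    sb = feat_b.mean(axis=(1, 2))                      # shape (Cb,)
    # Joint representation computed from both modalities (ReLU projection)
    z = np.maximum(np.concatenate([sa, sb]) @ Wz, 0.0)
    # Per-modality excitation gates; 2*sigmoid keeps the average gate near 1
    ga = 2.0 * sigmoid(z @ Wa)                         # shape (Ca,)
    gb = 2.0 * sigmoid(z @ Wb)                         # shape (Cb,)
    # Recalibrate each modality's channels using knowledge from the other
    return feat_a * ga[:, None, None], feat_b * gb[:, None, None]
```

Because the gates are computed from the concatenated descriptors of both streams, each modality's channels are rescaled using information from the other, which is the fusion idea the TL;DR describes.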
Proceedings Article

MS-ASL: A Large-Scale Data Set and Benchmark for Understanding American Sign Language.

TL;DR: In this paper, a large-scale ASL data set is proposed that covers over 200 signers, signer-independent sets, challenging and unconstrained recording conditions, and a large class count of 1,000 signs.
Proceedings Article (DOI)

Improving the Performance of Unimodal Dynamic Hand-Gesture Recognition With Multimodal Training

TL;DR: This work presents an efficient approach for leveraging knowledge from multiple modalities when training unimodal 3D convolutional neural networks (3D-CNNs) for dynamic hand-gesture recognition, and introduces a "spatiotemporal semantic alignment" (SSA) loss to align the content of features from different networks.
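To illustrate the general idea of aligning feature content between two networks, here is a generic correlation-alignment loss in NumPy. This is a stand-in, not the paper's SSA formulation: the function name and the choice of penalizing mismatched channel-correlation structure are assumptions made here purely to show what "aligning the content of features from different networks" can look like.

```python
import numpy as np

def feature_alignment_loss(f_uni, f_multi, eps=1e-8):
    """Generic content-alignment loss between two feature tensors of shape
    (C, ...) with matching channel count C (illustrative stand-in for SSA)."""
    # Flatten all spatiotemporal dims, keeping one row per channel: (C, N)
    a = f_uni.reshape(f_uni.shape[0], -1)
    b = f_multi.reshape(f_multi.shape[0], -1)
    # L2-normalize each channel's response pattern so scale differences cancel
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + eps)
    # Penalize mismatch between the two networks' channel-correlation matrices
    return float(np.mean((a @ a.T - b @ b.T) ** 2))
```

A loss of this shape is zero when both networks organize their feature channels identically, so minimizing it pushes the unimodal student's internal representation toward the multimodal teacher's.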
Posted Content

Improving the Performance of Unimodal Dynamic Hand-Gesture Recognition with Multimodal Training

TL;DR: In this article, knowledge from multiple modalities is leveraged in training unimodal 3D convolutional neural networks (3D-CNNs) for the task of dynamic hand-gesture recognition.