scispace - formally typeset

Syed Waqas Zamir

Researcher at Yonsei University

Publications: 43
Citations: 2,208

Syed Waqas Zamir is an academic researcher from Yonsei University. The author has contributed to research in topics including Gamut and Real image. The author has an h-index of 14, has co-authored 36 publications, and has received 668 citations. Previous affiliations of Syed Waqas Zamir include the COMSATS Institute of Information Technology and Pompeu Fabra University.

Papers
Proceedings ArticleDOI

Multi-Stage Progressive Image Restoration

TL;DR: MPRNet proposes a multi-stage architecture that progressively learns restoration functions for degraded inputs, breaking the overall recovery process into more manageable steps. It also introduces a novel per-pixel adaptive design that leverages in-situ supervised attention to reweight local features.
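The per-pixel reweighting idea can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not MPRNet's actual module: the function name `supervised_attention_reweight` and the identity skip path are hypothetical, and the attention logits stand in for a map that, in the paper, is predicted from an intermediate restored image.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def supervised_attention_reweight(features, attention_logits):
    """Reweight per-pixel features with a sigmoid attention mask.

    features:         (H, W, C) feature map from one stage
    attention_logits: (H, W, 1) per-pixel logits (in MPRNet these come
                      from an intermediate restored image, supervised
                      by the ground truth)
    """
    mask = sigmoid(attention_logits)       # per-pixel weights in (0, 1)
    return features * mask + features      # attended features + identity path

feats = np.random.rand(4, 4, 8)
logits = np.zeros((4, 4, 1))               # sigmoid(0) = 0.5 everywhere
out = supervised_attention_reweight(feats, logits)
```

With zero logits every pixel is weighted 0.5, so the output is simply 1.5x the input features; informative logits would instead emphasize hard pixels and suppress easy ones.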
Book ChapterDOI

Learning Enriched Features for Real Image Restoration and Enhancement

TL;DR: MIRNet proposes a multi-scale residual block containing several key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) spatial and channel attention mechanisms for capturing contextual information, and (d) attention-based multi-scale feature aggregation.
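Of these elements, channel attention is the most self-contained to sketch. Below is a squeeze-and-excitation style gate in plain NumPy, offered only as a rough analogue of item (c); the function name and weight shapes are assumptions, not MIRNet's implementation.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention: global average
    pool, a small bottleneck MLP, then per-channel sigmoid gating.

    feature_map: (H, W, C)
    w1: (C, C // r) bottleneck weights, w2: (C // r, C) expansion weights
    """
    squeezed = feature_map.mean(axis=(0, 1))         # (C,) global context
    hidden = np.maximum(squeezed @ w1, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # per-channel weights in (0, 1)
    return feature_map * gates                       # broadcast gate over H, W
```

Because the gate is computed from a global pool, each channel is scaled by a single scalar that summarizes image-wide context, which is what lets the block emphasize informative channels cheaply.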
Proceedings ArticleDOI

CycleISP: Real Image Restoration via Improved Data Synthesis

TL;DR: CycleISP is a framework that models the camera imaging pipeline in both forward and reverse directions, allowing it to produce any number of realistic image pairs for denoising in both RAW and sRGB spaces.
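The forward/reverse idea can be caricatured with a toy pipeline: map sRGB back to a pseudo-RAW domain, inject signal-dependent noise there, and re-render to sRGB. This NumPy sketch uses a bare gamma curve as a stand-in for the learned ISP; the function names, gamma value, and noise parameters are all illustrative assumptions, not CycleISP's model.

```python
import numpy as np

def srgb_to_raw(img, gamma=2.2):
    """Toy reverse ISP: invert a gamma curve (sRGB -> pseudo-RAW)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def raw_to_srgb(raw, gamma=2.2):
    """Toy forward ISP: re-apply the gamma curve (pseudo-RAW -> sRGB)."""
    return np.clip(raw, 0.0, 1.0) ** (1.0 / gamma)

def synthesize_noisy_pair(clean_srgb, shot=0.01, read=0.002, rng=None):
    """Build a (noisy, clean) training pair by adding signal-dependent
    shot noise plus read noise in the (toy) RAW domain."""
    rng = np.random.default_rng(rng)
    raw = srgb_to_raw(clean_srgb)
    noisy_raw = raw + rng.normal(0.0, np.sqrt(shot * raw + read ** 2))
    return raw_to_srgb(noisy_raw), clean_srgb
```

The key point the sketch preserves is that noise is injected in the RAW domain, where its statistics are simple, and only then passed through the forward pipeline, so the synthesized sRGB noise inherits realistic signal dependence.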
Proceedings ArticleDOI

Striking the Right Balance With Uncertainty

TL;DR: This paper demonstrates that Bayesian uncertainty estimates directly correlate with the rarity of classes and the difficulty of individual samples. It presents a novel framework for uncertainty-based class imbalance learning that efficiently uses sample and class uncertainty information to learn robust features and more generalizable classifiers.
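A minimal sketch of the reweighting intuition, assuming a simple monotone mapping from uncertainty to loss weight (the mapping, the `alpha` parameter, and the normalization are hypothetical choices, not the paper's formulation):

```python
import numpy as np

def uncertainty_weights(uncertainties, alpha=1.0):
    """Map per-sample uncertainty estimates to loss weights: rarer or
    harder samples (higher uncertainty) get larger weights, normalized
    so the weights average to 1 over the batch."""
    u = np.asarray(uncertainties, dtype=float)
    w = 1.0 + alpha * u                  # monotone in uncertainty
    return w * len(w) / w.sum()          # mean weight == 1

def reweighted_loss(per_sample_losses, uncertainties):
    """Batch loss with uncertainty-based per-sample weighting."""
    w = uncertainty_weights(uncertainties)
    return float(np.mean(w * np.asarray(per_sample_losses, dtype=float)))
```

Normalizing the weights to mean 1 keeps the overall loss scale (and hence the effective learning rate) unchanged while still shifting gradient mass toward uncertain, typically minority-class, samples.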
Posted Content

Transformers in Vision: A Survey

TL;DR: Transformer networks enable modeling long-range dependencies between input sequence elements and support parallel processing of sequences, in contrast to recurrent networks such as long short-term memory (LSTM).
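Both properties come from scaled dot-product self-attention, which a short NumPy sketch makes concrete (a single head with square projection matrices, assumed here for simplicity):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention: every token attends to every
    other token, so long-range dependencies are one matrix product away
    and all positions are computed in parallel.

    x: (T, D) token embeddings; wq, wk, wv: (D, D) projections.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[-1])   # (T, T) pairwise affinities
    return softmax(scores) @ v                # mix values by attention weights
```

Unlike an LSTM, which must step through the sequence token by token, every entry of the (T, T) score matrix is computed at once, which is what the survey's contrast with recurrent networks refers to.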