Author

Mahmoud Afifi

Other affiliations: Adobe Systems, Samsung, Assiut University
Bio: Mahmoud Afifi is an academic researcher at York University. His research focuses on color balance and color constancy. He has an h-index of 13 and has co-authored 58 publications receiving 596 citations.

Papers published on a yearly basis

Papers
Journal Article•DOI•
TL;DR: In this paper, the combination of isolated facial components and a contextual feature called foggy face is used to train deep convolutional neural networks followed by an AdaBoost-based score fusion to infer the final gender class.

85 citations

Proceedings Article•DOI•
15 Jun 2019
TL;DR: This paper introduces a k-nearest neighbor strategy that is able to compute a nonlinear color mapping function to correct the image's colors and shows the method is highly effective and generalizes well to camera models not in the training set.
Abstract: This paper focuses on correcting a camera image that has been improperly white-balanced. This situation occurs when a camera's auto white balance fails or when the wrong manual white-balance setting is used. Even after decades of computational color constancy research, there are no effective solutions to this problem. The challenge lies not in identifying what the correct white balance should have been, but in the fact that the in-camera white-balance procedure is followed by several camera-specific nonlinear color manipulations that make it challenging to correct the image's colors in post-processing. This paper introduces the first method to explicitly address this problem. Our method is enabled by a dataset of over 65,000 pairs of incorrectly white-balanced images and their corresponding correctly white-balanced images. Using this dataset, we introduce a k-nearest neighbor strategy that is able to compute a nonlinear color mapping function to correct the image's colors. We show our method is highly effective and generalizes well to camera models not in the training set.
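The k-nearest neighbor idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm or data; the feature vectors, the 3x3 transforms, and the inverse-distance weighting below are illustrative stand-ins (the paper uses richer color features and higher-degree polynomial mappings learned from its 65,000 image pairs).

```python
import numpy as np

# Hedged sketch of a KNN-style color-correction strategy: each training image
# is summarized by a color feature vector and paired with a color transform
# that maps its wrongly white-balanced colors to the correct ones. At test
# time, the transforms of the k nearest training features are blended.

rng = np.random.default_rng(0)

n_train, k = 100, 5
train_features = rng.random((n_train, 8))       # stand-in color histograms
train_transforms = rng.random((n_train, 3, 3))  # stand-in per-image 3x3 maps

def correct_colors(feature, pixels):
    """Blend the k nearest training transforms and apply them to Nx3 pixels."""
    dists = np.linalg.norm(train_features - feature, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)      # closer neighbors weigh more
    weights /= weights.sum()
    # Weighted average of the neighbors' color transforms.
    M = np.tensordot(weights, train_transforms[nearest], axes=1)
    return pixels @ M.T

pixels = rng.random((10, 3))                     # toy "image" as Nx3 RGB
out = correct_colors(rng.random(8), pixels)
print(out.shape)  # (10, 3)
```

The appeal of this scheme, as the abstract suggests, is that the nonlinear camera-specific rendering never needs to be modeled explicitly; it is absorbed into the per-image mappings stored with the training pairs.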

84 citations

Journal Article•DOI•
TL;DR: This paper proposes a two-stream convolutional neural network (CNN) that accepts hand images as input and predicts gender from them; the trained model is then used as a feature extractor feeding a set of support vector machine classifiers for biometric identification.
Abstract: The human hand not only possesses distinctive features for gender information, it is also considered one of the primary biometric traits used to identify a person. Unlike face images, which are usually unconstrained, an advantage of hand images is that they are usually captured under a controlled position. Most state-of-the-art methods that rely on hand images for gender recognition or biometric identification employ handcrafted features, either to train an off-the-shelf classifier or to feed a similarity metric for biometric identification. In this work, we propose a deep learning-based method to tackle the gender recognition and biometric identification problems. Specifically, we design a two-stream convolutional neural network (CNN) that accepts hand images as input and predicts gender information from these hand images. This trained model is then used as a feature extractor to feed a set of support vector machine classifiers for biometric identification. As part of this effort, we propose a large dataset of human hand images, 11K Hands, which contains dorsal and palmar sides of human hand images with detailed ground-truth information for different problems, including gender recognition and biometric identification. By leveraging thousands of hand images, we effectively train our CNN-based model, achieving promising results. One of our findings is that the dorsal side of the human hand has distinctive features similar to, if not better than, those available on the palmar side. To facilitate access, the 11K Hands dataset, the trained CNN models, and our Matlab source code are available at ( https://goo.gl/rQJndd ).
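The two-stream design described above can be sketched without any deep-learning framework. This is a heavily simplified stand-in, not the paper's CNN: each "stream" below is a hand-written feature function, and the linear classifier weights are random placeholders, but the structure (two branches, feature fusion, one head for gender and the same fused feature reused as a biometric descriptor) mirrors the description.

```python
import numpy as np

# Hedged, framework-free sketch of a two-stream feature extractor.
rng = np.random.default_rng(1)

def stream_a(img):
    # Stand-in for a CNN branch on the raw hand image: coarse intensity stats.
    return np.array([img.mean(), img.std()])

def stream_b(img):
    # Stand-in for a CNN branch on high-frequency detail (edges/texture).
    grad = np.abs(np.diff(img, axis=0))
    return np.array([grad.mean(), grad.max()])

def extract_feature(img):
    # Fusion: concatenate both streams' outputs into one feature vector.
    return np.concatenate([stream_a(img), stream_b(img)])

w = rng.standard_normal(4)           # toy linear "gender" classifier weights

img = rng.random((64, 64))           # toy grayscale hand image
feat = extract_feature(img)
gender_score = feat @ w              # classification head
descriptor = feat                    # same feature reused for identification
print(feat.shape)  # (4,)
```

In the paper, the fused CNN feature plays the role of `descriptor` here: it is handed to SVM classifiers for identification rather than being compared directly.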

83 citations

Proceedings Article•DOI•
Mahmoud Afifi, Michael S. Brown
14 Jun 2020
TL;DR: A deep neural network (DNN) architecture trained in an end-to-end manner to learn the correct white balance for sRGB images that are rendered with the incorrect white balance is introduced.
Abstract: We introduce a deep learning approach to realistically edit an sRGB image's white balance. Cameras capture sensor images that are rendered by their integrated signal processor (ISP) to a standard RGB (sRGB) color space encoding. The ISP rendering begins with a white-balance procedure that is used to remove the color cast of the scene's illumination. The ISP then applies a series of nonlinear color manipulations to enhance the visual quality of the final sRGB image. Recent work by [3] showed that sRGB images that were rendered with the incorrect white balance cannot be easily corrected due to the ISP's nonlinear rendering. The work in [3] proposed a k-nearest neighbor (KNN) solution based on tens of thousands of image pairs. We propose to solve this problem with a deep neural network (DNN) architecture trained in an end-to-end manner to learn the correct white balance. Our DNN maps an input image to two additional white-balance settings corresponding to indoor and outdoor illuminations. Our solution not only is more accurate than the KNN approach in terms of correcting a wrong white-balance setting but also provides the user the freedom to edit the white balance in the sRGB image to other illumination settings.
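The editing interface the abstract describes, a model that emits the image under several white-balance settings and lets the user blend between them, can be sketched as follows. The per-setting RGB gains and the setting names are illustrative stand-ins, not values or outputs from the paper's DNN.

```python
import numpy as np

# Hedged sketch of multi-setting white-balance editing: a stand-in "model"
# renders one output image per white-balance setting, and the user edit is
# an interpolation between two of those renderings.

SETTINGS = {
    "awb":     np.array([1.00, 1.00, 1.00]),  # "correct" white balance
    "indoor":  np.array([1.20, 1.00, 0.80]),  # warm incandescent look
    "outdoor": np.array([0.85, 1.00, 1.15]),  # cool shade look
}

def render_settings(img):
    """Stand-in for the DNN: one output image per white-balance setting."""
    return {name: np.clip(img * g, 0.0, 1.0) for name, g in SETTINGS.items()}

def blend(outputs, a="indoor", b="outdoor", t=0.5):
    """User edit: linearly interpolate between two rendered settings."""
    return (1 - t) * outputs[a] + t * outputs[b]

img = np.full((4, 4, 3), 0.5)        # toy sRGB image, HxWx3 in [0, 1]
outs = render_settings(img)
edited = blend(outs, t=0.25)
print(edited.shape)  # (4, 4, 3)
```

The key difference from a naive per-channel gain, per the abstract, is that the real DNN learns the full nonlinear re-rendering, so its per-setting outputs are not simple scalings of the input.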

83 citations

Proceedings Article•DOI•
01 Jun 2020
TL;DR: This paper reviews the NTIRE 2020 challenge on real image denoising with focus on the newly introduced dataset, the proposed methods and their results, based on the SIDD benchmark.
Abstract: This paper reviews the NTIRE 2020 challenge on real image denoising, with a focus on the newly introduced dataset, the proposed methods, and their results. The challenge is a new version of the previous NTIRE 2019 challenge on real image denoising, which was based on the SIDD benchmark. This challenge is based on newly collected validation and testing image datasets and is hence named SIDD+. The challenge has two tracks for quantitatively evaluating image denoising performance in (1) the Bayer-pattern rawRGB and (2) the standard RGB (sRGB) color spaces. Each track had ~250 registered participants. A total of 22 teams, proposing 24 methods, competed in the final phase of the challenge. The methods proposed by the participating teams represent the current state-of-the-art in image denoising targeting real noisy images. The newly collected SIDD+ datasets are publicly available at: https://bit.ly/siddplus_data.
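The abstract does not spell out the scoring details, but quantitative denoising evaluation of this kind is conventionally reported with PSNR (and SSIM). As a generic illustration only, not the official SIDD+ evaluation code, PSNR is:

```python
import numpy as np

def psnr(clean, denoised, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((clean - denoised) ** 2)
    if mse == 0:
        return float("inf")          # identical images: perfect score
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1                  # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 2))  # 20.0
```

Higher is better; halving the per-pixel error raises PSNR by about 6 dB, which is why small MSE improvements translate into visible leaderboard gaps.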

72 citations


Cited by
Reference Entry•DOI•
15 Oct 2004

2,118 citations

Journal Article•DOI•
TL;DR: Visualization results demonstrate that, compared with a CNN without the gate unit, ACNNs are capable of shifting attention from occluded patches to other related but unobstructed ones, and they outperform other state-of-the-art methods on several widely used in-the-lab facial expression datasets under the cross-dataset evaluation protocol.
Abstract: Facial expression recognition in the wild is challenging due to various unconstrained conditions. Although existing facial expression classifiers have been almost perfect at analyzing constrained frontal faces, they fail to perform well on partially occluded faces that are common in the wild. In this paper, we propose a convolutional neural network (CNN) with attention mechanism (ACNN) that can perceive the occlusion regions of the face and focus on the most discriminative un-occluded regions. ACNN is an end-to-end learning framework. It combines multiple representations from facial regions of interest (ROIs). Each representation is weighted via a proposed gate unit that computes an adaptive weight from the region itself according to its unobstructedness and importance. Considering different ROIs, we introduce two versions of ACNN: patch-based ACNN (pACNN) and global-local-based ACNN (gACNN). pACNN only pays attention to local facial patches. gACNN integrates local representations at patch level with global representation at image level. The proposed ACNNs are evaluated on both real and synthetic occlusions, including a self-collected facial expression dataset with real-world occlusions, the two largest in-the-wild facial expression datasets (RAF-DB and AffectNet), and their modifications with synthesized facial occlusions. Experimental results show that ACNNs improve recognition accuracy on both non-occluded and occluded faces. Visualization results demonstrate that, compared with a CNN without the gate unit, ACNNs are capable of shifting attention from occluded patches to other related but unobstructed ones. ACNNs also outperform other state-of-the-art methods on several widely used in-the-lab facial expression datasets under the cross-dataset evaluation protocol.

536 citations

Book•
01 Jan 1997
TL;DR: This book provides a good overview of the most important and relevant literature on color appearance models and offers insight into the preferred solutions.
Abstract: Color science is a multidisciplinary field with broad applications in industries such as digital imaging, coatings and textiles, food, lighting, archiving, art, and fashion. Accurate definition and measurement of color appearance is a challenging task that directly affects color reproduction in such applications. Color Appearance Models addresses those challenges and offers insight into the preferred solutions. Extensive research on the human visual system (HVS) and color vision has been performed in the last century, and this book contains a good overview of the most important and relevant literature regarding color appearance models.

496 citations

Book Chapter•DOI•
28 Jan 2005

328 citations

01 Jan 2016
Handbook of Biometrics

275 citations