Author
Hazim Kemal Ekenel
Other affiliations: Sabancı University, Boğaziçi University, École Polytechnique Fédérale de Lausanne
Bio: Hazim Kemal Ekenel is an academic researcher at Istanbul Technical University. His research focuses on facial recognition systems and convolutional neural networks. He has an h-index of 30 and has co-authored 215 publications receiving 3,554 citations. Previous affiliations of Hazim Kemal Ekenel include Sabancı University and Boğaziçi University.
Papers
18 Jun 2018
TL;DR: This article presents Cycle-Dehaze, an end-to-end network for single image dehazing that does not require pairs of hazy and corresponding ground-truth images for training.
Abstract: In this paper, we present an end-to-end network, called Cycle-Dehaze, for the single image dehazing problem, which does not require pairs of hazy and corresponding ground truth images for training. That is, we train the network by feeding clean and hazy images in an unpaired manner. Moreover, the proposed approach does not rely on estimation of the atmospheric scattering model parameters. Our method enhances the CycleGAN formulation by combining cycle-consistency and perceptual losses in order to improve the quality of textural information recovery and generate visually better haze-free images. Typically, deep learning models for dehazing take low-resolution images as input and produce low-resolution outputs. However, in the NTIRE 2018 challenge on single image dehazing, high-resolution images were provided. Therefore, we apply bicubic downscaling. After obtaining low-resolution outputs from the network, we utilize the Laplacian pyramid to upscale the output images to the original resolution. We conduct experiments on the NYU-Depth, I-HAZE, and O-HAZE datasets. Extensive experiments demonstrate that the proposed approach improves the CycleGAN method both quantitatively and qualitatively.
301 citations
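The Laplacian-pyramid upscaling step described in the abstract can be sketched as follows. This is a minimal grayscale sketch, not the authors' implementation: the pyramid of the original high-resolution hazy input supplies the high-frequency residuals, and the network's low-resolution output replaces the pyramid base before reconstruction. The function name `laplacian_upscale` and the use of SciPy's bicubic `zoom` are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_upscale(low_res, high_res_input, levels=2):
    """Upscale a low-resolution (grayscale) network output back to the
    original resolution, borrowing high-frequency detail from the
    Laplacian pyramid of the original high-resolution hazy input."""
    # Build the Laplacian pyramid of the high-resolution input.
    residuals = []
    current = np.asarray(high_res_input, dtype=np.float64)
    for _ in range(levels):
        down = gaussian_filter(current, sigma=1.0)[::2, ::2]  # blur + decimate
        up = zoom(down, 2, order=3)                           # bicubic upsample
        up = up[:current.shape[0], :current.shape[1]]
        residuals.append(current - up)                        # high-frequency band
        current = down
    # Replace the pyramid base with the dehazed low-resolution output,
    # then reconstruct by adding the residuals back, level by level.
    current = np.asarray(low_res, dtype=np.float64)
    for residual in reversed(residuals):
        current = zoom(current, 2, order=3)
        current = current[:residual.shape[0], :residual.shape[1]]
        current = current + residual
    return current
```

With the true pyramid base as input this reconstruction is exact; in practice the base is the dehazed output, so the pyramid re-injects the hazy input's fine texture at full resolution.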
01 Jan 2019
TL;DR: This work presents a new dataset for form understanding in noisy scanned documents (FUNSD) that aims at extracting and structuring the textual content of forms, and is the first publicly available dataset with comprehensive annotations to address the FoUn task.
Abstract: We present a new dataset for form understanding in noisy scanned documents (FUNSD) that aims at extracting and structuring the textual content of forms. The dataset comprises 199 real, fully annotated, scanned forms. The documents are noisy and vary widely in appearance, making form understanding (FoUn) a challenging task. The proposed dataset can be used for various tasks, including text detection, optical character recognition, spatial layout analysis, and entity labeling/linking. To the best of our knowledge, this is the first publicly available dataset with comprehensive annotations to address the FoUn task. We also present a set of baselines and introduce metrics to evaluate performance on the FUNSD dataset, which can be downloaded at https://guillaumejaume.github.io/FUNSD.
190 citations
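FUNSD distributes its annotations as per-image JSON files; a minimal sketch of reading one entity list might look like the following. The field names (`form`, `box`, `label`, `linking`) reflect the released dataset's commonly described layout, but the sample record here is invented for illustration.

```python
import json
from collections import Counter

# A minimal annotation in the FUNSD JSON layout: each file carries a
# "form" list of entities with text, a bounding box, a semantic label
# (e.g. question/answer/header/other), word-level boxes, and "linking"
# pairs that connect related entities. This sample record is invented.
sample = json.loads("""
{ "form": [
    {"id": 0, "text": "Date:", "label": "question",
     "box": [50, 40, 110, 60],
     "words": [{"text": "Date:", "box": [50, 40, 110, 60]}],
     "linking": [[0, 1]]},
    {"id": 1, "text": "01/09/2005", "label": "answer",
     "box": [120, 40, 220, 60],
     "words": [{"text": "01/09/2005", "box": [120, 40, 220, 60]}],
     "linking": [[0, 1]]}
] }
""")

def entity_stats(annotation):
    """Count semantic labels and collect directed entity links."""
    labels = Counter(e["label"] for e in annotation["form"])
    by_id = {e["id"]: e for e in annotation["form"]}
    # Each link [a, b] appears on both linked entities; keep it once,
    # read from the entity that is the link's source.
    links = [(by_id[a]["text"], by_id[b]["text"])
             for e in annotation["form"]
             for a, b in e["linking"] if e["id"] == a]
    return labels, links

labels, links = entity_stats(sample)
```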
TL;DR: This paper employs multiresolution analysis to decompose face images into subbands and searches for the subbands that are insensitive to variations in expression and illumination.
181 citations
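The subband decomposition the TL;DR refers to can be illustrated with one level of a 2-D Haar wavelet transform. Haar is a simplifying assumption here; the summary does not state which wavelet the paper uses.

```python
import numpy as np

def haar2d(image):
    """One level of 2-D Haar wavelet decomposition, returning the four
    subbands LL (approximation) and LH, HL, HH (detail). Expects an
    array with even height and width."""
    a = np.asarray(image, dtype=np.float64)
    # Horizontal pass: average and difference of column pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Vertical pass on each half gives the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

A subband-selection scheme in the spirit of the paper would then compare recognition accuracy when features are taken from LL versus the detail bands, keeping the bands least affected by expression and illumination changes.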
TL;DR: The systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, as well as the recognition of a person's head orientation are presented.
Abstract: In this paper, we present our work in building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, as well as the recognition of a person's head orientation. Each of the components is described in the paper and experimental results are presented. We also present several experiments on multimodal human-robot interaction, such as interaction using speech and gestures, the automatic determination of the addressee during human-human-robot interaction, as well as on interactive learning of dialogue strategies. The work and the components presented here constitute the core building blocks for audiovisual perception of humans and multimodal human-robot interaction used for the humanoid robot developed within the German research project (Sonderforschungsbereich) on humanoid cooperative robots.
150 citations
01 Sep 2005
TL;DR: The performance of the proposed algorithm is tested on the Yale and CMU PIE face databases, and the obtained results show significant improvement over the holistic approaches.
Abstract: In this paper, a local appearance-based face recognition algorithm is proposed. In the proposed algorithm, local information is extracted using a block-based discrete cosine transform. The obtained local features are combined at both the feature level and the decision level. The performance of the proposed algorithm is tested on the Yale and CMU PIE face databases, and the obtained results show significant improvement over the holistic approaches.
146 citations
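The block-based DCT feature extraction can be sketched as follows. The 8x8 block size, the number of retained coefficients, and the zigzag-style frequency ordering are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.fft import dct

def block_dct_features(image, block=8, n_coeffs=10):
    """Local appearance features: split a grayscale image into
    non-overlapping blocks, apply a 2-D DCT to each block, and keep
    the first few coefficients in a zigzag-style low-frequency-first
    order as that block's local feature vector."""
    h, w = image.shape
    # Zigzag-style scan: order coefficients by anti-diagonal (i + j),
    # alternating direction so low frequencies come first.
    idx = sorted(((i, j) for i in range(block) for j in range(block)),
                 key=lambda p: (p[0] + p[1],
                                p[1] if (p[0] + p[1]) % 2 else p[0]))
    features = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = image[y:y + block, x:x + block].astype(np.float64)
            # Separable 2-D DCT: transform rows, then columns.
            coeffs = dct(dct(b, norm='ortho', axis=0), norm='ortho', axis=1)
            features.append([coeffs[i, j] for i, j in idx[:n_coeffs]])
    return np.array(features)
```

Per the abstract, such block features can then be combined at the feature level (concatenation into one vector) or at the decision level (classifying each block and fusing the votes).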
Cited by
TL;DR: This survey aims at providing multimedia researchers with a state-of-the-art overview of fusion strategies, which are used for combining multiple modalities in order to accomplish various multimedia analysis tasks.
Abstract: This survey aims at providing multimedia researchers with a state-of-the-art overview of fusion strategies, which are used for combining multiple modalities in order to accomplish various multimedia analysis tasks. The existing literature on multimodal fusion research is presented through several classifications based on the fusion methodology and the level of fusion (feature, decision, and hybrid). The fusion methods are described from the perspective of the basic concept, advantages, weaknesses, and their usage in various analysis tasks as reported in the literature. Moreover, several distinctive issues that influence a multimodal fusion process, such as the use of correlation and independence, confidence level, contextual information, synchronization between different modalities, and the optimal modality selection, are also highlighted. Finally, we present the open issues for further research in the area of multimodal fusion.
1,019 citations
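As a concrete example of the decision-level fusion the survey classifies, a weighted-sum combination of per-modality classifier scores might look like this. This is a generic sketch, not tied to any specific system in the survey.

```python
import numpy as np

def decision_level_fusion(scores, weights=None):
    """Fuse per-modality classifier scores by a weighted sum.
    `scores` is a (modalities, classes) array of per-class scores;
    `weights` reflects each modality's confidence (uniform if omitted).
    Feature-level fusion would instead concatenate feature vectors
    before a single classifier; this combines decisions afterwards."""
    scores = np.asarray(scores, dtype=np.float64)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    weights = np.asarray(weights, dtype=np.float64)
    fused = weights @ scores          # weighted sum over modalities
    return fused, int(np.argmax(fused))
```

For example, an audio classifier leaning toward class 0 can be outvoted by a more trusted video classifier leaning toward class 1 when the video modality carries the larger weight.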