Other affiliations: Simon Fraser University
Bio: Sepidehsadat Hosseini is an academic researcher from Seoul National University. The author has contributed to research in topics: Computer science & Convolutional neural network. The author has an h-index of 5 and has co-authored 9 publications receiving 71 citations. Previous affiliations of Sepidehsadat Hosseini include Simon Fraser University.
01 Jun 2021
TL;DR: In this paper, a generative adversarial layout refinement network is proposed for automated floorplan generation, where a previously generated layout becomes the next input constraint, enabling iterative refinement.
Abstract: This paper proposes a generative adversarial layout refinement network for automated floorplan generation. Our architecture is an integration of a graph-constrained relational GAN and a conditional GAN, where a previously generated layout becomes the next input constraint, enabling iterative refinement. A surprising discovery of our research is that a simple non-iterative training process, dubbed component-wise GT-conditioning, is effective in learning such a generator. The iterative generator further allows us to improve a metric of choice via meta-optimization techniques by controlling when to pass which input constraints during iterative refinement. Our qualitative and quantitative evaluations based on three standard metrics demonstrate that the proposed system makes significant improvements over the current state-of-the-art, and is even competitive against ground-truth floorplans designed by professional architects. Code, model, and data are available at https://ennauata.github.io/houseganpp/page.html.
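The iterative-refinement protocol described above can be sketched generically: a driver loop feeds each generated layout back in as the next input constraint. The function names, the toy generator (which merely stands in for the graph-constrained relational GAN), and the step count below are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def refine_layout(generator, graph_constraint, steps=4, init=None):
    # Iterative refinement: the previously generated layout becomes the
    # next input constraint (a hedged sketch of the paper's loop).
    layout = init
    history = []
    for _ in range(steps):
        layout = generator(graph_constraint, layout)
        history.append(layout)
    return layout, history

def toy_generator(constraint, prev):
    # Purely illustrative stand-in for the GAN generator: nudges the
    # previous layout halfway toward the constraint each iteration.
    prev = np.zeros_like(constraint) if prev is None else prev
    return prev + 0.5 * (constraint - prev)
```

With `toy_generator` and a scalar constraint of 1.0, the loop produces 0.5, 0.75, 0.875, 0.9375 over four steps, converging toward the constraint; the real generator is of course a learned network, but the feedback wiring is the same.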
01 Jan 2018
TL;DR: This paper proposes a convolutional neural network based architecture for joint age-gender classification that uses the Gabor filter responses as the input, and shows improved accuracy in both age and gender classification compared to state-of-the-art methodologies.
Abstract: Age and gender classification has received more attention recently owing to its important role in user-friendly intelligent systems. In this paper, we propose a convolutional neural network (CNN) based architecture for joint age-gender classification, where we use the Gabor filter responses as the input. The weighting of Gabor-filter responses is learned through back-propagation in an end-to-end architecture. The architecture is trained to label the input images into 8 ranges of age and 2 types of gender. Our approach shows improved accuracy in both age and gender classification compared to the state-of-the-art methodologies. We also observe that increasing the width of neural network would increase the accuracy of the overall system.
TL;DR: It is shown that feeding an appropriate feature to the CNN enhances its performance in some face-related tasks such as age/gender estimation, face detection and emotion recognition.
Abstract: Since the convolutional neural network (CNN) is believed to find right features for a given problem, the study of hand-crafted features is somewhat neglected these days. In this paper, we show that finding an appropriate feature for the given problem may be still important as they can enhance the performance of CNN-based algorithms. Specifically, we show that feeding an appropriate feature to the CNN enhances its performance in some face-related tasks such as age/gender estimation, face detection and emotion recognition. We use Gabor filter bank responses for these tasks, feeding them to the CNN along with the input image. The stack of image and Gabor responses can be fed to the CNN as a tensor input, or as a fused image which is a weighted sum of image and Gabor responses. The Gabor filter parameters can also be tuned depending on the given problem, for increasing the performance. From the extensive experiments, it is shown that the proposed methods provide better performance than the conventional CNN-based methods that use only the input images.
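The fused-image variant described above (a weighted sum of the image and its Gabor filter bank responses) can be sketched in NumPy. This is a minimal illustration under assumed parameter values: the kernel sizes, orientations, and function names are not from the paper, and the fixed weights here stand in for weights that the paper learns via back-propagation.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    # Real-valued Gabor kernel (standard parameterization, assumed values).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam + psi)

def filter2d_same(img, ker):
    # Naive 'same'-size 2D cross-correlation with zero padding; fine for a sketch.
    H, W = img.shape
    k = ker.shape[0]
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * ker)
    return out

def fused_input(img, thetas=(0.0, np.pi/4, np.pi/2, 3*np.pi/4), weights=None):
    # Fused image = weighted sum of the image and its Gabor responses.
    # In the paper the weights are learned end-to-end; here they are fixed.
    responses = [filter2d_same(img, gabor_kernel(theta=t)) for t in thetas]
    stack = [img.astype(float)] + responses
    if weights is None:
        weights = np.full(len(stack), 1.0 / len(stack))
    return sum(w * r for w, r in zip(weights, stack))
```

Setting the weight of the raw image to 1 and all Gabor weights to 0 recovers the plain-image input, which makes the fused image a strict generalization of the conventional CNN input.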
14 May 2019
TL;DR: The performance of age estimation and FER is improved more by using the capsule network than by using plain CNNs; the capsule network is shown to be robust to the rotation of objects and able to capture the relationships between facial landmarks.
Abstract: The convolutional neural network (CNN) works very well in many computer vision tasks including the face-related problems. However, in the case of age estimation and facial expression recognition (FER), the accuracy provided by the CNN is still not good enough to be used for the real-world problems. It seems that the CNN does not capture well the subtle differences in thickness and amount of wrinkles on the face, which are the essential features for age estimation and FER. Also, the face images in the real world have many variations due to face rotation and illumination, and the CNN is not robust in finding rotated objects when not every possible variation is in the training data. To alleviate these problems, we first propose to use the Gabor filter responses of faces as the input to the CNN, along with the original face image. This method enhances the wrinkles on the face so that the face-related features are found in the earlier stage of convolutional layers, and hence the overall performance is increased. We also adopt the idea of the capsule network, which is shown to be robust to the rotation of objects and able to capture the relationships between facial landmarks. We show that the performance of age estimation and FER is improved more by using the capsule network than by using plain CNNs. Moreover, by using the Gabor responses as the input to the capsule network, the overall performance on face-related problems is increased compared to recent CNN-based methods.
01 Jan 2018
TL;DR: The estimation method consists of three deep neural networks adopting residual learning, and experimental results show that the proposed model with residual learning yields improved performance.
Abstract: In this paper, we propose a deep residual learning model for age and gender estimation. Our method detects faces in input images, and then the age and gender of each face are estimated. The estimation method consists of three deep neural networks where we adopt residual learning methods. We train the model with the IMDB-WIKI database. However, since the database has only a small number of face images under the age of 20, we augment the set by collecting the images on the Internet. Experimental results show that the proposed model with residual learning yields improved performance.
01 Jan 2006
TL;DR: It is concluded that the problem of age-progression on face recognition (FR) is not unique to the algorithm used in this work, and the efficacy of this algorithm is evaluated against the variables of gender and racial origin.
Abstract: This paper details MORPH, a longitudinal face database developed for researchers investigating all facets of adult age-progression, e.g. face modeling, photo-realistic animation, face recognition, etc. This database contributes to several active research areas, most notably face recognition, by providing: the largest set of publicly available longitudinal images; longitudinal spans from a few months to over twenty years; and the inclusion of key physical parameters that affect aging appearance. The direct contribution of this data corpus for face recognition is highlighted in the evaluation of a standard face recognition algorithm, which illustrates the impact that age-progression has on recognition rates. The efficacy of this algorithm is assessed against the variables of gender and racial origin. This work further concludes that the problem of age-progression on face recognition (FR) is not unique to the algorithm used in this work.
TL;DR: A new fully automated system is proposed for the recognition of gastric infections through multi-type feature extraction, fusion, and robust feature selection; it performs better than existing methods and achieves an accuracy of 96.5%.
Abstract: Automated detection and classification of gastric infections (i.e., ulcer, polyp, esophagitis, and bleeding) through wireless capsule endoscopy (WCE) is still a key challenge. Doctors can identify these endoscopic diseases by using computer-aided diagnostic (CAD) systems. In this article, a new fully automated system is proposed for the recognition of gastric infections through multi-type feature extraction, fusion, and robust feature selection. Five key steps are performed: database creation, handcrafted and convolutional neural network (CNN) deep feature extraction, fusion of the extracted features, selection of the best features using a genetic algorithm (GA), and recognition. In the feature extraction step, discrete cosine transform, discrete wavelet transform, strong color, and VGG16-based CNN features are extracted. Later, these features are fused by simple array concatenation, and GA is performed through which the best features are selected based on a K-Nearest Neighbor fitness function. Finally, the best selected features are provided to an Ensemble classifier for recognition of gastric diseases. A database is prepared using four datasets (Kvasir, CVC-ClinicDB, Private, and ETIS-LaribPolypDB) with four types of gastric infections: ulcer, polyp, esophagitis, and bleeding. Using this database, the proposed technique performs better than existing methods and achieves an accuracy of 96.5%.
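The fuse-then-select pipeline described above can be sketched on synthetic features. The GA operators, population sizes, and the leave-one-out kNN fitness below are assumptions made for illustration; they follow the general recipe in the abstract (array concatenation, binary feature masks, kNN fitness) rather than the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(*feature_blocks):
    # Simple per-sample array concatenation of the feature blocks.
    return np.concatenate(feature_blocks, axis=1)

def knn_accuracy(X, y, mask, k=3):
    # Leave-one-out kNN accuracy on the masked feature subset (assumed fitness).
    Xs = X[:, mask.astype(bool)]
    if Xs.shape[1] == 0:
        return 0.0
    d = np.linalg.norm(Xs[:, None] - Xs[None, :], axis=2)
    np.fill_diagonal(d, np.inf)          # never match a sample to itself
    idx = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors per sample
    pred = np.array([np.bincount(y[i]).argmax() for i in idx])
    return float((pred == y).mean())

def ga_select(X, y, pop=8, gens=5, pm=0.1):
    # Minimal binary-mask GA: truncation selection, uniform crossover,
    # bit-flip mutation. A sketch, not the paper's GA.
    n = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        fit = np.array([knn_accuracy(X, y, m) for m in P])
        parents = P[np.argsort(fit)[::-1][:pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cross = rng.integers(0, 2, size=n).astype(bool)
            child = np.where(cross, a, b)
            child = np.where(rng.random(n) < pm, 1 - child, child)
            children.append(child)
        P = np.vstack([parents] + children)
    fit = np.array([knn_accuracy(X, y, m) for m in P])
    return P[fit.argmax()]
```

The returned mask indexes into the fused feature vector, so the same mask can be applied to any downstream classifier (an Ensemble classifier in the paper).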
TL;DR: This work proposes a multi-view convolutional neural network framework to predict the occurrence of epilepsy seizures with the goal of acquiring a shared representation of time-domain and frequency-domain features.
Abstract: The unpredictability of seizures is often considered by patients to be the most problematic aspect of epilepsy, so this work aims to develop an accurate epilepsy seizure predictor, making it possible to enable devices to warn patients of impending seizures. To develop a model for seizure prediction, most studies relied on Electroencephalograms (EEGs) to capture physiological measurements of epilepsy. This work uses the two domains of EEGs, the frequency domain and the time domain, to provide two different views of the same data source. Subsequently, this work proposes a multi-view convolutional neural network framework to predict the occurrence of epilepsy seizures with the goal of acquiring a shared representation of time-domain and frequency-domain features. By conducting experiments on the Kaggle data set, we demonstrated that the proposed method outperforms all methods listed on the Kaggle leaderboard. Additionally, our proposed model achieves average areas under the curve (AUCs) of 0.82 and 0.89 on two subjects of the CHB-MIT scalp EEG data set. This work serves as an effective paradigm for applying deep learning approaches to the crucial topic of risk prediction in health domains.
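The two EEG views mentioned above can be illustrated with a few lines of NumPy. The z-scoring, log-power spectrum, and the name `two_views` are assumptions for illustration, not the paper's exact preprocessing; the point is only that the time-domain and frequency-domain views are derived from the same window and fed to the two branches of the multi-view network.

```python
import numpy as np

def two_views(eeg_window):
    # Time-domain view: the z-scored raw samples of one EEG window.
    x = np.asarray(eeg_window, dtype=float)
    time_view = (x - x.mean()) / (x.std() + 1e-8)
    # Frequency-domain view: log power spectrum from the real FFT.
    power = np.abs(np.fft.rfft(x)) ** 2
    freq_view = np.log1p(power)
    return time_view, freq_view
```

For a window of N samples the frequency view has N // 2 + 1 bins; a pure sinusoid at f cycles per window shows up as a single dominant bin at index f, which is what the frequency branch of the network sees.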
TL;DR: This study presents a quantitative, objective and non-invasive facial expression recognition system to help in the monitoring and diagnosis of neurological disorders influencing facial expressions.
Abstract: Facial expressions are a significant part of non-verbal communication. Recognizing facial expressions of people with neurological disorders is essential because these people may have lost a significant amount of their verbal communication ability. Such an assessment requires time-consuming examination involving medical personnel, which can be quite challenging and expensive. Automated facial expression recognition systems that are low-cost and non-invasive can help experts detect neurological disorders. In this study, an automated facial expression recognition system is developed using a novel deep learning approach. The architecture consists of four-stage networks. The first, second and third networks segment the facial components which are essential for facial expression recognition. Owing to the three networks, an iconized facial image is obtained. The fourth network classifies facial expressions using raw facial images and iconized facial images. This four-stage method combines holistic facial information with local part-based features to achieve more robust facial expression recognition. Preliminary experimental results achieved 94.44% accuracy for facial expression recognition on the RaFD database. The proposed system produced a 5% improvement over a facial expression recognition system using raw images only. This study presents a quantitative, objective and non-invasive facial expression recognition system to help in the monitoring and diagnosis of neurological disorders influencing facial expressions.
TL;DR: A simple yet efficient way to rephrase the output layer of the conventional deep neural network is proposed, along with a unimodal label regularization strategy that alleviates the effects of label noise in ordinal datasets.
Abstract: This paper targets ordinal regression/classification, whose objective is to learn a rule to predict labels from a discrete but ordered set. For instance, classification for medical diagnosis usually involves inherently ordered labels corresponding to the level of health risk. Previous multi-task classifiers on ordinal data often use several binary classification branches to compute a series of cumulative probabilities. However, these cumulative probabilities are not guaranteed to be monotonically decreasing. This approach also introduces a large number of hyper-parameters to be fine-tuned manually. This paper aims to eliminate or at least largely reduce the effects of those problems. We propose a simple yet efficient way to rephrase the output layer of the conventional deep neural network. Besides, in order to alleviate the effects of label noise in ordinal datasets, we propose a unimodal label regularization strategy. It also explicitly encourages the class predictions to distribute on classes near the ground truth. We show that our methods lead to state-of-the-art accuracy on medical diagnosis tasks (e.g., the Diabetic Retinopathy and Ultrasound Breast datasets) as well as face age prediction (e.g., Adience face and MORPH Album II) with very little additional cost.
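The two ideas above can be sketched numerically. Deriving every cumulative probability P(y > k) as a tail sum of a single softmax makes the sequence monotonically decreasing by construction, with no extra binary branches; the exponential soft label is one plausible form of a unimodal regularization target concentrated around the ground-truth rank. Both the function names and the exact parameterization (e.g., the temperature `tau`) are assumptions, not the paper's definitions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over one logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cumulative_probs(logits):
    # P(y > 0), ..., P(y > K-2) as tail sums of one softmax:
    # monotonically decreasing by construction.
    p = softmax(logits)
    return np.cumsum(p[::-1])[::-1][1:]

def unimodal_target(y, K, tau=1.0):
    # Soft label concentrated around rank y, decaying with distance
    # (an assumed form of the unimodal regularization target).
    k = np.arange(K)
    w = np.exp(-np.abs(k - y) / tau)
    return w / w.sum()
```

Because the cumulative probabilities all come from one distribution, no manual per-branch hyper-parameters are needed, and the unimodal target pushes probability mass onto classes near the ground truth rather than onto a single hard label.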