Author

Omar Zatarain Duran

Bio: Omar Zatarain Duran is an academic researcher from the University of Calgary. The author has contributed to research in the topics of regression analysis and biometrics, has an h-index of 2, and has co-authored 2 publications receiving 15 citations.

Papers
Proceedings ArticleDOI
01 Jul 2017
TL;DR: A novel quality estimation method based on linear regression analysis models the relationship between different quality factors and the corresponding face recognition performance; a practical set of quality measures is used to estimate the quality scores.
Abstract: The quality of biometric data has a strong relationship with the performance of a face recognition system. The accuracy of automated face recognition systems is greatly affected by various quality factors, such as illumination, contrast, brightness, and blur. Therefore, an effective method is needed that can characterize the quality of a facial image by fusing different quality measures into a single quality score. In this paper, we propose a novel quality estimation method based on linear regression analysis to model the relationship between different quality factors and the corresponding face recognition performance. A practical set of quality measures is used to estimate the quality scores. The linear regression model adjusts the weights of the different quality factors according to their impact on recognition performance. The facial features are extracted using Local Binary Patterns (LBP), and a k-nearest neighbor (KNN) classifier is used for classification. The prediction score generated by the model is a strong indicator of the overall quality of a facial image. This model has many applications, for example saving processing time and improving face recognition accuracy during enrollment by discarding poor-quality images. The residual error of the regression model is 4.29%, and allowing an error of 0 or ±1 between the original response value and the predicted value yields a high accuracy of 94.06%.
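The core idea described here is fusing several per-factor quality measures into one score, with weights learned by linear regression against observed recognition performance. A minimal sketch of that idea, assuming scikit-learn and made-up quality-measure values and targets (the paper's actual measures and training data are not given in this summary):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: quality measures for one face image
# (illumination, contrast, brightness, blur), the factors named above.
# All values and targets here are made up for illustration.
X = np.array([
    [0.9, 0.8, 0.7, 0.9],   # good-quality sample
    [0.3, 0.4, 0.5, 0.2],   # poor-quality sample
    [0.7, 0.6, 0.8, 0.6],
    [0.2, 0.3, 0.3, 0.1],
])
# Target: recognition performance observed for each sample (hypothetical).
y = np.array([0.95, 0.40, 0.80, 0.25])

# Linear regression learns one weight per quality factor, so factors
# with more impact on recognition performance receive larger weights.
model = LinearRegression().fit(X, y)
print("learned weights:", model.coef_)

# The fused score for a new image is the weighted combination of its
# quality measures; low-scoring images could be discarded at enrollment.
new_image_measures = np.array([[0.5, 0.5, 0.6, 0.4]])
print("predicted quality score:", model.predict(new_image_measures)[0])
```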

15 citations

Book ChapterDOI
01 Jan 2020
TL;DR: This chapter proposes a quality estimation method based on a linear regression analysis to characterize the relationship between different quality factors and the performance of a face recognition system.
Abstract: Over the past decade, behavioral biometric systems based on face recognition have become leading commercial systems that meet the need for fast and efficient confirmation of a person's identity. Facial recognition works on biometric samples, like images or video frames, to recognize people. The performance of an automated face recognition system has a strong relationship with the quality of the biometric samples. In this chapter, the authors propose a quality estimation method based on a linear regression analysis to characterize the relationship between different quality factors and the performance of a face recognition system. The regression model can predict the overall quality of a facial sample, which reflects the effects of various quality factors on that sample. The authors evaluated the quality estimation model on the Extended Yale Database B, finally formulating a data set of samples that enables efficient implementation of biometric facial recognition.
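Both works build the recognition side of the experiment on Local Binary Pattern features and a k-nearest neighbor classifier, as noted in the first abstract. A minimal sketch of that pipeline, assuming scikit-image and scikit-learn, with random stand-in images in place of the Extended Yale Database B samples (the exact LBP parameters are not given in these summaries):

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(image, points=8, radius=1):
    """Describe a face image by its histogram of uniform LBP codes."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Stand-in data: random 64x64 grayscale "faces" for two subjects.
rng = np.random.default_rng(0)
faces = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(10)]
labels = [0] * 5 + [1] * 5

features = np.array([lbp_histogram(f) for f in faces])

# KNN assigns a probe face the identity of its nearest gallery faces.
knn = KNeighborsClassifier(n_neighbors=3).fit(features, labels)
probe = lbp_histogram((rng.random((64, 64)) * 255).astype(np.uint8))
print("predicted subject:", knn.predict([probe])[0])
```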

3 citations

DOI
16 Nov 2022
TL;DR: The authors propose a web platform, built around a web scraping technique, for monitoring and identifying harmful environmental pollutants, such as light intensity, in places where human activities are carried out.
Abstract: Pollutants in urban areas are harmful to humans and to the other living beings that inhabit such areas; the pollutants come from different sources such as vehicles, homes, industries, and even the lighting system. In this work, we propose the monitoring and identification of various harmful environmental pollutants, such as light intensity, in places where human activities are carried out. The monitoring and identification of pollutants are implemented through a web platform developed with PHP, together with Python and SQL, to obtain information using a web scraping technique. A series of environmental-monitoring websites is harvested by a web-scraping script that collects and records pollution data in a database on a remote server, so the information can later be displayed on a web system. The information is processed to ease the user's interpretation of the data for decision making.
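The heart of such a platform is a script that scrapes monitoring sites and stores the readings for later display. A minimal sketch of that loop in Python, assuming requests, BeautifulSoup, and SQLite; the URL, CSS selector, and table schema are hypothetical stand-ins for the sites and remote database the paper uses:

```python
import sqlite3

import requests
from bs4 import BeautifulSoup

# Hypothetical monitoring page; the paper's actual sources are not named here.
SOURCE_URL = "https://example.com/air-quality"

def scrape_readings(url):
    """Fetch a monitoring page and extract (station, pollutant, value) rows."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    # Hypothetical markup: one <tr class="reading"> per measurement.
    for tr in soup.select("tr.reading"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) >= 3:
            rows.append((cells[0], cells[1], float(cells[2])))
    return rows

def store_readings(rows, db_path="pollution.db"):
    """Record scraped readings so a web front end can display them."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS readings "
        "(station TEXT, pollutant TEXT, value REAL)"
    )
    con.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

store_readings(scrape_readings(SOURCE_URL))
```

Run periodically (for example from a cron job), this accumulates a history of readings that the PHP front end can chart.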
Proceedings ArticleDOI
19 Oct 2022
TL;DR: The development of a progressive web application serving as a tourist guide to historic places requires a team of graphic designers, programmers, tourism experts, and government personnel; the application is expected to work as a virtual tourist guide for individuals, describing the points of the journey in a summarized and engaging way.
Abstract: The development and implementation of a progressive web application serving as a tourist guide to historic places requires the involvement of a team made up of graphic designers, programmers, tourism experts, and government personnel. The application is expected to work as a virtual tourist guide for individuals, describing in a summarized and interesting way the points of the journey, from historical places to gastronomic regions the visitor passes through, using the geolocation of the device that accesses the application. The platform displays the information in a visual and auditory format. The application is being developed with React and the Material UI library for the front end, supported by a Google Firebase back end that manages data collections and account management. The cross-platform hybrid app can be executed on Android/iOS mobile devices, as well as in desktop environments, in the form of a PWA (Progressive Web App). Additionally, it is intended to be reused for other tourist tours in the city or in other settlements, as long as internet access or a mobile data connection is available.
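The guide's central behavior is matching the device's geolocation against nearby points of interest. Since the app itself is built with React and Firebase, the sketch below only illustrates the matching logic, in Python for clarity, with made-up coordinates and a hypothetical nearest_point helper:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical points of interest: (name, latitude, longitude).
POINTS = [
    ("Cathedral", 19.2433, -103.7250),
    ("Regional museum", 19.2440, -103.7268),
    ("Gastronomic market", 19.2490, -103.7305),
]

def nearest_point(lat, lon, max_km=0.5):
    """Return the closest point of interest within max_km, if any."""
    name, dist = min(
        ((n, haversine_km(lat, lon, plat, plon)) for n, plat, plon in POINTS),
        key=lambda pair: pair[1],
    )
    return name if dist <= max_km else None

# The app would then narrate the description of the returned point.
print(nearest_point(19.2435, -103.7252))
```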

Cited by
Posted Content
01 Mar 2018
TL;DR: A fully automated Computer-Aided Detection system is presented for speckle reduction and nodule segmentation in thyroid ultrasound images; it can assist endocrinologists by providing a second opinion to improve the diagnosis of nodules as benign or malignant.
Abstract: A thyroid nodule is an endocrine problem caused by abnormal growth of cells. The survival rate can be enhanced by earlier detection of nodules; thus, accurate detection of nodules is of utmost importance in providing an effective diagnosis that increases the survival rate. However, the accuracy of nodule detection from ultrasound images suffers from speckle noise, which considerably deteriorates image quality and makes the differentiation of fine details quite difficult. Most detection systems for thyroid nodules are semi-automated, entailing manual intervention to draw a rough outline of the nodule at some level, or require manual segmentation in the training or testing phases, which increases inaccuracies and evaluation time. To handle this, a fully Computer-Aided Detection system is presented for speckle reduction and segmentation of nodules in thyroid ultrasound images. The proposed system has three components: speckle reduction, which reduces speckle noise while preserving the diagnostic features of the ultrasound image; automatic generation of a Region of Interest (ROI) that identifies suspicious regions; and fully automatic segmentation of the nodule in the processed ROI image. The proposed segmentation method outperformed other methods, achieving a True Positive (TP) value of 95.92 ± 3.70%, a False Positive (FP) value of 7.04 ± 4.21%, a Dice Coefficient (DC) of 93.88 ± 2.59%, an Overlap Metric (OM) of 91.18 ± 7.04 pixels, and a Hausdorff Distance (HD) of 0.52 ± 0.20 pixels. This system can assist endocrinologists by providing a second opinion to improve the diagnosis of nodules as benign or malignant.
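The segmentation results above are reported with standard region-overlap metrics. A minimal sketch of how the Dice Coefficient and TP/FP values can be computed from binary masks, using NumPy and toy masks (the paper's exact metric definitions may differ slightly):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def tp_fp_rates(pred, truth):
    """TP: fraction of the true nodule covered by the prediction.
    FP: fraction of the prediction falling outside the true nodule.
    One common reading of these values; the paper's definitions may vary."""
    tp = np.logical_and(pred, truth).sum() / truth.sum()
    fp = np.logical_and(pred, ~truth).sum() / pred.sum()
    return tp, fp

# Toy 2D masks standing in for nodule segmentations.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True

print("Dice:", dice_coefficient(pred, truth))
print("TP, FP:", tp_fp_rates(pred, truth))
```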

32 citations

16 Feb 2010
TL;DR: A new study is presented that explains the edge density effect in terms of illumination and shows that top-performing algorithms in FRVT 2006 are still sensitive to lighting; the new lighting model developed in this study can be used as a measure of face image quality.
Abstract: Recent studies show that face recognition in uncontrolled images remains a challenging problem, although the reasons why are less clear. Changes in illumination are one possible explanation, even though algorithms developed since the advent of the PIE and Yale B databases supposedly compensate for illumination variation. Edge density has also been shown to be a strong predictor of algorithm failure on the FRVT 2006 uncontrolled images; recognition is harder on images with higher edge density. This paper presents a new study that explains the edge density effect in terms of illumination and shows that top-performing algorithms in FRVT 2006 are still sensitive to lighting. This new study also shows that focus, originally suggested as an explanation for the edge density effect, is not a significant factor. The new lighting model developed in this study can be used as a measure of face image quality.
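Edge density is cited here as a strong predictor of recognition failure. One plausible way to operationalize it, sketched in Python with SciPy (this is an assumption for illustration; the study's exact edge-density measure is not reproduced in this summary):

```python
import numpy as np
from scipy import ndimage

def edge_density(image, threshold=0.1):
    """Fraction of pixels whose gradient magnitude exceeds a threshold.

    A simple operationalization of 'edge density'; the FRVT study's
    exact definition may differ.
    """
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12  # normalize to [0, 1]
    return float((magnitude > threshold).mean())

# Toy images: a flat patch versus a high-frequency checkerboard.
flat = np.ones((64, 64))
checker = np.indices((64, 64)).sum(axis=0) % 2.0

print("flat edge density:", edge_density(flat))        # ~0.0
print("checker edge density:", edge_density(checker))  # much higher
```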

30 citations

Journal Article
TL;DR: This paper proposes an approach to standardizing facial image quality and develops facial-symmetry-based methods for assessing it by measuring facial asymmetries caused by non-frontal lighting and improper facial pose.
Abstract: The performance of biometric systems depends on the quality of the acquired biometric samples. Poor sample quality is a main reason for matching errors in biometric systems and may be the main weakness of some implementations. This paper proposes an approach for the standardization of facial image quality and develops facial-symmetry-based methods for its assessment by measuring facial asymmetries caused by non-frontal lighting and improper facial pose. Experimental results are provided to illustrate the concepts, definitions, and effectiveness.
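A facial-symmetry quality check can be approximated by comparing the left half of an aligned face with a mirrored right half. A minimal NumPy sketch of that idea with random stand-in images (a rough stand-in for the paper's actual asymmetry measures, which are not reproduced here):

```python
import numpy as np

def asymmetry_score(face):
    """Mean absolute difference between the left half of an aligned face
    and the mirrored right half; higher values suggest non-frontal
    lighting or pose. An illustrative proxy, not the paper's measure.
    """
    h, w = face.shape
    half = w // 2
    left = face[:, :half]
    right_mirrored = face[:, w - half:][:, ::-1]
    return float(np.abs(left - right_mirrored).mean())

rng = np.random.default_rng(0)
face = rng.random((64, 64))

symmetric = (face + face[:, ::-1]) / 2   # perfectly symmetric image
shaded = face.copy()
shaded[:, :32] *= 0.3                    # simulate strong side lighting

print("symmetric:", asymmetry_score(symmetric))  # ~0.0
print("shaded:", asymmetry_score(shaded))        # noticeably larger
```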

28 citations