Author

Jarmila Pavlovicova

Bio: Jarmila Pavlovicova is an academic researcher from the Slovak University of Technology in Bratislava. The author has contributed to research in the topics of facial recognition systems and feature extraction, has an h-index of 9, and has co-authored 43 publications receiving 267 citations.

Papers
01 Jan 2008
TL;DR: This paper presents the results of different statistical algorithms used for face recognition, namely PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and SVM (Support Vector Machines), and proposes the best settings in order to maximize the face recognition success rate.
Abstract: In this paper, we consider the human face as a biometric. We present the results of different statistical algorithms used for face recognition, namely PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and SVM (Support Vector Machines). Pre-processed (normalization of size, unified position and rotation, contrast optimization and face masking) image sets from the FERET database are used for experiments. We take advantage of the csuFaceIdEval and libsvm software that implement the mentioned algorithms. We also propose a combination of the PCA and LDA methods with SVM, which produces interesting results from the point of view of recognition success rate and robustness of the face recognition algorithm. We use different classifiers to match the image of a person to a class (a subject) obtained from the training data. These classifiers are in the form of both simple metrics (Mahalanobis cosine, LdaSoft) and more complex support vector machines. We present the face recognition results of all these methods. We also propose the best settings in order to maximize the face recognition success rate. Keywords: biometrics, face recognition, principal component analysis, linear discriminant analysis, support vector machines
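As a rough illustration of the PCA + SVM combination described above, the following sketch uses scikit-learn in place of csuFaceIdEval and libsvm; the faces and labels arrays are hypothetical placeholders for preprocessed FERET images, and the hyperparameters are assumptions, not the settings tuned in the paper.

# Hedged sketch: PCA feature extraction followed by an SVM classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))      # placeholder for normalized face images
labels = rng.integers(0, 20, size=200)  # placeholder subject IDs

X_train, X_test, y_train, y_test = train_test_split(
    faces, labels, test_size=0.25, random_state=0, stratify=labels
)

# Project onto the leading eigenfaces, then classify in the reduced space.
model = make_pipeline(
    PCA(n_components=50, whiten=True, random_state=0),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
model.fit(X_train, y_train)
print("recognition rate:", model.score(X_test, y_test))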

56 citations

Journal ArticleDOI
TL;DR: An overview of the existing publicly available datasets and their popularity in the research community using a bibliometric approach is provided to help investigators conducting research in the domain of iris recognition to identify relevant datasets.
Abstract: Research on human eye image processing and iris recognition has grown steadily over the last few decades. It is important for researchers interested in this discipline to know the relevant datasets in this area to (i) be able to compare their results and (ii) speed up their research using existing datasets rather than creating custom datasets. In this paper, we provide a comprehensive overview of the existing publicly available datasets and their popularity in the research community using a bibliometric approach. We reviewed 158 different iris datasets referenced from the 689 most relevant research articles indexed by the Web of Science online library. We categorized the datasets and described the properties important for performing relevant research. We provide an overview of the databases per category to help investigators conducting research in the domain of iris recognition to identify relevant datasets.

28 citations

Book ChapterDOI
01 Apr 2010
TL;DR: The aim is to present a comprehensive view of biometric face recognition, including methodology, parameter settings of selected methods, detailed recognition results, and a comparison and discussion of the results obtained on a large face database.
Abstract: In this chapter, we consider biometric recognition based on the human face. Biometrics has become frequently used in automated systems for the identification of people (Jain et al., 2004), and huge interest is devoted to the area of biometrics at present (Jain et al., 2008; Shoniregun & Crosier, 2008; Ross et al., 2006). Along with well-known methods such as fingerprint or DNA recognition, the face image has opened new possibilities. Face recognition has been put into real life by many companies. It is already implemented in image organizing software (e.g. Google’s Picasa: http://www.deondesigns.ca/blog/picasa-3-5-adds-face-recognition/), web applications (e.g. web photo albums http://picasa.google.com/intl/en_us/features-nametags.html) and even in commercial compact cameras (e.g. Panasonic Lumix). Passports have contained face biometric data since 2006 (EU – Passport Specification, 2006). In the area of face recognition, a class represents all images of the same subject (person). The goal is to implement an automated, machine-supported system that correctly recognizes the identity of a person in images that were not used in the training phase (an initialization and training by a representative sample of images precede the evaluation phase). Various applications are possible, e.g. automated person identification and recognition of race, gender, emotion, age etc. The area of face recognition is well described at present, starting with conventional approaches (PCA, LDA) (Turk & Pentland, 1991; Marcialis & Roli, 2002; Martinez & Kak, 2001) and continuing at present with kernel methods (Wang et al., 2008; Hotta, 2008; Wang et al., 2004; Yang, 2002; Yang et al., 2005). Advances in face recognition are also summarized in books (Li & Jain, 2005; Delac et al., 2008) and book chapters (Oravec et al., 2008). Our aim is to present a comprehensive view of biometric face recognition, including methodology, parameter settings of selected methods (both conventional and kernel methods), detailed recognition results, and a comparison and discussion of the results obtained using a large face database. The rest of this chapter is organized as follows: In section 2, we present the theoretical background of methods used for face recognition purposes - PCA (Principal Component Analysis)
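The chapter contrasts conventional projections with kernel methods; a minimal sketch of the kernel branch, assuming placeholder gallery and probe arrays rather than the authors' face database, could look as follows, with scikit-learn's KernelPCA and a nearest-neighbour matcher standing in for the methods evaluated in the chapter.

# Hedged sketch: kernel PCA features with a nearest-neighbour matcher.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
gallery = rng.random((100, 32 * 32))     # hypothetical training images (vectorized)
gallery_ids = rng.integers(0, 10, 100)   # hypothetical subject labels
probe = rng.random((20, 32 * 32))        # hypothetical unseen test images

matcher = make_pipeline(
    KernelPCA(n_components=40, kernel="rbf", gamma=1e-3),
    KNeighborsClassifier(n_neighbors=1, metric="cosine"),
)
matcher.fit(gallery, gallery_ids)
print(matcher.predict(probe))            # predicted subject for each probe image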

21 citations

Proceedings ArticleDOI
01 Sep 2017
TL;DR: This contribution is focused on image recognition methods suitable for diagnostic purposes in ophthalmology, in particular the identification of bright lesions in fundus images, a side effect of the disease called diabetic retinopathy, using retinal images from the publicly available MESSIDOR database.
Abstract: This contribution is focused on image recognition methods that are suitable for diagnostic purposes in ophthalmology. In particular, it addresses the identification of bright lesions in fundus images, which are a side effect of a disease called diabetic retinopathy. To achieve this goal, we used retinal images from the publicly available MESSIDOR database. These images were pre-processed, transformed and normalized in order to enhance their quality and to increase the amount of input data. For classification purposes we split them into multiple groups (clusters). To classify the images according to whether or not they contain certain types of anomalies, we proposed a convolutional neural network (CNN) with 4 convolutional layers. We used the accuracy criterion and the cross-validation method to evaluate the classification efficiency.
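A minimal Keras sketch of a CNN with four convolutional layers is given below; the input size, layer widths and optimizer are illustrative assumptions rather than the authors' exact configuration, and the loading of the MESSIDOR images is omitted.

# Hedged sketch: small four-convolutional-layer CNN for binary fundus-image
# classification (anomaly present vs. absent).
from tensorflow.keras import layers, models

def build_fundus_cnn(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # bright lesions present / absent
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_fundus_cnn()
model.summary()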

17 citations

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The paper deals with the detection of vehicle speed based on information from video recordings and describes the most important methods used, namely Gaussian mixture models, DBSCAN, the Kalman filter and optical flow.
Abstract: The paper deals with the detection of vehicle speed based on information from video recordings. In the theoretical part we describe the most important methods, namely Gaussian mixture models, DBSCAN, the Kalman filter and optical flow. The implementation part comprises the architectural design and a description of how the individual segments communicate. The conclusion presents tests on the obtained video recordings using different vehicles, different driving styles and different vehicle positions at the time of recording.
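Two of the building blocks named above, Gaussian-mixture background subtraction and sparse optical flow, are sketched below with OpenCV; the video file name, the frame-rate fallback and the metres-per-pixel calibration are assumptions needed to turn pixel displacement into an approximate speed, not values from the paper.

# Hedged sketch: MOG2 background subtraction plus Lucas-Kanade optical flow.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")        # hypothetical recording
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
metres_per_pixel = 0.05                      # assumed scene calibration

ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open the (hypothetical) recording")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    foreground = subtractor.apply(frame)     # moving-vehicle mask from the GMM model
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    good_new = new_points[status.flatten() == 1]
    good_old = points[status.flatten() == 1]
    if len(good_new):
        displacement = np.linalg.norm(good_new - good_old, axis=-1).mean()
        speed_kmh = displacement * metres_per_pixel * fps * 3.6
        print(f"approx. speed: {speed_kmh:.1f} km/h")
    prev_gray, points = gray, good_new.reshape(-1, 1, 2)

cap.release()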

16 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

01 Jul 1963

169 citations

Journal ArticleDOI
TL;DR: A new colour space is used which contains error signals derived from differentiating the grayscale map and the non-red encoded grayscale version, showing that luminance can be useful in the segregation of skin and non-skin clusters.
Abstract: Challenges facing biometrics researchers, particularly those dealing with skin tone detection, include choosing a colour space, generating the skin model and processing the obtained regions to fit applications. The majority of existing methods have in common the de-correlation of luminance from the considered colour channels. Luminance is underestimated, since it is seen as the colour component contributing least to skin colour detection. This work questions this claim by showing that luminance can be useful in the segregation of skin and non-skin clusters. To this end, we use a new colour space which contains error signals derived from differentiating the grayscale map and the non-red encoded grayscale version. The advantages of the approach are the reduction of dimensionality from the 3D RGB space to a 1D space, underlining its simplicity, and the construction of a rapid classifier necessary for real-time applications. The proposed method generates a 1D space map without prior knowledge of the host image. A comprehensive experimental test was conducted and initial results are presented. This paper also discusses an application of the method to image steganography, where it is used to orient the embedding process since skin information is deemed to be psycho-visually redundant.
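A minimal NumPy sketch of the 1D error-signal idea is given below; the luminance weights for the "non-red" grayscale map and the skin threshold are assumptions made for illustration, not the values used in the paper.

# Hedged sketch: per-pixel error signal = grayscale map minus a grayscale map
# computed without the red channel; thresholding gives a rough skin mask.
import numpy as np

def skin_error_map(rgb):
    """rgb: H x W x 3 float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b      # standard luminance weights
    non_red_gray = 0.5 * g + 0.5 * b              # assumed "non-red" encoding
    return gray - non_red_gray                    # 1D error signal per pixel

def skin_mask(rgb, threshold=0.08):
    # Pixels whose error signal exceeds the (assumed) threshold are kept as skin.
    return skin_error_map(rgb) > threshold

if __name__ == "__main__":
    img = np.random.rand(4, 4, 3)                 # placeholder image
    print(skin_mask(img))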

144 citations

Journal ArticleDOI
TL;DR: The proposed approach semantically classifies buildings into much finer categories than those of existing studies by learning a random forest (RF) classifier from a large number of imbalanced samples with high-dimensional features, and is shown to be effective and accurate.
Abstract: While most existing studies have focused on extracting geometric information on buildings, only a few have concentrated on semantic information. The lack of semantic information cannot satisfy many demands on resolving environmental and social issues. This study presents an approach to semantically classify buildings into much finer categories than those of existing studies by learning random forest (RF) classifier from a large number of imbalanced samples with high-dimensional features. First, a two-level segmentation mechanism combining GIS and VHR image produces single image objects at a large scale and intra-object components at a small scale. Second, a semi-supervised method chooses a large number of unbiased samples by considering the spatial proximity and intra-cluster similarity of buildings. Third, two important improvements in RF classifier are made: a voting-distribution ranked rule for reducing the influences of imbalanced samples on classification accuracy and a feature importance measurement for evaluating each feature’s contribution to the recognition of each category. Fourth, the semantic classification of urban buildings is practically conducted in Beijing city, and the results demonstrate that the proposed approach is effective and accurate. The seven categories used in the study are finer than those in existing work and more helpful to studying many environmental and social problems.
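A hedged sketch of the classification step is shown below; scikit-learn's class weighting and impurity-based feature importances stand in for the paper's voting-distribution ranked rule and its feature importance measurement, and the data are synthetic placeholders rather than the Beijing building samples.

# Hedged sketch: random forest on imbalanced, high-dimensional samples,
# with a simple feature-importance ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=2000, n_features=60, n_informative=20,
    n_classes=5, weights=[0.5, 0.25, 0.15, 0.07, 0.03],  # imbalanced categories
    random_state=0,
)

rf = RandomForestClassifier(
    n_estimators=300, class_weight="balanced_subsample", random_state=0
)
rf.fit(X, y)

# Rank features by their contribution to the classification.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("top features:", ranking[:10])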

123 citations

Journal ArticleDOI
TL;DR: A convolutional neural network (CNN) is used to train the classifier for performing classification, and experimental results show that the proposed algorithm provides improved results when compared to traditional schemes.
Abstract: Diabetic retinopathy is an ophthalmological disorder that diabetic patients suffer from due to clots, lesions or haemorrhage formation in the light-sensitive region of the retina. Blocking of vessels caused by increased blood sugar leads to the formation of new vessel growth, which gives rise to mesh-like structures. Assessing the branching retinal vasculature is an important aspect of efficient diagnosis for ophthalmologists. The fundus scans of the eye are first subjected to pre-processing, followed by segmentation. To extract the branching blood vessels, the technique of maximal principal curvature has been applied, which utilizes the maximum eigenvalues of the Hessian matrix. Adaptive histogram equalization and morphological opening are performed afterwards to enhance the vessels and eliminate falsely segmented regions. The proliferation of optic nerves was observed to be much greater in diabetic or affected patients than in healthy ones. We used a convolutional neural network (CNN) to train the classifier for performing classification. The CNN constructed for classification comprises a combination of squeeze-and-excitation and bottleneck layers, one for each class, and a convolution-and-pooling layer architecture for classification between the two classes. For the performance evaluation of the proposed algorithm, we use the DIARETDB1 dataset (a standard diabetic retinopathy dataset) and a dataset provided by a medical institution, comprising fundus scans of both affected and normal retinas. Experimental results show that the proposed algorithm provides improved results when compared to traditional schemes. The model yielded an accuracy of 98.7% and a precision of 97.2% when evaluated on the DIARETDB1 dataset.
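A minimal scikit-image sketch of the maximal-principal-curvature step is given below; the smoothing scale, threshold and structuring element are illustrative assumptions, not the values used by the authors.

# Hedged sketch: the largest Hessian eigenvalue at each pixel highlights
# elongated vessel-like structures, followed by adaptive histogram
# equalization and a morphological opening to clean the result.
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals
from skimage.exposure import equalize_adapthist
from skimage.morphology import binary_opening, disk

def vessel_map(green_channel, sigma=2.0):
    """green_channel: 2-D float array (green plane of a fundus scan)."""
    H_elems = hessian_matrix(green_channel, sigma=sigma, order="rc")
    eigvals = hessian_matrix_eigvals(H_elems)     # sorted, largest first
    return eigvals[0]                             # maximal principal curvature

def segment_vessels(green_channel, threshold=0.02):
    enhanced = equalize_adapthist(green_channel)  # adaptive histogram equalization
    binary = vessel_map(enhanced) > threshold     # assumed threshold
    return binary_opening(binary, disk(1))        # remove falsely segmented specks

if __name__ == "__main__":
    fundus_green = np.random.rand(64, 64)         # placeholder for a real scan
    print(segment_vessels(fundus_green).sum(), "vessel pixels")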

103 citations