Author
Madhura Datta
Bio: Madhura Datta is an academic researcher from the University of Calcutta. The author has contributed to research in topics: Spoofing attack & Face (geometry). The author has an h-index of 3, having co-authored 7 publications receiving 14 citations.
Papers
TL;DR: A set estimation technique is applied to generate 2D face images by inheriting features from inter- and intra-face classes in face space; the generated faces are correctly classified using a nearest neighbor classifier.
Abstract: In this paper, a set estimation technique is applied to the generation of 2D face images. The synthesis is based on inheriting features from inter- and intra-face classes in face space. Face images without artifacts and expressions are transformed into images with artifacts and expressions using the developed methods. Most of the test images are generated by the proposed method. The PSNR values measured for the generated faces with respect to the training faces reflect the well-accepted quality of the generated images. The generated faces are also classified correctly into their respective face classes using a nearest neighbor classifier. The method is validated on the AR and FIA datasets. Classification accuracy increases when the newly generated faces are added to the training set.
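The PSNR quality measure mentioned above can be sketched as follows. This is the standard PSNR computation, not the authors' exact evaluation pipeline; the toy images are illustrative only.

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two equally sized grayscale images."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a reference face patch and a noisy "generated" version.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
gen = np.clip(ref + rng.normal(0, 5, size=(64, 64)), 0, 255)
print(round(psnr(ref, gen), 1))
```

Higher PSNR (in dB) indicates a generated face closer to the corresponding training face.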
5 citations
TL;DR: An intra-class threshold for multimodal biometric recognition has been developed and is found to perform better than the traditional ROC-curve-based threshold technique.
Abstract: Biometric recognition techniques have attracted researchers for the last two decades due to their many applications in the field of security. In recent times, multimodal biometrics have been found to perform better, in several aspects, than unimodal biometrics. The classical approach to recognition is based on a dissimilarity measure, and for proper classification one needs to put a threshold on the dissimilarity value. In this paper, an intra-class threshold for a multimodal biometric recognition procedure has been developed. The authors' threshold selection method is based on a statistical set estimation technique applied to a minimal spanning tree whose nodes are fused face and iris images. The fusion is performed at the feature level using face and iris biometrics. The proposed method, applied to several multimodal datasets, is found to perform better than the traditional ROC-curve-based threshold technique.
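The minimal-spanning-tree threshold idea can be illustrated with a small sketch. The paper does not give its exact set estimation rule, so the choice below (take the longest MST edge within a class's fused feature vectors as the acceptance threshold) is an assumed simplification.

```python
import numpy as np

def mst_edge_lengths(points: np.ndarray) -> np.ndarray:
    """Edge lengths of a Euclidean minimum spanning tree (Prim's algorithm)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()  # cheapest connection of each node to the tree
    edges = []
    for _ in range(n - 1):
        best[in_tree] = np.inf          # tree nodes are no longer candidates
        j = int(np.argmin(best))        # attach the closest outside node
        edges.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return np.array(edges)

def intra_class_threshold(class_features: np.ndarray) -> float:
    """Hypothetical threshold: the longest MST edge among one class's fused
    face+iris feature vectors; a query closer than this could be accepted."""
    return float(mst_edge_lengths(class_features).max())
```

A query sample whose distance to the class falls under this threshold would be accepted as a genuine match; the actual paper derives the threshold statistically rather than from the raw maximum.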
4 citations
01 Jan 2011
TL;DR: A face recognition task can, in general, be viewed as a combination of two phases: face authentication (verification) and face identification.
Abstract: A face recognition task can, in general, be viewed as a combination of two phases: face authentication (or verification) and face identification. Several evaluation protocols (Philips, 2003; Jain, 2004; Blackburn, 2004) have been designed for measuring the performance of existing algorithms. Among the popular methods, appearance-based methods (Zhao, 2003; Moghaddam, 2004; Solar, 2005; Maltoni, 2005) are generally based on dissimilarity, where the query image is either assigned to the class for which the dissimilarity is minimal or to the class from which the maximum number of matches is found. This is a classical approach to identification.
3 citations
TL;DR: The authors propose an anti-spoofing approach using a multivariate histogram-of-oriented-gradients descriptor in automatically detected micro-expression (μE) regions of human facial videos.
Abstract: Facial video presentation is a topic of interest in many security systems due to its non-intrusive nature. However, such systems are vulnerable to spoof attacks made with fake face videos, which can thereby gain unauthorized access to the system. For a robust biometric system, anti-spoofing measures such as liveness detection ought to be implemented to counter print and replay attacks. This article proposes a novel anti-spoofing approach using a multivariate histogram-of-oriented-gradients descriptor in automatically detected micro-expression (μE) regions of human facial videos. Facial μE are very brief, spontaneous facial expressions that appear on a person's face when they either unconsciously or deliberately conceal an emotion. The work shows that μE variations differ considerably between fake and original video representations and claims this variance is a tool to combat presentation attacks. In particular, the method automatically extracts the ROIs of major μE changes using the multivariate orientation-gradient parameter, and thus proposes this descriptor as a suitable tool to characterize liveness. The implementation is carried out on a self-created replay-attack database. The result obtained is satisfactory and tested to be statistically significant.
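A gradient-orientation histogram of the kind underlying HOG descriptors can be sketched as below. This is a plain single-patch histogram standing in for the paper's multivariate HOG descriptor, whose exact construction is not specified here.

```python
import numpy as np

def orientation_histogram(patch: np.ndarray, bins: int = 9) -> np.ndarray:
    """Simplified gradient-orientation histogram for one ROI patch
    (a stand-in for the paper's multivariate HOG descriptor)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    idx = np.minimum((angle / np.pi * bins).astype(int), bins - 1)
    # Magnitude-weighted vote of every pixel into its orientation bin.
    hist = np.bincount(idx.ravel(), weights=magnitude.ravel(), minlength=bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Comparing such histograms between successive frames of a μE region would expose the flattened texture dynamics of a replayed video versus a live face.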
3 citations
TL;DR: A modified high-frequency descriptor is used to discriminate between live and fake facial video streams; it works efficiently for changes in facial micro-expressions (μE) in the higher frequency spectrum.
Abstract: Facial replay attacks have been a topic of interest in the recent past due to the vulnerability of biometric security systems to intrusion. To build a robust biometric system, many safeguards have already been developed by researchers to nullify spoofing activities such as print and replay attacks. This paper presents a comprehensive study of the application of the multidimensional Fourier transform to combat replay attacks. Since the higher frequencies of the multidimensional Fourier transform contain the major feature variations, the liveness of a face is mostly reflected in the high-frequency spectrum. Spontaneous facial expressions such as micro-expressions (μE) carry detailed inner facial variations. In this novel approach, a modified high-frequency descriptor is used to discriminate properly between live and fake facial video streams. The descriptor works particularly efficiently for changes in facial μE. Noise accompanying the feature variation is unavoidable in the higher frequency spectrum; the method therefore, during the pre-processing phase, not only extracts the video frames with major μE changes but also filters out frames carrying any abrupt expression change (macro-expression) or spike noise. The selected frame sequence is thereafter fed into the multidimensional Fourier plane to detect liveness. The experiment is performed on a self-created dataset and also tested on a standard playback-attack dataset. The result obtained by the proposed anti-spoofing approach is satisfactory and verified to be statistically significant.
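The high-frequency-spectrum idea can be illustrated with a generic high-pass energy statistic. This is an assumed simplified proxy for the paper's modified high-frequency descriptor, not its actual formulation; the `cutoff` parameter is hypothetical.

```python
import numpy as np

def high_frequency_energy(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a centered low-frequency square
    of the 2D Fourier plane; a crude liveness cue (higher = more fine detail)."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame.astype(np.float64)))
    power = np.abs(spectrum) ** 2
    h, w = power.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    # Energy inside the centered low-frequency block (DC sits at the center
    # after fftshift).
    low = power[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    total = power.sum()
    return float((total - low) / total) if total > 0 else 0.0
```

A replayed video, re-captured through a screen, typically loses fine facial detail, so frames with genuine μE changes would score higher on such a statistic than their spoofed counterparts.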
3 citations
Cited by
Journal Article
TL;DR: This paper combines face and iris features to develop a multimodal biometrics approach, which diminishes the drawbacks of a single-biometric approach and improves the performance of the authentication system.
Abstract: The recognition accuracy of a single-biometric authentication system is often much reduced by the environment, user mode and physiological defects. In this paper, we combine face and iris features to develop a multimodal biometric approach, which diminishes the drawbacks of a single-biometric approach and improves the performance of the authentication system. We combine the ORL face database and the CASIA iris database to construct a multimodal biometric experimental database, with which we validate the proposed approach and evaluate multimodal biometric performance. The experimental results reveal that multimodal biometric verification is much more reliable and precise than a single-biometric approach.
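Feature-level fusion of two modalities is often done by normalising and concatenating the per-modality feature vectors. The sketch below uses z-score normalisation and concatenation as one common scheme; the paper's exact fusion rule is not specified here.

```python
import numpy as np

def fuse_features(face_vec: np.ndarray, iris_vec: np.ndarray) -> np.ndarray:
    """Feature-level fusion: z-score each modality so neither dominates
    by scale, then concatenate into one multimodal feature vector."""
    def zscore(v: np.ndarray) -> np.ndarray:
        s = v.std()
        return (v - v.mean()) / s if s > 0 else v - v.mean()
    return np.concatenate([zscore(face_vec), zscore(iris_vec)])
```

The fused vector can then be fed to any single-modality matcher (e.g. nearest neighbor on Euclidean distance) unchanged.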
53 citations
TL;DR: The proposed methodology consists of three main phases, each with several steps in which the activities to be carried out are clearly defined.
Abstract: Metaheuristic algorithms will gain more and more popularity in the future as optimization problems increase in size and complexity. In order to record experience and allow projects to be replicated, a standard process serving as a methodology for designing and implementing metaheuristic algorithms is necessary. To the best of the authors' knowledge, no methodology has been proposed in the literature for this purpose. This paper presents a Design and Implementation Methodology for Metaheuristic Algorithms, named DIMMA. The proposed methodology consists of three main phases, each with several steps in which the activities to be carried out are clearly defined. In addition, the design and implementation of a tabu search metaheuristic for the travelling salesman problem is presented as a case study to illustrate the applicability of DIMMA.
25 citations
31 Dec 2011
TL;DR: This chapter describes an investigation into the premise that blind programmers and web-developers can create modern Graphical User Interfaces (GUI) through perceptions of MulSeMedia, and whether perceptual culture has a role in this understanding.
Abstract: This chapter describes an investigation into the premise that blind programmers and web developers can create modern Graphical User Interfaces (GUIs) through perceptions of MulSeMedia, and whether perceptual culture has a role in this understanding. Its purpose is to: 1) investigate whether the understanding of computer interfaces is related to perceptual culture as well as perceptual ability; 2) investigate whether it is possible for a person who has never seen to understand visual concepts in information technology through non-visual senses and memories; and 3) provoke questions as to the nature of computer interfaces, and whether they can ever be regarded as MulSeMedia-style interfaces. Beyond this, it proposes to: 1) inform accessible MulSeMedia interface design; and 2) investigate the boundaries of accessing computer interfaces through non-visual perceptions and memories. In order to address these aims and objectives, this chapter discusses the following two research questions: 1) Is the perceptual culture of a blind person as important as the physical level of blindness in being able to understand, work with, learn how to use, or create and program Graphical User Interfaces (GUIs)? 2) Can a cultural model of understanding blindness in part explain the difficulties in adapting Windows MulSeMedia applications for blind people? The study found that programmers who had been introduced to, and educated using, a range of visual, audio and/or tactile devices, whether early or late blind, could adapt to produce code with GUIs, but programmers who were educated using only tactile and audio devices preferred to shun visual references in their work.
18 citations
TL;DR: This paper provides an overview of the relationship between Petri Nets and Discrete Event Systems, which have proved to be key factors in the cognitive processes of perception and memorization.
Abstract: This paper provides an overview of the relationship between Petri Nets and Discrete Event Systems, as they have proved to be key factors in the cognitive processes of perception and memorization. In this sense, different aspects of encoding Petri Nets as Discrete Dynamical Systems are revised, advancing not only the problem of reachability but also that of describing the periodicity of markings and their similarity. A metric for the case of non-bounded Petri Nets is also provided.
17 citations
TL;DR: An algorithm selection approach that permits always using the most appropriate algorithm for the given input image, by first selecting an algorithm based on low-level features such as color intensity, histograms, and spectral coefficients.
Abstract: Natural image processing and understanding encompasses hundreds or even thousands of different algorithms. Each algorithm has a certain peak performance for a particular set of input features and configurations of the objects/regions of the input image (environment). To obtain the best possible processing result, we propose an algorithm selection approach that makes it possible to always use the most appropriate algorithm for the given input image. This is achieved by first selecting an algorithm based on low-level features such as color intensity, histograms, and spectral coefficients. The resulting high-level image description is then analyzed for logical inconsistencies (contradictions), which are used to refine the selection of the processing elements. The feedback created from the contradiction information is executed by a Bayesian network that integrates both the features and a higher-level information selection process. The selection stops when all high-level inconsistencies are resolved or no further algorithms can be selected.
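The select-run-check-refine loop described above can be sketched schematically. All names here (`Algorithm`, `fitness`, `contradicts`) are hypothetical illustrations; the paper's actual refinement is driven by a Bayesian network, which this greedy loop does not model.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Algorithm:
    name: str
    fitness: Callable[[Dict], float]     # suitability score from low-level features
    run: Callable[[Dict], List[str]]     # produces high-level description labels

def select_and_run(features: Dict, algorithms: List[Algorithm],
                   contradicts: Callable[[List[str]], bool]) -> List[str]:
    """Try candidate algorithms in decreasing fitness order until one yields
    a description free of logical contradictions, or no candidates remain."""
    ranked = sorted(algorithms, key=lambda a: a.fitness(features), reverse=True)
    for algo in ranked:
        description = algo.run(features)
        if not contradicts(description):
            return description           # consistent result: stop here
    return []                            # every candidate was inconsistent
```

In the paper, the contradiction feedback updates a probabilistic model rather than simply falling through to the next candidate, but the stopping condition (all inconsistencies resolved, or candidates exhausted) matches.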
12 citations