Author

Helio Pedrini

Bio: Helio Pedrini is an academic researcher at the State University of Campinas. He has contributed to research on topics including Computer science and Feature extraction, has an h-index of 25, and has co-authored 242 publications receiving 3,383 citations. His previous affiliations include the Universidade Federal de Ouro Preto and the Federal University of Paraná.


Papers
Journal ArticleDOI
TL;DR: This work assumes very limited knowledge about biometric spoofing at the sensor and derives spoofing detection systems for the iris, face, and fingerprint modalities from two deep learning approaches built on convolutional networks.
Abstract: Biometrics systems have significantly improved person identification and authentication, playing an important role in personal, national, and global security. However, these systems might be deceived (or spoofed) and, despite recent advances in spoofing detection, current solutions often rely on domain knowledge, specific biometric reading systems, and attack types. We assume very limited knowledge about biometric spoofing at the sensor to derive outstanding spoofing detection systems for the iris, face, and fingerprint modalities based on two deep learning approaches. The first approach consists of learning suitable convolutional network architectures for each domain, whereas the second focuses on learning the weights of the network via back-propagation. We consider nine biometric spoofing benchmarks, each containing real and fake samples of a given biometric modality and attack type, and learn deep representations for each benchmark by combining and contrasting the two learning approaches. This strategy not only provides better comprehension of how these approaches interplay, but also creates systems that exceed the best known results in eight of the nine benchmarks. The results strongly indicate that spoofing detection systems based on convolutional networks can be robust to attacks already known and, possibly, adapted with little effort to image-based attacks yet to come.

353 citations
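
A minimal sketch may help picture the second approach above (a fixed convolutional architecture whose weights are learned via back-propagation), framed as a binary real-vs-spoof classifier. Everything here is an illustrative assumption (the architecture, the 64x64 input size, and the random stand-in batch), not the configuration published in the paper.

```python
# Hypothetical PyTorch sketch of the paper's second approach: fix a small
# convolutional architecture and learn its weights via back-propagation.
import torch
import torch.nn as nn

class SpoofNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 RGB inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SpoofNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for a benchmark batch.
images = torch.randn(8, 3, 64, 64)   # stand-in batch of 8 RGB crops
labels = torch.randint(0, 2, (8,))   # 0 = real, 1 = spoof
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```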

Journal ArticleDOI
TL;DR: This paper proposes a forgery detection method that exploits subtle inconsistencies in the color of the illumination of images; it is applicable to images containing two or more people and requires no expert interaction for the tampering decision.
Abstract: For decades, photographs have been used to document space-time events and they have often served as evidence in courts. Although photographers are able to create composites of analog pictures, this process is very time consuming and requires expert knowledge. Today, however, powerful digital image editing software makes image modifications straightforward. This undermines our trust in photographs and, in particular, questions pictures as evidence for real-world events. In this paper, we analyze one of the most common forms of photographic manipulation, known as image composition or splicing. We propose a forgery detection method that exploits subtle inconsistencies in the color of the illumination of images. Our approach is machine-learning-based and requires minimal user interaction. The technique is applicable to images containing two or more people and requires no expert interaction for the tampering decision. To achieve this, we incorporate information from physics- and statistical-based illuminant estimators on image regions of similar material. From these illuminant estimates, we extract texture- and edge-based features which are then provided to a machine-learning approach for automatic decision-making. The classification performance using an SVM meta-fusion classifier is promising. It yields detection rates of 86% on a new benchmark dataset consisting of 200 images, and 83% on 50 images that were collected from the Internet.

220 citations
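
A hedged sketch of the pipeline shape described above: estimate an illuminant per region of similar material, turn pairs of estimates into features, and hand them to an SVM. A simple gray-world estimator stands in for the paper's physics- and statistics-based estimators, and random placeholders stand in for its texture- and edge-based features and labeled training set.

```python
# Hypothetical sketch of the pipeline shape: per-region illuminant estimates,
# pairwise features, SVM decision. Gray-world is a stand-in estimator.
import numpy as np
from sklearn.svm import SVC

def grayworld_illuminant(region):
    """Gray-world estimate: mean RGB of a region, normalized to unit length."""
    v = region.reshape(-1, 3).mean(axis=0)
    return v / np.linalg.norm(v)

def pair_features(region_a, region_b):
    """Features for a pair of same-material regions (e.g., two faces)."""
    ia, ib = grayworld_illuminant(region_a), grayworld_illuminant(region_b)
    # Angular distance between illuminant estimates; large values hint at splicing.
    angle = np.arccos(np.clip(ia @ ib, -1.0, 1.0))
    return np.array([angle, *np.abs(ia - ib)])

# Train the SVM on pairs labeled pristine (0) or spliced (1); random stand-ins here.
rng = np.random.default_rng(0)
X = rng.random((40, 4))                  # stand-in feature vectors
y = np.array([0] * 20 + [1] * 20)        # stand-in labels
clf = SVC(kernel="rbf").fit(X, y)

faces = rng.random((2, 32, 32, 3))       # stand-in face regions from a test image
print(clf.predict([pair_features(faces[0], faces[1])]))
```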

Proceedings ArticleDOI
TL;DR: This competition compares the performance of different state-of-the-art algorithms on the same database using a single evaluation method; the results suggest investigating more complex attacks.
Abstract: Spoofing identities using photographs is one of the most common techniques for attacking 2-D face recognition systems. There seem to be no comparative studies of different techniques using the same protocols and data. The motivation behind this competition is to compare the performance of different state-of-the-art algorithms on the same database using a single evaluation method. Six teams from universities around the world participated in the contest. Using one or more techniques from motion analysis, texture analysis, and liveness detection appears to be the common trend in this competition. Most of the algorithms are able to clearly separate spoof attempts from real accesses. The results suggest the investigation of more complex attacks.

180 citations
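
As a rough illustration of the texture-analysis family of countermeasures mentioned above, the sketch below scores face crops as real or spoofed from local-binary-pattern histograms fed to an SVM. The descriptor settings and the synthetic data are assumptions for illustration, not any competing team's actual method.

```python
# Hypothetical texture-based liveness sketch: uniform LBP histograms + SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1):
    """Normalized uniform-LBP histogram of a grayscale face crop in [0, 1]."""
    codes = local_binary_pattern((gray * 255).astype(np.uint8), P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
real = [rng.random((64, 64)) for _ in range(20)]        # stand-in live faces
spoof = [rng.random((64, 64)) ** 2 for _ in range(20)]  # stand-in printed photos
X = np.array([lbp_histogram(g) for g in real + spoof])
y = np.array([0] * 20 + [1] * 20)                       # 0 = real, 1 = spoof

clf = SVC().fit(X, y)
print(clf.predict(X[:3]))  # sanity check on a few training samples
```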

Journal ArticleDOI
23 Nov 2013
TL;DR: This paper presents a novel strategy for extending the GLCM to multiple scales through two different approaches: a Gaussian scale-space representation, constructed by smoothing the image with larger and larger low-pass filters to produce a set of smoothed versions of the original image, and an image pyramid, defined by sampling the image in both space and scale.
Abstract: Texture information plays an important role in image analysis. Although several descriptors have been proposed to extract and analyze texture, the development of automatic systems for image interpretation and object recognition is a difficult task due to the complex aspects of texture. Scale is important information in texture analysis, since the same texture can be perceived as different patterns at distinct scales. Gray-level co-occurrence matrices (GLCM) have proven to be an effective texture descriptor. This paper presents a novel strategy for extending the GLCM to multiple scales through two different approaches: a Gaussian scale-space representation, which is constructed by smoothing the image with larger and larger low-pass filters, producing a set of smoothed versions of the original image, and an image pyramid, which is defined by sampling the image in both space and scale. The performance of the proposed approach is evaluated by applying the multi-scale descriptor to five benchmark texture data sets, and the results are compared to other well-known texture operators, including the original GLCM, which, although faster than the proposed method, is significantly outperformed in accuracy.

173 citations
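
The two multi-scale constructions can be sketched directly, as below: GLCM statistics computed over a Gaussian scale space (progressively stronger smoothing at a fixed resolution) and over a Gaussian image pyramid (smoothing plus downsampling), concatenated across scales. The filter sizes, GLCM settings, and choice of statistics are illustrative assumptions, not the paper's exact descriptor.

```python
# Hypothetical multi-scale GLCM descriptors: Gaussian scale space vs. pyramid.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops
from skimage.transform import pyramid_gaussian

def glcm_features(gray):
    """Contrast and homogeneity from a GLCM at distance 1, angle 0."""
    img = (np.clip(gray, 0, 1) * 255).astype(np.uint8)  # expects floats in [0, 1]
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, "contrast")[0, 0], graycoprops(glcm, "homogeneity")[0, 0]]

rng = np.random.default_rng(2)
image = rng.random((128, 128))  # stand-in texture patch

# Gaussian scale space: fixed resolution, larger and larger low-pass filters.
scale_space_desc = np.concatenate(
    [glcm_features(gaussian_filter(image, sigma)) for sigma in (0, 1, 2, 4)])

# Image pyramid: each level is both smoothed and downsampled.
pyramid_desc = np.concatenate(
    [glcm_features(level) for level in pyramid_gaussian(image, max_layer=3)])

print(scale_space_desc.shape, pyramid_desc.shape)  # (8,) and (8,)
```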


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at first: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Dissertation
01 Jan 1975

2,119 citations

Reference EntryDOI
15 Oct 2004

2,118 citations

Journal Article
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality!

1,992 citations
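
The paper's central observation is easy to reproduce numerically for the i.i.d. uniform case it covers: as dimensionality grows, the gap between a query's nearest and farthest neighbors shrinks relative to the nearest distance. A small demonstration, with sample sizes and dimensions chosen arbitrarily:

```python
# Numeric illustration: relative nearest/farthest contrast collapses with dimension.
import numpy as np

rng = np.random.default_rng(3)
for d in (2, 10, 100, 1000):
    data = rng.random((1000, d))   # 1000 i.i.d. uniform points in the unit hypercube
    query = rng.random(d)
    dists = np.linalg.norm(data - query, axis=1)
    # Relative contrast (farthest - nearest) / nearest tends toward 0 as d grows.
    print(d, (dists.max() - dists.min()) / dists.min())
```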

01 Jan 2016
Remote Sensing and Image Interpretation

1,802 citations