
Showing papers by "Terrance E. Boult" published in 2005


Proceedings ArticleDOI
17 Nov 2005
TL;DR: This paper presents an example demonstrating how cryptographic ideas, adapted and combined with intelligent video processing, can yield technological solutions to critical security/privacy trade-offs, potentially improving both security and privacy.
Abstract: Significant research progress has been made in intelligent imaging systems, surveillance and biometrics, improving robustness, increasing performance and decreasing cost. As a result, deployment of surveillance and intelligent video systems is booming, increasing their impact on privacy. For many, networked intelligent video systems, especially video surveillance and biometrics, epitomize the invasion of privacy by an Orwellian "big brother". While tens of millions in government funding have been spent on research improving video surveillance, virtually none has been invested in technologies to enhance privacy or effectively balance privacy and security. This paper presents an example that demonstrates how, by using and adapting cryptographic ideas and combining them with intelligent video processing, technological approaches can provide solutions addressing these critical trade-offs, potentially improving both security and privacy. After reviewing previous research in privacy-improving technology in video systems, the paper presents cryptographically invertible obscuration: an application of encryption techniques that improves privacy while allowing general surveillance to continue, and allows full access (i.e. violation of privacy) only with use of a decryption key, maintained by a court or other third party.
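The core idea admits a short sketch. Below is a minimal illustration of invertible obscuration using off-the-shelf authenticated symmetric encryption (the Python `cryptography` package); the region handling, key management, and ciphertext storage are assumptions for illustration, not the paper's actual construction.

```python
# Minimal sketch of cryptographically invertible obscuration: a detected
# region (e.g., a face) is encrypted and blanked in the frame, so the
# obscured stream stays viewable while only the holder of the escrowed
# key can restore the original pixels. Illustrative assumptions only.
import numpy as np
from cryptography.fernet import Fernet  # authenticated symmetric encryption

def obscure_region(frame, box, key):
    """Encrypt the pixels in box=(x, y, w, h), blank them in-place, and
    return the ciphertext to be stored alongside the video."""
    x, y, w, h = box
    token = Fernet(key).encrypt(frame[y:y + h, x:x + w].tobytes())
    frame[y:y + h, x:x + w] = 0
    return token

def restore_region(frame, box, token, key):
    """Invert the obscuration; requires the escrowed decryption key."""
    x, y, w, h = box
    pixels = np.frombuffer(Fernet(key).decrypt(token), dtype=frame.dtype)
    frame[y:y + h, x:x + w] = pixels.reshape(h, w, -1)

key = Fernet.generate_key()   # in the paper's scenario, held by a third party
frame = np.zeros((480, 640, 3), dtype=np.uint8)
token = obscure_region(frame, (100, 50, 64, 64), key)
restore_region(frame, (100, 50, 64, 64), token, key)
```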

117 citations


Patent
14 Oct 2005
TL;DR: In this article, various biometric cryptographically secure revocable transformation approaches are described that support a robust pseudo-distance computation in encoded form, thereby supporting confidence in verification, and which can provide for verification without identification.
Abstract: Techniques, systems and methods relating to cryptographically secure revocable biometric signatures and identification computed with robust distance metrics are described. Various biometric cryptographically secure revocable transformation approaches are described that support a robust pseudo-distance computation in encoded form (such as seen in figure 4), thereby supporting confidence in verification, and which can provide for verification without identification.
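As a rough sketch of matching in encoded form, the snippet below uses a seeded random orthonormal projection as a stand-in revocable transform: it preserves Euclidean distances, so a pseudo-distance between templates can be computed without ever decoding them, and a compromised template is revoked by issuing a new seed. This is an illustrative stand-in, not the patented scheme.

```python
# Sketch of a revocable biometric transform: a per-user secret seed
# generates a random orthonormal matrix Q, so distances between encoded
# templates equal distances between raw ones (||Qa - Qb|| == ||a - b||)
# and matching never exposes the raw biometric. Revocation = new seed.
import numpy as np

def revocable_encode(features, seed):
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(features.size, features.size)))
    return q @ features

enrolled = revocable_encode(np.random.rand(64), seed=1234)
probe = revocable_encode(np.random.rand(64), seed=1234)
distance = np.linalg.norm(enrolled - probe)   # computed in encoded form
accepted = distance < 1.0                     # illustrative threshold
```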

58 citations


Proceedings ArticleDOI
29 Aug 2005
TL;DR: This is the first paper to focus on predicting the failure of a recognizer (or classifier) and verifying the correctness of the recognition (or classification) system, and this research provides a unique component to the overall understanding of biometric systems.
Abstract: Object recognition (or classification) systems largely emphasize improving system performance and focus on "positive" recognition (or classification). Few papers have addressed the prediction of recognition-algorithm failures, even though it is a very relevant issue and can be very important in overall system design. This is the first paper to focus on predicting the failure of a recognizer (or classifier) and verifying the correctness of the recognition (or classification) system. This research provides a unique component to the overall understanding of biometric systems. The approach presented in the paper is the post-recognition analysis techniques (PRAT), in which the similarity scores used in recognition are analyzed to predict system failure, or to verify system correctness, after a recognizer has been applied. Applying AdaBoost learning, the approach combines features computed from the similarity measures to produce a patent-pending system that predicts the failure of a biometric system. Because the approach is learning-based, the PRAT is a general paradigm for predicting the failure of any "similarity-based" recognition (or classification) algorithm. Failure prediction using a leading commercial face recognition system is presented as an example of how to use the approach. On outdoor weathered face data, the system demonstrated the ability to predict 90% of the underlying facial recognition system's failures with a 15% false alarm rate.
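The PRAT idea is easy to sketch: derive features from the recognizer's sorted similarity scores (e.g., the top score and the gap to the runner-up) and train AdaBoost to flag attempts likely to fail. The feature set and scikit-learn usage below are illustrative choices, not the paper's exact, patent-pending pipeline.

```python
# Sketch of post-recognition failure prediction: features computed from
# a recognizer's similarity scores feed an AdaBoost classifier that
# labels each attempt as likely-correct or likely-failed.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def score_features(similarities):
    s = np.sort(similarities)[::-1]           # best match first
    return [s[0], s[0] - s[1], s[:5].mean(), s.std()]

# X: one feature vector per past recognition attempt; y: 1 = it failed.
rng = np.random.default_rng(0)
X = [score_features(rng.random(100)) for _ in range(500)]
y = rng.integers(0, 2, size=500)              # placeholder labels
predictor = AdaBoostClassifier(n_estimators=50).fit(X, y)

will_fail = predictor.predict([score_features(rng.random(100))])
```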

30 citations


Patent
19 Aug 2005
TL;DR: In this paper, the authors proposed a method for computing biometric signatures and identification that are projective invariant and hence are not impacted by the viewing angle of the subregion of the human body containing the biometric data.
Abstract: Techniques, systems and methods for obtaining biometric signatures and identification are described. Broadly stated, embodiments of the present invention utilize specified geometric principles to provide a means for accurate biometric identification using projective invariant features of a subregion of the human body. The present invention provides a means for computing biometric signatures and identification that are projective invariant and hence are not impacted by the viewing angle of the subregion of the human body containing the biometric data. This removes the restriction, often implicit in previous work, that the imaging or sensing system be in a fixed, repeatable (and generally orthogonal) viewing position. The invention can be applied across a wide range of biometrics, although it is most easily applicable to features that are approximately co-planar. A plurality of such projective invariant features can be used to define a biometric signature to either verify an individual's identity or recognize an individual from a database of already known persons.
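A canonical projective invariant of this kind is the cross-ratio of four collinear points, which is unchanged under any homography and hence under any change of camera viewpoint. The sketch below computes it for four hypothetical collinear landmarks and verifies the invariance; the landmark choice is illustrative, not the patent's specific feature set.

```python
# The cross-ratio (A,B;C,D) = (|AC|*|BD|) / (|BC|*|AD|) of four collinear
# points is invariant under projective transformations, making it a
# viewpoint-independent feature for approximately co-planar biometrics.
import numpy as np

def cross_ratio(a, b, c, d):
    n = np.linalg.norm
    return (n(c - a) * n(d - b)) / (n(c - b) * n(d - a))

pts = [np.array([x, 2.0 * x]) for x in (0.0, 1.0, 3.0, 7.0)]  # collinear
signature = cross_ratio(*pts)

# Any homography (here an arbitrary 3x3 map) leaves the value unchanged.
H = np.array([[1.2, 0.1, 3.0], [0.0, 0.9, -1.0], [1e-3, 2e-3, 1.0]])

def warp(p):
    v = H @ np.append(p, 1.0)
    return v[:2] / v[2]

assert np.isclose(signature, cross_ratio(*[warp(p) for p in pts]))
```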

20 citations


Book ChapterDOI
TL;DR: A novel technique for improving face recognition performance by predicting system failure, and, if necessary, perturbing eye coordinate inputs and repredicting failure as a means of selecting the optimal perturbation for correct classification.
Abstract: This paper presents a novel technique for improving face recognition performance by predicting system failure and, if necessary, perturbing the eye coordinate inputs and re-predicting failure as a means of selecting the optimal perturbation for correct classification. This relies on a method that can accurately identify patterns leading to more accurate classification, without modifying the classification algorithm itself. To this end, a neural network is used to learn 'good' and 'bad' wavelet transforms of similarity score distributions from an analysis of the gallery. In production, face images with a high likelihood of having been incorrectly matched are reprocessed using perturbed eye coordinate inputs, and the best results are used to "correct" the initial results. The overall approach suggests a more general strategy: using input perturbations to increase classifier performance. Results for both commercial and research face-based biometrics are presented using both simulated and real data. The statistically significant results show strong potential for this approach to improve system performance, especially with uncooperative subjects.
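The perturb-and-repredict loop can be sketched as below; `match` and `predict_failure` are hypothetical stand-ins for the underlying recognizer and the trained failure predictor.

```python
# Sketch of perturb-and-repredict: when failure is predicted, jitter the
# eye coordinates, rerun matching, and keep the result the failure
# predictor scores as least likely to be wrong.
import itertools

def best_match(image, eyes, match, predict_failure, radius=2):
    """eyes = (x_left, y_left, x_right, y_right); match(image, eyes)
    returns similarity scores; predict_failure(scores) returns a
    likelihood of failure in [0, 1]."""
    candidates = []
    for d in itertools.product(range(-radius, radius + 1), repeat=4):
        perturbed = tuple(e + o for e, o in zip(eyes, d))
        scores = match(image, perturbed)
        candidates.append((predict_failure(scores), scores))
    return min(candidates, key=lambda c: c[0])[1]
```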

13 citations


Proceedings ArticleDOI
20 Jun 2005
TL;DR: The classification error statistics of the trained strong classifier are characterized as a function of the true distributions of classes, the collection of the weak classifiers, and the size of the training set to show that they can be numerically computed and the results are more accurate than previous bounds in the literature.
Abstract: Boosting algorithms have been widely applied in machine vision systems. Two fundamental issues that must be solved in these systems are how much training data and how many Boosting rounds are needed to achieve a desired performance. We view the Boosting algorithm as a nonlinear estimation scheme that estimates a strong classifier from a given training sample set (generated by sampling a true, unknown distribution), the weak classifiers, and the number of Boosting rounds T. The performance characterization of this estimator involves deriving the classification error statistics of the trained strong classifier as a function of the training set and the collection of weak classifiers. Although the convergence and error bounds for the training error and generalization error of these algorithms have been studied for several years, the estimated bounds are still loose and are only meaningful for large training sets. With no effective tools for determining the error bounds, users currently collect as much training data as they can afford, with no good way to know if it is sufficient. In this paper, we characterize the classification error statistics of the trained strong classifier as a function of the true class distributions, the collection of weak classifiers, and the size of the training set. We show that these statistics can be numerically computed and that the results are more accurate than previous bounds in the literature. The theoretical results are verified through simulations. Face detection is used as a case study to illustrate the application of the theory to real data.
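For context on why existing bounds are loose: the classical AdaBoost training-error bound of Freund and Schapire, err <= prod_t 2*sqrt(eps_t*(1 - eps_t)), can be evaluated numerically and contrasted with an empirical training error, as sketched below with made-up weak-learner error rates. The paper's exact error statistics are a separate derivation not reproduced here.

```python
# Evaluate the classical AdaBoost training-error bound and contrast it
# with the empirical training error of an actual run; a real analysis
# would substitute the observed per-round weak-learner errors eps_t.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

def classical_bound(weak_errors):
    return float(np.prod([2 * np.sqrt(e * (1 - e)) for e in weak_errors]))

T = 20
print(classical_bound([0.4] * T))   # assumed eps_t = 0.4 -> bound ~ 0.66

X, y = make_classification(n_samples=400, random_state=0)
clf = AdaBoostClassifier(n_estimators=T).fit(X, y)
print(1.0 - clf.score(X, y))        # empirical training error, usually far smaller
```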

7 citations