
Showing papers by "Terrance E. Boult" published in 2008


Patent
26 Nov 2008
TL;DR: In this article, various transformation approaches are described that provide a secure means for transforming a stored or live, secure biometric-based identity token, embedding data into such tokens and biometric-based matching to both verify the user's identity and recover the embedded data.
Abstract: Techniques, systems and methods are described relating to combining biometric and cryptographic techniques to support securely embedding data within a token and subsequent biometrically-enabled recovery of said data. Various transformation approaches are described that provide a secure means for transforming a stored or live, secure biometric-based identity token, embedding data into such tokens and biometric-based matching to both verify the user's identity and recover the embedded data. Security enhancements to a range of existing protocols are described using the techniques. Systems using novel protocols based on these techniques are described.

53 citations


Proceedings ArticleDOI
21 Oct 2008
TL;DR: Bipartite biotokens offer a convenient enhancement to keys and passwords, allowing for tighter auditing and non-repudiation, as well as protection from phishing and man-in-the-middle attacks.
Abstract: Cryptographic protocols are the foundation of secure network infrastructure, facilitating authentication, transactions, and data integrity. In traditional cryptographic protocols, generated keys (and, in most cases, passwords) are used. The utility of biometrics as a convenient and reliable method for authentication has emerged in recent years, but little work has been performed on a serious integration of biometrics with cryptographic protocols. In this paper, we review the notion of revocable biotokens, explain their nesting properties, and extend them to bipartite biotokens, which we use to develop protocols for transactions, digital signatures, and a biometric version of Kerberos. We show that bipartite biotokens offer a convenient enhancement to keys and passwords, allowing for tighter auditing and non-repudiation, as well as protection from phishing and man-in-the-middle attacks.

47 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: Support vector machines are introduced as an additional approach for accurately classifying failure in facial recognition algorithms, and experimental results show that biometric system failure can be reliably predicted with the SVM approach.
Abstract: The notion of quality in biometric system evaluation has often been restricted to raw image quality, with a prediction of failure leaving no other option but to acquire another sample image of the subject at large. The very nature of this sort of failure prediction is very limiting, both for identifying situations where algorithms fail and for automatically compensating for failure conditions. Moreover, when expressed in an ROC curve, image quality paints an often misleading picture regarding its potential to predict failure. In this paper, we extend previous work on predicting algorithmic failures via similarity surface analysis. To generate the surfaces used for comparison, we define a set of new features derived from distance measures or similarity scores from a recognition system. For learning, we introduce support vector machines as yet another approach for accurate classification. A large set of scores from facial recognition algorithms is evaluated, including EBGM, robust PCA, robust revocable PCA, and a leading commercial algorithm. Experimental results show that we can reliably predict biometric system failure using the SVM approach.
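
As a rough illustration of the classification step described above, the sketch below trains an SVM on features derived from each probe's similarity scores (top score, gap to the runner-up, spread of the leading scores) and predicts whether recognition will fail. The specific features, the synthetic data, and the scikit-learn setup are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch of SVM-based failure prediction from similarity scores.
# The feature set below is an assumption, not the paper's exact features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def similarity_features(scores):
    """Derive simple features from one probe's gallery similarity scores."""
    s = np.sort(scores)[::-1]          # best match first
    return np.array([
        s[0],                          # top score
        s[0] - s[1],                   # gap to runner-up
        s[0] - s[1:6].mean(),          # gap to the next few scores
        s[:10].std(),                  # spread of the leading scores
    ])

# scores_matrix: one row of gallery similarity scores per probe (synthetic data)
rng = np.random.default_rng(0)
scores_matrix = rng.random((500, 100))
labels = rng.integers(0, 2, 500)       # 1 = rank-1 match was correct, 0 = failure

X = np.vstack([similarity_features(row) for row in scores_matrix])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("failure-prediction accuracy:", clf.score(X_te, y_te))
```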

26 citations


Proceedings ArticleDOI
08 Dec 2008
TL;DR: This paper develops a full multi-modal recognition system integrating an FPROC fusion-based failure prediction engine, and shows a significant improvement in recognition performance with the fusion approach, over the baseline recognition results and previous fusion approaches.
Abstract: Competing notions of biometric recognition system failure prediction have emerged recently, which can roughly be categorized as quality and non-quality based approaches. Quality, while well correlated overall with recognition performance, is a weaker indication of how the system will perform in a particular instance - something of primary importance for critical installations, screening areas, and surveillance posts. An alternative approach, incorporating a failure prediction receiver operating characteristic (FPROC) analysis, has been proposed to overcome the limitations of the quality approach, yielding accurate predictions on a per-instance basis. In this paper, we develop a full multi-modal recognition system integrating an FPROC fusion-based failure prediction engine. Four different fusion techniques to enhance failure prediction are developed and evaluated for this system. We present results for the NIST BSSR1 multi-modal data set, and a larger "chimera" set also composed of data from BSSR1. Our results show a significant improvement in recognition performance with the fusion approach, over the baseline recognition results and previous fusion approaches.
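
For readers unfamiliar with the FPROC idea, the sketch below shows one plausible way to fuse per-modality failure-prediction confidences and sweep a threshold to trace an FPROC-style curve (false alarms on good matches versus missed failures). The weighted-average fusion rule and all variable names are assumptions for illustration; the four fusion techniques evaluated in the paper are not reproduced here.

```python
# Illustrative FPROC-style analysis with a simple score-level fusion rule.
# The weighted average and the synthetic data are assumptions for illustration.
import numpy as np

def fproc_curve(confidence, failed, thresholds):
    """For each threshold, flag instances with confidence below it as predicted
    failures, and report (false-alarm rate, miss rate)."""
    points = []
    for t in thresholds:
        predicted_fail = confidence < t
        false_alarm = np.mean(predicted_fail[~failed])   # good matches flagged
        miss = np.mean(~predicted_fail[failed])          # failures not flagged
        points.append((t, false_alarm, miss))
    return points

rng = np.random.default_rng(1)
n = 1000
conf_face   = rng.random(n)      # per-instance prediction confidence, modality 1
conf_finger = rng.random(n)      # per-instance prediction confidence, modality 2
failed = rng.random(n) < 0.1     # ground truth: did recognition fail?

fused = 0.5 * conf_face + 0.5 * conf_finger   # simple weighted-average fusion
for t, fa, miss in fproc_curve(fused, failed, np.linspace(0.1, 0.9, 5)):
    print(f"threshold={t:.2f}  false_alarm={fa:.3f}  miss={miss:.3f}")
```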

19 citations


Proceedings ArticleDOI
16 Mar 2008
TL;DR: This paper summarizes 7 years of effort on face at a distance, which for us is far more than a fad, and presents experimental results showing the limitations of existing systems at significant distance and under non-ideal weather conditions, along with some reasons for the weak performance.
Abstract: The issues of applying facial recognition at significant distances are non-trivial and often subtle. This paper summarizes 7 years of effort on face at a distance, which for us is far more than a fad. Our effort started under the DARPA Human Identification at a Distance (HID) program. Of all the programs under HID, only a few of the efforts demonstrated face recognition at greater than 25 ft, and only one, led by Dr. Boult, studied face recognition at distances greater than 50 meters. Two issues were explicitly studied. The first was atmospherics/weather, which can have a measurable impact at these distances. The second area was sensor issues, including resolution, field-of-view and dynamic range. This paper starts with a discussion and some results on sensor-related issues, including resolution, FOV, dynamic range and lighting normalization. It then discusses the "Photohead" technique developed to analyze the impact of weather/imaging and atmospherics at medium distances. The paper presents experimental results showing the limitations of existing systems at significant distance and under non-ideal weather conditions and presents some reasons for the weak performance. It ends with a discussion of our FASST™ (failure prediction from similarity surface theory) and RandomEyes™ approaches, combined into the FIINDER™ system, and how they improved FAAD.

8 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: This paper provides two data sets for the unseen challenge of the First IEEE Workitorial on Vision of the Unseen (WVU) to challenge researchers to assess their state-of-the-art digital image steganalysis algorithms.
Abstract: Nowadays, it is paramount to study and develop robust algorithms to detect the very existence of hidden messages in digital images. In this paper, we provide two data sets for the unseen challenge of the First IEEE Workitorial on Vision of the Unseen (WVU). Example usage of the data sets is demonstrated with the Stegdetect analysis tool, with surprising results reported. Our objective is to challenge researchers to assess their state-of-the-art digital image steganalysis algorithms.

7 citations


Book ChapterDOI
01 Jan 2008

6 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: In this work, two new and important qualities of features are defined that expand upon the usual local feature information: spatio-temporal consistency, which quantifies how consistently and smoothly a feature has been tracked over prior frames, and distributivity, which quantifies physical distance from other features in the same frame.
Abstract: In this work, we define two new and important qualities of features which expand upon the usual local feature information. Spatio-temporal consistency is a quality of a feature that quantifies how consistently a feature has been tracked in prior frames and how smooth its motion was over prior frames. Distributivity is a quality of a feature that quantifies physical distance (in number of pixels) from other features in the same frame.
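
The two qualities can be made concrete with a small sketch like the one below, which scores distributivity as the mean pixel distance to the other features in a frame and spatio-temporal consistency from track length and motion smoothness. The exact formulas are assumptions; only the quantities they measure follow the abstract.

```python
# Minimal sketch of the two feature qualities; the formulas are assumptions.
import numpy as np

def distributivity(positions, i):
    """Mean pixel distance from feature i to the other features in the same frame."""
    p = positions[i]
    others = np.delete(positions, i, axis=0)
    return float(np.mean(np.linalg.norm(others - p, axis=1)))

def spatio_temporal_consistency(track, max_len=10):
    """Combine how long the feature has been tracked with how smooth its motion
    was over prior frames (low acceleration => smooth)."""
    track = np.asarray(track, dtype=float)          # (frames, 2) positions
    length_term = min(len(track), max_len) / max_len
    if len(track) < 3:
        return length_term
    accel = np.diff(track, n=2, axis=0)             # frame-to-frame acceleration
    smooth_term = 1.0 / (1.0 + np.mean(np.linalg.norm(accel, axis=1)))
    return length_term * smooth_term

positions = np.array([[10, 12], [200, 40], [95, 180], [300, 310]], dtype=float)
track = [[10, 10], [10, 11], [10, 12], [11, 12]]     # one feature's history
print("distributivity of feature 0:", distributivity(positions, 0))
print("consistency of feature 0:  ", spatio_temporal_consistency(track))
```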

4 citations


01 Jan 2008
TL;DR: Privacy Enhancement via Adaptive Cryptographic Embedding is a method utilizing encryption techniques to improve privacy while allowing security applications to continue to use much of the data in context, and allowing full access only with possession of a decryption key.
Abstract: Significant research progress has been made in electronic surveillance, signals intelligence, and biometric identification that has increased the deployment and effectiveness of these technologies. For many, domestic video surveillance cameras, wiretapping and biometrics epitomize the (misperceived) “inherent” tradeoff between security and privacy, with staunch defenders of these technologies promoting them as indispensable tools for security and equally vocal groups berating them as an ineffective siege. Congress, over the past thirty years, has enacted a slew of legislation to protect constitutional rights to privacy and speech, and other laws to provide for enhanced national security, yet little has been done to address, technologically, the balance between liberty and security. While the balance of liberty and security is important, it is equally important to note that this is not a zero-sum game or an inherent tradeoff. This paper presents a method that demonstrates the adaptation and application of cryptographic ideas to sculpt technological approaches that can move the “balance point” and provide simultaneous improvements to both security and privacy. Privacy Enhancement via Adaptive Cryptographic Embedding (PEACE) is a method utilizing encryption techniques to improve privacy while allowing security applications to continue to use much of the data in context, and allowing full access (i.e. violation of privacy) only with possession of a decryption key. We introduce the application of PEACE in three areas: video surveillance, wiretapping, and biometrics. We present an in-depth case study for “warrantless” wiretapping, showing the details necessary for a new tool that protects both our country and the privacy of our citizens.
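
A toy sketch of the core PEACE idea, keeping contextual data usable while encrypting only the privacy-sensitive payload, might look as follows. It uses the third-party cryptography package's Fernet primitive and invented field names purely for illustration; it is not the paper's embedding scheme for video, wiretaps, or biometrics.

```python
# Conceptual sketch only: scramble the privacy-sensitive part of a record, keep
# the rest usable in context, and allow full recovery only with the key.
# Requires the third-party "cryptography" package; field names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # held by the authority with a warrant
cipher = Fernet(key)

record = {
    "timestamp": "2008-03-16T14:02:00Z",        # stays in the clear for analytics
    "camera_id": "lobby-03",                    # stays in the clear
    "identity_region": b"<pixels of the face>", # privacy-sensitive payload
}

# Security applications see the context, but the identifying data is encrypted.
protected = dict(record, identity_region=cipher.encrypt(record["identity_region"]))

# Only a key holder can reverse the transformation (i.e., "violate" privacy).
recovered = cipher.decrypt(protected["identity_region"])
assert recovered == record["identity_region"]
```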

2 citations


Proceedings ArticleDOI
07 Jan 2008
TL;DR: A novel technique exploits the spectral response characteristics of a traditional sensor (i.e. CMOS or CCD) to utilize it as a low-cost spectrometer, using the raw Bayer pattern data from the sensor to estimate the brightness and wavelength of the measured light at a particular point.
Abstract: Modern spectrometer equipment tends to be expensive, thus increasing the cost of emerging systems that take advantage of spectral properties as part of their operation. This paper introduces a novel technique that exploits the spectral response characteristics of a traditional sensor (i.e. CMOS or CCD) to utilize it as a low-cost spectrometer. Using the raw Bayer pattern data from a sensor, we estimate the brightness and wavelength of the measured light at a particular point. We use this information to support wide dynamic range, high noise tolerance, and, if sampling takes place on a slope, sub-pixel resolution. Experimental results are provided for both simulation and real data. Further, we investigate the potential of this low-cost technology for spoof detection in biometric systems. Lastly, an actual hardware synthesis is conducted to show the ease with which this algorithm can be implemented on an FPGA.
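
A toy version of the wavelength-estimation idea is sketched below: given per-channel responses to (approximately) monochromatic light, brightness is taken from the summed raw values and wavelength from the best-matching channel ratios. The Gaussian response curves are synthetic stand-ins for a real sensor's measured spectral response, and the estimator is an assumption rather than the paper's algorithm.

```python
# Toy sketch of using per-channel Bayer responses as a crude spectrometer.
# The Gaussian response curves are synthetic stand-ins for measured sensor data.
import numpy as np

wavelengths = np.arange(400, 701)                    # nm
def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Assumed channel responses to monochromatic light (normalized).
response = np.stack([gaussian(600, 50),              # R
                     gaussian(540, 50),              # G
                     gaussian(460, 50)])             # B

def estimate(raw_rgb):
    """Estimate (brightness, wavelength) from raw Bayer channel values by finding
    the monochromatic wavelength whose channel ratios best match the observation."""
    raw = np.asarray(raw_rgb, dtype=float)
    brightness = raw.sum()
    obs = raw / (brightness + 1e-12)                 # normalize to ratios
    ref = response / response.sum(axis=0, keepdims=True)
    err = np.linalg.norm(ref - obs[:, None], axis=0)
    return brightness, wavelengths[np.argmin(err)]

# Simulate a 520 nm source and recover its wavelength from channel ratios.
idx = np.searchsorted(wavelengths, 520)
measured = 1000.0 * response[:, idx]
print(estimate(measured))                            # ~ (brightness, 520)
```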

2 citations