
Showing papers by "Terrance E. Boult" published in 2007


Proceedings ArticleDOI
TL;DR: A security analysis of leading privacy enhanced technologies for biometrics including biometric fuzzy vaults (BFV) and biometric encryption (BE) and a discussion of the requirements for an architecture to address the privacy and security requirements.
Abstract: This paper is a security analysis of leading privacy enhanced technologies (PETs) for biometrics including biometric fuzzy vaults (BFV) and biometric encryption (BE). The lack of published attacks, combined with various "proven" security properties has been taken by some as a sign that these technologies are ready for deployment. While some of the existing BFV and BE techniques do have "proven" security properties, those proofs make assumptions that may not, in general, be valid for biometric systems. We briefly review some of the other known attacks against BFV and BE techniques. We introduce three disturbing classes of attacks against PET techniques including attack via record multiplicity, surreptitious key-inversion attack, and novel blended substitution attacks. The paper ends with a discussion of the requirements for an architecture to address the privacy and security requirements.
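
As a rough illustration of the attack-via-record-multiplicity idea discussed in the paper (a sketch under simplifying assumptions, not the paper's construction): if the same biometric is enrolled in two fuzzy vaults with independently drawn chaff, intersecting the two vaults tends to isolate the genuine points. The vault layout and parameters below are hypothetical.

```python
import random

def make_vault(genuine, n_chaff, coord_range=500):
    """Toy fuzzy vault: genuine minutiae hidden among random chaff points.
    (Illustrative only; a real vault also binds a secret via polynomial
    evaluation over the genuine points.)"""
    chaff = {(random.randrange(coord_range), random.randrange(coord_range))
             for _ in range(n_chaff)}
    return set(genuine) | chaff

# The same finger enrolled twice, with independently drawn chaff each time.
genuine = {(12, 40), (77, 203), (150, 98), (310, 410), (222, 17)}
vault_a = make_vault(genuine, n_chaff=300)
vault_b = make_vault(genuine, n_chaff=300)

# Attack via record multiplicity: points common to both vaults are almost
# certainly the genuine ones, since independent chaff rarely collides.
recovered = vault_a & vault_b
print(sorted(recovered))
```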

315 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: This paper adapts a recently introduced approach that separates each datum into two fields, one of which is encoded and one of which is left to support the approximate matching, and uses it to enhance an existing fingerprint system.
Abstract: This paper reviews the biometric dilemma, the pending threat that may limit the long-term value of biometrics in security applications. Unlike passwords, if a biometric database is ever compromised or improperly shared, the underlying biometric data cannot be changed. The concept of revocable or cancelable biometric-based identity tokens (biotokens), if properly implemented, can provide significant enhancements in both privacy and security and address the biometric dilemma. The key to effective revocable biotokens is the need to support the highly accurate approximate matching needed in any biometric system as well as protecting the privacy/security of the underlying data. We briefly review prior work and show why it is insufficient in both accuracy and security. This paper adapts a recently introduced approach that separates each datum into two fields, one of which is encoded and one of which is left to support the approximate matching. Previously applied to faces, this approach is used here to enhance an existing fingerprint system. Unlike previous work in privacy-enhanced biometrics, our approach improves the accuracy of the underlying system! The security analysis of these biotokens includes addressing the critical issue of protection of small fields. The resulting algorithm is tested on three different fingerprint verification challenge datasets and shows an average decrease in the Equal Error Rate of over 30% - providing improved security and improved privacy.
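
A minimal sketch of the general two-field idea (one stable field encoded irreversibly, one residual left in the clear for approximate matching). The quantization scale, hashing, and key handling are assumptions for illustration and do not reproduce the paper's actual transform, which also handles cell-boundary cases that this naive split does not.

```python
import hashlib

def split_and_encode(value, scale=10.0, key=b"user-specific-key"):
    """Split a biometric measurement into a quotient field (encoded with a
    keyed one-way hash, so it can be revoked by changing the key) and a
    residual field left in the clear for approximate matching."""
    q = int(value // scale)              # stable part: encoded irreversibly
    r = value - q * scale                # residual: supports approximate match
    w = hashlib.sha256(key + q.to_bytes(8, "big", signed=True)).hexdigest()
    return w, r

def match(field_a, field_b, tolerance=2.0):
    """Genuine comparisons must agree exactly on the encoded field and be
    close on the residual; impostors rarely satisfy both."""
    wa, ra = field_a
    wb, rb = field_b
    return wa == wb and abs(ra - rb) <= tolerance

enrolled = split_and_encode(137.4)
probe    = split_and_encode(138.1)   # same finger, small measurement noise
print(match(enrolled, probe))        # True while the noise stays within a cell
```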

189 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: It is demonstrated how the practical problem of "privacy invasion" can be successfully addressed through DSP hardware in terms of smallness in size and cost optimization.
Abstract: Considerable research work has been done in the area of surveillance and biometrics, where the goals have always been high performance, robustness in security and cost optimization. With the emergence of more intelligent and complex video surveillance mechanisms, the issue of "privacy invasion" has been looming large. Very little investment or effort has gone into looking after this issue in an efficient and cost-effective way. The process of PICO (privacy through invertible cryptographic obscuration) is a way of using cryptographic techniques and combining them with image processing and video surveillance to provide a practical solution to the critical issue of "privacy invasion". This paper presents the idea and example of a realtime embedded application of the PICO technique, using uCLinux on the tiny Blackfin DSP architecture, along with a small Omnivision camera. It demonstrates how the practical problem of "privacy invasion" can be successfully addressed through DSP hardware in terms of smallness in size and cost optimization. After review of previous applications of "privacy protection", and system components, we discuss the "embedded jpeg-space" detection of regions of interest and the real time application of encryption techniques to improve privacy while allowing general surveillance to continue. The resulting approach permits full access (violation of privacy) only by access to the private key to recover the decryption key, thereby striking a fine trade-off among privacy, security, cost and space.
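
A minimal sketch of the general PICO idea, assuming a frame in memory and a detected region of interest: encrypt only that region so general surveillance continues while the obscured data stays recoverable by an authorized key holder. The use of Fernet for the symmetric layer is an assumption for illustration, and the public-key wrapping of the session key, as well as the embedded JPEG-space processing on the Blackfin, are omitted.

```python
import numpy as np
from cryptography.fernet import Fernet

def obscure_region(frame, box, key):
    """Encrypt the pixels inside `box` and blank them in the visible frame,
    so surveillance continues while the region stays recoverable only with
    the key.  A sketch of the PICO idea, not the paper's implementation."""
    y0, y1, x0, x1 = box
    roi = frame[y0:y1, x0:x1].copy()
    token = Fernet(key).encrypt(roi.tobytes())     # invertible obscuration
    frame[y0:y1, x0:x1] = 0                        # what the operator sees
    return frame, token

def recover_region(frame, box, token, key):
    """Decrypt and restore the region (i.e. authorized access only)."""
    y0, y1, x0, x1 = box
    shape = (y1 - y0, x1 - x0, frame.shape[2])
    roi = np.frombuffer(Fernet(key).decrypt(token), dtype=frame.dtype)
    frame[y0:y1, x0:x1] = roi.reshape(shape)
    return frame

key = Fernet.generate_key()                        # would be wrapped with a public key
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
face_box = (60, 120, 100, 180)                     # hypothetical detector output
frame, token = obscure_region(frame, face_box, key)
frame = recover_region(frame, face_box, token, key)
```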

104 citations


Journal ArticleDOI
TL;DR: The concept of hand geometry is extended from a geometrical size-based technique that requires physical hand constraints to a projective invariant- based technique that allows free hand motion.
Abstract: Our research focuses on finding mathematical representations of biometric features that are not only distinctive, but also invariant to projective transformations. We have chosen hand geometry technology to work with, because it has wide public awareness and acceptance and, most importantly, large room for improvement. Unlike traditional hand geometry technologies, the hand descriptor in our hand geometry system is constructed using projective-invariant features. Hand identification can be accomplished from a single view of a hand regardless of the viewing angle. The noise immunity and the discriminability possessed by a hand feature vector using different types of projective invariants are studied. We have found an appropriate symmetric polynomial representation of the hand features with which both the noise immunity and discriminability of a hand feature vector are considerably improved. Experimental results show that the system achieves an equal error rate (EER) of 2.1% with a 5-D feature vector on a database of 52 hand images. The EER reduces to 0.00% when the feature vector dimension increases to 18. In this paper, we extend the concept of hand geometry from a geometrical size-based technique that requires physical hand constraints to a projective invariant-based technique that allows free hand motion.
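
For a concrete sense of what a projective invariant is, the classical example is the cross-ratio of four collinear points, which is unchanged by any plane projective transformation. The short check below uses hypothetical point and homography values; the paper's actual descriptor builds symmetric polynomials over such invariants, which is not reproduced here.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC*BD)/(BC*AD) of four collinear 2-D points, computed
    from signed distances along their common line."""
    direction = (d - a) / np.linalg.norm(d - a)
    t = lambda p: np.dot(p - a, direction)          # 1-D coordinate on the line
    ac, bd = t(c) - t(a), t(d) - t(b)
    bc, ad = t(c) - t(b), t(d) - t(a)
    return (ac * bd) / (bc * ad)

def project(H, pts):
    """Apply a plane projective transform (homography) to 2-D points."""
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Four collinear points (e.g. landmarks along a finger axis).
pts = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [7.0, 7.0]])
H = np.array([[1.2, 0.1, 5.0],
              [0.3, 0.9, -2.0],
              [1e-3, 2e-3, 1.0]])                   # arbitrary homography
print(cross_ratio(*pts))                            # same value before ...
print(cross_ratio(*project(H, pts)))                # ... and after the projection
```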

64 citations


Proceedings ArticleDOI
16 Apr 2007
TL;DR: A ZigBee-based power-conserving network design in which an application-level multi-mode scheduler brings the whole network up and down for all nodes, letting every node deep sleep and thereby increasing per-node lifetime and overall network lifetime.
Abstract: This paper addresses power reduction in wireless sensor networks (WSNs), proposing an application-level solution for the IEEE 802.15.4 compliant ZigBee protocol. The ZigBee protocol supports the least power-consuming 'sleep' mode of operation only for end-nodes that do not route packets. This significantly limits ZigBee's WSN applications. This paper presents a ZigBee-based power-conserving network design in which a multi-mode scheduler can be used at the application level for all network nodes, bringing the whole network up and down. This allows all nodes to 'deep sleep', hence increasing the per-node lifetime and leading to an increased overall network lifetime. In network simulations, a rate monotonic-based scheduler schedules operating times for the various nodes in the entire network, setting the devices to the lowest-power operating mode - sleep mode - at all other times when no operation needs to be performed. When the network wakes up, all nodes necessary for a particular scheduled transmission awake and rebuild the ZigBee network. The paper presents power savings estimates from simulations showing significant advantages over standard ZigBee.
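
A minimal sketch of the application-level wake/sleep idea: the union of the periodic task windows determines when the whole network must be awake, and the resulting duty cycle gives a rough power-savings estimate. Task periods, durations, and current draws are hypothetical, and the rate-monotonic prioritization and ZigBee network rebuild step from the paper are not modeled.

```python
# Periodic sensing tasks: (name, period_s, active_s); values are hypothetical.
tasks = [("door-sensor", 30, 1), ("temperature", 60, 2), ("camera", 300, 5)]

def awake_seconds(tasks, horizon_s):
    """Collect the seconds during which the whole network must be up; every
    other second is deep sleep for all nodes (the net is brought up and down
    together, rather than only letting non-routing end-nodes sleep)."""
    awake = set()
    for _, period, active in tasks:
        for release in range(0, horizon_s, period):
            awake.update(range(release, min(release + active, horizon_s)))
    return awake

horizon = 600
duty_cycle = len(awake_seconds(tasks, horizon)) / horizon

# Rough average-current estimate (hypothetical radio currents).
I_active_mA, I_sleep_mA = 30.0, 0.003
avg_mA = duty_cycle * I_active_mA + (1 - duty_cycle) * I_sleep_mA
print(f"duty cycle {duty_cycle:.1%}, average current ~{avg_mA:.2f} mA "
      f"vs {I_active_mA} mA always-on")
```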

16 citations


Proceedings ArticleDOI
21 Feb 2007
TL;DR: Adaboost is enhanced to improve both the component-based face detector running in each channel and the training of a channel reliability measure, which is learned from examples to evaluate the inherent quality of each channel's recognition.
Abstract: Single-camera face recognition has severe limitations when the subject is not cooperative, or there are pose changes and different illumination conditions. Face recognition using multiple synchronized cameras is proposed to overcome these limitations. We introduce a reliability measure trained from examples to evaluate the inherent quality of channel recognition. The recognition from the channel predicted to be the most reliable is selected as the final recognition result. In this paper, we enhance Adaboost to improve the component-based face detector running in each channel as well as the channel reliability measure training. Effective features are designed to train the channel reliability measure using data from both face detection and recognition. The recognition rate is far better than that of either single channel, and consistently better than common classifier fusion rules.
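
A minimal sketch of the selection step, assuming each camera channel reports a recognized identity plus features for a trained reliability measure: the result from the channel predicted to be most reliable wins. The feature names and the placeholder scoring function are illustrative stand-ins, not the paper's Adaboost-trained measure.

```python
def select_channel(channel_outputs, reliability_model):
    """Pick the recognition result from the camera channel predicted to be
    most reliable.  `reliability_model` stands in for the trained measure."""
    scored = [(reliability_model(feats), identity)
              for identity, feats in channel_outputs]
    return max(scored)[1]

# Hypothetical per-channel outputs: (recognized identity, reliability features).
channels = [("subject_12", {"det_conf": 0.61, "match_margin": 0.10}),
            ("subject_07", {"det_conf": 0.93, "match_margin": 0.35})]

# Placeholder reliability measure (a trained classifier in the paper).
toy_model = lambda f: 0.5 * f["det_conf"] + 0.5 * f["match_margin"]
print(select_channel(channels, toy_model))          # -> subject_07
```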

12 citations


Journal ArticleDOI
TL;DR: Simulation results show that HPPD can significantly mitigate the congestion by reducing the retransmission overhead of dropped packets and achieve the proportional loss rate differentiation at the same time.

6 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: It is proved that when spatial cohesion is assumed for targets, a better classification result than the "optimal" single threshold classification can be achieved.
Abstract: The concept of the Bayesian optimal single threshold is a well established and widely used classification technique. In this paper, we prove that when spatial cohesion is assumed for targets, a better classification result than the "optimal" single threshold classification can be achieved. Under the assumption of spatial cohesion and certain prior knowledge about the target and background, the method can be further simplified as dual threshold classification. In core-dual threshold classification, spatial cohesion within the target core allows "continuation", linking values that fall between the two thresholds to the target core; classical Bayesian classification is employed beyond the dual thresholds. The core-dual threshold algorithm can be built into a Markov random field (MRF) model. From this MRF model, the dual thresholds can be obtained and optimal classification can be achieved. In some practical applications, a simple method called symmetric subtraction may be employed to determine effective dual thresholds in real time. Given dual thresholds, the quasi-connected component algorithm is shown to be a deterministic implementation of the MRF core-dual threshold model, combining the dual thresholds, extended neighborhoods and efficient connected component computation.
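
A simplified sketch of the dual-threshold linking idea (a hysteresis-style approximation, not the paper's MRF or quasi-connected-component implementation): pixels above the high threshold seed target cores, and pixels between the two thresholds are kept only when spatially connected to a core. The thresholds and the synthetic image are arbitrary.

```python
import numpy as np
from scipy import ndimage

def dual_threshold_segment(image, t_low, t_high):
    """Core-dual-threshold style classification: strong pixels (> t_high)
    seed target cores; weak pixels between the two thresholds are linked to
    a target only when they are connected to a core."""
    strong = image > t_high
    candidate = image > t_low                        # strong plus "between" pixels
    labels, n = ndimage.label(candidate)             # 4-connectivity by default
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True           # components containing a core
    keep[0] = False                                  # background label
    return keep[labels]

# Synthetic frame: noise plus one bright 8x8 target blob in the center.
img = np.random.rand(64, 64) + np.pad(np.full((8, 8), 1.5), 28)
mask = dual_threshold_segment(img, t_low=0.8, t_high=1.6)
print(mask.sum(), "target pixels")
```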

6 citations



Proceedings ArticleDOI
17 Jun 2007
TL;DR: This paper focuses on three significant inherent limitations of current surveillance systems: the effective accuracy at relevant distances, the ability to define and visualize the events on a large scale, and the usability of the system.
Abstract: To be commercially viable, multi-modal surveillance systems need to be reliable and robust and must be able to work at night (maybe the most critical time). They must handle small and non-distinctive targets that are as far away as possible. As with other commercial applications, end users must be able to operate the systems properly. In this paper, we focus on three significant inherent limitations of current surveillance systems: the effective accuracy at relevant distances, the ability to define and visualize events on a large scale, and the usability of the system.

3 citations


Proceedings ArticleDOI
27 Apr 2007
TL;DR: An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles that avoids the high life-cycle cost of infrared cameras and image intensifiers.
Abstract: An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

Proceedings ArticleDOI
17 Jun 2007
TL;DR: A new evaluation methodology is presented that improves the accuracy of the variance estimator via the discovery of false assumptions about the homogeneity of cofactors - i.e., when the data is not "well mixed".
Abstract: Measuring system performance seems conceptually straightforward. However, the interpretation of the results and predicting future performance remain exceptional challenges in system evaluation. Robust experimental design is critical in evaluation, but there have been very few techniques to check designs for either overlooked associations or weak assumptions. For biometric and vision system evaluation, the complexity of the systems makes a thorough exploration of the problem space impossible - this lack of verifiability in experimental design is a serious issue. In this paper, we present a new evaluation methodology that improves the accuracy of the variance estimator via the discovery of false assumptions about the homogeneity of cofactors - i.e., when the data is not "well mixed". The new methodology is then applied in the context of a biometric system evaluation with highly influential cofactors.
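
One way to picture the "not well mixed" problem (a toy numerical illustration under assumed score distributions, not the paper's methodology): when an influential cofactor such as the sensor is unevenly represented, a variance estimate that pools all scores as if they were homogeneous can be far more optimistic than one that treats the cofactor groups as the sampling unit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical match scores collected under two cofactor settings (e.g. two
# sensors) with different behaviour; the group sizes are unbalanced.
group_a = rng.normal(0.80, 0.02, size=400)    # sensor A scores
group_b = rng.normal(0.70, 0.02, size=100)    # sensor B scores
scores  = np.concatenate([group_a, group_b])

# Naive i.i.d. standard error of the overall mean: assumes "well mixed" data.
se_naive = scores.std(ddof=1) / np.sqrt(len(scores))

# Cofactor-aware estimate: treat the cofactor groups as the sampling unit.
group_means = np.array([group_a.mean(), group_b.mean()])
se_grouped = group_means.std(ddof=1) / np.sqrt(len(group_means))

print(f"naive SE   = {se_naive:.4f}")
print(f"grouped SE = {se_grouped:.4f}   (cofactor heterogeneity inflates uncertainty)")
```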

01 Jan 2007
TL;DR: It is found that temporal consistency provides better match correctness than non-temporally consistent feature matches, regardless of the local feature detector, and SIFT's match correctness ratio is improved by using different distance measures besides the standard Euclidean distance.
Abstract: Feature detection and the matching and tracking of those features are common tasks in computer vision systems. Qualities like invariance to scale, rotation, and other transforms, and robustness to varying lighting conditions have all been identified as important feature properties. In this work, we define three new and important feature properties which expand upon the usual local feature information: temporal consistency, a distributivity quality, and a distinctiveness quality. Temporal consistency is a quality of a feature that quantifies how consistently the feature has been tracked in prior frames; it can be a simple database field that stores the number of frames tracked or can include a more sophisticated smoothness-of-motion term in its consistency measure. Distributivity is a quality of a feature that quantifies physical distance (number of pixels) from other features in the same image (frame). Distinctiveness is a measure of how unique a feature is among other features in the same image (frame). In this work we compare and extend two different kinds of local feature detectors, Shi's and Tomasi's [ST94] Good Features To Track (GFTT) and Lowe's Scale Invariant Feature Transform (SIFT) [Low04], which represent two of the most widely used and well known approaches to feature detection. We use them as the local feature information, and add the new properties to improve overall performance. With this we define two new "types" of better features to track: temporally consistent features and distributive features. We compare SIFT, GFTT, combined SIFT/GFTT, temporally consistent SIFT, temporally consistent GFTT, and temporally consistent combined SIFT/GFTT for matching accuracy in several video sequences. We find that temporal consistency provides better match correctness than non-temporally consistent feature matches, regardless of the local feature detector. We utilize distributed GFTT with optical flow to help build what we call a tubular mosaic. We show that distributed features allow a more accurate mosaic to be built than non-distributed features in scenes with low contrast. We improve SIFT's match correctness ratio - where SIFT determines how alike two features are - by using distance measures other than the standard Euclidean distance. Euclidean distance is optimal when any noise in the data is Gaussian; however, we expect that there are outliers in the data, not just noise. We therefore chose to experiment with robust distance measures: a simple robust M-estimator distance measure as well as an L1-norm distance measure were implemented, tested and shown to improve the percentage of correct matches for SIFT. We also began initial work exploring contrast enhancement, which shows some promise of finding features in low contrast areas and matching them with reasonable accuracy. Temporal consistency and distributivity are shown to be useful qualities in this work and should enhance a large class of feature detectors, which, in turn, should expand their application. As an example, Tubular Mosaics is described and shown to be improved by the results of this research.
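
A small sketch of the distance-measure comparison described for SIFT matching: nearest-neighbour matching with Lowe's ratio test under Euclidean versus L1 distance. The descriptors below are random stand-ins for illustration; in practice they would come from a SIFT implementation, and the M-estimator variant is not shown.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, metric="l2", ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test under a chosen
    distance.  'l2' is the standard Euclidean metric; 'l1' is one of the more
    outlier-tolerant alternatives explored in the thesis."""
    matches = []
    for i, d in enumerate(desc_a):
        if metric == "l1":
            dists = np.abs(desc_b - d).sum(axis=1)
        else:
            dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:              # distinctiveness (ratio) test
            matches.append((i, j))
    return matches

rng = np.random.default_rng(1)
desc_a = rng.random((50, 128))                       # stand-ins for 128-D SIFT descriptors
desc_b = desc_a + rng.normal(0, 0.02, desc_a.shape)  # the same features, perturbed
print(len(match_descriptors(desc_a, desc_b, "l2")),
      len(match_descriptors(desc_a, desc_b, "l1")))
```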