
Showing papers in "International Journal of Central Banking in 2014"


Proceedings ArticleDOI
TL;DR: In this paper, a multi-view face detector using aggregate channel features is proposed, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form.
Abstract: Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequent works have improved upon it with more powerful learning algorithms, the feature representations used for face detection still cannot effectively and efficiently handle faces with large appearance variance in the wild. To address this bottleneck, we bring the concept of channel features into the face detection domain; channel features extend the image channel to diverse types such as gradient magnitude and oriented gradient histograms, and therefore encode rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multi-scale version of the features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines of the Viola-Jones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on the AFW and FDDB test sets, while running at 42 FPS.

288 citations
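The paper's exact channel design is not reproduced here; as an illustration of the general idea (gradient-magnitude plus oriented-gradient channels, aggregated by block sums), a minimal NumPy sketch follows. The function name, block size, and number of orientation bins are my own illustrative choices:

```python
import numpy as np

def aggregate_channel_features(img, n_orient=6, block=4):
    # img: 2-D grayscale array. Compute gradient channels, then
    # aggregate each channel by summing over non-overlapping blocks.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    bin_idx = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
    # Channels: gradient magnitude + one magnitude map per orientation bin.
    channels = [mag] + [np.where(bin_idx == b, mag, 0.0) for b in range(n_orient)]
    h, w = img.shape
    h, w = h - h % block, w - w % block
    feats = []
    for ch in channels:
        c = ch[:h, :w].reshape(h // block, block, w // block, block)
        feats.append(c.sum(axis=(1, 3)).ravel())   # block-sum aggregation
    return np.concatenate(feats)
```

For a 16x16 input with these defaults, this yields 7 channels of 4x4 aggregated cells, i.e. a 112-dimensional vector; a boosted classifier would then be trained over such features.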


Proceedings ArticleDOI
TL;DR: Once upon a time there was a criminal; he was reading his e-mail when a banner caught his attention: low cost flights for the destination of his dreams, but he suddenly realized that, being wanted by the police, he could not use his passport without being arrested.
Abstract: Once upon a time there was a criminal; he was reading his e-mail when a banner caught his attention: low-cost flights to the destination of his dreams! He had already started to book the trip when he suddenly realized that, being wanted by the police, he could not use his passport without being arrested. What to do? He could not miss that opportunity, so he called a good friend and they started to think of a possible solution. Do you want to know if they succeeded? Read the rest of the paper and find out.

239 citations


Proceedings ArticleDOI
TL;DR: It is concluded that the large-scale unconstrained face recognition problem is still largely unresolved, thus further attention and effort is needed in developing effective feature representations and learning algorithms.
Abstract: Many efforts have been made in recent years to tackle the unconstrained face recognition challenge. As the benchmark for this challenge, the Labeled Faces in the Wild (LFW) database has been widely used. However, the standard LFW protocol is very limited, with only 3,000 genuine and 3,000 impostor matches for classification. Today, 97% accuracy can be achieved on this benchmark, leaving very limited room for algorithm development. Moreover, we argue that this accuracy may be too optimistic because the underlying false accept rate may still be high (e.g., 3%). Furthermore, performance evaluation at low FARs is not statistically sound under the standard protocol due to the limited number of impostor matches. We therefore develop a new benchmark protocol to fully exploit all 13,233 LFW face images for large-scale unconstrained face recognition evaluation under both verification and open-set identification scenarios, with a focus on low FARs. Based on the new benchmark, we evaluate 21 face recognition approaches by combining 3 kinds of features and 7 learning algorithms. The benchmark results show that the best algorithm achieves a 41.66% verification rate at FAR = 0.1% and an 18.07% open-set identification rate at rank 1 and FAR = 1%. Accordingly, we conclude that the large-scale unconstrained face recognition problem remains largely unresolved, and further attention and effort are needed in developing effective feature representations and learning algorithms. We release a benchmark tool to advance research in this field.

147 citations
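The protocol's evaluation fixes a threshold from the impostor score distribution and measures how many genuine matches clear it. A minimal sketch of computing the verification rate at a target FAR (function name and quantile-based thresholding are my own illustrative choices):

```python
import numpy as np

def verification_rate_at_far(genuine, impostor, far=0.001):
    # Threshold set so that approximately `far` of impostor scores
    # are accepted (score >= threshold); VR is the fraction of
    # genuine scores that exceed that threshold.
    thr = np.quantile(np.asarray(impostor, dtype=float), 1.0 - far)
    return float(np.mean(np.asarray(genuine, dtype=float) >= thr))
```

With only 3,000 impostor pairs, the threshold at FAR = 0.1% rests on about 3 scores, which is the statistical-soundness concern the abstract raises; the extended protocol multiplies the impostor pairs to stabilize this estimate.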


Posted Content
Michael Ehrmann1
TL;DR: The author argues that, while inflation targeting was originally introduced to bring inflation down, IT central banks in the current environment of persistently weak inflation in many advanced economies must now bring inflation up to target.
Abstract: Inflation targeting (IT) had originally been introduced as a device to bring inflation down and stabilize it at low levels. Given the current environment of persistently weak inflation in many advanced economies, IT central banks must now bring inflation up to target.

102 citations


Proceedings ArticleDOI
TL;DR: A new invariant descriptor of fingerprint ridge texture called histograms of invariant gradients (HIG) is proposed, designed to preserve robustness to variations in gradient positions.
Abstract: Security of fingerprint authentication systems remains threatened by the presentation of spoof artifacts. Most current mitigation approaches rely upon fingerprint liveness detection as the main anti-spoofing mechanism. However, liveness detection algorithms are not robust to sensor variations. In other words, typical liveness detection algorithms need to be retrained and adapted to each and every sensor used for fingerprint capture. In this paper, inspired by popular invariant feature descriptors such as histograms of oriented gradients (HOG) and the scale-invariant feature transform (SIFT), we propose a new invariant descriptor of fingerprint ridge texture called histograms of invariant gradients (HIG). The proposed descriptor is designed to preserve robustness to variations in gradient positions. Spoofed fingerprints are detected using multiple histograms of invariant gradients computed from spatial neighborhoods within the fingerprint. Results show that the proposed method achieves an average accuracy comparable to the best algorithms of the Fingerprint Liveness Detection Competition 2013, while being applicable with no change to multiple acquisition sensors.

90 citations
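The HIG descriptor itself is not specified in this listing; as an illustration of the underlying building block (a magnitude-weighted orientation histogram over a spatial neighborhood, which discards exact gradient positions), here is a minimal NumPy sketch with my own choice of bin count and normalization:

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    # Magnitude-weighted histogram of unsigned gradient orientations.
    # Pooling into a histogram makes the descriptor insensitive to
    # where inside the patch each gradient occurs.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A full descriptor would concatenate such histograms from several neighborhoods of the fingerprint and feed them to a live/spoof classifier.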


Proceedings ArticleDOI
TL;DR: This paper uses modern facial analysis technologies to determine the Gender, Age, and Race attributes of facial images, preserves these attributes by seeking corresponding representatives constructed from a gallery dataset, and shows that the proposed approach outperforms previous solutions in preserving data utility while achieving a similar degree of privacy protection.
Abstract: Face de-identification, the process of preventing a person's identity from being connected with personal information, is an important privacy-protection tool in multimedia data processing. With the advance of face detection algorithms, a natural solution is to blur or block facial regions in visual data so as to obscure identity information. Such solutions, however, often destroy privacy-insensitive information and hence limit the data utility, e.g., gender and age information. In this paper we address the de-identification problem by proposing a simple yet effective framework, named GARP-Face, that balances privacy protection and utility preservation in face de-identification. In particular, we use modern facial analysis technologies to determine the Gender, Age, and Race attributes of facial images, and Preserve these attributes by seeking corresponding representatives constructed from a gallery dataset. We evaluate the proposed approach on the MORPH dataset in comparison with several state-of-the-art face de-identification solutions. The results show that our method outperforms previous solutions in preserving data utility while achieving a similar degree of privacy protection.

88 citations


Proceedings ArticleDOI
Yu Zhong1, Yunbin Deng1
TL;DR: A novel gait representation for accelerometer and gyroscope data is proposed which is both sensor-orientation-invariant and highly discriminative to enable high-performance gait biometrics for real-world applications.
Abstract: Accelerometers and gyroscopes embedded in mobile devices have shown great potential for non-obtrusive gait biometrics by directly capturing a user's characteristic locomotion. Despite the success in gait analysis under controlled experimental settings using these sensors, their performance in realistic scenarios is unsatisfactory due to data dependency on sensor placement. In practice, the placement of mobile devices is unconstrained. In this paper, we propose a novel gait representation for accelerometer and gyroscope data which is both sensor-orientation-invariant and highly discriminative, to enable high-performance gait biometrics for real-world applications. We also adopt the i-vector paradigm, a state-of-the-art machine learning technique widely used for speaker recognition, to extract gait identities using the proposed gait representation. Performance studies using both the naturalistic McGill University gait dataset and the Osaka University gait dataset containing 744 subjects show the clear superiority of this novel gait biometrics approach over existing methods.

86 citations
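The paper's representation is not detailed in this listing; the simplest example of a sensor-orientation-invariant signal, shown below as an illustrative sketch, is the per-sample magnitude of the 3-axis accelerometer vector, which is unchanged by any rotation of the device frame (function name is my own):

```python
import numpy as np

def orientation_invariant_series(acc_xyz):
    # acc_xyz: (T, 3) array of accelerometer samples.
    # The Euclidean norm of each sample is invariant to rotations of
    # the sensor frame, so device orientation no longer matters.
    return np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
```

Identity features (e.g. i-vectors, as the paper does) can then be extracted from this 1-D signal regardless of how the phone was carried.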


Posted Content
TL;DR: In this article, a methodology based on Furfine was developed to identify unsecured interbank money-market loans from transaction data of the most important euro processing payment system, TARGET2, for maturities ranging from one day (overnight) up to one year.
Abstract: This paper develops a methodology, based on Furfine (1999), to identify unsecured interbank money-market loans from transaction data of the most important euro processing payment system, TARGET2, for maturities ranging from one day (overnight) up to one year. The implementation has been verified with (i) interbank money-market transactions executed on the Italian trading platform e-MID and (ii) individual reporting by the EONIA panel banks. The type 2 (false negative) error for the best-performing algorithm setup is equal to 0.92 percent. The different stages of the global financial crisis and of the sovereign debt crises are clearly visible in the interbank money market, characterized by significant drops in turnover. We find aggregated interest rates very close to EONIA, but we observe high heterogeneity across countries and market participants.

77 citations
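The paper's exact filters and calibration are not given in this listing; the core Furfine-type idea is to pair a round-amount outgoing payment with a later reverse payment whose implied interest rate is plausible. A toy sketch under assumed parameters (the round-amount threshold, rate corridor, and 360-day money-market convention are illustrative choices, not the paper's settings):

```python
def furfine_match(payments, min_rate=0.0, max_rate=0.05, round_to=100_000.0):
    # payments: list of (date, sender, receiver, amount).
    # A candidate loan is a round-amount payment A -> B matched with a
    # later payment B -> A whose implied annualized rate is plausible.
    loans = []
    for d1, a, b, v in payments:
        if v % round_to:                     # loan legs are round amounts
            continue
        for d2, x, y, w in payments:
            days = (d2 - d1).days
            if (x, y) == (b, a) and days >= 1 and w > v:
                rate = (w / v - 1.0) * 360.0 / days   # ACT/360 convention
                if min_rate <= rate <= max_rate:
                    loans.append((a, b, v, d1, d2, rate))
    return loans
```

Real implementations additionally handle maturities up to a year, multiple candidate matches, and settlement-system specifics; the verification against e-MID and EONIA reporting described above is what disciplines those choices.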


Proceedings ArticleDOI
TL;DR: The goal for the Liveness Detection (LivDet) competitions is to compare software-based iris liveness detection methodologies using a standardized testing protocol and large quantities of spoof and live images.
Abstract: The use of an artificial replica of a biometric characteristic in an attempt to circumvent a system is an example of a biometric presentation attack. Liveness detection is one of the proposed countermeasures, and has been widely implemented in fingerprint and iris recognition systems in recent years to reduce the consequences of spoof attacks. The goal of the Liveness Detection (LivDet) competitions is to compare software-based iris liveness detection methodologies using a standardized testing protocol and large quantities of spoof and live images. Three submissions were received for Part 1 of the competition: the Biometric Recognition Group at Universidad Autonoma de Madrid, the University of Naples Federico II, and the Faculdade de Engenharia da Universidade do Porto. The best results across all three datasets were from Federico II, with a rate of falsely rejected live samples of 28.6% and a rate of falsely accepted fake samples of 5.7%.

71 citations


Proceedings ArticleDOI
TL;DR: A memorability-based frame selection algorithm is presented that enables automatic selection of memorable frames for facial feature extraction and matching, and achieves state-of-the-art performance at low false accept rates.
Abstract: Videos contain an ample amount of information in the form of frames that can be utilized for feature extraction and matching. However, not all frames contain face images that are "memorable" and useful. Therefore, utilizing all the frames available in a video for recognition does not necessarily improve performance but significantly increases computation time. In this research, we present a memorability-based frame selection algorithm that enables automatic selection of memorable frames for facial feature extraction and matching. A deep learning algorithm is then proposed that utilizes a stack of denoising autoencoders and deep Boltzmann machines to perform face recognition using the most memorable frames. The proposed algorithm, termed MDLFace, is evaluated on two publicly available video face databases, YouTube Faces and the Point and Shoot Challenge. The results show that the proposed algorithm achieves state-of-the-art performance at low false accept rates.

63 citations
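The memorability scorer itself is the paper's contribution and is not reproduced here; the selection step around it reduces to ranking frames by score and keeping the best few. A minimal sketch, with the scorer left as a caller-supplied function (names and the top-k policy are my own illustrative choices):

```python
def select_memorable_frames(frames, scorer, k=5):
    # Rank frames by a memorability score and keep the k highest,
    # returning their indices in temporal order so downstream feature
    # extraction sees frames in sequence.
    ranked = sorted(range(len(frames)), key=lambda i: scorer(frames[i]),
                    reverse=True)
    return sorted(ranked[:k])
```

Feature extraction and matching then run only on the selected frames, which is how the approach cuts computation without hurting accuracy.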


Journal Article
TL;DR: In this article, the effects of exogenous innovations to the balance sheet of the European Central Bank (ECB) since the start of the financial crisis have been investigated under a structural VAR framework, showing that an expansionary balance sheet shock stimulates bank lending, reduces interest rate spreads, leads to a depreciation of the euro, and has a positive impact on economic activity and inflation.
Abstract: We estimate the effects of exogenous innovations to the balance sheet of the ECB since the start of the financial crisis within a structural VAR framework. An expansionary balance sheet shock stimulates bank lending, reduces interest rate spreads, leads to a depreciation of the euro, and has a positive impact on economic activity and inflation. A counterfactual analysis reveals that the macroeconomic consequences of the balance sheet policies in the aftermath of the crisis have been substantial. For example, euro-area output and inflation would have been more than 1 percent lower in 2012 without the three-year LTRO programs. Finally, we find that the effects on output turn out to be smaller in the member countries that have been more affected by the financial crisis, in particular those countries where the banking system is less well capitalized.

Proceedings ArticleDOI
TL;DR: This work designs a scheme for automatic adaptation of a liveness detector to novel spoof materials encountered during the operational phase; experiments suggest a 62% increase in the error rate of existing liveness detectors when tested using new spoof materials, and up to 46% improvement in liveness detection performance across spoof materials when the proposed adaptive approach is used.
Abstract: A fingerprint liveness detector is a pattern classifier that is used to distinguish a live finger from a fake (spoof) one in the context of an automated fingerprint recognition system. Most liveness detectors are learning-based and rely on a set of training images. Consequently, the performance of a liveness detector significantly degrades upon encountering spoofs fabricated using new materials not used during the training stage. To mitigate the security risk posed by new spoofs, it is necessary to automatically adapt the liveness detector to new spoofing materials. The aim of this work is to design a scheme for automatic adaptation of a liveness detector to novel spoof materials encountered during the operational phase. To facilitate this, a novel-material detector is used to flag input images that are deemed to be made of a new spoofing material. Such flagged images are then used to retrain the liveness detector. Experiments conducted on the LivDet 2011 database suggest (i) a 62% increase in the error rate of existing liveness detectors when tested using new spoof materials, and (ii) up to 46% improvement in liveness detection performance across spoof materials when the proposed adaptive approach is used.

Proceedings ArticleDOI
TL;DR: A robust imaging device that can capture both fingerprint and finger vein simultaneously and a novel finger vein recognition algorithm that explores both the maximum curvature method and Spectral Minutiae Representation (SMR) are presented.
Abstract: Multimodal biometric systems based on the fingerprint and finger vein modalities provide promising features useful for robust and reliable identity verification. In this paper, we present a robust imaging device that can capture both fingerprint and finger vein simultaneously. The presented low-cost sensor employs a single camera followed by both near-infrared and visible light sources, organized along with the physical structures to capture good-quality finger vein and fingerprint samples. We further present a novel finger vein recognition algorithm that explores both the maximum curvature method and Spectral Minutiae Representation (SMR). Extensive experiments are carried out on our newly collected database that comprises 1500 samples of fingerprint and finger vein from 150 unique fingers corresponding to 41 subjects. Our results demonstrate the efficacy of the proposed sensor, with an Equal Error Rate as low as 0.78%.
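The Equal Error Rate quoted above is the operating point where the false accept and false reject rates coincide. A minimal sketch of estimating it from genuine and impostor score samples by a threshold sweep (function name and the midpoint convention at the crossing are my own choices):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep thresholds over all observed scores; the EER is read off
    # where FAR and FRR are closest, reported as their midpoint.
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_gap, eer = np.inf, None
    for t in np.unique(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)        # rejected genuine attempts
        far = np.mean(impostor >= t)      # accepted impostor attempts
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return float(eer)
```

Perfectly separated score distributions give an EER of 0; a 0.78% EER means both error rates can be held at 0.78% simultaneously at the right threshold.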

Proceedings ArticleDOI
TL;DR: A new dataset is described that is developed with the goal to serve as a shared common testbed to enable future improvements in keystroke authentication and includes video of a subject's facial expression and hand movement during the data collection sessions, allowing for a deeper understanding of why an algorithm works the way it does.
Abstract: Keystroke authentication can help significantly improve computer security by hardening passwords or offering active, continuous authentication. Over the years, many keystroke authentication algorithms have been reported to produce promising results. However, these results are tested on proprietary datasets with varying numbers of subjects and amounts of text, making it difficult to compare and improve the state of the art. We describe a new dataset that we have developed with the goal to serve as a shared common testbed to enable future improvements. The new dataset includes keystroke data for short pass-phrases, fixed text (transcription of long proses), and free text. It also includes video of a subject's facial expression and hand movement during the data collection sessions, allowing for a deeper understanding of why an algorithm works the way it does, for example, by finding out whether a subject is a touch-typist or not. As a baseline for benchmarking, we also include the results of replicating two existing algorithms using the new dataset.

Proceedings ArticleDOI
TL;DR: The following strategies are devised to improve the fingerprint recognition accuracy when comparing the acquired fingerprints against an extended gallery database of 32,768 infant fingerprints collected by VaxTrac in Benin: upsample the acquired fingerprint image to facilitate minutiae extraction, match the query print against templates created from each enrollment impression and fuse the match scores.
Abstract: One of the major goals of most national, international and non-governmental health organizations is to eradicate the occurrence of vaccine-preventable childhood diseases (e.g., polio). Without high vaccination coverage in a country or a geographical region, these deadly diseases take a heavy toll on children. Therefore, it is important for an effective immunization program to keep track of children who have been immunized and those who have received the required booster shots during the first 4 years of life, to improve vaccination coverage. Given that children, as well as adults, in low-income countries typically do not have any form of identification documents that can be used for this purpose, we address the following question: can fingerprints be effectively used to recognize children from birth to 4 years of age? We have collected 1,600 fingerprint images (500 ppi) of 20 infants and toddlers captured over a 30-day period in East Lansing, Michigan, and 420 fingerprints of 70 infants and toddlers at two different health clinics in Benin, West Africa. We devised the following strategies to improve the fingerprint recognition accuracy when comparing the acquired fingerprints against an extended gallery database of 32,768 infant fingerprints collected by VaxTrac in Benin: (i) upsample the acquired fingerprint image to facilitate minutiae extraction, (ii) match the query print against templates created from each enrollment impression and fuse the match scores, (iii) fuse the match scores of the thumb and index finger, and (iv) update the gallery with fingerprints acquired over multiple sessions. A rank-1 (rank-10) identification accuracy of 83.8% (89.6%) on the East Lansing data, and 40.00% (48.57%) on the Benin data, is obtained after incorporating these strategies when matching infant and toddler fingerprints using a commercial fingerprint SDK.
This is an improvement of about 38% and 20%, respectively, on the two datasets without using the proposed strategies. A state-of-the-art latent fingerprint SDK achieves an even higher rank-1 (rank-10) identification accuracy of 98.97% (99.39%) and 67.14% (71.43%) on the two datasets, respectively, using these strategies; an improvement of about 23% and 24%, respectively, on the two datasets without using the proposed strategies.
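The exact fusion rules in strategies (ii) and (iii) are not given in this listing; a minimal sketch of one plausible combination (max rule over enrollment impressions, mean rule over the thumb and index fingers — both are my own assumptions, not the paper's stated rules) follows:

```python
def fused_identity_scores(match_scores):
    # match_scores: {subject_id: {finger: [score per enrollment impression]}}.
    # Strategy (ii): keep the best score over enrollment impressions;
    # strategy (iii): average the per-finger scores.
    fused = {}
    for subject, fingers in match_scores.items():
        per_finger = [max(scores) for scores in fingers.values()]
        fused[subject] = sum(per_finger) / len(per_finger)
    return fused
```

Rank-1 identification then reports the subject with the highest fused score; rank-10 asks whether the true subject appears among the ten highest.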

Proceedings ArticleDOI
TL;DR: The proposed method complements sketch-based face recognition by allowing investigators to immediately search face repositories without the time delay that is incurred due to sketch generation.
Abstract: We present a method for using human-describable face attributes to perform face identification in criminal investigations. To enable this approach, a set of 46 facial attributes was carefully defined with the goal of capturing all describable and persistent facial features. Using crowd-sourced labor, a large corpus of face images was manually annotated with the proposed attributes. In turn, we train an automated attribute extraction algorithm to encode target repositories with the attribute information. Attribute extraction is performed using localized face components to improve the extraction accuracy. Experiments are conducted to compare the use of attribute feature information, derived from crowd workers, to face sketch information, drawn by expert artists. In addition to removing the dependence on expert artists, the proposed method complements sketch-based face recognition by allowing investigators to immediately search face repositories without the time delay that is incurred due to sketch generation.

Proceedings ArticleDOI
TL;DR: The proposed algorithm utilizes an SSD-based dictionary generated from 50,000 images in the CMU Multi-PIE database; the gallery-probe feature vectors created using the SSD dictionary are matched using a GentleBoostKO classifier.
Abstract: Sketch recognition has important law enforcement applications in detecting and apprehending suspects. Compared to hand-drawn sketches, software-generated composite sketches are faster to create, require fewer skills, and bring consistency to sketch generation. While sketch generation is one side of the problem, recognizing composite sketches against digital images is the other. This paper presents an algorithm to address the second problem, i.e., matching composite sketches with digital images. The proposed algorithm utilizes an SSD-based dictionary generated from 50,000 images in the CMU Multi-PIE database. The gallery-probe feature vectors created using the SSD dictionary are matched using a GentleBoostKO classifier. The results on the extended PRIP composite sketch database show the effectiveness of the proposed algorithm.

Proceedings ArticleDOI
TL;DR: A novel descriptor based minutiae detection algorithm for latent fingerprints that shows promising results on latent fingerprint matching on the NIST SD-27 database.
Abstract: Latent fingerprint identification is of critical importance in criminal investigation. The FBI's Next Generation Identification program demands that latent fingerprint identification be performed in lights-out mode, with very little or no human intervention. However, the performance of automated latent fingerprint identification is limited due to imprecise automated feature (minutiae) extraction, specifically due to noisy ridge patterns and the presence of background noise. In this paper, we propose a novel descriptor-based minutiae detection algorithm for latent fingerprints. Minutia and non-minutia descriptors are learnt from a large number of tenprint fingerprint patches using stacked denoising sparse autoencoders. Latent fingerprint minutiae extraction is then posed as a binary classification problem to classify patches as minutia or non-minutia patches. Experiments performed on the NIST SD-27 database show promising results on latent fingerprint matching.

Proceedings ArticleDOI
TL;DR: It is shown here how clothing traits can be exploited for identification purposes, and the validity and usability of a set of proposed semantic attributes are explored.
Abstract: Recently, soft biometrics has emerged as a novel attribute-based person description for identification. It is likely that soft biometrics can be deployed where other biometrics cannot, and have stronger invariance properties than vision-based biometrics, such as invariance to illumination and contrast. Previously, a variety of bodily soft biometrics has been used for identifying people. Describing a person by their clothing properties is a natural task performed by people. As yet, clothing descriptions have attracted little attention for identification purposes. There has been some usage of clothing attributes to augment biometric description, but a detailed description has yet to be used. We show here how clothing traits can be exploited for identification purposes. We explore the validity and usability of a set of proposed semantic attributes. Human identification is performed, evaluated and compared using different proposed forms of soft clothing traits, both in addition to other descriptors and in isolation.

Proceedings ArticleDOI
TL;DR: The first human performance on unconstrained faces in still images and videos via crowd-sourcing on Amazon Mechanical Turk is reported and it is shown that humans are superior to machines, especially when videos contain contextual cues in addition to the face image.
Abstract: Research focus in face recognition has shifted towards recognition of faces “in the wild” for both still images and videos which are captured in unconstrained imaging environments and without user cooperation. Due to confounding factors of pose, illumination, and expression, as well as occlusion and low resolution, current face recognition systems deployed in forensic and security applications operate in a semi-automatic manner; an operator typically reviews the top results from the face recognition system to manually determine the final match. For this reason, it is important to analyze the accuracies achieved by both the matching algorithms (machines) and humans on unconstrained face recognition tasks. In this paper, we report human accuracy on unconstrained faces in still images and videos via crowd-sourcing on Amazon Mechanical Turk. In particular, we report the first human performance on the YouTube Faces database and show that humans are superior to machines, especially when videos contain contextual cues in addition to the face image. We investigate the accuracy of humans from two different countries (United States and India) and find that humans from the United States are more accurate, possibly due to their familiarity with the faces of the public figures in the YouTube Faces database. A fusion of recognitions made by humans and a commercial-off-the-shelf face matcher improves performance over humans alone.

ReportDOI
TL;DR: Aizenman et al. evaluate the impact of tapering announcements by Fed senior policy makers on financial markets in emerging economies and find, intriguingly, that the group with stronger fundamentals was more adversely exposed to tapering news than the weaker group.
Abstract: Center for Analytical Finance, University of California, Santa Cruz, Working Paper No. 2: "The Transmission of Federal Reserve Tapering News to Emerging Financial Markets," Joshua Aizenman, Mahir Binici and Michael M. Hutchison, June 4, 2014.

This paper evaluates the impact of tapering "news" announcements by Fed senior policy makers on financial markets in emerging economies. We apply a panel framework using daily data, and find that emerging-market asset prices respond most to statements by Fed Chairman Bernanke, and much less to other Fed officials. We group emerging markets into those with "robust" fundamentals (current account surpluses, high international reserves, and low external debt) and those with "fragile" fundamentals and, intriguingly, find that the stronger group was more adversely exposed to tapering news than the weaker group. News of tapering coming from Chairman Bernanke is associated with much larger exchange rate depreciation, drops in the stock market, and increases in sovereign CDS spreads for the robust group compared with the fragile group. A possible interpretation is that tapering news had less impact on countries that received fewer inflows of funds in the first instance during the quantitative easing years and had less to lose in terms of repatriation of capital and reversal of carry-trade activities.

About CAFIN: The Center for Analytical Finance (CAFIN) includes a global network of researchers whose aim is to produce cutting-edge research with practical applications in the area of finance and financial markets. CAFIN focuses primarily on three critical areas: market design, systemic risk, and financial access. Seed funding for CAFIN has been provided by Dean Sheldon Kamieniecki of the Division of Social Sciences at the University of California, Santa Cruz.

Proceedings ArticleDOI
TL;DR: The key goal of this competition is to compare the performance of different methods on a newly collected dataset with the same evaluation protocol and develop the first standardized benchmark for kinship verification in the wild.
Abstract: Kinship verification from facial images in wild conditions is a relatively new and challenging problem in face analysis. Several datasets and algorithms have been proposed in recent years. However, most existing datasets are of small size, and a standard evaluation protocol is still lacking, making it difficult to compare the performance of different kinship verification methods. In this paper, we present the Kinship Verification in the Wild Competition: the first kinship verification competition, held in conjunction with the International Joint Conference on Biometrics 2014, Clearwater, Florida, USA. The key goal of this competition is to compare the performance of different methods on a newly collected dataset with the same evaluation protocol and to develop the first standardized benchmark for kinship verification in the wild.

Proceedings ArticleDOI
TL;DR: This paper proposes a discriminative approach to cross-view gait recognition using view-dependent projection matrices, unlike the existing discriminant approaches which utilize only a single common projection matrix for different views.
Abstract: Gait is a unique and promising behavioral biometric which allows a person to be authenticated even at a distance from the camera. Since a matching pair of gait features is often drawn from different views due to differences in camera position/attitude and walking directions in the real world, it is important to cope with cross-view gait recognition. In this paper, we propose a discriminative approach to cross-view gait recognition using view-dependent projection matrices, unlike the existing discriminant approaches which utilize only a single common projection matrix for different views. We demonstrate the effectiveness of the proposed method through cross-view gait recognition experiments with two publicly available gait datasets. In addition, since the success of discriminant analysis relies on the training sample size, we show the effect of transfer learning across the two gait datasets and provide a rigorous sensitivity analysis of the proposed method against the number of training subjects, ranging from 10 to approximately 1,000.
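How the view-dependent matrices are learned is the paper's contribution and is not reproduced here; the matching step itself is simple: each view gets its own projection into a shared subspace, and comparison is a distance there. An illustrative sketch (names and the Euclidean distance are my own choices):

```python
import numpy as np

def cross_view_distance(x_src, x_dst, P_src, P_dst):
    # x_src, x_dst: gait feature vectors observed from two views.
    # P_src, P_dst: the learned view-specific projection matrices that
    # map each view into a common subspace, where matching is a distance.
    return float(np.linalg.norm(P_src @ x_src - P_dst @ x_dst))
```

The point of view-specific matrices is that each one can compensate its own view's distortion; a single shared matrix cannot make two systematically different views agree.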

Proceedings ArticleDOI
TL;DR: A very high accuracy multi-modal authentication system is presented, based on fusion of several biometrics combined with a policy manager, and a new biometric modality is introduced: chirography, which is based on users writing on multi-touch screens with their finger.
Abstract: User authentication in the context of a secure transaction needs to be continuously evaluated for the risks associated with the transaction authorization. The situation becomes even more critical when there are regulatory compliance requirements. The need for such systems has grown dramatically with the introduction of smart mobile devices, which make it far easier for the user to complete such transactions quickly, but with a huge exposure to risk. Biometrics can play a very significant role in addressing such problems as a key indicator of the user identity, thus reducing the risk of fraud. While unimodal biometric authentication systems are increasingly being experimented with by mainstream mobile system manufacturers (e.g., fingerprint in iOS), we explore various opportunities for reducing risk in a multimodal biometric system. The multimodal system is based on fusion of several biometrics combined with a policy manager. A new biometric modality is introduced: chirography, which is based on users writing on multi-touch screens with their finger. Alongside chirography, we also use two other biometrics: face and voice. Our fusion strategy is based on inter-modality score-level fusion that takes into account a voice quality measure. The proposed system has been evaluated on an in-house database that reflects the latest smart mobile devices. On this database, we demonstrate a very high accuracy multi-modal authentication system, reaching an EER of 0.1% in an office environment and an EER of 0.5% in challenging noisy environments.
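A minimal sketch of quality-aware score-level fusion in the spirit of the system described above. The weights, the quality-gating rule, and all scores are hypothetical; the abstract does not specify the fusion at this level of detail.

```python
# Weighted-sum fusion of face, voice, and chirography scores. The voice
# weight is scaled by a quality measure in [0, 1]; the discounted weight
# is redistributed proportionally to the remaining modalities.
def fuse_scores(face, voice, chiro, voice_quality,
                w_face=0.4, w_voice=0.3, w_chiro=0.3):
    eff_voice = w_voice * voice_quality
    slack = w_voice - eff_voice
    rest = w_face + w_chiro
    return (face * (w_face + slack * w_face / rest)
            + voice * eff_voice
            + chiro * (w_chiro + slack * w_chiro / rest))

# Clean audio: the voice score contributes fully.
clean = fuse_scores(0.9, 0.8, 0.7, voice_quality=1.0)
# Very noisy audio: the voice score is effectively ignored.
noisy = fuse_scores(0.9, 0.8, 0.7, voice_quality=0.0)
```

Down-weighting an unreliable modality rather than dropping it outright is one plausible way a fusion scheme can stay accurate in the noisy environments the abstract mentions.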

Proceedings ArticleDOI
TL;DR: A brief description of the methods and the results achieved by the six participants in the 1st Mobile Iris Liveness Detection Competition (MobILive) is presented.
Abstract: Biometric systems based on iris are vulnerable to several attacks, particularly direct attacks consisting on the presentation of a fake iris to the sensor. The development of iris liveness detection techniques is crucial for the deployment of iris biometric applications in daily life specially in the mobile biometric field. The 1 st Mobile Iris Liveness Detection Competition (MobILive) was organized in the context of IJCB2014 in order to record recent advances in iris liveness detection. The goal for (MobILive) was to contribute to the state of the art of this particular subject. This competition covered the most common and simple spoofing attack in which printed images from an authorized user are presented to the sensor by a non-authorized user in order to obtain access. The benchmark dataset was the MobBIOfake database which is composed by a set of 800 iris images and its corresponding fake copies (obtained from printed images of the original ones captured with the same handheld device and in similar conditions). In this paper we present a brief description of the methods and the results achieved by the six participants in the competition.

Posted Content
TL;DR: In this article, the role of bank market power as an internal factor influencing banks' reaction in terms of lending and risk-taking to monetary policy impulses is examined empirically, showing that banks with even moderate levels of market power are able to buffer the negative impact of a monetary policy change on bank loans and credit risk.
Abstract: This paper examines empirically the role of bank market power as an internal factor influencing banks’ reaction, in terms of lending and risk-taking, to monetary policy impulses. The analysis is carried out for the US and euro-area banking sectors over the period 1997-2010. Market power is estimated at the bank-year level, using a method that also allows efficient estimation of banks’ marginal cost at the bank-year level. The findings show that banks with even moderate levels of market power are able to buffer the negative impact of a monetary policy change on bank loans and credit risk. This effect is somewhat more pronounced in the euro area compared to the US. However, following the subprime mortgage crisis of 2007, the level of market power needed to shield bank loans and credit risk from the impact of a change in monetary policy increased substantially. This is clear evidence that the financial crisis reinforced the mechanisms of the bank-lending and risk-taking channels.
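The abstract estimates bank market power from bank-level marginal costs. One standard summary of market power built from price and marginal cost is the Lerner index; whether this is the paper's exact measure is an assumption here, and the figures below are made up for illustration.

```python
# Lerner index: the markup of price over marginal cost, as a share of
# price. 0 = perfect competition; values approaching 1 = strong market
# power.
def lerner_index(price, marginal_cost):
    return (price - marginal_cost) / price

# Hypothetical loan rates and estimated marginal costs for two banks.
banks = {"bank_a": (0.060, 0.045), "bank_b": (0.050, 0.048)}
indices = {name: lerner_index(p, mc) for name, (p, mc) in banks.items()}
# bank_a prices well above marginal cost, indicating more market power.
```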

Posted Content
TL;DR: In this article, the authors measure consumers' use of cash by harmonizing payment diary surveys from seven countries: Australia, Austria, Canada, France, Germany, the Netherlands and the United States (conducted during 2009 and 2012).
Abstract: We measure consumers’ use of cash by harmonizing payment diary surveys from seven countries: Australia, Austria, Canada, France, Germany, the Netherlands and the United States (conducted during 2009 and 2012). Our paper finds important cross-country differences, most notably in the level of cash usage. However, cash has not disappeared as a payment instrument, especially for low-value transactions. We also find that the use of cash is strongly correlated with transaction size, demographics, and point-of-sale characteristics such as merchant card acceptance and venue.
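The correlation between cash use and transaction size rests on tabulations of diary data like the one sketched below. The records and the threshold are made up; the harmonized surveys' actual variables differ by country.

```python
# Toy payment-diary records: one dict per transaction.
transactions = [
    {"amount": 3.50,  "instrument": "cash"},
    {"amount": 7.20,  "instrument": "cash"},
    {"amount": 12.00, "instrument": "card"},
    {"amount": 45.00, "instrument": "card"},
    {"amount": 2.10,  "instrument": "cash"},
    {"amount": 80.00, "instrument": "card"},
]

def cash_share_by_bin(txns, threshold=10.0):
    """Share of transactions paid in cash, below and above a value cutoff."""
    low  = [t for t in txns if t["amount"] <  threshold]
    high = [t for t in txns if t["amount"] >= threshold]
    share = lambda g: sum(t["instrument"] == "cash" for t in g) / len(g)
    return share(low), share(high)

low_share, high_share = cash_share_by_bin(transactions)
# With this toy data, cash dominates low-value transactions only.
```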

Proceedings ArticleDOI
TL;DR: The second edition of Eye Movement Verification and Identification Competition (EMVIC) is described, which may be regarded as an attempt to provide some common basis for eye movement biometrics (EMB).
Abstract: The idea of using eye movement for human identification has been known for 10 years. However, there is still a lack of commonly accepted methods for performing such identification. This paper describes the second edition of the Eye Movement Verification and Identification Competition (EMVIC), which may be regarded as an attempt to provide a common basis for eye movement biometrics (EMB). The paper presents some details of the organization of the competition and its results, and formulates some conclusions for the further development of EMB.

Proceedings ArticleDOI
TL;DR: A Bayesian approach to model the relation between image quality and corresponding face recognition performance and it is shown that this model can accurately aggregate verification samples into groups for which the verification performance varies fairly consistently.
Abstract: The quality of a pair of facial images is a strong indicator of the uncertainty in a decision about identity based on that image pair. In this paper, we describe a Bayesian approach to modeling the relation between image quality (such as pose, illumination, noise, and sharpness) and the corresponding face recognition performance. Experimental results based on the MultiPIE dataset show that our model can accurately aggregate verification samples into groups for which the verification performance varies fairly consistently. Our model does not require similarity scores and can predict face recognition performance using only image quality information. Such a model has many applications. As an illustrative application, we show improved verification performance when the decision threshold automatically adapts according to the quality of the facial images.
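A minimal sketch (not the paper's Bayesian model) of the underlying idea: bin verification attempts by a scalar image-quality score and estimate the decision error rate per bin. All samples below are invented for illustration.

```python
from collections import defaultdict

# (quality in [0, 1], genuine_pair, similarity_score)
samples = [
    (0.9, True,  0.85), (0.9, True,  0.80), (0.9, False, 0.30),
    (0.2, True,  0.55), (0.2, True,  0.40), (0.2, False, 0.45),
]

def error_rate_by_quality(data, threshold=0.5, n_bins=2):
    """Fraction of wrong accept/reject decisions per quality bin."""
    bins = defaultdict(list)
    for quality, genuine, score in data:
        b = min(int(quality * n_bins), n_bins - 1)
        bins[b].append((score >= threshold) != genuine)  # True = error
    return {b: sum(errors) / len(errors) for b, errors in bins.items()}

rates = error_rate_by_quality(samples)
# High-quality bin (1) is error-free; the low-quality bin (0) has errors:
# exactly the regularity a quality-adaptive decision threshold exploits.
```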

Proceedings ArticleDOI
TL;DR: A fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image and an eyebrow shape-based identification system for periocular face recognition are proposed.
Abstract: Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and is particularly effective in certain scenarios, such as extreme occlusion and illumination variation, where traditional face recognition systems are unreliable. In this paper, we first propose a fully automatic, robust and fast graph-cut-based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases, and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy, with an F-Measure of 99.4%, and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database.
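The segmentation accuracy above is reported as an F-Measure. For reference, a quick sketch of how it is computed for a binary segmentation, treating predicted and ground-truth masks as pixel sets (the pixel sets here are hypothetical):

```python
def f_measure(predicted, ground_truth):
    """Harmonic mean of pixelwise precision and recall."""
    tp = len(predicted & ground_truth)          # true-positive pixels
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

pred = {(0, 0), (0, 1), (1, 0)}   # pixels labeled "eyebrow"
truth = {(0, 0), (0, 1), (1, 1)}  # ground-truth eyebrow pixels
score = f_measure(pred, truth)    # precision = recall = 2/3
```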