Maria De Marsico
Bio: Maria De Marsico is an academic researcher at Sapienza University of Rome. She has contributed to research on topics including biometrics and mobile devices. She has an h-index of 21 and has co-authored 162 publications, which have received 1,846 citations.
TL;DR: A new dataset of iris images acquired by mobile devices can support researchers with regard to biometric dimensions of interest including uncontrolled settings, demographics, interoperability, and real-world applications.
Abstract: Highlights: a new dataset of iris images acquired by mobile devices can support researchers; MICHE-I will assist with developing continuous authentication to counter spoofing; the dataset includes images from different mobile devices, sessions and conditions. We introduce and describe here MICHE-I, a new iris biometric dataset captured under uncontrolled settings using mobile devices. The key features of the MICHE-I dataset are a wide and diverse population of subjects, the use of different mobile devices for iris acquisition, realistic simulation of the acquisition process (including noise), several data capture sessions separated in time, and image annotation using metadata. The aim of the MICHE-I dataset is to form the starting core of a wider dataset that we plan to collect, with the further aim of addressing interoperability, both in the sense of matching samples acquired with different devices and of assessing the robustness of algorithms to devices with different characteristics. We discuss throughout the merits of MICHE-I with regard to biometric dimensions of interest, including uncontrolled settings, demographics, interoperability, and real-world applications. We also consider the potential for MICHE-I to assist with developing continuous authentication aimed at countering adversarial spoofing and impersonation, where the bar for uncontrolled settings rises even higher for proper and effective defensive measures.
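As a brief illustration of the metadata annotation and cross-device interoperability goal described above, the sketch below models annotated samples and enumerates the cross-device pairs that interoperability matching implies. All field and function names here are hypothetical, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IrisSample:
    subject_id: str
    device: str      # e.g. the phone/tablet model used for capture
    session: int     # capture sessions are separated in time
    indoor: bool     # acquisition condition
    image_path: str

def cross_device_pairs(samples):
    """Yield pairs from the same subject but different devices: the kind
    of matching the interoperability goal implies."""
    for i, a in enumerate(samples):
        for b in samples[i + 1:]:
            if a.subject_id == b.subject_id and a.device != b.device:
                yield a, b

samples = [IrisSample("s01", "phoneA", 1, True, "s01_a.jpg"),
           IrisSample("s01", "phoneB", 2, False, "s01_b.jpg"),
           IrisSample("s02", "phoneA", 1, True, "s02_a.jpg")]
print(len(list(cross_device_pairs(samples))))  # 1
```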
01 Mar 2004 - International Journal of Human-Computer Studies / International Journal of Man-Machine Studies
TL;DR: A new goal-based approach to measure usability of web sites is presented, strongly taking into account the customer's expectations, which are often hardly foreseeable as a whole.
Abstract: A new goal-based approach to measuring the usability of web sites is presented, strongly taking into account the customer's expectations, which are often hardly foreseeable as a whole. After a general discussion of web site design issues, we present a short survey of evaluation methods currently used for web sites. We next introduce a new taxonomy of site categories in a three-dimensional space, derived from Aristotle's rhetorical triangle, including different aspects of the site designer's goals. In our approach, we use this taxonomy to identify a number of sites belonging to the same category, in order to carry out a comparative analysis of their features. This analysis is the basis for a two-shot generation of a form for the evaluation of that category of sites. In the first shot, users fill in a generic evaluation form, which acquaints them with the sites' characteristics. They are next asked to perform specific tasks of their choice, according to what they expect from a site of the given category. They note their impressions and list the features they found useful; the analysis of their comments is exploited to formulate statements specific to the given category, to be added to the initial form (second shot). We found that the responses to the second, expanded form provide more comprehensive criteria for site evaluation and prove helpful in precisely locating flaws in site functionality. After testing, our methodology has proved very promising and may be applied to the evaluation of any other site category, above all those providing a set of special services.
TL;DR: FIRME (Face and Iris Recognition for Mobile Engagement) is described: a biometric application based on multimodal recognition of face and iris, designed to be embedded in mobile devices and optimized to be low-demanding and computation-light.
Abstract: Mobile devices, namely phones and tablets, have long gone "smart". Their growing use is both a cause and an effect of their technological advancement. Among other factors, their increasing ability to store and exchange sensitive information has spurred interest in exploiting their vulnerabilities, and the corresponding need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable and require only the camera that normally equips such devices; the alternative use of fingerprints, on the contrary, requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security: ambient intelligence services bound to the recognition of a user, as well as social applications such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement), a biometric application based on multimodal recognition of face and iris, designed to be embedded in mobile devices. Both the design and the implementation of FIRME rely on a modular architecture, whose workflow includes separate and replaceable packages. The first package handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. For the face, an anti-spoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address security-critical applications as well, FIRME can perform continuous re-identification and best-sample selection. To further address the possibly limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light.
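The final fusion step of such a two-branch pipeline can be sketched as follows. This is a minimal weighted-sum score-level fusion, a common choice for multimodal biometrics; the names, weights, and threshold are illustrative assumptions, not FIRME's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    face_score: float  # similarity in [0, 1] from the face branch
    iris_score: float  # similarity in [0, 1] from the iris branch

def fuse_scores(result: MatchResult, w_face: float = 0.5) -> float:
    """Weighted-sum score-level fusion of the two branches."""
    return w_face * result.face_score + (1.0 - w_face) * result.iris_score

def authenticate(result: MatchResult, threshold: float = 0.6) -> bool:
    return fuse_scores(result) >= threshold

# A strong face match can compensate for a weaker iris match.
print(authenticate(MatchResult(face_score=0.9, iris_score=0.4)))  # True
```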
06 Aug 2012
TL;DR: Starting from a set of automatically located facial points, geometric invariants are exploited for detecting replay attacks and the presented results demonstrate the effectiveness and efficiency of the proposed indices.
Abstract: Face recognition provides many advantages compared with other available biometrics, but it is particularly subject to spoofing. The most accurate methods in the literature addressing this problem rely on estimating the three-dimensionality of faces, which heavily increases the whole cost of the system. This paper proposes an effective and efficient solution to the problem of face spoofing. Starting from a set of automatically located facial points, we exploit geometric invariants for detecting replay attacks. The presented results demonstrate the effectiveness and efficiency of the proposed indices.
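The paper's indices are projective invariants of facial points; as a hedged illustration of the same geometric idea (not the paper's actual method), the sketch below uses a related fact: points on a flat replayed photo seen in two views are related by a single homography, while points on a real 3D face are not, so the residual of a least-squares homography fit separates the two cases.

```python
import numpy as np

def fit_homography(src, dst):
    """DLT least-squares homography mapping Nx2 src points to dst."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def mean_reprojection_error(H, src, dst):
    pts = np.c_[src, np.ones(len(src))] @ H.T
    proj = pts[:, :2] / pts[:, 2:3]
    return float(np.mean(np.linalg.norm(proj - np.asarray(dst), axis=1)))

# Synthetic "facial points" (eye corners, nose tip, mouth corners, ...).
src = np.array([[10, 10], [60, 12], [35, 30], [15, 55],
                [55, 57], [35, 45], [25, 20], [45, 22]], dtype=float)
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [5e-4, 2e-4, 1.0]])
flat = np.c_[src, np.ones(len(src))] @ H_true.T
flat = flat[:, :2] / flat[:, 2:3]                  # a replayed flat photo
bumps = np.array([[2, -1], [-2, 1], [3, 2], [-1, -2],
                  [1, 3], [-3, -1], [2, 2], [-2, 3]], dtype=float)
real = flat + bumps                                # depth breaks planarity

err_flat = mean_reprojection_error(fit_homography(src, flat), src, flat)
err_real = mean_reprojection_error(fit_homography(src, real), src, real)
# err_flat is near zero; err_real is clearly larger.
```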
TL;DR: This survey focuses on recognition and leaves the detection and feature extraction problems in the background, although the kind of features used to code the iris pattern may significantly influence the complexity of the methods and their performance.
Abstract: Iris recognition is one of the most promising fields in biometrics. Notwithstanding this, relatively few research works address it with machine learning techniques. In this survey, we especially focus on recognition, and leave the detection and feature extraction problems in the background. However, the kind of features used to code the iris pattern may significantly influence the complexity of the methods and their performance. In other words, complexity affects learning, and iris patterns require relatively complex feature vectors, even if their size can be optimized. A cross-comparison of these two parameters, feature complexity vs. learning effectiveness, in the context of different learning algorithms, would require an unbiased common benchmark. Moreover, at present it is still very difficult to reproduce techniques and experiments, due to the lack of either sufficient implementation details or reliable shared code.
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
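The mail-filter scenario in the fourth category can be made concrete with a minimal sketch: a naive Bayes classifier that learns from messages the user rejected ("spam") versus kept ("ham") and then scores new mail. The toy messages and vocabulary size are invented for illustration.

```python
import math
from collections import Counter

def train(labelled_messages):
    """Count words per class and messages per class."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    for text, label in labelled_messages:
        word_counts[label].update(text.lower().split())
        class_counts[label] += 1
    return word_counts, class_counts

def predict(word_counts, class_counts, text, vocab_size=1000):
    total = sum(class_counts.values())
    scores = {}
    for label in ("spam", "ham"):
        # log prior + Laplace-smoothed log likelihood of each word
        logp = math.log(class_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for w in text.lower().split():
            logp += math.log((word_counts[label][w] + 1) / (n_words + vocab_size))
        scores[label] = logp
    return max(scores, key=scores.get)

mail = [("win free prize now", "spam"),
        ("free money win big", "spam"),
        ("meeting agenda for monday", "ham"),
        ("project report attached", "ham")]
word_counts, class_counts = train(mail)
print(predict(word_counts, class_counts, "win a free prize"))  # spam
```

As the user keeps rejecting or accepting messages, retraining on the growing labelled set updates the filter automatically, which is exactly the burden the text says learning lifts from the programmer.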
01 Jan 2005
TL;DR: A general technique called Bubbles is proposed to assign the credit of human categorization performance to specific visual information; it is illustrated on three face categorization tasks (gender, expressive or not, and identity).
Abstract: Everyday, people flexibly perform different categorizations of common faces, objects and scenes. Intuition and scattered evidence suggest that these categorizations require the use of different visual information from the input. However, there is no unifying method, based on the categorization performance of subjects, that can isolate the information used. To this end, we developed Bubbles, a general technique that can assign the credit of human categorization performance to specific visual information. To illustrate the technique, we applied Bubbles on three categorization tasks (gender, expressive or not and identity) on the same set of faces, with human and ideal observers to compare the features they used.
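The core of a Bubbles-style experiment is revealing a stimulus only through randomly placed Gaussian apertures; regions that support correct responses then accumulate credit across trials. The sketch below generates one such mask; the number of bubbles and their width are illustrative guesses, not the paper's parameters.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles=5, sigma=8.0, rng=None):
    """Sum of randomly centred Gaussian apertures, clipped to [0, 1]."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

face = np.ones((64, 64))        # stand-in for a face stimulus
mask = bubbles_mask(face.shape)
stimulus = face * mask          # the observer sees only the bubbled regions
```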
TL;DR: This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis that exploits the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces.
Abstract: Research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images, hence discarding the chroma component, which can be very useful for discriminating fake faces from genuine ones. This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis. We exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces. More specifically, the feature histograms are computed over each image band separately. Extensive experiments on the three most challenging benchmark data sets, namely, the CASIA face anti-spoofing database, the replay-attack database, and the MSU mobile face spoof database, showed excellent results compared with the state of the art. More importantly, unlike most of the methods proposed in the literature, our proposed approach is able to achieve stable performance across all the three benchmark data sets. The promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts.
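The per-channel, per-colour-space histogram idea can be sketched as follows: convert RGB to a luminance/chrominance space, compute a basic 8-neighbour LBP code map on each band separately, and concatenate the normalized histograms. This is a simplified stand-in for the paper's descriptors, with an assumed BT.601 YCbCr conversion and plain (non-uniform) LBP.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """ITU-R BT.601 full-range conversion; img is float RGB in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + (b - y) * 0.564
    cr = 0.5 + (r - y) * 0.713
    return np.stack([y, cb, cr], axis=-1)

def lbp_histogram(channel, bins=256):
    """Basic 8-neighbour LBP codes, returned as a normalized histogram."""
    c = channel[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = channel.shape
    for bit, (dy, dx) in enumerate(shifts):
        neigh = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()

def colour_texture_descriptor(rgb):
    """Concatenated per-band texture histograms (luminance + chrominance)."""
    ycbcr = rgb_to_ycbcr(rgb)
    return np.concatenate([lbp_histogram(ycbcr[..., i]) for i in range(3)])
```

The same function could be run over other colour spaces (e.g. HSV) and the results concatenated, matching the paper's use of complementary descriptions from different colour spaces.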
01 Dec 2006
TL;DR: This study investigates website quality factors, their relative importance in selecting the most preferred website, and the relationship between website preference and financial performance and found that the website with the highest quality produced the highest business performance.
Abstract: This study investigates website quality factors, their relative importance in selecting the most preferred website, and the relationship between website preference and financial performance. An extension of DeLone and McLean's IS success model, applied through an analytic hierarchy process (AHP), is used. A field study with 156 online customers and 34 managers/designers of e-business companies was performed. The study identified a different relative importance for each website quality factor and priority of alternative websites across e-business domains and between stakeholders. It also found that the website with the highest quality produced the highest business performance. The findings provide decision makers of e-business companies with useful insights to enhance their website quality.
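The AHP step used to rank quality factors can be sketched briefly: pairwise comparison judgments form a reciprocal matrix, and the principal eigenvector gives the priority weights. The factor names and comparison values below are invented for illustration, not the study's data.

```python
import numpy as np

# Factors: [information quality, system quality, service quality]
# A[i, j] = how strongly factor i is preferred over factor j.
A = np.array([[1.0, 3.0, 2.0],
              [1 / 3, 1.0, 1 / 2],
              [1 / 2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()          # normalized priority weights
print(np.round(weights, 3))       # the first factor gets the largest weight
```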