Author

Vincenzo Piuri

Bio: Vincenzo Piuri is an academic researcher from the University of Milan. He has contributed to research in topics including fault tolerance and biometrics. He has an h-index of 39 and has co-authored 416 publications receiving 6,280 citations. His previous affiliations include Fiat Automobiles and the Instituto Politécnico Nacional.


Papers
Journal ArticleDOI
TL;DR: Two fault detection schemes are presented: the first is redundancy-based, while the second uses an error-detecting code. The latter is a novel scheme that leads to very efficient, high-coverage fault detection.
Abstract: The goal of the Advanced Encryption Standard (AES) is to achieve secure communication. The use of AES does not, however, guarantee reliable communication. Prior work has shown that even a single transient error occurring during the AES encryption (or decryption) process will very likely result in a large number of errors in the encrypted/decrypted data. Such faults must be detected before the data are sent, to avoid the transmission and use of erroneous data. Concurrent fault detection is important not only to protect the encryption/decryption process from random faults; it also protects the encryption/decryption circuitry from an attacker who may maliciously inject faults in order to find the secret encryption key. In this paper, we first describe some studies of the effects that faults may have on a hardware implementation of AES by analyzing the propagation of such faults to the outputs. We then present two fault detection schemes: the first is redundancy-based, while the second uses an error-detecting code. The latter is a novel scheme that leads to very efficient, high-coverage fault detection. Finally, the hardware costs and detection latencies of both schemes are estimated.

379 citations
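The two generic detection ideas named in the abstract (recomputation redundancy versus an error-detecting code) can be illustrated on a toy state. This is a hedged sketch only: the round function, key values, and parity predictor below are invented stand-ins, not the paper's actual AES hardware schemes.

```python
# Sketch of the two detection approaches on a toy 16-byte "round",
# assuming a single transient bit flip as the fault model.

def toy_round(state, key):
    """Stand-in round function: XOR with the key, then a byte rotation."""
    mixed = bytes(s ^ k for s, k in zip(state, key))
    return mixed[1:] + mixed[:1]

def parity(state):
    """One parity bit per byte, packed into an int (error-detecting code)."""
    return sum((bin(b).count("1") % 2) << i for i, b in enumerate(state))

def detect_by_redundancy(state, key, observed_out):
    # Scheme 1: recompute the whole round and compare (full redundancy).
    return toy_round(state, key) != observed_out

def detect_by_code(state, key, observed_out):
    # Scheme 2: check only the parity code of the output. A real design
    # would predict the output parity cheaply from the inputs instead of
    # recomputing the round, which is what makes the code-based scheme
    # far more efficient than full redundancy.
    predicted = parity(toy_round(state, key))
    return predicted != parity(observed_out)

state = bytes(range(16))
key = bytes(range(16, 32))
good = toy_round(state, key)
faulty = bytes([good[0] ^ 0x04]) + good[1:]      # single transient bit flip

print(detect_by_redundancy(state, key, faulty))  # True
print(detect_by_code(state, key, faulty))        # True
```

A single bit flip always changes the parity of the affected byte, so the per-byte parity code catches every odd-weight fault, at a fraction of the cost of a second full computation.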

Proceedings ArticleDOI
29 Dec 2011
TL;DR: A new public dataset of blood samples is proposed, specifically designed for the evaluation and comparison of segmentation and classification algorithms, offering a new test tool to the image processing and pattern matching communities.
Abstract: The visual analysis of peripheral blood samples is an important test in the procedures for the diagnosis of leukemia. Automated systems based on artificial vision methods can speed up this operation and increase the accuracy and homogeneity of the response, also in telemedicine applications. Unfortunately, no public image datasets are available to test and compare such algorithms. In this paper, we propose a new public dataset of blood samples, specifically designed for the evaluation and comparison of segmentation and classification algorithms. For each image in the dataset, the classification of the cells is given, as well as a specific set of figures of merit to fairly compare the performance of different algorithms. This initiative aims to offer a new test tool to the image processing and pattern matching communities and to stimulate new studies in this important field of research.

369 citations
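An evaluation figure of merit of the kind the dataset ships with compares an algorithm's per-cell predictions against the ground-truth classification provided for each image. The metric below (plain accuracy) and the class labels are illustrative choices, not the paper's exact set of figures of merit.

```python
# Hedged sketch: a simple figure of merit for comparing algorithms
# against the per-image ground-truth labels the dataset provides.

def accuracy(ground_truth, predicted):
    """Fraction of cells assigned the correct class."""
    assert len(ground_truth) == len(predicted)
    hits = sum(g == p for g, p in zip(ground_truth, predicted))
    return hits / len(ground_truth)

truth = ["blast", "normal", "normal", "blast"]   # made-up labels
pred = ["blast", "normal", "blast", "blast"]
print(accuracy(truth, pred))  # 0.75
```

Publishing the labels together with fixed metrics is what lets different algorithms be compared fairly on the same footing.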

Journal ArticleDOI
TL;DR: Deep-ECG extracts significant features from one or more leads using a deep CNN and compares biometric templates by computing simple and fast distance functions, obtaining remarkable accuracy for identification, verification and periodic re-authentication.

255 citations
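The verification step summarized above (comparing fixed-length biometric templates with a simple, fast distance function and a threshold) can be sketched as follows. The CNN feature extractor is out of scope here; the template vectors, dimensionality, and threshold are invented for illustration.

```python
import numpy as np

# Hedged sketch of distance-based template comparison for verification,
# assuming the deep CNN has already produced fixed-length feature vectors.

def verify(template_a, template_b, threshold=0.5):
    """Accept the claimed identity if the template distance is small."""
    d = np.linalg.norm(template_a - template_b)   # Euclidean distance
    return bool(d < threshold)

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                        # stored template
same_user = enrolled + rng.normal(scale=0.01, size=128)  # small intra-user noise
impostor = rng.normal(size=128)                        # unrelated template

print(verify(enrolled, same_user))   # True
print(verify(enrolled, impostor))    # False
```

Because the comparison is just a vector distance, it is cheap enough to run repeatedly, which is what makes periodic re-authentication practical.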

Journal ArticleDOI
TL;DR: An innovative, system-level, modular perspective on creating and managing fault tolerance in Clouds is introduced, and a comprehensive high-level approach is proposed that shields the implementation details of fault tolerance techniques from application developers and users by means of a dedicated service layer.
Abstract: The increasing popularity of Cloud computing as an attractive alternative to classic information processing systems has increased the importance of its correct and continuous operation, even in the presence of faulty components. In this paper, we introduce an innovative, system-level, modular perspective on creating and managing fault tolerance in Clouds. We propose a comprehensive high-level approach that shields the implementation details of the fault tolerance techniques from application developers and users by means of a dedicated service layer. In particular, the service layer allows the user to specify and apply the desired level of fault tolerance, and does not require knowledge about the fault tolerance techniques that are available in the envisioned Cloud or their implementations.

196 citations
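The dedicated service layer described above lets a user name a desired fault-tolerance level while the layer hides which concrete techniques implement it. The level names, technique names, and mapping below are invented for illustration; the paper does not prescribe them.

```python
# Hedged sketch of a fault-tolerance service layer: the user picks a
# level; the mapping to concrete techniques stays hidden behind the API.

TECHNIQUES = {
    "low": ["retry"],
    "medium": ["retry", "checkpoint"],
    "high": ["retry", "checkpoint", "replication"],
}

class FaultToleranceService:
    def __init__(self, level):
        if level not in TECHNIQUES:
            raise ValueError(f"unknown fault-tolerance level: {level}")
        self._techniques = TECHNIQUES[level]   # hidden from the user

    def run(self, task):
        # In a real Cloud the techniques would wrap the task's execution;
        # here we only report which ones would be applied.
        return f"ran {task} with {', '.join(self._techniques)}"

svc = FaultToleranceService("high")
print(svc.run("job-42"))  # ran job-42 with retry, checkpoint, replication
```

The point of the design is the separation of concerns: the user's code names only a level, so the Cloud provider can swap or upgrade the underlying techniques without touching applications.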

Proceedings ArticleDOI
14 Jul 2004
TL;DR: This paper presents a methodology for the automated detection and classification of leucocytes in microscope color images: it first separates the leucocytes from the other blood cells, then extracts morphological indexes, and finally classifies the leucocytes with a neural classifier into Basophil, Eosinophil, Lymphocyte, Monocyte, and Neutrophil.
Abstract: The classification and count of white blood cells in microscopy images allows the assessment of a wide range of important hematic pathologies (from the presence of infections to leukemia). Nowadays, morphological cell classification is typically performed by experienced operators. Such a procedure has undesirable drawbacks: it is slow, and its accuracy is not standardized, since it depends on the operator's skill and fatigue. Only a few attempts at partially or fully automated systems based on image processing are reported in the literature, and they are still at the prototype stage. This paper presents a methodology for the automated detection and classification of leucocytes in microscope color images. The proposed system first separates the leucocytes from the other blood cells, then extracts morphological indexes, and finally classifies the leucocytes with a neural classifier into Basophil, Eosinophil, Lymphocyte, Monocyte, and Neutrophil.

193 citations
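The three-stage pipeline described above (segment the leucocyte, extract morphological indexes, classify) can be sketched on a toy image. Everything here is a hedged stand-in: the threshold segmentation, the two indexes, and the nearest-centroid classifier replace the paper's actual segmentation method and neural classifier, and the centroid values are made up.

```python
import numpy as np

# Hedged sketch of a segment -> features -> classify pipeline,
# with a nearest-centroid stand-in for the neural classifier.

CLASSES = ["Basophil", "Eosinophil", "Lymphocyte", "Monocyte", "Neutrophil"]

def segment(image, threshold=128):
    """Crude segmentation: dark pixels are treated as the stained cell."""
    return image < threshold                          # boolean mask

def morphological_indexes(mask):
    """Two illustrative indexes: area and elongation of the region."""
    ys, xs = np.nonzero(mask)
    area = mask.sum()
    elong = (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)
    return np.array([area, elong])

def classify(features, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    d = np.linalg.norm(centroids - features, axis=1)
    return CLASSES[int(np.argmin(d))]

image = np.full((8, 8), 255)
image[2:6, 3:5] = 50                                  # a 4x2 dark blob
feats = morphological_indexes(segment(image))         # [8, 2.0]
centroids = np.array([[4, 1.0], [8, 2.0], [16, 1.0], [30, 1.0], [8, 0.5]])
print(classify(feats, centroids))                     # Eosinophil
```

A real system would use many more morphological indexes and a trained neural network, but the data flow is the same: mask, feature vector, class label.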


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, sparse kernel machines, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2003

3,093 citations

09 Mar 2012
TL;DR: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems; this entry introduces ANNs using familiar econometric terminology and provides an overview of the ANN modeling approach and its implementation methods.
Abstract: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANNs using familiar econometric terminology and provide an overview of the ANN modeling approach and its implementation methods.

2,069 citations
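The "flexible nonlinear model" described above is, in its canonical form, a one-hidden-layer feedforward network: a nonlinear hidden transformation followed by a linear output. The weights below are arbitrary illustrative values, not estimates from any data.

```python
import numpy as np

# Hedged sketch of a one-hidden-layer feedforward ANN:
# output = w2 . tanh(W1 x + b1) + b2

def ann(x, W1, b1, w2, b2):
    hidden = np.tanh(W1 @ x + b1)        # nonlinear hidden layer
    return w2 @ hidden + b2              # linear output layer

x = np.array([1.0, -0.5])
W1 = np.array([[0.5, -1.0], [1.0, 0.5]])
b1 = np.array([0.1, -0.1])
w2 = np.array([2.0, -1.0])
b2 = 0.5

y = ann(x, W1, b1, w2, b2)
print(round(float(y), 3))
```

With enough hidden units this architecture can approximate a wide class of nonlinear functions, which is what makes it useful as a flexible regression model in econometrics.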

Book ChapterDOI
E.R. Davies
01 Jan 1990
TL;DR: This chapter introduces the subject of statistical pattern recognition (SPR) by considering how features are defined and emphasizes that the nearest neighbor algorithm achieves error rates comparable with those of an ideal Bayes’ classifier.
Abstract: This chapter introduces the subject of statistical pattern recognition (SPR). It starts by considering how features are defined and emphasizes that the nearest neighbor algorithm achieves error rates comparable with those of an ideal Bayes’ classifier. The concepts of an optimal number of features, representativeness of the training data, and the need to avoid overfitting to the training data are stressed. The chapter shows that methods such as the support vector machine and artificial neural networks are subject to these same training limitations, although each has its advantages. For neural networks, the multilayer perceptron architecture and back-propagation algorithm are described. The chapter distinguishes between supervised and unsupervised learning, demonstrating the advantages of the latter and showing how methods such as clustering and principal components analysis fit into the SPR framework. The chapter also defines the receiver operating characteristic, which allows an optimum balance between false positives and false negatives to be achieved.

1,189 citations
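The nearest neighbor rule highlighted in the chapter is simple enough to sketch in a few lines: classify each test point with the label of its closest training point. The tiny 2-D data set below is made up for illustration.

```python
# Hedged sketch of the nearest neighbor classifier: label a point with
# the class of its closest training example (squared Euclidean distance).

def nearest_neighbor(train, labels, point):
    dists = [sum((a - b) ** 2 for a, b in zip(t, point)) for t in train]
    return labels[dists.index(min(dists))]

train = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
labels = ["A", "A", "B", "B"]
print(nearest_neighbor(train, labels, (0.2, 0.1)))  # A
print(nearest_neighbor(train, labels, (4.8, 5.5)))  # B
```

Despite its simplicity, this rule's asymptotic error rate is at most twice the Bayes error, which is the comparison the chapter emphasizes.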