
Showing papers by "Vincenzo Piuri published in 2007"


Journal ArticleDOI
TL;DR: Local computation is introduced here, by means of an adequate partitioning of the data space called hyperbox (HB), to reduce the computational time so as to be linear in the number of data points N, saving more than 80% of time in real applications.
Abstract: Modern scanners are able to deliver huge quantities of three-dimensional (3-D) data points sampled on an object's surface in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, which is based on an optimized version of soft vector quantization (VQ). The resulting technique has been termed enhanced vector quantization (EVQ), since it introduces several improvements with respect to the classical soft VQ approaches, which are based on computationally expensive iterative optimization. Local computation is introduced here, by means of an adequate partitioning of the data space called hyperbox (HB), to reduce the computational time so as to be linear in the number of data points N, saving more than 80% of the time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation that is sublinear in N. The voxel side and the other parameters are automatically determined from the data distribution on the basis of Zador's criterion. This makes the algorithm completely automatic. Because the only parameter to be specified is the compression rate, the procedure is suitable even for nontrained users. Results obtained in reconstructing faces of both humans and puppets, as well as artifacts, from point clouds publicly available on the web are reported and discussed in comparison with other methods available in the literature. EVQ has been conceived as a general procedure, suited for VQ applications with large data sets whose data space has relatively low dimensionality.
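The EVQ algorithm itself is not reproduced in the abstract, but the core idea of cutting VQ cost by partitioning the data space into cells can be illustrated with a much simpler hard-quantization sketch: split the bounding volume into cubic cells and replace the points in each cell by their centroid. The function name and voxel side below are illustrative choices, not taken from the paper.

```python
import numpy as np

def decimate_point_cloud(points, voxel_side):
    """Replace all points falling in the same cubic cell of side
    `voxel_side` by their centroid -- a crude, hard-quantization
    stand-in for the paper's locally computed soft VQ."""
    keys = np.floor(points / voxel_side).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()                  # robust across NumPy versions
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, points.shape[1]))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)           # accumulate per-cell sums
    np.add.at(counts, inverse, 1)              # and per-cell point counts
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.random((10_000, 3))                # synthetic "scanner" points in a unit cube
mesh_pts = decimate_point_cloud(cloud, 0.2)    # at most 5**3 = 125 representatives
```

In the paper, by contrast, the cell side is derived automatically from the data distribution via Zador's criterion, and the per-cell computation is a soft VQ optimization rather than plain averaging.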

21 citations


Proceedings ArticleDOI
15 Dec 2007
TL;DR: This work presents a biometric authentication technique based on the combination of multiple biometric readings; the authentication control can be performed offline, and the stored identifier does not disclose any information on the biometric traits of the identified person, so that even in case of loss or theft of the document, privacy is protected.
Abstract: Biometric techniques are increasingly exploited in order to speed up the identification process and make it more reliable. Recently, many proposals have been formulated combining cryptography and biometrics in order to increase the confidence in the system when biometric templates are stored for verification. In this work we present a biometric authentication technique based on the combination of multiple biometric readings. The authentication control can be performed offline, and the stored identifier does not disclose any information on the biometric traits of the identified person, so that even in case of loss or theft of the document, privacy is guaranteed. Keywords: Biometric identification, Privacy, Secure sketch.
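The abstract does not give the construction, but the "Secure sketch" keyword suggests something in the spirit of the code-offset (fuzzy commitment) scheme. A toy single-reading version using a repetition code might look like the following; all names, bit lengths, and parameters are illustrative, not the paper's:

```python
import hashlib
import random

def sketch(template_bits, key_bits, rep=3):
    """Code-offset sketch: XOR a repetition-encoded secret key with the
    biometric template; the helper data alone reveals neither of them."""
    codeword = [b for b in key_bits for _ in range(rep)]
    return [t ^ c for t, c in zip(template_bits, codeword)]

def recover(noisy_template, helper, rep=3):
    """XOR away the fresh (noisy) reading, then majority-vote each
    group of `rep` bits to correct a few read errors."""
    noisy_cw = [t ^ h for t, h in zip(noisy_template, helper)]
    return [int(sum(noisy_cw[i * rep:(i + 1) * rep]) > rep // 2)
            for i in range(len(noisy_cw) // rep)]

random.seed(0)
key = [1, 0, 1, 1]                                    # secret to bind
template = [random.randint(0, 1) for _ in range(12)]  # enrollment reading
helper = sketch(template, key)
identifier = hashlib.sha256(bytes(key)).hexdigest()   # stored label; no biometric leak

noisy = template[:]
noisy[5] ^= 1                                         # one bit flips on re-read
recovered = recover(noisy, helper)                    # equals `key` despite the error
```

Verification then hashes the recovered key and compares it with the stored identifier, which is why the check works offline and the stored data discloses nothing about the biometric trait.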

8 citations


Proceedings ArticleDOI
21 May 2007
TL;DR: This paper proposes a high-level architecture of a system on chip (SoC) which implements IPSec, intended to be placed on the main data path of the host machine (flow-through architecture), thus allowing for transparent processing of IPSec traffic.
Abstract: IPSec is a suite of protocols which adds security to communications at the IP level. Protocols within the IPSec suite make extensive use of cryptographic algorithms. Since these algorithms are computationally very intensive, some hardware acceleration is needed to support high throughput. In this paper we propose a high-level architecture of a system on chip (SoC) which implements IPSec. This SoC is intended to be placed on the main data path of the host machine (flow-through architecture), thus allowing for transparent processing of IPSec traffic. The functionalities of the different blocks and their interactions, along with an estimation of the internal memory size, are also shown.

7 citations


Proceedings ArticleDOI
01 May 2007
TL;DR: This work proposes a model of an intelligent short-term demand side management (DSM) system based on a distributed measurement and management data system that can be the consumer's key to taking advantage of a DSM program automatically.
Abstract: This work proposes a model of an intelligent short-term demand side management (DSM) system based on a distributed measurement and management data system. The system is designed to avoid peaks of power request greater than a given threshold and to give maximum comfort to the user. The DSM problem is modeled as a multi-objective scheduling problem and is solved using a metaheuristic approach based on a multi-agent system. The proposed system is composed of a distributed network of processing nodes (PNs). Each PN hosts one agent and is able to manage a single node of a distribution network, allowing or disallowing it to supply power. Each agent reacts to a new critical condition by entering into competition with the others to gain access to a shared limited resource. As the results show, the proposed system can be the consumer's key to taking advantage of a DSM program automatically.
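The multi-agent metaheuristic is not spelled out in the abstract. As a hypothetical baseline, the peak-avoidance decision alone can be sketched as a greedy priority-based load-shedding rule; the appliance names, wattages, and priorities below are invented for illustration:

```python
def shed_loads(loads, threshold_w):
    """Greedy peak shaving: while total demand exceeds the threshold,
    disconnect the lowest-priority loads first (a higher number means
    more important, so comfort-critical devices survive longest)."""
    on = dict(loads)
    total = sum(power for power, _ in loads.values())
    for name, (power, _prio) in sorted(loads.items(), key=lambda kv: kv[1][1]):
        if total <= threshold_w:
            break
        del on[name]          # deny this node permission to draw power
        total -= power
    return on

loads = {"heater": (2000, 1), "oven": (1800, 2), "fridge": (150, 3)}  # (watts, priority)
kept = shed_loads(loads, threshold_w=2000)   # heater is shed; oven and fridge stay
```

In the paper this decision is distributed: each agent negotiates for the shared power budget instead of a central function ranking all loads at once.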

6 citations


Book ChapterDOI
12 Sep 2007
TL;DR: An online procedure for configuring the parameters of a hierarchical radial basis functions (HRBF) network is presented, and results show that the online-trained algorithm compares well with the batch version.
Abstract: An online procedure for configuring the parameters of a hierarchical radial basis functions (HRBF) network is presented here. The proposed procedure has been implemented and applied to a problem of real-time surface reconstruction. Results show that the algorithm trained online compares well with the batch version.
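The HRBF configuration procedure itself is not reproduced in the abstract. The general flavor of online versus batch training can be shown with a single (non-hierarchical) RBF layer trained sample-by-sample with an LMS update on a 1-D toy "surface"; every parameter below is an illustrative choice, not the paper's:

```python
import numpy as np

def rbf_features(x, centers, sigma):
    # Gaussian activations of each unit for scalar inputs x of shape (n,).
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 10)   # one fixed grid layer of a would-be hierarchy
sigma, lr = 0.1, 0.5
w = np.zeros(10)

# Online (LMS) training: each incoming sample updates the weights once,
# in contrast to batch fitting over the whole data set at the end.
for _ in range(5000):
    x = rng.random(1)
    y = np.sin(2 * np.pi * x[0])      # stand-in for the scanned surface height
    phi = rbf_features(x, centers, sigma)[0]
    w += lr * (y - phi @ w) * phi     # gradient step on the squared error

xs = np.linspace(0.0, 1.0, 50)
rmse = np.sqrt(np.mean((rbf_features(xs, centers, sigma) @ w
                        - np.sin(2 * np.pi * xs)) ** 2))
```

The hierarchical part of HRBF stacks such layers at progressively finer scales; this sketch shows only why an online update can approach the batch fit as samples stream in.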

6 citations


Proceedings ArticleDOI
25 Jun 2007
TL;DR: The authors propose a graphical user interface for search engines based on the geomorphologic metaphor that makes the user aware of the semantic distribution of the Web sites retrieved by the search engine.
Abstract: The aim of this work is to present a new human-computer interface for Web search engines. Despite the noteworthy improvements introduced in Web search engines, their human interface surprisingly still remains based on a sorted textual list. The position of a site in this list expresses its distance from the user's query. The authors propose a graphical user interface for search engines based on the geomorphologic metaphor. This interface makes the user aware of the semantic distribution of the Web sites retrieved by the search engine. The proposed interface is implemented as a browser plug-in and is able to work with all modern search engines.

4 citations


Proceedings ArticleDOI
27 Jun 2007
TL;DR: An ANN-based residential load classification component for use in a DSM system is described, with the aim of preventing cut-offs and scheduling loads in a prioritized mode.
Abstract: Demand-side management (DSM) systems have become common in both industrial and home applications. Basically, these systems help customers use electricity more efficiently. Commercial DSM systems are based on the knowledge of the instantaneous load power request and make their choices using a priority table. These approaches embed low-level intelligence, hence they can guarantee only coarse results. In this paper an ANN-based residential load classification component for use in a DSM system is described. The aim of the DSM is to prevent cut-offs from happening and to schedule loads in a prioritized mode. By means of an associative memory, each socket tap is capable of identifying the connected load from a table of "known devices". Any misclassification that may arise during the guessing phase is specifically handled by a new training phase. The time the system spends responding to the wrong classification and reacting to it is generally shorter than the time required by the provider's meter to detect the exceeding of the power limit.
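The associative memory is not detailed in the abstract; a minimal stand-in for the guessing phase is nearest-neighbour matching of power-draw signatures. The appliances, signatures, and readings below are invented for illustration:

```python
import numpy as np

# Toy "table of known devices": each appliance is stored as a short
# power-draw signature (watts over three samples); numbers are invented.
known = {
    "kettle": np.array([2000.0, 1950.0, 2010.0]),
    "tv":     np.array([120.0, 118.0, 121.0]),
    "fridge": np.array([150.0, 0.0, 150.0]),
}

def classify(signature, memory):
    # Nearest stored signature wins -- the socket tap's guessing phase.
    return min(memory, key=lambda name: np.linalg.norm(memory[name] - signature))

reading = np.array([1980.0, 1940.0, 2000.0])   # noisy measurement at a socket
label = classify(reading, known)               # matches the kettle signature
```

In the paper's scheme, a wrong guess triggers a new training phase, i.e. the memory is updated with the corrected signature rather than left as-is.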

4 citations


Patent
29 Mar 2007
TL;DR: In this article, a method (100) is described for generating security information associated with an entity to be recorded, comprising: carrying out biometric readings on said entity to acquire a first (I1) and a second (I2) detection information, separate from each other; processing (F1, PP1; F2, PP2, ECE) the information, obtaining a first (cS1) and a second (S4) value having respectively associated the biometric content of said information; and applying a function of cryptographic type with at least two operands (RF1) to the first and second values, obtaining a combination value from which a security label of the entity is generated.
Abstract: A method (100) is described for generating security information associated with an entity to be recorded, comprising: carrying out biometric readings on said entity to be recorded to acquire a first (I1) and a second (I2) detection information, separate from each other; processing (F1, PP1; F2, PP2, ECE) the information, obtaining a first (cS1) and a second (S4) value having respectively associated the biometric content of said information; applying a function of cryptographic type with at least two operands (RF1) to the first and second value (cS1, S4), obtaining a combination value (S5) from which (IF1) a security label (ID) of said entity to be recorded is generated.

4 citations


Proceedings ArticleDOI
29 Oct 2007
TL;DR: A modified HRBF model is proposed which uses the geometric error as a measure of reconstruction accuracy, since for computer graphics applications the geometric distance is a more suitable error metric.
Abstract: The hierarchical radial basis function (HRBF) network is a neural model that has proved its ability in the surface reconstruction problem. The algebraic error is used to drive the HRBF configuration procedure and to evaluate the reconstruction ability of the network. While for function approximation the algebraic distance is the appropriate error metric, for computer graphics applications, such as model reconstruction by 3D scanning, the geometric distance is a more suitable error metric. In this paper, we propose a modified HRBF model which makes use of the geometric error as a measure of the reconstruction accuracy.
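The difference between the two error metrics is easy to see on a closed-form example of our own (not the paper's): for a point and an implicit circle x^2 + y^2 - r^2 = 0, the algebraic residual is inflated by a radius-dependent factor, while the geometric error is the true Euclidean distance.

```python
import numpy as np

def algebraic_error(p, r):
    # Residual of the implicit circle equation f(x, y) = x^2 + y^2 - r^2.
    return abs(p[0] ** 2 + p[1] ** 2 - r ** 2)

def geometric_error(p, r):
    # True Euclidean distance from p to the circle of radius r.
    return abs(np.hypot(p[0], p[1]) - r)

p = (3.0, 4.0)              # a point 5 units from the origin
small, large = 1.0, 4.0     # two candidate radii
# Geometric distances: 4 and 1. Algebraic residuals: 24 and 9.
# The algebraic metric weights the same point differently depending on
# the model's scale, which is what the modified HRBF avoids.
```

For the circle the inflation factor is exactly (‖p‖ + r), since |‖p‖² − r²| = |‖p‖ − r|·(‖p‖ + r); for a scanned surface the analogous effect distorts which residuals drive refinement.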

3 citations


Proceedings ArticleDOI
25 Jun 2007
TL;DR: The aim of this paper is to highlight the behavior of the MAS while it integrates data from multiple information sources, and to present an innovative method for Web-based information source integration.
Abstract: An information system supporting environmental applications must be reliable, scalable, and able to acquire and integrate data from many monitoring stations distributed across different places. This paper proposes a system that integrates the data acquired by a distributed network of sensors for air quality monitoring. The monitoring system is built on a well-tested Multi-Agent System (MAS) architecture based on functional layering. The aim of this paper is to highlight the behavior of the MAS while it integrates data from multiple information sources and to present an innovative method for Web-based information source integration.

3 citations


Journal ArticleDOI
TL;DR: This paper presents a packet scheduling algorithm that provides the capability of scheduling grouped packets over multiple cryptographic accelerators; high-level simulations of the algorithm have been performed, and the results for a one-accelerator and for a two-accelerator system are shown.
Abstract: IPSec is a suite of protocols that adds security to communications at the IP level. Protocols within the IPSec suite make extensive use of cryptographic algorithms. Since these algorithms are computationally very intensive, some hardware acceleration is needed to support high throughput. IPSec accelerator performance may heavily depend on the dimension of the packets to be processed. In fact, when packets are small, the time needed to transfer data and to set up the accelerators may exceed the time needed to process (e.g. to encrypt) the packets in software. In this paper we present a packet scheduling algorithm that tackles this problem. Packets belonging to the same Security Association are grouped before the transfer to the accelerators; thus, the transfer and initialization times have a lower influence on the total processing time of the packets. This algorithm also provides the capability of scheduling grouped packets over multiple cryptographic accelerators. High-level simulations of the scheduling algorithm have been performed, and the results for a one-accelerator and for a two-accelerator system are also shown in this paper.
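The scheduling algorithm's details are not given in the abstract. A minimal sketch of the grouping idea, with invented packet and batch structures, could look like this:

```python
from collections import defaultdict
from itertools import cycle

def schedule(packets, n_accelerators, batch_size):
    """Group packets by Security Association, then deal SA-homogeneous
    batches to accelerators round-robin, so the per-transfer setup cost
    is paid once per batch instead of once per packet."""
    by_sa = defaultdict(list)
    for pkt in packets:
        by_sa[pkt["sa"]].append(pkt)
    queues = [[] for _ in range(n_accelerators)]
    next_acc = cycle(range(n_accelerators))
    for sa, group in by_sa.items():
        for i in range(0, len(group), batch_size):
            queues[next(next_acc)].append((sa, group[i:i + batch_size]))
    return queues

packets = [{"sa": i % 2, "len": 64} for i in range(8)]   # two SAs, small packets
queues = schedule(packets, n_accelerators=2, batch_size=4)
```

Keeping each batch homogeneous in its SA matters because the accelerator's cryptographic context (keys, IVs) is set up per SA, so a mixed batch would reintroduce the setup cost the grouping is meant to amortize.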

Proceedings Article
01 Jan 2007
TL;DR: An architecture for query units for the Security Policy Database and the Security Association Database is discussed; different query methods for the two databases are proposed and compared through simulation.
Abstract: IPSec is a suite of protocols that adds security to communications at the IP level. Protocols within IPSec make extensive use of two databases, namely the Security Policy Database (SPD) and the Security Association Database (SAD). The ability to query the SPD quickly is fundamental, as this operation needs to be done for each incoming or outgoing IP packet, even if no IPSec processing needs to be applied to it. This may easily result in millions of queries per second in gigabit networks. Since the databases may contain several thousand records on large secure gateways, a dedicated hardware solution is needed to support high throughput. In this paper we discuss an architecture for these query units, propose different query methods for the two databases, and compare them through simulation. Two different versions of the architecture are presented: the basic version is then modified to support multithreading. As shown by the simulations, this technique is very effective in this case. The architecture that supports multithreading allows for 11 million queries per second in the best case.
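The paper's hardware query unit is not described here. As a software analogue, the SAD side of the lookup can be sketched as a hash table keyed on a commonly used inbound SA selector triple (SPI, destination address, security protocol); the SPI values and parameters below are invented:

```python
# Hypothetical software analogue of a hardware SAD query unit: a hash
# table keyed on the (SPI, destination address, security protocol) triple.
sad = {}

def add_sa(spi, dst, proto, params):
    sad[(spi, dst, proto)] = params

def lookup(spi, dst, proto):
    # O(1) expected time -- unlike the SPD, whose entries are ordered
    # policies and in general require a prioritized search.
    return sad.get((spi, dst, proto))

add_sa(0x1001, "10.0.0.1", "ESP", {"cipher": "AES-CBC", "key_id": 7})
hit = lookup(0x1001, "10.0.0.1", "ESP")    # known SA: returns its parameters
miss = lookup(0x2002, "10.0.0.1", "ESP")   # unknown SPI: returns None
```

This exact-match structure is why the paper can treat the SAD and SPD with different query methods: the SAD lends itself to hashing, while SPD policy matching does not.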

Book ChapterDOI
28 Jul 2007
TL;DR: IPSec is a suite of protocols that adds security to communications at the IP level; protocols within IPSec make extensive use of two databases, namely the Security Policy Database (SPD) and the Security Association Database (SAD).
Abstract: IPSec is a suite of protocols that adds security to communications at the IP level. Protocols within IPSec make extensive use of two databases, namely the Security Policy Database (SPD) and the Security Association Database (SAD). The ability to query the SPD quickly is fundamental, as this operation needs to be done for each incoming or outgoing IP packet, even if no IPSec processing needs to be applied to it. This may easily result in millions of queries per second in gigabit networks.