Author

Luca Fanucci

Bio: Luca Fanucci is an academic researcher from the University of Pisa. The author has contributed to research in topics: Digital signal processing & Interface (computing). The author has an h-index of 20 and has co-authored 246 publications receiving 1,519 citations.


Papers
Journal ArticleDOI
TL;DR: Low-error approximations of the sigmoid function and the hyperbolic tangent, which are widely used as artificial-neuron activation functions, are proposed based on the piecewise-linear method, showing better results than the state of the art.
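
The paper's segment boundaries and coefficients are not reproduced in the summary above, so the following is only a minimal Python sketch of the piecewise-linear idea: a single-segment ("hard") variant of each activation needs just a multiply, an add, and a clamp. A multi-segment fit of the kind studied in the paper drives the error far lower.

    import numpy as np

    def hard_sigmoid(x):
        # One-segment PWL sigmoid: a line of slope 1/4 through (0, 0.5),
        # clipped to [0, 1]; its worst-case error is roughly 0.12.
        return np.clip(0.25 * np.asarray(x) + 0.5, 0.0, 1.0)

    def hard_tanh(x):
        # One-segment PWL hyperbolic tangent: identity clipped to [-1, 1].
        return np.clip(np.asarray(x), -1.0, 1.0)

    x = np.linspace(-6.0, 6.0, 1201)
    err = np.max(np.abs(hard_sigmoid(x) - 1.0 / (1.0 + np.exp(-x))))
    print(f"max |error| of the one-segment sigmoid: {err:.3f}")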

53 citations

Proceedings ArticleDOI
26 Dec 2007
TL;DR: The logic synthesis on 65 nm CMOS technology with a low-power standard-cell library shows that the proposed design is suitable for portable devices, the throughput ranging from 180 to 410 Mbps and the power consumption staying below 235 mW.
Abstract: This paper describes a scalable IP of a decoder for LDPC codes compliant with IEEE 802.11n and running the well-known layered decoding algorithm. The decoder architecture is arranged in clusters of serial processing units, which are configurable to process all the codes in the standard and, at the same time, to support multiple-frame decoding. An optimization methodology for the iteration latency is also described, which relates to the order of the messages updated by the processors as well as to the sequence of layers the decoder goes through. The logic synthesis on 65 nm CMOS technology with a low-power standard-cell library shows that the proposed design is suitable for portable devices, the throughput ranging from 180 to 410 Mbps and the power consumption staying below 235 mW.
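
The IP itself is a hardware design; purely as a behavioral reference, a layered schedule can be sketched in software. The normalized min-sum check-node update and the normalization constant below are common choices assumed here, not details taken from the paper.

    import numpy as np

    def layered_minsum(H, llr, iters=10, alpha=0.75):
        # H: dense 0/1 parity-check matrix; llr: channel LLRs;
        # alpha: hypothetical min-sum normalization constant.
        M = H.shape[0]
        rows = [np.flatnonzero(H[m]) for m in range(M)]
        R = [np.zeros(len(r)) for r in rows]  # stored check-to-variable messages
        lam = llr.astype(float).copy()        # running a-posteriori LLRs
        for _ in range(iters):
            for m in range(M):                # each row acts as one layer
                n = rows[m]
                q = lam[n] - R[m]             # variable-to-check messages
                sgn = np.prod(np.sign(q)) * np.sign(q)  # extrinsic signs
                a = np.abs(q)
                srt = np.sort(a)
                ext_min = np.where(a == srt[0], srt[1], srt[0])
                R[m] = alpha * sgn * ext_min
                lam[n] = q + R[m]             # posterior updated within the layer
            hard = (lam < 0).astype(int)
            if not np.any((H @ hard) % 2):    # stop early if all checks hold
                break
        return hard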

41 citations

Proceedings ArticleDOI
09 Mar 2012
TL;DR: A double-stage Kalman filter for sensor fusion in a 9D IMU is presented: gyro data are first used to estimate the angular position, then the first stage corrects the roll and pitch angles using accelerometer data and the second stage processes magnetic compass data to correct the yaw angle.
Abstract: This work presents an orientation tracking system based on a double-stage Kalman filter for sensor fusion in a 9D IMU. The IMU is composed of a 3D gyro, a 3D accelerometer and a magnetic compass. The filter was divided into two stages to reduce algorithm complexity. Gyro data are first used to estimate the angular position; the first stage then corrects the roll and pitch angles using accelerometer data, and the second stage processes magnetic compass data to correct the yaw angle. One advantage of this kind of filter is that a magnetic anomaly does not influence roll and pitch estimation accuracy. The filter is also flexible: if the magnetic compass is not available, the second stage can simply be switched off. In this work an ASIP was designed to execute the filter algorithm, and a proof of concept on FPGA was successfully realized. In the future the ASIP will be integrated within the logic of a new 6D sensor that could optionally be interfaced with an external magnetic compass.
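
The paper's two stages are Kalman filters running on an ASIP; purely to illustrate the data flow (gyro prediction, accelerometer roll/pitch correction, magnetometer yaw correction), the sketch below substitutes fixed complementary-filter gains k1 and k2 for the Kalman gains. All gains and axis conventions here are assumptions, not the paper's design.

    import numpy as np

    def two_stage_update(roll, pitch, yaw, gyro, acc, mag, dt, k1=0.02, k2=0.02):
        # Prediction: integrate gyro rates (small-angle approximation).
        roll, pitch, yaw = roll + gyro[0] * dt, pitch + gyro[1] * dt, yaw + gyro[2] * dt
        # Stage 1: correct roll/pitch from the measured gravity direction.
        roll += k1 * (np.arctan2(acc[1], acc[2]) - roll)
        pitch += k1 * (np.arctan2(-acc[0], np.hypot(acc[1], acc[2])) - pitch)
        # Stage 2: correct yaw from the tilt-compensated magnetometer;
        # skipping this stage leaves roll/pitch untouched, as the paper notes.
        mx = mag[0] * np.cos(pitch) + mag[2] * np.sin(pitch)
        my = (mag[0] * np.sin(roll) * np.sin(pitch) + mag[1] * np.cos(roll)
              - mag[2] * np.sin(roll) * np.cos(pitch))
        yaw += k2 * (np.arctan2(-my, mx) - yaw)   # angle wrap-around ignored for brevity
        return roll, pitch, yaw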

34 citations

Proceedings ArticleDOI
28 Oct 2010
TL;DR: This paper describes the first complete design of a single-core multi-standard flexible Turbo/LDPC decoder using an ASIC approach and provides a proof-of-concept implementation compliant with the 3GPP-HSDPA, DVB-SH, IEEE 802.16e and IEEE 802.11n standards.
Abstract: This paper describes the first complete design of a single-core multi-standard flexible Turbo/LDPC decoder using an ASIC approach. Such a solution outperforms other state-of-the-art implementations based on application-specific instruction-set processors (ASIPs), which are shown to suffer from impaired throughput and power consumption. In this paper, we describe in detail the flexible VLSI architecture of a decoder coping with all the modern communication standards defining LDPC and Turbo codes, and provide a proof-of-concept implementation compliant with the 3GPP-HSDPA, DVB-SH, IEEE 802.16e and IEEE 802.11n standards. The decoder, running at only 150 MHz to keep power low, occupies an area of 0.9 mm² with a maximum power consumption of only 86.1 mW.

33 citations

Journal ArticleDOI
TL;DR: An efficient architecture for a reconfigurable multi-size circular shifting network, used to circularly shift an array of arbitrary size, is described.
Abstract: The need to circularly shift an array of data is a distinguishing feature of decoders for structured low-density parity-check (LDPC) codes, and it results from an efficient trade-off between error-rate performance and the parallelization of the processing, i.e. the throughput. Since the decoder must typically cope with blocks of data of different sizes, an efficient architecture for a reconfigurable multi-size circular shifting network, used to circularly shift an array of arbitrary size, is described.
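
Behaviorally, the network performs the rotation sketched below; the second function shows the logarithmic barrel structure that makes rotation cheap when the array length is a power of two, and hence why supporting an arbitrary sub-array size p is the non-trivial part. Both functions are illustrative models, not the proposed architecture.

    def circular_shift(data, s, p):
        # Rotate the first p elements left by s positions; the tail is untouched.
        s %= p
        return data[s:p] + data[:s] + data[p:]

    def barrel_rotate(data, s):
        # Log-stage rotation for power-of-two lengths: stage i conditionally
        # rotates by 2**i, so any shift needs only log2(n) mux stages.
        n, stage = len(data), 0
        while (1 << stage) < n:
            if (s >> stage) & 1:
                k = 1 << stage
                data = data[k:] + data[:k]
            stage += 1
        return data

    assert circular_shift(list(range(8)), 3, 5) == [3, 4, 0, 1, 2, 5, 6, 7]
    assert barrel_rotate(list(range(8)), 3) == [3, 4, 5, 6, 7, 0, 1, 2]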

30 citations


Cited by
Proceedings Article
01 Jan 1991
TL;DR: It is concluded that properly augmented and power-controlled multiple-cell CDMA (code division multiple access) promises a quantum increase in current cellular capacity.
Abstract: It is shown that, particularly for terrestrial cellular telephony, the interference-suppression feature of CDMA (code division multiple access) can result in a many-fold increase in capacity over analog and even over competing digital techniques. A single-cell system, such as a hubbed satellite network, is addressed, and the basic expression for capacity is developed. The corresponding expressions for a multiple-cell system are derived, and the distribution of the number of users supportable per cell is determined. It is concluded that properly augmented and power-controlled multiple-cell CDMA promises a quantum increase in current cellular capacity.
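
The "basic expression for capacity" in the single-cell case can be sketched as follows; this is the standard back-of-the-envelope form of the argument, not a quotation from the paper. With N perfectly power-controlled users received at equal power S, spreading bandwidth W, data rate R, and thermal noise neglected, each user sees N - 1 equal-power interferers, so

    \frac{E_b}{N_0} \approx \frac{S/R}{(N-1)\,S/W} = \frac{W/R}{N-1}
    \qquad\Longrightarrow\qquad
    N \approx 1 + \frac{W/R}{E_b/N_0}

Voice-activity gating and antenna sectorization scale this figure up, while other-cell interference scales it down; the paper derives the exact multi-cell distributions.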

2,951 citations

Journal ArticleDOI
TL;DR: This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.
Abstract: Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.

2,380 citations

Journal ArticleDOI
TL;DR: In this article, the authors proposed hybrid architectures based on switching networks to reduce the complexity and the power consumption of the structures based on phase shifters and defined a power consumption model and used it to evaluate the energy efficiency of both structures.
Abstract: Hybrid analog/digital multiple-input multiple-output architectures were recently proposed as an alternative to fully digital precoding in millimeter wave wireless communication systems. This is motivated by the possible reduction in the number of RF chains and analog-to-digital converters. In these architectures, the analog processing network is usually based on variable phase shifters. In this paper, we propose hybrid architectures based on switching networks to reduce the complexity and the power consumption of the structures based on phase shifters. We define a power consumption model and use it to evaluate the energy efficiency of both structures. To estimate the complete MIMO channel, we propose an open-loop compressive channel estimation technique that is independent of the hardware used in the analog processing stage. We analyze the performance of the new estimation algorithm for hybrid architectures based on phase shifters and switches. Using the estimate, we develop two algorithms for the design of the hybrid combiner based on switches and analyze the achieved spectral efficiency. Finally, we study the tradeoffs between power consumption, hardware complexity, and spectral efficiency for hybrid architectures based on phase shifting networks and switching networks. Numerical results show that architectures based on switches obtain channel estimation performance equal or superior to that obtained using phase shifters, while reducing hardware complexity and power consumption. For equal power consumption, all the hybrid architectures provide similar spectral efficiencies.
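
As a toy version of the kind of power model the authors define (every constant below is a hypothetical placeholder, not a value from the paper): a fully connected phase-shifter network needs one shifter per RF-chain/antenna pair, whereas a switching network needs only one antenna-selection switch per RF chain.

    def hybrid_power_w(n_rf, n_ant, p_rf=0.04, p_adc=0.2, p_ps=0.03, p_sw=0.005):
        # Cost common to both architectures: RF chains plus I/Q ADC pairs.
        common = n_rf * (p_rf + 2 * p_adc)
        return {
            "phase-shifter network": common + n_rf * n_ant * p_ps,
            "switching network": common + n_rf * p_sw,
        }

    for arch, p in hybrid_power_w(n_rf=4, n_ant=64).items():
        print(f"{arch}: {p:.2f} W")  # energy efficiency = rate / this power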

632 citations

Journal Article
TL;DR: DES is based on a block cipher ("Lucifer") developed by Horst Feistel at IBM; its effective key length of only 56 bits has become a security risk, and in 1998 a special-purpose machine built by the Electronic Frontier Foundation (EFF) with 1,800 custom crypto-processors working in parallel recovered a DES key in record time.
Abstract: In 1977 the "Data Encryption Algorithm" (DEA) was declared the American encryption standard for federal agencies by the "National Bureau of Standards" (NBS, later the "National Institute of Standards and Technology", NIST) [NBS_77]. In 1981 the DEA specification was adopted as the ANSI standard "DES" [ANSI_81]. The recommendation of DES as the standard encryption method was limited to five years and was extended for further five-year periods in 1983, 1988 and 1993. A revised version of the NIST standard is currently available [NIST_99], in which DES is to remain provisionally approved for another five years but the use of Triple-DES is recommended: a threefold application of DES with three different keys (effective key length: 168 bits) [NIST_99]. DES is based on a block cipher ("Lucifer") with a key length of 128 bits, developed by Horst Feistel at IBM. Because the American "National Security Agency" (NSA) ensured that DES received a key length of only 64 bits, of which only 56 bits are relevant, as well as special substitution boxes (the "cryptographic core" of the method) whose design criteria the NSA did not publish, the method was controversial from the start. Critics suspected a secret "trapdoor" in the algorithm that would allow the NSA to decrypt traffic online even without knowledge of the key. Although this suspicion could not be substantiated, both the growth in computing power and the parallelization of search algorithms have made a key length of 56 bits a security risk today. Most recently, in 1998, a special-purpose machine developed by the "Electronic Frontier Foundation" (EFF), with 1,800 custom crypto-processors working in parallel, recovered a DES key in a record time of 2.5 days. To find a successor to DES, NIST announced the search for an "Advanced Encryption Standard" (AES) on January 2, 1997. The goal of this initiative is to find, in close cooperation with research and industry, a symmetric encryption method suitable for effectively encrypting American government data well into the 21st century. To this end an official "Call for Algorithms" was issued on September 12, 1997. The proposed symmetric encryption algorithms had to meet the following requirements: unclassified and published, available royalty-free worldwide, efficiently implementable in hardware and software, and block ciphers with a block length of 128 bits supporting key lengths of 128, 192 and 256 bits. At the first "AES Candidate Conference" (AES1) on August 20, 1998, NIST published a list of 15 proposed algorithms and invited the expert community to analyze them. The results were presented at the second "AES Candidate Conference" (March 22-23, 1999 in Rome, AES2) and discussed among international cryptologists. The comment period ended on April 15, 1999. On the basis of the comments and analyses received, NIST selected five candidates, which it announced publicly on August 9, 1999: MARS (IBM), RC6 (RSA Lab.), Rijndael (Daemen, Rijmen), Serpent (Anderson, Biham, Knudsen), Twofish (Schneier, Kelsey, Whiting, Wagner, Hall, Ferguson).

624 citations

Posted Content
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

570 citations