Author

Evangelos Eleftheriou

Other affiliations: GlobalFoundries, Hitachi, King's College London
Bio: Evangelos Eleftheriou is an academic researcher from IBM. The author has contributed to research in topics: Phase-change memory & Spiking neural network. The author has an h-index of 55 and has co-authored 380 publications receiving 15,777 citations. Previous affiliations of Evangelos Eleftheriou include GlobalFoundries, Hitachi, and King's College London.


Papers
Journal ArticleDOI
TL;DR: Simulation results show that the PEG algorithm is a powerful algorithm to generate good short-block-length LDPC codes.
Abstract: We propose a general method for constructing Tanner graphs having a large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time encodeable LDPC codes. Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q>2) are investigated. Simulation results show that the PEG algorithm is a powerful algorithm to generate good short-block-length LDPC codes.
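
For concreteness, here is a minimal Python sketch of the edge-by-edge construction the abstract describes: each symbol node is connected one edge at a time to a check node that is as far away as possible in the current Tanner graph. Function and variable names are illustrative, and tie-breaking and degree handling are simplified relative to the published algorithm.

```python
def peg_construct(n_sym, n_chk, sym_degrees):
    """Progressive edge-growth (sketch): add edges symbol by symbol,
    each time attaching to a check node that is farthest from the
    symbol in the current Tanner graph, which maximizes local girth."""
    chk_deg = [0] * n_chk                      # current check-node degrees
    sym_nbrs = [set() for _ in range(n_sym)]   # symbol -> its check nodes
    chk_nbrs = [set() for _ in range(n_chk)]   # check  -> its symbol nodes

    for s in range(n_sym):
        for k in range(sym_degrees[s]):
            if k == 0:
                # First edge: pick a check node of lowest current degree.
                c = min(range(n_chk), key=lambda j: chk_deg[j])
            else:
                # BFS from s; stop when no new checks appear or the next
                # level would cover all checks.
                reached = set(sym_nbrs[s])
                frontier = set(reached)
                while True:
                    nxt = set()
                    for cj in frontier:
                        for s2 in chk_nbrs[cj]:
                            nxt |= sym_nbrs[s2]
                    nxt -= reached
                    if not nxt or len(reached) + len(nxt) == n_chk:
                        break
                    reached |= nxt
                    frontier = nxt
                # Farthest (or unreachable) checks; among them, lowest degree.
                candidates = [j for j in range(n_chk) if j not in reached]
                c = min(candidates, key=lambda j: chk_deg[j])
            sym_nbrs[s].add(c)
            chk_nbrs[c].add(s)
            chk_deg[c] += 1
    return sym_nbrs

# Example: a tiny regular graph, 8 symbol nodes of degree 2, 4 check nodes.
print(peg_construct(8, 4, [2] * 8))
```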

1,507 citations

Journal ArticleDOI
TL;DR: This paper presents an overview of nanopositioning technologies and devices emphasizing the key role of advanced control techniques in improving precision, accuracy, and speed of operation of these systems.
Abstract: Nanotechnology is the science of understanding matter and the control of matter at dimensions of 100 nm or less. Encompassing nanoscale science, engineering, and technology, nanotechnology involves imaging, measuring, modeling, and manipulation of matter at this level of precision. An important aspect of research in nanotechnology involves precision control and manipulation of devices and materials at a nanoscale, i.e., nanopositioning. Nanopositioners are precision mechatronic systems designed to move objects over a small range with a resolution down to a fraction of an atomic diameter. The desired attributes of a nanopositioner are extremely high resolution, accuracy, stability, and fast response. The key to successful nanopositioning is accurate position sensing and feedback control of the motion. This paper presents an overview of nanopositioning technologies and devices emphasizing the key role of advanced control techniques in improving precision, accuracy, and speed of operation of these systems.
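
As a toy illustration of the closed-loop idea in the last two sentences (sense the position, feed the error back to the actuator), here is a minimal discrete PI tracking loop in Python. The stage model, gains, and function names are invented for illustration and do not come from the paper.

```python
def stage_step(y, u, g=1.0, tau=0.01, dt=1e-4):
    """Toy first-order stage model: dy/dt = (-y + g*u) / tau."""
    return y + dt * (-y + g * u) / tau

def pi_track(ref, n_steps=2000, kp=2.0, ki=400.0, dt=1e-4):
    """Minimal PI position loop: sense the stage position, form the error,
    and drive the actuator until the stage settles on the reference."""
    y, integ = 0.0, 0.0
    for _ in range(n_steps):
        e = ref - y               # error from the position sensor
        integ += e * dt           # integral action removes steady-state error
        u = kp * e + ki * integ   # control effort to the actuator
        y = stage_step(y, u, dt=dt)
    return y                      # should be close to ref after settling

print(pi_track(1.0))  # approaches 1.0
```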

1,027 citations

Journal ArticleDOI
TL;DR: The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
Abstract: Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
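
A minimal Python sketch of the simplified check-node update described above, showing the min-sum approximation with either a normalization factor or an additive offset. Names and default parameter values are illustrative; in the paper the optimum values of these parameters are determined via density evolution.

```python
def check_node_update(llrs, alpha=0.8, beta=0.0):
    """Min-sum check-node update (sketch). For each edge, combine the signs
    of all other incoming LLRs and take the minimum of their magnitudes,
    then correct the overestimate with a normalization factor alpha
    (normalized min-sum) or an additive offset beta (offset min-sum).
    Exact BP would instead use 2*atanh(prod(tanh(x/2)))."""
    out = []
    for i in range(len(llrs)):
        others = [x for j, x in enumerate(llrs) if j != i]  # extrinsic only
        sign = 1.0
        for x in others:
            sign = -sign if x < 0 else sign
        mag = min(abs(x) for x in others)
        out.append(sign * max(alpha * mag - beta, 0.0))
    return out

# Extrinsic messages on the three edges of a degree-3 check node:
print(check_node_update([1.2, -0.4, 3.0]))
```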

989 citations

Journal ArticleDOI
TL;DR: This Review provides an overview of memory devices and the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
Abstract: Traditional von Neumann computing systems involve separate processing and memory units. However, data movement is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from the traditional systems, and one such non-von Neumann computational approach is in-memory computing, in which certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
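
As a rough illustration of the resistance-based primitive mentioned above, the sketch below emulates an analog matrix-vector multiply in a crossbar, where conductances encode the matrix and Ohm's and Kirchhoff's laws produce the result in place. This is a simplified numerical model, not any specific device from the Review.

```python
import numpy as np

def crossbar_mvm(G, v, sigma=0.0, rng=None):
    """In-memory matrix-vector multiply (sketch): the matrix lives in the
    array as conductances G, inputs are applied as voltages v, and Ohm's
    law plus Kirchhoff's current law deliver the output currents i = G @ v
    in one step, without moving G to a processor. sigma models device-level
    conductance variability."""
    rng = rng if rng is not None else np.random.default_rng(0)
    G_eff = G + sigma * rng.standard_normal(G.shape)  # programming noise
    return G_eff @ v                                  # bit-line current sums

G = np.array([[1.0, 0.5], [0.2, 0.8]])
print(crossbar_mvm(G, np.array([0.3, -0.1]), sigma=0.01))
```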

841 citations

Journal ArticleDOI
TL;DR: This work shows that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device, and that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale.
Abstract: A nanoscale phase-change device can be used to create an artificial neuron that exhibits integrate-and-fire functionality with stochastic dynamics.
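
A minimal sketch of integrate-and-fire behavior with stochastic dynamics, in the spirit of the abstract: a state variable standing in for the device's phase configuration integrates inputs and fires across a noisy threshold. All names and parameter values here are illustrative, not the device physics of the paper.

```python
import random

def stochastic_if_neuron(inputs, threshold=1.0, leak=0.02, noise=0.05):
    """Integrate-and-fire with stochastic dynamics (sketch): a state
    variable (standing in for the device's phase configuration) integrates
    postsynaptic inputs, leaks slightly, and fires when it crosses a noisy
    threshold, after which it is reset."""
    v, spikes = 0.0, []
    for x in inputs:
        v = max(v * (1.0 - leak) + x, 0.0)             # temporal integration
        if v >= threshold + random.gauss(0.0, noise):  # threshold jitter
            spikes.append(1)
            v = 0.0                                    # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(stochastic_if_neuron([0.3, 0.3, 0.3, 0.3, 0.1, 0.6]))
```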

808 citations


Cited by
Proceedings Article
01 Jan 1991
TL;DR: It is concluded that properly augmented and power-controlled multiple-cell CDMA (code division multiple access) promises a quantum increase in current cellular capacity.
Abstract: It is shown that, particularly for terrestrial cellular telephony, the interference-suppression feature of CDMA (code division multiple access) can result in a many-fold increase in capacity over analog and even over competing digital techniques. A single-cell system, such as a hubbed satellite network, is addressed, and the basic expression for capacity is developed. The corresponding expressions for a multiple-cell system are derived, and the distribution on the number of users supportable per cell is determined. It is concluded that properly augmented and power-controlled multiple-cell CDMA promises a quantum increase in current cellular capacity.
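
As a back-of-the-envelope illustration of the kind of capacity expression the abstract refers to, the sketch below estimates users per cell from the processing gain, required Eb/N0, voice activity, sectorization, and other-cell interference. The functional form is the standard textbook one and the parameter values are illustrative, not the paper's exact derivation.

```python
def cdma_users_per_cell(W, R, ebno, v_act=0.4, sectors=3, f_other=0.55):
    """Rule-of-thumb CDMA capacity (sketch): users per cell grow with the
    processing gain W/R over the required Eb/N0, are boosted by voice
    activity gating (1/v_act) and antenna sectorization, and shrink with
    other-cell interference (1 + f_other)."""
    return int((W / R) / ebno / v_act * sectors / (1.0 + f_other))

# Example: 1.25 MHz spreading bandwidth, 9.6 kb/s vocoder, Eb/N0 ~ 6 dB (~4x)
print(cdma_users_per_cell(1.25e6, 9600, 4.0))  # ~157 users per cell
```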

2,951 citations

Journal ArticleDOI

2,415 citations

Dissertation
24 Apr 2002
TL;DR: Results show that remarkable energy and spectral efficiencies are achievable by combining concepts drawn from space-time coding, multiuser detection, array processing and iterative decoding.
Abstract: Space-time codes (STC) are a class of signaling techniques, offering coding and diversity gains along with improved spectral efficiency. These codes exploit both the spatial and the temporal diversity of the wireless link by combining the design of the error correction code, modulation scheme and array processing. STC are well suited for improving the downlink performance, which is the bottleneck in asymmetric applications such as downstream Internet. Three original contributions to the area of STC are presented in this dissertation. First, the development of analytic tools that determine the fundamental limits on the performance of STC in a variety of channel conditions. For trellis-type STC, transfer function based techniques are applied to derive performance bounds over Rayleigh, Rician and correlated fading environments. For block-type STC, an analytic framework that supports various complex orthogonal designs with arbitrary signal cardinalities and array configurations is developed. In the second part of the dissertation, the Virginia Tech Space-Time Advanced Radio (VT-STAR) is designed, introducing a multi-antenna hardware laboratory test bed, which facilitates characterization of the multiple-input multiple-output (MIMO) channel and validation of various space-time approaches. In the third part of the dissertation, two novel space-time architectures paired with iterative processing principles are proposed. The first scheme extends the suitability of STC to outdoor wireless communications by employing iterative equalization/decoding for time dispersive channels and the second scheme employs iterative interference cancellation/decoding to solve the error propagation problem of Bell-Labs Layered Space-Time Architecture (BLAST). Results show that remarkable energy and spectral efficiencies are achievable by combining concepts drawn from space-time coding, multiuser detection, array processing and iterative decoding.
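
For a concrete feel for the block-type STC built from complex orthogonal designs mentioned above, here is the simplest such design, the two-antenna Alamouti code, sketched in Python (the array layout convention here is illustrative):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti space-time block code (sketch): the simplest complex
    orthogonal design. Rows are time slots, columns are the two transmit
    antennas; orthogonality lets the receiver separate s1 and s2 with
    simple linear combining while reaping transmit diversity."""
    return np.array([[s1,           s2         ],
                     [-np.conj(s2), np.conj(s1)]])

# Two QPSK symbols transmitted over two antennas in two time slots:
print(alamouti_encode(1 + 1j, 1 - 1j))
```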

2,286 citations

Journal ArticleDOI
01 Aug 1997
TL;DR: This paper provides a comprehensive and detailed treatment of different beam-forming schemes, adaptive algorithms to adjust the required weighting on antennas, direction-of-arrival estimation methods-including their performance comparison-and effects of errors on the performance of an array system, as well as schemes to alleviate them.
Abstract: Array processing involves manipulation of signals induced on various antenna elements. Its capabilities of steering nulls to reduce cochannel interference and pointing independent beams toward various mobiles, as well as its ability to provide estimates of the directions of radiating sources, make it attractive to a mobile communications system designer. Array processing is expected to play an important role in fulfilling the increased demands of various mobile communications services. Part I of this paper showed how an array could be utilized in different configurations to improve the performance of mobile communications systems, with references to various studies where the feasibility of an array system for mobile communications is considered. This paper provides a comprehensive and detailed treatment of different beam-forming schemes, adaptive algorithms to adjust the required weighting on antennas, direction-of-arrival estimation methods (including their performance comparison), and the effects of errors on the performance of an array system, as well as schemes to alleviate them. This paper brings together almost all aspects of array signal processing.
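
To make the beam-steering idea concrete, here is a minimal Python sketch of conventional (delay-and-sum) beamforming for a uniform linear array, where the weights are the conjugate steering vector for the desired direction. Function names, array geometry, and default values are illustrative, not from the paper.

```python
import numpy as np

def ula_steering(n, d_over_lambda, theta_deg):
    """Steering vector of an n-element uniform linear array for a plane
    wave from angle theta (0 = broadside); d_over_lambda is the element
    spacing in wavelengths."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))

def delay_and_sum_weights(theta_deg, n=8, d_over_lambda=0.5):
    """Conventional (delay-and-sum) beamformer: conjugate-steering-vector
    weights point the main beam at theta and coherently combine the
    desired mobile's signal from all elements."""
    a = ula_steering(n, d_over_lambda, theta_deg)
    return np.conj(a) / n

# Beam response toward 20 degrees when steered to 20 degrees (should be ~1):
w = delay_and_sum_weights(20.0)
print(abs(w @ ula_steering(8, 0.5, 20.0)))
```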

2,169 citations
