Author
Johan Eilert
Bio: Johan Eilert is an academic researcher at Linköping University. His research focuses on MIMO and software-defined radio. He has an h-index of 13 and has co-authored 28 publications receiving 508 citations.
Papers
27 May 2007
TL;DR: A new method is proposed using programmable hardware units which not only achieves higher performance but also consumes less silicon area, and can be reused for many other operations such as complex matrix multiplication, filtering, correlation and FFT/IFFT.
Abstract: Complex matrix inversion is a very computationally demanding operation in advanced multi-antenna wireless communications. Traditionally, systolic array-based QR decomposition (QRD) is used to invert large matrices. However, the matrices involved in MIMO baseband processing in mobile handsets are generally small which means QRD is not necessarily efficient. In this paper, a new method is proposed using programmable hardware units which not only achieves higher performance but also consumes less silicon area. Furthermore, the hardware can be reused for many other operations such as complex matrix multiplication, filtering, correlation and FFT/IFFT.
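The abstract argues that full systolic QRD is overkill for the small matrices met in handset MIMO baseband. As a rough illustration of that small-matrix regime (not the paper's programmable-hardware method), the following NumPy sketch inverts a 4x4 complex matrix by 2x2 blockwise (Schur-complement) inversion:

```python
import numpy as np

def invert_4x4_blockwise(M):
    # Blockwise (Schur complement) inversion of a 4x4 complex matrix:
    # [[A, B], [C, D]]^{-1} is assembled from A^{-1} and S = D - C A^{-1} B.
    A, B = M[:2, :2], M[:2, 2:]
    C, D = M[2:, :2], M[2:, 2:]
    Ai = np.linalg.inv(A)
    Si = np.linalg.inv(D - C @ Ai @ B)        # inverse of the Schur complement
    top = np.hstack([Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si])
    bot = np.hstack([-Si @ C @ Ai, Si])
    return np.vstack([top, bot])

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = H @ H.conj().T + np.eye(4)      # Hermitian positive definite, hence invertible
Minv = invert_4x4_blockwise(M)
print(np.allclose(Minv @ M, np.eye(4)))   # round-trip recovers the identity
```

For a 4x4 matrix this needs only two 2x2 inversions plus a handful of 2x2 multiplies, which is the kind of workload that maps naturally onto a small reusable multiply unit rather than a systolic QRD array.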
78 citations
27 Apr 2011
TL;DR: A channel estimation ASIC that handles real-time channel estimation in 3GPP LTE terminals is presented; it boosts throughput at feasible silicon cost by adopting a recently proposed estimation method named Approximate Linear Minimum Mean Square Error (ALMMSE).
Abstract: In this paper, hardware implementation aspects of the channel estimator in 3GPP LTE terminals are investigated. A channel estimation ASIC, which handles the real-time channel estimation, is presented. Compared to traditional correlator-based channel estimators, the channel estimator presented boosts the throughput at feasible silicon cost by adopting a recently proposed estimation method named Approximate Linear Minimum Mean Square Error (ALMMSE). In this paper, both the architecture and VLSI implementation of the estimator are elaborated. Implemented using a 65nm CMOS process, the channel estimator supports the full 20MHz bandwidth of 3GPP LTE and consumes only 49 kgates.
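The paper's exact ALMMSE approximation is not reproduced here; as background, the sketch below shows the standard LMMSE smoothing of a least-squares channel estimate that such hardware designs approximate. The exponential subcarrier-correlation model is an assumption made purely for the demo:

```python
import numpy as np

def lmmse_estimate(h_ls, R, noise_var):
    # LMMSE smoothing: apply W = R (R + sigma^2 I)^{-1} to the LS estimate.
    n = R.shape[0]
    W = R @ np.linalg.inv(R + noise_var * np.eye(n))
    return W @ h_ls

n, noise_var, trials = 16, 0.1, 200
# Exponential correlation across subcarriers (assumed for this demo only)
R = np.array([[0.9 ** abs(i - j) for j in range(n)] for i in range(n)], dtype=complex)
L = np.linalg.cholesky(R)
rng = np.random.default_rng(1)
mse_ls = mse_lmmse = 0.0
for _ in range(trials):
    h = L @ (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h_ls = h + np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    mse_ls += np.mean(np.abs(h_ls - h) ** 2) / trials
    mse_lmmse += np.mean(np.abs(lmmse_estimate(h_ls, R, noise_var) - h) ** 2) / trials
print(mse_lmmse < mse_ls)   # smoothing reduces estimation error on average
```

The cost driver in hardware is the matrix inversion inside the filter; approximating it is what lets an implementation like the one above fit in tens of kilogates.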
70 citations
TL;DR: The LeoCore architecture is presented as an example of a baseband processor design aimed at reducing power and silicon cost while maintaining sufficient flexibility, together with a discussion of the challenges and solutions of radio baseband signal processing.
Abstract: A programmable radio baseband signal processor is one of the essential enablers of software-defined radio. As wireless standards evolve, the processing power needed for baseband processing increases dramatically, and the underlying hardware needs to cope with various standards or even maintain several radio links simultaneously. Meanwhile, the maximum power consumption allowed in mobile terminals is still strictly limited. These challenges require both system- and architecture-level innovations. This article introduces a design methodology for radio baseband processors, discussing the challenges and solutions of radio baseband signal processing. The LeoCore architecture is presented as an example of a baseband processor design aimed at reducing power and silicon cost while maintaining sufficient flexibility.
61 citations
12 May 2008
TL;DR: The detector is implemented on a programmable baseband processor aimed at software-defined radio (SDR); special algorithms can be adopted to reduce the computational latency of detection.
Abstract: This paper presents a linear minimum mean square error (LMMSE) symbol detector for MIMO-OFDM enabled mobile terminals. The detector is implemented on a programmable baseband processor aimed at software-defined radio (SDR). Owing to the dynamic range supplied by the floating-point SIMD datapath, special algorithms can be adopted to reduce the computational latency of detection. The programmable solution not only supports different transmit/receive antenna configurations, but also allows hardware multiplexing to obtain silicon and power efficiency. Compared to several existing fixed-function solutions, the one proposed in this paper is smaller, more flexible and faster.
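As background for the detector described above, the textbook LMMSE detection step (a generic formulation, not the paper's SIMD-optimized implementation) can be sketched in NumPy:

```python
import numpy as np

def lmmse_detect(y, H, noise_var):
    # x_hat = (H^H H + sigma^2 I)^{-1} H^H y  (textbook LMMSE / regularized ZF)
    G = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    return np.linalg.solve(G, H.conj().T @ y)

rng = np.random.default_rng(2)
nt = 4
H = (rng.standard_normal((nt, nt)) + 1j * rng.standard_normal((nt, nt))) / np.sqrt(2)
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), nt)  # QPSK
y = H @ x                                  # noiseless, so detection should be exact
x_hat = lmmse_detect(y, H, 1e-9)
print(np.allclose(x_hat, x, atol=1e-3))    # the detector recovers the symbols
```

The Gram matrix G and its inversion are where a floating-point datapath pays off: the dynamic range removes the scaling gymnastics a fixed-point implementation would need.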
42 citations
01 Jan 2004
TL;DR: A DSP processor based on 16-bit floating-point intermediate storage is designed and implemented; it is capable of decoding all MP3 bit streams at 20 MHz, as demonstrated on an FPGA prototype.
Abstract: The purpose of our work has been to evaluate the practicality of using a 16-bit floating point representation to store the intermediate sample values and other data in memory during the decoding of MP3 bit streams. A floating point number representation offers a better trade-off between dynamic range and precision than a fixed point representation for a given word length. Using a floating point representation means that smaller memories can be used which leads to smaller chip area and lower power consumption without reducing sound quality. We have designed and implemented a DSP processor based on 16-bit floating point intermediate storage. The DSP processor is capable of decoding all MP3 bit streams at 20 MHz and this has been demonstrated on an FPGA prototype.
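The dynamic-range versus precision trade-off the paper exploits can be illustrated with NumPy's half-precision type (IEEE 754 binary16; the paper's own 16-bit format may differ in detail):

```python
import numpy as np

# float16 spans roughly 6e-5 .. 65504 with a 10-bit mantissa, so the relative
# error stays bounded across widely scaled sample values.
samples = np.array([1.0e-3, 0.75, 412.0])
as_fp16 = samples.astype(np.float16).astype(np.float64)
rel_err = np.abs(as_fp16 - samples) / samples
print(np.all(rel_err < 1e-3))   # bounded relative error across the range

# A Q1.15 fixed-point format covers only [-1, 1) with a uniform 2**-15 step,
# so a large intermediate value simply saturates.
q15 = np.clip(np.round(samples * 2**15), -2**15, 2**15 - 1) / 2**15
print(q15[2])                   # 412.0 saturates to just below 1.0
```

This is why a wider fixed-point word (and hence more memory) would be needed to match the quality of 16-bit floating-point intermediate storage.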
34 citations
Cited by
01 Jan 2007
TL;DR: In this paper, the authors provide updates to IEEE 802.16's MIB for the MAC, PHY and associated management procedures in order to accommodate recent extensions to the standard.
Abstract: This document provides updates to IEEE Std 802.16's MIB for the MAC, PHY and associated management procedures in order to accommodate recent extensions to the standard.
1,481 citations
TL;DR: Some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespread adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services are outlined.
Abstract: Recent advances in hardware, software, and communication technologies are enabling the design and implementation of a whole range of different types of networks that are being deployed in various environments. One such network that has received a lot of interest in the last couple of years is the Vehicular Ad-Hoc Network (VANET). VANET has become an active area of research, standardization, and development because it has tremendous potential to improve vehicle and road safety, traffic efficiency, and convenience as well as comfort to both drivers and passengers. Recent research efforts have placed a strong emphasis on novel VANET design architectures and implementations. A lot of VANET research work has focused on specific areas including routing, broadcasting, Quality of Service (QoS), and security. We survey some of the recent research results in these areas. We present a review of wireless access standards for VANETs, and describe some of the recent VANET trials and deployments in the US, Japan, and the European Union. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespread adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services.
1,132 citations
TL;DR: This work proposes a new approximate matrix inversion algorithm relying on a Neumann series expansion, which substantially reduces the complexity of linear data detection in single-carrier frequency-division multiple access (SC-FDMA)-based large-scale MIMO systems.
Abstract: Large-scale (or massive) multiple-input multiple-output (MIMO) is expected to be one of the key technologies in next-generation multi-user cellular systems based on the upcoming 3GPP LTE Release 12 standard, for example. In this work, we propose, to the best of our knowledge, the first VLSI design enabling high-throughput data detection in single-carrier frequency-division multiple access (SC-FDMA)-based large-scale MIMO systems. We propose a new approximate matrix inversion algorithm relying on a Neumann series expansion, which substantially reduces the complexity of linear data detection. We analyze the associated error, and we compare its performance and complexity to those of an exact linear detector. We present corresponding VLSI architectures, which perform exact and approximate soft-output detection for large-scale MIMO systems with various antenna/user configurations. Reference implementation results for a Xilinx Virtex-7 XC7VX980T FPGA show that our designs are able to achieve more than 600 Mb/s for a 128-antenna, 8-user 3GPP LTE-based large-scale MIMO system. We finally provide a performance/complexity trade-off comparison using the presented FPGA designs, which reveals that the detector circuit of choice is determined by the ratio between BS antennas and users, as well as the desired error-rate performance.
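The Neumann-series idea can be sketched in a few lines of NumPy (an illustrative reimplementation, not the paper's VLSI datapath):

```python
import numpy as np

def neumann_inverse(A, terms):
    # A^{-1} ≈ sum_{k=0}^{terms-1} (I - D^{-1} A)^k D^{-1}, with D = diag(A).
    # The series converges when A is strongly diagonally dominant, as the
    # Gram matrix H^H H is when there are many more BS antennas than users.
    Dinv = np.diag(1.0 / np.diag(A))
    E = np.eye(A.shape[0]) - Dinv @ A
    approx, term = Dinv, Dinv
    for _ in range(terms - 1):
        term = E @ term
        approx = approx + term
    return approx

rng = np.random.default_rng(3)
N, K = 128, 8                                # many more BS antennas than users
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
A = H.conj().T @ H + 0.1 * np.eye(K)         # regularized Gram matrix
err = lambda t: np.linalg.norm(neumann_inverse(A, t) @ A - np.eye(K))
print(err(4) < err(2))                       # the residual shrinks with extra terms
```

Each extra term costs only matrix multiplications, which map far better onto a pipelined FPGA datapath than an exact inversion does; the trade-off between series length and error rate is exactly the knob the paper's comparison turns.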
363 citations
TL;DR: This paper proposes a low-complexity minimum mean-squared error (MMSE) based parallel interference cancellation algorithm, develops a suitable VLSI architecture, and presents a corresponding four-stream 1.5 mm2 detector chip in 90 nm CMOS technology, which is the first ASIC implementation of a SISO detector for iterative MIMO decoding.
Abstract: Multiple-input multiple-output (MIMO) technology is the key to meet the demands for data rate and link reliability of modern wireless communication systems, such as IEEE 802.11n or 3GPP-LTE. The full potential of MIMO systems can, however, only be achieved by means of iterative MIMO decoding relying on soft-input soft-output (SISO) data detection. In this paper, we describe the first ASIC implementation of a SISO detector for iterative MIMO decoding. To this end, we propose a low-complexity minimum mean-squared error (MMSE) based parallel interference cancellation algorithm, develop a suitable VLSI architecture, and present a corresponding four-stream 1.5 mm2 detector chip in 90 nm CMOS technology. The fabricated ASIC includes all necessary preprocessing circuitry and exceeds the 600 Mb/s peak data-rate of IEEE 802.11n. A comparison with state-of-the-art MIMO-detector implementations demonstrates the performance benefits of our ASIC prototype in practical system-scenarios.
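A much-simplified sketch of the parallel interference cancellation idea follows (hard priors and per-stream combining; not the paper's soft-input soft-output MMSE-PIC algorithm):

```python
import numpy as np

def pic_pass(y, H, x_soft, noise_var):
    # For each stream: subtract the prior estimates of all *other* streams
    # from y, then apply a per-stream regularized matched-filter combiner.
    K = H.shape[1]
    x_new = np.empty(K, dtype=complex)
    for i in range(K):
        others = [j for j in range(K) if j != i]
        y_i = y - H[:, others] @ x_soft[others]        # interference cancellation
        h = H[:, i]
        x_new[i] = (h.conj() @ y_i) / (h.conj() @ h + noise_var)
    return x_new

rng = np.random.default_rng(4)
K = 4
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
x = np.sign(rng.standard_normal(K)).astype(complex)    # BPSK symbols
y = H @ x                                              # noiseless, to keep the demo deterministic
x_hat = pic_pass(y, H, x, noise_var=1e-6)              # true symbols as priors
print(np.allclose(x_hat, x, atol=1e-4))                # perfect cancellation is a fixed point
```

In the iterative receiver, the priors come from the channel decoder's soft outputs instead of the true symbols, and the per-stream combiner produces soft outputs fed back to the decoder; the pass above only shows the cancellation mechanics.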
297 citations
TL;DR: This study explains how link and system level simulations are connected, shows how the link level simulator serves as a reference for designing the system level simulator, and compares the accuracy of the PHY modeling at system level by means of simulations performed both with bit-accurate link level simulations and PHY-model-based system level simulations.
Abstract: In this article, we introduce MATLAB-based link and system level simulation environments for UMTS Long-Term Evolution (LTE). The source codes of both simulators are available under an academic non-commercial use license, allowing researchers full access to standard-compliant simulation environments. Owing to the open source availability, the simulators enable reproducible research in wireless communications and comparison of novel algorithms. In this study, we explain how link and system level simulations are connected and show how the link level simulator serves as a reference to design the system level simulator. We compare the accuracy of the PHY modeling at system level by means of simulations performed both with bit-accurate link level simulations and PHY-model-based system level simulations. We highlight some of the currently most interesting research questions for LTE, and explain by some research examples how our simulators can be applied.
292 citations