Author

Chadi Jabbour

Other affiliations: ParisTech, Institut Mines-Télécom, Télécom ParisTech
Bio: Chadi Jabbour is an academic researcher from Université Paris-Saclay. The author has contributed to research on topics including Delta-sigma modulation and Bandwidth (signal processing), has an h-index of 8, and has co-authored 49 publications receiving 280 citations. Previous affiliations of Chadi Jabbour include ParisTech and Institut Mines-Télécom.


Papers
Journal ArticleDOI
TL;DR: A new all-digital calibration technique suppressing the timing mismatch effect in time-interleaved analog-to-digital converters (TIADCs) for input at any Nyquist band (NB) using the equivalent polyphase structure of the TIADC is proposed.
Abstract: This brief proposes a new all-digital calibration technique that suppresses the timing mismatch effect in time-interleaved analog-to-digital converters (TIADCs) for an input in any Nyquist band (NB), using the equivalent polyphase structure of the TIADC. The correction technique is simple and does not require adaptive digital synthesis filters. The timing mismatch is estimated with an adaptive stochastic gradient descent technique, which is a promising solution for TIADCs operating at very high sampling rates. The digital circuit of the proposed calibration algorithm is designed and synthesized in a 28-nm fully depleted silicon-on-insulator (FD-SOI) CMOS technology for an 11-b, 60-dB SNR TIADC clocked at 2.7 GHz with the input in the first four NBs. The designed circuit occupies an area of 0.05 mm² and dissipates a total power of 41 mW.
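
As a rough illustration of the estimation principle described above, the sketch below runs a stochastic-gradient (LMS-style) loop that recovers per-channel timing skews in a four-channel TIADC model and corrects them with a first-order Taylor term. It is a simplified foreground toy: the known sine test tone, the analytical derivative, the channel count, skew values and step size are illustrative assumptions, not the paper's background estimator built on the TIADC's polyphase structure.

```python
# Toy foreground sketch of LMS timing-skew estimation for a 4-channel TIADC.
# Hypothetical parameters; a known sine test tone and an analytical derivative
# stand in for the paper's background estimator and digital derivative filter.
import numpy as np

M = 4                                             # number of interleaved channels
f0 = 0.11                                         # test tone, normalized to the aggregate rate
dt_true = np.array([0.0, 0.021, -0.017, 0.012])   # true skews (fractions of a sample period)
dt_hat = np.zeros(M)                              # skew estimates
mu = 0.05                                         # LMS step size

x  = lambda t: np.sin(2 * np.pi * f0 * t)                    # input tone
dx = lambda t: 2 * np.pi * f0 * np.cos(2 * np.pi * f0 * t)   # its derivative (per sample)

for n in range(20000):
    m = n % M                           # channel taking this sample
    y = x(n + dt_true[m])               # skewed sample produced by channel m
    d = dx(n)                           # derivative at the ideal sampling instant
    y_corr = y - dt_hat[m] * d          # first-order Taylor correction
    e = y_corr - x(n)                   # error against the known reference tone
    dt_hat[m] += mu * e * d             # stochastic gradient descent update

print("true skews     :", dt_true)
print("estimated skews:", np.round(dt_hat, 4))
```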

60 citations

Journal ArticleDOI
TL;DR: A low-power, fully digital clock-skew feedforward background calibration technique for sub-sampling Time-Interleaved Analog-to-Digital Converters (TIADCs) that can be implemented at a moderate hardware cost with low power dissipation.
Abstract: This paper presents a low-power, fully digital clock-skew feedforward background calibration technique for sub-sampling Time-Interleaved Analog-to-Digital Converters (TIADCs). Both the estimation and correction algorithms share a common derivative filter, which makes it possible to reduce the chip area. Furthermore, these algorithms use the polyphase filtering technique and do not rely on adaptive digital synthesis filters, so the proposed calibration can be implemented at a moderate hardware cost with low power dissipation. The adopted feedforward approach eliminates the stability issues encountered with adaptive techniques. The Hardware Description Language (HDL) design of the proposed calibration is synthesized in a 28-nm FD-SOI process for a 60-dB SNR TIADC clocked at 2.7 GHz. The calibration is designed for both baseband and sub-sampling TIADC applications. For sub-sampling TIADCs with the input in the first four Nyquist bands, the synthesized calibration system occupies 0.04 mm² of area and dissipates a total power of 33.2 mW. For baseband TIADC applications, it occupies 0.02 mm² and consumes 15.5 mW.
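
The sketch below illustrates only the correction path of such a scheme: a single FIR differentiator is applied to the interleaved output and each channel's sample receives a first-order skew correction. The windowed-sinc differentiator design, the skew values (assumed here to be perfectly estimated) and the test tone are illustrative assumptions; the paper's shared-filter polyphase implementation and its feedforward estimator are not reproduced.

```python
# Sketch of the derivative-based correction path: one FIR differentiator shared
# by all channels, followed by a first-order skew correction per sample.
# Differentiator design and skew values are illustrative assumptions.
import numpy as np

def fir_differentiator(num_taps=31):
    """Windowed ideal differentiator: h[n] = (-1)^n / n for n != 0, h[0] = 0."""
    n = np.arange(num_taps) - (num_taps - 1) // 2
    alt = np.where(n % 2 == 0, 1.0, -1.0)
    h = np.where(n == 0, 0.0, alt / np.where(n == 0, 1, n))
    return h * np.hamming(num_taps)

M, f0 = 4, 0.11
dt = np.array([0.0, 0.021, -0.017, 0.012])     # channel skews, assumed perfectly estimated
n = np.arange(4096)
y = np.sin(2 * np.pi * f0 * (n + dt[n % M]))   # interleaved TIADC output with skew errors

dy = np.convolve(y, fir_differentiator(), mode="same")   # shared derivative filter
y_corr = y - dt[n % M] * dy                              # first-order skew correction

ideal = np.sin(2 * np.pi * f0 * n)
core = slice(64, -64)                                    # ignore filter edge effects
rms = lambda s: np.sqrt(np.mean(s[core] ** 2))
print("rms error before correction:", rms(y - ideal))
print("rms error after correction :", rms(y_corr - ideal))
```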

55 citations

Proceedings ArticleDOI
22 Jun 2014
TL;DR: A fully digital calibration of timing mismatch for undersampling Time Interleaved Analog-to-Digital Converter employed in Software Defined Radio (SDR) receivers using an ideal differentiator filter, a Hilbert transform filter and a scaling factor to compute the derivative of the input in any Nyquist Band.
Abstract: This paper proposes a fully digital calibration of timing mismatch for undersampling Time-Interleaved Analog-to-Digital Converters (TI-ADCs) employed in Software Defined Radio (SDR) receivers. The proposed calibration scheme employs an ideal differentiator filter, a Hilbert transform filter and a scaling factor to compute the derivative of the input in any Nyquist Band (NB). The efficiency of the proposed technique is shown using a four-channel undersampling 60-dB SNR TI-ADC clocked at 2.7 GHz. Monte Carlo simulations show SNDR and SFDR improvements of 18 dB and 21 dB, respectively, over the first three NBs.
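
The derivative reconstruction can be illustrated for a single undersampled tone: an ideal differentiator, a Hilbert transform and a Nyquist-band-dependent scaling factor are combined to recover the continuous-time derivative. The FFT-based ideal filters and the particular scale formula used below are assumptions derived for this single-tone toy; the paper uses FIR approximations of the two filters.

```python
# Reconstructing the continuous-time derivative of an undersampled tone from an
# ideal differentiator, a Hilbert transform and a band-dependent scaling factor.
# FFT-based ideal filters and the scale formula (derived here for one tone) are
# illustrative assumptions.
import numpy as np

fs, N = 1.0, 4096
f_alias = 533 / N                 # aliased frequency (whole number of cycles in N samples)
k = 3                             # Nyquist band of the input (1 = baseband)
# Continuous-time input frequency that folds onto f_alias from band k:
f_in = (k // 2) * fs + f_alias if k % 2 == 1 else (k // 2) * fs - f_alias

n = np.arange(N)
x = np.cos(2 * np.pi * f_in * n / fs)                               # sampled (aliased) signal
true_deriv = -2 * np.pi * f_in * np.sin(2 * np.pi * f_in * n / fs)  # d/dt, sampled

w = 2 * np.pi * np.fft.fftfreq(N)                 # digital frequencies in rad/sample
X = np.fft.fft(x)
d_x = np.real(np.fft.ifft(1j * w * X))            # ideal digital differentiator output
h_x = np.real(np.fft.ifft(-1j * np.sign(w) * X))  # Hilbert transform of x

scale = 2 * np.pi * (k // 2) * (1 if k % 2 == 0 else -1)   # band-dependent factor (0 in NB 1)
deriv = fs * (d_x + scale * h_x)                           # reconstructed derivative

print("max reconstruction error:", np.max(np.abs(deriv - true_deriv)))
```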

29 citations

Proceedings ArticleDOI
24 May 2015
TL;DR: A new Least Mean Square (LMS) based detection scheme is proposed to increase the convergence speed and enhance the estimation accuracy of the TI-ADC gain and timing mismatches.
Abstract: This paper presents a practical implementation of an all-digital calibration algorithm for the gain and timing mismatches in an undersampling Time-Interleaved Analog-to-Digital Converter (TI-ADC). A new Least Mean Square (LMS) based detection scheme is proposed to increase the convergence speed as well as to enhance the estimation accuracy. Monte Carlo simulations for a four-channel undersampling 60-dB SNR TI-ADC clocked at 2.7 GHz show that the SFDR reaches approximately 90 dB at the stable point of the channel mismatch coefficients over the first three Nyquist Bands. The proposed architecture is implemented and validated on the Altera FPGA DE4 board. The synthesized design consumes a few percent of the hardware resources of the FPGA chip and works properly in a Hardware-In-the-Loop emulation framework.
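
A compact toy of the joint detection idea, extending the timing-only sketch earlier on this page: gain and timing estimates are updated by separate LMS terms driven by the same error sample, against a known sine reference. The channel model, parameters, step sizes and the foreground (known-reference) setting are illustrative assumptions, not the FPGA implementation described in the paper.

```python
# Toy joint LMS detection of gain and timing mismatch (foreground, known sine
# reference). Channel model assumed here: y = (1 + g_m) * x(n + d_m).
import numpy as np

M, f0 = 4, 0.11
g_true = np.array([0.0, 0.012, -0.008, 0.015])    # per-channel gain errors
d_true = np.array([0.0, 0.021, -0.017, 0.012])    # per-channel skews (sample periods)
g_hat, d_hat = np.zeros(M), np.zeros(M)
mu_g, mu_d = 0.01, 0.05                           # separate LMS step sizes

x  = lambda t: np.sin(2 * np.pi * f0 * t)
dx = lambda t: 2 * np.pi * f0 * np.cos(2 * np.pi * f0 * t)

for n in range(40000):
    m = n % M
    y = (1.0 + g_true[m]) * x(n + d_true[m])        # mismatched channel sample
    ref, der = x(n), dx(n)                          # reference tone and its derivative
    y_corr = y * (1.0 - g_hat[m]) - d_hat[m] * der  # first-order gain/skew correction
    e = y_corr - ref                                # shared error sample
    g_hat[m] += mu_g * e * ref                      # LMS update for the gain error
    d_hat[m] += mu_d * e * der                      # LMS update for the timing skew

print("gain errors:", np.round(g_hat, 4), "(true:", g_true, ")")
print("skews      :", np.round(d_hat, 4), "(true:", d_true, ")")
```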

20 citations

Journal ArticleDOI
TL;DR: A novel solution based on mimicking the AAF frequency response in the digital domain is proposed, and two other techniques are developed to improve the modeling and the correction of the memory effect using, respectively, a bank of filters and a two-step architecture.
Abstract: Digital post-distortion is becoming an increasingly attractive solution to compensate for the non-linearities of RF receivers implemented in deep submicron CMOS technologies. A very promising technique which provides flexibility and robustness is the frequency spreading based digital post-distortion. This paper focuses on the analysis of this approach for wideband receivers and its main limitations, such as the anti-alias filter (AAF) impact. As a matter of fact, distortions filtered by the AAF can degrade the correction performance. To overcome this problem, a novel solution based on mimicking the AAF frequency response in the digital domain is proposed. Aspects regarding the estimation and the algorithm convergence are also studied: the noise effect and the choice of the free frequency band. Furthermore, two other techniques are developed to improve the modeling and the correction of the memory effect using, respectively, a bank of filters and a two-step architecture. The proposed techniques are demonstrated by simulations and measurements on a multi-channel receiver suited for DVB-T applications. The receiver is based on a 1-GHz bandwidth RF front-end followed by a 2.7-GHz 13-bit analog-to-digital converter.
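
To see why mimicking the AAF matters, the toy below passes a two-tone signal through a cubic non-linearity and a lowpass AAF, then compares a correction that subtracts an unfiltered cubic replica with one that first shapes the replica by a digital copy of the AAF. The filter, the tone frequencies and the assumption that the third-order coefficient a3 is known are illustrative; the paper estimates the model adaptively from a free frequency band.

```python
# Toy: third-order products falling in the AAF transition/stop band are
# attenuated before the ADC, so subtracting an unfiltered cubic replica
# overcorrects them, while a replica shaped by a digital copy of the AAF does not.
# Filter, tones and the known coefficient a3 are illustrative assumptions.
import numpy as np

def lowpass(num_taps, fc):
    """Hamming-windowed sinc lowpass, cutoff fc in cycles/sample."""
    n = np.arange(num_taps) - (num_taps - 1) // 2
    return 2 * fc * np.sinc(2 * fc * n) * np.hamming(num_taps)

N, a3 = 2 ** 15, 0.01
n = np.arange(N)
x = np.cos(2 * np.pi * 0.08 * n) + np.cos(2 * np.pi * 0.09 * n)  # two-tone input
v = x + a3 * x ** 3                      # third-order front-end non-linearity
h = lowpass(129, 0.15)                   # AAF: passes the tones, rejects 3rd harmonics
D = (len(h) - 1) // 2                    # AAF group delay in samples

filt  = lambda s: np.convolve(s, h)[:N]                 # causal AAF / digital replica
delay = lambda s: np.concatenate([np.zeros(D), s[:N - D]])

y       = filt(v)                        # what the ADC digitizes
desired = filt(x)                        # distortion-free reference

naive = y - a3 * y ** 3                  # replica not shaped by the AAF
mimic = delay(y) - a3 * filt(y ** 3)     # replica shaped by a digital copy of the AAF

core = slice(2 * D, N - 2 * D)           # skip filter transients
rms = lambda s: np.sqrt(np.mean(s[core] ** 2))
print("residual, uncorrected :", rms(y - desired))
print("residual, naive       :", rms(naive - desired))
print("residual, AAF mimicked:", rms(mimic - delay(desired)))
```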

16 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain.
Abstract: In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.

903 citations

Journal ArticleDOI
01 Oct 2019
TL;DR: This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to address key problems in IoT wireless communications with an emphasis on its ad hoc networking aspect.
Abstract: The Internet of Things (IoT) is expected to require more effective and efficient wireless communications than ever before. For this reason, techniques such as spectrum sharing, dynamic spectrum access, extraction of signal intelligence and optimized routing will soon become essential components of the IoT wireless communication paradigm. In this vision, IoT devices must be able to not only learn to autonomously extract spectrum knowledge on-the-fly from the network but also leverage such knowledge to dynamically change appropriate wireless parameters ( e.g. , frequency band, symbol modulation, coding rate, route selection, etc.) to reach the network’s optimal operating point. Given that the majority of the IoT will be composed of tiny, mobile, and energy-constrained devices, traditional techniques based on a priori network optimization may not be suitable, since (i) an accurate model of the environment may not be readily available in practical scenarios; (ii) the computational requirements of traditional optimization techniques may prove unbearable for IoT devices. To address the above challenges, much research has been devoted to exploring the use of machine learning to address problems in the IoT wireless communications domain. The reason behind machine learning’s popularity is that it provides a general framework to solve very complex problems where a model of the phenomenon being learned is too complex to derive or too dynamic to be summarized in mathematical terms. This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to address key problems in IoT wireless communications with an emphasis on its ad hoc networking aspect. First, we present extensive background notions of machine learning techniques. Then, by adopting a bottom-up approach, we examine existing work on machine learning for the IoT at the physical, data-link and network layer of the protocol stack. Thereafter, we discuss directions taken by the community towards hardware implementation to ensure the feasibility of these techniques. Additionally, before concluding, we also provide a brief discussion of the application of machine learning in IoT beyond wireless communication. Finally, each of these discussions is accompanied by a detailed analysis of the related open problems and challenges.

194 citations

Posted Content
TL;DR: In this article, the authors provide a thorough overview on using a class of advanced machine learning techniques, namely Deep Learning (DL), to facilitate the analytics and learning in the IoT domain.
Abstract: In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely Deep Learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.

182 citations

Journal ArticleDOI
TL;DR: A new all-digital calibration technique suppressing the timing mismatch effect in time-interleaved analog-to-digital converters (TIADCs) for input at any Nyquist band (NB) using the equivalent polyphase structure of the TIADC is proposed.
Abstract: This brief proposes a new all-digital calibration technique that suppresses the timing mismatch effect in time-interleaved analog-to-digital converters (TIADCs) for an input in any Nyquist band (NB), using the equivalent polyphase structure of the TIADC. The correction technique is simple and does not require adaptive digital synthesis filters. The timing mismatch is estimated with an adaptive stochastic gradient descent technique, which is a promising solution for TIADCs operating at very high sampling rates. The digital circuit of the proposed calibration algorithm is designed and synthesized in a 28-nm fully depleted silicon-on-insulator (FD-SOI) CMOS technology for an 11-b, 60-dB SNR TIADC clocked at 2.7 GHz with the input in the first four NBs. The designed circuit occupies an area of 0.05 mm² and dissipates a total power of 41 mW.

60 citations

Journal ArticleDOI
TL;DR: In this article, the authors discuss power amplifier efficiency for varying-envelope signals and define the peak-to-average power ratio (PAPR) as the difference between the peak power and the average power of the input signal.
Abstract: Power efficiency is one of the most important parameters in designing communication systems, especially battery-operated mobile terminals. In a typical transceiver, most of the power is dissipated in the power amplifier (PA), and consequently it is very important to obtain the maximum efficiency from the PA. A PA operating in Class AB or B is at its maximum efficiency when it is driven by its maximum allowable input power [1]. In practice, the input signal of the PA usually has a varying envelope, and to avoid distortion the PA should not be driven beyond its maximum input saturation power. Unfortunately, this peak power of the input signal occurs only during very short periods, and most of the time the signal power is around its average power, which is much smaller than its peak power; this means that the PA often works at much lower efficiencies than its maximum efficiency. This power difference is defined as the peak-to-average power ratio (PAPR) of the signal. For example, for a signal with 12 dB PAPR, a Class B PA would be driven with 12 dB power back-off from its peak input power, and at this back-off the efficiency of the PA degrades from 78.5% to around 20% [1]. Unfortunately, moving to high-throughput modulation schemes, for example quadrature amplitude modulations (QAMs) such as 16-QAM and 64-QAM, means that more envelope variation is needed to encode the information and, consequently, that lower efficiency is achieved.
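
The numbers quoted above can be reproduced with a short calculation: the snippet below measures the PAPR of a toy multicarrier 64-QAM signal and evaluates the ideal Class B efficiency at that amount of back-off (78.5% at peak drive, scaled by the output-voltage ratio). The waveform is an illustrative example, not a signal used in the cited article.

```python
# Worked example of the PAPR / Class B efficiency figures in the abstract.
# The OFDM-style 64-QAM waveform is an illustrative toy.
import numpy as np

rng = np.random.default_rng(0)
levels = np.arange(-7, 8, 2)                          # 64-QAM: 8 levels per axis
sym = rng.choice(levels, 256) + 1j * rng.choice(levels, 256)
x = np.fft.ifft(sym)                                  # one OFDM-style multicarrier symbol

papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
papr_db = 10 * np.log10(papr)

eta_peak = np.pi / 4                                  # ideal Class B efficiency at full drive
eta_backoff = eta_peak * 10 ** (-papr_db / 20)        # efficiency at papr_db of power back-off

print(f"PAPR              : {papr_db:.1f} dB")
print(f"Class B efficiency: {100 * eta_peak:.1f}% at peak, "
      f"{100 * eta_backoff:.1f}% at {papr_db:.1f} dB back-off")
# e.g. 0.785 * 10**(-12/20) is roughly 0.20, i.e. about 20% at 12 dB back-off,
# matching the figures quoted in the abstract.
```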

59 citations