SciSpace (formerly Typeset)
Author

Tobias A. Eriksson

Bio: Tobias A. Eriksson is an academic researcher from the National Institute of Information and Communications Technology. The author has contributed to research on the topics of Transmission (telecommunications) and Quadrature amplitude modulation. The author has an h-index of 20 and has co-authored 111 publications receiving 1,770 citations. Previous affiliations of Tobias A. Eriksson include Chalmers University of Technology and the Royal Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: In this article, an end-to-end deep learning-based optimization of optical fiber communication systems is proposed to achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold.
Abstract: In this paper, we implement an optical fiber communication system as an end-to-end deep neural network, including the complete chain of transmitter, channel model, and receiver. This approach enables the optimization of the transceiver in a single end-to-end process. We illustrate the benefits of this method by applying it to intensity modulation/direct detection (IM/DD) systems and show that we can achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold. We model all components of the transmitter and receiver, as well as the fiber channel, and apply deep learning to find transmitter and receiver configurations that minimize the symbol error rate. We propose and verify in simulations a training method that yields robust and flexible transceivers, allowing reliable transmission, without reconfiguration, over a large range of link dispersions. The results from end-to-end deep learning are verified experimentally for the first time: in particular, we achieve information rates of 42 Gb/s below the HD-FEC threshold at distances beyond 40 km. These results outperform conventional IM/DD solutions based on two- and four-level pulse amplitude modulation with feedforward equalization at the receiver. Our study is a first step toward end-to-end deep-learning-based optimization of optical fiber communication systems.
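The end-to-end idea can be illustrated with a heavily hedged toy sketch: the paper's actual system is an IM/DD link with a dispersive fiber model optimized by deep learning, whereas the numpy snippet below jointly optimizes a transmitter constellation and a minimum-distance receiver over a plain AWGN surrogate channel using a simple gradient-free search. All parameters (M, sigma, the search step) are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, sigma, batch = 8, 0.3, 4000  # messages, noise std, symbols per evaluation

def ser(points):
    # normalize to unit average power, send random messages through AWGN,
    # detect with a nearest-neighbour (minimum-distance) receiver
    points = points / np.sqrt(np.mean(np.sum(points**2, axis=1)))
    m = rng.integers(0, M, size=batch)
    y = points[m] + sigma * rng.normal(size=(batch, 2))
    d2 = np.sum((y[:, None, :] - points[None, :, :])**2, axis=2)
    return np.mean(np.argmin(d2, axis=1) != m)

points = rng.normal(size=(M, 2))   # random initial constellation
best = ser(points)
for _ in range(300):               # gradient-free end-to-end "training" loop
    cand = points + 0.05 * rng.normal(size=points.shape)
    s = ser(cand)
    if s <= best:                  # keep perturbations that lower the SER
        points, best = cand, s
print(f"symbol error rate after end-to-end optimization: {best:.4f}")
```

Swapping the AWGN line for a dispersive IM/DD channel model and the random search for stochastic gradient descent over neural-network transmitter and receiver mappings recovers the structure of the paper's approach.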

274 citations

Journal ArticleDOI
TL;DR: In this article, a low-complexity coherent receiver is presented that improves spectral efficiency in prefiltered WDM systems by combining receiver-side partial-response equalization with maximum-likelihood sequence detection.
Abstract: A novel low-complexity coherent receiver solution is presented to improve spectral efficiency in wavelength-division multiplexing (WDM) systems. It is based on receiver-side partial-response equalization and maximum-likelihood sequence detection (MLSD) in prefiltered WDM systems. The partial-response equalization shapes the channel into an intermediate state with a known partial response, which is finally recovered by MLSD without the need for channel estimation. In this scheme, the severe intersymbol interference induced by the prefiltering can be shared between the partial-response equalization and the MLSD, so a tradeoff can be made between complexity and performance. The feasibility of receiver-side partial-response shaping relaxes the requirements on transmitter-side prefiltering, allowing it to be implemented with mature WDM components. In addition, the partial-response equalization (shaping) structure is improved over our prior art, which further simplifies the overall scheme. For near-baud-rate-spacing optically prefiltered WDM systems, the duobinary response is experimentally shown to be a good intermediate response to shape toward. Owing to the short memory of the duobinary response, the complexity of a receiver based on duobinary shaping remains low. Overall, the proposed scheme provides a good alternative to Nyquist-WDM at comparable spectral efficiencies. Using the proposed receiver-side duobinary shaping technique, three sets of experiments were carried out to verify the improved scheme and demonstrate its main features: 5 × 112-Gb/s polarization-multiplexed quadrature phase-shift keying (PM-QPSK) WDM transmission over a 25-GHz grid, a single-channel 40-Gbaud PM-QPSK experiment, and 30-GHz-spaced 3 × 224-Gb/s PM 16-ary quadrature amplitude modulation transmission.
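As a minimal sketch of the detection idea, assume an ideal real-valued duobinary (1 + D) response with additive noise in place of a prefiltered WDM channel; because the duobinary memory is a single symbol, MLSD reduces to a 2-state Viterbi search. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 2000, 0.4
states = [-1.0, 1.0]                               # state = previous symbol

a = rng.choice(states, size=N)                     # transmitted symbols
prev = np.concatenate(([1.0], a[:-1]))             # known +1 start symbol
r = a + prev + sigma * rng.normal(size=N)          # duobinary (1 + D) + noise

# Viterbi MLSD over the 2-state trellis
metric = np.array([np.inf, 0.0])                   # start in state +1
back = np.zeros((N, 2), dtype=int)
for k in range(N):
    new = np.full(2, np.inf)
    for s_new, a_new in enumerate(states):         # hypothesis for a[k]
        for s_old, a_old in enumerate(states):     # hypothesis for a[k-1]
            m = metric[s_old] + (r[k] - (a_new + a_old))**2
            if m < new[s_new]:
                new[s_new], back[k, s_new] = m, s_old
    metric = new

s = int(np.argmin(metric))                         # traceback of best path
ahat = np.empty(N)
for k in range(N - 1, -1, -1):
    ahat[k] = states[s]
    s = back[k, s]
ber = np.mean(ahat != a)
print(f"BER after duobinary MLSD: {ber:.4f}")
```

Longer partial responses enlarge the trellis exponentially in the memory length, which is exactly the complexity/performance tradeoff the abstract describes and why the short-memory duobinary response keeps the receiver cheap.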

139 citations

Journal ArticleDOI
TL;DR: Experimental verification of QKD co-propagating with a large number of wavelength division multiplexing (WDM) coherent data channels is presented, demonstrating more than a factor of 10 increase in the number of WDM channels and more than 90 times higher classical bitrate.
Abstract: Quantum key distribution (QKD) can offer communication with unconditional security and is a promising technology to protect next-generation communication systems. For QKD to see commercial success, several key challenges have to be solved, such as integrating QKD signals into existing fiber-optical networks. In this paper, we present experimental verification of QKD co-propagating with a large number of wavelength division multiplexing (WDM) coherent data channels. We show successful secret key generation over 24 h for a continuous-variable QKD channel jointly transmitted with 100 WDM channels of erbium-doped-fiber-amplified polarization-multiplexed 16-ary quadrature amplitude modulation signals, amounting to a data rate of 18.3 Tbit/s. Compared to previous co-propagation results in the C-band, we demonstrate more than a factor-of-10 increase in the number of WDM channels and a more than 90 times higher classical bitrate, showing co-propagation with Tbit/s data-carrying channels. The security of communications networks is a fundamental challenge of the current era, particularly with the move towards quantum communications. The authors perform joint transmission of quantum key distribution and up to 100 classical communication channels in the same fiber and report an average secret key rate of 27.2 kbit/s over a 24 h operation period, where the classical data rate amounted to 18.3 Tbit/s.

127 citations

Journal ArticleDOI
TL;DR: It is shown that with pseudorandom bit sequences, a large artificial gain can be obtained that comes from pattern prediction rather than from predicting or compensating for the studied channel or phenomena.
Abstract: We investigate the risk of overestimating the performance gain when applying neural-network-based receivers in systems with pseudorandom bit sequences or with limited memory depths, which result in repeated short patterns. We show that with such sequences, a large artificial gain can be obtained that comes from pattern prediction rather than from predicting or compensating for the studied channel or phenomena.
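The effect is easy to reproduce. In the hedged toy below (PRBS7 with an illustrative 8-bit context; parameters are not from the paper), a lookup table mapping the preceding bits to the current bit, without ever seeing a received waveform, recovers a repeated PRBS perfectly while scoring about 50% on truly random bits, so any such "gain" is pure pattern prediction rather than channel compensation:

```python
import numpy as np

rng = np.random.default_rng(3)

def lfsr_prbs7():
    # PRBS7 generator, polynomial x^7 + x^6 + 1, period 127
    state, out = 0b1111111, []
    for _ in range(127):
        fb = ((state >> 6) ^ (state >> 5)) & 1
        out.append(state & 1)
        state = ((state << 1) | fb) & 0x7F
    return np.array(out, dtype=int)

def context_predict_accuracy(bits, L=8):
    # "train" a lookup table previous-L-bits -> current bit on the first
    # half of the sequence, then test it on the second half; note that no
    # received signal (waveform, noise, channel) is used anywhere
    keys = [int("".join(map(str, bits[i - L:i])), 2)
            for i in range(L, len(bits))]
    targets = bits[L:]
    half = len(keys) // 2
    table = dict(zip(keys[:half], targets[:half]))
    pred = np.array([table.get(k, 0) for k in keys[half:]])
    return float(np.mean(pred == targets[half:]))

acc_prbs = context_predict_accuracy(np.tile(lfsr_prbs7(), 40))  # short pattern
acc_rand = context_predict_accuracy(rng.integers(0, 2, size=40 * 127))
print("prediction accuracy, repeated PRBS7:", acc_prbs)
print("prediction accuracy, random bits:  ", acc_rand)
```

A neural-network receiver with enough input memory can learn the same lookup implicitly, which is why BER results obtained on short repeated sequences can drastically overstate the real equalization gain.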

114 citations



Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a thorough overview of using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain.
Abstract: In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life-improving technology. In this paper, we provide a thorough overview of using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.

903 citations

Journal ArticleDOI
TL;DR: This review covers quantum key distribution protocols based on discrete-variable systems, then considers device independence, satellite challenges, and high-rate protocols based on continuous-variable systems.
Abstract: Quantum cryptography is arguably the fastest growing area in quantum information science. Novel theoretical protocols are designed on a regular basis, security proofs are constantly improving, and experiments are gradually moving from proof-of-principle lab demonstrations to in-field implementations and technological prototypes. In this paper, we provide both a general introduction and a state-of-the-art description of the recent advances in the field, both theoretical and experimental. We start by reviewing protocols of quantum key distribution based on discrete-variable systems. Next we consider aspects of device independence, satellite challenges, and protocols based on continuous-variable systems. We then discuss the ultimate limits of point-to-point private communications and how quantum repeaters and networks may overcome these restrictions. Finally, we discuss some aspects of quantum cryptography beyond standard quantum key distribution, including quantum random number generators and quantum digital signatures.

769 citations

Journal ArticleDOI
TL;DR: In this paper, the authors summarize the developments, applications and underlying physics of optical frequency comb generation in photonic-chip waveguides via supercontinuum generation and in microresonators via Kerr-comb generation that enable comb technology from the near-ultraviolet to the mid-infrared regime.
Abstract: Recent developments in chip-based nonlinear photonics offer the tantalizing prospect of realizing many applications that can use optical frequency comb devices that have form factors smaller than 1 cm³ and that require less than 1 W of power. A key feature that enables such technology is the tight confinement of light due to the high refractive index contrast between the core and the cladding. This simultaneously produces high optical nonlinearities and allows for dispersion engineering to realize and phase-match parametric nonlinear processes with laser-pointer powers across large spectral bandwidths. In this Review, we summarize the developments, applications and underlying physics of optical frequency comb generation in photonic-chip waveguides via supercontinuum generation and in microresonators via Kerr-comb generation that enable comb technology from the near-ultraviolet to the mid-infrared regime.

650 citations