scispace - formally typeset
Author

Luis Pesquera

Bio: Luis Pesquera is an academic researcher at the Spanish National Research Council. He has contributed to research on semiconductor laser theory and polarization (waves). He has an h-index of 24 and has co-authored 116 publications receiving 3,340 citations.


Papers
Journal ArticleDOI
17 Nov 2005-Nature
TL;DR: High-speed long-distance communication based on chaos synchronization over a commercial fibre-optic channel is demonstrated, showing that information can be transmitted at high bit rates using deterministic chaos in a manner that is robust to perturbations and channel disturbances unavoidable under real-world conditions.
Abstract: Chaos is good, if you are looking to send encrypted information across a broadband optical network. The idea that the transmission of light-based signals embedded in chaos can provide privacy in data transmission has been demonstrated over short distances in the laboratory. Now it has been shown to work for real, across a commercial fibre-optic channel in the metropolitan area network of Athens, Greece. The results show that the technology is robust to perturbations and channel disturbances unavoidable under real-world conditions. Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission [1, 2]. Laboratory demonstrations of chaos-based optical communications have already shown the potential of this technology [3-5], but a field experiment using commercial optical networks has not been undertaken so far. Here we demonstrate high-speed long-distance communication based on chaos synchronization over a commercial fibre-optic channel. An optical carrier wave generated by a chaotic laser is used to encode a message for transmission over 120 km of optical fibre in the metropolitan area network of Athens, Greece. The message is decoded using an appropriate second laser which, by synchronizing with the chaotic carrier, allows for the separation of the carrier and the message. Transmission rates in the gigabit per second range are achieved, with corresponding bit-error rates below 10⁻⁷. The system uses matched pairs of semiconductor lasers as chaotic emitters and receivers, and off-the-shelf fibre-optic telecommunication components. Our results show that information can be transmitted at high bit rates using deterministic chaos in a manner that is robust to perturbations and channel disturbances unavoidable under real-world conditions.
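The encode/decode principle described in the abstract can be illustrated with a deliberately simplified software toy. This is not the authors' laser system: the logistic-map carrier, the 1% message amplitude, and the shared seed standing in for laser synchronization are all illustrative assumptions.

```python
import numpy as np

# Toy chaos-masking link: a chaotic carrier hides a small binary
# message; an identical "synchronized" copy at the receiver cancels
# the carrier out. Perfect synchronization is idealized here by
# sharing the seed; in the field experiment a matched second laser
# synchronizes with the chaotic carrier instead.

def logistic_orbit(x0, n, r=3.99):
    """Generate n samples of the logistic map, a standard chaotic toy."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=1000)           # message bits
message = 0.01 * (2 * bits - 1)                # small amplitude vs carrier

carrier = logistic_orbit(0.4, 1000)            # transmitter chaos
transmitted = carrier + message                # chaos-masked signal

receiver_carrier = logistic_orbit(0.4, 1000)   # idealized synchronized replica
recovered = transmitted - receiver_carrier     # carrier cancels exactly
decoded = (recovered > 0).astype(int)

print((decoded == bits).mean())                # → 1.0 with perfect sync
```

With perfect synchronization the subtraction recovers the message exactly; in a real link the residual synchronization error sets the bit-error-rate floor.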

1,267 citations

Journal ArticleDOI
TL;DR: This work experimentally demonstrates optical information processing using a nonlinear optoelectronic oscillator subject to delayed feedback and implements a neuro-inspired concept, called Reservoir Computing, proven to possess universal computational capabilities.
Abstract: Many information processing challenges are difficult to solve with traditional Turing or von Neumann approaches. Implementing unconventional computational methods is therefore essential and optics provides promising opportunities. Here we experimentally demonstrate optical information processing using a nonlinear optoelectronic oscillator subject to delayed feedback. We implement a neuro-inspired concept, called Reservoir Computing, proven to possess universal computational capabilities. We particularly exploit the transient response of a complex dynamical system to an input data stream. We employ spoken digit recognition and time series prediction tasks as benchmarks, achieving competitive processing figures of merit.
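The single-nonlinear-node-with-delayed-feedback idea behind this experiment can be sketched numerically. This is a minimal software stand-in for the optoelectronic hardware: the node count, tanh nonlinearity, gain values, and sine-prediction benchmark are illustrative choices, not the paper's parameters.

```python
import numpy as np

# Minimal delay-based reservoir computer: one nonlinear node plus a
# delay line holding N "virtual" neurons. Each input sample is
# time-multiplexed over the virtual neurons via a random mask, and the
# delayed feedback couples consecutive passes through the loop.

N = 50                                                # virtual neurons
mask = np.random.default_rng(1).uniform(-1, 1, N)     # random input mask

def reservoir_states(u, alpha=0.9, beta=0.5):
    """Collect the transient responses of the node to the input stream."""
    T = len(u)
    X = np.zeros((T, N))
    prev = np.zeros(N)                                # states one delay ago
    for t in range(T):
        for i in range(N):
            X[t, i] = np.tanh(alpha * prev[i] + beta * mask[i] * u[t])
        prev = X[t]
    return X

# Toy benchmark: one-step-ahead prediction of a quasi-periodic signal
t = np.arange(3000)
u = np.sin(0.1 * t) * np.cos(0.03 * t)
X = reservoir_states(u[:-1])
y = u[1:]                                             # target: next sample

# Only the linear readout is trained, here by ridge regression
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
nmse = np.mean((X @ W - y) ** 2) / np.var(y)
print(nmse)                                           # well below 1 here
```

The key point carried over from the paper is that the dynamical system itself is fixed; all learning happens in the cheap linear readout.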

662 citations

Journal ArticleDOI
TL;DR: Improved strategies to perform photonic information processing using an optoelectronic oscillator with delayed feedback are presented and it is illustrated that the performance degradation induced by noise can be compensated for via multi-level pre-processing masks.
Abstract: We present improved strategies to perform photonic information processing using an optoelectronic oscillator with delayed feedback. In particular, we study, via numerical simulations and experiments, the influence of a finite signal-to-noise ratio on the computing performance. We illustrate that the performance degradation induced by noise can be compensated for via multi-level pre-processing masks.

148 citations

Journal ArticleDOI
TL;DR: It is shown that the reservoir computing implementation, in this case optoelectronic, is also capable of realizing extreme learning machines, demonstrating the unified framework for both schemes in software as well as in hardware.
Abstract: In this paper we present a unified framework for extreme learning machines and reservoir computing (echo state networks), which can be physically implemented using a single nonlinear neuron subject to delayed feedback. The reservoir is built within the delay line, employing a number of “virtual” neurons. These virtual neurons receive random projections from the input layer containing the information to be processed. One key advantage of this approach is that it can be implemented efficiently in hardware. We show that the reservoir computing implementation, in this case optoelectronic, is also capable of realizing extreme learning machines, demonstrating the unified framework for both schemes in software as well as in hardware.
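The shared recipe, a fixed random projection followed by a trained linear readout, can be made concrete with a minimal extreme-learning-machine sketch. This is a software toy with assumed layer sizes and a made-up XOR-like task, not the paper's optoelectronic setup.

```python
import numpy as np

# Extreme learning machine in a few lines: a fixed random hidden layer
# followed by a trained linear readout. Structurally this is the same
# recipe as reservoir computing, minus the recurrent/delayed feedback.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 2))            # toy 2-D inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like nonlinear target

# Random projection to 100 hidden units; these weights are never trained
H = np.tanh(X @ rng.normal(size=(2, 100)) + rng.normal(size=100))

# Only the readout is fitted, by linear least squares
W = np.linalg.lstsq(H, y, rcond=None)[0]
acc = (((H @ W) > 0.5) == y).mean()
print(acc)                                   # high accuracy on this easy task
```

Swapping the random feedforward projection for a delay line with feedback turns this same code skeleton into the reservoir-computing variant, which is the unification the paper demonstrates in hardware.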

143 citations

Journal ArticleDOI
TL;DR: A scheme that integrates a digital key in a phase-chaos electro-optical delay system for optical chaos communications; it provides great flexibility, allowing for easy reconfiguration to communicate securely at a high bit rate between different systems.
Abstract: We introduce a scheme that integrates a digital key in a phase-chaos electro-optical delay system for optical chaos communications. A pseudorandom binary sequence (PRBS) is mixed within the chaotic dynamics such that mutual concealment is achieved: the time delay is hidden by the binary sequence, and the PRBS is in turn masked by the chaos. In addition to bridging the gap between algorithmic symmetric-key cryptography and chaos-based analog encoding, the proposed approach is intended to benefit from the complex algebraic mixing between a (pseudorandom) Boolean variable and a continuous-time (chaotic) variable. The scheme also provides great flexibility, allowing for easy reconfiguration to communicate securely at a high bit rate between different systems.
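The mutual-concealment idea can be caricatured in a few lines. This is a conceptual toy, not the phase-chaos electro-optical system: the sign-flip mixing, the logistic-map carrier, and the LFSR-style key generator are all illustrative assumptions.

```python
import numpy as np

# Conceptual toy of mixing a digital key with analog chaos: a shared
# PRBS sign-flips a chaotic carrier sample-by-sample. Only a receiver
# holding the same key can invert the mixing; the chaos in turn hides
# the key's binary structure in the transmitted waveform.

def prbs(seed, n):
    """LFSR-style pseudorandom binary sequence (illustrative, not secure)."""
    state, out = seed, []
    for _ in range(n):
        bit = (state ^ (state >> 1)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state & 1)
    return np.array(out)

n = 2000
key = prbs(0xACE1, n)                  # shared digital key
x = np.empty(n)
x[0] = 0.3
for i in range(1, n):
    x[i] = 3.99 * x[i - 1] * (1 - x[i - 1])   # chaotic carrier (logistic map)
chaos = x - 0.5                        # zero-mean carrier

mixed = chaos * (2 * key - 1)          # PRBS sign-flips the chaos
unmixed = mixed * (2 * key - 1)        # the same key inverts the mixing
print(np.allclose(unmixed, chaos))     # → True
```

Because (2·key − 1)² = 1 element-wise, applying the key twice is the identity; without the key the mixed signal is statistically hard to distinguish from the raw chaos.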

119 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first acquaintance, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
01 Jul 2017
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude, and three orders of magnitude in power efficiency, over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.

1,955 citations

01 Mar 1995
TL;DR: This thesis applies neural network feature selection techniques to multivariate time series data to improve prediction of a target time series; results indicate that the Stochastics and RSI indicators yield better predictions than the moving averages.
Abstract: This thesis applies neural network feature selection techniques to multivariate time series data to improve prediction of a target time series. Two approaches to feature selection are used. First, a subset enumeration method is used to determine which financial indicators are most useful for aiding in prediction of the S&P 500 futures daily price. The candidate indicators evaluated include RSI, Stochastics and several moving averages. Results indicate that the Stochastics and RSI indicators yield better predictions than the moving averages. The second approach to feature selection is the calculation of individual saliency metrics. A new decision-boundary-based individual saliency metric and a classifier-independent saliency metric are developed and tested. Ruck's saliency metric, the decision-boundary-based saliency metric, and the classifier-independent saliency metric are compared for a data set consisting of the RSI and Stochastics indicators as well as delayed closing price values. The decision-boundary-based metric and the Ruck metric give similar results, but the classifier-independent metric agrees with neither of the other two. The nine most salient features, determined by the decision-boundary-based metric, are used to train a neural network, and the results are presented and compared to other published results.

1,545 citations

Journal ArticleDOI
TL;DR: An overview of recent advances in physical reservoir computing is provided, classifying them according to the type of reservoir, with the aim of expanding practical applications and developing next-generation machine learning systems.

959 citations

Journal ArticleDOI
26 Jul 2017-Nature
TL;DR: In this article, a magnetic tunnel junction (MTJ) was used to achieve spoken-digit recognition with an accuracy similar to that of state-of-the-art neural networks.
Abstract: Neurons in the brain behave as nonlinear oscillators, which develop rhythmic activity and interact to process information. Taking inspiration from this behaviour to realize high-density, low-power neuromorphic computing will require very large numbers of nanoscale nonlinear oscillators. A simple estimation indicates that to fit 10⁸ oscillators organized in a two-dimensional array inside a chip the size of a thumb, the lateral dimension of each oscillator must be smaller than one micrometre. However, nanoscale devices tend to be noisy and to lack the stability that is required to process data in a reliable way. For this reason, despite multiple theoretical proposals and several candidates, including memristive and superconducting oscillators, a proof of concept of neuromorphic computing using nanoscale oscillators has yet to be demonstrated. Here we show experimentally that a nanoscale spintronic oscillator (a magnetic tunnel junction) can be used to achieve spoken-digit recognition with an accuracy similar to that of state-of-the-art neural networks. We also determine the regime of magnetization dynamics that leads to the greatest performance. These results, combined with the ability of the spintronic oscillators to interact with each other, and their long lifetime and low energy consumption, open up a path to fast, parallel, on-chip computation based on networks of oscillators.

900 citations