Reservoir Computing Overview

Introduction

A reservoir computer is conceptually similar to a neural network in which only a small fraction of the network is trained. Our version uses non-linear optics and is built from a kilometer of optical fiber, in which stored light pulses act as the nodes of the network. While similar to a traditional neural network, a reservoir computer is more suitable for hardware implementation and easier to train. Additionally, less domain expertise is generally required than for building more conventional networks such as convolutional neural networks. The key difference between a reservoir computer and a traditional neural network is that in the reservoir setting most of the network remains fixed, with only a final output layer being trained. This structure is ideal for physically implemented networks, where making systems reconfigurable is a significant engineering challenge. While in principle any of these systems could be simulated with conventional computing hardware, using a physical device directly rather than a simulation has the advantage of much lower power consumption, thus bypassing a key weakness of neural networks run on traditional architectures. Given the strong connection with neural networks, we discuss reservoir computing from a neural network perspective on a separate page here.

Conceptually, the idea behind reservoir computing is that performing a complex non-linear transformation on data can produce a high-dimensional representation on which non-trivial machine learning tasks are easier than on the original data. The underlying intuition is that sufficiently complex transformations, even random ones, can yield features that are much easier to process than the raw input. Thus, much simpler learning techniques (both in terms of user expertise and computational effort) can be used in the output layer of a reservoir than could succeed on the raw input alone. This intuition has been borne out experimentally: reservoir computing has proven successful at a number of machine learning tasks. See this review for some examples.
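This recipe can be sketched in a few lines of NumPy. The toy echo state network below is an illustrative software analogue, not our photonic device; the network sizes, the spectral-radius scaling, and the sine-prediction task are all assumptions chosen for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not those of the physical device.
n_inputs, n_nodes, n_steps, washout = 1, 100, 500, 20

# The reservoir itself is fixed and random: neither weight matrix is trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_nodes, n_inputs))
W = rng.normal(size=(n_nodes, n_nodes))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable

def run_reservoir(u):
    """Drive the fixed reservoir with the input series u, recording states."""
    x = np.zeros(n_nodes)
    states = np.empty((len(u), n_nodes))
    for t, u_t in enumerate(u):
        # The fixed non-linear transformation: random mixing plus tanh.
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states[t] = x
    return states

# Toy task: predict the next value of a sine wave.
u = np.sin(0.2 * np.arange(n_steps))
target = np.roll(u, -1)

states = run_reservoir(u)
X = states[washout:-1]   # drop the initial transient
y = target[washout:-1]

# Train ONLY the linear output layer, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_nodes), X.T @ y)

mse = np.mean((X @ W_out - y) ** 2)
```

Only `W_out` is learned; `W_in` and `W` stay at their random initial values, mirroring the fixed physical reservoir.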

[Diagram: schematic image showing reservoir computation]

Photonic Reservoir Computing

Light makes an ideal substrate for reservoir computing. A challenge often encountered in physical systems is the difficulty of engineering high connectivity, especially in two-dimensional systems like those on a chip. Light provides a unique solution to this challenge: the information carried in light is constantly moving, eliminating the need to directly connect physically distant components.

This leads to another advantage: the nodes in an optical neural network such as a reservoir computer are usually the packets of light themselves, which makes such networks naturally scalable. Instead of needing to build each node as a physical object, more nodes can be added simply by making an optical fiber longer. In fact, in a delay-based implementation (see section 4 of this review), only a single nonlinear interaction is needed, independent of the size of the network. Our current approach is delay-based, owing to the relative simplicity of implementing such systems, but as our technology matures we could develop an architecture able to take more advantage of parallel photonic processing.
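As a rough software analogue of that delay-based scheme, here is a minimal discrete-time sketch (heavily simplified from the architectures described in the review; the mask values, feedback strength, and node count are illustrative assumptions): a single tanh nonlinearity is reused for every "virtual node", which is just a time slot along the delay line.

```python
import numpy as np

rng = np.random.default_rng(1)

n_virtual = 50  # virtual nodes: time slots along the delay line
mask = rng.choice([-0.5, 0.5], size=n_virtual)  # fixed random input mask
feedback = 0.8  # strength of the delayed feedback

def delay_reservoir(u):
    """Drive a single nonlinear node through a delay line.

    Each input sample u_t is held for one delay period and weighted by the
    mask, so the single nonlinearity produces n_virtual distinct responses
    per sample; these time slots play the role of network nodes."""
    delay_line = np.zeros(n_virtual)  # slot states from one delay period ago
    states = np.empty((len(u), n_virtual))
    for t, u_t in enumerate(u):
        for i in range(n_virtual):
            # One physical nonlinearity, reused for every virtual node.
            delay_line[i] = np.tanh(mask[i] * u_t + feedback * delay_line[i])
        states[t] = delay_line
    return states

# The collected states would then feed a trained linear readout,
# exactly as in any other reservoir.
states = delay_reservoir(np.sin(0.2 * np.arange(200)))
```

However many virtual nodes are used, the loop body contains only one tanh evaluation per time slot, reusing the same nonlinearity; this is what makes the hardware cost independent of network size.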

We have previously discussed how the light that we use to process information is effectively in a “Goldilocks” regime in terms of energy scales. The energy of the light particles we use is high enough not to require cooling to protect them from noise caused by heat, but much lower than the energy scales encountered in typical electronics. This means that our photonic reservoir computers have the potential to operate at much lower energy consumption levels than other approaches.

Looking to the future, there is another exciting advantage. Light interacts very little with its surroundings and can support highly non-trivial quantum superpositions. For these reasons, systems like optical reservoir computers could be made quantum in the sense of using quantum interference and superposition in a non-trivial way. Indeed, there has been significant interest in using quantum optics-based neural networks to perform a variety of tasks, and it has been suggested that quantum systems can be advantageous to use as reservoirs. In particular, a small quantum reservoir can sometimes match the performance of a much larger classical reservoir. Furthermore, because no quantum elements are ever trained, quantum reservoir computing naturally sidesteps the barren-plateau problem that plagues many quantum machine learning approaches. While our current reservoir computers are completely classical devices, they could relatively easily be extended into the quantum domain in a way that most neural networks could not.

Read more

See our list of publications for a deep dive into photonic reservoir computing.