Cisco Webex, Online seminar
Separation Capacity in Random Linear Reservoirs
Youness Boutaib (University of Liverpool)
Abstract
Recurrent neural networks (RNNs) constitute the simplest machine learning paradigm able to handle variable-length data sequences while tracking long-term dependencies and taking into account the temporal order of the received information. Reservoir computing (i.e. choosing the connectivity matrix of the RNN at random) is a paradigm based on the idea that universal approximation properties can be achieved for several dynamical systems without the need to optimise all parameters. This technique simplifies the training of RNNs and has shown exceptional performance in a variety of tasks. Despite this, there is a fundamental gap in the mathematical understanding of the success of such an approach.
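The reservoir-computing idea described above can be sketched minimally as follows; the linear state update, the reservoir dimension, and the Gaussian weight distribution are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
d_res = 100  # reservoir dimension (illustrative choice)

# Random, untrained weights: only a linear readout on the final
# reservoir state would be fitted in a reservoir-computing pipeline.
W = rng.normal(0, 1 / np.sqrt(d_res), (d_res, d_res))  # connectivity matrix
b = rng.normal(0, 1, d_res)                            # input weights

def reservoir_state(u):
    """Drive the linear reservoir x_{t+1} = W x_t + b u_t with a scalar sequence u."""
    x = np.zeros(d_res)
    for u_t in u:
        x = W @ x + b * u_t
    return x

# Sequences of different lengths map to states of the same fixed dimension,
# on which a simple (trainable) linear readout can operate.
s1 = reservoir_state([0.1, 0.5, -0.2])
s2 = reservoir_state([0.1, 0.5, -0.2, 0.7])
```

Only the readout would be optimised; the random matrices `W` and `b` are fixed once drawn, which is what makes training cheap.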
In this work, we explain this success through the separation capacity of such random reservoirs. In particular, we show that the expected separation capacity is characterised by the spectral analysis of a generalised matrix of moments, a classical object of interest in random matrix theory. As a byproduct of this result, we discuss how the parameters of the problem (the dimension of the reservoir, the geometry of the classes of time series, the choice of the probability distribution, symmetries, etc.) impact the performance of the architecture. The analysis is complemented by probabilistic bounds and empirical insights, shedding light on the design of effective reservoir architectures for temporal learning tasks.
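The notion of expected separation can be probed empirically: for two input sequences, one can estimate the mean squared distance between their final reservoir states over random draws of the connectivity matrix. The Monte Carlo setup below is a hedged illustration of that quantity, not the spectral characterisation proved in the talk; all dimensions and distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d_res = 50          # reservoir dimension (illustrative)
n_draws = 200       # number of random reservoirs to average over

u = np.array([1.0, 0.0, 0.5])   # two example scalar time series
v = np.array([0.0, 1.0, 0.5])

def final_state(W, b, seq):
    """Final state of the linear reservoir x_{t+1} = W x_t + b u_t."""
    x = np.zeros(d_res)
    for s in seq:
        x = W @ x + b * s
    return x

# Monte Carlo estimate of E ||x_T(u) - x_T(v)||^2 over random (W, b);
# analytically this expectation involves moments of W, which is where
# a matrix of moments enters the picture.
sq_dists = []
for _ in range(n_draws):
    W = rng.normal(0, 1 / np.sqrt(d_res), (d_res, d_res))
    b = rng.normal(0, 1, d_res)
    sq_dists.append(np.sum((final_state(W, b, u) - final_state(W, b, v)) ** 2))
expected_sep = float(np.mean(sq_dists))
```

A strictly positive estimate indicates that, on average, the random reservoir keeps the two sequences distinguishable for a downstream linear readout.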
Organizer: Te-Sheng Lin (NYCU)