Recent years have seen the emergence of a significant interplay between machine learning, dynamical systems, and stochastic processes, with interesting applications in time series analysis and forecasting. The resulting techniques have revolutionized the way in which we learn and path-continue complex, high-dimensional deterministic dynamical systems. Preliminary results suggest that a similar success can be expected in the stochastic context.
In these lectures, we will present in a self-contained manner how these techniques are built and will analyze in detail their connections with classical results in systems and approximation theory, control, filtering, and dynamical systems. Some previous background on any of these topics would help in understanding the material, but it is by no means necessary. In order to adopt a point of view as close as possible to the applications, we will work in semi-infinite discrete-time input/output setups. This allows us to adequately model the non-Markovianity associated with observations and with subsystems of high-dimensional systems and, additionally, provides us with the necessary tools for the development of both finite-sample and asymptotic results in estimation theory.
Several sessions will be dedicated to explaining the implementation of these techniques in the identification and path continuation of deterministic systems (learning of chaotic attractors) and in the forecasting of stochastic processes (realized financial covariances). We shall see how these novel techniques outperform the benchmarks available in the literature for these tasks.
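As a preview of the kind of construction covered in these sessions, the following is a minimal sketch of one such input/output learning technique: an echo state network (a reservoir computer) trained by ridge regression to one-step-forecast and then autonomously path-continue a chaotic trajectory. The lectures do not prescribe this particular setup; the Lorenz system, the Euler integrator, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative data source: a Lorenz trajectory via a simple Euler scheme ---
def lorenz_traj(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n, 3))
    for i in range(n):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[i] = x
    return traj

data = lorenz_traj(5000)
data = (data - data.mean(0)) / data.std(0)        # normalize each coordinate

# --- Random reservoir (state map); weights are untrained ---
N = 300                                           # reservoir dimension (assumed)
Win = rng.uniform(-0.5, 0.5, size=(N, 3))         # input weights
W = rng.uniform(-1.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius below 1

# --- Drive the reservoir with the observed trajectory (teacher forcing) ---
states = np.zeros((len(data), N))
h = np.zeros(N)
for t in range(len(data) - 1):
    h = np.tanh(W @ h + Win @ data[t])
    states[t + 1] = h                              # states[t] depends on data[:t]

washout = 200                                      # discard initial transient
X, Y = states[washout:], data[washout:]            # one-step-ahead pairs

# --- Only the linear readout is trained, via ridge regression ---
lam = 1e-4
Wout = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y)
one_step = X @ Wout                                # teacher-forced forecasts

# --- Path continuation: feed predictions back as inputs (closed loop) ---
rollout = np.empty((200, 3))
y = data[-1]
for t in range(len(rollout)):
    h = np.tanh(W @ h + Win @ y)
    y = h @ Wout
    rollout[t] = y
```

In the closed-loop phase the trained readout replaces the external input, so the reservoir runs as an autonomous dynamical system whose trajectory continues the learned attractor; this is the sense in which such systems are "path-continued".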
Time: Thursdays 10-12
Auditorium: HG G 43 (HWZ)
Begins: September 26, 2019