The lecture has been developed by Christa Cuchiero and Josef Teichmann. It was held by Josef Teichmann in spring 2019 at ETH Zurich as a regular lecture for master students.

References and several talks on new developments can be found at the end of this webpage.

The lecture introduces several fundamental concepts from machine learning with a view towards important financial applications. Lecture notes are provided as iPython notebooks, as slides, and as classical notes. Most of the following code runs safely under Python 3.6, TensorFlow 1.8.0 and Keras 2.0.8; see the first notebook for checking the versions. You can get a downloadable .ipynb file by clicking on 'download' in the upper left corner of the Jupyter notebook viewer.
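A minimal way to check the versions, in the spirit of the first notebook (this sketch reports gracefully when a package is missing rather than failing on import):

```python
import sys

# Print the interpreter version; most of the course code targets Python 3.6.
print("Python:", sys.version.split()[0])

# TensorFlow and Keras may or may not be installed; report either way.
for pkg in ("tensorflow", "keras"):
    try:
        module = __import__(pkg)
        print(pkg, module.__version__)
    except ImportError:
        print(pkg, "is not installed")
```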

- Lecture 1 (Introduction, Universal Approximation by shallow networks, one string of arguments for depth): Lecture 1 as iPython notebook and some training data (should be unpacked, then store the files in the folder of the notebook). For the exercises see also Function approximation with linear models and neural network from Tirthajyoti Sarkar's github resources.
- Lecture 2 (neural ordinary differential equations, backpropagation, expressiveness by randomness): Lecture 2 as iPython notebook and some data file. For the exercises, write a two-hidden-layer network with backpropagation by hand, following the code in the notebook; see also one possible solution.
- Lecture 3 (Deep Hedging without transaction costs): Lecture 3 as iPython notebook, where a TensorFlow implementation of Deep Hedging as well as some background are explained. A short Keras implementation of deep portfolio optimization (without transaction costs, but easily modified) can be found as an iPython notebook. A Keras implementation of Deep Hedging for the BS model with analysis of the hedging strategies can be found as an iPython notebook. An implementation illustrating the use of Keras in a very simple case can be found here.
- Lecture 4 (Deep Portfolio Optimization under transaction costs): Lecture 4 as iPython notebook. Some general thoughts on training can be found here as iPython notebook. A short Keras implementation of deep portfolio optimization with transaction costs can be found here as iPython notebook. A Keras implementation of deep portfolio optimization in the Merton model with analysis of the trading strategies can be found here as iPython notebook.
- Lecture 5 (Deep Simulation): Lecture 5 as iPython notebook. These notes include an introduction to iterated integrals of controls and to the Johnson-Lindenstrauss Lemma, as well as code on 'learning' unknown S(P)DEs and simulating real markets.
- Lecture 6 (Deep Calibration): Lecture 6 as iPython notebook. Code is available for calibration via learning a pricing functional as iPython notebook, and for learning the local volatility function in a local stochastic volatility model as iPython notebook.
- Lecture 7 (Deep Reinforcement Learning): a short theoretical introduction to concepts of reinforcement learning as iPython notebook.
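To get a feeling for the universal approximation theme of Lecture 1 (and for the expressiveness-by-randomness theme of Lecture 2), the following self-contained numpy sketch fits only a linear readout on top of random, frozen ReLU features; the target function, widths and distributions are illustrative choices, not the ones from the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target to approximate on [0, 1] with a one-hidden-layer ReLU network.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)

# Hidden layer: 100 random, frozen ReLU units with kinks drawn in [0, 1];
# only the linear readout is trained, here by least squares.
kinks = rng.uniform(0.0, 1.0, size=100)
signs = rng.choice([-1.0, 1.0], size=100)
hidden = np.maximum(signs[None, :] * (x[:, None] - kinks[None, :]), 0.0)
design = np.column_stack([np.ones_like(x), x, hidden])

coef, *_ = np.linalg.lstsq(design, y, rcond=None)
approx = design @ coef
print("max abs error:", np.max(np.abs(approx - y)))
```

Even though the hidden layer is never trained, the random piecewise-linear features are rich enough for a small uniform error, which is one way to read the randomness arguments of the lecture.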
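The exercise of Lecture 2 asks for a two-hidden-layer network with backpropagation written out by hand. A minimal numpy sketch of what such a solution can look like (data, architecture and learning rate are illustrative, not the notebook's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: learn y = x**2 on [-1, 1].
X = rng.uniform(-1.0, 1.0, size=(256, 1))
Y = X ** 2

# Two hidden tanh layers of width 16 (sizes are illustrative), linear output.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.25, (16, 16)); b2 = np.zeros(16)
W3 = rng.normal(0.0, 0.25, (16, 1)); b3 = np.zeros(1)

lr = 0.05
for step in range(3000):
    # Forward pass.
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    out = h2 @ W3 + b3
    loss = np.mean((out - Y) ** 2)

    # Backward pass: the chain rule written out by hand
    # (tanh'(u) = 1 - tanh(u)**2).
    d_out = 2.0 * (out - Y) / len(X)
    dW3 = h2.T @ d_out
    db3 = d_out.sum(axis=0)
    d_h2 = (d_out @ W3.T) * (1.0 - h2 ** 2)
    dW2 = h1.T @ d_h2
    db2 = d_h2.sum(axis=0)
    d_h1 = (d_h2 @ W2.T) * (1.0 - h1 ** 2)
    dW1 = X.T @ d_h1
    db1 = d_h1.sum(axis=0)

    # Plain gradient-descent update.
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2),
                 (b2, db2), (W3, dW3), (b3, db3)):
        P -= lr * G

print("final training loss:", loss)
```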
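As background for the Deep Hedging objective of Lecture 3: before replacing the strategy by a neural network, one can inspect the hedging P&L of the classical discretely rebalanced Black-Scholes delta hedge by Monte Carlo; all market parameters below are illustrative assumptions:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def Phi(x):
    """Standard normal cdf, vectorized via math.erf."""
    return 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

# Hypothetical Black-Scholes market and at-the-money call (zero rates).
S0, sigma, T, K = 1.0, 0.2, 1.0, 1.0
n_paths, n_steps = 20000, 50
dt = T / n_steps

def bs_call(S, t):
    """Black-Scholes call price and delta at time t < T."""
    d1 = (np.log(S / K) + 0.5 * sigma ** 2 * (T - t)) / (sigma * math.sqrt(T - t))
    d2 = d1 - sigma * math.sqrt(T - t)
    return S * Phi(d1) - K * Phi(d2), Phi(d1)

# Start from the option premium and rebalance the delta hedge n_steps times.
S = np.full(n_paths, S0)
wealth = np.full(n_paths, bs_call(np.array([S0]), 0.0)[0][0])
for i in range(n_steps):
    _, delta = bs_call(S, i * dt)
    S_next = S * np.exp(-0.5 * sigma ** 2 * dt
                        + sigma * math.sqrt(dt) * rng.standard_normal(n_paths))
    wealth += delta * (S_next - S)   # self-financing gains from trade
    S = S_next

# Replication error at maturity: mean near zero, std shrinking with n_steps.
hedge_error = wealth - np.maximum(S - K, 0.0)
print("mean error:", hedge_error.mean(), "std:", hedge_error.std())
```

A trained network strategy is judged by the same criterion: the distribution of this terminal hedging error, possibly under transaction costs and a risk measure instead of variance.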
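A simple baseline for the deep portfolio optimization of Lecture 4 (here without transaction costs): for constant-proportion strategies in a Black-Scholes market with logarithmic utility, the wealth is known in closed form, and a brute-force search should land near the Merton ratio mu/sigma^2. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Black-Scholes market: drift mu, volatility sigma, horizon T.
mu, sigma, T = 0.1, 0.2, 1.0
W_T = np.sqrt(T) * rng.standard_normal(100000)

def expected_log_utility(pi):
    """Monte Carlo E[log X_T] for the constant-proportion strategy pi,
    using the exact wealth formula of this model."""
    return np.mean((pi * mu - 0.5 * pi ** 2 * sigma ** 2) * T + pi * sigma * W_T)

# Brute-force search over constant proportions as a sanity check.
grid = np.linspace(0.0, 4.0, 81)
values = np.array([expected_log_utility(p) for p in grid])
pi_best = grid[values.argmax()]

print("grid optimum:", pi_best, "vs Merton ratio mu/sigma^2 =", mu / sigma ** 2)
```

A network trained on the same objective has to rediscover this constant proportion before transaction costs make the problem genuinely harder.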
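For the reinforcement learning concepts of Lecture 7, a tabular Q-learning sketch on a hypothetical five-state chain (not an example from the notebook) shows the basic update rule:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 5-state chain: actions 0 (left) and 1 (right); reaching state 4 pays 1.
n_states, n_actions, goal = 5, 2, 4
gamma, alpha, eps = 0.9, 0.5, 0.2
Q = np.zeros((n_states, n_actions))

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy exploration.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else min(s + 1, goal)
        r = 1.0 if s_next == goal else 0.0
        # Q-learning update: bootstrap from the greedy next-state value.
        target = r + (0.0 if s_next == goal else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

# The learned greedy policy should always move right, towards the goal.
print("greedy actions:", Q[:goal].argmax(axis=1))
```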

The course in Vienna held by Christa Cuchiero splits into partially distinct parts for Master and PhD students; the structure is similar, but more exercises and some slides are included:

- Lecture 1 (Introduction, Universal Approximation by shallow networks): Lecture 1 as iPython notebook. Some exercises are provided as well as the solutions for the first, second and third exercise.
- Lecture 2 (Deep neural networks, wavelets, expressiveness by randomness): Lecture 2 as iPython notebook (Master student version) or Lecture 2 as iPython notebook (PhD student version). Some exercises for the Master course are provided as well as the solutions for the first and second exercise. The corresponding exercises and the solution for the first exercise of the PhD course can be found here.
- Lecture 3 (Stochastic gradient descent and deep hedging): Some exercises are provided as well as the solutions for the first and second exercise.
- Lecture 4 (Deep Hedging): Lecture 4 as iPython notebook. Some exercises for the Master course are provided as well as the solutions for the first, second and third exercise.
- Lecture 5 (Deep Calibration, Deep Portfolio optimization): Lecture 5 as iPython notebook. Some exercises for the Master course are provided as well as the solutions for the first and second exercise.
- Further material can be found for Lecture 2 as Slides (Master and PhD), for Lecture 2 as Slides (PhD), and for Lecture 3 as Slides (PhD).
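Since stochastic gradient descent is central to the Vienna Lecture 3 above, here is a minimal sketch of minibatch SGD on a linear regression; the data and hyperparameters are illustrative, not taken from the exercises:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data for the regression y = 2*x - 1 plus noise (illustrative).
x = rng.uniform(-1.0, 1.0, 1000)
y = 2.0 * x - 1.0 + 0.1 * rng.standard_normal(1000)

w, b = 0.0, 0.0
lr, batch = 0.1, 32
for epoch in range(50):
    order = rng.permutation(len(x))          # reshuffle every epoch
    for i in range(0, len(x), batch):
        idx = order[i:i + batch]
        err = w * x[idx] + b - y[idx]
        # Stochastic gradients of the minibatch mean-squared error.
        w -= lr * 2.0 * np.mean(err * x[idx])
        b -= lr * 2.0 * np.mean(err)

print("w, b:", w, b)
```

The same loop, with the gradients supplied by automatic differentiation instead of the hand-written formulas, is what Keras and TensorFlow run under the hood in the deep hedging notebooks.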

The lecture introduces several fundamental concepts from machine learning with a view towards important financial applications. Lecture notes are provided as iPython notebooks, as slides, and as classical notes. Most of the following code runs safely under Python 3.7, TensorFlow 1.14.0 and Keras 2.2.5; see the first notebook for checking the versions. You can get a downloadable .ipynb file by clicking on 'download' in the upper left corner of the Jupyter notebook viewer.

- Lecture 1 (Introduction, Universal Approximation by shallow networks, one string of arguments for depth): Lecture 1 as iPython notebook and some training data (should be unpacked, then store the files in the folder of the notebook).
- Lecture 2 (neural ordinary differential equations, backpropagation, expressiveness by randomness): Lecture 2 as iPython notebook and some data file. For the exercises, write a two-hidden-layer network with backpropagation by hand, following the code in the notebook; see also one possible solution. Exercises can be found at Exercise 1 and Exercise 2.
- Lecture 3 (training): Lecture 3 as iPython notebook. Updates will follow soon.
- Lecture 4 (Deep Hedging without transaction costs): Lecture 4 as iPython notebook, where a short Keras implementation of Deep Hedging for the BS model with analysis of the hedging strategies can be found. An implementation illustrating the use of Keras in a very simple case can be found here.
- Lecture 5 (Deep Portfolio Optimization without or with transaction costs): Lecture 5 as iPython notebook, where a short Keras implementation of the Merton problem for the BS model with analysis of the trading strategies can be found. Code with transaction costs can be found here.
- Lecture 6 (Deep Calibration): deep calibration for the Heston model and deep calibration for local stochastic volatility models. We distinguish three sorts of deep calibration: learning directly the map from market data to model parameters, learning the map from model parameters to market data and inverting it by inverse problem methodology, and parametrizing (infinite-dimensional) parameters by neural networks. Notice that the Heston calibration code runs safely under Python 3.6, TensorFlow 1.8.0 and Keras 2.0.8.
- Lecture 7 (Deep Reinforcement Learning): a short theoretical introduction to concepts of reinforcement learning as iPython notebook.
- Lecture 8 (Deep Simulation): Lecture 8 as iPython notebook. These notes include an introduction to iterated integrals of controls and to the Johnson-Lindenstrauss Lemma, as well as code on 'learning' an unknown SDE and stock market dynamics.
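The Johnson-Lindenstrauss Lemma mentioned in Lecture 8 can be checked numerically: a Gaussian random projection approximately preserves all pairwise distances of a point cloud; the dimensions below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)

# 50 points in dimension 10000, projected to dimension k = 500 by a
# Gaussian random matrix scaled by 1/sqrt(k) (Johnson-Lindenstrauss).
n, d, k = 50, 10000, 500
X = rng.standard_normal((n, d))
P = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ P.T

def pdist2(Z):
    """All pairwise squared Euclidean distances of the rows of Z."""
    G = Z @ Z.T
    sq = np.diag(G)
    return sq[:, None] + sq[None, :] - 2.0 * G

# Ratios of projected to original squared distances concentrate around 1.
pairs = np.triu_indices(n, 1)
ratio = pdist2(Y)[pairs] / pdist2(X)[pairs]
print("distance ratios in [%.3f, %.3f]" % (ratio.min(), ratio.max()))
```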
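For the second sort of deep calibration in Lecture 6 (learning the map from model parameters to prices and inverting it by inverse problem methodology), the inversion step can be illustrated in the simplest possible case: the learned map is replaced by the closed-form Black-Scholes price, which is monotone in volatility, so a bisection recovers the parameter. The contract data are illustrative:

```python
import math

# Hypothetical at-the-money call under Black-Scholes with zero rates.
S0, K, T = 1.0, 1.0, 1.0

def bs_call(sigma):
    """Black-Scholes call price as a function of volatility."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * Phi(d1) - K * Phi(d2)

def calibrate(target_price, lo=1e-4, hi=2.0, tol=1e-8):
    """Invert sigma -> price by bisection (the price is increasing in sigma)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(mid) < target_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma_true = 0.2
sigma_hat = calibrate(bs_call(sigma_true))
print("recovered sigma:", sigma_hat)
```

In the lecture's setting the closed-form map is replaced by a trained network, and the one-dimensional bisection by a regularized multi-dimensional inverse problem, but the logic is the same.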

- Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen: Optimal Approximation with Sparsely Connected Deep Neural Networks, arXiv:1705.01714.
- Hans Bühler, Lukas Gonon, Josef Teichmann, Ben Wood: Deep Hedging, arXiv:1802.03042.
- Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud: Neural ordinary differential equations, arXiv:1806.07366.
- Ilya Chevyrev, Andrey Kormilitzin: A primer on the signature method in machine learning, arXiv:1603.03788.
- Christa Cuchiero, Martin Larsson, Josef Teichmann: Deep neural networks, generic universal interpolation, and controlled ODEs, arXiv:1908.07838.
- Weinan E, Jiequn Han, Arnulf Jentzen: Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations, arXiv:1706.04702.
- Jakob Heiss, Josef Teichmann, Hanna Wutte: How implicit regularization of Neural Networks affects the learned function -- Part I, arXiv:1911.02903.
- Catherine F. Higham, Desmond J. Higham: Deep Learning: An Introduction for Applied Mathematicians, arXiv:1801.05894.
- Na Lei, Kehua Su, Li Cui, Shing-Tung Yau, David Xianfeng Gu: A Geometric View of Optimal Transportation and Generative Model, arXiv:1710.05488.
- Terry Lyons: Rough paths, Signatures and the modelling of functions on streams, arXiv:1405.4537.
- Terry Lyons, Harald Oberhauser: Sketching the order of events, arXiv:1708.09708.
- Uri Shaham, Alexander Cloninger, Ronald R. Coifman: Provable approximation properties of deep neural networks, arXiv:1509.07385.
- Josef Teichmann, A recent talk in Konstanz on randomness in training algorithms, 2019.
- Josef Teichmann, A recent talk in Oslo on stationary versions of discrete signature in the spirit of reservoir computing, 2019.
- Josef Teichmann, A recent talk at the Thalesian Seminar Series in New York on generative models in finance, 2020.