Q & A

  1. (What is a neural network?) definition and the universal approximation theorem (lecture 1).
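As a minimal illustration of universal approximation (a sketch of my own, not from the lectures): a one-hidden-layer ReLU network with random hidden weights can already fit a smooth target once only the linear readout is trained, here by least squares. All sizes and distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)  # target function to approximate

# random hidden layer: h_j(x) = ReLU(a_j * x + b_j)
n_hidden = 500
a = rng.normal(size=n_hidden)
b = rng.uniform(-2 * np.pi, 2 * np.pi, size=n_hidden)
H = np.maximum(a * x[:, None] + b, 0.0)
H = np.column_stack([H, np.ones(len(x))])  # bias column

# only the linear readout is fitted (least squares)
c, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = np.mean((H @ c - y) ** 2)
```

With more hidden units than sample points the fit is essentially exact, which is the universal approximation phenomenon in miniature.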
  2. (Connection to controlled ODEs) backpropagation (example from lecture 2).
  3. (Role of randomness, Lie brackets, Chow's theorem) here only some general comments are expected (lecture 2).
  4. (deep hedging) describe an algorithm for implementing deep hedging in a given market environment, i.e. identify the relevant market factors (lecture 3).
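A minimal sketch of the deep hedging idea, under strong simplifying assumptions of my own (Black-Scholes-type simulated market, a linear hedging policy delta(S) = t0 + t1*S in place of a neural network, quadratic hedging error). Because the gains process is linear in the parameters, the optimal policy in this toy class can be found by least squares; a genuine deep hedge replaces the linear policy by a network trained with SGD on the same objective.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 5000, 30
S0, sigma, K, dt = 1.0, 0.2, 1.0, 1.0 / 30

# simulate GBM paths (the stock price is the market factor)
z = rng.normal(size=(n_paths, n_steps))
S = S0 * np.exp(np.cumsum(sigma * np.sqrt(dt) * z - 0.5 * sigma**2 * dt, axis=1))
S = np.column_stack([np.full(n_paths, S0), S])
dS = np.diff(S, axis=1)

payoff = np.maximum(S[:, -1] - K, 0.0)  # call option to hedge

# linear policy delta_k = t0 + t1 * S_k makes the hedging gains
# linear in (p, t0, t1), so quadratic hedging error is a least-squares problem
F = np.column_stack([
    np.ones(n_paths),             # initial capital p
    dS.sum(axis=1),               # coefficient of t0
    (S[:, :-1] * dS).sum(axis=1)  # coefficient of t1
])
coef, *_ = np.linalg.lstsq(F, payoff, rcond=None)
hedge_err = payoff - F @ coef
```

Even this crude policy removes most of the payoff variance; the learned constant `coef[0]` plays the role of the option price.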
  5. (deep portfolio optimization) describe an algorithm for implementing deep portfolio optimization in a given market environment, i.e. identify the relevant market factors (lecture 4).
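A toy sketch of portfolio optimization by simulation (my own illustrative setup, not the lecture's): choose a constant portfolio weight maximizing expected log wealth on simulated i.i.d. returns. Grid search stands in for gradient-based training of a network policy; for these parameters the Kelly weight mu/sigma^2 = 2.5 is the theoretical optimum.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, dt = 0.1, 0.2, 1.0 / 250
n_paths, n_steps = 2000, 250

# simulated arithmetic returns of the risky asset (the market factor)
r = rng.normal(mu * dt, sigma * np.sqrt(dt), size=(n_paths, n_steps))

def avg_log_wealth(w):
    # Monte Carlo estimate of expected log terminal wealth for weight w
    return np.log1p(w * r).sum(axis=1).mean()

grid = np.linspace(0.0, 5.0, 101)
w_star = grid[np.argmax([avg_log_wealth(w) for w in grid])]
```

A deep version would let the weight depend on the observed market factors through a network and replace the grid search by SGD.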
  6. (How does learning work?) Describe the stochastic gradient descent (SGD) algorithm (lecture 6).
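The stochastic gradient algorithm in its simplest form, sketched on a linear-regression toy problem (data, batch size, and step size are illustrative assumptions): iterate over random mini-batches and step against an unbiased estimate of the gradient.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic data: y = 2x - 1 + noise
X = rng.normal(size=(1000, 1))
y = 2 * X[:, 0] - 1 + 0.1 * rng.normal(size=1000)

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(20):
    idx = rng.permutation(len(X))              # reshuffle each epoch
    for start in range(0, len(X), 32):         # mini-batches of size 32
        batch = idx[start:start + 32]
        xb, yb = X[batch, 0], y[batch]
        err = w * xb + b - yb                  # residual of current model
        w -= lr * 2 * np.mean(err * xb)        # unbiased gradient estimate of w
        b -= lr * 2 * np.mean(err)             # unbiased gradient estimate of b
```

The same loop, with the analytic gradient replaced by automatic differentiation, is exactly how deep networks are trained.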
  7. (deep simulation) expand solutions of controlled differential equations in terms of iterated integrals and explain why this is important (lecture 5).
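The level-one and level-two iterated integrals of a piecewise-linear path can be computed in closed form. The sketch below (the path is an illustrative choice of mine) also verifies the shuffle identity S12 + S21 = S1 * S2, i.e. integration by parts, which is one instance of the algebraic structure that makes the expansion useful.

```python
import numpy as np

# a piecewise-linear path in R^2, e.g. samples of a driving signal
t = np.linspace(0, 1, 50)
X = np.column_stack([np.sin(2 * np.pi * t), t**2])
dX = np.diff(X, axis=0)

# level-1 iterated integrals: just the increments
S1, S2 = X[-1] - X[0]

# level-2 iterated integrals int (X^i_s - X^i_0) dX^j_s,
# computed exactly segment by segment for a piecewise-linear path
left = X[:-1] - X[0]   # path value at the start of each segment
S12 = np.sum(left[:, 0] * dX[:, 1] + 0.5 * dX[:, 0] * dX[:, 1])
S21 = np.sum(left[:, 1] * dX[:, 0] + 0.5 * dX[:, 1] * dX[:, 0])

# shuffle identity (integration by parts): S12 + S21 = S1 * S2
gap = abs(S12 + S21 - S1 * S2)
```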
  8. (random projection of iterated integrals) explain the content of the Johnson-Lindenstrauss (JL) lemma and show how it is applied to iterated integrals (lecture 5).
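A quick numerical illustration of the JL lemma (dimensions and point count are arbitrary choices of mine): a random Gaussian projection to k dimensions approximately preserves all pairwise distances with high probability, which is what makes randomly projected signatures usable as features.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n, d, k = 20, 1000, 300

# n points in high dimension, e.g. truncated vectors of iterated integrals
X = rng.normal(size=(n, d))

# JL map: a random Gaussian matrix scaled by 1/sqrt(k)
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

# compare all pairwise distances before and after projection
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(n), 2)]
```

All ratios cluster tightly around 1 even though the dimension dropped from 1000 to 300, and k depends only logarithmically on the number of points.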
  9. (deep simulation) explain the algorithm of deep simulation (lecture 5).
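A sketch of the deep simulation idea under simplifying assumptions of my own: for the scalar linear controlled equation dY = Y dX^1 + dX^2, the expansion in iterated integrals makes the solution map almost linear in the truncated signature of the driver, so a linear readout (standing in for a network) learns it from simulated pairs.

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_seg = 2000, 10
y0 = 1.0

def solve(dX):
    # exact solution of dY = Y dX1 + dX2 along a piecewise-linear driver:
    # on each segment with increments (a, b), dY/ds = Y*a + b is a linear ODE
    y = y0
    for a, b in dX:
        y = y * np.exp(a) + b * (np.exp(a) - 1) / a if abs(a) > 1e-12 else y + b
    return y

def sig2(dX):
    # signature of the driver up to level 2 (exact for piecewise-linear paths)
    X = np.vstack([np.zeros(2), np.cumsum(dX, axis=0)])
    left = X[:-1]
    S = [1.0, X[-1, 0], X[-1, 1]]
    for i in range(2):
        for j in range(2):
            S.append(np.sum(left[:, i] * dX[:, j] + 0.5 * dX[:, i] * dX[:, j]))
    return S

# random drivers with small increments, signature features, exact solutions
dXs = rng.normal(0, 0.1, size=(n_samples, n_seg, 2))
F = np.array([sig2(d) for d in dXs])
y = np.array([solve(d) for d in dXs])

# linear regression on signature features predicts the solution map
coef, *_ = np.linalg.lstsq(F, y, rcond=None)
resid = y - F @ coef
```

The residual is dominated by level-three and higher terms of the expansion, which is why truncating the signature at a low level already simulates the equation accurately.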
  10. (deep calibration) explain the calibration problem as an inverse problem. Why are inverse problems difficult, and what makes the Bayesian approach so successful (lecture 6)?
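A toy illustration of the Bayesian view of calibration (a Black-Scholes call as the pricing map, flat prior, Gaussian observation noise; all numbers are my assumptions): instead of a single ill-posed inversion one obtains a posterior distribution over the parameter, which both regularizes the problem and quantifies the uncertainty.

```python
import numpy as np
from math import erf, exp, log, sqrt

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price: the "pricing functional" to be inverted
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return S * N(d1) - K * exp(-r * T) * N(d2)

rng = np.random.default_rng(6)
sigma_true, tau = 0.2, 0.001
obs = bs_call(1, 1, 1, 0.0, sigma_true) + tau * rng.normal()  # noisy market quote

# Bayesian inversion: posterior over sigma on a grid, flat prior
grid = np.linspace(0.05, 0.5, 451)
prices = np.array([bs_call(1, 1, 1, 0.0, s) for s in grid])
log_post = -(obs - prices) ** 2 / (2 * tau**2)   # log-likelihood up to a constant
post = np.exp(log_post - log_post.max())
post /= post.sum()
sigma_mean = np.sum(grid * post)
```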
  11. (deep calibration) explain three types of algorithms for calibration: learn the pricing functional, learn the inverse of the pricing functional, or learn the characteristics of the equations directly (lecture 6).
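The first of the three approaches, learning the pricing functional and then inverting the learned surrogate, sketched on a synthetic one-parameter pricing map (the map, the polynomial surrogate standing in for a network, and all numbers are illustrative stand-ins):

```python
import numpy as np

# toy stand-in for an expensive pricing functional theta -> price
def price(theta):
    return theta**2 + 0.1 * np.sin(5 * theta)

rng = np.random.default_rng(7)

# step 1: learn the pricing functional from simulated (theta, price) pairs
thetas = rng.uniform(0.5, 2.0, size=500)
V = np.vander(thetas, 8)   # polynomial features in place of a network
coef, *_ = np.linalg.lstsq(V, price(thetas), rcond=None)
surrogate = lambda t: np.vander(np.atleast_1d(t), 8) @ coef

# step 2: calibrate by inverting the cheap learned surrogate on a grid
obs = price(1.3)           # "market" observation with true theta = 1.3
grid = np.linspace(0.5, 2.0, 1501)
theta_hat = grid[np.argmin((surrogate(grid) - obs) ** 2)]
```

The second approach would instead regress theta on prices directly, and the third would learn the model's characteristics (e.g. its vector fields) inside the pricing equation.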
  12. (reinforcement learning) what is a Markov decision problem? Explain the most important concepts, such as environment, action space, action, and the HJB equation, in the case of the stationary problem with value function $$ V(x) = \sup_\pi E \big[ \sum_k \gamma^k r(X_k^\pi) \big] \, . $$
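For this stationary problem, and since the reward in the formula above depends only on the state, one consistent form of the dynamic programming (HJB/Bellman) equation for the value function is
$$ V(x) = r(x) + \gamma \sup_{a \in A} E \big[ V(X_1^{a}) \,\big|\, X_0 = x \big] \, , $$
where $A$ denotes the action space: today's value is today's reward plus the discounted optimal expected value of tomorrow's state.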
  13. (algorithms) How do value iteration, policy iteration, and Q-learning work in the case of the stationary problem (lecture 7)?
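Value iteration on a toy chain MDP (three states, deterministic moves, reward only in the rightmost state; all choices are my illustrative assumptions): repeatedly apply the Bellman operator V <- r + gamma * max_a V(next state) until it reaches its fixed point.

```python
import numpy as np

# tiny MDP: 3 states on a line, 2 actions (move left / move right)
n_states, gamma = 3, 0.9
r = np.array([0.0, 0.0, 1.0])   # state reward, matching r(X_k) above

def step(x, a):
    # deterministic transition, clipped at the ends of the line
    return min(max(x + (1 if a == 1 else -1), 0), n_states - 1)

# value iteration: a contraction, so it converges geometrically
V = np.zeros(n_states)
for _ in range(200):
    V = r + gamma * np.array([max(V[step(x, 0)], V[step(x, 1)])
                              for x in range(n_states)])
```

The fixed point is V = (8.1, 9, 10): always move right, and each extra step away from the reward costs one factor of gamma. Policy iteration alternates policy evaluation and greedy improvement on the same operator, and Q-learning estimates the state-action version of V from samples.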
  14. (machine learning algorithms) How are the classical algorithms connected with machine learning? Describe a Q-learning algorithm using machine learning technology in the case of a portfolio optimization problem (lecture 7).
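A sketch connecting Q-learning to a toy portfolio problem (two market regimes, a binary investment weight, log-wealth reward; the whole setup is an illustrative assumption of mine). The tabular Q below stands in for a neural network Q(s, a); a deep version would replace the table lookup by a network and the update by a gradient step on the same temporal-difference error.

```python
import numpy as np

rng = np.random.default_rng(8)

# toy portfolio MDP: 2 market regimes (0 = bad, 1 = good), action = weight in {0, 1}
mu = np.array([-0.1, 0.1])      # regime-dependent mean return
gamma, lr, eps = 0.9, 0.05, 0.2
Q = np.zeros((2, 2))            # table in place of a network Q(s, a)

s = 0
for _ in range(40000):
    # epsilon-greedy action: explore with prob eps, else act greedily
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    ret = rng.normal(mu[s], 0.1)                   # asset return in current regime
    reward = np.log1p(a * ret)                     # log-wealth increment
    s_next = s if rng.random() < 0.9 else 1 - s    # sticky regime switching
    # Q-learning update on the temporal-difference error
    Q[s, a] += lr * (reward + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

The learned greedy policy invests in the good regime and stays out of the market in the bad one, which is the qualitative optimum here.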