Q & A
- (What is a neural network?) definition and the universal approximation theorem (lecture 1).
- (Connection to controlled ODEs) backpropagation (example from lecture 2).
- (Role of randomness, Lie brackets, Chow's theorem) here only some general comments are expected (lecture 2).
- (deep hedging) describe an algorithm for implementing deep hedging in a given market environment, i.e. identify the relevant market factors (lecture 3).
- (deep portfolio optimization) describe an algorithm for implementing deep portfolio optimization in a given market environment, i.e. identify the relevant market factors (lecture 4).
- (How does learning work?) Describe the stochastic gradient algorithm (lecture 6).
- (deep simulation) expand controlled differential equations in iterated integrals and explain why this is important (lecture 5).
- (random projection of iterated integrals) explain the content of the Johnson–Lindenstrauss (JL) lemma and show how it is applied to iterated integrals (lecture 5).
- (deep simulation) explain the algorithm of deep simulation (lecture 5).
- (deep calibration) explain the calibration problem as an inverse problem. Why are inverse problems difficult, and what makes the Bayesian approach so successful (lecture 6)?
- (deep calibration) explain three kinds of calibration algorithms: learning the pricing functional, learning the inverse of the pricing functional, and learning the characteristics of the model equations directly (lecture 6).
- (reinforcement learning) what is a Markov decision problem? Explain the most important concepts, such as the environment, the action space, actions, and the HJB equation, in the case of the stationary problem with value function
$$
V(x) = \sup_\pi E \big[ \sum_k \gamma^k r(X_k^\pi) \big] \, .
$$
- (algorithms) How do value iteration, policy iteration, and Q-learning work in the case of the stationary problem (lecture 7)?
- (machine learning algorithms) How are the classical algorithms connected with machine learning? Describe a Q-learning algorithm implemented with machine-learning technology for a portfolio optimization problem (lecture 7).
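The backpropagation question can be illustrated with a minimal sketch: a two-layer network, a forward pass that caches intermediates, and a backward pass applying the chain rule, checked against a finite difference. The network shape and data here are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-layer network y = W2 @ tanh(W1 @ x); loss = 0.5 * ||y - target||^2
x = rng.normal(size=3)
target = rng.normal(size=2)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

# Forward pass, caching the intermediates needed for the backward pass
h = np.tanh(W1 @ x)
y = W2 @ h
loss = 0.5 * np.sum((y - target) ** 2)

# Backward pass: the chain rule propagated from the loss back to the weights
dy = y - target                           # dL/dy
dW2 = np.outer(dy, h)                     # dL/dW2
dh = W2.T @ dy                            # dL/dh
dW1 = np.outer(dh * (1 - h ** 2), x)      # dL/dW1, using tanh' = 1 - tanh^2

# Sanity check: compare one entry of dW1 against a finite difference
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
num = (0.5 * np.sum((W2 @ np.tanh(W1p @ x) - target) ** 2) - loss) / eps
```

The backward pass is the discrete analogue of the adjoint equation one meets when viewing the network as a controlled ODE.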
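For the deep hedging question, a minimal numerical sketch: simulate price paths, parameterize the hedge ratio as a function of the market factors (here just price and time), and minimize the mean squared terminal hedging error by gradient descent. The toy Black–Scholes market, the feature map, and the finite-difference gradients are all illustrative assumptions, not the lecture's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Black-Scholes market (all parameters illustrative)
S0, K, sigma, T = 1.0, 1.0, 0.2, 1.0
n_steps, n_paths = 30, 2000
dt = T / n_steps

# Fixed training set of price paths (the "market environment")
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
logS = np.cumsum(-0.5 * sigma ** 2 * dt + sigma * dW, axis=1)
S = S0 * np.exp(np.concatenate([np.zeros((n_paths, 1)), logS], axis=1))

def strategy(theta, s, t):
    """Hedge ratio as a small parametric function of the factors (S_t, t)."""
    feats = np.stack([np.ones_like(s), s / K, (s / K) ** 2,
                      np.full_like(s, t)], axis=-1)
    return 1.0 / (1.0 + np.exp(-feats @ theta))  # delta constrained to (0, 1)

def hedging_loss(theta):
    """Mean squared terminal error of the self-financing hedge."""
    gains = np.zeros(n_paths)
    for k in range(n_steps):
        gains += strategy(theta, S[:, k], k * dt) * (S[:, k + 1] - S[:, k])
    payoff = np.maximum(S[:, -1] - K, 0.0)
    price = payoff.mean()  # crude price proxy for the toy example
    return np.mean((payoff - price - gains) ** 2)

# Gradient descent with finite-difference gradients (a stand-in for backprop)
theta = np.array([-2.0, 0.0, 0.0, 0.0])  # deliberately bad initial hedge
loss_before = hedging_loss(theta)
for _ in range(150):
    grad = np.zeros(4)
    for i in range(4):
        e = np.zeros(4)
        e[i] = 1e-4
        grad[i] = (hedging_loss(theta + e) - hedging_loss(theta - e)) / 2e-4
    theta -= 0.5 * grad
loss_after = hedging_loss(theta)
```

In a realistic implementation the strategy would be a neural network trained by backpropagation, and the loss a convex risk measure rather than a plain squared error.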
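The stochastic gradient algorithm asked about above can be sketched on a synthetic regression problem: at each step, sample a minibatch, compute the gradient of the loss on that minibatch only, and take a small descent step. The data and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear-regression data (all values illustrative)
X = rng.normal(size=(1000, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
lr, batch = 0.1, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch)            # sample a minibatch
    residual = X[idx] @ w - y[idx]
    grad = 2 * X[idx].T @ residual / batch               # gradient of MSE on the batch
    w -= lr * grad                                       # stochastic gradient step
```

The minibatch gradient is an unbiased estimate of the full gradient, which is what makes the noisy iteration converge (in a suitable sense) to a minimizer.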
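The JL lemma question can be demonstrated numerically: a random Gaussian projection from dimension d down to k nearly preserves all pairwise distances among n points, with distortion controlled by k (and only logarithmically by n). The dimensions below are illustrative; in the lecture's setting the projected vectors would be (vectorized) iterated integrals.

```python
import numpy as np

rng = np.random.default_rng(2)

d, k, n = 5000, 800, 20  # ambient dim, projected dim, number of points (illustrative)
X = rng.normal(size=(n, d))

# A standard JL construction: Gaussian matrix with entries of variance 1/k
P = rng.normal(size=(k, d)) / np.sqrt(k)
Y = X @ P.T

def pairwise(Z):
    """All pairwise Euclidean distances of the rows of Z."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

iu = np.triu_indices(n, 1)                     # each unordered pair once
ratios = pairwise(Y)[iu] / pairwise(X)[iu]     # projected / original distance
max_distortion = np.abs(ratios - 1).max()
```

With k of this size the ratios concentrate tightly around 1, which is exactly the (1 ± ε) guarantee of the lemma.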
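For the question on classical algorithms, here is a sketch of value iteration (one of the three methods asked about) on a tiny deterministic chain MDP with the stationary discounted value function from the display above. The MDP itself is an illustrative assumption.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, reward only in the last state
# (all numbers illustrative), discount factor gamma
gamma, n_states = 0.9, 5
actions = (-1, +1)  # move left or right, clipped at the boundary
reward = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

def step(x, a):
    return min(max(x + a, 0), n_states - 1)

# Value iteration: apply the Bellman operator
#   (TV)(x) = r(x) + gamma * max_a V(step(x, a))
# repeatedly; T is a gamma-contraction, so V converges to the fixed point V*
V = np.zeros(n_states)
for _ in range(300):
    V = np.array([reward[x] + gamma * max(V[step(x, a)] for a in actions)
                  for x in range(n_states)])

# Greedy policy extracted from the (approximate) value function
policy = [max(actions, key=lambda a: V[step(x, a)]) for x in range(n_states)]
```

Policy iteration instead alternates policy evaluation and greedy improvement, and Q-learning estimates the same fixed point from sampled transitions without knowing the dynamics.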