Basic principles of pricing and hedging

We shall first go through the basic principles of modeling in mathematical finance before we introduce concepts of interest rate theory.

We shall denote by $ S^i $ the price of asset $i$ and by $ H^i $ the holdings in asset $i$ (these can be any real numbers; no market frictions are assumed). $ H^i_t $ is determined at time $ t-1 $ with all the information available there.

First we learn to write portfolio values, i.e. $ V_t = \sum_{i=0}^d H^i_tS^i_t $ as P & L processes in case the portfolio is self-financing. Self-financing means that $$ \sum_{i=0}^d H^i_{t+1}S^i_t = \sum_{i=0}^d H^i_tS^i_t \, , $$ which in turn leads to $$ V_{t+1} - V_t = \sum_{i=0}^d H^i_{t+1} (S^i_{t+1} - S^i_{t}) \, . $$ This means that the change in value of the portfolio comes from the change in value of the prices and nothing else.

This formula allows for a simplification. If we divide everything by $S^0$, the price of the $0$-th asset, then one term in the above sum vanishes. We denote the discounted prices by $ X^i_t = S^i_t/S^0_t $. In discounted terms this means $$ \frac{V_t}{S^0_t} - \frac{V_0}{S^0_0} = (H \bullet X)_t = \sum_{s < t} \sum_{i=1}^d H^i_{s+1} (X^i_{s+1} - X^i_{s})\, . $$ Notice that the inner sum only starts at $1$ because $ X^0_t = 1 $.
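As a quick sanity check, the discounted value identity can be verified numerically in a toy two-asset market (all numbers below are made up for illustration): asset $0$ is the numeraire, the holdings $H^1$ are chosen arbitrarily, and the holdings in the numeraire are fixed by the self-financing condition.

```python
import numpy as np

np.random.seed(1)
T = 5
S0 = 1.01 ** np.arange(T + 1)                                      # numeraire (bank account)
S1 = 100 * np.cumprod(np.r_[1.0, 1 + 0.01 * np.random.randn(T)])   # risky asset
H1 = np.random.randn(T + 1)                                        # arbitrary holdings in asset 1

V = np.empty(T + 1)
V[0] = 100.0
for t in range(T):
    # self-financing rebalancing: holdings in the numeraire absorb the cost
    H0 = (V[t] - H1[t + 1] * S1[t]) / S0[t]
    V[t + 1] = H0 * S0[t + 1] + H1[t + 1] * S1[t + 1]

# discounted identity: V_t/S0_t - V_0/S0_0 = sum_{s<t} H1_{s+1} (X1_{s+1} - X1_s)
X1 = S1 / S0
gains = np.cumsum(H1[1:] * np.diff(X1))
print(np.allclose(V[1:] / S0[1:] - V[0] / S0[0], gains))  # True
```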

The right hand side is a P & L process. The argument can be turned around: given a portfolio value which is given by a constant plus a P & L process, then we can of course construct a self-financing portfolio.

Consider now a one period case: we can ask whether arbitrages are possible and under which conditions (for which price) payoffs can be dominated (super-hedged) by self-financing portfolios.

A model is free of arbitrage if there is no self-financing portfolio which starts at zero and has a non-negative outcome that is positive with positive probability.

We assume from now on that any positive outcome of a self-financing portfolio cannot come for free, i.e. the initial capital $x$ has to be positive, too.

This leads us to the valuation problem: what is the value of a payoff $f$ at time $T$? We can answer that by constructing appropriate (super-)hedging portfolios. Superhedging here just means "dominating".

Let us consider this question in a one period case, i.e. $T=1$. The states of the world are denoted by $\omega$; hence we are interested in finding the smallest $ x $ such that for all $ \omega $ $$ f(\omega) \leq x + \sum_{i=1}^d H^i_1 (X^i_1(\omega)-X^i_0(\omega)) \, . $$ This is equivalent to characterizing the cone $ C:= \{ (H \bullet X)_1 - g \,:\, H \text{ a strategy},\ g \geq 0\} $.

This is a geometric question: the solution is $ C = \{ e \text{ such that } E_Q[e] \leq 0 \text{ for all equivalent martingale measures } Q \} $. This yields the beautiful formula $$ \sup_{Q \in \mathcal{M}} E_Q[f] = \inf \{x \text{ such that there is a strategy } H \text{ with } f \leq x + (H \bullet X)_1 \} \, . $$ The set $\mathcal{M} $ is the set of equivalent martingale measures. A super-hedging portfolio is a self-financing portfolio (i.e. the value process is the initial value of the portfolio plus the P&L process -- all in discounted terms) dominating a given payoff.
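In a one-period trinomial toy model (all numbers made up for illustration) both sides of this duality can be computed directly: the primal side by minimizing over strategies $H$, the dual side by maximizing over the one-parameter family of martingale measures.

```python
import numpy as np

# One-period trinomial model, zero interest rate (S^0 = 1, so X = S)
S0 = 100.0
S1 = np.array([90.0, 100.0, 110.0])    # possible prices at time 1
f = np.maximum(S1 - 95.0, 0.0)         # call payoff with strike 95

# Primal: inf over x such that x + H*(S1 - S0) >= f in all states.
# For fixed H the smallest such x is max over states of f - H*(S1 - S0);
# minimize this piecewise-linear convex function over a grid of H values.
Hs = np.linspace(-5.0, 5.0, 100001)
costs = np.max(f[None, :] - Hs[:, None] * (S1 - S0)[None, :], axis=1)
superhedge_price = costs.min()

# Dual: sup over equivalent martingale measures Q of E_Q[f].
# The martingale condition q_d*90 + q_m*100 + q_u*110 = 100 forces q_u = q_d;
# parameterize q_u = q_d = p with 0 < p < 1/2.
ps = np.linspace(1e-6, 0.5 - 1e-6, 100001)
values = ps * f[0] + (1 - 2 * ps) * f[1] + ps * f[2]
dual_price = values.max()

print(superhedge_price, dual_price)  # both approximately 7.5
```

The two numbers agree, as the duality formula predicts; note that the supremum on the dual side is approached but not attained by an equivalent measure.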

We have seen in one-step bi- and trinomial models that pricing and hedging stand in a fundamental duality relationship for a given payoff: the largest arbitrage-free price equals the smallest price of a super-hedging portfolio. Super-hedging prices can be calculated by backwards induction.

This yields the following pricing formula: if a contract with payoff $f$ is liquidly traded at price $ \pi_t(f) $ at intermediate times $ t $, then there exists an equivalent martingale measure $Q$ for the given market constituted by $X^1,\dots,X^d$ such that

$$ E_Q\big[ \frac{f}{S^0_T} | \mathcal{F}_t \big] = \frac{\pi_t(f)}{S^0_t} . $$ Let us consider these principles in the following modeling situations.

In [106]:
import numpy as np
from itertools import product


# EUROPEAN
S0 = 100.
u  = 1.1
d  = 0.9
payoffE = lambda S: 10*(S >= S0)

timesteps = 1
outcomes = {0, 1}
trajectories = set(product(outcomes, repeat=timesteps))


import pygraphviz as PG
from IPython.display import Image
binomialtreeforward = PG.AGraph(directed=True, strict=True)
binomialtreeforward.edge_attr.update(len='2.0',color='blue')
binomialtreeforward.node_attr.update(color='red')
process = {(omega,0):S0 for omega in trajectories}

#construct process by forward steps
for time in range(1,timesteps+1):
    for omega in trajectories:
        shelper = process[(omega,time-1)]*u**(omega[time-1])*d**(1.-omega[time-1])
        process.update({(omega,time):shelper})

for time in range(1,timesteps+1):
    for omega in trajectories:
        binomialtreeforward.add_edge('%d, %d'% (time-1,process[(omega,time-1)]),'%d, %d'% (time,process[(omega,time)]))

def condprob(omega, omegahelper, time):
    # conditional risk-neutral probability of trajectory omegahelper given
    # the path up to `time`; with zero interest rate the one-step up
    # probability is q = (1-d)/(u-d) and the down probability is (u-1)/(u-d)
    if omega[0:time] == omegahelper[0:time]:
        prod = 1
        for i in range(time, timesteps):
            prod *= ((1 - d) / (u - d)) ** omegahelper[i] * ((u - 1) / (u - d)) ** (1 - omegahelper[i])
        return prod
    else:
        return 0.0
binomialtreebackward = PG.AGraph(directed=True, strict=True)
processbackward = {(omega,timesteps):(payoffE(process[(omega,timesteps)])) for omega in trajectories}
#backwardssteps: European
for time in reversed(range(1,timesteps+1)):
    for omega in trajectories:
        shelper=0                                   
        for omegahelper in trajectories:
            shelper += processbackward[(omegahelper,time)]*condprob(omega,omegahelper,time-1)
        processbackward.update({(omega,time-1):shelper})

for time in range(1,timesteps+1):
    for omega in trajectories:
        binomialtreebackward.add_edge('%d, %d, %f'% (time-1,process[(omega,time-1)],processbackward[(omega,time-1)]),
                                      '%d, %d, %f'% (time,process[(omega,time)],processbackward[(omega,time)]))

Image(binomialtreeforward.draw(format='png',prog='dot'))        
Out[106]:
In [107]:
Image(binomialtreebackward.draw(format='png',prog='dot'))
Out[107]:

Furthermore, models are free of arbitrage if and only if there exists an equivalent pricing measure, which in particular assigns the value $0$ to P & L processes. Hence prices of payoffs can be calculated by taking expectations with respect to this equivalent pricing measure (i.e. a Monte Carlo evaluation is possible).
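As a minimal sketch of such a Monte Carlo evaluation (plain numpy, not the dx library used below; parameters chosen to match the later example), one can price a European call by averaging the discounted payoff under an assumed risk-neutral geometric Brownian motion:

```python
import numpy as np

# Risk-neutral Monte Carlo pricing of a European call under GBM.
# Parameters mirror the dx example further below: S0=100, K=95, sigma=0.2, T=1, r=0.
np.random.seed(0)
S0, K, sigma, T, r = 100.0, 95.0, 0.2, 1.0, 0.0
n = 200000

# simulate terminal values under the risk-neutral measure
Z = np.random.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)

# price = discounted expectation of the payoff under Q
price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
print(round(price, 2))  # close to the Black-Scholes value of about 10.5
```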

Next we shall see the concepts of pricing and hedging realized in a geometric Brownian motion market environment. We shall use code from https://github.com/yhilpisch/dx for this purpose.

In [68]:
import dx
import datetime as dt
import pandas as pd
import seaborn as sns; sns.set()
import numpy as np

First we shall define some market environment:

In [69]:
r = dx.constant_short_rate('r', 0.)
  # a constant short rate

cas_1 = dx.market_environment('cas', dt.datetime(2016, 1, 1))
    
cas_1.add_constant('initial_value', 100.)
  # starting value of simulated processes
cas_1.add_constant('volatility', 0.2)
  # volatility factor
cas_1.add_constant('final_date', dt.datetime(2017, 1, 1))
  # horizon for simulation
cas_1.add_constant('currency', 'EUR')
  # currency of instrument
cas_1.add_constant('frequency', 'D')
  # frequency for discretization
cas_1.add_constant('paths', 10000)
  # number of paths
cas_1.add_curve('discount_curve', r)
  # discount curve

Let us introduce a geometric Brownian motion in the above market environment.

In [70]:
gbm_cas_1 = dx.geometric_brownian_motion('gbm_1', cas_1)

We can obtain the trajectories of the GBM generated on the predefined daily time grid.

In [71]:
paths_gbm_cas_1 = pd.DataFrame(gbm_cas_1.get_instrument_values(), index=gbm_cas_1.time_grid)
In [72]:
%matplotlib inline
paths_gbm_cas_1.loc[:, :10].plot(legend=False, figsize=(10, 6))
Out[72]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f49978406d8>

Next we define a new market environment which inherits from the previous one but has an additional European call option.

In [73]:
strike = 95.
cas_1_opt = dx.market_environment('cas_1_opt', cas_1.pricing_date)
cas_1_opt.add_environment(cas_1)
cas_1_opt.add_constant('maturity', dt.datetime(2017, 1, 1))
cas_1_opt.add_constant('strike', strike)
In [74]:
eur_call = dx.valuation_mcs_european_single(
            name='eur_call',
            underlying=gbm_cas_1,
            mar_env=cas_1_opt,
            payoff_func='np.maximum(maturity_value - strike, 0)')

The present value of the European call is uniquely given by no arbitrage arguments.

In [75]:
eur_call.present_value()
Out[75]:
10.479573

The delta is the sensitivity with respect to the initial value.

In [76]:
eur_call.delta()
Out[76]:
0.64149999999999996

The gamma is the sensitivity of the delta with respect to the initial value.

In [77]:
eur_call.gamma()
Out[77]:
0.017600000000000001

The vega is the sensitivity with respect to volatility.

In [78]:
eur_call.vega()
Out[78]:
37.115600000000001

The theta is the sensitivity with respect to maturity.

In [79]:
eur_call.theta()
Out[79]:
-4.2138

The rho is the sensitivity with respect to the interest rate.

In [80]:
eur_call.rho()
Out[80]:
53.687800000000003

The previous quantities allow us to understand the risks of the given derivative, from the point of view of the chosen model, in terms of sensitivities.

In the sequel we demonstrate the precise meaning of the option's delta by means of a running hedging portfolio.

In [81]:
path = gbm_cas_1.get_instrument_values()[:,0]
timegrid = gbm_cas_1.time_grid
presentvalue = eur_call.present_value()
n = len(path)
pnl = [presentvalue]
optionvalue = [np.maximum(0,path[0]-eur_call.strike)]
In [82]:
for i in range(n-1):
    r = dx.constant_short_rate('r', 0.)
    # a constant short rate

    running = dx.market_environment('running', timegrid[i])
    
    running.add_constant('initial_value', path[i])
    # starting value of simulated processes
    running.add_constant('volatility', 0.2)
    # volatility factor
    running.add_constant('final_date', dt.datetime(2017, 1, 1))
    # horizon for simulation
    running.add_constant('currency', 'EUR')
    # currency of instrument
    running.add_constant('frequency', 'W')
    # frequency for discretization
    running.add_constant('paths', 10000)
    # number of paths
    running.add_curve('discount_curve', r)
    # discount curve
    gbm_running = dx.geometric_brownian_motion('gbm_running', running)
    opt_running = dx.market_environment('opt_running', running.pricing_date)
    opt_running.add_environment(running)
    opt_running.add_constant('maturity', dt.datetime(2017, 1, 1))
    opt_running.add_constant('strike', strike)
    eur_call = dx.valuation_mcs_european_single(
            name='eur_call',
            underlying=gbm_running,
            mar_env=opt_running,
            payoff_func='np.maximum(maturity_value - strike, 0)')
    #print(path[i])
    #print(timegrid[i])
    #print(eur_call.delta())
    pnl.append(pnl[-1] + eur_call.delta() * (path[i + 1] - path[i]))
    optionvalue.append(np.maximum(0, path[i + 1] - eur_call.strike))
In [83]:
data = [[optionvalue[j], pnl[j]] for j in range(n)]
In [84]:
paths_hedge = pd.DataFrame(data, index=gbm_cas_1.time_grid)
%matplotlib inline
paths_hedge.loc[:,:].plot(legend=True, figsize=(10, 6))
Out[84]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f4997644dd8>

Interest rate terminology

Prices of zero-coupon bonds (ZCB) with maturity $ T $ are denoted by $P(t,T)$. Interest rates are governed by a market of (default free) zero-coupon bonds modeled by stochastic processes $ {(P(t,T))}_{0 \leq t \leq T} $ for $ T \geq 0 $. We assume the normalization $ P(T,T)=1 $.

$T$ denotes the maturity of the bond, $P(t,T)$ its price at a time $t$ before maturity $T$.

The yield $$ Y(t,T) = - \frac{1}{T-t} \log P(t,T) $$ describes the continuously compounded interest rate p.a. for maturity $T$.
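For instance, with made-up numbers, a ZCB maturing in five years and trading at $0.85$ carries a yield of roughly $3.25\%$:

```python
import numpy as np

# Toy illustration: the yield implied by a zero-coupon bond price
t, T = 0.0, 5.0
P = 0.85                     # price today of the ZCB maturing at T
Y = -np.log(P) / (T - t)     # continuously compounded yield p.a.
print(round(Y, 4))           # 0.0325
```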

The forward rate curve $f$ of the bond market is defined via \begin{align*} P(t,T) & =\exp\Big(-\int_{t}^{T}f(t,s)\,ds\Big) \end{align*} for $0\leq t\leq T$.
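Equivalently, $f(t,T) = -\partial_T \log P(t,T)$. This can be checked numerically for an assumed affine forward curve $f(t,s) = a + b\,(s-t)$ (the numbers $a$, $b$ are illustrative), for which the integral defining $P(t,T)$ is available in closed form:

```python
import numpy as np

# assumed affine forward curve f(t,s) = a + b*(s - t), with t = 0
a, b = 0.03, 0.002
fwd = lambda s: a + b * s

# ZCB price from the forward curve: integral of an affine function is quadratic
P = lambda T: np.exp(-(a * T + 0.5 * b * T ** 2))

# recover f(0,T) = -d/dT log P(0,T) by a central finite difference
T, h = 4.0, 1e-5
f_numeric = -(np.log(P(T + h)) - np.log(P(T - h))) / (2 * h)
print(round(f_numeric, 6), round(fwd(T), 6))  # both 0.038
```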

Let us understand a bit the dynamics of yield curves and forward rate curves at this point:

In [86]:
from mpl_toolkits.mplot3d import Axes3D
import copy as copylib
from progressbar import *
%pylab
%matplotlib inline
import pandas
pylab.rcParams['figure.figsize'] = (16, 4.5)
numpy.random.seed(0);
Using matplotlib backend: TkAgg
Populating the interactive namespace from numpy and matplotlib

First we load some historical data:

In [87]:
dataframe = pandas.read_csv('hjm_data.csv')
In [88]:
dataframe = dataframe / 100  # convert interest rates from percent to decimals
pandas.options.display.max_rows = 10
display(dataframe)
0.08333333 0.5 1 1.5 2 2.5 3 3.5 4 4.5 ... 20.5 21 21.5 22 22.5 23 23.5 24 24.5 25
0 0.057734 0.064382 0.067142 0.066512 0.064991 0.063255 0.061534 0.059925 0.058444 0.057058 ... 0.034194 0.034772 0.035371 0.035985 0.036612 0.037252 0.037902 0.038562 0.039231 0.039908
1 0.057680 0.064506 0.067502 0.066842 0.065423 0.063852 0.062301 0.060846 0.059490 0.058198 ... 0.033790 0.034437 0.035108 0.035798 0.036504 0.037224 0.037959 0.038705 0.039461 0.040227
2 0.057758 0.064410 0.067354 0.066845 0.065577 0.064109 0.062611 0.061164 0.059782 0.058438 ... 0.032706 0.033294 0.033907 0.034539 0.035188 0.035853 0.036533 0.037224 0.037927 0.038639
3 0.057430 0.064103 0.066942 0.066215 0.064904 0.063462 0.062006 0.060601 0.059252 0.057933 ... 0.031325 0.031891 0.032486 0.033106 0.033748 0.034409 0.035088 0.035784 0.036493 0.037214
4 0.057412 0.063978 0.066358 0.065502 0.064168 0.062722 0.061262 0.059849 0.058488 0.057157 ... 0.030119 0.030667 0.031250 0.031862 0.032499 0.033161 0.033844 0.034546 0.035264 0.035997
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
1259 0.046421 0.045093 0.042471 0.042081 0.042663 0.043224 0.043640 0.043940 0.044156 0.044314 ... 0.040260 0.040098 0.039950 0.039813 0.039687 0.039571 0.039464 0.039365 0.039273 0.039187
1260 0.046233 0.044976 0.042452 0.042131 0.042726 0.043285 0.043699 0.043998 0.044214 0.044374 ... 0.040307 0.040147 0.039999 0.039863 0.039737 0.039621 0.039514 0.039414 0.039320 0.039233
1261 0.046348 0.045311 0.043295 0.043266 0.043963 0.044565 0.045003 0.045323 0.045560 0.045739 ... 0.041276 0.041118 0.040972 0.040840 0.040718 0.040607 0.040505 0.040411 0.040324 0.040244
1262 0.046327 0.045347 0.043184 0.043144 0.043859 0.044482 0.044941 0.045279 0.045530 0.045721 ... 0.041135 0.040981 0.040841 0.040714 0.040599 0.040495 0.040400 0.040314 0.040235 0.040163
1263 0.046138 0.045251 0.042916 0.042833 0.043498 0.044054 0.044440 0.044708 0.044903 0.045057 ... 0.040665 0.040501 0.040353 0.040219 0.040098 0.039989 0.039891 0.039802 0.039721 0.039648

1264 rows × 51 columns

In [89]:
hist_timeline = list(dataframe.index)
tenors = [float(x) for x in dataframe.columns]
hist_rates = matrix(dataframe)
In [90]:
plot(hist_rates), xlabel(r'Time $t$'), title(r'Historical $f(t,\tau)$ by $t$'), show()
plot(tenors, hist_rates.transpose()), xlabel(r'Tenor $\tau$'), title(r'Historical $f(t,\tau)$ by $\tau$');