Total market risk capital = general market risk capital
                          + basic specific risk capital
                          + incremental risk charge   (2)
where the incremental risk charge (IRC) is the 99.9% VaR of the one-year loss
distribution arising from default and credit migration.
In this paper, we present a methodology for calculating IRC. First, a
Merton-type model is introduced for simulating default and migration. The
model is modified to incorporate concentration, and its calibration is
elaborated. Second, a simple approach to determining market data, including
equity prices, in response to default and credit migration is presented.
Next, a methodology for maintaining a constant level of risk is described,
including the details of applying the constant level of risk assumption and
aggregating different subportfolios. Finally, empirical and numerical results
are presented.
Simulation of Default and Credit Migration
The IRC encompasses all positions subject to a capital charge for specific
interest rate risk under the internal models approach, with the exception of
securitizations and nth-to-default credit derivatives. Equity positions are
optional. For IRC-covered positions, the IRC captures default risk and credit
migration risk only.
Simulation Model
Most of the portfolio models of credit risk used in the banking industry are
based on the conditional independence framework. In these models, defaults
and credit migrations of individual borrowers depend on a set of common
systematic risk factors describing the state of the economy.
Merton-type models have become very popular. The Merton-type model (or
standardized Merton model) is

X_i = w_i Z + sqrt(1 - w_i²) ε_i   (3)

where
Z, ε_i = independent standard normal random variables;
Z = the systematic risk factor;
ε_i = the idiosyncratic risk for issuer/obligor i;
w_i = the weighted correlation reflecting the impact of the systematic risk
factor on issuer/obligor i;
X_i = the normalized asset return, or creditworthiness indicator, for
issuer/obligor i.
This model has become the most popular one in default and migration risk
modeling and forms the core of the Basel II capital rule (see Heitfield
[2003]).
Like the original Merton model, this model assumes that default and migration
occur only at the end of the horizon, which achieves significant
simplification.
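As an illustration (not part of the original paper), the one-factor simulation in (3) can be sketched in Python; the function name and parameter values below are hypothetical:

```python
import numpy as np

def simulate_asset_returns(w, n_issuers, n_scenarios, seed=0):
    """Standardized Merton one-factor model (3):
    X_i = w_i * Z + sqrt(1 - w_i^2) * eps_i."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w, dtype=float)
    Z = rng.standard_normal((n_scenarios, 1))            # systematic factor, shared per scenario
    eps = rng.standard_normal((n_scenarios, n_issuers))  # idiosyncratic terms, one per issuer
    return w * Z + np.sqrt(1.0 - w**2) * eps

x = simulate_asset_returns(w=[0.3, 0.5], n_issuers=2, n_scenarios=100000)
```

Each X_i is standard normal, and the asset correlation between issuers i and j is w_i * w_j, which the simulation reproduces.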
Simulation model for multiple-liquidity-horizon subportfolios
Liquidity horizons are determined for each position to reflect actual
practice and experience during periods of both systematic and idiosyncratic
stress. The total portfolio shall be divided into subportfolios based on the
different liquidity horizons. Assume there are two subportfolios with
different liquidity horizons: 3 months and 6 months. To model the different
liquidity periods, one can use model (3) but calibrate different w_i's, such
as w_i(3m) and w_i(6m), for the different periods.
Alternatively, one can use a multiple-period model:

X_i = w_i Z_1 + sqrt(1 - w_i²) ε_i,1                     for 3 months   (4)
X_i = w_i (β_1 Z_1 + β_2 Z_2) + sqrt(1 - w_i²) ε_i,2     for 6 months   (5)

where ε_i,k is unique to each period under issuer i, the Z_k are independent
standard normal variables representing the systematic risk in different time
periods, and β_k is an exponentially declining weight with β_1² + β_2² = 1
(see Dunn [2008]).
Calibration of w_i
The most popular approaches to calibrating the asset correlation are maximum
likelihood estimation or regression based on time-series default data.
Alternatively, the new Basel Capital Accord gives a formula for the
derivation of the risk-weight asset correlation for corporate, sovereign, and
bank exposures (see Tasche [2004] and Basel [2003]):

ρ(PD) = 0.12 (1 - e^(-50·PD)) / (1 - e^(-50))
      + 0.24 [1 - (1 - e^(-50·PD)) / (1 - e^(-50))]   (6)

where PD is the probability of default.
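The supervisory formula (6) is straightforward to implement; a minimal sketch, with the function name our own:

```python
import math

def basel_asset_correlation(pd):
    """Basel II supervisory asset correlation (6) for corporate, sovereign,
    and bank exposures: an exponential interpolation between 0.24 (as PD -> 0)
    and 0.12 (as PD -> 1)."""
    k = (1.0 - math.exp(-50.0 * pd)) / (1.0 - math.exp(-50.0))
    return 0.12 * k + 0.24 * (1.0 - k)
```

The correlation decreases monotonically in PD, reflecting the assumption that high-PD obligors are driven more by idiosyncratic than systematic risk.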
Concentration
The phenomenon we need to model is that concentration results in a higher IRC
number compared to the non-concentrated case; furthermore, the more
concentrated a portfolio is, the higher the IRC it generates. To achieve
this, we model the effect of issuer and market concentration, as well as the
clustering of default and migration, by introducing another parameter: the
concentration parameter λ.
There are two correlations we need to consider: correlation between
credit migration and default events of obligors and correlation between
credit migration/default events and systematic market risk factors. The
study (see Kim [2009]) shows that the correlation between credit
migration/default events and systematic market risk factors is very
small and negligible. However, correlation between credit migration and
default events of obligors is significant and cannot be ignored.
Therefore, the concentration parameter depends solely on the correlation
between credit migration and default events.
Our methodology is based on a simple mechanism for coupling
issuer/market concentrations to migrations and defaults. In the
simulation framework (3) or (4) and (5), the probability of a migration
or default increases with the asset volatility. Since the effect of
increasing concentration within a sector is to increase the probability
of migration/default events within that sector, we model increased
concentration as an increase in the volatility of the systematic risk
driver. All positions sensitive to that risk driver will have an
increased probability of migration/default events occurring. The
modified simulation model is

X_i = w_i λ Y + sqrt(1 - w_i²) ε_i   (7a)

where λ is the weighted concentration factor depending on the correlation
between issuer default and migration events, and

Y = Σ_k β_k Z_k   (7b)

where, if one uses (3), β_k = 0 for k > 1 and Y = Z_1. Otherwise, β_k is a
time-declining weight and the Z_k are independent standard normal random
variables representing the systematic risk in different time periods.
Calibration of λ
The calibration is based on the credit migration matrix. It can be performed
using either an analytic closed form or Monte Carlo simulation. In theory,
one can use Pearson's product-moment correlation or Kendall's τ.
Determination of default and credit migration
The simulated asset return X_i, combined with migration/default thresholds,
is used to ascertain when default or migration is deemed to occur. The
calculation of the thresholds of credit migration and default is based on the
credit migration probabilities (see JP Morgan [1997]). Using a BBB issuer as
an example, given the migration matrix we can calculate the thresholds as
T_k = Φ⁻¹(Σ_{j≤k} p_j), where Φ is the cumulative standard normal
distribution and the p_j are the migration probabilities ordered from default
upward. The rating bands and thresholds are shown in Figure 1.
Figure 1 Credit migration rating thresholds (for BBB)
If the normalized asset return of the issuer is smaller than T_D, it
defaults. If the normalized asset return is between T_D and T_CCC, it
migrates to CCC, and so on. We use an effective middle value to represent
each band:

Ȳ_k = Φ⁻¹((Φ(T_{k-1}) + Φ(T_k)) / 2)   (8)
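A sketch of the threshold and mid-band calculation, using only the standard library; the BBB migration row below is illustrative (ordered from default to AAA), and reading (8) as the quantile of each band's cumulative-probability midpoint is our interpretation:

```python
from statistics import NormalDist

def migration_thresholds(row_probs):
    """Thresholds on the standard normal asset return for one rating row.
    row_probs: migration probabilities ordered from worst (default) to best,
    summing to 1. Band k is (t_{k-1}, t_k], with t_0 = -inf, t_K = +inf."""
    N = NormalDist()
    out, cum = [], 0.0
    for p in row_probs[:-1]:
        cum += p
        out.append(N.inv_cdf(min(max(cum, 1e-12), 1 - 1e-12)))
    return out

def band_midpoints(row_probs):
    """Effective middle value of each band: the normal quantile of the
    midpoint of the band's cumulative-probability range (one reading of (8))."""
    N = NormalDist()
    out, lo = [], 0.0
    for p in row_probs:
        out.append(N.inv_cdf(min(max(lo + p / 2.0, 1e-12), 1 - 1e-12)))
        lo += p
    return out

# illustrative BBB row: [default, CCC, B, BB, BBB, A, AA, AAA]
bbb = [0.0018, 0.0012, 0.0117, 0.0530, 0.8693, 0.0595, 0.0033, 0.0002]
t = migration_thresholds(bbb)
mid = band_midpoints(bbb)
```

A simulated X_i is then bucketed against `t` to decide the post-horizon rating, and the band's entry in `mid` is the representative draw.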
Calibration of transition matrix, default probability (PD),
and loss given default (LGD)
The straightforward cohort approach, which has become the industry standard,
is used to estimate transition matrices based on obligors' rating histories.
The PD estimate is based on EDF data and benchmarked against internal default
history. Internal data, benchmarked against relevant external proxy data, is
used for the LGD parameter.
Credit Spreads and Equity Prices
After simulating default and migration for all issuers/obligors, we need to
price every instrument in order to generate loss distributions. The question
is whether we should also simulate market data.
The earlier version of the Basel IRC paper (see Basel [2008]) requires
financial institutions to capture four risks: default, credit migration,
significant credit spread changes, and significant equity price changes.
However, the new guideline (see Basel [2009 a]) limits the risks to default
and credit migration only. In addition, a separate Basel paper (see Basel
[2009 b]) further states that IRC contains only incremental default and
migration risks, while all price risks belong to the comprehensive risk.
These messages give a clear indication that only default and credit migration
are risk factors in IRC, and market prices/data are not. Therefore, we
recommend simulating default and migration only, and not simulating any
market prices/data.
We assume all market prices/data are deterministic (flat) and use forward
prices/data for valuation. Fat-tail behavior and market correlations are
embedded in the market data. Keeping these parameters constant ensures that
we measure only the P&L variation due to credit rating changes (migration or
default), per the IRC requirements. The selection of credit spreads and
equity prices, however, should reflect the credit quality changes.
Credit spreads
All issuers/obligors shall be divided into credit groups based on geography
and sector. Assume that credit spreads for the different ratings under each
group are available. Then we can select the associated credit spread to price
a bond or a CDS according to the creditworthiness simulation of the
issuer/obligor.
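A minimal sketch of the spread selection and a toy bond repricing; the spread table, numbers, and function names are hypothetical:

```python
# hypothetical spread table (basis points) for one geography/sector group
SPREADS_BPS = {"AAA": 30, "AA": 45, "A": 70, "BBB": 130, "BB": 300, "B": 550, "CCC": 1100}

def spread_for(rating, table=SPREADS_BPS):
    """Select the credit spread matching the simulated post-migration rating."""
    return table[rating] / 10000.0  # basis points -> decimal

def bond_price(face, coupon, years, r, rating, table=SPREADS_BPS):
    """Toy repricing: discount fixed annual cash flows at r + spread(rating),
    so a downgrade (wider spread) lowers the price and an upgrade raises it."""
    s = spread_for(rating, table)
    df = lambda t: 1.0 / (1.0 + r + s) ** t
    return sum(face * coupon * df(t) for t in range(1, years + 1)) + face * df(years)
```

In the full methodology the revaluation would use the bank's actual pricers; the point here is only that the spread input follows the simulated rating.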
Equity prices
In the risk-neutral world, the forward equity price at future time T is

F = S_0 e^(rT)   (9)

where r is the risk-free interest rate and S_0 is today's spot equity price.
If the issuer defaults at T, the equity price should be 0. If the issuer is
upgraded or downgraded, the equity price should be larger or smaller than the
risk-neutral forward price F. This is the phenomenon we are going to model:

S_T(default) = 0,   S_T(downgrade) < F < S_T(upgrade)   (10)
The underlying dynamic of the Merton model is

dA = r A dt + σ A dW   (11)

where A is the corporate asset value, r is the risk-free interest rate, σ is
the asset volatility, and W is a Wiener process.
Applying Ito's lemma, we have

A_T = A_0 exp((r - σ²/2) T + σ sqrt(T) y)   (12)

where y denotes a standard normal variable.
The Merton model states that the equity of a company is a European call
option on the assets of the company, with maturity T and a strike price equal
to the face value of the debt that will become due at T. The payoff of the
Merton model is

S_T = max(A_T - D, 0)   (13)

where D denotes the debt of the company.
The mathematical expression of the Merton model is

S_0 = A_0 Φ(d_1) - D e^(-rT) Φ(d_2)   (14)

where

d_1 = [ln(A_0/D) + (r + σ²/2) T] / (σ sqrt(T)),   d_2 = d_1 - σ sqrt(T)
We still use the BBB issuer as an example. Based on (8), (12), and (13), the
equity price at T, if default occurs, is

max(A_0 exp((r - σ²/2) T + σ sqrt(T) Ȳ_D) - D, 0) = 0   (15)

The equity price at T without credit quality changes is

max(A_0 exp((r - σ²/2) T + σ sqrt(T) Ȳ_BBB) - D, 0) = F   (16)

We solve equations (14), (15), and (16) to get A_0, σ, and D. Then, with the
known A_0, σ, and D, we can obtain the equity price at T under any credit
rating according to (8) and (13). For instance, when the rating changes from
BBB to A, the equity price at T is

S_T^A = max(A_0 exp((r - σ²/2) T + σ sqrt(T) Ȳ_A) - D, 0)   (17)
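The pricing pieces (12)-(14) can be sketched as follows; the parameter values in the usage line are illustrative, and the calibration that solves (14)-(16) for A_0, σ, and D is omitted:

```python
import math
from statistics import NormalDist

N = NormalDist().cdf

def merton_equity_today(A0, sigma, D, r, T):
    """Equation (14): equity as a European call on the firm's assets."""
    d1 = (math.log(A0 / D) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return A0 * N(d1) - D * math.exp(-r * T) * N(d2)

def equity_at_T(A0, sigma, D, r, T, y):
    """Equations (12)-(13): terminal equity payoff max(A_T - D, 0), with A_T
    driven by a standard normal draw y (e.g. a band's effective middle value)."""
    A_T = A0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * y)
    return max(A_T - D, 0.0)

v0 = merton_equity_today(100.0, 0.3, 80.0, 0.02, 1.0)  # illustrative inputs
```

A sufficiently low draw y drives A_T below the debt level D, so the equity payoff hits zero, matching (15); higher draws give the upgrade/downgrade ordering in (10).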
Constant Level of Risk
The constant level of risk reflects recognition by regulators that
securities/derivatives held in the trading book are generally much more
liquid than those in the banking book, where a buy-and-hold assumption
over one year may be reasonable. It implies that IRC should be modeled
under the assumption that banks rebalance their portfolio several times
over the capital horizon in order to maintain a constant risk profile as
market conditions evolve. Of course, we do not suggest that the constant
level of risk framework be taken literally as a model of banks’
behavior: clearly portfolios are altered on a daily basis, not simply
held constant for some period then instantaneously rebalanced. Rather,
we regard the rollover interpretation as being a reasonable
approximation to the way banks manage their trading portfolios over a
certain horizon. In general, one should model a constant level of risk
instead of a constant portfolio over the one-year capital horizon.
There are several ways to interpret constant level of risk: constant
loss distribution or constant risk metrics (e.g. VaR). We believe the
constant loss distribution assumption is the most rigorous. Under this
assumption, the same metrics (e.g. VaR, moments, etc.) can be achieved
for each liquidity horizon.
The liquidity horizon for a position or set of positions has a floor of three
months. Let us use three months as an example. We interpret constant level of
risk to mean that the bank holds its portfolio constant for the liquidity
horizon, then rebalances by selling any defaulted, downgraded, or upgraded
positions and replacing them so that the portfolio is returned to the level
of risk it had at the beginning. The process is repeated 4 times over the
capital horizon, resulting in 4 independent and identical loss distributions.
The one-year constant level of risk loss distribution is the convolution of 4
copies of the three-month loss distribution. In a Monte Carlo context, this
can be modeled by drawing 4 times from the single-period loss distribution
measured over the liquidity horizon. The total P&L is the sum of these 4
random draws.
An intuitive explanation is shown in Figure 2. A generic path appears in red;
the P&L contributions from each liquidity horizon appear in blue. In this
schematic, the position experiences a downgrade, upgrade, or default,
resulting in a loss or profit. The position is then removed and replaced at
the end of each liquidity horizon by rebalancing. The final P&L for the path
is the sum of all losses and profits.
Figure 2 Constant level of risk
In addition, one needs to consider the reinvestment of all cash flows
realized during the liquidity horizon and rollover of expired deals.
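The draw-and-sum convolution described above can be sketched in a few lines; the discrete quarterly loss distribution below is illustrative, and reinvestment/rollover effects are ignored:

```python
import numpy as np

def one_year_pnl(quarterly_losses, n_paths=100000, seed=0):
    """Constant-level-of-risk aggregation: draw 4 i.i.d. quarterly P&Ls per
    path (one per rebalanced liquidity horizon) and sum them, so the one-year
    distribution is the 4-fold convolution of the quarterly one."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(np.asarray(quarterly_losses), size=(n_paths, 4))
    return draws.sum(axis=1)

def irc_var(pnl, q=0.999):
    """99.9% loss quantile of the aggregated one-year P&L distribution."""
    return -np.quantile(pnl, 1.0 - q)

pnl = one_year_pnl([-10.0, 0.0, 1.0])
```

In practice each quarterly draw would come from the simulated single-period loss distribution rather than a small discrete set.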
Aggregation and Time Horizon Correlation
First we need to divide the portfolio into the subportfolios based on
liquidity horizons. If there is only one single-liquidity-horizon
subportfolio, the rebalance at the end of each liquidity horizon washes
out the time horizon correlation. However, if there are multiple
subportfolios, the time horizon correlations need to be addressed.
To elaborate, we assume there are two subportfolios with liquidity horizons
of 3 months and 6 months. Based on the default and migration simulation and
full revaluation, we can generate the loss distributions at the first
liquidity horizon for the 3-month and 6-month subportfolios, denoted F_3 and
F_6.
There are two approaches to achieve the correlative aggregation: copula
approach or correlation matrix approach.
Copula approach
We conduct a second Monte Carlo simulation by generating 4 standard normal
random draws for scenario j: z_j,1, z_j,2, z_j,3, z_j,4. These random draws
represent a Monte Carlo path.
Three-month Subportfolio
The P&L distribution of the three-month subportfolio is F_3. The four draws
of the loss distribution are F_3⁻¹(Φ(z_j,k)), k = 1, ..., 4, where Φ is the
cumulative normal distribution. The total P&L of the three-month subportfolio
for scenario j is

P_j(3m) = Σ_{k=1}^{4} F_3⁻¹(Φ(z_j,k))   (18)
Six-month Subportfolio
The P&L distribution of the six-month subportfolio is F_6. We can calculate
the correlation ρ between F_3 and F_6 using the Pearson product-moment
correlation. The two correlated random draws are then z̃_j,1, constructed
from (z_j,1, z_j,2), and z̃_j,2, constructed from (z_j,3, z_j,4), e.g.
z̃_j,k = ρ (z_j,2k-1 + z_j,2k)/sqrt(2) + sqrt(1 - ρ²) η_j,k with independent
standard normal η_j,k. The two draws of the loss distribution are
F_6⁻¹(Φ(z̃_j,k)), k = 1, 2. The total P&L of the six-month subportfolio for
scenario j is

P_j(6m) = Σ_{k=1}^{2} F_6⁻¹(Φ(z̃_j,k))   (19)

Summing (18) and (19), we get the total P&L for scenario j:

P_j = P_j(3m) + P_j(6m)   (20)
Correlation matrix approach
Based on the four 3-month independent identical loss distributions F_3,1,
..., F_3,4 and the two 6-month independent identical loss distributions
F_6,1, F_6,2, we can construct a pair-wise sample correlation matrix C.
Applying the Cholesky decomposition to the correlation matrix C, we have
C = A Aᵀ, where A is a lower triangular matrix.
We conduct a second Monte Carlo simulation by generating 4 independent
standard normal random draws z_j,1, ..., z_j,4 for the four 3-month periods
in a year and 2 independent standard normal random draws y_j,1, y_j,2 for the
two 6-month periods to construct a path/scenario j. The random draw vector is
v_j = (z_j,1, ..., z_j,4, y_j,1, y_j,2)ᵀ. We can obtain the correlated random
draw vector ṽ_j by

ṽ_j = A v_j   (21)

The total P&L for scenario j is

P_j = Σ_{k=1}^{4} F_3⁻¹(Φ(ṽ_j,k)) + Σ_{k=1}^{2} F_6⁻¹(Φ(ṽ_j,4+k))   (22)
The final IRC is the 99.9% VaR of the distribution of P_j. In general, the
correlation matrix approach is more generic and can easily be extended to any
number of subportfolios.
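The correlation matrix approach can be sketched as follows; empirical inverse CDFs stand in for the quantile functions, and all names are our own:

```python
import numpy as np
from math import erf, sqrt

def _phi(z):
    """Standard normal CDF, vectorized over numpy arrays."""
    return 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))

def _empirical_inv_cdf(samples, u):
    """Map uniforms u to losses via the empirical quantile function."""
    s = np.sort(np.asarray(samples))
    idx = np.clip((u * len(s)).astype(int), 0, len(s) - 1)
    return s[idx]

def aggregate_pnl(F3_samples, F6_samples, corr, n_paths=50000, seed=0):
    """Second Monte Carlo pass: Cholesky-correlate six standard normal draws
    (four 3-month slots + two 6-month slots), then sum the inverse-CDF losses
    per path, in the spirit of (21)-(22)."""
    rng = np.random.default_rng(seed)
    A = np.linalg.cholesky(np.asarray(corr))     # corr = A A^T, A lower triangular
    z = rng.standard_normal((n_paths, 6)) @ A.T  # correlated draw vector per path
    u = _phi(z)
    pnl3 = _empirical_inv_cdf(F3_samples, u[:, :4]).sum(axis=1)
    pnl6 = _empirical_inv_cdf(F6_samples, u[:, 4:]).sum(axis=1)
    return pnl3 + pnl6

pnl = aggregate_pnl(np.array([1.0]), np.array([2.0]), np.eye(6), n_paths=1000)
```

The usage line uses degenerate one-point loss distributions and an identity correlation matrix purely as a sanity check; real inputs would be the simulated subportfolio loss samples and their estimated pair-wise correlation matrix.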
Numerical and Empirical Results
The above methodology has been implemented. The empirical study shows results
on P&L distributions, numerical stability and convergence, the concentration
effect, and capital impact.
The loss distributions for the test portfolio are shown in Figures 3 and 4.