ESSAYS IN FALLOUT RISK AND CORPORATE CREDIT RISK
by
Pouyan Mashayekh Ahangarani
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ECONOMICS)
August 2007
Copyright 2007 Pouyan Mashayekh Ahangarani
Dedication
To My Parents
Acknowledgments
I am deeply grateful to my advisor, Professor Christopher Jones, for his excellent
academic guidance during my research. I also thank Professors Cheng Hsiao,
Caroline Betts, and Antonios Sangvinatsos, who served on my dissertation committee
and helped me greatly with their comments and advice.
Table of Contents
Dedication
Acknowledgments
List of Tables
List of Figures
Abstract
Chapter 1: Fallout Risk in Residential Mortgage Industry
1-1 Introduction
1-2 Pipeline Risk Management
1-3 Econometric Model
1-4 Data
1-5 Results
1-6 Conclusions
Chapter 2: A New Structural Approach for the Default Risk of Companies
2-1 Introduction
2-2 Theoretical Model
2-3 Empirical Results
2-4 Conclusions
Chapter 3: The Importance of Simultaneous Jumps in Default Correlation
3-1 Introduction
3-2 Firm Default
3-3 Default Correlation
3-4 Multivariate Jump Diffusion Model
3-5 Econometric Approach
3-6 Empirical Results
3-7 Conclusions
Bibliography
Appendices
Appendix A - Jump Transforms
Appendix B - Characteristic Function Assumptions
List of Tables
Table 2-1 Distribution of the Companies Based on Their Credit Ratings
Table 2-2 Statistics of the Three Different Distances to Default
Table 2-3 Regression Results for Different Distances to Default
Table 2-4 Regression Results for Different Distances to Default, Considering the Leverage Ratio
Table 3-1 Joint default probability (in bp) for Ford and General Motors
List of Figures
Figure 1-1 Empirical exit distribution for refinance and purchase applicants
Figure 1-2 A sample of Market Price Data of Fannie Fixed Rate 30 year mortgage Loan for coupon = 5.5
Figure 1-3a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-3b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-4a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Single Female, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-4b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Single Female, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-5a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Single Male, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-5b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Single Male, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-6a Hazard Functions for refinance applicants, at time segment=10, for FICO=830, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-6b Hazard Functions for purchase applicants, at time segment=10, for FICO=830, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-7a Hazard Functions for refinance applicants, at time segment=10, for FICO=530, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-7b Hazard Functions for purchase applicants, at time segment=10, for FICO=530, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-8a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=70,000 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-8b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=70,000 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-9a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=300,000 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-9b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=300,000 USD, LTV=66, Lien=1, Occ=1, Age=46
Figure 1-10a Distribution LTV for refinance applicants
Figure 1-10b Distribution LTV for purchase applicants
Figure 1-11a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=20, Lien=1, Occ=1, Age=46
Figure 1-11b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=20, Lien=1, Occ=1, Age=46
Figure 1-12a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=80, Lien=1, Occ=1, Age=46
Figure 1-12b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=80, Lien=1, Occ=1, Age=46
Figure 1-13a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=0, Age=46
Figure 1-13b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=0, Age=46
Figure 1-14a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=0, Occ=1, Age=46
Figure 1-14b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=0, Occ=1, Age=46
Figure 1-15a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=22
Figure 1-15b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=22
Figure 1-16a Hazard Functions for refinance applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=75
Figure 1-16b Hazard Functions for purchase applicants, at time segment=10, for FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=75
Figure 1-17a Different Paths for Market Price Change for Refinance
Figure 1-17b Different Paths for Market Price Change for Purchase
Figure 2-1 The Probability Distribution Based on Their Credit Ratings
Figure 3-1 Ford and General Motors price
Abstract
Managing pipeline risk has been a challenge in the mortgage industry. Any locked
loan is like a put option on a callable bond that is given for free to the customer. The
mortgage bank has to hedge its risk against interest rate change, but the proportion of
the eventually funded loans is unknown beforehand. In this study, we extend the
destinations of a locked loan to four different states: closing, cancellation, extension,
and renegotiation. We estimate the probability of exiting to each destination as
a function of market price change and other borrower attributes. Modeling these
probabilities helps the mortgage bank hedge its pipeline risk more efficiently. In
addition, the different risk averseness of various borrowers, which has been an
interesting topic in finance theory, is documented in this study. Women, married
couples, and younger people show more risk averseness than other groups. The Merton
model is then challenged by introducing a new model for corporate defaults.
Empirically, the new model, which drops the option-theoretic approach of Merton, has
better explanatory power for default probabilities. Jumps are also shown to be an
important component of default correlation.
Chapter 1- Fallout Risks in Residential
Mortgage Commitments
1-1-Introduction
Banks guarantee borrowers mortgage rates for a specified period. The
borrower and the bank lock a loan at a promised rate that is basically determined in
the money market at the time of lock. The potential borrower has the right but not the
obligation to get the mortgage loan any time during the lock period. Locking the loan
by the borrower is like buying a put option on a callable bond. For the lending bank,
a mortgage loan is like buying a bond from the borrower. The bond is callable
since the borrower can pay off the loan at any time before its predetermined
maturity. This option exposes the bank to interest rate risk from changes in money
market rates that may occur during the lock period. The bank can hedge this risk
by selling the committed mortgages forward; in practice, however, the risk cannot be
eliminated completely because the proportion of the loans that will be taken up by the
borrowers is uncertain. If too few loans are sold forward, the bank remains exposed to
interest rate risk on the unhedged portion, and if too many are sold,
the bank has overhedged its interest rate risk.
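To see why the uncertain pull-through matters for hedging, consider a minimal sketch in which the forward sale is sized to the expected funded volume; the balances and closing probabilities below are hypothetical illustrations, not estimates from the data:

```python
# Hypothetical locked pipeline: loan balances and estimated closing probabilities.
balances = [300_000, 150_000, 450_000, 200_000]
p_close = [0.85, 0.40, 0.70, 0.55]

expected_funded = sum(b * p for b, p in zip(balances, p_close))
locked_total = sum(balances)

# Selling forward the full locked amount overhedges; selling the expected funded
# amount leaves the bank exposed only to errors in the pull-through estimate.
print(f"locked {locked_total:,.0f}, expected funded {expected_funded:,.0f}")
```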
In this paper, we will propose a model for the possible outcomes of a locked loan
during the lock period. The main factor driving the different outcomes is the
change in the market price of the mortgage. Due to the callability of mortgage loans, we will
use the observed market price of the loans as an independent variable. Also, other
borrower attributes will be added to the model. The paper is organized as follows.
The next section will cover the literature review of the subject. After that, we will go
over the econometric techniques used in the paper. Finally, we will explain the data
and then show the results.
1-2-Pipeline Risk Management
Managing pipeline risk is one of the most challenging operations in the
competitive mortgage industry. The potential borrowers can lock a mortgage loan for
a specified period at a promised rate with any mortgage bank. After locking the loan,
the loan officer needs some time to investigate the credit quality of the potential
borrower. Therefore, the locked loan is a pending promise from the bank. As soon as
the credit quality of the borrower is approved, he will have the option to get the loan.
The approval from the loan officer can be granted at any time during the lock period.
The potential borrower has the option to get the loan or just cancel the lock.
Therefore, the lock contract is just like selling a put option to the potential borrower.
The borrower has the option to sell the bond to the bank during the lock period.
A mortgage loan is a callable bond. A callable bond is a bond that the issuer
has the right to redeem prior to its maturity date, under certain conditions. When
issued, the bond will have information that explains when it can be redeemed and
what the price will be. In most cases, the price will be slightly above the par value
for the bond and will increase the earlier the bond is called. An issuer will often call
a bond if it is paying a higher coupon than the current market interest rates.
Basically, the issuer can reissue the same bonds at a lower interest rate, saving them
some amount on all the coupon payments; this process is called refunding. In many
cases, the issuer will have the right to call the bonds at a lower price than the market
price. If a bond is called, the bondholder will be notified by mail and has no choice
in the matter. The bond will stop paying interest shortly after the bond is called, so
there is no reason to hold on to it. Generally, callable bonds carry something
called call protection. This means that there is some period of time during which the
bond cannot be called. A mortgage loan is a callable bond that is sold by the
borrowers. The borrowers will call the bond they have sold to the bank by
refinancing their mortgages. This usually happens when the interest rates are falling
and the borrowers can find cheaper refinancing opportunities. Usually, the mortgage
loan has a call protection feature in the form of a prepayment penalty, in which the borrowers pay a
fine if they prepay the loan before a predetermined period.
During the lock period, the market price of the loan is subject to change. If
the market price of the loan increases, the lender that is going to buy the loan from
the borrower will be better off, but the potential borrower, who has the option not to
sell, will not be willing to sell the bond to the bank. Theoretically, when the put
option is out the money, the holder of the option will not exercise the option, but that
is not true in pipeline loans. Despite the increase in the market price of the mortgage
loan, the borrower still has the incentive to sell the bond due to the transaction costs
of buying a new house. The borrower’s decision to exercise the option depends on
factors other than the market price of the loan. Factors like the cost of finding
alternative financing and the risk of missing the opportunity to buy the desired home
will make some borrowers accept the loan despite the increase in the market price of
the loan. Also, when the market price of the loan decreases, the put option that the
potential borrower is holding will be in the money. Theoretically, the option should
be exercised for sure, but that is not also the case in the pipeline since the borrowers
should be able to fulfill some requirements. The borrower’s ability to accept the loan
can be hampered by the high transaction cost of seeking alternative financing. The
borrower’s decision is contingent upon the purchase of real estate, and that often
involves the simultaneous sale of another property. All in all, although the American
put option assumption of the locked loan cannot explain the pipeline behavior of the
potential borrower, it is expected that the change in the market price of the loan is an
important factor in the borrower’s decision during the lock period. Predicting the
percentage of funded loans is an important factor for mortgage banks. Consequently,
they can hedge the interest rate risk more efficiently. Benjamin, Heuson, and
Sirmans (1995) showed that the lenders who hedge their mortgage pipeline risk are
able to offer lower rates.
One aspect of decision making during the lock period can be explained by
gender and family status of the applicants. The literature on gender differences in
finance is mainly concentrated on the risk behavior of women and men. It is widely
claimed that women are more risk averse than men. A number of studies found that
women invest more conservatively than men (Bajtelsmit and Vanderhei, 1996 and
Hinz, McCarthy, and Turner, 1996). Bajtelsmit and Vanderhei (1996) found
significant gender differences in investment of pension assets based on plan
allocation data. Their dataset consisted of 1993 plan-level data on 20,000
management level employees from a single firm. Hinz, McCarthy, and Turner (1996)
used 1990 survey data for a subsample of 498 participants in the thrift saving plan,
the defined contribution plan for federal government workers. Jinakoplos and
Bernasek (1998), using U.S. sample data, found that single women exhibit relatively
more risk aversion in financial decision making than single men. Riley and Chow
(1992) studied asset allocation decisions and found risk aversion to be higher among
women than among men. Halek and Eisenhauer (2001) used survey data from the
University of Michigan Health and Retirement Study to look into the demography of
risk aversion and estimate measure of pure and speculative risk aversion. They found
a couple of variables including age, marital status, and gender important in risk
aversion. Their findings showed that single women are more risk averse than single
men, and older people are less risk averse than younger people. Also, marriage has a
significant effect in increasing risk aversion. Bajtelsmit and Bernasek (1996)
surveyed the existing literature on gender differences in investment. They stated that
women differ in access to information and lack of experience with risks. Lower
understanding of investment risk will deter women from taking risk. In this paper,
we will study the different behavior of people based on their gender and marital
status.
This study is related to the peculiarity of exercising an option. There have
been some studies in how efficient and rational the option holders are. Theoretically,
it is optimal to exercise an option precisely when the market value of the option
coincides with the intrinsic value, and (under complete markets and technical
regularity conditions) it is strictly sub-optimal to delay beyond the first such
opportunity. Even categories of investors that might be presumed to be less
sophisticated adhere relatively closely to this simple rule, although not as closely as
do more efficient investors like large brokerage firms. Overdahl and Martin (1994)
and Finucane (1997) showed that most option exercises in the US conform to theory
while a small fraction appear to be irrational. Engstrom, Norden, and Stromberg
(2000) presented similar findings for the Swedish equity option market. Poteshman
and Serbin (2003) examined the early exercise of call options in the US market by
different classes of investors. They found that different participants of the option
market were exercising their option in various levels of optimality. Duffie, Liu, and
Poteshman (2006), using a dataset on the CBOE put options, found that the
divergence between theory and practice is moderately small and can be explained by
variables that suggest a role for transaction costs as well as a preference by some
investors to obtain stock directly through exercise. The locked loans look like
American put options on a bond. In this study, we will explore the different
exercising behavior of people. Needless to say, these put options are different than
equity options in the sense that they are not tradable in the market and carry more
transaction cost relative to equity options.
The studies on exercising an American put option have been on tradable
options; the optimal exercising is when the intrinsic value equals the market value of
the option. Since locked loans are non-tradable American options, the optimality of
exercising will not be the same as tradable American options. Theoretically, for a
non-tradable American option, the optimal exercise should be close to the expiration
date in order to exploit all the opportunities of shifting to the lower rates. However,
most of the borrowers have to settle their home deals and may exercise earlier than
the expiration date.
Unfortunately, there are few papers in the academic literature about pipeline
risk management. Ho and Saunders (1983) priced borrower loan commitments in a
model in which fallout risk was stochastic and correlated but not completely
determined by interest rate changes. Their model specifically treated the take-down
amount of a single loan as a variable, but their article does not address the additional
complication that borrower commitments offer an American rather than a European-
style option: borrowers may exercise at any point during the commitment period.
The exact time of exercising the option is important for the lending bank since its
profit and hedging strategy is heavily dependent on the time the borrowers make
their decisions.
Rosenblatt and Vanderhoff (1992) found that in 38% of the cases analyzed,
borrowers chose to fund their loans even if the rates had fallen relative to the locked
rate. At the same time, the probability of the exercise of a mortgage commitment
rises to only 67% when rates increase. Rosenblatt and Vanderhoff used a binary
choice model to find the relationship between the probability of closing and
interest rate changes and some other borrower attributes.
Hakim, Rashidian, and Rosenblatt (1999) selected a hazard model for the
analysis of mortgage pipeline risk and found that the pattern of closing is not
uniform over time and that the closing rate accelerates with high rates. Their
findings are in support of Rosenblatt and Vanderhoff (1992) who found that the
closing rate is slower for brokered loans, for second residences, and for borrowers
who are refinancing an existing mortgage.
Our contribution in this paper is extending the possible destinations to four
different states. There are four possible decisions that any potential borrower can
make during the lock period: 1) the loan could be funded, which means the borrower
has accepted the loan; 2) it can be canceled, which means the borrower has declined
to exercise his put option; 3) the lock can be extended, which means that the
borrower will ask for the extension of the put option (in other words, s/he will
exchange the put option with a farther expiration date put option of the same strike
price with the lending bank normally charging the borrower an extension fee); or 4)
through renegotiation, the borrower can get a better deal with a lower interest rate (in
other words, he will sell the bond to the bank with a higher price). This final case
should logically happen when the interest rate has plummeted significantly by the
end of the lock period in such a way that the loan officer has to lower the interest rate
or the borrower will walk away.
In this paper, we will incorporate the four different states as the ultimate
destination of a locked loan in the modeling. Previous studies have considered just
two states as possible outcomes of a loan in the pipeline: closing (funding) and
cancellation. We believe that, like cancellation, extension and renegotiation are
different kinds of fallout since the potential borrower declines to exercise his option
and instead asks for a higher strike price (renegotiation) or asks to exchange his
option with a longer maturity put option (extension).
1-3-Econometric Model
Let us think of time to exit as a continuous random variable T and consider a
large population of locked loans that enter the pipeline at time T = 0. The calendar
time of entry need not be the same for all loans, and in practical cases it usually will
not be. Thus, T does not refer to calendar time but to the duration of stay in the
pipeline. The population is assumed to be homogenous with respect to the systematic
factors, regressor variables that affect the distribution of T. This means that every
loan’s duration of stay in the pipeline will be a realization of a random variable from
the same probability distribution.
A loan can leave the pipeline for one of four destinations that are mutually
exclusive and exhaust all possibilities. Let $D_1$, $D_2$, $D_3$, and $D_4$ represent closing,
cancellation, extension, and renegotiation, respectively. The hazard function for each
destination can be defined as:
$$\theta_k(t; X) = \lim_{dt \to 0} \frac{P(t \le T < t + dt,\ D_k = 1 \mid T \ge t, X)}{dt}, \qquad k = 1, 2, 3, 4,$$

where, for small $dt$, $\theta_k(t; X)\,dt$ is the probability of departure to state $k$ in the short interval $(t, t+dt)$, given survival to $t$ and given $X$, the vector of explanatory variables. $\theta_k(t; X)$ is called the transition intensity to state $k$ with $X$ as explanatory variables. In practice, $\theta_k(t; X)$ denotes the fraction of the survivors up to $t$ who leave for state $k$ on the following day, $t+1$. The total fraction of survivors at $t$ who leave on the following day is the sum over $k$ of those who leave for destination $k$, which provides the relation between the hazard function $\theta(t; X)$ and the transition intensities $\theta_k(t; X)$:

$$\theta(t; X) = \sum_{k=1}^{4} \theta_k(t; X).$$
The survival function at time $t$, defined as the fraction of the locked loans that have not yet left the pipeline, is:

$$F(t; X) = \exp\left\{ -\int_0^t \theta(s; X)\, ds \right\}.$$
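As a quick numerical illustration of these relations (using made-up transition intensities rather than estimates from the dissertation's data), the discrete-time analogue can be computed as follows:

```python
import numpy as np

# Hypothetical per-segment transition intensities theta_k(t) for the four
# destinations (closing, cancellation, extension, renegotiation); 10 segments.
rng = np.random.default_rng(0)
theta = rng.uniform(0.01, 0.08, size=(10, 4))    # theta[t, k], illustrative only

total_hazard = theta.sum(axis=1)                  # theta(t) = sum_k theta_k(t)
# Discrete-time analogue of F(t) = exp(-integral of theta):
survival = np.concatenate(([1.0], np.cumprod(1.0 - total_hazard)))

# Probability of exiting to destination k in segment t: survive to t, then leave for k.
exit_prob = survival[:-1, None] * theta           # shape (10, 4)
print(survival[-1], exit_prob.sum())              # remaining share and total exit mass
```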
Estimation of the hazard function is typically done by using a specific functional form for $\theta(t; X)$. If time is a discrete variable, we can use a multinomial logit model to find the hazard function for each time segment.
Assume that the interval $[0, T]$ is divided into $M$ subintervals $[0, t_1], (t_1, t_2], \ldots, (t_{M-2}, t_{M-1}], (t_{M-1}, t_M]$.
. During each subinterval, the locked loan can
leave the pipeline for one of the destinations k = 0, 1, 2, 3, 4, where k = 0 stands for
staying in the pipeline. This is valid for all subintervals except for the last one in
which the destinations could be just one of the k = 1, 2, 3, 4. Given that some
variables in X are time variant, we use the multinomial logit model on each
subinterval in order to eliminate the hassle of dealing with time dependency of the
explanatory variables.
Let y denote a random variable that can assume the values 0,1,…,K for K, a
positive integer, and let X denote a set of conditioning variables. For a discrete
choice model, we are interested in how ceteris paribus changes in the elements of X
affect the response probabilities, P(y = k|X), k = 0, 1,…K. Since the probabilities
must sum to unity, P(y = 0|X) is determined once we know the probabilities for k =
1,…,K.
Let $X$ be a $J$-dimensional vector whose first component is unity. The Multinomial Logit (MNL) model has response probabilities:

$$P(y = k \mid X) = \frac{\exp(X\beta_k)}{1 + \sum_{h=1}^{K} \exp(X\beta_h)}, \qquad k = 1, \ldots, K,$$

where $\beta_k$ is a $J$-dimensional vector of unknown parameters. Since the response probabilities must sum to unity, we have:

$$P(y = 0 \mid X) = \frac{1}{1 + \sum_{h=1}^{K} \exp(X\beta_h)}.$$
Estimation of the MNL model is best carried out by maximum likelihood. For each
observation i, the conditional log likelihood can be written as:
$$l_i(\beta) = \sum_{k=0}^{K} D_{ik} \log\left[ p_k(X_i, \beta) \right].$$

As usual, the unknown parameters can be estimated by maximizing the following likelihood function:

$$L(\beta) = \sum_{i=1}^{N} l_i(\beta),$$

where $N$ is the number of observations. McFadden (1974) showed that the log-likelihood
function is globally concave; this fact makes the maximization problem
straightforward.
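To make the estimation concrete, the following minimal sketch, with placeholder data and dimensions rather than the dissertation's actual dataset, computes the MNL response probabilities and the log-likelihood above and maximizes it numerically:

```python
import numpy as np
from scipy.optimize import minimize

def mnl_probs(beta, X, K):
    """Response probabilities P(y=k|X) for k=0..K; beta has shape (K, J)."""
    scores = X @ beta.T                          # (N, K) array of X beta_k
    expo = np.exp(scores)
    denom = 1.0 + expo.sum(axis=1, keepdims=True)
    return np.hstack([1.0 / denom, expo / denom])  # column 0 is the base state k=0

def neg_loglik(beta_flat, X, y, K):
    beta = beta_flat.reshape(K, X.shape[1])
    p = mnl_probs(beta, X, K)
    return -np.log(p[np.arange(len(y)), y]).sum()

# Placeholder data: N observations, J covariates (first column = 1), K+1 states.
rng = np.random.default_rng(1)
N, J, K = 500, 3, 4
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, J - 1))])
y = rng.integers(0, K + 1, size=N)               # 0 = stay, 1..4 = exit destinations

res = minimize(neg_loglik, np.zeros(K * J), args=(X, y, K), method="BFGS")
beta_hat = res.x.reshape(K, J)
```

Because the log-likelihood is globally concave, a standard quasi-Newton optimizer such as the one above converges to the maximum likelihood estimate from any starting point.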
1-4-Data
Our data consists of two different sets of borrowers who received a rate lock during
the same commitment period. The first group consists of borrowers who applied to
refinance their existing mortgage. The second set consists of borrowers who intended
to purchase a new house. For the purpose of duration analysis, we divided the
commitment period into 10 segments. Figure 1-1 shows how exit time from the
pipeline has been distributed for the two types of borrowers.
Figure 1-1 Empirical exit distribution for refinance and purchase applicants
Here are some explanatory variables that have been used in this study:
LLA = The loan amount in US dollars
LTV = The ratio of loan amount to property value
Lien = Dummy variable that is 1 for the first lien and 0 otherwise
OCC = Dummy variable that is 1 if the home will be owner occupied and 0
otherwise
Male = Dummy variable that is 1 if the borrower is male and 0 otherwise
Married = Dummy variable that is 1 if the borrower is married and 0 otherwise
FICO = FICO credit score of the borrower
Age = Borrower’s age
LockCoupon = Borrower’s rate
The most important explanatory variable is the market price of the loan. In
this study, unlike previous papers, we have used the market price as the explanatory
variable instead of using the interest rate. The rationale behind using the market price is
that the market value of a loan depends on the whole yield curve, not just one interest
rate. Mortgage loans are like callable bonds, and the expected probability of the
prepayment has an important effect on the market price of the loan.
Fannie Mae sells pools of mortgage loans in the secondary capital market.
These pools consist of fixed-rate 30-year mortgage loans. Usually, these pools are sold
with different settlement dates fixed in each month. Every day, for every pair of
coupon rate and settlement date, various prices are posted. The farther the settlement
date, the lower the price; moreover, the higher the coupon rate, the higher
the price.
In order to remove the effect of the time remaining to the settlement date, we used
the market price with 30 days to settlement. If this price did not exist in the market data,
we interpolated between the prices of two settlement dates, one less than 30 days away
and one more than 30 days (but less than 60 days) away. Figure 1-2 shows the market price
movement for a coupon rate of 5.5 over a couple of years. Coupon 5.5 has been the most
frequent rate for loans in the dataset.
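The adjustment to a constant 30 days to settlement amounts to a simple linear interpolation between the two straddling settlement dates; the sketch below illustrates the idea with hypothetical prices and day counts (the actual Fannie Mae quotes are not reproduced here):

```python
def price_at_30_days(days_near, price_near, days_far, price_far, target=30):
    """Linearly interpolate the price at the target days-to-settlement from two
    quotes that straddle it (one under 30 days, one between 30 and 60 days)."""
    w = (target - days_near) / (days_far - days_near)
    return (1 - w) * price_near + w * price_far

# Hypothetical quotes for one coupon on one day: 18 and 48 days to settlement.
print(price_at_30_days(18, 100.4, 48, 99.9))   # interpolated 30-day price
```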
Figure 1-2 A Sample of Market Price Data of Fannie Fixed Rate 30 year
Mortgage Loan for Coupon = 5.5
1-5-Results
For each time segment, we ran a multinomial logit model. The explanatory
variables are market price change from lock date to the corresponding time segment
and other borrower attributes explained in the previous section. The market price is
Fannie Mae’s price with the relevant coupon rate for 30-year fixed rate loans. We
use a polynomial of degree four for capturing the effect of market price change. The
states are k = 1: closing the loan, k = 2: canceling the loan, k = 3: extending the
commitment period for the loan, k = 4: renegotiating the loan, and k = 0 which is
staying in the pipeline. We have found a functional form for four out of five states.
Clearly, for the last time segment, we have just four possible states since there is no
possibility for the loan to stay in the pipeline.
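As an illustrative sketch of how such hazard curves can be produced for one time segment, the code below fits a multinomial logistic regression with a degree-four polynomial in the market price change and evaluates it over a grid of price changes with the other covariates held at base-case values. The data frame, column names, and base-case values are placeholders rather than the dissertation's actual data, and scikit-learn applies an L2 penalty by default, so this is only an approximation of unpenalized maximum likelihood.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def hazard_curves(df, base_case, price_grid=np.linspace(-4, 4, 81)):
    """Fit a multinomial logit for one time segment and return exit probabilities
    over a grid of market price changes, other covariates fixed at base_case."""
    poly = lambda p: np.column_stack([p, p**2, p**3, p**4])   # degree-4 polynomial
    X = np.column_stack([poly(df["price_change"].to_numpy()),
                         df[["fico", "married", "lla", "ltv"]].to_numpy()])
    model = LogisticRegression(max_iter=2000).fit(X, df["state"].to_numpy())

    other = np.tile([base_case[c] for c in ["fico", "married", "lla", "ltv"]],
                    (len(price_grid), 1))
    probs = model.predict_proba(np.column_stack([poly(price_grid), other]))
    return pd.DataFrame(probs, columns=model.classes_, index=price_grid)

# Illustrative synthetic data standing in for one segment of the pipeline
# (loan amount in thousands of USD to keep features on comparable scales).
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "price_change": rng.uniform(-4, 4, n),
    "fico": rng.integers(500, 850, n),
    "married": rng.integers(0, 2, n),
    "lla": rng.uniform(50, 400, n),
    "ltv": rng.uniform(20, 80, n),
    "state": rng.integers(0, 5, n),   # 0=stay, 1=close, 2=cancel, 3=extend, 4=renegotiate
})
base = {"fico": 730, "married": 1, "lla": 135, "ltv": 66}
curves = hazard_curves(df, base)      # rows: price change grid, columns: states
```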
For the non-tradable option, it is optimal to exercise the option in the last
time segment. Based on our dataset, the probabilities of exiting in the last time segment
are 32 and 26 percent for refinance and purchase, respectively. For our analysis,
we used the last time segment since most of the decisions are made in that period. In
order to analyze the results, we provide the predicted probabilities for each time
segment as a function of market price change for refinance and purchase; those for the
last time segment are shown in figures 1-3a and 1-3b. Figures 1-3a and 1-3b are
the base graphs, and all comparisons will be made relative to these two figures
for refinance and purchase, respectively.
From figures 1-3a and 1-3b, we can find the relationship between market
price change and the probabilities to different destinations. As the market price
increases, the probability of closing decreases. The borrower has the option to sell
the bond at a predetermined price. As the market price of a loan increases, the
borrower has a lower incentive to sell the bond at the previous price because the
bond is more valuable in the market. Also, with an increase in the market price, the
probability of canceling the lock increases since the potential borrower will be
tempted to cancel the lock and start a new loan. The probability of extension is also a
decreasing function of market price. For extending the commitment, the lending
bank usually asks for a fee. Thus, the decision to extend the commitment period may
make the lock more expensive. However, the borrower sometimes has to get an extension
because of the time restrictions inherent in closing the loan, or risk losing the house.
This probability is constant for price changes around zero, but it is decreasing after a
threshold because the market price has increased well enough that, economically, it
is disadvantageous for the borrower to close the loan. Renegotiation is another state
of termination of the initial contract in which the borrower, through negotiation with
the loan officer, asks for a better rate. The loan officer, in order to keep the business,
has to give some of the gain from the rally back to the borrower. As can be seen from
the graph, renegotiation just happens when the market price increases.
Figure 1-3a- Hazard Functions for refinance applicants, at time segment=10
for FICO=730, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-3b- Hazard Functions for purchase applicants, at time segment=10
for FICO=730, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
All the graphs indicate that the exit probabilities are nonlinear in the market price
change. The probability of cancellation for refinance borrowers is sensitive to market
price change around 2.5 points. The probability of extension is nearly constant
around zero market price movement. It decreases when the market price drops by
more than 2.5 points and decreases when the market price rises more than 1.5 points.
Also, for renegotiation, the probability is almost zero for downward movement of
market price and increases steadily for positive market price change. The probability
of closing is nonlinear with respect to market price change as well. For refinancing
borrowers, the probability of closing decreases smoothly as the market price
increases, but for purchase borrowers, the probability of closing is almost constant
between a two-point sell-off and a two-point rally, increases steadily when the market
price goes down by more than two points, and drops when the market rallies by more than
two points. The difference between the purchase and refinance closing probabilities
suggests that refinancing applicants act more efficiently than purchasing borrowers.
These findings can be captured by a nonlinear function of market price change.
Comparing the purchase and the refinance groups shows that closing is
higher for borrowers who want to buy a new house rather than refinance an existing
loan. This makes sense since the borrower who wants to refinance is not under
pressure to buy a new house. Also, the borrower who wants to refinance is more
willing to extend his loan since he is not under the time pressure and can extend his
locked loan in order to buy some time flexibility.
Gender and Marital Status Difference
In the estimations, marital status and gender are both significant. Figures
1-4a and 1-4b show the probabilities of exiting to the different destinations for single females at the
last time segment, for refinance and purchase applicants, respectively. Figures 1-5a
and 1-5b are the same graphs for single males. The graphs explain the difference
between single males and single females. Single females are more efficient
borrowers compared to single males and are more sensitive to market price change.
Single males are less likely to exercise their options through extension and
cancellation. Interestingly, married couples exhibit similar fallout tendencies to those
of single females. Higher extension for single males can explain the lower risk
averseness of males since they want to pay a premium to extend their put options in
which they will have more time to acquire future opportunities. In other words,
single females and married people would like to exercise their options more often
and are less prone to taking another risk.
Figure 1-4a- Hazard Functions for refinance applicants, at time segment=10
for FICO=730, Single Female, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-4b- Hazard Functions for purchase applicants, at time segment=10
for FICO=730, single female, LLA=135,086 USD, LTV=66, Lien=1, Occ=1,
Age=46.
Figure 1-5a- Hazard Functions for refinance applicants, at time segment=10 for
FICO=730,single male, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-5b- Hazard Functions for purchase applicants, at time segment=10 for
FICO=730,single male, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Credit Quality of the Applicants
The FICO credit score, which is used as a measure of credit worthiness of
applicants, has significant impact on the probabilities of different destinations.
Figures 1-6a and 1-6b show probabilities in the last time segment for applicants with
a high FICO score of 830 for refinance and purchase, respectively, while figures 1-7a
and 1-7b show a low credit quality of 530. Comparing the graphs reveals that higher
credit quality applicants have a higher chance of closing. We tried interaction
variables of high and low credit quality with market price change, but they
were not significant. In other words, credit quality just shifts the curves up or down.
After locking the mortgage rate, the loan officer has some time to approve the loan,
and clearly people who have higher credit quality have a better chance of approval.
The approval process of the loan depends only on the credit quality of the
borrower. Thus, the market price change should not have any effect on the
probability of approval. At the time of lock, the approval is an unknown variable,
but the credit quality of the potential borrower has some predicting power in
explaining that probability.
Figure 1-6a- Hazard Functions for refinance applicants, at time segment=10
for FICO=830, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-6b- Hazard Functions for purchase applicants, at time segment=10
for FICO=830, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-7a- Hazard Functions for refinance applicants, at time segment=10
for FICO=530, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-7b- Hazard Functions for purchase applicants, at time segment=10
for FICO=530, Married, LLA=135,086 USD, LTV=66, Lien=1, Occ=1, Age=46.
Lock Loan Amount
The loan amount has been included in the model and was observed to be a
significant attribute. Figures 1-8a and 1-8b are related to low loan amounts of 70,000
USD for refinance and purchase, respectively. Figures 1-9a and 1-9b show similar
graphs for a 300,000 USD loan amount. The extension probability is slightly higher
for low loan amounts. However, the loan amount has no bearing on probabilities of
canceling and closing. The probability of renegotiation is higher for a greater loan
amount. This is in line with our intuition since a borrower with a larger loan has
more bargaining power. For purchase applicants, the impact of the loan amount does
not seem significant.
Figure 1-8a- Hazard Functions for refinance applicants, at time segment=10
for FICO=730, Married, LLA=70,000 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-8b- Hazard Functions for purchase applicants, at time segment=10
for FICO=730, Married, LLA=70,000 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-9a- Hazard Functions for refinance applicants, at time segment=10
for FICO=730, Married, LLA=300,000 USD, LTV=66, Lien=1, Occ=1, Age=46.
Figure 1-9b- Hazard Functions for purchase applicants, at time segment=10
for FICO=730, Married, LLA=300,000 USD, LTV=66, Lien=1, Occ=1, Age=46.
Loan to Value (LTV)
LTV is another significant factor in our model. Figures 1-10a and 1-10b show
LTV distribution for refinance and purchase, respectively. Maximum LTV for
conforming loans is 80%, which is also observed. That is why the distribution in
Figure 1-10b is concentrated around 80. Since a refinance borrower benefits from
increased equity due to both paydown of principal and increases in market value,
a greater concentration of LTV should lie below 80%. That is precisely what we
observed. Figures 1-11a and 1-11b show the probability distributions for LTV = 20 for
refinance and purchase, respectively, and figures 1-12a and 1-12b show the
probability distributions for LTV = 80. Comparing these graphs shows that higher
LTV results in higher extension and lower cancellation.
Figure 1-10a- Distribution LTV for refinance applicants
Figure 1-10b- Distribution LTV for purchase applicants
Figure 1-11a- Hazard Functions for refinance applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=20, Lien=1, Occ=1, Age=46.
Figure 1-11b- Hazard Functions for purchase applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=20, Lien=1, Occ=1, Age=46.
Figure 1-12a- Hazard Functions for refinance applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=80, Lien=1, Occ=1, Age=46.
Figure 1-12b- Hazard Functions for purchase applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=80, Lien=1, Occ=1, Age=46.
Occupied House
Another important element instrumental in predicting a borrower’s behavior
is transaction cost. For applicants who want to use the house as their main
residence, the chances of not closing their loans are significantly lower than for those who
will not occupy the house. (See Figures 1-13a and 1-13b.) Borrowers who do not use
the house as their main residence are more likely to take an extension and, therefore,
take a longer time to make a decision.
Figure 1-13a- Hazard Functions for refinance applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=0, Age=46.
Figure 1-13b- Hazard Functions for purchase applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=0, Age=46.
Lien Type
In order to avoid paying mortgage insurance, a borrower must pay a
minimum of 20% as a down payment. If the borrower cannot afford that down
payment, he will need to take out a second loan. The two loans are secured by the
property, and, thus, are referred to as first and second liens. Two borrowers with first
and second liens will exhibit different reactions to market price changes. Figures 1-
14a and 1-14b show the probability distributions for refinance and purchase,
respectively, for a borrower with a second lien. For these borrowers, the probability
of extension is higher than their counterpart. Usually, the second lien amount is less
than the first lien. Thus, borrowers have more flexibility in funding the loan, which
may result in a higher probability of extension.
Figure 1-14a- Hazard Functions for refinance applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=0, Occ=1, Age=46.
Figure 1-14b- Hazard Functions for purchase applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=0, Occ=1, Age=46.
Age
Age is also a significant variable in explaining the tendencies. Figures 1-15a
and 1-15b show the probability distributions at age 22 for refinance and purchase
borrowers, respectively. Figures 1-16a and 1-16b are the corresponding graphs for age
75. We observed older people to be more willing to extend their
lock compared to younger people in both purchase and refinance instances. This is in
accordance with the finding of Halek and Eisenhauer (2001). One reason could be
that older people are financially better off and less risk averse so they can afford to
pay a premium to extend their lock. Another reason could be that older people are
not as motivated to find a better rate elsewhere and are not willing to start the
process all over again.
Figure 1-15a- Hazard Functions for refinance applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=22.
Figure 1-15b- Hazard Functions for purchase applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=22.
Figure 1-16a- Hazard Functions for refinance applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=75.
Figure 1-16b- Hazard Functions for purchase applicants, at time segment=10 for
FICO=730, Married, LLA=135,069 USD, LTV=66, Lien=1, Occ=1, Age=75.
Path Dependence
The two most important characteristics of rate locks are high transaction cost
and nontradability of the option. These two characteristics make the behavior of a
borrower very complicated, and, therefore, the option cannot be viewed as a
standard American option. When a borrower decides to close the loan, it may be
costly for him to reverse his decision. For example, a borrower observes that rates are
higher than his locked rate and decides to close his loan. But right after he has
started the process of applying for the loan, the rates go down. Ideally, he should
give up the loan, but it may be more practical to close it. Therefore, we should
expect the fluctuations in market price to have a great impact on the borrower's final
decision.
Exercising the put option on the callable bond in this study is not only
dependent on the market price at the time of exercising but also on the path of
market price change. The path of market price change has been important in
explaining the probability distribution of different states. We ran the model assuming
different paths for market price. The increments of market price during the previous
time segments have been added as independent variables and significantly explain
the probabilities.
Figures 1-17a and 1-17b show three different market price change scenarios
and their associated probabilities for two types of borrowers, respectively. For all
three scenarios, market price rises up to three points. Path one is a straight line; path
two starts with a steep rise from the beginning and descends near the end, and path
three starts with a dip and ends with a steep ascent in the last time segment. For all
scenarios, the increase in market price lowers the closing ratio, but the impact is
more severe for paths one and two, which have the rise from the beginning. Path three,
which ascends only at the end, has the least impact on the probability distribution.
This pattern shows that decisions made by borrowers in exercising their put options
are made long before the commitment expires. For economic reasons, it may be
costly for them to reverse their decision, or they make their decision based on
expectations about the price. The higher the price has been earlier in the lock period, the
greater its impact on the probability distributions. This finding calls into question the
validity of the assumption that locked loans are ordinary American options.
Figure 1-17a-Different Paths for Market Price Change for Refinance
Probabilities of each state under the three price paths, refinance (CL = closing, CN = cancellation, XT = extension, RG = renegotiation):

      Path1   Path2   Path3
CL     0.29    0.26    0.32
CN     0.15    0.15    0.16
XT     0.46    0.47    0.43
RG     0.10    0.12    0.09
Figure 1-17b-Different Paths for Market Price Change for purchase
Probabilities of each state under the three price paths, purchase:

      Path1   Path2   Path3
CL     0.64    0.55    0.67
CN     0.16    0.17    0.15
XT     0.16    0.15    0.16
RG     0.04    0.13    0.01
1-6-Conclusion
In this study, we extended the pipeline analysis to four different states.
Market price change has been used as an explanatory variable for the probability
distributions. In our opinion, we need to distinguish among these states
since renegotiation and extension can be viewed as cancelled locks. We used the
multinomial logit model in order to find the hazard functions for multi-destination
duration analysis of the pipeline risk. We found that the probability distributions of
each kind of termination state are a nonlinear function of market price change.
Also, we found that this study can explain the risk averseness related to
borrower attributes such as gender, marital status, and age. Our findings are in
accordance with other studies that have found that younger people, women, and married
people are more risk averse relative to older people, men, and single people,
respectively.
Another important finding that questions the American option assumption of locked
loans is path dependence. Decisions made by potential borrowers not only depend on
the market price change but also on the path of market price. This makes the locked
loan more complicated than the simple assumption of an American option. More work
should be done to further this study. After extension or renegotiation, the locked
loans are like new loans in the pipeline, and we need to explore how these new loans
behave relative to the market price change and whether their patterns of exiting the
pipeline are different from the fresh new loans or not.
Chapter 2- A New Structural Approach for the
Default Risk of Companies
2-1-Introduction
In this chapter, I will propose a new model for corporate default risk. There are
two types of models of default risk in the literature: structural models and reduced-form
models. In reduced-form models, the default is modeled as a surprise. The
probability of this surprise follows a jump diffusion process and therefore depends
on an intensity parameter also called hazard rate. This hazard rate can be constant
through time or allowed to be stochastic, thereby implying a term structure of
probabilities of default. This hazard rate is either estimated to fit historical
probability or fitted to current market data (calibration). The reduced form
approaches are well documented by the contributions of Jarrow and Turnbull (1995);
Jarrow, Lando, and Turnbull (1997); Duffie and Singleton (1999); Das and Tufano
(1996); Lando (1998); Iben and Litterman (1991); Madan and Unal (1993);
Shunbucher (1997); and Zhou (1997).
The structural approach relates the arrival of default to the dynamics of the
underlying structure of the firm, thereby giving an economic significance to the
establishment of the default rate. This approach was founded by Merton (1974,
1977) who used an application of option theory. In his theory, the value of the firm is
supposed to be shared by two broad categories of claimants: the shareholders and the
debt holders. Because of the limited liability of shareholders, they have a payoff that
is positive whenever the face value owed to creditors can be reimbursed. Otherwise,
it is zero. The shareholders’ claim is then just a call on the value of the assets of the
firm (also known as a European call). Thus, a bond is simply a right of a face amount
to be reimbursed with the sale of a put to shareholders on the assets of the firm. A
direct advantage of the structural approach is that credit default is not an
unpredictable event here; there is a way to see the corporate conditions that affect the
default rate. Relying on the evolution of the value of assets of the firm gives a
continuity of the credit standing evolution of the firm, which makes the credit risk
predictable. Unfortunately, since the value of the firm is not a tradable asset, the
parameters of the structural model are difficult to estimate. Reduced form
approaches appeared mainly because of this limitation. Black and Cox (1976)
provided an important extension of the Merton model. Their model has a first
passage time structure where a default takes place whenever the value of the assets
of a company drops below a barrier. The first passage models are more realistic in
the sense that they let default happen at any time before the horizon. Other extensions
of the Merton model are provided by Geske (1997); Kim, Ramaswamy, and
Sundareson (1993); Leland (1994); Longstaff and Schwartz (1995); Leland and Toft
(1996); and Zhou (2001).
Firms default when they cannot, or choose not to, meet their obligations. Credit
events are triggered by movements of the firm’s value relative to some (random or
non-random) credit event triggering threshold. So if the firm's asset value $V_t$ is less than
the face value of debt $D_t$ at time $t$, then the firm will go bankrupt. Consequently, a
major issue is the modeling of the evolution of the firm's value and of the firm's
capital structure. In the structural approach, default happens when $V_t < D_t$. In
order to use the structural approach, we need to know the asset value $V_t$ and the
default point $D_t$. Asset values are not observable in the market, but since the equity
price of firms can be found in the market, we are able to retrieve the asset values
from the option theoretic approach of Merton if we know the default point.
Finding the default point has been one of the challenges of the structural
approach in the credit risk modeling. In reality, firms often rearrange their liability
structure when they have credit problems. Hence, there is no analytic method to
derive the default point. Rather, it is estimated from a firm’s liabilities information
on its balance sheets. KMV Company has done empirical research based on large
scale statistical studies of historical defaults, and they have found that the basic
estimation for the default point is current liability plus half of long term liabilities
(Demircubuk and Tse, 2001). Their approach has been based on defining the short
term and long term liabilities in the balance sheet data. First, minority interests and
deferred taxes are excluded from the total liabilities since those two items do not
cause default stress. The problems arise when the firms do not break down their
liabilities into current (due within one year) and long term on their balance sheets.
There are no regulations to stop firms from reporting their statements in this way,
and there can be many reasons for them to do so. However, KMV has considered
these adjustments for finding the default point estimation. Also, KMV has developed
a model in which they have segmented the liabilities into six parts and have
estimated the default point as a function of those liabilities. All in all, finding the
default point has been dependent on the precise analysis of accounting data. Since
there are differing accounting standards in different countries and since the firm-level
data are very noisy (because of moral hazard problems), relying on these data has
been very risky.
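As a small illustration of the KMV rule of thumb mentioned above (current liabilities plus half of long-term liabilities), with hypothetical balance-sheet figures:

```python
# Hypothetical balance-sheet items (in millions of USD).
current_liabilities = 120.0
long_term_liabilities = 300.0

# KMV-style default point: current liabilities plus half of long-term liabilities.
default_point = current_liabilities + 0.5 * long_term_liabilities
print(default_point)   # 270.0
```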
The consensus in the finance literature is that the Merton model underpredicts the
probability of defaults and the credit spreads. There are a couple of papers in the
literature that test the Merton model. Jones, Mason, and Rosenfeld (1984) used the
bond prices of companies during 1977-1981 and found that the prices implied from
the Merton model overestimated the bond prices by 4.52%. Ogden (1987) found that
the yield spread is underpredicted by 104 bp if the Merton model is used. Eom,
Helwege, and Huang (2004) found that the Merton model underestimates the credit
yield spread. KMV Corporation is a successful company that used the Merton model
to predict the default probabilities of individual firms. However, they found that the
probabilities from the Merton model were too far from reality, and they calibrated a
model in which the default probability is a function of the distance to default found from
the Merton model. For calibrating the model, they used a unique and proprietary
database of corporate defaults of the last decades.
Since default is costly and violations of the absolute priority rule in bankruptcy
proceedings are common, in practice shareholders have an incentive to put the firm
into receivership before the asset value of the firm hits the debt value (Hanson,
Pesaran, and Schuermann 2005). In addition, the lending banks have the incentive to
force the firm into default before the asset value hits the debt threshold (Garbade,
2001). Also, a borrower may be in a default condition, e.g. a missed coupon payment,
without going into bankruptcy, which usually happens in the bank-borrower
relationship (Lawrence and Arshadi, 1995). Therefore, default usually happens
when the asset value of the firm crosses a threshold that is higher than the default
point characterized by the liabilities of the firm. Since we have $V_i = D_i + E_i$, where $E$
is the equity value of the firm, we can deduce that default happens when
$0 < E_i < C_i$. $C$ is a positive threshold, which is time varying and depends on the
firm's characteristics. The equity values are observable for the firms traded in the
stock markets.
Estimation of the threshold C for each firm is the next step in modeling the
default risk. As argued and elaborated in Pesaran, Schuermann, Treutler,
and Weiner (2005), accounting information is likely to be noisy and might not be all
that reliable due to information asymmetries and agency problems between
managers, shareholders, and debtholders. Moreover, the accounting based route
presents additional challenges such as different accounting standards and bankruptcy
rules in different countries. Also, other firm specific characteristics such as leverage,
firm age, and management quality could be important in the determination of default
thresholds that are quite difficult to observe. In view of these measurement problems,
Pesaran et al. (2005) used the firm-specific credit ratings for finding the default
thresholds. There are different rating agencies that rank the firms and assign them a
credit rating. Moody, S&P, FITCH, CIBS, Nationsbank, and SBC are the companies
that rate the firms, and they have their own terminology.
In this paper, I propose a new model for default risk that is unlike the Merton
model: there is no need to specify a default point, and the model uses equity prices
as the information for finding default probabilities. Unlike the Pesaran et al.
(2005) approach, it does not require credit ratings; equity prices alone are
sufficient. In the next section, I propose the model. After that, I show the empirical
results of the model compared with other models. Finally, the paper concludes.
2-2-Theoretical Model
The main assumption of the Merton model is that the equity of a company is an
option on its asset value. In that model, the strike price is the liabilities of the
company, and the time to maturity is usually assumed to be one year. However, these
two assumptions may be far from reality, since the liabilities of a company do not
all expire after one year or any other fixed horizon and change through time. Also,
the asset value of a company is not traded in the market, which is a key assumption
in option pricing.
Unlike the Merton model, the model I propose builds on the main assumption of
asset pricing, in which the equity price is the discounted dividend stream of the
stock under the martingale measure:
Equity Price = E^Q[ ∫_0^τ e^{-rt} Div(t) dt ],     (2.1)
where τ = inf{ t ≥ 0 : x_i(t) = 0 } is the time at which the company goes bankrupt,
and x_t is the asset surplus at time t, equal to the assets minus the debts.
In order to find the dividend stream, we assume that the manager of the company
chooses the dividend policy to maximize the equity price, taking into account that
paying frequent and lavish dividends increases the probability of bankruptcy. The
asset surplus is the asset value minus the debt value of the company; if it hits zero,
the company goes bankrupt. The question is: what is the optimal dividend policy? I
assume that the asset surplus is a process that satisfies the following stochastic
differential equation:
dx(t) = μ(x(t), t) dt + σ(x(t), t) dw(t),
and x is absorbed at zero, so that default happens when the asset surplus hits zero.
Taking the dividend policy into account, the asset surplus can be written as:
x(t) = x_0 + ∫_0^t μ(x(s), s) ds + ∫_0^t σ(x(s), s) dw(s) − div_t,     (2.2)
where the manager of the company maximizes (2.1) with respect to the dividend
process in (2.2).
Shreve, Lehoczky, and Gaver (1984) showed that the solution to the above
optimization problem is to pay out x − U whenever x > U, where U is the optimal
threshold. In other words, if the asset surplus x is above the boundary U, then the
excess should be paid as dividends, and the value function will be:
EV_U(x) = E^Q[ ∫_0^τ e^{-rt} Div(t) dt ]        for 0 ≤ x ≤ U,
EV_U(x) = (x − U) + EV_U(U)                      for x ≥ U,
where r is the interest rate and the value function satisfies the following
differential equation:
r EV_U(x) = r EV_U'(x) + (1/2) σ(x)^2 EV_U''(x),
with the following boundary conditions:
EV_U(0) = 0,   EV_U'(U) = 1.
If σ(x)^2 = σ^2, the general solution of the differential equation will be:
EV_U(x) = c_1 e^{m_1 x} + c_2 e^{m_2 x},
where m_1 and m_2 are the roots of the characteristic equation:
σ^2 m^2 + 2rm − 2r = 0,
and the roots are:
m_1 = ( −r + √(r^2 + 2rσ^2) ) / σ^2,   m_2 = ( −r − √(r^2 + 2rσ^2) ) / σ^2,
with m_1 > 0 > m_2 and m_2^2 > m_1^2.
Adding the optimality (high-contact) condition EV_U''(U) = 0, the optimal U can be
found, and consequently the value function is derived as:
U = ( 1 / (m_1 − m_2) ) log( m_2^2 / m_1^2 ),
EV(x) = ( e^{m_1 x} − e^{m_2 x} ) / ( m_1 e^{m_1 U} − m_2 e^{m_2 U} )   for 0 ≤ x ≤ U,
EV(x) = EV(U) + (x − U)                                                 for x > U.     (2.3)
In the above notation, x is the asset surplus of the company and EV is the equity
price observed in the market. When the asset surplus is greater than the threshold U,
the company pays the difference as a dividend, which is why the equity value at the
time of paying the dividend equals the equity value at U plus the dividend; that is
what appears in (2.3). When the asset surplus is less than the threshold U, the equity
value is a nonlinearly increasing function of the asset surplus, since m_1 > 0 > m_2:
as the asset surplus increases, the equity value of the company increases as well.
Also, as σ^2 increases, the difference between m_1 and m_2 increases, which makes
the equity value more sensitive to changes in the asset surplus. This makes sense
since, as the volatility of the asset surplus increases, both the probability of
default and the probability of paying dividends increase.
The above formula does not require the default point or the capital structure of the
company in order to determine the relationship between the asset surplus and the
equity value of the company. Also, there is no need to assume a fixed time to
maturity, which is necessary for using option pricing theory. In order to find the
probability of default or the distance to default, the parameters of the asset surplus
process in (2.2) must be estimated from market data, since the asset surplus is not
observable and only the equity values are observed in the market. The log-likelihood
function is:
L(μ, σ) = −(n/2) ln(2πσ^2) − (1/(2σ^2)) Σ_{t=2}^{n} ( x_t − x_{t−1} − μ )^2
          − Σ_{t=2}^{n} ln( ∂EV(x_t)/∂x_t ),     (2.4)
where x_t denotes the asset surplus implied by the observed equity price through the
inverse of (2.3), and the derivative of the equity with respect to the asset surplus
is:
∂EV(x_t)/∂x_t = ( m_1 e^{m_1 x_t} − m_2 e^{m_2 x_t} ) / ( m_1 e^{m_1 U} − m_2 e^{m_2 U} ).
When the parameters of the asset surplus are estimated, the distance to default for a
one-year horizon can be found as:
DD = ( x + 252 μ ) / ( σ √252 ).     (2.5)
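To make the estimation concrete, the following is a minimal Python sketch of how the maximum likelihood estimation in (2.4) and the distance to default in (2.5) could be implemented for one firm. The constant daily interest rate, the numerical inversion of EV(.), the optimizer, and all function names are my own illustrative choices, not the original estimation code.

import numpy as np
from scipy.optimize import brentq, minimize

def roots_and_barrier(r, sigma):
    # Roots of sigma^2 m^2 + 2 r m - 2 r = 0 and the optimal dividend barrier U.
    s = np.sqrt(r**2 + 2.0 * r * sigma**2)
    m1, m2 = (-r + s) / sigma**2, (-r - s) / sigma**2
    U = np.log(m2**2 / m1**2) / (m1 - m2)
    return m1, m2, U

def equity_value(x, r, sigma):
    # EV(x) from equation (2.3); linear above the barrier U.
    m1, m2, U = roots_and_barrier(r, sigma)
    denom = m1 * np.exp(m1 * U) - m2 * np.exp(m2 * U)
    ev_U = (np.exp(m1 * U) - np.exp(m2 * U)) / denom
    if x <= U:
        return (np.exp(m1 * x) - np.exp(m2 * x)) / denom
    return ev_U + (x - U)

def implied_surplus(ev, r, sigma):
    # Invert EV(.) numerically to recover the asset surplus from an equity price.
    m1, m2, U = roots_and_barrier(r, sigma)
    ev_U = equity_value(U, r, sigma)
    if ev >= ev_U:
        return U + (ev - ev_U)
    return brentq(lambda x: equity_value(x, r, sigma) - ev, 1e-12, U)

def neg_loglik(params, ev_series, r):
    # Negative log-likelihood (2.4): Gaussian surplus increments plus the Jacobian term.
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    m1, m2, U = roots_and_barrier(r, sigma)
    denom = m1 * np.exp(m1 * U) - m2 * np.exp(m2 * U)
    x = np.array([implied_surplus(ev, r, sigma) for ev in ev_series])
    resid = np.diff(x) - mu
    jac = (m1 * np.exp(m1 * x[1:]) - m2 * np.exp(m2 * x[1:])) / denom   # dEV/dx
    n = len(resid)
    ll = (-0.5 * n * np.log(2 * np.pi * sigma**2)
          - 0.5 * np.sum(resid**2) / sigma**2
          - np.sum(np.log(jac)))
    return -ll

# Hypothetical usage on one firm's 252 daily equity prices (ev_series):
# fit = minimize(neg_loglik, x0=[0.0, 1.0], args=(ev_series, 0.04 / 252), method="Nelder-Mead")
# mu_hat, sigma_hat = fit.x
# x_T = implied_surplus(ev_series[-1], 0.04 / 252, sigma_hat)
# dd_new = (x_T + 252 * mu_hat) / (sigma_hat * np.sqrt(252))     # equation (2.5)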
2-3-Empirical Results
The data of 666 companies in the year 2004 have been selected from the CRSP
database (WRDS database at the Wharton School) for an empirical comparison of the
Merton model with the model proposed in this paper. These are all the companies
that were traded on all 252 trading days of the year and whose fiscal year ends at
the end of December. All firms in the financial industry are excluded from the
database, because finding the default point for the Merton model is not
straightforward for financial firms. The default point for the other firms is assumed
to be the current liabilities plus half of the long-term liabilities. In the
COMPUSTAT database of WRDS, variable data9 is the long-term liability and data34
is the short-term liability, so data34 + 0.5 data9 has been used as the default point
for each firm in the Merton model. For each company, the distance to default from
the Merton model, from the new model, and finally from the equity price alone are
calculated.
The distance to default from the Merton model is widely used in industry. In the
Merton model, it is assumed that the asset value is equal to the equity value plus the
debt, and the equity value can be derived from the option pricing formula as:
E_t = V_t Φ(d_t) − F e^{-r(T−t)} Φ( d_t − σ √(T−t) ),     (3.1)
where Φ is the cumulative distribution function of the standard normal, F is the
default point of the firm, T is equal to 252 days, r is 0.04/252, and d_t is defined
as:
d_t = [ ln(V_t / F) + (μ + σ^2/2)(T−t) ] / ( σ √(T−t) ).     (3.2)
From (3.1) and (3.2), I can solve for σ. Consequently, the time series of asset
values can be found, and finally μ can be estimated from the asset values. After
finding μ and σ, the distance to default is derived using the following formula:
DD = [ ln(V_t / F) + (μ − σ^2/2) T ] / ( σ √T ).     (3.3)
The value found in (3.3) is the distance to default implied from the Merton model
and will be called DDMerton hereafter.
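As an illustration, here is a minimal Python sketch of one common way to compute this quantity: iterate between the equity pricing equation and the asset volatility until convergence, and then apply (3.3). The standard risk-neutral form of d_1, the iteration scheme, the root-finder bracket, and the default values of r and T below are my own implementation choices and may differ in detail from the procedure described in the text.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def merton_equity(V, F, sigma_v, r, T):
    # Equity as a call on the asset value, as in (3.1).
    d1 = (np.log(V / F) + (r + 0.5 * sigma_v**2) * T) / (sigma_v * np.sqrt(T))
    d2 = d1 - sigma_v * np.sqrt(T)
    return V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d2)

def dd_merton(equity_series, F, r=0.04 / 252, T=252, tol=1e-8, max_iter=100):
    # Back out daily asset values and volatility from daily equity prices, then apply (3.3).
    E = np.asarray(equity_series, dtype=float)
    V = E + F                                    # initial guess for the asset values
    sigma_v = np.std(np.diff(np.log(V)))         # daily asset volatility
    for _ in range(max_iter):
        V_new = np.array([brentq(lambda v: merton_equity(v, F, sigma_v, r, T) - e,
                                 e, e + 10 * F) for e in E])
        sigma_new = np.std(np.diff(np.log(V_new)))
        converged = abs(sigma_new - sigma_v) < tol
        V, sigma_v = V_new, sigma_new
        if converged:
            break
    mu_v = np.mean(np.diff(np.log(V)))           # daily asset drift
    return (np.log(V[-1] / F) + (mu_v - 0.5 * sigma_v**2) * T) / (sigma_v * np.sqrt(T))

# Hypothetical usage: dd = dd_merton(equity_prices_2004, F=short_debt + 0.5 * long_debt)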
The distance to default implied by the model proposed in this paper is found from
(2.5) after maximum likelihood estimation of the parameters in (2.4) and will be
called DDNew hereafter. I also compute another distance to default from (2.5), in
which I simply assume that the equity prices follow a simple diffusion model:
dEV(t) = μ dt + σ dw(t).     (3.4)
The parameters of the model defined in (3.4) can be estimated as the average and the
standard deviation of the first difference of the equity value. The distance to
default found by plugging the parameters of (3.4) into (2.5) will be called DDEq.
The probability of default can also be implied from the S&P credit ratings of the
companies. The distribution of the default probabilities and the ratings are shown
in Table 2-1 and Figure 2-1.
Figure 2-1- The Probability Distribution of the Companies Based on Their Credit Ratings
(bar chart; horizontal axis: credit ratings from AAA to CCC; vertical axis: probability in percentage)
Table 2-2 shows the statistics of the three distances to default from the three
different models: DDMerton, DDNew, and DDEq. On average, the distance to default
from the new model presented in this paper is higher than those from the Merton
model and from DDEq.
Table 2-1 Distribution of the Companies Based on Their Credit Ratings
(Columns: S&P credit rating; corresponding probability of default in one year, in bp;
number of companies; share of companies, in percent)
AAA 0 5 0.75
AA+ 0 2 0.3
AA 0.01 11 1.65
AA- 0.01 8 1.2
A+ 0.03 23 3.45
A 0.04 46 6.91
A- 0.08 58 8.71
BBB+ 0.12 65 9.76
BBB 0.21 83 12.46
BBB- 0.42 68 10.21
BB+ 0.72 40 6.01
BB 1.46 59 8.86
BB- 2.8 83 12.46
B+ 4.15 57 8.56
B 5.71 33 4.95
B- 10.55 15 2.25
CCC+ 15.93 6 0.9
CCC 17.83 3 0.45
Table 2-2 Statistics of the Three Different Distances to Default
Variable    Obs    Mean    Std. Dev.    Min     Max
DDmerton    666    8.93    4.23         0.91    28.14
DDnew       666    10.60   8.41         0.10    41.74
DDeq        666    4.59    2.13         0.04    10.71
The distance to default has been used for finding the probability of default of
companies. The probabilities implied from the distance to default have been smaller
than the historical probabilities; nevertheless, the distance to default has been
used as an informative variable in industry. The probabilities implied from the
credit ratings of the firms are used here as the dependent variable in order to see
how well the three different distances to default explain default probabilities. For
all three models, the distance to default is a significant variable in explaining the
default probability. The regression results are shown in Table 2-3. The square of the
distance to default is significantly different from zero, so it is added to the
regressions, although it was not significant for DDEq. Comparing the adjusted
R-squares shows that DDNew has higher explanatory power than DDMerton and DDEq,
since the R-squares are 18, 23, and 17% for DDMerton, DDNew, and DDEq, respectively.
Comparing the regressions for DDEq and DDNew shows that the transformation
introduced in this paper increases the explanatory power of the new distance to
default, since both DDNew and DDEq use only equity price information and do not use
the accounting information of the companies. Because the Merton model uses the
default point as an input, the ratio of the default point to the equity price has
also been added to the regressions in order to compare the models on a more equal
footing. The results are shown in Table 2-4. This variable, which is a proxy for the
leverage of a company, is significant in all three models. The coefficient of this
variable is positive for the DDNew and DDEq models, which makes sense, since as the
leverage of the company increases, the probability of default increases.
Interestingly, the R-square of the DDNew model is 25% while that of DDMerton is 20%,
which shows that the distance to default found from the new model has better
explanatory power. Therefore, the model proposed in this paper, which drops the
option-theoretic assumption of the Merton model, carries more information than the
distance to default implied from the Merton model.
Table 2-3 Regression Results for Different Distances to Default
Model 1: Heteroskedasticity-Corrected Estimates Using the 666 Observations 1-666
Dependent Variable: defprob
Variable Coefficient Std. Error t-statistic p-value
const 4.95234 0.414919 11.9357 <0.00001 ***
ddmerton -0.554884 0.0566501 -9.7949 <0.00001 ***
ddmerton2 0.0158674 0.00190659 8.3224 <0.00001 ***
Statistics based on the weighted data:
Unadjusted R² = 0.187614
Adjusted R² = 0.185164
Model 2: Heteroskedasticity-Corrected Estimates Using the 666 Observations 1-666
Dependent Variable: defprob
Variable Coefficient Std. Error t-statistic p-value
const 3.27636 0.233502 14.0314 <0.00001 ***
ddnew -0.23326 0.0205545 -11.3484 <0.00001 ***
ddnew2 0.00426019 0.000455815 9.3463 <0.00001 ***
Statistics based on the weighted data:
Unadjusted R² = 0.236763
Adjusted R² = 0.23446
Model 3: Heteroskedasticity-Corrected Estimates Using the 666 Observations 1-666
Dependent Variable: defprob
Variable Coefficient Std. Error t-statistic p-value
const 4.51487 0.431177 10.4710 <0.00001 ***
ddneweq -0.850019 0.126953 -6.6955 <0.00001 ***
ddneweq2 0.041599 0.00920547 4.5189 <0.00001 ***
Statistics based on the weighted data:
Unadjusted R² = 0.174565
Adjusted R² = 0.172075
Table 2-4 Regression Results for Different Distances to Default and Considering the
Leverage Ratio
Model 4: Heteroskedasticity-Corrected Estimates Using the 666 Observations 1-666
Dependent Variable: defprob
Variable Coefficient Std. Error t-statistic p-value
const 5.35262 0.43858 12.2044 <0.00001 ***
ddmerton -0.571522 0.0552708 -10.3404 <0.00001 ***
ddmerton2 0.0156459 0.00179456 8.7185 <0.00001 ***
ratio -0.810494 0.30609 -2.6479 0.00829 ***
Statistics based on the weighted data:
Unadjusted R² = 0.206049
Adjusted R² = 0.202451
Model 5: Heteroskedasticity-Corrected Estimates Using the 666 Observations 1-666
Dependent Variable: defprob
Variable Coefficient Std. Error t-statistic p-value
const 3.04878 0.211678 14.4029 <0.00001 ***
ddnew -0.231707 0.0220659 -10.5007 <0.00001 ***
ddnew2 0.0043569 0.00057132 7.6260 <0.00001 ***
ratio 0.486152 0.170649 2.8488 0.00452 ***
Statistics based on the weighted data:
Unadjusted R² = 0.259682
Adjusted R² = 0.256327
Model 6: Heteroskedasticity-Corrected Estimates Using the 666 Observations 1-666
Dependent Variable: defprob
Variable Coefficient Std. Error t-statistic p-value
const 3.52534 0.440168 8.0091 <0.00001 ***
ddneweq -0.653803 0.142902 -4.5752 <0.00001 ***
ddneweq2 0.0291605 0.0115951 2.5149 0.01214 **
ratio 0.950986 0.258266 3.6822 0.00025 ***
Statistics based on the weighted data:
Unadjusted R² = 0.14189
Adjusted R² = 0.138002
In industry, equity prices have been used for finding the correlation between
different firms. Since the transformation derived from the model proposed in this
paper yields a distance to default with better explanatory power, the asset surplus
defined in this paper should also be a better variable for explaining the default
correlation between companies.
2-4-Conclusion
In this study, a new model for corporate default has been proposed. The Merton
model, which has been widely used in industry, depends on the assumption that the
equity value is an option on the asset value of the company. This assumption is far
from reality: first, the asset value of a company is not tradable, and second, the
option approach requires an expiration date, which implicitly assumes that the debts
of the company have a fixed maturity, while in practice the debts change through
time.
In the model proposed in this paper, an asset pricing approach has been used, and
the dividend policy is assumed to follow an optimal strategy. In this model, there is
no need for the default point that creates difficulties in the Merton model. Finally,
the empirical test of the two models shows that the new model has better explanatory
power for default probabilities when the leverage ratios are added to the
regressions.
More work remains to be done. The explanatory power of the distances to default
could be tested against CDS spreads. Also, by assuming a geometric Brownian motion
for the asset surplus, a distance to default can be derived that may explain CDS
spreads better.
Chapter 3- The Importance of Simultaneous
Jumps in Default Correlation
3-1-Introduction
This section provides a brief summary and explanation of default risk and the need
for regulation in the banking system, which was the main reason for the development
of credit risk mitigation. Then credit derivatives will be briefly introduced. At the
end, a summary of the different approaches to credit modeling will be presented.
Credit Risk
Default risk is intrinsically linked to the payment obligations that the obligor has
to honor. An obligor who does not have any payment obligations does not have any
default risk either. Therefore this definition only covers the default risk of a
payment obligation, not the default risk of the obligor himself. In principle, an
obligor could fail to pay one of his obligations yet honor another, but bankruptcy
codes and contract laws prevent agents from selectively defaulting on only some of
their obligations. So we can speak of the default risk of an obligor without
specifying a particular payment obligation, because the obligor has to honor all his
payment obligations as long as he is able to. If he is not able to do so, a workout
procedure is entered: the obligor loses control of all his assets, and an independent
agent tries to find ways to pay off the creditors using the obligor's assets. The
bankruptcy code ensures that all creditors of the obligor are treated fairly and in
accordance with a determined procedure. In particular, it ensures that a default on
one obligation entails a default on all other obligations, and that default only
occurs if the obligor really cannot pay his obligations. Default almost always
entails a loss to the creditor, but obligors who are not bound by bankruptcy codes,
e.g. sovereign borrowers in countries without a properly functioning legal system,
frequently make use of the possibility to default only on selected obligations,
sometimes without being in real financial distress.
The general properties of default risk that make its quantitative modeling difficult
are the following:
1) Default events are rare.
2) They may occur unexpectedly.
3) Default events involve significant losses.
4) The size of these losses is unknown before default.
The Need for Regulation
Risk taking is normal behavior for financial institutions, given that risk and
expected return are so tightly interrelated. But the banking system is subject to
moral hazard: enjoying profit sharing is an incentive for taking more risk in the
absence of penalties. Sometimes, adverse conditions are themselves incentives to
maximize risk. When banks face serious difficulties, bankers do not care to limit
risks; in such situations, and absent any corrective action, failure becomes almost
unavoidable. By taking additional risks, banks maximize their chances of survival:
the higher the risk, the wider the range of possible outcomes, including favorable
ones. At the same time, the losses of shareholders and managers do not increase,
because of the limited liability of shareholders. In the absence of real downside
risk, it becomes rational for bankers to increase risk, so the manager gets the
upside of the bet with a limited downside. The potential for upside gain without
downside risk encourages risk taking because it maximizes the expected gain.
However, the bankruptcy of a bank has a negative externality for other banks and
financial institutions. Since banks work on the trust of people, if a bank goes
bankrupt, other banks will be in trouble too. There are several bankruptcy cases in
which other banks have set aside large sums in order to save a bankrupt bank.
Therefore, regulations have been imposed on banks lest the risk of the banking system
increase.
The first such regulation was the 1988 Basel Accord, which became the standard for
capital requirements for internationally active banks, first in the Group-of-Ten
(G10) countries and Switzerland and subsequently in more than 100 countries. The
basic idea of the accord is that banks must hold capital of at least 8% of their
total risk-weighted assets. In order to calculate this total measure of assets, each
asset is multiplied by a risk-weighting factor that, in principle, represents the
credit quality of that asset.
There were some imbalances in the 1988 BIS Accord. For example, corporate loans
received a 100% risk weight even if the borrower had a AAA rating, a capital
requirement five times that for a government bond of Turkey, a speculatively rated
OECD country.
In order to address the obvious imbalances created by the 1988 accord, a new
capital accord was set in 2001 and implemented beginning in 2005. In the new Basel
Accord, there are two approaches for assessing the credit risk: 1) standardized and 2)
internal rating-based. Table 1 shows the proposed risk weights for the standardized
approach.
The standardized approach better recognizes the benefits of credit risk mitigation
compared to the 1988 Accord. A disadvantage of the standardized approach is that
exposures to unrated obligors receive a 100% risk weight. As shown in the table,
because certain low-rated obligors carry an even higher risk weight of 150%, the
standardized approach sets up a perverse incentive for obligors to become unrated, or
for banks to lend to poor-quality unrated borrowers over certain rated borrowers.
Under the internal rating-based approach, a bank could, subject to limits and
approval, use its own internal credit ratings. The ratings must correspond to
benchmarks for one year default probabilities. The internal ratings methodology
must be recognized by the bank’s regulator and have been in place for at least three
years. Under this new accord, credit derivatives can be used for mitigating the credit
risk of bank portfolios. That is the reason why credit derivatives have grown
tremendously during the last few years.
Credit Derivatives
A credit derivative is a derivative security that has a payoff that is conditioned on
the occurrence of a credit event. The credit event is defined with respect to a
reference credit (or several reference credits), and the reference credit asset(s) issued
by the reference credit. The reference credit is an issuer whose default triggers the
credit event. A credit event is a precisely defined event, determined by negotiation
between the parties at the outset of a credit derivative. Market standards typically
specify the existence of publicly available information confirming the occurrence,
with respect to the reference credit, of bankruptcy, insolvency, restructuring, failure
to pay, and cross default. If the credit event occurs, the default payment has to be
made by one of the counterparties. Besides the default payment, a credit derivative
can have further payoffs that are not default contingent.
The credit derivative, probably one of the most important types of new financial
products introduced during the last decade, was first publicly introduced in 1992 at
the International Swaps and Derivatives Association (ISDA) annual meeting in Paris.
Within the credit derivative family there is a great variety of products: credit
default swaps (CDS), credit options, credit spread products, total rate of return
swaps, credit-linked notes (CLN), collateralized debt obligations (CDO),
collateralized loan obligations (CLO), and collateralized mortgage obligations (CMO).
The market for the credit derivative was created in the early 1990s in London and
New York, and the growth of the credit derivative market has been extraordinary
over the last decade. British Bankers’ Association (BBA) indicated that the notional
volume of credit derivatives increased globally from $180 bn in 1997 to $0.95 tn in
2002. The credit derivatives market grew by approximately 80% in 2004. The
notional of outstanding contracts rose from $3.5 trillion (Source: British Bankers
Association) in 2003 to $6.3 trillion in 2004 (Source: Bank for International
Settlements). Tavakoli Structured Finance Company estimated outstanding credit
derivative contracts would reach $8.0 trillion by the end of 2005, and $9.7 trillion by
the end of 2006. According to Deutsche Bank estimates, just the Credit Default Swap
(CDS) market notional value will top 30 trillion dollars in 2007.
The most famous credit derivative is the Credit Default Swap (CDS). The seller
of CDS agrees to pay the default payment to the buyer of CDS if default occurs. The
default payment is structured to replace the loss that a typical lender would incur
upon a credit event of the reference entity. In return, the buyer pays a regular fee
to the seller of the CDS before the default happens, so the CDS is a kind of
insurance contract. A First to Default (FtD) is an extension of a CDS to portfolio
credit risk. Instead of referencing just a single credit, an FtD is specified with
respect to a basket of N reference credits. The protection buyer pays a regular fee
to the protection seller until any default event occurs or the FtD matures. The
default event is the first default of any of the reference credits. The basket for a
typical FtD contains between two and twelve reference credits. An FtD removes most of
the default risk of a basket of credits, so the protection buyer, instead of buying N
CDSs, can buy one FtD and still remove a great portion of his credit risk. The most
important parameter of a basket default product is its correlation, which is the
topic of this paper.
Credit Risk Models
There are two types of default risk models in the literature: structural models and
reduced-form models. In reduced-form models, default is modeled as a surprise: the
default arrival is governed by a jump process whose intensity parameter is also
called the hazard rate. This hazard rate can be constant through time or allowed to
be stochastic, thereby implying a term structure of default probabilities. The hazard
rate is either estimated to fit historical probabilities or fitted to current market
data (calibration). The reduced-form approaches are well documented in contributions
by Jarrow and Turnbull (1995); Jarrow, Lando, and Turnbull (1997); Duffie and
Singleton (1999); Das and Tufano (1996); Lando (1998); Iben and Litterman (1991);
Madan and Unal (1993); Schönbucher (1997); and Zhou (1998).
The reduced form models can incorporate correlations between defaults by
allowing hazard rates to be stochastic and correlated with macro economic variables.
Duffie and Singleton (1999) modeled and simulated the correlated default times
using reduced form models. They emphasized the impact of correlated jumps in
credit quality on the performance of a large portfolio of positions. Lando (1998) also
used the Cox process for modeling the correlated default rates. The reduced form
models have a mathematical advantage, but their main disadvantage is that the range
of default correlations that can be achieved is limited as studied by Andreasen
(2001). Even when there is a perfect correlation between two hazard rates, the
corresponding correlation between defaults in any chosen period of time is usually
very low. We expect that when two companies are in the same industry, they should
have high default correlation that cannot be attained by reduced form models.
However, Jarrow and Yu (2005) showed that if a large jump in the default intensity of
a firm is allowed when another company defaults, then reduced-form models do better
at generating correlated defaults. Still, the mechanism they use is a kind of
contagion effect that should be differentiated from default correlation.
The structural approach relates the arrival of default to the dynamics of the
underlying structure of the firm, thereby giving economic significance to the
determination of the default rate. This approach was founded by Merton (1974, 1977),
who applied option theory. In his theory, the value of the firm is shared by two
broad categories of claimants: the shareholders and the debt holders. Because of
limited liability, the shareholders have a payoff that is positive whenever the face
value owed to creditors can be reimbursed and zero otherwise. The shareholders' claim
is then just a European call on the value of the assets of the firm. Correspondingly,
a bond is a claim to the face amount combined with a put on the assets of the firm
sold to the shareholders. A direct advantage of the structural approach is that
credit default is not an unpredictable event here; there is a way to see the
corporate conditions that affect the default rate. Relying on the evolution of the
value of the firm's assets gives a continuity to the evolution of the firm's credit
standing that makes the credit risk
predictable. Unfortunately, since the value of the firm is not a tradable asset, the
parameters of the structural model are difficult to estimate. Reduced form
approaches appeared mainly because of this limitation. Black and Cox (1976)
provided an important extension of the Merton model. Their model has a first
passage time structure where a default takes place whenever the value of the assets
of a company drops below a barrier. The first passage models are more realistic in
the sense that they let the default happen anytime before horizon. Other extensions of
the Merton model are provided by Geske (1997); Kim, Ramaswamy, and Sundaresan
(1993); Leland (1994); Longstaff and Schwartz (1995); Leland and Toft (1996); and
Zhou (2001a).
According to the continuous diffusion structural model, firms never default by
surprise. As Zhou (1997) argued, in reality, default can occur in both ways: firms can
default either gradually or by surprise due to unforeseen external shocks. The
philosophies behind the structural and reduced form approaches have been combined
in Zhou (2001a) using a jump diffusion model that allows both gradual and sudden
defaults. The jump diffusion approach overcomes some difficulties encountered in
traditional diffusion based pricing approach. In particular, a CDS pricing approach
based on the diffusion process produces zero credit spreads for very short maturities.
This happens because, if there is a finite distance to the default point, a continuous
process cannot reach that point in a very short period of time. This is problematic
because in reality the credit spreads would not go to zero even for contracts with
very short maturities.
Zhou (2001b) and Hull and White (2001) were the first to incorporate default
correlation into the Black and Cox first passage structural model. Zhou (2001b) found
a closed form formula for the joint default probability of two issuers, but his
results cannot easily be extended to more than two issuers. In addition, he did not
include jumps in the model.
Another line of research in modeling joint default probability has been survival
time involving copulas. This was suggested for two obligors by Li (2000) and
extended to many obligors by Laurent and Gregory (2003). Their model has been
used in industry due to its computational capabilities. But since they have assumed a
fixed correlation between each pair of obligors, the model is very restricted and also
there is no known economic rationale for the model.
In the literature, the correlation coefficient has been used for capturing the
comovements of assets. The correlation coefficient can explain all the dependence
between two random variables if they are jointly normally distributed, while the
actual joint distribution may be too complex to be captured by a normality
assumption. In this paper, comovements between firms will be explained by a
multivariate jump diffusion model. The approach is based on structural modeling: I
assume that the correlation between firms comes not only from the diffusion
components but also from simultaneous jumps in firm asset values. In order to fit the
comovements of companies, we need a model that can be estimated from market data.
In the next section, the theory of firm default will be explained. After that,
correlated defaults will be discussed. Next, the multivariate jump diffusion model
used in this research will be explained. After that, the econometric approach that is
used for parameter estimation will be shown. Finally, an empirical example of two
firms will be shown.
3-2-Firm Default
Firms default when they cannot, or choose not to, meet their obligations. Credit
events are triggered by movements of the firm’s value relative to some (random or
non-random) credit event triggering threshold. So if the firm's asset value V_t is
less than the face value of debt D_t at time t, then the firm goes bankrupt.
Consequently, a major issue is the modeling of the evolution of the firm's value and
of the firm's capital structure. This idea is the basis of the structural approach to
default risk that was founded by Merton (1974, 1977).
In the structural approach, default happens when V_t < D_t. In order to use the
structural approach, we need to know the asset value V_t and the default point D_t.
Asset values cannot be observed in the market, but since the equity prices of firms
can be found in the market, we are able to retrieve the asset values from the option
theoretic approach of Merton if we know the default point.
Finding the default point has been one of the challenges of the structural
approach in the credit risk modeling. In reality, firms often rearrange their liability
structure when they have credit problems. Hence, there is no analytic method to
derive the default point. Rather, it is estimated from a firm’s liabilities information
on its balance sheets. KMV Company has done the empirical research based on large
scale statistical studies of historical defaults, and they have found that the basic
estimation for the default point is current liability plus half of long term liabilities
(Demircubuk and Tse, 2001). Their approach has been based on defining the short
term and long term liabilities in the balance sheet data. First, minority interests and
deferred taxes are excluded from the total liabilities since those two items do not
cause default stress. But problems arise when the firms do not break down their
liabilities into current (due within one year) and long term on their balance sheets.
There are no regulations to stop firms from reporting their statements in this way,
and there can be many reasons for them to do so. However, KMV has considered
these adjustments for finding the default point estimation. Also, KMV has developed
a model in which they have segmented the liabilities into six parts and have
estimated the default point as a function of those liabilities. All in all, finding
the default point has depended on a precise analysis of accounting data. Since
accounting standards vary across countries and since firm-level data are very noisy
(because of the moral hazard problem), relying on these data has been very risky.
Since default is costly and violations of the absolute priority rule in bankruptcy
proceedings are common, in practice shareholders have an incentive to put the firm
into receivership before the asset value of the firm hits the debt value (Hanson,
Pesaran, and Schuermann, 2005). In addition, lending banks have an incentive to force
the firm to default before the asset value hits the debt threshold (Garbade, 2001).
Also, a borrower may be in a default condition, e.g. a missed coupon payment, without
going into bankruptcy; this usually happens in the bank-borrower relationship
(Lawrence and Arshadi, 1995). Therefore, default usually happens when the asset value
of the firm crosses a threshold that is higher than the default point characterized
by the liabilities of the firm. Since we have V_i = D_i + E_i, where E_i is the
equity value of the firm, we can deduce that default happens when 0 < E_i < C_i. The
threshold C_i is positive, time varying, and dependent on the firm's characteristics.
Equity values are observable for firms traded in the stock markets. We assume that
the equity prices satisfy the following process:
LogE_{i,t+1} − LogE_{i,t} = G_{i,t},
where G_{i,t} is the equity return of firm i at time t, so the equity return over a
time period H will be:
LogE_{i,t+H} − LogE_{i,t} = Log( E_{i,t+H} / E_{i,t} )
 = Log( (E_{i,t+H}/E_{i,t+H−1}) × (E_{i,t+H−1}/E_{i,t+H−2}) × ... × (E_{i,t+1}/E_{i,t}) )
 = Σ_{j=1}^{H} G_{i,t+j−1}.
The equity value distribution is dependent on the distribution function of G
functions. If we assume a functional form for G functions, the distribution function
can be estimated from the historical data of equity prices.
Estimation of the threshold line C for each firm is the next step in modeling the
default risk. As argued and elaborated in Pesaran, Schuermann, Treutler, and Weiner
(2005), accounting information is likely to be noisy and might not be all that
reliable due to information asymmetries and agency problems between managers,
shareholders, and debtholders. Moreover, the accounting-based route presents
additional challenges such as different accounting standards and bankruptcy rules in
different countries. Also, other firm-specific characteristics such as leverage, firm
age, and management quality could be important in the determination of default
thresholds, and these are quite difficult to observe. In view of these measurement
problems, Pesaran et al. (2005) have used firm-specific credit ratings for finding
the default thresholds. There are different rating agencies that rank firms and
assign them a credit rating. Moody's, S&P, FITCH, CIBS, Nationsbank, and SBC rate
the firms, and each agency has its own terminology. The rating companies use all the
accounting data and information, including the economic sector and the geographical
region where the firms operate, and they use inside information for the rating, so
the rating data are reliable for finding a firm's probability of default. If we have
the distribution function of the equity returns, we can find the threshold that
corresponds to the default rate set by the rating agency. The probability for each
rating comes as a range; for example, the probability of default for a Ba3 rating is
in the range 72-101 bp. The mean of the default rate for that range will be used for
finding the threshold line.
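As a simple illustration of this last step, the following Python sketch maps a rating-implied default probability into a threshold on the H-period equity return, assuming, purely for illustration, that the H-period log return is Gaussian with mean and variance scaled up from the daily returns; the function name and the use of the midpoint of the 72-101 bp range for Ba3 are my own assumptions.

import numpy as np
from scipy.stats import norm

def return_threshold(equity_prices, horizon_days, pd_rating):
    # Threshold c such that P(H-period log return < c) equals the rating-implied
    # default probability, under a Gaussian model for the return G functions.
    g = np.diff(np.log(equity_prices))             # daily log returns G_{i,t}
    mu_h = np.mean(g) * horizon_days               # mean of the H-period return
    sigma_h = np.std(g) * np.sqrt(horizon_days)    # std. dev. of the H-period return
    return mu_h + sigma_h * norm.ppf(pd_rating)

# Hypothetical usage for a Ba3-rated firm over one year, using the 72-101 bp midpoint:
# c = return_threshold(prices, horizon_days=252, pd_rating=0.00865)
# The corresponding equity level threshold is C = E_t * exp(c).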
3-3-Default Correlation
The time series of defaulted companies shows that historical default rates exhibit
much higher variation than simulated time series based on independent default rates.
The majority of historical default rates are far from the average default rate of the
companies. Therefore, the correlation between default rates is an important factor in
explaining the historical bankruptcies in different industries. For example, if we
have two firms A and B with default probabilities P_a and P_b, then the default
correlation can be defined as:
ρ_ab = ( P_ab − P_a P_b ) / √( P_a(1−P_a) P_b(1−P_b) ),
where P_ab is the probability of joint default. Now we can write P_ab as:
P_ab = P_a P_b + ρ_ab √( P_a(1−P_a) P_b(1−P_b) ).
If we assume P_a = P_b = p << 1, then we have:
P_ab ≈ p^2 + ρ_ab p ≈ ρ_ab p.
This shows that the correlation between defaults is an important factor in the
probability of joint default.
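A small numerical illustration of this point follows; the 1% default probabilities and the 20% default correlation below are arbitrary example values, not estimates from the paper.

import numpy as np

def joint_default_prob(p_a, p_b, rho_ab):
    # P_ab = P_a P_b + rho_ab * sqrt(P_a (1 - P_a) P_b (1 - P_b))
    return p_a * p_b + rho_ab * np.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

# With p_a = p_b = 0.01 and rho_ab = 0.20:
#   independent defaults: p**2 = 0.0001 (1 bp)
#   correlated defaults:  joint_default_prob(0.01, 0.01, 0.20) ~ 0.00208 (about 21 bp),
# which is close to rho_ab * p, as in the approximation above.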
Many banks and financial institutions are exposed to the credit risk of many
reference entities. Clearly, if they want to hedge their credit risk, buying
protection on each individual exposure is very costly and inefficient. Therefore, a
new generation of credit derivatives has been developed in order to hedge the credit
risk of a portfolio, and many new credit derivatives are now associated with a
portfolio of credit risks. A typical example is a product with payment contingent
upon the time and identity of the first or second-to-default in a given risk
portfolio. Other derivatives are instruments whose payments are contingent on the
cumulative loss before a given time in the future. Collateralized loan obligations
(CLO) and collateralized bond obligations (CBO) are common credit derivatives for
portfolios.
Modeling correlated defaults has been one of the challenges for researchers in credit
risk. Vasicek (1987, 1991) produced the first work in this area: he used a single
factor model in order to obtain the correlation in the asset values of firms. In his
model, firms default when their asset value drops below a threshold, while the asset
values are correlated across firms, which makes the defaults correlated. Reduced form
models have not been successful in modeling correlated defaults: they cannot capture
highly correlated defaults unless some kind of contagion effect is built into the
model. Even when there is a perfect correlation between two hazard rates, the
corresponding correlation between defaults in any chosen period of time is usually
very low. We expect that when two companies are in the same industry they should have
a high default correlation, which cannot be attained by reduced form models. However,
Jarrow and Yu (1999) showed that if a large jump is allowed in the default intensity
of a firm when there is a default by another company, then reduced form models behave
better in maintaining the correlated defaults. But the method used by them is a kind
of contagion effect that should be differentiated from correlation between default
rates.
Another line of research in modeling joint default probability has been survival
time involving copulas. This was suggested for two obligors by Li (2000) and
extended to many obligors by Laurent and Gregory (2003). Their model has been
used in industry due to its computational capabilities. But since they have assumed a
fixed correlation between each pair of obligors, the model is very restricted and also
there is no known economic rationale for the model.
Zhou (2001b) and Hull and White (2001) were the first to incorporate default
correlation into the Black and Cox first passage structural model. Zhou (2001b) found
a closed form formula for the joint default probability of two issuers, but his
results cannot easily be extended to more than two issuers. Neither Zhou (2001b) nor
Hull and White (2001) included jumps in their models.
Jumps have been a major factor in credit risk analysis. With jump risk, a firm can
default instantaneously because of a sudden drop in its value. Under a diffusion
process, because a sudden drop in the firm value is impossible, firms never default
by surprise. Thus, the large credit spreads of corporate bonds, especially those with
short maturities, are unexplained in the structural approach. Some recent papers have
documented the large discrepancy between the predictions of structural models and
the observed credit spreads, which is also known as the credit risk premium puzzle
(Amato and Remolona, 2003). Huang and Huang (2003) calibrated a wide range of
structural models to be consistent with the data on historical default and loss
experience. They showed that in all models credit risk only explains a small fraction
of the historically observed corporate-treasury yield spreads. Similarly,
Collin-Dufresne, Goldstein, and Helwege (2001) suggested that default risk factors have
rather limited explanatory power on variation in credit spreads, even after the
liquidity consideration is taken into account. As a result, a credit model with the
jump risk is able to explain the credit spread of bonds and CDS in a short time
horizon to maturity. The jump diffusion model is consistent with the fact that bond
prices often drop in a surprising manner at or around the time of default (Beneish and
Press, 1995). Duffie and Lando (1997) attributed this phenomenon to incomplete
accounting information: when the information is revealed to the market, bond prices
jump accordingly. Zhou (2001a) was the pioneer who showed the importance of jump risk
in the credit risk analysis of an obligor.
He implemented the simulation method to show the effect of jump risk in the credit
spread of defaultable bonds and showed that the misspecification of stochastic
processes governing the dynamics of firm value, i.e., falsely specifying a jump-
diffusion process as a continuous Brownian motion process, can substantially
understate the credit spreads of corporate bonds. Most of the papers in credit risk
produce nearly zero credit spreads at short horizons; this is not realistic and
results from not considering jump surprises in the models.
Jumps are even more important for default correlation: simultaneous negative jumps
raise the chance of simultaneous defaults, which increases the default correlation.
This paper takes simultaneous jumps into consideration via a multivariate jump
diffusion model. The next section is devoted to explaining the theoretical part of this
model.
3-4-Multivariate Jump Diffusion Model
Researchers who have studied financial markets have long noted that financial time
series exhibit unusual behavior relative to what would be expected from the Gaussian
distribution: there are many small changes and some outliers that, although
infrequent, are very important for participants in the financial market. Modeling
financial price returns as realizations of a continuous-time diffusion process plays
a central role in modern financial economics, since it simplifies hedging
calculations and derivative pricing. Although these models are very important in the
finance literature, it is not clear that, given the extreme violent movements in
financial data, they are reliable enough to model the data realistically. Jump
diffusion models add surprise jumps in financial returns to the Gaussian increments
of the diffusion part.
This section defines the affine jump diffusion model that will be the focus of this
analysis. For a given complete probability space (Ω, F, P) and the augmented
filtration {F_t : t ≥ 0} generated by a standard Brownian motion W in R^n, the state
vector Y satisfies the stochastic differential equation
dY_t = μ(Y_t, t) dt + σ(Y_t, t) dW_t + dZ_t,
where μ : D → R^n (the drift function), σ : D → R^{n×n} (the diffusion function), and
Z is a pure jump process with intensity {λ(Y_t) : t ≥ 0} and jump amplitude
distribution ν on R^n. Intuitively, the drift term μ(.) represents an instantaneous
deterministic time trend of the process, the diffusion term captures the stochastic
small increments in the process, and jumps are surprise changes in the model. The
above model is called an affine model if:
μ(Y_t, t) = K_0 + K_1 Y_t,
[ σ(Y_t, t) σ(Y_t, t)' ]_{ij} = [H_0]_{ij} + [H_1]_{ij} · Y_t,     (4.1)
λ(Y_t) = l_0 + l_1' Y_t,
where K = (K_0, K_1) ∈ R^n × R^{n×n}, H = (H_0, H_1) ∈ R^{n×n} × R^{n×n×n},
l = (l_0, l_1) ∈ R × R^n, and the jump transform ψ(c) = ∫_{R^n} exp(c·z) dν(z), for
c ∈ C^n, is known in closed form whenever the integral is well defined (Appendix A).
In order to find the probability distribution of the random variable Y, we need to
know the transition density function, which, under regularity conditions, satisfies
both the Kolmogorov forward and backward equations of the Markov process. However,
the transition density functions are not analytically derivable except for a few very
simple functional forms like the Ornstein-Uhlenbeck process. So the estimation of
the parameters is not possible when those analytical transition density functions are
not available. An alternative approach is finding the Conditional Characteristic
Function (CCF), which is defined as:
φ(u, τ | Y_t) = E[ e^{iu'Y_T} | Y_t ],   u ∈ R^N,   τ = T − t,   i = √(−1).
There is a one-to-one relationship between characteristic functions and density
functions, so if the CCF of a stochastic process is available, we have all the
information about the conditional density function of that variable.
Duffie, Pan, and Singleton (2000) showed that the CCF of an affine jump
diffusion model has a closed form, which can be written as:
φ(u, τ | Y_t) = e^{ C(τ, u) + D(τ, u)' Y_t }.
C(.) and D(.) are functions satisfying the complex-valued Riccati equations:
∂D(τ, u)/∂τ = K_1' D(τ, u) + (1/2) D(τ, u)' H_1 D(τ, u) + l_1 ( ψ(D(τ, u)) − 1 ),     (4.2)
∂C(τ, u)/∂τ = K_0' D(τ, u) + (1/2) D(τ, u)' H_0 D(τ, u) + l_0 ( ψ(D(τ, u)) − 1 ),
with boundary conditions D(0, u) = iu and C(0, u) = 0. If the parameters of the jump
diffusion model are given, we can solve for the functions C(.) and D(.) explicitly.
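To illustrate, the following Python sketch integrates the Riccati system (4.2) numerically for a univariate affine jump diffusion with normally distributed jump sizes, so that the jump transform is available in closed form. The specific parameter values, the Gaussian jump distribution, and the use of a generic ODE solver are assumptions made only for this example.

import numpy as np
from scipy.integrate import solve_ivp

# Univariate affine jump diffusion: mu(y) = k0 + k1*y, sigma(y)^2 = h0 + h1*y,
# lambda(y) = l0 + l1*y, jump sizes ~ N(mj, sj^2) so psi(c) = exp(c*mj + 0.5*c^2*sj^2).
params = dict(k0=0.05, k1=-0.5, h0=0.04, h1=0.0, l0=0.2, l1=0.0, mj=-0.1, sj=0.05)

def psi(c, mj, sj):
    return np.exp(c * mj + 0.5 * c**2 * sj**2)

def riccati_rhs(tau, y, p):
    # Right-hand side of the Riccati system (4.2) for (C, D), complex valued.
    C, D = y
    common = psi(D, p["mj"], p["sj"]) - 1.0
    dD = p["k1"] * D + 0.5 * p["h1"] * D**2 + p["l1"] * common
    dC = p["k0"] * D + 0.5 * p["h0"] * D**2 + p["l0"] * common
    return [dC, dD]

def ccf(u, tau, y_t, p=params):
    # Conditional characteristic function E[exp(i u Y_T) | Y_t] with tau = T - t.
    sol = solve_ivp(riccati_rhs, [0.0, tau], [0.0 + 0.0j, 1j * u],
                    args=(p,), rtol=1e-8, atol=1e-10)
    C, D = sol.y[0, -1], sol.y[1, -1]
    return np.exp(C + D * y_t)

# Hypothetical usage: phi = ccf(u=1.5, tau=1.0, y_t=0.0)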
3-5-Econometric Approach
The estimation of continuous-time stochastic processes has been a challenge for
econometricians and statisticians. As mentioned in the previous section, there are
only a few specific models for which maximum likelihood is feasible because explicit
closed-form transition density functions exist, and these models have not proved
popular in finance due to their unrealistically simplistic specifications. In the
multivariate framework, the estimation problem becomes even more difficult. Chan,
Karolyi, Longstaff, and Sanders (1992) used the Generalized Method of Moments (GMM)
to estimate the parameters of a univariate diffusion model after discretizing it.
Because of the bias introduced by discretization, simulation-based methods have been
developed in order to obtain consistent estimates. However, since the Euler
discretization is an approximation, the model is misspecified, causing an asymptotic
bias of its estimators that may be arbitrarily large. Indirect inference uses
simulations performed under the initial model to correct for the asymptotic bias of
the maximum likelihood (ML) estimator of the discretized model (Gourieroux, Monfort,
and Renault, 1993). Besides the intensive computation involved, these methods can
compound the estimation error and consequently may have poor finite sample properties
(Knight and Yu, 2000). Jump models are more difficult to estimate, at least by
simulation-based methods: the discontinuous sample paths create discontinuities in
the econometric objective function that have to be accommodated by rounding out the
corners, as in Andersen, Bollerslev, and Diebold (2002) and Chernov, Gallant,
Ghysels, and Tauchen (2003). Work on bipower variation measures, developed in a
series of papers by Barndorff-Nielsen and Shephard (2003a, 2003b, 2004), provides a
method to disentangle realized volatility into continuous and jump components, as in
Andersen et al. (2004), Huang and Tauchen (2004), and Zhang, Zhou, and Zhu (2005),
who use the bipower variation method to estimate the jump component. The problem with
the bipower method is that jumps are defined as surprises over a short time range
(such as intradaily data), so that small changes within a tranquil day may be
detected as jumps. In this paper, another approach, based on GMM estimation of the
characteristic function, is used.
The use of the characteristic function for parameter estimation was developed by
Feuerverger and McDunnough (1981) and Feuerverger (1990). Das and Foresi (1996) and
Bates (1996) used characteristic functions for estimation by inverting them before
estimation. Independent works by Singleton (2001) and Jiang and Knight (2001) show
that this inversion is not necessary, and we can use the characteristic function
directly for estimation. The justification for the Empirical Characteristic Function
(ECF) method is that the CF is the Fourier transform of the cumulative distribution
function (CDF), and hence there is a one-to-one correspondence between the CF and the
CDF. Another important property of the characteristic function is its uniqueness and
the fact that it carries the same information as the likelihood function. Since the
ECF uses all the information, it should be as efficient as the maximum likelihood
method. There are many cases in econometric analysis in which the likelihood function
cannot be derived analytically while the characteristic function has an explicit
functional form: the switching regression introduced by Quandt (1958), the regime
switching models introduced by Hamilton (1989), the variance gamma distribution
proposed by Madan and Seneta (1990), the stable distributions proposed by Mandelbrot
(1963), and the discrete time stochastic volatility model used in modeling exchange
rates, proposed by Ghysels, Harvey, and Renault (1996), are all examples of settings
in which characteristic functions are advantageous. While the likelihood function can
be unbounded, the characteristic function is always bounded (Grimmett and Stirzaker,
1992, Theorem 5.7.3). If two random variables have the same characteristic function,
then they have the same probability distribution. Also, all the moments of the random
variable can be derived from the characteristic function:
E[ Y_{t+1}^n | Y_t ] = (1 / i^n) ∂^n φ(u, 1 | Y_t) / ∂u^n |_{u=0}.
We can construct the moment conditions as:
h(u; θ) = e^{iu'Y_{t+1}} − φ(u, 1 | Y_t; θ),
where
E[ h(u; θ) | Y_t ] = E[ e^{iu'Y_{t+1}} − φ(u, 1 | Y_t; θ) | Y_t ] = 0.
Obviously, h satisfies E_θ[ h(u; θ) ] = 0 for all u in R^N.
The ECF estimator can be viewed as a GMM estimator in the sense of Hansen (1982), and
the parameters can be estimated from the following optimization problem:
min_θ  (1/T) [ Σ_{t=1}^{T} H(Y_t; θ) ]' W_T [ Σ_{t=1}^{T} H(Y_t; θ) ],     (5.1)
where H is a K by 1 vector in which each row corresponds to the moment condition for
one element of the vector of u values, and W_T is a positive semidefinite weighting
matrix that converges to a positive definite matrix W_0 almost surely. Under some
regularity conditions, the GMM estimator is consistent and asymptotically normally
distributed for arbitrary weighting matrices. When the system is exactly identified,
the GMM estimator does not depend on the choice of W_T. When the system is
overidentified, Hansen (1982) shows that if W_T = Σ^{-1}, the GMM estimator is
asymptotically efficient in the sense that the covariance matrix of the GMM estimator
is minimized, where Σ is the long run covariance matrix of H(Y_t; θ).
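For concreteness, here is a minimal Python sketch of how such moment conditions could be assembled on a finite grid of u values and plugged into the quadratic form (5.1). The grid, the stacking of real and imaginary parts, and the signature assumed for the conditional characteristic function (for example, the Riccati-based sketch above) are illustrative assumptions, not the estimator actually used later in the paper.

import numpy as np

def cf_moment_matrix(y, u_grid, ccf_fn, theta):
    # h(u; theta) = exp(i u Y_{t+1}) - phi(u, 1 | Y_t; theta), evaluated for each u in the
    # grid and each observation; real and imaginary parts are stacked as separate columns.
    y_t, y_next = y[:-1], y[1:]
    cols = []
    for u in u_grid:
        h = np.exp(1j * u * y_next) - np.array([ccf_fn(u, 1.0, yt, theta) for yt in y_t])
        cols.append(h.real)
        cols.append(h.imag)
    return np.column_stack(cols)           # (T-1) by 2K matrix of moment conditions

def gmm_objective(theta, y, u_grid, ccf_fn, W=None):
    # Quadratic form g_bar' W g_bar as in (5.1); identity weighting unless W is supplied.
    H = cf_moment_matrix(y, u_grid, ccf_fn, theta)
    g_bar = H.mean(axis=0)
    if W is None:
        W = np.eye(len(g_bar))
    return float(g_bar @ W @ g_bar)

# Hypothetical usage with the ccf sketch above:
# obj = gmm_objective(theta, y_series, np.linspace(0.5, 5.0, 10),
#                     ccf_fn=lambda u, tau, yt, th: ccf(u, tau, yt, th))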
The most difficult part of the ECF method is that the choice of moments must be made
before estimation. Since each realization of u yields one moment, an infinite number
of moments can be generated for the ECF method. Feuerverger and McDunnough (1981)
showed that the asymptotic variance of the GMM estimator can be made arbitrarily
close to the Cramer-Rao bound by selecting the grid sufficiently fine and extended,
which led them to conclude that their estimator was asymptotically efficient. This is
not true, since when the grid is too fine, the covariance matrix becomes singular and
the GMM objective function is not bounded; hence, the efficient GMM estimator cannot
be computed. Singleton (2001) proposed to use a
couple of u vectors on the axis, which is not necessarily optimal. Carrasco, Chernov,
Florens, and Ghysels (2005) proposed an estimation method based on a continuum of
moment conditions. They introduced the continuous counterpart of moment
conditions defined in (5.1) as:
θ̂_T = arg min_θ || K_T^{-1/2} ĥ_T(θ) ||^2,     (5.2)
where ||.|| is the Euclidean norm, which in the infinite-dimensional case becomes:
|| f ||^2 = ∫_R f(τ) f̄(τ) π(τ) dτ,     (5.3)
where f̄ denotes the complex conjugate of f and π denotes a probability density
function (pdf), which they assume to be Gaussian.
In order to estimate the parameters from (5.2), we need an estimate of K_T^{-1/2}.
Carrasco et al. (2005) suggested a two-stage estimation: first, estimate a
preliminary θ̂_T^1 from (5.2) with K̂_T = 1, and then retrieve the K̂_T matrix from
ĥ_T(τ, θ̂_T^1). The estimated parameter will then be:
θ̂_T = arg min_θ || K̂_T^{-1/2} ĥ_T(τ; θ) ||^2
     = arg min_θ Σ_j (1/λ̂_j) < ĥ_T(τ; θ), φ̂_j >^2,
where φ̂_j and λ̂_j are the eigenvectors and eigenvalues of the K̂_T matrix estimated
in the first stage. The authors are confronted with eigenvalues close to zero, the
counterpart of a near-singular covariance matrix. Therefore, they considered a
penalizing term α_T in order to solve the near-zero eigenvalue problem. The C-GMM
estimator is defined as:
θ̂_T = arg min_θ Σ_j ( λ̂_j / (λ̂_j^2 + α_T) ) < ĥ_T(τ; θ), φ̂_j >^2.
In this paper, I use the idea of the Tikhonov approximation for estimation with an
infinite set of moment conditions. Carrasco et al. (2005) tried to use all the moment
information in a C-GMM estimator, but in the end effectively discard some of the
moments through the Tikhonov approximation. In their approach, the different moments,
corresponding to different values of the u variable in the characteristic function,
are weighted by a Gaussian function as defined in (5.3). The choice of the π function
is critical in the estimation since it affects the estimated parameters. The problem
is how to choose moments out of an infinite set in such a way that more information
is incorporated in the estimation. I return to GMM theory first and explain what
infinite moments mean in that context.
The inversion of the covariance matrix is not possible when many moment conditions are implemented. If we let L be the T by K matrix of observed moments, where T is the number of observations and K is the number of moments, then we can write the covariance matrix of L as Σ. Since Σ is a positive semidefinite symmetric matrix, we can decompose it into its diagonalized form as
$$\Sigma = M \Lambda M^{-1},$$
where the columns of M are the eigenvectors of the covariance matrix Σ and Λ is a diagonal matrix whose diagonal elements are the eigenvalues (the off-diagonal elements are zero). Then, if we stack the moment conditions in a vector G, we can write the objective function of the GMM method as
$$G'\Sigma^{-1}G = G' M \Lambda^{-1} M^{-1} G = (M'G)'\,\Lambda^{-1}\,(M'G),$$
where the last equality uses the orthogonality of M. P = M'G is the new vector of moments in the projected space, and its elements are orthogonal to each other. The diagonal elements of $\Lambda^{-1}$ are the inverses of the eigenvalues, so the new objective function can be written as
$$P'\Lambda^{-1}P = \sum_{i=1}^{K} \frac{p_i^{2}}{\lambda_i} . \qquad (5.4)$$
The inversion problem of the first objective function is now transformed into the problem of close-to-zero eigenvalues in the objective function defined in (5.4). If we include more moments, the chance of having near-zero eigenvalues increases, which decreases the stability of the estimation. Eliminating those moments causes a loss of information, which makes the ECF method inefficient. In order to circumvent this problem, a regularization parameter is used. This method is called the Tikhonov approximation (Groetsch, 1993). The Tikhonov parameter is an α that changes the objective function to
$$T(\alpha) = \sum_{i=1}^{K} \frac{\lambda_i}{\lambda_i^{2} + \alpha}\, p_i^{2} .$$
As α goes to zero, $T(\alpha)$ converges to the true value of the objective function. This approximation is a trade-off between the stability and the bias of the estimator. Carrasco and Florens (2002) and Carrasco et al. (2005) used the idea of a continuum of moments in order to circumvent the inversion problem of the covariance matrix while trying to use the information in all moments, lest they lose the efficiency of the estimator. They also arrived at the Tikhonov approximation as a way to generate stable estimators.
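A minimal sketch of this diagonalize-and-regularize step (Python, for exposition only; H is assumed to be a matrix whose rows are moment observations, such as the output of the earlier ecf_moments sketch):

```python
import numpy as np

def tikhonov_objective(H, alpha):
    """Regularized objective T(alpha): damp the near-zero eigenvalue directions.

    H     : (K, T) matrix of moment observations (rows = moments, columns = dates)
    alpha : Tikhonov regularization parameter
    """
    G = H.mean(axis=1)                      # averaged moment vector
    Sigma = np.cov(H)                       # K x K covariance of the moments
    lam, M = np.linalg.eigh(Sigma)          # eigenvalues and eigenvectors
    p = M.T @ G                             # moments projected on the eigenvectors
    return float(np.sum(lam / (lam**2 + alpha) * p**2))
```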
The asymptotic distribution of the estimated parameters is
$$\sqrt{T}\,\big(\hat{\theta} - \theta_0\big) \xrightarrow{\;d\;} N\!\left(0,\; \big(M'\Sigma^{-1}M\big)^{-1}\right),$$
where M is the Jacobian of the moments, $M = \dfrac{\partial H(Y_t;\theta)}{\partial \theta'}$.
Newey and West (1987) developed a theory for testing any null hypothesis of the form $r(\theta_0)=0$, where $r(\cdot)$ is an s-element vector of continuously differentiable functions, and s, the number of restrictions, cannot exceed the number of parameters. Writing $R(\theta) = \partial r(\theta)/\partial \theta'$ for the Jacobian of the restrictions, the Wald test statistic is
$$W_T = T\, r(\hat{\theta})'\Big[ R(\hat{\theta})\big(M'\Sigma^{-1}M\big)^{-1} R(\hat{\theta})' \Big]^{-1} r(\hat{\theta}) .$$
Under the null hypothesis, it is asymptotically distributed as $\chi^2_s$.
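For exposition only, a minimal sketch of this Wald test for a linear restriction $R\theta = 0$ (so that R is also the Jacobian of r); all argument names are placeholders:

```python
import numpy as np
from scipy.stats import chi2

def wald_test(theta_hat, R, M, Sigma, T):
    """Wald statistic and p-value for the linear null hypothesis R @ theta = 0.

    M     : Jacobian of the moments with respect to the parameters
    Sigma : long-run covariance matrix of the moments
    T     : sample size
    """
    V = np.linalg.inv(M.T @ np.linalg.inv(Sigma) @ M)   # asymptotic covariance of theta_hat
    r = R @ theta_hat
    W = float(T * r @ np.linalg.inv(R @ V @ R.T) @ r)
    p_value = 1.0 - chi2.cdf(W, df=len(r))
    return W, p_value
```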
3-6-Empirical Results
In this study, I selected two companies, General Motors and Ford. Both companies are in the same industrial sector. Daily price data for both companies from August 5, 2001, to August 5, 2005, were used for the analysis (Figure 3-1).
General Motors Corporation engages in the design, manufacture, and marketing
of cars and light trucks worldwide. It operates through automotive and financing and
insurance operations (FIO) segments. The automotive segment designs,
manufactures, and markets passenger cars, trucks, and locomotives, as well as related
parts and accessories. The company offers its vehicles primarily under the brands
Chevrolet, Pontiac, GMC, Oldsmobile, Buick, Cadillac, Saturn, and HUMMER. The
FIO segment provides a range of financial services, including: consumer vehicle
financing, full service leasing and fleet leasing, dealer financing, car and truck
extended service contracts, residential and commercial mortgage services, vehicle
and homeowners' insurance, and asset-based lending. The company’s automotive-
related products are marketed through retail dealers and distributors primarily in the
United States, Canada, and Mexico.
[Figure 3-1: Daily share prices of Ford and General Motors, August 2001 through August 2005.]
Ford Motor Company manufactures and distributes automobiles and finances and rents vehicles and equipment. It operates in two sectors, Automotive and Financial Services. The Automotive sector sells cars and trucks. Ford primarily sells
Ford, Lincoln, and Mercury brand vehicles and related service parts in North
America, including the United States, Canada, and Mexico; it also sells Ford-brand
vehicles and related service parts in South America. The automotive sector also sells
Ford-brand vehicles and related service parts in Europe and Turkey, as well as in
Asia Pacific and Africa. In addition to producing and selling cars and trucks, Ford
Motor provides retail customers with a range of after-the-sale vehicle services and
products. The financial services sector primarily includes vehicle-related financing,
leasing, and insurance.
The credit rating for General Motors was downgraded to BB+ on May 24, 2005. Ford's rating was downgraded to BBB- on July 20, 2005. Both ratings were announced by Fitch. The probability of default associated with each Fitch rating is shown in Table 3.
The multivariate jump diffusion model is used for the equity price data of Ford and General Motors (Appendix B). The ECCF method has been used to estimate the parameters of the model. The grid points were selected on the two axes of the u vector: twenty points per axis, from 0.1 to 2 in steps of 0.1, were generated for each component of u (the dimension of u is two). Since the moment function is complex-valued, each grid point produces two moments, one for the real part and one for the imaginary part. In total, eighty moments were generated, and the principal component method explained earlier was used for the estimation.
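A minimal sketch of this grid construction (Python, for exposition only; the array names are placeholders):

```python
import numpy as np

pts = np.arange(1, 21) * 0.1                         # 0.1, 0.2, ..., 2.0 (20 points per axis)
u_grid = np.vstack([
    np.column_stack([pts, np.zeros_like(pts)]),      # points on the first axis of u
    np.column_stack([np.zeros_like(pts), pts]),      # points on the second axis of u
])
# 40 grid points, each contributing a real and an imaginary moment: 80 moment conditions.
```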
The Nelder-Mead simplex (direct search) optimization method was used in the MATLAB program for estimating the parameters. The algorithm belongs to the class of direct search methods, optimization algorithms that neither compute nor approximate any derivatives of the objective function. The method builds on the sequential simplex design of Spendley, Hext, and Himsworth (1962), as extended by Nelder and Mead. Search methods of this kind are also amenable to parallel, multi-directional implementations: the basic idea is to perform concurrent searches in multiple directions, and since these searches are free of interdependencies, the required information can be computed in parallel.
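The original estimation was carried out in MATLAB; purely for exposition, the sketch below shows the same kind of derivative-free Nelder-Mead minimization applied to a generic scalar objective (a toy quadratic is used in the usage example; all names are placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def fit_parameters(objective, theta0):
    """Minimize a scalar objective (e.g. the regularized GMM objective) without derivatives."""
    res = minimize(objective, np.asarray(theta0, dtype=float),
                   method="Nelder-Mead",
                   options={"maxiter": 50_000, "xatol": 1e-9, "fatol": 1e-12})
    return res.x, res.fun

# Toy usage with a quadratic stand-in objective:
theta_hat, value = fit_parameters(lambda th: np.sum((th - 0.5) ** 2), [0.0, 0.0])
```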
I assumed the jumps are normally distributed with mean µ and standard deviation σ. The estimated parameters are as below (the standard deviations are in parentheses):
$$K = \begin{pmatrix} 0.000024\ (0.000001) \\ 0.000017\ (0.000001) \end{pmatrix}, \qquad
H_0 = \begin{pmatrix} 0.000181\ (0.000051) & -0.002477\ (0.000152) \\ -0.00058\ (0.000097) & -0.002271\ (0.000165) \end{pmatrix},$$
$$\lambda_0 = 0.0240\ (0.00233), \qquad \mu = -0.000126\ (0.000008), \qquad \sigma = 0.0238\ (0.000077).$$
The parameters estimated for the restricted model are as below:
$$K = \begin{pmatrix} 0.000220\ (0.000011) \\ 0.000160\ (0.000009) \end{pmatrix}, \qquad
H_0 = \begin{pmatrix} 0.000403\ (0.000030) & -0.000545\ (0.0000402) \\ -0.00078\ (0.000019) & -0.000311\ (0.000345) \end{pmatrix}.$$
The estimated parameters show that Ford and GM grow at very slow rates, with daily returns of 0.000024 and 0.000017, respectively. The parameters of the diffusion part capture the correlation in small changes driven by two independent Brownian motions. The jump part implies that both companies experience a jump roughly every 42 days, with an expected magnitude close to zero but a relatively large standard deviation of about two percent on that day. This high dispersion makes the jump component dominant. These jumps are common to both firms; capturing them in the model allows it to accommodate surprise default risk for both companies. The significant coefficients for the jump component show that the unrestricted model is better than the restricted one. Formally, the Wald test was used to compare the two models. The Wald statistic is 10.7, with a p-value of 0.01. Therefore, the model with simultaneous jumps has better explanatory power than the model that does not consider the jump.
With 100,000 simulations, a default threshold was found for each of the ten yearly probabilities of default. Based on the simulations, the joint probability of default was then found for different horizons. The results are in Table 3-1. The comparison of joint default probabilities between the two models shows that the joint default probability of Ford and GM is underestimated by the model that does not consider simultaneous jumps. For all time horizons, the joint default probabilities of the two firms are higher when the simultaneous jump is present in the model. Banks and financial institutions need to hold capital against a 3 bp default probability in order to keep their AA rating (under the Basel II regulation). For a one-year horizon, the probability of joint default is 5 bp, which is more than 3 bp and much more than the 0.3 bp resulting from the independence assumption.
Table 3-1: Joint default probability (in bp) for Ford and General Motors

Year   Ford   General Motors   Joint default      Joint default probability   Joint default probability
                                if independent     (model with jumps)          (model without jumps)
  1      42         72               0.3                    34                          30
  2     107        189               2                      90                          83
  3     187        320               6                     164                         150
  4     274        452              12                     243                         224
  5     363        574              21                     321                         296
  6     448        685              31                     393                         366
  7     527        784              41                     466                         429
  8     600        875              52                     527                         493
  9     666        947              63                     589                         550
 10     726       1018              74                     642                         596
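The joint default probabilities above were obtained by simulating the estimated processes. Purely as an illustration of the mechanics (not the author's MATLAB code), the sketch below simulates a bivariate jump diffusion in which a single common jump, assumed here to have the same normally distributed size for both firms, hits both log asset values, and it counts the paths on which both firms breach their default thresholds within the horizon; all argument names and the first-passage convention are assumptions for exposition.

```python
import numpy as np

def joint_default_prob(drift, diff_cov, lam, mu_j, sig_j, thresholds,
                       horizon_days, n_paths=100_000, seed=0):
    """Fraction of simulated paths on which BOTH log asset values fall below
    their thresholds at some point within horizon_days (daily time steps)."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(diff_cov)              # daily diffusion covariance (assumed form)
    x = np.zeros((n_paths, 2))                       # log asset values start at 0
    hit = np.zeros((n_paths, 2), dtype=bool)
    for _ in range(horizon_days):
        dW = rng.standard_normal((n_paths, 2)) @ chol.T
        jump = rng.random(n_paths) < lam             # common jump indicator (intensity per day)
        size = rng.normal(mu_j, sig_j, n_paths) * jump
        x += drift + dW + size[:, None]              # the same jump hits both firms
        hit |= x < thresholds                        # record first passage below the barriers
    return hit.all(axis=1).mean()
```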
3-7-Conclusions
In this paper, a more general approach than a single correlation number has been used to capture the comovements of firm asset values. Jumps, as surprise changes, were shown to be a significant default factor. A jump diffusion model was the basis for taking both the Brownian motion and the jump into the default modeling. In order to estimate the model, principal component analysis with the Tikhonov approximation was used. The empirical results for two well-known firms, Ford and General Motors, show that common jumps are important default factors. If they are not estimated and incorporated in the model, the joint probability of default will be underestimated.
More work can be done as an extension of this study. The MATLAB code can be extended to cover more than two firms; considering more firms would reveal the various comovement patterns across different firms. The econometric approach should be elaborated further in order to find the most informative moments, since some information may be lost in the Tikhonov approximation. Unfortunately, first-to-default (FtD) data are not commonly available to researchers. Once such data are available, the implied correlations of firms can be extracted from them, and comparing them with these findings can validate the model.
Bibliography
Amato, Jeffrey and Eli Remolona (2003), “The Credit Premium Puzzle”, BIS
Quarterly Review, Vol. 18, 351-416.
Andersen, T.G., T. Bollerslev, and F. Diebold (2004), “Some like it Smooth, and
Some Like it Rough: Untangling Continuous and Jump Components in
Measuring, Modeling and Forecasting Asset Return Volatilities”, Unpublished
Paper: Economics Department, Duke University.
Andreasen, (2001), Credit Explosives, working paper, Bank of America.
Bajtelsmit V. L., Vanderhei J. A. (1996), “Risk Aversion and Retirement Income
Adequacy”, Positioning Pensions for 21
st
Century, Olivia S. Mitchell, Ed.
Philadelphia, University of Pennsylvania Press, 1997.
Barndorff-Nielsen, Ole and Neil Shephard (2003a), “Econometrics of Testing for
Jumps in Financial Economics using Bipower Variation”, Working Paper, Oxford
University.
Barndorff-Nielsen, Ole and Neil Shephard (2003b), “Realized Power Variation and
Stochastic Volatility”, Bernoulli, vol. 9, 243-165.
Barndorff-Nielsen, Ole and Neil Shephard (2004), “Power and Bipower Variation with
Stochastic Volatility and Jumps”, Journal of Financial Econometrics, vol. 2, 1-48.
Bates, D.,(1996), “Jumps and Stochastic Volatility: The Exchange Rate Process
Implicit in Deutschmark Options.” Review of Financial Studies, 9, 69-107.
Beneish, M., Press, E., (1995). Interrelation among events of default. Working
Paper, Duke University, Durham, NC.
Benjamin, J.D, A. Heuson, and C.F. Sirmans, (1995) “The Effect of Origination
Strategies on the Pricing of Fixed-Rate Mortgage Loans.” Journal of Housing
research, Vol.6, No. 1, pp.137-148.
Black, Fisher and John C. Cox, (1976), “Valuing Corporate Securities: Some Effects
of Bond Indenture Provisions”, The Journal of Finance, 31 (2), 351-67.
Carrasco, M. and J.P. Florens (2000), “Efficient GMM Estimation Using the
Empirical Characteristic Function”, Working Paper, CREST, Paris.
Carrasco, M., Chernov, M., Florens, J.P. and Ghysels, E. , (2005), “Efficient
Estimation of Jump Diffusion and General Dynamic Models with a Continuum of
Moment Conditions”, Working Paper.
Chan K.C.; G. Andrew Karolyi; Francis A. Longstaff; Anthony B. Sanders, (1992),
An empirical comparison of alternative models of the short term interest rate,
Journal of Finance, 1209-1227.
Chernov, M., A. R. Gallant, E. Ghysels, and G. Tauchen (2003), “Alternative
Models for Stock Price Dynamics”, Journal of Econometrics 116, 225-258.
Collin-Dufresne, Pierre, Robert Goldstein, and Jean Helwege (2003), “Is Credit
Event Risk Priced? Modeling Contagion via Updating of Beliefs”, Working Paper,
Carnegie Mellon University.
Das, Sanjiv and Silverio Foresi, (1996), “ Exact Solutions for Bond and Option
Prices with Systematic Jump”, Review of Derivative Research, Vol1, Number 1.
Das, Sanjiv and Peter Tufano, (1996),”Pricing Credit Sensitive Debt when Interest
Rates, Credit Ratings and Credit Spreads are Stochastic”, Journal of Financial
Engineering, 5 (2), June.
Demircubuk, N. and Tse, T., (2001), “Default Point Estimation”, KMV Company.
Duffie, D. and Lando, D., (1997), “Term Structure of Credit Spreads with Incomplete Accounting Information”, Working Paper, Stanford University, Stanford, CA.
Duffie D., R. Liu and A. M. Poteshman (2006), “ How American Put Options Are
Exercised”, Working Paper at University of Illinois at Urbana Champaign.
Duffie, Darrel and Ken Singleton, (1999), “Modeling Term Structure of Defaultable
Bonds”, Review of Financial Studies, Special 1999, 12 (4), 687-720.
Duffie, Darrel, J. Pan and Ken Singleton, (2000), “Transform Analysis and Option
Pricing for Affine Jump-Diffusions”, Econometrica, 68, 1343-1376.
Engstrom, M. , L. Norden, and A. Stromberg, (2000) “ Early Exercise of American
Put Option: Investor Rationality on the Swedish Equity Options Market”, Journal of
Future Market, 20 (2), 167-188.
Eom, Y., J. Helwege and J. Huang, (2004), “ Structural Models of Corporate Bond
Pricing: An Empirical Analysis”, Review of Financial Studies 17,499-544.
Feuerverger, A. (1990), “An efficiency result for the empirical characteristic function
in stationary time-series models”, The Canadian Journal of Statistics, 18,155-161.
Feuerverger, A. and P. McDunnough (1981), “On the Efficiency of Empirical
Characteristic Function Procedures”, J. R. Statis. Soc. B, 43, 20-27.
Finucane, J. T. (1997) “ An Empirical Analysis of Common Stock Call Exercise: A
Note”, Journal of Banking and Finance, 21, 563-571.
Garbade, Kenneth (2001). Pricing Corporate Securities as Contingent Claims,
Cambridge, MA: MIT Press.
Geske, Robert, (1977), “The Valuation of Corporate Liabilites as Compound
Options”, Journal of Financial and Quantitative Analysis, pp. 541-52.
Ghysels, E., Harvey, A. C., Renault, E. (1996), “Stochastic Volatility. In: Maddala
G. S. , Rao, C. R., eds. Handbook of Statistics, Vol 14, Statistical Methods in
Finance. Amsterdam: North-Holland, pp 119-191.
Gourieroux, C., A. Monfort and E. Renault (1993), “Indirect Inference”, Journal of
Applied Econometrics , 8, 85-118.
Groetsch, C. (1993), Inverse Problems in the Mathematical Sciences, Viewweg,
Wiesbaden.
Grimmet, GR. and David Stirzaker (2001), “Probability and Random Processes”,
Oxford University Press.
Hakim, S., M. Rashidian and E. Rosenblatt, (1999) “Measuring the Fallout Risk in
the Mortgage Pipeline”, Journal of Fixed Income; Sep 1999; 9, 2; page 62.
Halek M., Eisenhauer J. (2001), “Demography of Risk Aversion”, The Journal of
Risk and Insurance, Vol. 68, No. 1, 1-24
Hamilton, J. D. (1990), “A New Approach to the Economic Analysis of
Nonstationary Time Series and the Business Cycle”, Econometrica, 57:357-384.
Hansen, L.P., (1982), “Large Sample Properties of Generalized method of Moments
Estimator”, Econometrica 50:1029-1054.
Hanson, S. , Pesaran, M.H and Schuermann, T., (2005), “The Scope for Credit Risk
Diversification”, Working Paper, University of Cambridge.
Hinz McCarthy and Turner (1996), “Are Women Conservative Investors? Gender
Differences in Participant-directed Pension investments”, Positioning Pensions for
21st Century, Olivia S. Mitchell, Ed. Philadelphia, University of Pennsylvania Press,
1997.
Huang, Xin and Ming Huang (2003), “How Much of the Corporate-treasury Yield
Spread is Due to Credit Risk?”, Working Paper, Penn State University.
Huang, Xin and George Tauchen (2004), “ The Relative Contribution of Jumps to
Total Price Variance”, Working Paper, Duke University.
Hull, J., and White, A., (2001), ”Valuing Credit Default Swaps II: Modeling Default
Correlations”, Journal of Derivatives, Vol. 8, No. 3, 12-22.
Hull, J., and White, A., (2005), ”The Valuation of Correlation-Dependent Credit
Derivatives Using a Structural Model”, Working Paper, Joseph L. Rotman School of
Management, Vol. 8, No. 3, 12-2
Iben, Th. And R. Litterman, (1991), “Corporate Bond Valuation and the Term
Structure of Credit Spreads”, Journal of Portfolio Management, 17 (3), Spring, 1991,
52-64.
Jarrow, R. and Stuart M. Turnbull, (2000), “Pricing Derivatives on Financial Securities Subject to Credit Risk”, Journal of Finance, 50 (1), March, 53-85.
Jarrow, Robert, David Lando and Stuart Turnbull,(1997) , “A Markov Model of
Term Structure of Credit Spread”, Review of Financial Studies, 10 (2), Summer.
Jarrow, Robert, David Lando and Fan Yu, (2005), “Default Risk and Diversification:
Theory and Empirical Implications”, Mathematical Finance, 15(1), 1-26.
Jiang, George and John Knight (2001), “A Nonparametric Approach to the
Estimation of Diffusion Process, with an Application to a Short-Term Interest Rate
Model”, Econometric Theory, 13,615-645.
Jinakoplos N. A., Bernasek A. (1998), “Are Women more Risk Averse?”, Economic
Inquiry, 36(4), 620-630.
Jones, E., Mason, S. and E. Rosenfeld, (1984), “Contingent Claims Analysis of
Corporate Capital Structures: An Empirical Investigation”, Journal of Finance 39,
611-627.
Kim, I. J., Ramaswamy, K., and Sundaresan, S. M.,(1993) “Valuation of Corporate
Fixed-Income Securities”, Financial Management, p.117-131.
Knight, J.L. and J. Yu (2000), “Empirical Characteristic Function Estimation in
Time Series”, Working Paper, University of Western Ontario.
Lando, D., (1998), “On Cox Processes and Credit Risk Bonds”, Review of
Derivatives Research, 2, 99-120.
Laurent, J-P and J. Gregory, (2003), “Basket Default Swaps, CDO’s and Factor
Copulas,” Working Paper, ISFA Actuarial School, University of Lyon.
Lawrence, Edward C. and Nasser Arshadi (1995), “A Multinomial Logit Analysis of
Problem Loan Resolution Choices in Banking”, Journal of Money, Credit and
banking 27, 202-216.
Leland, Hayne E., (1994), “Corporate Debt Value, Bond Covenants and Optimal
Capital Structure”, Journal of Finance, 49 (4), September, 1213-52.
Leland, H. E. and K. B. Toft, (1996), “Optimal Capital Structure, Endogenous
Bankruptcy and the Term Structure of Credit Spreads”, Journal of Finance, 51 (3),
July, pp. 987-1019.
Li, D.X., (2000), “On Default Correlation: A Copula Approach,” Journal of Fixed
Income, Vol 9, March, p. 43-54.
Longstaff, F. and E. Schwartz, (1995), “A Simple Approach to Valuing Risky Fixed
and Floating Rate Debt”, Journal of Finance, Vol 50, No. 3., 789-819.
Madan, D. B., Senata, E. (1990), “The Variance Gamma (VG) Model from Share
Market Returns”, Journal of Business 63:511-524.
Madan, D. and H. Unal, (1993), “Pricing of risks of Default”, Working paper,
College of Business, University of Maryland.
Mandelbrot, B. (1963), “The Variations of Certain Speculative Prices”, Journal of
Business 36:394-419.
McFadden, D. L. (1974), “Conditional Logit Analysis of Qualitative Choice
Analysis,” in Frontiers in Econometrics, ed. P.Zarembka. New York: Academic
Press, 105-142.
Merton, Robert C., (1974), “On the Pricing of Corporate Debt: The Risk Structure of
Interest Rates”, The Journal of Finance, 29, May, 449-70.
Merton, Robert C., (1977), “On the Pricing of Contingent Claims and the
Modigliani-Miller Theorem”, Journal of Financial Economics, 5, 241-9.
Newey, Whitney and Kenneth D. West, (1987), “Hypothesis Testing with Efficient
Method of Moments Estimation”, International Economic Review, Vol28, No.3.
Ogden, J. (1987), “Determinants of Ratings and Yields on Corporate Bonds: Tests of
the Contingent Claims Model”, Journal of Financial Research, 10,329-339.
Overdahl J. A. and P. G. Martin (1994), “Exercise of Equity Options: Theory and
Empirical Tests”, Journal of Derivatives, 2, 38-51.
Pesaran, M.Hashem, Til Schuermann, Bjorn-Jakob Treutler and Scott M. Weiner
(2005), “Macroeconomic Dynamics and Credit Risk: A Global Perspective.”
Forthcoming, Journal of Money, Credit and Banking; available at Wharton Financial
Institution Centre Working Paper #03-13B.
Poteshman A. M. and V. Serbin (2003), “Clearly Irrational Financial market
Behavior: Evidence from the Early Exercise of Exchange Traded Stock Options”,
Journal of Finance, 58, 37-70.
Quandt, R. E, “The Estimation of the Parameters of a Linear Regression System
Obeying Two Separate Regimes, “Journal of the American Statistical Association,
53,873-80.
Riley W. B., Chow K. V. (1992), “Asset Allocation and Individual Risk Allocation”,
Financial Analyst Journal, (Nov./Dec.) 32-37
Rosenblatt, E., and J. Vanderhoff. (1992) “The Closing Rate on Residential
mortgage Commitments: An Econometrics Analysis.” Journal of Real Estate
Finance & Economics.
Shreve, S. E., J. P. Lehoczky and D. P. Gaver (1984), “Optimal Consumption for
General Diffusions with Absorbing and Reflecting Barriers”, Siam J. Control and
Optimization, Vol 22, No.1, January 1984.
Schönbucher, P., (1997), “The Term Structure of Defaultable Bond Prices”, Working
Paper,University of Bonn.
Schönbucher, P., (2003), Credit Derivative Pricing Models, John Wiley & Sons.
Singleton, K. (2001), “Estimation of Affine Pricing Models Using the Empirical
Characteristic Function”, Journal of Econometrics, 102,111-141.
Spendley, W., Hext, G., and Himsworth, F., (1962), “Sequential application of
simplex design in optimization and evolutionary operation”, Technometrics.
Vasicek, O. (1987), “Probability of Loss on Loan Portfolio”, KMV Corporation.
Vasicek, O. (1991), “Limiting Loan Loss Probability Distribution”, KMV
Corporation.
Zhang, B., Zhou, H. and Zhu, H. (2005) ,“ Explaining Credit Default Swap Spreads
with Equity Volatility and Jump Risks of Individual Firms”, Working Paper, KMV
Corporation.
Zhou, Chunsheng, (1997), “A Jump-Diffusion Approach to Modeling Credit Risk
and Valuing Defaultable Securities”, Working Paper, Federal Reserve Board,
Washington, 47pp.
Zhou, C., (2001a), “The Term Structure of Credit Spreads with Jump Risk”, Journal
of Banking and Finance, Vol.25, 2015-2040.
Zhou, C., (2001b), “An Analysis of Default Correlations and Multiple Defaults”,
Review of Financial Studies, Vol. 14, No. 2, 555-576.
Appendix A- Jump Transform
The jump transform is defined as
$$\psi(c) = \int_{\mathbb{R}^n} \exp(c \cdot z)\, d\nu(z), \qquad c \in \mathbb{C}^n,$$
where ν is the jump-size distribution, whose form may depend on the state variables. If we assume the jump size is normally distributed as $N(\mu, \Sigma)$, then
$$\psi(c) = \int_{\mathbb{R}^n} (2\pi)^{-n/2}\, |\Sigma|^{-1/2} \exp\!\Big( c'z - \tfrac{1}{2}(z-\mu)'\Sigma^{-1}(z-\mu) \Big)\, dz .$$
Completing the square in the exponent gives
$$c'z - \tfrac{1}{2}(z-\mu)'\Sigma^{-1}(z-\mu) = -\tfrac{1}{2}\,(z-\mu-\Sigma c)'\,\Sigma^{-1}\,(z-\mu-\Sigma c) + c'\mu + \tfrac{1}{2}\, c'\Sigma c .$$
The remaining integral is that of a normal density with mean $\mu + \Sigma c$ and covariance $\Sigma$, which integrates to one. Therefore,
$$\psi(c) = \exp\!\Big( c'\mu + \tfrac{1}{2}\, c'\Sigma c \Big).$$
Appendix B- Characteristic Function Assumptions
In this study, the following assumptions have been used for the parameters of the jump diffusion model defined in (4.1): $K_1 = 0$, $H_1 = 0$, $l_1 = 0$; the time intervals are equally spaced; and the jumps are i.i.d. normally distributed. Therefore, the complex-valued Riccati equations become
$$\frac{\partial D(\tau, U)}{\partial \tau} = 0, \qquad
\frac{\partial C(\tau, U)}{\partial \tau} = K_0'\, D(\tau, U) + \tfrac{1}{2}\, D(\tau, U)'\, H_0\, D(\tau, U) + l_0\,\big[ \psi\big(D(\tau, U)\big) - 1 \big],$$
with boundary conditions $D(0, U) = iU$ and $C(0, U) = 0$. From the assumptions, we can solve the above differential equations as
$$D(\tau, U) = iU, \qquad
C(\tau, U) = \tau \times \Big[ K_0'(iU) + \tfrac{1}{2}(iU)' H_0 (iU) + l_0\big( \exp\!\big( (iU)'\mu + \tfrac{1}{2}(iU)'\Sigma(iU) \big) - 1 \big) \Big].$$
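Purely as an illustration, a minimal sketch of evaluating this closed-form conditional characteristic function (Python, for exposition only; parameter names mirror the notation above, and the jump transform is the Gaussian one from Appendix A):

```python
import numpy as np

def model_cf(u, y, K0, H0, l0, mu_j, Sigma_j, tau=1.0):
    """Conditional CF phi(u, y) = exp(C(tau, u) + i u' y) under the stated assumptions."""
    iu = 1j * np.asarray(u, dtype=float)
    psi = np.exp(iu @ mu_j + 0.5 * iu @ Sigma_j @ iu)        # Gaussian jump transform
    C = tau * (K0 @ iu + 0.5 * iu @ H0 @ iu + l0 * (psi - 1.0))
    return np.exp(C + np.asarray(y) @ iu)                    # works for a single y or an array of y's
```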
Abstract
Managing pipeline risk has been a challenge in the mortgage industry. Any locked loan is like a put option on a callable bond that is given to the customer for free. The mortgage bank has to hedge its risk against interest rate changes, but the proportion of loans that are eventually funded is unknown beforehand. In this study, we extend the destinations of a locked loan to four different states: closing, canceling, extending, and renegotiation. We find the probability of exiting to each destination as a function of market price changes and other borrower attributes. Modeling these probabilities helps the mortgage bank hedge its pipeline risk more efficiently. In addition, the different degrees of risk aversion of various borrowers, an interesting topic in finance theory, are observed in this study: women, married couples, and younger people show more risk aversion than other groups. The Merton model is then challenged by introducing a new model for corporate defaults; empirically, the new model, which drops Merton's option-theoretic approach, has better explanatory power for default probabilities. Jumps are also shown to be an important component of default correlation.