Slide 1: Lessons from Implementations of Basel II and for Solvency II - Credit Rating Models for the Banking Book of Banks
Sydney, December 11, 2006
Slide 2: Module Corporates + Banks (Bank A)
- Scorecard-based rating systems
- Estimation of individual PDs
- Use of a master scale to transform individual PDs into rating grades (sketched below)
- Shadow rating, i.e. external ratings serve as the benchmark
- Estimation of PDs for each external rating grade based on the default history of the rating agencies
- Corporates: division into 12 submodules - six regions, each split into stock-exchange quoted and non-quoted companies:
  1. Germany
  2. North America
  3. Great Britain/Ireland
  4. Rest of the World - Developed Countries
  5. Rest of the World - Emerging Markets
  6. Rest of the World - Other Countries
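The master-scale step can be illustrated in a few lines of code. This is a minimal sketch only: the grade boundaries below are invented, since Bank A's actual master scale is not disclosed in the presentation.

```python
import bisect

# Hypothetical master scale: upper PD bound per rating grade. Bank A's
# actual boundaries are not given in the presentation.
GRADE_UPPER_PD = [0.0003, 0.0006, 0.0012, 0.0025, 0.005,
                  0.01, 0.02, 0.04, 0.08, 0.20, 1.0]   # grades 1..11

def pd_to_grade(pd):
    """Map an individual PD to the rating grade whose PD band contains it."""
    return bisect.bisect_left(GRADE_UPPER_PD, pd) + 1

print(pd_to_grade(0.0004))  # -> grade 2
print(pd_to_grade(0.15))    # -> grade 10
```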
Slide 3: Development Data Corporates
Counts are total observations, with the number carrying an external rating in parentheses.

Submodule | Observations | Stock-Exchange Quoted | Non-Quoted
Total | 10821 (2119) | 3071 (1568) | 7750 (551)
Germany | 6548 (481) | 1147 (336) | 5401 (88)
North America | 1120 (713) | 625 (515) | 495 (42)
UK/Ireland | 654 (256) | 324 (167) | 330 (60)
Rest of the World - Developed Countries | 1418 (465) | 667 (345) | 751 (63)
Rest of the World - Emerging Markets | 935 (148) | 259 (54) | 676 (42)
Rest of the World - Other Countries | 146 (56) | 49 (32) | 97 (14)
Slide 4: Scoring Criteria Used for Ranking
Quantitative part of the score: use of vendor models (Moody's KMV)
- Use of RiskCalc results based on balance-sheet figures
- For stock-exchange quoted companies, additionally the KMV EDF (expected default frequency) based on the stock price
Qualitative part of the score
- 5 to 6 qualitative factors, depending on the region
- Discrete response categories
- Transformation of responses into an EDF using the estimated PDs of the external benchmark ratings
Slide 5: Calibration - General Approach
- Determination of a PD(score) function
- Benchmark: external ratings and their estimated PDs
- Database as shown in slide 3: only data including an external rating can be used for calibration
- First guess: linear regression between the score and the logarithmic PD of the external rating (sketched below)
- Manual adjustments resulting in a stepwise linear function PD(score)
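The "first guess" step can be sketched as an ordinary least-squares fit of the logarithmic external-rating PD against the internal score. All data and variable names below are illustrative and not taken from the institute's model.

```python
import numpy as np

# Illustrative calibration data: internal scores and the PDs attached to
# the obligors' external benchmark ratings (values are made up).
score  = np.array([-80.0, -40.0, -10.0, 5.0, 20.0, 45.0, 60.0])
pd_ext = np.array([0.20, 0.08, 0.03, 0.015, 0.007, 0.002, 0.0008])

# First guess: linear regression of log(PD) on the score.
slope, intercept = np.polyfit(score, np.log(pd_ext), deg=1)

def pd_first_guess(s):
    """PD implied by the fitted line, before any manual adjustment."""
    return float(np.exp(intercept + slope * s))

print(pd_first_guess(0.0))
```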
Slide 6: Calibration
Representation of the calibration dataset in the institute's documentation - submodule Germany, non stock-exchange quoted companies.
Slide 7: Calibration
- The score distribution shows a region of low scores that is not covered by the data used for calibration.
- Red: data used for estimation; grey: whole dataset.
Slide 8: Calibration
- Extending the figure from slide 6 to the whole range of existing scores reveals the lack of calibration data for scores below -120; the PD(score) function is therefore obtained via extrapolation (one possible reading is sketched below).
- Remark: the lower and upper bounds of the x-axis correspond to the minimum and maximum score values in the dataset.
- Yellow line: result of the linear regression; grey line: result of the manual adjustment; grey dotted line: maximum PD of the model.
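One possible reading of the manually adjusted curve is a stepwise-linear function in log-PD space, extrapolated flat where no calibration data exist and capped at the maximum model PD. The knot positions are purely illustrative assumptions; the 20 % cap is borrowed from the North America example on slide 10 and may differ per submodule.

```python
import numpy as np

# Illustrative knots of the manually adjusted, stepwise-linear curve in
# log-PD space; the institute's actual knots are not published.
KNOT_SCORE = np.array([-120.0, -60.0, 0.0, 70.0])
KNOT_LOGPD = np.log([0.20, 0.08, 0.01, 0.0005])
MAX_PD = 0.20   # maximum PD of the model (assumed)

def pd_adjusted(score):
    # np.interp holds the boundary values constant outside the knot range,
    # so scores below -120 (no calibration data) receive the capped PD.
    log_pd = np.interp(score, KNOT_SCORE, KNOT_LOGPD)
    return min(float(np.exp(log_pd)), MAX_PD)

print(pd_adjusted(-150.0))  # -> 0.2, the maximum PD of the model
```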
Slide 9: Calibration
Second example: submodule North America, non-quoted companies.
- Remark: the lower and upper bounds of the x-axis correspond to the minimum and maximum score values in the dataset.
- Yellow line: result of the linear regression; grey line: result of the manual adjustment; grey dotted line: maximum PD of the model; transparent circles: airlines.
Slide 10: Calibration
In this submodule, extrapolation and the restriction of the model PD to 20% result in a rating distribution in which nearly 15% of the obligors are assigned to the lowest rating grade "21".
Figure: rating distribution, submodule North America, non-quoted companies.
Slide 11: Calibration
- The distribution of scores within the lowest rating grade "21" may suggest that there is a differentiation of risk in this grade which is not reflected by the rating system.
- For comparison: rating grades 1 to 20 cover a score range from -60 to 70.
Slide 12: Definition of Default
Point of view of the institute's model validation team: the definitions of default according to the rating agencies and according to Basel II are almost identical.
Argumentation:
- Similar verbal definitions
- Backtesting with internally observed defaults delivers no statistical evidence of PD underestimation (binomial test based on a sample containing 14 defaults; sketched below) - an indirect argument that the definitions are similar
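The binomial backtest mentioned above can be reproduced in a few lines. Apart from the 14 observed defaults, the portfolio size and forecast PD below are invented for illustration.

```python
from scipy.stats import binom

# One-sided binomial backtest: are the observed defaults compatible with
# the forecast PD? The 14 defaults come from the slide; portfolio size
# and forecast PD are assumptions.
n, observed_defaults, forecast_pd = 1000, 14, 0.012

p_value = binom.sf(observed_defaults - 1, n, forecast_pd)  # P(X >= observed)
print(f"p-value = {p_value:.3f}")  # no rejection -> no evidence of PD underestimation
```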
Slide 13: Definition of Default
Point of view of the audit team: the definitions of default according to the rating agencies and according to Basel II are different.
Argumentation:
- Rating agencies are not able to observe all criteria belonging to the Basel II definition of default (asymmetric information)
- There are even differences between the default definitions of the rating agencies themselves, e.g. Moody's refers primarily to rated bonds rather than to other liabilities such as bank loans
Slide 14: Definition of Default
Analysis of the validation data:
- 400 datasets carry a default flag
- 53 of these include an external rating
- Of these 53, the external rating reflects a default state in only 14 cases
The ratio 53/14 is an indication that there are differences between the default definitions.
Slide 15: Definition of Default
However, the ratio 53/14 overestimates the effect:
- Rating agencies may react only after the institute has observed a default (time delay)
- The credit officer does not necessarily update the information about the external rating for internally defaulted obligors
Further analysis performed by the institute suggests a scaling factor of about 1.2 between internal and external default rates for this sample (the mechanics are sketched below).
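Applying such a scaling factor to agency-based PDs is straightforward. The sketch below only illustrates the mechanics; it is not the institute's actual implementation.

```python
# Mechanical application of a scaling factor between the internal (Basel II)
# and the external (agency) default definition; 1.2 is the figure quoted
# for this sample, everything else is illustrative.
SCALING_FACTOR = 1.2

def internal_pd(external_pd):
    """Translate an agency-based PD into an internal-definition PD."""
    return min(external_pd * SCALING_FACTOR, 1.0)

print(internal_pd(0.05))  # ~ 0.06
```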
Slide 16: Module Global Corporates & Banks (Bank B)
- Expert systems
- Scorecard-based rating system
- Use of a master scale with fixed PDs
- 26 rating grades (iAAA to iD), of which 7 are default grades (iCC+ and below)
- Benchmark: judgement of a group of Senior Credit Officers
- Shadow rating with small development datasets
Slide 17: Analysis of the Data (Global Corporates)
Distribution of obligors across rating grades of iCC+ and below:
- Not weighted
- Exposure-weighted (possible for data newer than 2004)
Conclusion: the rating classes do not separate defaulted from non-defaulted obligors.
Slide 18: Migration Matrices
Migration matrix 2003-2004 (a tabulation sketch follows below):
- Strange migrations out of class iAAA (and also into class iAAA, which is not shown in the table)
- Further analysis showed contamination with so-called facility ratings, which contain information about the transactions (collateral/cash coverage)
- Severe in historical data, partially even in recent ratings
Conclusion: a material share of the ratings contains LGD components.
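A migration matrix of the kind analysed here can be tabulated from two aligned rating snapshots as sketched below. The shortened rating scale and the sample migrations are illustrative.

```python
import numpy as np

def migration_matrix(grades_t0, grades_t1, labels):
    """Row-normalised one-year migration matrix from two aligned snapshots."""
    idx = {g: i for i, g in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    for g0, g1 in zip(grades_t0, grades_t1):
        counts[idx[g0], idx[g1]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Shortened, illustrative version of the internal scale iAAA ... iD.
labels = ["iAAA", "iAA", "iA", "iBBB", "iBB", "iB", "iCCC", "iD"]
m = migration_matrix(["iAAA", "iAAA", "iA", "iBB"],
                     ["iA",   "iBBB", "iA", "iD"], labels)
print(m[0])  # migrations out of iAAA; unusually large jumps would show up here
```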
Slide 19: Module Corporates (Bank C) - Agenda
1. Initial situation: model development process (MEP)
2. Design of the rating system "Corporates"
   2.1 Pooling standards
   2.2 Part of past (quantitative part)
   2.3 Part of future (qualitative part)
   2.4 Creditworthiness rating
   2.5 Support/burden and transfer stop
3. Validation
Slide 20: 1. Initial Situation - MEP Basic Procedure (Pool Project)
- Data used: quantitative ratios from the annual balance sheet and qualitative ratios (questionnaires), plus default information
- Transformation of the data into risk points between 0 and 100; a higher value means higher risk
- Determination of the weights with which these risk points enter the total score (using logistic regressions and expert adjustment)
- Estimation of the PD allocated to a score with a logistic regression (sketched below)
- Classification of these individual PDs into a master scale
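The last two MEP steps (PD per score via logistic regression, then classification into a master scale) can be sketched as follows. The simulated pool data and the use of scikit-learn are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated pool data standing in for the real development sample:
# total score per borrower (weighted risk points, 0-100, higher = riskier)
# and the observed default flag.
rng = np.random.default_rng(0)
score = rng.uniform(0, 100, size=5000)
true_pd = 1 / (1 + np.exp(-(0.06 * score - 6)))
default = (rng.uniform(size=5000) < true_pd).astype(int)

# Estimation of the PD allocated to a score via logistic regression.
model = LogisticRegression().fit(score.reshape(-1, 1), default)
print(model.predict_proba([[30.0], [80.0]])[:, 1])  # individual PDs,
# which would subsequently be classified into the master scale
```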
Slide 21: Data Base for Model Development and Validation
Slide 22: 1. Initial Situation - MEP: Poor Data Quality of the Ratios
- Ratios from the annual balance sheet are characterized by numerous and extreme outliers
- In approx. 30% of all observations at least one ratio lies outside the 1% or 99% quantile
- Ratios of the qualitative section are in some cases significantly outside the admissible range
- Examples are given on the subsequent slides; a possible quantile-based treatment is sketched below
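One standard remedy for such outliers is capping the ratios at the quoted quantiles before transformation. The deck only reports the data-quality problem, so the sketch below is a generic illustration, not the pool project's actual treatment.

```python
import numpy as np

def cap_at_quantiles(ratio, lower_q=0.01, upper_q=0.99):
    """Cap a balance-sheet ratio at its 1 % / 99 % quantiles to limit the
    influence of extreme outliers (a generic remedy; not necessarily the
    treatment chosen in the pool project)."""
    lo, hi = np.quantile(ratio, [lower_q, upper_q])
    return np.clip(ratio, lo, hi)

ratios = np.array([-500.0, 2.1, 3.4, 0.8, 7.9, 4.2, 1200.0])
print(cap_at_quantiles(ratios))  # extremes are capped at the sample quantiles
```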
Slide 23: Equity Capital Ratio - 0.5% to 99.5% Quantile
Slide 24: Transformation of the Quantitative Ratios into Risk Points
- Fixing five anchor parameters (0, 25, 50, 75, 100) and the value ranges allocated to these five parameters (a sketch of such a transformation follows below)
- Generation of clusters depending on regions and sectors
- Clustering has a strong impact on the model development process
- Clustering is based on profound expert know-how (e.g. external consultancy), especially for foreign clusters: external experts
- A regular check of the clustering is required
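The transformation described above amounts to a cluster-specific piecewise-linear mapping of a ratio onto the five risk-point anchors. The equity-ratio bounds in the sketch are invented for illustration; the pool project's real limits are not given.

```python
import numpy as np

# Five fixed risk-point anchors and cluster-specific value ranges; the
# equity-ratio bounds below are invented, not the pool project's figures.
RISK_POINTS = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
EQUITY_BOUNDS = {  # higher equity ratio -> lower risk
    ("Germany", "capital-intensive"): np.array([0.40, 0.25, 0.15, 0.08, 0.00]),
}

def risk_points(equity_ratio, cluster):
    bounds = EQUITY_BOUNDS[cluster]
    # np.interp needs increasing x values, so both arrays are reversed.
    return float(np.interp(equity_ratio, bounds[::-1], RISK_POINTS[::-1]))

print(risk_points(0.30, ("Germany", "capital-intensive")))  # low risk points
print(risk_points(0.02, ("Germany", "capital-intensive")))  # high risk points
```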
Slide 25: Equity Capital Ratio According to Clustering
- High absolute frequency of 100 risk points for non-defaulted borrowers
- Region: Germany, sector: capital-intensive
Slide 26: Agenda
1. Initial situation: model development process (MEP)
2. Design of the rating system "Corporates"
   2.1 Pooling standards
   2.2 Part of past (quantitative part)
   2.3 Part of future (qualitative part)
   2.4 Creditworthiness rating
   2.5 Support/burden and transfer stop
3. Validation
Slide 27: 2.1 Pooling Standards
1. Population: the LRP switches between gross and net liability according to the economic point of view; the method used by the pool partners is unknown to the LRP
2. Completeness of the data set: different definitions of an input field at the LRP and at a pool partner (optional or compulsory entry) can result in different filling rates of the pool input; example: key figure "short-range supplier credit target"
Binding guidelines for an agreement on a consistent procedure for all pool partners would be meaningful.
Slide 28: 2.2 Part of Past (Quantitative Section)
Integration of new quantitative ratios:
- The current inventory system "Vorgangsbearbeitung Kredit" (VK) only contains the annual balance-sheet items that are an integral part of the current features
- Integrating new ratios based on other annual balance-sheet items is not readily possible
- This means abandoning the historization of the non-rating-relevant items
Slide 29: 2.3 Part of Future (Qualitative Section)
Questions:
1. Is the assignment of risk points for the qualitative ratios retraceable?
2. Are similar issues rated consistently?
Analysis based on internal reports and systems: tendency towards central values, no consistent rating, retraceability often not possible.
Slide 30: 2.3 Part of Future
1. Retraceability: variable "strengths and weaknesses profile" (weight in total score: 30%)
- Set of six subcriteria with different lists of checkable items
- Checklists shall provide retraceability but are not mandatory; often they are not filled
- The granularity of the retraceability is limited
Slide 31: 2.3 Part of Future
1. Retraceability: variable "strategic planning" (weight in total score: 12%)
- Over 20% of the data show an assignment of 50 risk points (RP)
- In most cases the rationale for the assignment of 50 RP is not clear: a "real" assessment based on information, or the special rules existing for large corporates and their subsidiaries?
- Tendency towards central values: in most cases 50 RP is assigned to all subcriteria
Slide 32: 2.3 Part of Future
2. Consistent rating of similar issues: variable "strategic planning" (weight in total score: 12%)
- Analysis of data with a lack of information
- Huge differences in the resulting RP (see example)
Slide 33: 2.4 Creditworthiness Rating
The experience of the credit experts influenced the modelling in the following aspects:
- Imitation of the existing model
- Selection of the model
- Selection of the analysed ratios
- Selection of criteria
- Fixing of cluster and classification limits for the assignment of risk points (data transformation)
- Fixing of the weights of the quantitative and qualitative ratios
- Determination of the score function
- Composition of the qualitative key figures regarding content
- Characterization of the qualitative key figures
Slide 34: Analyses Executed by the Regulators
- Reconstruction of the modelling and score computation on the basis of the sample used for the development
- Analogous model development and score computation using own estimates of the parameters of the logistic regression, maintaining the data transformation (risk points and the corresponding limits)
- Analysis of the impact on the allocation of borrowers to rating grades and on the estimation of the PD
Slide 35: Retraceability of the Calculations
- The estimation of the parameters could be traced back by means of the documentation and subsequent questioning (relative deviation under 0.1%)
- The parameter estimates for the quantitative ratios are sensitive to the treatment of missing values (relative deviation of more than 20% using the substitution method applied for validation)
- The parameter estimates for the qualitative ratios are sensitive to outliers, especially beyond the interval [0, 100] (relative deviation of more than 15% for significant parameters, more than 50% for less significant ones)
- Influence of individual extreme outliers on the coefficients used for the estimation of the PD: 1.5% on the intercept, 2.5% on the slope (3544 observations, relative deviation)
Slide 36: Comparison of the LRP Model with the Purely Statistical Model
Slide 37: Difference of Rating Grades
- Impact on all borrowers in-sample: the expert-driven model assigns worse rating grades
- Impact on defaulted borrowers: the expert-driven model assigns too optimistic rating grades
Slide 38: Comparison of Discriminatory Power
Variations in discriminatory power are mainly observed in the lower score regions, i.e. for bad borrowers.
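Discriminatory power is typically compared via the accuracy ratio (Gini coefficient). A minimal sketch, with invented default flags and scores standing in for the two competing models:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def gini(default_flag, risk_score):
    """Accuracy ratio (Gini) from default flags and risk scores,
    where a higher score means higher risk."""
    return 2 * roc_auc_score(default_flag, risk_score) - 1

# Invented flags and scores standing in for two competing models.
defaults = np.array([0, 0, 1, 0, 1, 0, 0, 1])
score_stat   = np.array([10, 20, 80, 15, 70, 30, 25, 90])
score_expert = np.array([10, 20, 40, 15, 70, 60, 25, 90])
print(gini(defaults, score_stat), gini(defaults, score_expert))
```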
Slide 39: Estimation of the PD - Impact on the Determined PD
- The estimation of the parameters with the logistic regression yields different results
- Different functional relation between score and PD: the expert-driven model is more conservative for low scores (good borrowers) and too progressive for high scores (bad borrowers)
- Different distribution of scores
- Different distribution of PDs: small variation on average, but a strong impact on single borrowers
Slide 40: Conclusions
- Due to the high importance of the qualitative ratios, quality assurance of the inputs must be treated with special importance.
- The influence of the credit experts' experience on the different steps of the modelling should be checked within validation.
- In-sample, the expert-based model shows weaknesses especially with regard to the allocation of worse borrowers.
- Analogous analyses should be executed out-of-sample and out-of-time.
Slide 41: Adjustment to the Rule-Conform Default Definition
- The adjusted default definition (Annex VII, Part IV, para 44-46) implies a higher default rate in the historical data; this information is not available there
- Determination of a scaling factor of 1.18 from few observations (59)
- Scaling of the average default rate with this factor
- In the long run a more differentiated consideration of the rule-conform default definition will be necessary: a different default definition (e.g. delay of payment) will probably cause other borrowers to default
- The new default definition is already established; with the growing data set generated by the new rating system, the rule-conform default definition can be considered appropriately
Slide 42: LRP-Specific PD - Adjustment to the Own Mean Default Rate
The LRP observes a lower default rate in its in-house sample than in the pool sample and therefore prefers to calibrate to its in-house mean default rate.
Pool: total default rate 243/20773 = 1.17%; complete records 99/8780 = 1.13%; mean individual PD 1.13%; mean PD after allocation to rating grades 1.19%
LRP: total default rate 26/2737 = 0.95%; complete records 26/2734 = 0.95%; mean individual PD 1.32%; mean PD after allocation to rating grades 1.40%
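One common way to calibrate pool-based PDs to an institution's own central tendency is a parallel shift on the log-odds scale until the average PD matches the in-house default rate. The deck does not state which method the LRP actually used, so the sketch below is only one possibility; the PDs are invented, while the 0.95 % target is the LRP rate quoted above.

```python
import numpy as np
from scipy.optimize import brentq

def recalibrate(pds, target_mean):
    """Shift PDs in parallel on the log-odds scale so that their average
    matches the target central tendency."""
    logit = np.log(pds / (1 - pds))
    gap = lambda c: np.mean(1 / (1 + np.exp(-(logit + c)))) - target_mean
    c = brentq(gap, -10, 10)
    return 1 / (1 + np.exp(-(logit + c)))

pool_pds = np.array([0.002, 0.005, 0.011, 0.025, 0.060])  # illustrative
print(recalibrate(pool_pds, target_mean=0.0095).mean())   # -> 0.0095
```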
Slide 43: Comparison of Rating Distributions
Slide 44: Relevant Paragraphs from the Directive
- Annex VII, Part 4, para 57(a): "If a credit institution uses data that is pooled across credit institutions it shall demonstrate that: (a) the rating systems and criteria of other credit institutions in the pool are similar with its own"
- Annex VII, Part 4, para 49: "A credit institution's own estimates of the risk parameters PD, LGD, conversion factor and EL shall incorporate all relevant data, information and methods..."
- Modifications of relevant information in the pool data before use are possible and, respectively, necessary, e.g. for different distributions
- To keep in mind: the issue may change due to a change in the default definition
Slide 45: 2.5 Support/Burden and Risk of Transfer Stop
Consideration of further risk drivers which do not depend only on the observed borrower:
- Support/burden: modification of the individual creditworthiness rating when the borrower is backed up (support) or debited (burden) by a company
- Transfer stop: modification of the creditworthiness rating (after application of support/burden) in case of existing transfer-stop risks
Slide 46: Support/Burden (S/B)
- The rating agent determines whether there is a linkage between the borrower and a company in terms of support or burden
- The possible degree of linkage (30%, 50% or 70%) is likewise determined by the rating agent
- The S/B provider's rating is incorporated with the determined percentage weight
- In case of a re-rating of an S/B provider, the ratings of all S/B clients have to be checked; information on S/B clients cannot be provided until the planned system upgrade in 2006
- Multi-layered relations can emerge; the timeliness of the incorporated ratings has to be monitored
Slide 47: Agenda
1. Initial situation: model development process (MEP)
2. Design of the rating system "Corporates"
   2.1 Pooling standards
   2.2 Part of past (quantitative part)
   2.3 Part of future (qualitative part)
   2.4 Creditworthiness rating
   2.5 Support/burden and transfer stop
3. Validation
Slide 48: 3. Validation - Basic Approach
- Analysis of the discriminatory power
- Binomial tests, which have wide acceptance regions because of the scarce data
- Determination of quality measures (Brier score, deviance, etc.; a Brier-score sketch follows below)
- Determination of the significance of the weights; new rating on a trial basis
- Change of the weighting in the score function only for large aberrations
- Recalibration of the functional relation between score and default probability
- Frequency: annual
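The Brier score named above is simple to compute; a minimal sketch with invented forecasts and outcomes (the deviance could be obtained analogously from the Bernoulli log-likelihood):

```python
import numpy as np

def brier_score(default_flag, pd_forecast):
    """Mean squared difference between forecast PD and realised default
    indicator; lower is better."""
    d = np.asarray(default_flag, dtype=float)
    p = np.asarray(pd_forecast, dtype=float)
    return float(np.mean((p - d) ** 2))

# Invented forecasts and outcomes for four borrowers.
print(brier_score([0, 0, 1, 0], [0.01, 0.02, 0.15, 0.05]))
```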
Slide 49: Validation in the Transition Phase
- Partial use of the development sample
- Slow build-up of a database from the data of the new rating system
- The composition of the qualitative key figure "strengths/weaknesses profile" has been changed; for compatibility with the historical inventory, this key figure will be transformed back to its former composition
- The discriminatory power of this figure as well as its influence on the composition of the score function cannot be statistically analysed until a sufficiently comprehensive database has been built
- Transitional solution with regard to the extended definition of losses (Annex VII, Part IV, para 44-46)