1988 CASUALTY LOSS RESERVE SEMINAR

3G: CONFIDENCE INTERVALS AROUND RESERVE ESTIMATES

Moderator: Roger M. Hayne
           Consulting Actuary
           Milliman & Robertson, Inc.

Panel:     Spencer M. Gluck
           Consulting Actuary
           Milliman & Robertson, Inc.

           Robin A. Harbage
           Loss Reserve Manager
           Progressive Corporation

           Rodney E. Kreps
           Assistant Actuary
           Fireman's Fund Insurance Companies

Recorder:  Donald K. Rainey
           Analyst
           Milliman & Robertson, Inc.


MR. HAYNE: Welcome to Session 3G. This is the session on Confidence Intervals Around Reserve Estimates. Before we start, a couple of items. First, all of the opinions that are going to be expressed today are the opinions of the speakers and not necessarily those of the American Academy, Casualty Actuarial Society or the companies by whom those speakers are employed.

All of the speakers would graciously entertain questions during their talks so we can try to leave some time at the end for questions and answers, but please, if you do have questions during the presentations, do not hesitate to ask them. If it looks like things are going to drag on too much, they will speed matters up themselves and ignore you.

When you do ask questions, for the sake of the recorder, please try to use the microphones. The speakers will also try to rephrase or repeat the question, hopefully not rephrase it too badly, for the benefit of the recording, but if you could use the microphone in the middle, that would be appreciated.

Our first speaker today is Robin Harbage, an AMP and corporate actuary at Progressive. He joined Progressive in 1987 and he is responsible for establishing corporate loss and loss adjustment expense reserves and overseeing reinsurance. He came to Progressive after seven years with Nationwide, where his previous responsibilities included auto and homeowners pricing, actuarial research, and personal and commercial lines loss reserves.

Robin received a Bachelor's in Mathematics from the College of Wooster and an MBA from Ohio State. He is an Associate of the Casualty Actuarial Society and a member of the American Academy of Actuaries. Robin?

MR. HARBAGE: There is a hand-out which I had printed and if you have it, it will help, because some of the slides that I have are a lot of numbers and hard to read from all the way in the back, I am sure. There is nothing complicated about what I am going to be talking about.

In fact, the reason I am here is probably not to present anything that is revelational or new. It is simply the fact that the company for whom I work is one of the few, if not the only, company that publicly discloses both its methodology and its purpose behind setting confidence intervals around its loss reserves.

This is what I am here basically to present, something that is actually in practice, so that you can take a look at this and say, "Looks interesting. Could not do it in my company. Maybe we should try and do this," or whatever.

I would be curious at the outset just to take a quick straw poll. You do not have to say anything because I do not want to put you on the record, but just indicate by a hand vote: how many people here do some procedure to estimate the confidence interval around their loss reserve estimate?

(Show of hands.)

Of those people that do something to quantify what their loss reserve estimate is, once you quantify it, how many people do anything with it?


(Show of hands.)

Very interesting. What you have in front of you is actually, if you have the hand-out, just an appendix out of a loss reserve report that Progressive puts out on an annual basis. The methodology has been in place for several years.

What we are doing is a very simple simulation model that allows us to say here is an objective measure, or as objective as it can be, of what a confidence interval around loss reserves is. Our stated objectives are (1) to keep the underlying loss reserves as close as possible to zero adequacy and (2) to be sure that our long-term overall reserves, including the supplemental, are adequate, so once we have determined that interval, we go ahead and put in place a supplemental reserve based on this measurement.

I might talk briefly about the purpose for measuring confidence intervals. In an earlier speech today, I heard somebody say that they feel one of the responsibilities of an actuary is not just to say, "This is my point estimate," but to say in addition to that, "This is my point estimate and this is how variable it is."

Why is that important? Well, obviously, for a company that has a surplus that is approximately equal to the loss reserves, if they have a nice point estimate, but the variability of that point estimate is something like plus or minus fifty percent, they can impair their surplus if they are at one end or the other of the range.

Another thing that is important about the loss reserve confidence interval is simply that it tells your management something about how volatile your estimate is so they do not get too much comfort from your point estimate. It quantifies the potential error and it would allow them to take some management action if they deem it necessary.

In our case, we do take some action to quantify this and to build something into our total loss reserves based on this estimate. If I could have the first slide?

(Slide)

This is actually taken from a page in the hand-out, if it is possible to read. The method we use is a very simple simulation model where we look at our past history and judge how volatile our actual reserves have been. We look at this in three pieces.

First of all, we take the number of incurred claims, so we can get an estimate of how volatile our frequency and our recording pattern of these claims are. The top is merely a standard accident period, number incurred, triangle. The bottom is the loss development factors from age to age.

After looking through the history, we pick both a high point and a low point, which is usually nothing more than the actual high or low development factor from history. Then we run a simulation which allows these factors to randomly vary over, say, five hundred simulations from the high to the low, assuming a uniform distribution within this interval and also assuming that our development in the future will stay within this interval.

The fact that we assumed a uniform distribution is probably a conservative assumption because there is a very concentrated mode within this distribution. Assuming that the development is somewhere within the high and low value is not a conservative assumption because it could fall outside of this range.
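As a rough illustration of the resampling idea just described, the sketch below draws each remaining age-to-age factor uniformly between its historical low and high and develops a current claim count to ultimate. This is a minimal sketch in Python, not Progressive's actual model; the function name is invented and, apart from the first factor range (which is the high/low pair shown on Exhibit 19, Sheet 1), the factor ranges are purely illustrative.

    import random

    def simulate_ultimate(counts_to_date, factor_ranges, n_sims=500, seed=1):
        """Develop a current claim count to ultimate by repeatedly drawing each
        remaining age-to-age factor uniformly between its historical low and high.

        counts_to_date : claim count reported as of the current maturity
        factor_ranges  : list of (low, high) pairs, one per remaining development step
        """
        rng = random.Random(seed)
        ultimates = []
        for _ in range(n_sims):
            developed = counts_to_date
            for low, high in factor_ranges:
                developed *= rng.uniform(low, high)  # uniform between the historical extremes
            ultimates.append(developed)
        return ultimates

    # Illustrative use: accident year 1987 counts at 4 quarters of maturity,
    # with hypothetical ranges for the remaining age-to-age factors.
    sims = simulate_ultimate(159_799, [(1.0237, 1.0528), (0.9952, 0.9991), (0.9983, 1.0003)])
    print(min(sims), sum(sims) / len(sims), max(sims))

Repeating this for average paid losses and for the additive ALAE ratio changes, as described below, gives one full pass of the simulation.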


We hope in our model, and this is our assumption, that the conservative assumption outweighs the nonconservative one.

The second slide -- yes?

QUESTION: (Inaudible)

MR. HARBAGE: To measure what the distribution might be from a historical distribution -- yes. Any suggestions you have that can improve this method, if you have ideas, I am willing to listen to them, because it can only help me if I can improve this.

I will say one comment about the method. It is very simple for a specific reason. The fact that it is simple means that the people to whom I have to present it and who have to accept it can understand it. It can be something then that they can feel comfortable with.

We also try and limit the number of assumptions as much as possible, so I will say it is probably overly simple, but it is helpful in that respect because it is easy to present and easily understood by the majority of people that have to understand it, which is the senior management of the company for whom I work.

The understanding of people in the industry is helpful as well, although I am sure we could complicate the model for them quite a bit.

(Slide)

The second step is the average payments over time. Again, we are developing this to ultimate. All we do is take the historical age to age development, select the high and the low value and allow the model to vary over the interval from the high to the low on a uniform distribution.

(Slide)

The third slide is of the final piece. We looked at our ALAE over time as well as our losses, so we are building a model for all of the loss plus loss adjustment expense reserves. We split, for the sake of our company, our ALAE into two pieces. The allocated loss adjustment expenses split into legal fees and adjuster fees.

We feel that there are vast differences in the payment pattern for these two types of loss adjustment expenses, so we track them separately. They are actually run through the model separately because they have different patterns.

Here, what we have done is taken our ratio of legal fees to our paid losses, so instead of getting a multiplicative loss development, we are getting an additive loss development. It is the change in the ratios that is shown down below. Again, we select a high and a low, and we allow it to vary between the high and low value.

(Slide)


The next slide, which is one run of the simulation, shows, if you allowed the random variation in these three sets of parameters, what we would get by accident period. There is an estimate for ultimate number incurred, our ultimate average paid and our ultimate legal and adjuster allocated loss adjustment expenses.

(Slide)

The next slide is the final set of numbers from which we can derive our answer. I should note that we used to do this for the corporation as a whole in one lump sum. Last year, we made one enhancement and that was to say we think we have a little more heterogeneous population than we had in the past.

In fact, our company has written personal lines for a long period of time and has begun to write a larger volume of commercial lines, and we felt it wise to break these out, show the loss development factors as different sets of simulations, and then combine the models.

What we have here, on line one, is what we feel would be the average reserves as of 12/31/87 for the different reserve dates, the standard deviation, and the coefficient of variation. This leads down to line four, where we can determine, based on this particular distribution, using this simulation of five hundred random events, what the ninety-nine percent confidence interval would be on a one-tail test, because we only care about the confidence of being adequate. We do not really care about the other side, which is the confidence of running off inadequate.

What would be the percentage of our reserves that we would have to set up in order to assure this ninety-nine percent confidence over time that our reserves will not be inadequate? We have, as line five displays, the probability of adequacy, which, based on a standard normal distribution, is the ninety-nine percent confidence here.

That number then is .057, or a 5.7 percent supplement that we say we should have on top of our point estimate. A side benefit of this, then, is a feeling, because it comes straight from the model based on the variability by accident year at the current run-off date, of what percentage of the supplemental ought to be allocated to the different years.

Of course, this is required because this reserve, once we have set it up, has to go into Schedule P. It allows us also to say what the inherent variability of the different accident years is, and you can see it goes from a high of fifty-nine down to a low in this particular exhibit of 8.8 back in '82. We have a smoothed percentage which is actually based on the three different models that we have run, combined together. This exhibit is just on our personal lines business.

(Slide)

The last exhibit is merely the way that we have combined together the three models. As I said, we used to run this as a combined model for the whole company. Once we get the three models with their separate supplemental required, we still need to book a supplemental for the whole company and we need to somehow combine these together.

One method, of course, would be very simply to say, "Let's take 5.7 percent of the reserve for our personal lines and an additional 17.8 percent for one of our commercial lines and another 18.6 percent for the rest," which are the percentages on the far right hand side, "and just add them together."
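A minimal sketch of the arithmetic just described, assuming the simulated total reserves are available as a list: the supplement, as a fraction of the mean reserve, is the one-tail 99% normal point times the coefficient of variation. The function name and the toy input are illustrative; 2.326 is the standard normal value the exhibit rounds to 2.33.

    import statistics

    def supplemental_factor(simulated_reserves, z=2.326):
        """99% one-tail supplement as a fraction of the mean reserve: z x CV."""
        mean = statistics.mean(simulated_reserves)
        cv = statistics.pstdev(simulated_reserves) / mean
        return z * cv

    # With the exhibit's figures (mean ~$314.3 million, standard deviation ~$7.7 million)
    # this reproduces roughly the 5.7% supplement quoted: 2.33 x 7,682 / 314,261 ~ 0.057.
    print(supplemental_factor([306_000, 314_000, 322_000]))  # toy input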


ROBIN A. HARBAGE

APPENDIX FIVE -- SUPPLEMENTAL RESERVE

BACKGROUND

In Parts 2 and 4 of this report, we cite the use of a supplemental reserve as a provision for adverse development. This reserve implements the policy statement on page 32 in our 1987 Annual Report:

"We establish supplemental reserves ($34.7 million at 12/31/87 and $22.9 million at 12/31/86) to achieve 99% certainty that total reserves are adequate."

This appendix sets forth a method to quantify the adequacy probability which the supplemental reserve provides, and to produce an allocation of the reserve by accident year for statutory reporting purposes.

We use Monte Carlo methods to simulate future loss and allocated loss adjustment expense development to produce a distribution of the 12/31/87 reserve level. From the mean and standard deviation of this distribution, we calculate the adequacy probability of the supplemental reserve.

In the past, we have run this model for all business types combined. With the growth of businesses with diverse loss development patterns, we decided that one model was no longer sufficient. In particular, we ran separate models for two major components of commercial insurance lines so as not to distort or mask the development of diverse segments.

Exhibit 19, Sheet 1

The top matrix is the cumulative number of incurred claims by accident year and quarter of maturity. The figures in the column labeled "28" for accident years 1978 and later are ultimate claim counts. Their calculation is explained below.

The data above and to the left of the line in the bottom matrix are the changes by accident year from one evaluation date to the next. This is the same calculation that is on page 45 (Appendix Two, Exhibit 5).

The rows labeled "high" and "low" are the historically largest and smallest changes. The data below and to the right of the line in the bottom matrix are forecasted development factors which are randomly picked from the range determined by the "high" and "low" factors. The critical assumptions are that the development factors are uniformly distributed within the range, and that they cannot fall outside the range. We feel the uniform distribution assumption is conservative because it implies more dispersion than the actual, unknown distribution, which most likely has a mode, and therefore will result in a smaller adequacy probability for a given supplemental reserve level. The exclusion of values outside the range naturally has the opposite effect: less dispersion and therefore higher probability. Our judgment is that the effect of the distribution assumption is more powerful than that of the excluded values. The net effect is a conservative measure of adequacy probability.


The ultimate incurred claim counts for accident years 1982 and later in the column labeled "28 (ULT)" in the top matrix are simply the product of the claim counts to date and the randomly selected development factors. For example, the 1987 accident year ultimate claim count: 165,580 = 159,799 x 1.0362. The multiplication will not be exact because the computer carries the factors with many more decimal places.

Exhibit 19, Sheet 2

This exhibit parallels Sheet 1 using average paid losses instead of incurred claim counts.

Exhibit 19, Sheets 3 and 4

These exhibits are conceptually identical to the prior two except that they deal with allocated loss adjustment expenses (ALAE).

We split ALAE into two pieces: legal fees and outside adjuster fees. These exhibits simulate independently the future development for these two pieces. The purpose is to produce ultimate ratios of ALAE to losses by accident year. On Sheets 3 and 4, the historical ratios are shown in the top matrices. The ultimate ratios appear under the column labeled "40 (ULT)".

The approach is very similar to that in Sheets 1 and 2 except the development factors are additive rather than multiplicative. For example, accident year 1987 Legal Fees/Paid Losses ultimate ratio: .0498 = .0031 + .04677. Again, the calculations may not be exact because the number of decimal places printed is much less than used by the computer.

Exhibit 19, Sheet 5

The top portion of this exhibit pulls together various pieces from Sheets 1, 2 and 3 to produce a loss and ALAE reserve.

Column (1):        Ultimate incurred claim count from Sheet 1.
Column (2):        Ultimate average paid loss from Sheet 2.
Column (3):        Ultimate losses. (1) x (2).
Columns (4) & (5): Ultimate ratios from Sheets 3 & 4.
Column (6):        Ultimate ALAE ratio. (4) + (5).
Column (7):        Ultimate losses and ALAE. (3) + (3) x (6).
Columns (8) & (9): Paid losses and ALAE as of 12/31/87.
Column (10):       Loss and ALAE reserve as of 12/31/87. (7) - (8) - (9).
Column (11):       Cumulative sum of column (10). Thus, the $319,408,000 on the accident year 1987 line is the reserve for all accident years as of 12/31/87. The $133,382,000 above that is the reserve for all accident years prior to 1987 as of 12/31/87.

It is essential to remember that the figures [except for columns (8) and (9)] printed on the top portion of Sheet 5 are the result of only one simulation or, in other words, one pass through all the random selections and subsequent calculations on Sheets 1, 2, 3 and 4.
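The column arithmetic for a single accident year in one simulation pass can be sketched as below. The inputs shown are approximately the accident year 1987 values printed on Sheet 5 (claim counts in units, dollar amounts in thousands), so the result differs slightly from the exhibit because of rounding; the function name is illustrative.

    def reserve_one_pass(ult_counts, ult_avg_paid, legal_ratio, adjuster_ratio,
                         paid_losses, paid_alae):
        """One pass of Sheet 5's column arithmetic for a single accident year:
        (3) = (1) x (2);  (6) = (4) + (5);  (7) = (3) + (3) x (6);
        (10) = (7) - (8) - (9)."""
        ult_losses = ult_counts * ult_avg_paid             # column (3)
        alae_ratio = legal_ratio + adjuster_ratio          # column (6)
        ult_losses_alae = ult_losses * (1.0 + alae_ratio)  # column (7)
        return ult_losses_alae - paid_losses - paid_alae   # column (10)

    # Approximate accident year 1987 inputs; the result is close to the
    # $186 million reserve shown for that year on Sheet 5.
    print(reserve_one_pass(165_580, 2.233, 0.0498, 0.0071, 202_351, 2_404))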


The simulation is performed 500 times. The result, conceptually, is 500 column (11)'s on Sheet 5. This leads to the bottom part of Sheet 5.

Line (1):        Average of the 500 simulations of column (11).
Line (2):        Standard deviation of the 500 simulations of column (11).
Line (3):        Coefficient of variation. (2)/(1).
Lines (4) & (5): The calculation of these lines represents a departure from past practices. Previously, we used the model to determine if the current supplemental reserve level resulted in a sufficiently high probability of adequacy to meet our objective. This year we set our target probability level at 99% on line (5) and calculated the required number of standard deviations from the average 12/31/87 reserve to meet this target.
Line (6):        This is the supplemental reserve factor as of 12/31/87 by reserve date for this reserve segment. These factors will result in the same adequacy probability for the components of the 12/31/87 reserves.
Line (7):        This line displays the 12/31/87 supplemental reserve and its components by reserve date.
Line (8):        By starting with 12/31/87 and subtracting the prior reserve date's supplemental reserve as of 12/31/87, the accident year components are derived.
Line (9):        The distribution of line (8) by accident year. It is line (8) divided by the 12/31/87 supplemental reserve on line (7).
Line (10):       Line (9) smoothed. This distribution determines the allocation of the supplemental reserve to accident year in Schedules O and P of the 1987 Annual Statement and the allocation to runoff date in Exhibit 21 on page 79.

After completing this model for the grand total reserves excluding CV Local and Transportation, the model was run for each of these business types independently.

Exhibit 20

Once probability models for the three business types are developed, it is necessary to blend the distributions together. It is inappropriate to simply sum the three supplemental reserve amounts since this ignores that each is a portion of the same corporation. Upward development in one segment may be offset by downward development in another. To blend the individual models and achieve the overall probability, we measure the dependence of one model on the other two and merge the distributions based on the correlation. Exhibit 20 displays the correlation of the three segments developed through regressions of pairs of segments. The combined distribution is displayed at the bottom of this exhibit.
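A minimal sketch of the covariance blending shown in Exhibit 20, assuming the three segment standard deviations and pairwise correlations printed there; the function name is invented. The total variance is the sum of covariances (each segment's own variance plus, for each pair, twice the correlation times the two standard deviations), so the combined standard deviation reproduces the exhibit's figure.

    import math
    from itertools import combinations

    def blended_sd(std_devs, correlations):
        """Combined standard deviation as the square root of the sum of covariances."""
        total_var = sum(sd ** 2 for sd in std_devs.values())
        for a, b in combinations(std_devs, 2):
            rho = correlations[frozenset((a, b))]
            total_var += 2.0 * rho * std_devs[a] * std_devs[b]
        return math.sqrt(total_var)

    sds = {"Other": 7_682, "CV Local": 3_051, "Transport.": 8_009}
    rhos = {frozenset(("Other", "CV Local")): 0.9,
            frozenset(("CV Local", "Transport.")): 0.5,
            frozenset(("Other", "Transport.")): 0.3}
    print(blended_sd(sds, rhos))  # ~15,362, matching Exhibit 20's combined figure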


Exhibit 19, Sheet 1
THE PROGRESSIVE CORPORATION
GRAND TOTALS EXCLUDING CV LOCAL & TRANSPORTATION

[Table not legibly reproduced: the top matrix shows cumulative incurred claim counts by accident year (1978-1987) at quarterly maturities 4 through 24, with the simulated ultimate counts in the "28 (Ult)" column (for example, accident year 1987: 159,799 reported at 4 quarters, developing to an ultimate of 165,580). The bottom matrix shows the historical age-to-age claim count development factors, the selected "high" and "low" factors for each age (for example, 1.0528 and 1.0237 for the 8/4 factor), the randomly selected future factors, and the cumulative development factors.]


Exhibit 19, Sheet 2
THE PROGRESSIVE CORPORATION
GRAND TOTALS EXCLUDING CV LOCAL & TRANSPORTATION

[Table not legibly reproduced: the top matrix shows average paid losses by accident year (1978-1987) at maturities 12 through 72, the simulated ultimate average paid in the "72 (Ult)" column, and the average incurred; the bottom matrix shows the historical age-to-age factors on the average paid amounts, the selected "high" and "low" factors, the randomly selected future factors, and the cumulative development factors.]


Exhibit 19, Sheet 3
THE PROGRESSIVE CORPORATION
GRAND TOTALS EXCLUDING CV LOCAL & TRANSPORTATION

[Table not legibly reproduced: the top matrix shows the ratio of legal fees to paid losses by accident year (1978-1987) at maturities 4 through 36, with the simulated ultimate ratio in the "40 (Ult)" column; the bottom matrix shows the historical additive changes in the ratio from age to age, the selected "high" and "low" changes, and the randomly selected future changes.]


Exhibit 19, Sheet 4
THE PROGRESSIVE CORPORATION
GRAND TOTALS EXCLUDING CV LOCAL & TRANSPORTATION

[Table not legibly reproduced: the top matrix shows the ratio of adjuster fees to paid losses by accident year (1979-1987) at maturities 4 through 36, with the simulated ultimate ratio in the "40 (Ult)" column; the bottom matrix shows the historical additive changes in the ratio from age to age, the selected "high" and "low" changes, and the randomly selected future changes.]


Exhibit 19, Sheet 5
THE PROGRESSIVE CORPORATION
GRAND TOTALS EXCLUDING CV LOCAL & TRANSPORTATION
SIMULATION ($000)

[Top portion not legibly reproduced: for accident years 1980 through 1987 it shows one simulation pass through columns (1) to (11) -- ultimate claim counts, ultimate average paid, ultimate losses, ultimate legal and adjuster ALAE ratios, ultimate losses plus ALAE, paid losses and ALAE at 12/31/87, the resulting loss and ALAE reserve, and its cumulative sum, which reaches $133,382 for accident years prior to 1987 and $319,408 for all accident years.]

[Bottom portion: lines (1) through (10) by reserve date, 12/31/87 back to 12/31/79. For the 12/31/87 column: average reserve $314,261; standard deviation $7,682; coefficient of variation .0244; required number of standard deviations 2.33; probability of adequacy 99.0%; supplemental reserve factor .057; supplemental reserve $17,913. The remaining lines allocate the supplement by accident year, from 59.6% (65.0% smoothed) for the most recent year down to fractions of a percent for the oldest years.]


Exhibit 20
SUPPLEMENTAL RESERVE MODEL SUMMARY

Correlation:
    Other / CV Local          90%
    CV Local / Transport.     50%
    Transport. / Other        30%

                          Distrib.   Standard                    Coefficient     Supplemental %
                Mean      of Means   Deviation     Variance      of Variation    at 99% Confidence*
Other          314,261      69.1%      7,682      59,013,124        0.0244             5.7%
CV Local        40,018       8.8%      3,051       9,308,601        0.0762            17.8%
Transport.     100,413      22.1%      8,009      64,144,081        0.0798            18.6%
Total          454,692     100.0%     15,362     236,004,355**      0.0338             7.9%

*  99% confidence interval for one-tail test.
** 236,004,355 = sum of covariances, where each covariance is equal to the product of the coefficient of correlation and the individual standard deviations (e.g., covariance of Other and CV Local = 0.9 x 7,682 x 3,051).


This would ignore the fact that these distributions are not one hundred percent correlated, that, in fact, there are differences between them, and when one line can be having high development, another one can be having low development.

So, we measure what the correlation is between these. We simply do a regression of one set of development factors onto the other for our different lines of business, determine what the correlation is between them, and then combine the distributions together based on the percentage of correlation.

Thank you.

QUESTION: I have a quick question.

MR. HARBAGE: Yes.

QUESTION: When you do this analysis from the simulation, do you divide the auto liability or do you come back down and just do the model on the overall book?

MR. HARBAGE: The model is based on combined auto liability and physical damage, so we do not have them broken apart. That is another set of segmentation we could do to get a little more homogeneity in the underlying distributions.

As I say, it used to be in bulk for the whole company. We have taken the first step, which would be to break it down according to the lines of business. Yes, we could break it down one more step, which would be between the casualty and the property lines within that. Thank you.

MR. HAYNE: Continuing along with the practical aspects of this, we have another actuary here who is a recent Fellow of the Casualty Actuarial Society, Rodney Kreps, who is going to present what he and his company have been doing in this area.

Rodney's background is, I would say, varied, at least. He got his bachelor's degree at Stanford and Ph.D. at Princeton, both in theoretical physics. He worked as an academic physicist for twelve years and he received tenure as associate professor of physics.

He then resigned to go back to the land to, I guess, discover his roots and the roots of his forefathers, and subsequently became, believe it or not, a remodelling electrician and returned to California. He was much in advance of the article from this spring. He really saw the light and figured the actuarial profession was an interesting place to be, joined Fireman's Fund as an actuarial trainee in '81 and received his Fellowship in the Casualty Actuarial Society in 1988.

Rodney is going to speak on some of the approaches that they have been taking in trying to monitor or estimate confidence intervals in loss reserves.

DR. KREPS: I have no overheads. There are really only two things that I want to say. One of them is very simple and you do not need an overhead for it. The other one is completely full of numbers and it would be invisible, anyway.


So, I am going to start off with the simple one. The fundamental point of a confidence interval is to give you sort of "sleep at night" comfort. If your situation is such that you can actually reserve up to ninety-nine percent confidence levels, that is wonderful. You can sleep very soundly.

Perhaps most of us are not in that fortunate position and we have to reserve somewhere closer to the mean value. Then the question becomes: well, how shaky are we?

What I want to address is a slightly different way of looking at this. We are all used to talking in terms of ninety-nine percent, ninety-five percent confidence intervals. That is good, but that is talking about what is going to happen in the rare event.

When you spread your arms wide and say, "Once every twenty years, it will exceed this," everybody nods. Then you put a point right here and say, "Right, and this is where it is."

This microphone is not working. Is it possible to hear me in the back? Good. These after-lunch sessions are really difficult. The tendency to sleep is very strong. I used to have academic classes that I lectured to, two hundred people at a time, and they would sort of fade off, especially after lunch.

What I want to touch on can be illustrated by the following. If I come to you and say, "Okay. Your reserves should be one hundred and thirty to one hundred and sixty million dollars and your ninety-five percent confidence limit is one hundred and seventy million," I am saying one kind of thing.

If I say to you, "Your reserves should be one hundred and forty-three to one hundred and forty-seven million dollars and your ninety-five percent confidence interval is one hundred and seventy million," I am saying something different.

In both cases, I am telling you that your mean value is one hundred and forty-five and that your ninety-five percent confidence interval is one hundred and seventy, but I have described two quite different distributions, one of which is very spread out and one of which is very concentrated.

It is that notion that I want to try and quantify here, give some examples for, and show you how we use it. Intuitively, it corresponds to having your arms spread out and saying, "Once in twenty years, we will exceed that." Then you say, "Well, what about this year?"

How close can you bring your hands together until so much probability has squeezed between your fingers that you do not believe it anymore? What is the smallest range that you can state? How soft, or fuzzy, is this number? How small can you make this and still have it be believed? How small can you make it and still believe it yourself? That is perhaps a more relevant question.

I think the main reason for doing this actually comes from my physics background, because any measurement that you make has an associated intrinsic uncertainty. If nothing else, right from the quantum level, it is going to be there. Anything that you look at is fuzzy. There just is not anything like a precise number. It has always got fuzz.

Every time somebody says to me, "Well, just give me the right number," you know, it makes what hair I have left stand up and I want to say, "No, it is not a number, it is a fuzz. It is a range. It is more peaked in the center, but really, it is a range."


What I want to propose to you is this notion of minimum range, and you may now turn to Page 1 of the hand-outs, if you would do so, please.

What I am suggesting is that it is the smallest reasonable measure of the spread of a distribution. Now, it is up to you to interpret what "reasonable" means. My suggestion for it is that it is the middle third of a distribution, as the hand-out says, because when you are looking at the middle third, it means your result is twice as likely to be outside the range as in it.

You just cannot squeeze your hands any closer and still believe it. It is as likely to be above it as it is in it and it is as likely to be below it as it is in it. So, for me, this constitutes the smallest range that we can talk about.

Now, another name for this, which we just heard in the main session, is the softness of the estimate. All the estimates are soft and it is just a question of how soft. I also like the term the "width" of the estimate.

A perhaps slightly surprising fact is that typically, when you are looking at the middle third of a probability distribution, and especially a very skewed one, the mean is going to lie near the top end of that third. If it is skewed off to the high side, why don't we see lots of stuff up there? Well, you do not see lots of it. You see lots of little ones and a few great big ones.

Just a word of warning. One of the tendencies I have seen in actuaries is to say, "Oh, that is an outlier. We will throw that one out of the data." But it is those outliers that give you the skewness of the distribution.

Going to page 2, what we have here is a graph of the minimum range, that middle third, as a percent either of the mean or the median of a distribution.

Now, for a normal distribution, which is those solid black squares, the middle third is .43 of the standard deviation. It is just a straight line. But for the lognormal distribution, represented by the two curves below it, as you increase the standard deviation you get more skew. The minimum range, first of all, is different from the median and the mean and, second of all, drops below the fraction of the mean that the normal curve has.

But it is asymptotic to the normal curve as it goes down to zero. As your distribution comes together, as it gets more concentrated, the lognormal looks like the normal.

By way of contrast, the gamma distribution approaches zero like a square root. So, its minimum range, even for small standard deviations, such as five percent, is up around eighteen percent. That is a fairly big minimum uncertainty on your loss reserve estimates. It really depends on what the shape of that distribution is.

(Slide)

On page 3, we have the minimum ranges as a percent of the mean or median for the NCCI loss groups five through twenty-one.

Just to come back to this point on outliers, the reason I keep showing the median is because of this tendency to throw out the outliers. There is a tendency to do medial reserving, to use medial averages rather than mean averages. If we could pay off on the median, that would be nice, but typically, we pay on the mean.
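A minimal sketch of the middle-third calculation, assuming scipy is available; the distributions and parameters below are illustrative and are not the loss-group fits in the hand-out. It shows the interval between the 1/3 and 2/3 quantiles, and that for a skewed distribution the mean sits near the top of that interval.

    from scipy import stats

    def minimum_range(dist):
        """The middle third of a distribution: the interval between the 1/3 and
        2/3 quantiles.  The result is as likely above it as in it, as likely
        below it as in it, and twice as likely outside it as in it."""
        lo, hi = dist.ppf(1.0 / 3.0), dist.ppf(2.0 / 3.0)
        return lo, hi, hi - lo

    # Normal: the middle third runs roughly 0.43 standard deviations either side
    # of the mean, so its width scales linearly with the standard deviation.
    lo, hi, width = minimum_range(stats.norm(loc=100.0, scale=5.0))
    print(lo, hi, width)

    # Skewed lognormal: the mean sits near the top of its middle third.
    ln = stats.lognorm(s=0.5, scale=100.0)
    lo, hi, _ = minimum_range(ln)
    print(lo, hi, ln.mean())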


Page 4 is really a companion to this one. It gives you the dollar amounts associated with the loss groups as of 1/1/88. You can see that loss group five, which is some two hundred million dollars, has a five percent minimum range. When you get down to loss group twenty, which is two million dollars, you are looking at a minimum range of around thirty-five to forty percent, which is quite large.

Having given my plug for looking at minimum range, if we go to page 5, we will see a typical page out of our reserve reports. I have sanitized the data a bit. It is no accident that the incurred dollars are exactly a hundred million dollars and I have sort of mixed some of the lines together, but the overall behavior is typical.

I direct your attention over to the column labelled "width," because this is exactly the notion of minimum range applied to accident year, mean incurred data. This is done by a standard chain ladder approximation using age-to-age factors.

For the incurred estimates of reserves, where it says "incurred current," you have the widths to go with them and, as you would hope, you see that the widths decrease as you go back to older years, and the very large ones are in the most current year. Accident year '88 as of two quarters has, of course, a huge width.

At the bottom of that column, you see the totals for this particular line. The nine hundred and twenty-two for the width assumes that each of the accident years develops independently. Now, this is a bad assumption, since you are much more likely to have correlations between accident years than not. On the other hand, this also gives you an idea, even with the most optimistic possible view, of how big your width is. It is likely larger than this.

Page 6 is the same exhibit with slightly different data.

There is a blip in the middle of the width for accident year 1980. You see it decreases smoothly and suddenly blips out at a hundred and sixty. This is a sign for us to go back and look at that data and see what is making that blip. You can use these estimates of widths to give you clues where you may have data problems.

If we go to the last page, you will see a line that is not nearly so well behaved, where the widths do not decrease terribly fast and there are lots of blips. Going down to the bottom of the exhibit, when you look at the standard deviations, you can see that the standard deviation of the methods, which is related to the width, is much larger than on the previous exhibits.

What is also typical is that this width is just the process variance, assuming that your model is perfect. You say, "Yes, I do believe the incurred model," and then this tells you something about the process variance. But you might say, "Well, maybe I should use a paid model, maybe I should use something else, or maybe I do not have the parameters right" -- there are four primary methods that we use to do the estimates. You see two of them in front of you there.

Typically, the deviation between the methods, or parameter risk, is much larger than the process risk. It is the combination of those that goes over and gives the ninety-five percent confidence interval. What we do with that confidence interval is to look at it and try not to be too worried by it.
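The bottom tables on pages 5 through 7 appear to combine the two kinds of deviation in quadrature before applying the 95% normal multiplier; a minimal sketch, using the first row of the page 5 table as input (the function name is invented):

    import math

    def total_uncertainty(sd_within_methods, sd_between_methods, mean_of_methods, z=1.96):
        """Combine process-type and parameter-type deviations in quadrature and
        express a 95% interval in dollars and as a percent of the mean."""
        sd_total = math.hypot(sd_within_methods, sd_between_methods)
        ci_dollars = z * sd_total
        return sd_total, ci_dollars, ci_dollars / mean_of_methods

    # First row of the page 5 exhibit (thousands): 5,891 and 10,053 combine to a
    # total of about 11,652, a 95% interval of roughly 22,838, i.e. 20.6% of 110,909.
    print(total_uncertainty(5_891, 10_053, 110_909))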


In recapitulation, it is implicit in all the estimates that we make that we are not talking about a single number. We acknowledge that by the fact that we do not quote our reserve estimates to the penny. We can certainly make our estimates to the penny, but it does not make any sense to do so. We implicitly honor the fact that it is a range by rounding to the nearest thousand or whatever is convenient. I am simply suggesting that we would do well to also make an explicit acknowledgment of the intrinsic fuzziness of the estimates, by using a well defined parameter.

Are there any questions?

MR. ROBERTS: I am Lew Roberts from the New Jersey Insurance Department. I would like to make the observation that there is intrinsic fuzziness in our estimates of the fuzziness, because if we put a range around our ultimate value, it is going to be a long time before anybody knows whether our point estimate was right; as for whether our estimate of the fuzziness was right, nobody will ever really know at all.

DR. KREPS: What you are saying is not only do we not know the estimate very well, but we do not even know the extent of the fuzziness of the estimate and we probably never will know until those lines come in, which can be many, many years sometimes.

It is also true that as the data comes in and our models change, our estimates of the fuzziness will change. But, again, our experience is that those estimates are relatively stable. The distance between a four percent fuzziness and a six percent fuzziness is a fifty percent change, but it does not change how you feel about it a lot. The difference between a four percent fuzziness and a forty percent fuzziness, that you feel.

MR. ROBERTS: While we cannot be sure or have a very good estimate of the accuracy of the fuzziness, this is what really gives it value: if we find a wide fuzziness, we would feel a lot less confident than we do now.

DR. KREPS: You are saying if there is a wide fuzziness, then we have much less confidence in what we have done. There is less confidence in the hardness of the number. The method may be great. We believe that this is the right way to do it and it is just that the nature of the line is such that it is very fuzzy.

MR. ROBERTS: Another thing is that we may find it very hard to get a precise estimate of the expected value, or our estimate of the expected value is itself subject to some error, but no matter how accurate our estimate of the expected value, the actual results will have another variance.

DR. KREPS: The actual results will have another variance entirely. This is quite true, of course. There is an argument against estimating variance. This is the argument that goes, "Oh, you can't get it any better than by ten million? Well, let's take the low end of things, then, shall we?" This is where you have to stand your ground and say, "Hey, the range is ten million, my estimate is this. If you want something different, it is your responsibility." Any further questions?

(No response.)


MINIMUM RANGE

THE SMALLEST REASONABLE MEASURE OF THE SPREAD OF A DISTRIBUTION

SUGGESTION: THE MIDDLE THIRD, BECAUSE THE RESULT IS
    AS LIKELY TO BE ABOVE THE INTERVAL AS IN IT
    AS LIKELY TO BE BELOW THE INTERVAL AS IN IT
    TWICE AS LIKELY TO BE OUTSIDE THE INTERVAL AS IN IT

ALIASES: WIDTH OR SOFTNESS OF ESTIMATE

NOTE THAT THE MEAN ESTIMATE WILL LIE NEAR THE TOP OF THE RANGE FOR A SKEW DISTRIBUTION

PAGE 1


[Page 2: graph titled "MINIMUM RANGES AS PERCENT," showing the minimum range (middle third) as a percent of the mean or median, plotted against the standard deviation, for the normal, lognormal, and gamma distributions.]


[Page 3: graph titled "MINIMUM RANGES AS PERCENT," showing the minimum range as a percent of the mean or median for NCCI loss groups five through twenty-one.]


[Page 4: companion graph showing the dollar amounts and minimum ranges by loss group as of 1/1/88.]


TESTING LINE A LONG-TAILED SANITIZED CREATION
DATA BEFORE ANY TREATMENT OF KNOWN SPECIAL CASES
ACCIDENT PERIOD SELECTED RESERVES (000) EVALUATED @ 2Q88
STATISTICAL BASE IS 5 PERIODS OF LENGTH 4 QUARTERS
SELECTEDS ARE MEAN INCURRED AND PAID-INCURRED
"PRIOR" IS AT PRIOR TEST DATE = 4Q87

[Accident-period detail not legibly reproduced: for accident periods 1969-4 through 1988-4 the exhibit lists closed claim counts, paid and incurred emergence, and, for both the incurred and the paid-incurred projections, the ultimate, the current and prior reserves, and the width of each estimate. The widths decrease steadily for older accident years; the incurred-basis total for the line is 100,000 with a total width of 922. The total for width does not include the most recent period, and the total for reserve includes case outstanding for missing early periods.]

CONFIDENCE INTERVAL CALCULATION ON RESERVES BY TEST DATE:
METHODS USED ARE MEAN AND LAST-TWO, INCURRED AND PAID-INCURRED
DOLLARS IN THOUSANDS

            ---STANDARD DEVIATIONS---     MEAN      95% CONFIDENCE
TEST         OF      BETWEEN               OF          INTERVAL
DATE       METHODS   METHODS    TOTAL    METHODS    DOLLARS   PERCENT
1988-4      5,891     10,053   11,652    110,909     22,838     20.6%
1987-4      1,435      4,058    4,304     90,482      8,436      9.3%
1986-4        779      2,261    2,391     96,650      4,688      4.9%
1985-4        551      1,180    1,303     98,041      2,553      2.6%
1984-4        412        923    1,011     93,703      1,982      2.1%
1983-4        373      1,012    1,078     85,919      2,114      2.5%

PAGE 5


TESTING LINE A LONG-TAILED SANITIZED CREATION
DATA BEFORE ANY TREATMENT OF KNOWN SPECIAL CASES
ACCIDENT PERIOD SELECTED RESERVES (000) EVALUATED @ 2Q88
STATISTICAL BASE IS 5 PERIODS OF LENGTH 4 QUARTERS
SELECTEDS ARE MEAN INCURRED AND PAID-INCURRED
"PRIOR" IS AT PRIOR TEST DATE = 4Q87

[Accident-period detail not legibly reproduced: same layout as page 5 for accident periods 1969-4 through 1988-4. The incurred-basis total for the line is 100,000 with a total width of 1,205; note the blip of 160 in the incurred width for accident year 1980. The total for width does not include the most recent period, and the total for reserve includes case outstanding for missing early periods.]

CONFIDENCE INTERVAL CALCULATION ON RESERVES BY TEST DATE:
METHODS USED ARE MEAN AND LAST-TWO, INCURRED AND PAID-INCURRED
DOLLARS IN THOUSANDS

            ---STANDARD DEVIATIONS---     MEAN      95% CONFIDENCE
TEST         OF      BETWEEN               OF          INTERVAL
DATE       METHODS   METHODS    TOTAL    METHODS    DOLLARS   PERCENT
1988-4      6,460      6,194    8,950     99,285     17,542     17.7%
1987-4      2,749      1,859    3,319     84,787      6,505      7.7%
1986-4      1,299      2,068    2,442     80,495      4,786      5.9%
1985-4      1,088        910    1,418     82,386      2,780      3.4%
1984-4        685        754    1,019     84,414      1,998      2.4%
1983-4        409        721      829     71,729      1,625      2.3%

PAGE 6


TESTING LINE A LONG-TAILED SANITIZED CREATION
DATA BEFORE ANY TREATMENT OF KNOWN SPECIAL CASES
ACCIDENT PERIOD SELECTED RESERVES (000) EVALUATED @ 2Q88
STATISTICAL BASE IS 5 PERIODS OF LENGTH 4 QUARTERS
SELECTEDS ARE MEAN INCURRED AND PAID-INCURRED
"PRIOR" IS AT PRIOR TEST DATE = 4Q87

[Accident-period detail not legibly reproduced: same layout as pages 5 and 6 for accident periods 1969-4 through 1988-4. Here the widths do not decrease nearly as fast and show several blips; the incurred-basis total for the line is 100,000 with a total width of 2,958. The total for width does not include the most recent period, and the total for reserve includes case outstanding of 314 for missing early periods.]

CONFIDENCE INTERVAL CALCULATION ON RESERVES BY TEST DATE:
METHODS USED ARE MEAN AND LAST-TWO, INCURRED AND PAID-INCURRED
DOLLARS IN THOUSANDS

            ---STANDARD DEVIATIONS---     MEAN      95% CONFIDENCE
TEST         OF      BETWEEN               OF          INTERVAL
DATE       METHODS   METHODS    TOTAL    METHODS    DOLLARS   PERCENT
1988-4      3,651     16,249   16,654    106,838     32,643     30.6%
1987-4      3,455     13,857   14,281     97,843     27,992     28.6%
1986-4      3,392     12,017   12,487     94,740     24,475     25.8%
1985-4      3,375     10,591   11,116     89,599     21,787     24.3%
1984-4      3,066      9,190    9,687     83,378     18,988     22.8%
1983-4      2,788      7,416    7,923     70,661     15,529     22.0%

PAGE 7


Thank you.

MR. HAYNE: Our third speaker is Spencer Gluck. He is an actuary with Milliman and Robertson with an extensive background in loss reserve analysis for primary insurers and re-insurers, and his current practice emphasizes medical malpractice.

Quoting his bio here, "in previous incarnations," Spencer has been a Vice President for Kramer Capital Consultants, a manager in the actuarial division of Peat Marwick and a regional actuary for the Insurance Services Office. Spencer is currently a Fellow of the Casualty Actuarial Society, a member of the American Academy of Actuaries and holds a bachelor's degree in mathematics and a master's in education from Cornell. Spencer?

MR. GLUCK: Thank you. I heard somebody drop the word "boot-strapping" earlier. Boot-strapping is somewhat related to what Robin presented to you, and I am going to present boot-strapping in a little more formal context, and that is boot-strapping in conjunction with an analysis of loss reserves by regression methods.

How many people here attended Ben Zehnwirth's presentation on regression?

(Show of hands)

Well, that is good. That will help. I am not going to spend too much time on going through all the subtleties of the right way to apply regression analysis to loss reserves, but we will start off with number one there, the title page.

(Slide #1)

We have regression expressed as one pseudo-equation here. That is all you need to know about regression for this talk. The hand-out I gave you is just the same as these pictures and most of the pictures are going to be easy to see.

Past actual data I have written there in a little triangle, so that will be a familiar triangle, and you do regression to it. What you get out is fitted data. I have given you two triangles of fitted data: fitted data for the past period that you actually fit and fitted data projected for that future period that we are interested in.

I should make a note here, and it will come up, that frequently we are working with converted data before we do our regression. For example, very commonly we will take the logs of the data before we do the regression, so what I am calling actual data may be actual converted data.

Let's go on to the next page, please.

(Slide #2)

Okay. We take our actual data in the past period. We subtract our fitted data for the past period. Then we have our residuals for the past period. The residuals are very important and, of course, you get lots of other things out of regression -- parameter estimates, standard errors of the parameters, et cetera, et cetera -- but for this boot-strap analysis, we are basically just going to work with these residuals. Since we are doing this to measure variability, the residuals, the difference between our fit and our actual data, are going to give us the most information about that variability. Onward.


(Slide #3)

The next picture says required properties of residuals. I could not think of a better word than "required." We can relax these requirements to some degree, but let's talk about what they are.

First and most important is that they have to be random. That is to say that there cannot be anything systematic, no patterns in the residuals. If there are patterns in the residuals, it means you have got a bad model and you have got to throw it out and come up with a better model.

None of the techniques we are going to do here, nothing we can do here, can solve that problem. If there are patterns, if there is anything systematic in your residuals, you have got a bad model and you need a better one.

Furthermore, if we assume that the residuals are independent, that they are identically distributed and that they are normally distributed, then the least-squares regression will give you optimal estimates of those parameters. To the extent that we relax some of the lower ones, the regression may still give us unbiased estimates of the parameters. They will no longer be optimal.

But with this boot-strap technique, we will have the ability to relax those assumptions to some degree and get some estimate of how that increases the variability of our projected results. On to the next one.

(Slide #4)

I want to talk a little more here about randomness. If this is too hard to read on the screen, you might look at your hand-out. This is driving home some of the points that Ben Zehnwirth made earlier, just some examples of nonrandomness in the residuals. What I have got here is plots of the residuals: in the upper left-hand corner by accident year, in the lower left-hand corner by development period, and in the lower right-hand corner by calendar year.

This is an example of very poorly behaved residuals right here. There is an obvious trend in the residuals. You can see it in the accident year or the calendar year pictures and you cannot use this model. Nothing that you measure out of this model will really be meaningful because of that pattern.

(Slide #5)

On the next page, we have another example of poorly behaved residuals, especially here, looking down at the development period, the lower left-hand corner, and here it is obvious that the curve that we are using in this regression analysis does not fit the development pattern very well.

You can see that out in the tail, the curve is way over the development pattern and, since loss reserve estimates are very sensitive to what goes on in the tail, if you use this model, you are going to overproject your reserves by a large margin. When I first showed this, Roger pointed out that if you look in the upper right-hand corner, the R-squared on this particular plot is ninety-seven percent, so be wary of high R-squareds.

There is really no substitute for looking at the error plots, which will tell you, again, about


this question of whether the errors are, in fact, random.

(Slide #6)

On the last page, on Page 6, I have got another, similar kind of plot with better behaved errors. I do not know if they are perfectly behaved, but these are a little closer to well behaved errors in that they are relatively randomly scattered and there are no obvious patterns in the residuals. This is a case where we might say our model is fitting well enough; let's go on and analyze the variability around it.

The basis of the boot-strap is that the basic model is valid. If the model itself is not valid, specifically meaning that the errors are not random, then you can throw out the rest of this. When you get to that stage, junk the model and then find one that works better.

We have been talking about the basic concept of the boot-strap. As a matter of fact, could you, Roger, hand me Slide 3? We are going to pop that one back occasionally, so if you will pull that one out of the pile, let's pop that one up just for a second.

(Slide #3)

There we go, the properties of residuals. For now, for this simpler case of the boot-strap, we are going to accept all except the bottom one. The boot-strap does not require us to make any particular distributional assumption about how the residuals are distributed, except that they are random, which I emphasized already as absolutely necessary. We cannot be talking otherwise.

For now, we are also going to assume that the residuals are independent and identically distributed. Now we have a bunch of these residuals that are independent and identically distributed and, as I said, we do not really have to assume that they are normally distributed, so what do we know about them?

Well, let's say that we are looking at a twelve-by-twelve triangle, which is seventy-eight data points, so assuming we fit to that whole triangle, we have seventy-eight residuals that we are looking at, and we are assuming that each of these residuals comes from the same distribution and they are all independent of each other, so we know a lot about that distribution of residuals. After all, we have got an empirical distribution. We have got seventy-eight of them to look at. That is exactly what we do in the boot-strap.

We say, "Let's assume that the distribution of residuals is a discrete distribution." In this case, it has seventy-eight possible results, all equally likely, and the seventy-eight residuals we have are the ones, so that is the distribution. Let's go on to Slide Number 7.

(Slide #7)

What we do, looking at the middle part of that slide, is we randomly generate more residuals. We select randomly from that discrete distribution: seventy-eight points for the upper half of the triangle and something less than seventy-eight points, I think, in the lower half, unless you stick a tail on it. Yes?

QUESTION: Is there any reason we are selecting from a discrete distribution rather than using the normal distribution that corresponds to those residuals?


MR. GLUCK: There are pluses and minuses. You could use a normal distribution or some other parametric distribution if you had some other form of the residuals you wanted to assume. If they are not behaving terribly normally, then if we force them into a normal, we will get smaller errors.

On the other hand, if you have seventy-eight errors to choose from and we just use them directly, then we are certainly never going to get an error outside of the range of those seventy-eight, where there may be some range of possibility. So if you fit them to a parametric distribution, whether it be normal or some other parametric distribution, and select from that, you have the possibility of generating the very rare, but occasional, large error. So, there are pluses and minuses.

The traditional way -- nothing about the boot-strap is that traditional, but the traditional way to use the boot-strap -- is to use the errors directly. That is not to say that yours is not a very reasonable alternative.

We have got an upper and a lower triangle of randomly generated residuals, and we go to our fitted data from our original regression, both past and future periods, and we add the randomly generated residuals, which are all going to average zero, by definition, and that will give us a new triangle of pseudo-data.

This is based on that particular error pattern. This is just another data set that we might just as likely have gotten, selecting randomly from that same error pattern. The first thing we are going to do with that pseudo-data is take the lower half of the triangle as pseudo-data and just store it, although we may have to reconvert it again.

We talked about how the data has probably been through a conversion process. In many models, that includes a log, so that to reconvert it back, we take an anti-log. So, that lower triangle of pseudo-data is a certain reserve answer, and we take it backward through the conversion process to make it a reserve answer. On to the next page.

(Slide #8)

Then we do something else with the upper triangle of pseudo-data. Here we have another pseudo-equation. We take the upper triangle of pseudo-data and we go back and we do the regression on it again. This gives us fitted pseudo-data for the future period.

In other words, we make believe that that triangle of past pseudo-data was the actual data we had, fit the regression, and project the reserves that that would imply, and that is represented there by the lower triangle of fitted pseudo-data. Next, please.

(Slide)

Now, I have thrown in the word converted, which I have mentioned a few times. I probably mean "de-converted", but I keep getting all mixed up with this. I mean that these are converted back to actual loss projections again, through an anti-log if we had originally taken a log transformation, or backwards through whatever other transformation we took.
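The following is a minimal sketch of a single boot-strap iteration along the lines just described, using an eight-year triangle for brevity. The log-scale Hoerl-type curve, the simulated data, and every variable name here are illustrative assumptions rather than the speaker's actual model: residuals are drawn with replacement, added to the fitted values for both halves of the triangle, the regression is refit to the pseudo past, and the projections are converted back with an anti-log.

```python
# Minimal sketch of one boot-strap iteration (hypothetical model and data, not
# the speaker's production code): fit a log-scale curve to an incremental paid
# triangle, resample the residuals with replacement, build pseudo-data for the
# past and future cells, refit, and convert the projections back with exp().
import numpy as np

rng = np.random.default_rng(1)
n_years = 8  # accident years / development periods in the square

# Hypothetical incremental paid triangle on the log scale.
acc, dev = np.meshgrid(np.arange(n_years), np.arange(n_years), indexing="ij")
past = dev <= (n_years - 1 - acc)          # upper (observed) triangle
future = ~past                             # lower (to be reserved) triangle
log_paid = 8.0 + 0.05 * acc + 1.2 * np.log1p(dev) - 0.45 * dev \
           + rng.normal(0.0, 0.15, size=acc.shape)

def design(a, d):
    # Simple Hoerl-type curve: level + accident-year trend + log(d + 1) + d.
    return np.column_stack([np.ones(a.size), a.ravel(),
                            np.log1p(d.ravel()), d.ravel()])

X_past, y_past = design(acc[past], dev[past]), log_paid[past]
beta, *_ = np.linalg.lstsq(X_past, y_past, rcond=None)
fitted_past = X_past @ beta
fitted_future = design(acc[future], dev[future]) @ beta

# Residuals, inflated for the parameters used up in the fit.
n, p = y_past.size, X_past.shape[1]
resid = (y_past - fitted_past) * np.sqrt(n / (n - p))

# One boot-strap draw: residuals selected with replacement for every cell.
pseudo_past = fitted_past + rng.choice(resid, size=fitted_past.size, replace=True)
pseudo_future = fitted_future + rng.choice(resid, size=fitted_future.size, replace=True)

# Refit to the pseudo past triangle and project the pseudo future cells.
beta_star, *_ = np.linalg.lstsq(X_past, pseudo_past, rcond=None)
proj_future = design(acc[future], dev[future]) @ beta_star

# Convert back from the log scale; the difference is one reading of the
# projection error for the total reserve.
projection_error = np.exp(proj_future).sum() - np.exp(pseudo_future).sum()
print(round(projection_error, 1))
```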


Now we can compare the converted pseudo-data for the future period. That is more or less like the actual data in this pseudo-environment, what actually emerged in the future. We compare that to the converted fitted pseudo-data. This is what was projected from the upper half of the triangle, and the difference between those two are the projection errors in our pseudo-data environment.

This is the actual difference between what we projected through the regression, which is the middle triangle, and what the actual future pseudo-data looked like, which is the left-hand triangle. Now we can do this many times. Now we can regenerate a new set of pseudo-data.

We go on regenerating new sets of pseudo-data as many times as we have the patience and computer time to spend on it. That can be a lot of computer time, depending on how many times you want to go through this.

What you wind up with after you have gone through it are those projection errors. You wind up with an empirical distribution of those projection errors. If you have gone through the boot-strap a hundred times, then we have a hundred readings on what the projection error might be.

I have kept this in the triangle picture, because if you are only worried about the variability around your total reserve estimate, maybe you would add everything in the lower triangle together. Adding it together assumes that the lower triangle is in-period loss payments.

You also can now look at the variability for any individual data point in that triangle, or you may want to sum across the triangle and look at the variability for total accident year reserves. You may be interested in summing down the diagonals and looking at the variability for total future calendar years. Now we have an empirical distribution of projection errors.

(Slide #10)

What can we do with an empirical distribution of projection errors? Well, we can calculate just about any statistic we want, such as bias, which I want to talk about now. We can talk about variability, but to some extent those projection errors may not average zero.

In fact, if your transformation is non-linear, for example if we have done a log transformation on the data, then we should not expect that those projection errors will average zero. That is an important point for a lot of you to consider, because I know it has become popular to fit the inverse power curve to development factors.

Fitting the inverse power curve involves first doing a log transformation on the data -- in this case, development factors minus one -- and then you fit a curve by regression and you take an anti-log. Well, that does not give you a mean estimate. That gives you a biased-low estimate, because the transformation is non-linear.

In addition to calculating the bias, we can calculate variance, standard deviation and skewness. We could even calculate the coefficient of kurtosis if we want. I do not know why we would want to, but we could. I did not bother. Then, empirically, straight out of the distribution, we have empirical confidence intervals, because we have an actual distribution of projection errors.
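A minimal sketch of reading those statistics off the stored distribution of projection errors follows; the error values below are simulated stand-ins, not output from any run described in the session.

```python
# Minimal sketch (hypothetical numbers): summary statistics from a stored
# empirical distribution of total-reserve projection errors, one per trial.
import numpy as np

rng = np.random.default_rng(2)
errors = rng.lognormal(mean=6.0, sigma=0.4, size=101) - 400.0  # stand-in trials

bias = errors.mean()                      # projected minus "actual", on average
std_dev = errors.std(ddof=1)
skew = ((errors - bias) ** 3).mean() / errors.std() ** 3

# Empirical confidence limits read straight off the sorted trials.
pctls = np.percentile(errors, [5, 10, 25, 50, 75, 90, 95])

print(f"bias {bias:.1f}  std dev {std_dev:.1f}  skewness {skew:.2f}")
print("percentiles:", np.round(pctls, 1))
```

On the bias point: if the log of a quantity is normally distributed with mean mu and variance sigma squared, the quantity's own mean is exp(mu + sigma squared / 2), so simply exponentiating a fitted log-scale mean understates the mean. That is the mechanism behind the biased-low inverse power curve estimate just mentioned.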


(Slide #11)

Here, I have just talked about different pieces of the variance. We have measured the parameter as well as the process risk here, but not quite all of the risks.

The statistical error is the closest thing we have to the process risk. I like this term better, to stay away from "process risk", because this is only equal to the process risk if the model that I have used exactly describes the process. I suspect it does not, because the models we are going to use are simplified, with reduced parameters, so that they can work on a triangle of data. They are going to involve a somewhat simplified version of the process and, therefore, we would expect that this statistical error would probably be larger than the true process risk.

Again, if the errors appear to be random around our model, our model may be reasonably good, and the statistical error is the error in the data around the model. If you look there, you are comparing the pseudo-data to the average pseudo-data for the future period.

This is just the variability in those future losses due to the error structure itself. It has nothing to do with our projection yet. At the bottom, I have written the term as "parameter estimation error". I wrote it that way because our literature sometimes says "parameter risk" and some of the international literature says "estimation error", so I put all the words together so everybody will know what we are talking about, in "parameter estimation error".

Here, what we have is the fact that our regression parameters themselves will vary because of the variability of the data, and we have measured that, to some degree, with the boot-strap, too, because we have generated a hundred or more different sets of data. This piece lets us see how much the projections will vary due to the variability in the historical period data, rather than the future period data, and the variability that gives us in our parameter estimates. Here, we have broken it down and we have actual readings on both the process risk and the parameter risk.

Before I go on, I want to emphasize that that is not all the risk. It is convenient the way we frequently talk about process and parameter risks adding up to all the risk, but in this context they only add up to all the risk if our model accurately describes the process.

Since that undoubtedly is untrue, we have a third area of error, which is a model selection error, if you will. It has been expressed in a number of different terms, but there is a decent chance that our model does not perfectly describe the process, so there is additional error there.

In a lot of informal discussion, I think we lump that in and call it part of the parameter risk, but here I have limited "parameter estimation error" to the error in estimating the model we have selected, which leaves an unanalyzed error: the fact that the model itself may not be appropriate.

(Slide #12)

The last three pages are some sample output from a run of the boot-strap with just a hundred trials. Maybe you will see, when we look at the histogram at the end, that we need more than a hundred trials, but this is just a hundred.
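A minimal sketch of the variance split described above, using hypothetical stored arrays in place of any real boot-strap output; the quantities mirror the formulas shown on Slide #11.

```python
# Minimal sketch (hypothetical arrays): splitting the boot-strap variance into
# the statistical error and the parameter estimation error described above.
import numpy as np

rng = np.random.default_rng(3)
n_trials = 101
# Stand-ins for stored results per trial, both for the total future payments:
pseudo_future = 1000.0 + rng.normal(0.0, 40.0, n_trials)   # pseudo "actual" future
fitted_future = 1000.0 + rng.normal(0.0, 25.0, n_trials)   # refitted projections

total_error_var = np.var(pseudo_future - fitted_future, ddof=1)
statistical_var = np.var(pseudo_future, ddof=1)   # data around its own mean
estimation_var = np.var(fitted_future, ddof=1)    # projections around their mean

print(round(total_error_var), round(statistical_var), round(estimation_var))
# With independent pieces, the first is roughly the sum of the other two.
```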


I have developed these programs relatively recently. They are debugged to the extent that I know that when you start them up, they produce an answer at the end, but I know there are mistakes in there, so these answers are probably not right. In any case, it is just an example.

This is an example of the power and the different things you can estimate with it. In the left column, I start with the actual regression fit to the data. The bias is just the difference, on the average, between the projected and the actual results in the boot-strap. Those funny little things are negative signs; we have to work on the format, too.

You can see that we have a very substantial bias, and I want to look into that, because the bias on all my runs came out larger than I expected it was going to. And yet, if you look at the actual data, you will realize that the fitted result corrected for the bias seems to be a reasonable projection, so that bias seems to be real and it seems to be there. It is bigger than what I would have expected from the log transformation alone, and it may be related, perhaps, to non-normality of the errors, but I am not really sure what all the causes of that bias are.

Next we have standard deviation and variance. I have them in total for the entire loss reserve projection, and I also have similar measures for each accident year individually and for each calendar year individually. Of course, if you add down the columns you will not get the total.

Then I have also measured the skewness, which is something we might be interested in, and I have broken down that variance into the statistical error and the parameter estimation error, defined as I defined them previously.

(Slide #13)

On the next page, just a quick gander at this page. These are the actual confidence intervals coming out of that boot-strap, the actual distributions we get out of -- actually, I said a hundred, but it was a hundred and one trials that I ran through. It is easier to pick off percentiles if you do a hundred and one trials. Again, these are shown for each of those same pieces and for the total, and, of course, the individual pieces do not add up to the total.

(Slide #14)

Finally, on the last page, a little picture. This is the histogram of the hundred and one trials. This is for the total reserves. It shows how the results of those different trials came out. As you can see, it is kind of bumpy. After viewing that, you might say that we should run it a thousand times.

As of my first runs of the model, I am taking about six seconds per iteration, on a 386 machine, through the boot-strap, so it is taking me ten or twelve minutes to run it a hundred times. I can make it run faster than that, but it is still going to take a long time to run it a thousand times.

Once you think you have got a pretty solid model and you are close to using it, it might make sense to set your machine running overnight and run it a thousand times. If those biases that I am measuring are real, then it is very important to do this analysis and measure those biases even to get a good mean answer, much less confidence intervals.


I am just about done. I would like to pop back that one slide, number three again, those properties of residuals, because I did say you could relax some of those constraints.

(Slide)

Basically, the easiest one is "identically distributed." To the extent that the errors are not identically distributed, we are not going to assume that there are different distribution forms for the errors; however, there may be different variances. Based on looking at our fit, we may see that there is a lot more variance in the tail, for example, than there is earlier on.

As long as we can model that, as long as we can measure it through our data and create a model of heteroscedasticity that reflects the greater variance in the tail, if that is what the data shows, then we can account for it in the boot-strap.

"Independence" -- what I have in mind here, obviously, is that if I said they are not random, forget the model. What I am talking about here is that even though they might be random, they can still be dependent in the sense that they may be autocorrelated.

For example, it might not be uncommon to see some autocorrelation within a given accident year as we look across the development year axis. So, again, once we fit a model, we can measure whether there is autocorrelation in the residuals and, if there is, we can model it.

If we are modeling autocorrelation and heteroscedasticity, then we start with our empirical distribution of errors, or of residuals, which have autocorrelation and heteroscedasticity in them, and we have to kind of run them backwards through our autocorrelation and heteroscedasticity models to create independently, identically distributed errors. Then we randomly generate independently, identically distributed errors from that distribution, run those back through the heteroscedasticity and autocorrelation models to create errors that have heteroscedasticity and autocorrelation, and then we create our pseudo-data that way.

Obviously, the tendency is going to be, when you put in some autocorrelation and heteroscedasticity, to increase your errors of projection, especially if the heteroscedasticity involves greater variability out in the tail, which is disproportionately important to loss reserve analysis.

I am not going to get into too many specifics about how I did that, except to say that if you can model these things, you can do it. The power of the boot-strap is that you do not have to solve the equations every time. If you have got a pure linear model, the equations are solvable in general.

If you have got a linear model with a non-linear transformation of the data, then in general you may only be able to solve those equations approximately, and if you have a non-linear regression model, I would not even know where to begin to solve those equations to project variances.

So, the boot-strap technique is general and we can even run it on a non-linear model, although if it takes me five or six seconds per iteration to run it on a linear model, I cannot imagine how long it is going to take to run it on a non-linear model, but I am working on it.

That is it. Do I have any questions from the floor? Yes?
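The following is a minimal sketch of the back-and-forth transformation just described, together with the Durbin-Watson-style autocorrelation check that comes up in the questions below. The residuals, the fitted variance scale, and the lag-one coefficient are all hypothetical placeholders; in practice the check and the transformation would be applied within each accident year, ignoring the jumps between rows.

```python
# Minimal sketch (hypothetical residuals and fitted parameters, not the
# speaker's production code): check lag-one autocorrelation within an accident
# year with a Durbin-Watson-style statistic, "run the residuals backwards"
# through fitted heteroscedasticity and AR(1) models before resampling, then
# run the resampled values forwards again.
import numpy as np

rng = np.random.default_rng(4)
resid = rng.normal(0.0, 1.0, 12)     # residuals for one accident year, in development order
scale = np.linspace(1.0, 2.5, 12)    # fitted variance model: more spread in the tail
phi = 0.4                            # fitted lag-one autocorrelation coefficient

# Durbin-Watson-style check (near 2 when there is no lag-one autocorrelation).
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

# Run backwards through both models to get roughly i.i.d. innovations.
z = resid / scale
innov = np.empty_like(z)
innov[0] = z[0]
innov[1:] = z[1:] - phi * z[:-1]

# Resample the innovations with replacement, then run forwards to put the
# heteroscedasticity and autocorrelation back into the pseudo-residuals.
innov_star = rng.choice(innov, size=innov.size, replace=True)
z_star = np.empty_like(innov_star)
z_star[0] = innov_star[0]
for t in range(1, z_star.size):
    z_star[t] = phi * z_star[t - 1] + innov_star[t]
resid_star = z_star * scale

print(f"DW {dw:.2f}, lag-one autocorrelation {lag1:.2f}")
print(np.round(resid_star, 2))
```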


QUESTION: Aside from the practical difficulty of taking half a day for an iteration, there is really nothing in anything you have done which requires that the basic loss reserve estimate you are using in the first place comes out of a regression model, is there? You could apply this residual concept, couldn't you, just to selected loss development factors. In fact, you could examine different sets of residuals.

MR. GLUCK: Well, number one, that is what Robin presented already, something more related to doing this technique on loss development factors. I have run it on loss development factors as well, still as a regression fit to loss development factors. There is nothing to say you cannot do that.

The only problem is that you are using an over-parameterized model, like a lot of individual loss development factors, and you are going to get much tighter fits around them. To some degree, the tightness of those fits is artificial, because you are just using so many parameters relative to the number of data points. You can correct for that, obviously.

For example, in my model, I did correct all my residuals by a factor of the square root of N over N minus P, where P is the number of parameters in the model, to pick up that additional variability. I suppose if you did that in the development factor model, which was heavily over-parameterized, you would pick up some of it right there by blowing up your errors by that factor.

Any other questions?

QUESTION: (Inaudible)

MR. GLUCK: Let me just repeat the question. The question was: If you do see patterns or trends in your residuals, you might introduce another parameter into your model or make some other kinds of modifications to your model and to your fit. That is close enough.

Yes, absolutely. For example, if you have an unanalyzed trend in your residuals, throw the model out, or throw that particular model out, and that perhaps means introducing a more complex model with an additional trend parameter, or more than one additional trend parameter, whatever is necessary.

The point I was making is that you have to first get to a model where the residuals appear reasonably random before it even makes sense to look at this boot-strap, because it is not going to give you meaningful results if there is an unanalyzed trend in your residuals.

But yes, and in fact, I specifically picked a model that did not recognize a trend in order to generate some of those bad errors. I wanted to have some examples of bad errors to show. In fact, the third page, with the reasonably well behaved errors, was essentially the same model with an additional parameter introduced to recognize trend, which seemed to do a pretty good job of producing random-looking errors; so yes, we have been through all that.

QUESTION: Do you have any ideas on testing the residuals for autocorrelation?


MR. GLUCK: What I have done is I have calculated kind of a modified Durbin-Watson statistic, modified because what I did is I only looked for autocorrelation within accident years, along the development period axis. I did not look for it other ways. I just did not expect to find any in other directions.

It is modified because I have a lot of breaks in the data, rather than really having a time series with seventy-eight points in a row. I have twelve points in a row and then I have eleven points in a row. Ignoring the jumps, I calculated a Durbin-Watson statistic, and I actually just calculated the lag one autocorrelation parameter, the sample lag one and lag two and lag three, et cetera, et cetera.

So I had the Durbin-Watson statistic to help me see if the autocorrelation was significant, and then a specific read-out of the autocorrelation function in which, in certain cases, I fitted a model for that autocorrelation function if it looked significant.

Any other questions? Yes?

QUESTION: In all of this analysis, we are only using paid triangles. Do you take the position that the outstanding in these triangles is worthless information and just chuck it out? I am concerned that a person who works for a reinsurance company, given the long delays on payment, might end up with a triangle of zeros.

MR. GLUCK: Did everybody hear that question all right? Basically, I am concerned, as you are. I am not willing to throw out the outstandings a priori. Basically, these models as I have expressed them here do work best on paid triangles, and if you are in a long-lag situation, that could be trouble.

I think we need some more development of better regression models that work on the outstandings, but to use the boot-strap, it really has to be a complete model. That is to say, if the input is a triangle of outstandings and a triangle of incremental paids together, then the model has to fit both the outstandings and the incremental paids.

Even though ultimately we are only interested in the incremental paids, we have to build our model to be able to fit both, because that becomes necessary in the generation of pseudo-data. So yes, we are working on trying to get a better package of models, including some that work on outstanding losses.

It may be that in many types of insurance the paid losses are -- I know there is some a priori opinion out there, Professor Zehnwirth's in particular, that the paid losses are much better, harder data and, therefore, should be more predictive than the outstandings are.

There is a lot of logic to that. I am hoping that if we develop some better models, some good linear, or close to linear, models that work on outstanding losses, we can actually test that and see how predictive they are before we throw them away. Yes, Ben Zehnwirth, whom I just mentioned.

DR. ZEHNWIRTH: I guess there are a lot of things I would like to say, but first of all, I would like to congratulate you on your extremely lucid exposition. I share a lot of your views.


There are a couple of general comments that I would like to make, which I guess are directed to all three presentations. The first one is that when you derive your standard errors, the uncertainties associated with your mean reserves, the derivation of those uncertainties, or standard errors, must be based on the model you use to derive those forecasts. That is very important.

I think Ed Weissner here was a bit concerned about his multiplicity of zeros in the early development as far as paid losses are concerned. I have analyzed incremental paid losses for some of the largest insurance companies in the world, and the first three or four development years are all zeros. There is a hell of a lot of information in that data, so I do not think we should discard it. Quite often, there is more information in that data, when you analyze it, than there is in incurred losses.

I also want to say there is something about confidence intervals which I think is quite important. I think what is important is to understand what the mean forecast is and what your forecast of the standard error is, and also whether the company is risk averse and to what extent it is. If the mean forecast is a hundred million and the standard error is half a million, then I know that if I set aside a hundred million dollars, and if the future is going to be the same as the past -- because quite often the future is not what it used to be -- then my expected loss is very small. I would expect to lose very little if I set aside a hundred million dollars.

However, if the standard error is, say, ten million, there is a decent chance that if I set aside one hundred million dollars, I will stand to lose ten million dollars. This question about what kind of company should be looking at a ninety-nine percent confidence interval or a twenty-five percent or whatever, I do not think that is really the critical issue.

I think the critical issue is that you need to know the coefficient of variation, what the standard error is relative to the mean forecast, and then base your final decision as to what you are going to say in your books or on your balance sheet on what your expected loss is, which, on the average, is the standard error. The standard error measures the average distance between the mean forecast and the sort of loss that you are likely to have.

MR. GLUCK: Just to comment on that, number one, of course, I like using the boot-strap here because the title of the session was Confidence Intervals, and the boot-strap gets you not only to standard errors but also to confidence intervals.

Again, I am not sure how complex that aggregate distribution is likely to be, but in the runs I have gotten, the skewness has not been insignificant, so I think it might be interesting to go at least one moment beyond standard errors and look at the skewness. That may be significant, again, especially if you are interested in being ninety percent confident or whatever it is; it might pay to go one step further and look at the skewness as well, if we have the ability to do so.

DR. ZEHNWIRTH: One other comment. You mentioned three types of errors -- process error, or the process noise, which we have no control over. If I knew the distribution of IQs, by actually measuring the IQs of each person in this room, so that I know the IQs of all the people in this room, and then you asked me to forecast the IQ of a person chosen at random in this room, there is going to be a certain error. The standard error might be fifteen. There is a lot of variability in IQs here.


(Laughter)

I have no control over that, unless, of course, I am going to subject you to all this educational program, perhaps, to improve your IQs. But I have got no control over that variability, that fifteen, that process error.

The other error you mentioned is the estimation error, because we are only actually looking at a sample, one part of the process. That is all we can observe, unfortunately, so you have got what is called the estimation error.

The third one you mentioned is the model specification error: what if my model really is not the model that is really driving the data?

There is a fourth one, which is the most difficult one to deal with, and I think I mentioned that before. The future ain't what it used to be. That is, even though you might have captured all the trends in the data, you might have estimated social inflation, and social inflation might have been the same in the last four years, but before that it was different. Now, you might decide to project based on a social inflation of, say, fifteen percent for the next twenty-five years, but social inflation could change in the next two or three years. This is that fourth kind of error.

The other thing, of course, while it is not really an error, is that when we do our loss reserving, we do it annually. We do not just project now and sit around and wait twenty-five years to find out whether we are right or not. Thank you.

MR. GLUCK: Just on the unquantified errors, I think that as hard as we try to quantify the errors around things, the fact remains that the future is not likely to be the same as the past, no matter how steady everything looks. Even if our model looks like it fit well over a ten-year period, there is no saying that the patterns won't change in the future.

There are all kinds of unquantified and unquantifiable variabilities. As a consultant, I am a little reluctant and worried about promulgating confidence intervals, especially to less sophisticated audiences. I think the confidence intervals that we talk about quantifying here are minimum confidence intervals.

In other words, if my analysis says this is a ninety percent confidence interval, I know there are sources of error that I have not quantified and, therefore, the real ninety percent confidence interval is bigger. They give a lot of useful information as minimum confidence intervals, especially if we are running a number of different methods.

If the confidence intervals from two different methods overlap substantially, we can say, "Hey, these methods are producing reasonably consistent results"; whereas, if we have two methods where the confidence intervals do not substantially overlap at all, we can say, "Hey, one of these models -- despite the fact that we would not have gotten this far if we did not think the model fit reasonably well -- really is not fitting, or maybe both of them are not fitting."

So I think there is a lot of useful information in the confidence intervals, but I am also a little reluctant about misuse, because I think that everything we do in confidence intervals will only give us minimum estimates of confidence intervals, and there will always be a layer of variability that we have not quantified.


Yes?

QUESTION: Isn't there a far greater danger in putting out point estimates than there is in a range of estimates?

MR. GLUCK: The question was how a range can be more dangerous than a point estimate. There is some difference. What we do now is we give a point estimate and we say, "There is my best estimate of the reserves," and we do not actually say if it is a mean or a median. It is some kind of central number and I do not know what it is, and there is a lot of uncertainty around it and I do not know how much.

It does not sound like the most useful information, but it is certainly true. It is my best guess and there is a lot of uncertainty. If I say that the range, with ninety percent confidence, is from here to there, then I am making a very precise scientific statement that I do not know if I should be making, because I do not know if my science is precise enough to make that statement scientifically.

QUESTION: (Inaudible)

MR. GLUCK: In many ways, a statement of a confidence interval is a more precise statement than "Here is a central estimate with a lot of variability around it." I think we have got to try to give that information, but we have got to hem and haw and caveat it. "A ninety percent confidence interval is at least this large" is the way I would say it, and it is probably larger.

Is there more discussion?

(No response.)

MR. HAYNE: That was very good. I would like all of you to join me in expressing our thanks and appreciation to the speakers.

MR. HAYNE: Of course, we will be available for individual questions for a limited amount of time.




(Slide #3)
Required Properties of Residuals:
Random
Independent
Identically Distributed
Normally Distributed


(Slide #4)
Company A, General Liability, in-period loss payments: residual plots vs. time for Model HOERL1 (R-squared .748, F-statistic 108.923). Panels show residuals by accident year, by development period (in months), and by calendar year.


(Slide #5)
Company A, General Liability, in-period loss payments: residual plots vs. time for Model HOERLCUM3 (R-squared .966, F-statistic 1044.144). Panels show residuals by accident year, by development period (in months), and by calendar year.


(Slide #6)
Company A, General Liability, in-period loss payments: residual plots vs. time for Model HOERL3 (R-squared .924, F-statistic 291.009). Panels show residuals by accident year, by development period (in months), and by calendar year.




(Slide #10)
Calculable from Empirical Distribution of Projection Errors:
Bias
Variance
Standard Deviation
Coefficient of Skewness
Confidence Intervals


(Slide #11)
Sources of Variance
Total Projection Error: E[Pseudo Data - Fitted Pseudo Data]^2
Statistical Error: E[Pseudo Data - E(Pseudo Data)]^2
Parameter Estimation Error: E[Fitted Pseudo Data - E(Fitted Pseudo Data)]^2


(Slide #12)
Company A, General Liability, in-period loss payments: Projected Reserves and Analysis of Errors (inflation or discount rate: 0). For each accident year (74-87), each future calendar year (90-104), and in total, the exhibit shows the original fit, the bias, the corrected fit, the standard deviation, variance and skewness of the total projection error, and the split of the variance into statistical (process) error and parameter estimation error.
Notes:
Projection Bias: E(Projected Reserves) - E(Actual Reserves)
Projection Error: Projected Reserves - Actual Reserves
Statistical Error: E(Actual Reserves) - Actual Reserves
Estimation Error: Projected Reserves - E(Projected Reserves)


(Slide #13)
Company A, General Liability, in-period loss payments: Confidence Interval Analysis. Empirical percentiles (5th, 10th, 25th, 50th, 75th, 90th and 95th) of the probability distribution of reserves, shown by accident year, by future calendar year, and in total.


(Slide #14)
Histogram of the distribution of total reserves across the boot-strap trials, in thousands of dollars (horizontal axis roughly 28 to 38 thousand).

