Nat. Hazards Earth Syst. Sci., 22, 245–263, 2022
https://doi.org/10.5194/nhess-22-245-2022
© Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License.

About the return period of a catastrophe

Mathias Raschke
Independent researcher: Stolze-Schrey-Str. 1, 65195 Wiesbaden, Germany
Correspondence: Mathias Raschke (mathiasraschke@t-online.de)
Received: 21 March 2021 – Discussion started: 20 April 2021
Revised: 26 November 2021 – Accepted: 17 December 2021 – Published: 31 January 2022

Abstract. When a natural hazard event like an earthquake affects a region and generates a natural catastrophe (NatCat), the following questions arise: how often does such an event occur? What is its return period (RP)? We derive the combined return period (CRP) from a concept of extreme value statistics and theory – the pseudo-polar coordinates. A CRP is the (weighted) average of the local RPs of local event intensities. Since the CRP's reciprocal is its expected exceedance frequency, the concept is testable. As we show, the CRP is related to the spatial characteristics of the NatCat-generating hazard event and the spatial dependence of the corresponding local block maxima (e.g., the annual wind speed maximum). For this purpose, we extend a previous construction for max-stable random fields from extreme value theory and consider the recent concept of the area function from NatCat research. Based on the CRP, we also develop a new method to estimate the NatCat risk of a region via stochastic scaling of historical fields of local event intensities (represented by records of measuring stations) and averaging of the computed event loss for a defined CRP or of the computed CRP (or its reciprocal) for a defined event loss. Our application example is winter storms (extratropical cyclones) over Germany. We analyze wind station data and estimate the local hazard, the CRP of historical events, and the risk curve of insured event losses. The most destructive storm of our observation period of 20 years is Kyrill in 2007, with a CRP of 16.97 ± 1.75. The CRPs could be successfully tested statistically. We also state that our risk estimate is higher for the max-stable case than for the non-max-stable case. Max-stable means that the dependence measure (e.g., Kendall's τ) for the annual wind speed maxima of two wind stations has the same value as for maxima of larger block size, such as 10 or 100 years, since the copula (the dependence structure) remains the same. However, the spatial dependence decreases with increasing block size; a new statistical indicator confirms this. Such control of the spatial characteristics and dependence is not realized by the previous risk models in science and industry. We compare our risk estimates to these.

Published by Copernicus Publications on behalf of the European Geosciences Union.

1 Introduction

After a natural hazard event such as a large windstorm or an earthquake has occurred in a defined region (e.g., in a country) and results in a natural catastrophe (NatCat), the following questions arise: how often does such a random event occur? What is the corresponding return period (RP, also called the recurrence interval)? Before discussing this issue, we underline that the extension of river flood events or windstorms in time and space depends on the scientific and socioeconomic event definition. This definition may vary by peril and is not our topic, even though such definitions influence our research object – the RP of a hazard and NatCat event.

The RP of an event magnitude or index is frequently used as a stochastic measure of a catastrophe. For example, there are different magnitude scales for earthquakes (Bormann and Saul, 2009). However, their RP may not correspond with the local consequences since the hypocenter position also determines local event intensities and effects. For floods, regional or global magnitude scales are not in use (Guse et al., 2020). For hurricanes, the Saffir–Simpson scale (National Hurricane Centre, 2020) is a magnitude measure; however, the random storm track also influences the extent of destruction. Extratropical cyclones hitting Europe, called winter storms, are measured by a storm severity index (SSI; Roberts et al., 2014) or an extreme wind index (EWI; Della-Marta et al., 2009). Their different definitions result in quite different RPs for the same events. In the rare scientific publications about risk modeling for the insurance industry, such as by Mitchell-Wallace et al. (2017), better and universal approaches for the
RP are not offered. In sum, previous approaches are not satisfactory regarding the stochastic quantification of a hazard or NatCat event. This is our motivation to develop a new approach. Building on results of extreme value theory and statistics, we mathematically derive the concept of the combined return period (CRP), which is the average of the RPs of local event intensities. As we will show by a combination of existing and new approaches from stochastic and NatCat research, the concept of the CRP is strongly related to the spatial association/dependence between the local event intensities, their RPs, and the corresponding block maxima, such as annual maxima.

Spatial dependence is not suitably considered in previous research about NatCat. The issue is only a marginal topic in the book by Mitchell-Wallace et al. (2017, Sect. 5.4.2.5) about NatCat modeling for the insurance industry. Jongman et al.'s (2014) model for European flood risk considers such dependence explicitly. However, their assumptions and estimates are not appropriate according to Raschke (2015b). In statistical journals, max-stable dependence models have been applied to natural hazards without a systematic test of the stability assumption. Examples are the snow depth model by Blanchet and Davison (2011) for Switzerland and the river flood model by Asadi et al. (2015) for the Upper Danube River system. Max-stable dependence means that the copula (the dependence structure of a bi- or multivariate distribution) and the corresponding value of dependence measures are the same for annual maxima as for 10-year maxima or those of a century (Dey et al., 2016). Also, Raschke et al. (2011) proposed a winter storm risk model for a power transmission grid in Switzerland without validation of the stability assumption.

The sophisticated model for spatial dependence between local river floods by Keef et al. (2009) is very flexible. However, it needs a high number of parameters, and the spatial dependence cannot be simply interpolated as is possible with covariance and correlation functions (Schabenberger and Gotway, 2005, Sect. 2.4). Besides, the random occurrence of a hazard event is more like a point event of a Poisson process than the draw/realization of a random variable. For instance, the draw of the annual random variable is certain; the occurrence of a Poisson point event in this year is not certain but random.

In the research of spatial dependence by Bonazzi et al. (2012) and Dawkins and Stephenson (2018), the local extremes of European winter storms are sampled by a predefined list of significant events. Such sampling is not foreseen in (multivariate) extreme value statistics; block maxima and (declustered) peaks over thresholds (POT) are the established sampling methods (Coles, 2001, Sects. 3.4 and 4.4; Beirlant et al., 2004, Sects. 9.3 and 9.4). Event-wise spatial sampling is a critical task; the variable time lag between the occurrences at different measuring stations, such as river gauging stations, makes it confusing. The corresponding assignment of Jongman et al. (2014) of one local/regional flood peak to peaks at other sites is not convincing, according to the comments by Raschke (2015b). The sampling of multivariate block maxima is simpler. However, univariate sampling and analysis are also not trivial. An example is the trend over decades in the time series of a wind station in Potsdam (Germany). Wichura (2009) assumes a changed local roughness condition over time as the reason; Mudelsee (2020) cites climate change as the reason.

The research of spatial dependence of natural hazards is not an end in itself; the final goal is an answer to the question about the NatCat risk. What is the RP of events with aggregate damage or losses in a region equal to or higher than a defined level? By using the CRP, we quantify the risk via stochastic scaling of fields of local intensities of historical events and averaging of the corresponding risk measures. This new approach significantly extends the methods to calculate a NatCat risk curve.

Previous opportunities and approaches for a risk estimate are the conventional statistical models that are fitted to observed or re-analyzed aggregated losses (also called as-if losses) of historical events, as used by Donat et al. (2011) and Pfeifer (2001) for annual sums. The advantages of such simple models are the controlled stochastic assumptions and the small number of parameters; the disadvantages are high uncertainty for widely extrapolated values and limited possibilities to consider further knowledge. The NatCat models in the (re-)insurance industry combine different components/submodels for hazard, exposure (building stock or insured portfolio), and the corresponding vulnerability (Mitchell-Wallace et al., 2017, Sect. 1.8; Raschke, 2018); additionally, they offer better opportunities for knowledge transfer, such as the differentiated projection of a market model on a single insurer. However, the corresponding standard error of the risk estimates is frequently not quantified (and cannot be quantified). The numerical burden of such complex models is high. Tens of thousands of NatCat events must be simulated (Mitchell-Wallace et al., 2017, Sect. 1). Thus, the question arises of what the stochastic criterion for the simulation of a reasonable event set in NatCat modeling is. As far as we know, scientific NatCat models for European winter storms (extratropical cyclones) are based on numerical simulations (Della-Marta et al., 2010; Osinski et al., 2016; Schwierz et al., 2010) and are not intensively validated regarding spatial dependence.

To answer our questions, we start with topics of extreme value statistics in Sect. 2, where we recall the concept of max-stability for single random variables, bivariate dependence structures (copulas), and random fields. We also extend Schlather's (2002) first theorem with a focus on spatial dependence. The more recent approaches of area functions (Raschke, 2013) and survival functions (Jung and Schindler, 2019) of local event intensities within a region are implemented therein. In Sect. 3, we derive the CRP from the concept of pseudo-polar coordinates of extreme value statistics and explain its testability, possibility of scaling, and corresponding risk estimate. Subsequently, in Sect. 4, we apply the new approaches to winter storms (extratropical cyclones) over Germany to demonstrate their potential. This application implies several elements of conventional statistics, which are explained in Sect. 5. Finally, we summarize and discuss our results and provide an outlook in Sect. 6. Some stochastic and statistical details are presented in the Supplement and Supplement data. In the entire paper, we must consider several stochastic relations. Therefore, the same mathematical symbol can have different meanings in different subsections. We also expect that the reader is familiar with statistical and stochastic concepts such as statistical significance, goodness-of-fit tests, random fields, and Poisson (point) processes (Upton and Cook, 2008).

2 Max-stability in statistics and stochastic

2.1 The univariate case

Before introducing the CRP and its properties, we discuss and extend the concept of max-stability in extreme value statistics, with a focus on random processes and fields. Max-stability has its origin in univariate statistics. The cumulative distribution function (CDF) Fn(x) of the maximum Xn = Max(X1, ..., Xn) of n independent and identically distributed (iid) random variables Xi with CDF F(x) (for the non-exceedance probability Pr(X ≤ x)) is

Fn(x) = F(x)^n.  (1)

A CDF F(x) is max-stable if the linearly transformed maximum (with parameters an and bn) has the same distribution (Coles, 2001, Def. 3.1):

Fn(an x + bn) = F(an x + bn)^n = F(x).  (2)

The Fréchet distribution (Beirlant et al., 2004, Table 2.1) is such a max-stable distribution, also called an extreme value distribution, with the following CDF:

G(x) = exp(−1/x^α), x ≥ 0, α > 0.  (3)

For the unit Fréchet distribution, the parameter is α = 1, and the transformation parameters are bn = 0 and an = n. Most distribution types are not max-stable, but their distribution of maxima (Eq. 1) converges to an extreme value distribution with increasing sample size n, called the block size in this context (Beirlant et al., 2004, Sect. 3). We can only refer to some of a very high number of corresponding publications (e.g., De Haan and Ferreira, 2007; Falk et al., 2011). Coles (2001) gives a good overview for practitioners.

2.2 Max-stable copulas

It is also well known that a bivariate CDF F(x, y) can be replaced by a copula C(u, v) and the marginal CDFs Fx(x) and Fy(y):

F(x, y) = C(Fx(x), Fy(y)) = Pr(X ≤ x, Y ≤ y).  (4)

The copula approach represents a basic distinction between the marginal distributions and the dependence structure; it was introduced by Sklar (1959). As there are different univariate distributions (types), there are different copulas (types). Mari and Kotz (2001) present a good overview of copulas, their construction principles, and different views on dependence. Max-stability is also a property of some copulas, called max-stable copulas or extreme copulas. A max-stable copula remains the same for pairs of component-wise maxima (Xn, Yn) as it was already for the underlying pairs (X, Y); the copula parameters, including dependence measures such as Kendall's (1938) rank correlation, are equal. The formal definition is (Dey et al., 2016, Sect. 2.3)

C(u, v) = C^n(u^(1/n), v^(1/n)).  (5)

2.3 Max-stability of stochastic processes

The spatial extension of the bivariate situation and corresponding distribution is the random field Z(x) at points x in the space R^d with d dimensions (e.g., Schlather, 2002). In our application, R^2 is the geographical space, and x is the corresponding coordinate vector. At one point/site x in R^d, Fx(z) is the marginal distribution of the local random variable Z. There are various differentiations and variants such as (non-)stationarity or (non-)homogeneity. A max-stable random field has max-stable marginal distributions, and the copulas between two margins are also max-stable. Schlather (2002) has formulated and proved a construction of a max-stable random field (we cite his first theorem with nearly the same notation).

Theorem 1: Let Y be a measurable random function and μ = E[∫_{R^d} max{0, Y(x)} dx] ∈ (0, ∞). Let Π be a Poisson process on R^d × (0, ∞) with intensity measure dΛ(y, s) = μ^(−1) dy s^(−2) ds, and let Y_{y,s} be i.i.d. copies of Y; then

Z(x) = sup_{(y,s)∈Π} s Y_{y,s}(x − y) = sup_{(y,s)∈Π} s max{Y_{y,s}(x − y), 0}  (6)

is a stationary max-stable process with unit Fréchet margins.

Extreme value statistics is interested in the max-stable dependence structure (copula) between the margins, the unit Fréchet distributed random variables Z at fixed points x in the space R^d. From the perspective of NatCat modeling in the geographical space R^2 and with Y(x) ≥ 0, the entire generating process is interesting. The Poisson (point) process Π represents all hazard events (e.g., storms) of a unit period such as a hazard season or a year; it has two parts, s and y. The point events s on (0, ∞) are a stochastic event magnitude and scale the field of local event intensity s_x(x):

s_x(x) = s Y_{y,s}(x − y),  (7)

which represents all point events s_x(x) at sites x. The random coordinate y is a kind of epicenter in the meaning of
NatCat with the (tendentiously) highest local event intensity such as the maximum wind speed, the maximum hail stone diameter, or the peaks of earthquake ground accelerations. The copied random function Y(x) determines the pattern of a single random event in the space R^d. Y(x) or its local expectation converges to 0 or is 0 if the magnitude ||x|| of the coordinate vector converges to infinity due to the measurability condition in Theorem 1. This also applies to NatCat events with limited geographical extent.

Schlather (2002) has demonstrated the flexibility of his construction by presenting realizations of maximum fields for different variants of Y(x). Its measurability condition is fulfilled by classical probability density functions (PDFs, the first derivative of the CDF; Coles, 2001, Sect. 2.2) of random variables. For instance, Smith (1990, an unpublished and frequently cited paper) used the PDF of the normal distribution. We present some examples of the random function Y(x) in the Supplement, Sect. 4, to illustrate the universality of the approach. Y(x) can also imply random parameters such as variants of the standard deviation of the applied PDF, and it can be combined with a random field.

Both s and s_x(x) with fixed x are point events of Poisson processes with intensity s^(−2) ds. This is the expected point density and determines the exceedance frequency. The latter is the expected number of point events s_x(x) > z and s > z:

Λ(z) = ∫_z^∞ s^(−2) ds = 1/z.  (8)

The entire construction of Theorem 1 is also a kind of shot noise field according to the definitions of Dombry (2012). Furthermore, Schlather (2002) has also published a construction of a max-stable random field without a random function but with a stationary random field. The logarithmic variant of Theorem 1 (the logarithm of Eqs. 6 and 7) also results in a max-stable random field; however, the marginal maxima are unit Gumbel distributed, and Eq. (8) would be an exponential function. The Brown–Resnick process – well known in extreme value statistics (e.g., Engelke et al., 2011) – generates a max-stable random field with such unit Gumbel distributions as a result of random walk processes (Pearson, 1905). It is implicitly a construction according to Theorem 1, as for the exponential transformation (the inverse of the logarithmic transformation), and the random walk with drift is the random function of Theorem 1. The origin of a Brown–Resnick process in R^d can be fixed but can also be a random coordinate as y is in Theorem 1.

The construction of Theorem 1 is already used to model natural hazards in the geographical space. Smith (1990) has applied the bivariate normal distribution as Y(x) in a rainstorm modeling. The Brown–Resnick process has already been applied to river floods (Asadi et al., 2015). Blanchet and Davison's (2011) model for snow depth and Raschke et al.'s (2011) model for winter storms, both in Switzerland, are also max-stable. There are also similarities to conventional hazard models. Punge et al.'s (2014) hail simulation includes a maximum hail stone diameter that acts like ln(s) in Eqs. (6) and (7). Raschke (2013) already stated the similarity between earthquake ground motion models and Schlather's construction. However, the earthquake magnitude can have a wider influence on the geographical event pattern than simple scaling. This was one of our motivations to extend and generalize Schlather's construction (Eq. 7) with dimension d of R^d:

s_x(x) = s^(1+β) Y_{y,s}(((1 + β) s^(−β))^(−1/d) (x − y)), β > −1,  (9)

and for the corresponding field of maxima (Eq. 6), we write

Z(x) = sup_{(y,s)∈Π} s^(1+β) Y_{y,s}(((1 + β) s^(−β))^(−1/d) (x − y)), β > −1.  (10)

As we show in the Supplement, Sect. 2, the marginal Poisson processes s_x(x) in Eq. (9) have the same exceedance frequency (Eq. 8) as Eq. (7). Correspondingly, Z(x) in Eq. (10) is also unit Fréchet distributed as in Eq. (6). Schlather's construction is the special case of Eqs. (9) and (10) with β = 0; only in this case do Eqs. (9) and (10) imply max-stability of the spatial dependence, which is what we discuss in the following section.

2.4 Spatial characteristics and dependence

We now illustrate spatial max-stability and its absence by examples of Eqs. (9) and (10) with the standard normal PDF as the random function Y(x) in a one-dimensional parameter space R^(d=1). For this purpose, we apply the simulation approach of Schlather (2002) and generate random events within a range (−10, 10) for local event intensities within the region/range (−4, 4) in R^1 by a Monte Carlo simulation. According to Schlather's procedure, which processes a series of random numbers from a (pseudo-)random generator, only the events for large s are simulated; this implies incompleteness for smaller events. This does not significantly affect the simulated field Z(x) of maxima. However, we can only consider this simulation for β ≥ 0 in Eqs. (9) and (10) since the edge effects increase for increasing s if β < 0. In Fig. 1a, we show fields for one realization Π of Schlather's theorem (n = 1, equivalent to 1 year or one season in NatCat modeling) for the max-stable case with β = 0 in Eqs. (9) and (10). With the same series of random numbers, we generate fields of n = 100 realizations of Π in Fig. 1b. It has the same pattern as n = 1 and is the same when we linearly transform the local intensities s_x by division by n = 100. The entire generating processes are max-stable, just as the resulting marginals and the association/dependence between the marginals are. In contrast to this total max-stability, the example with β = 0.1 results in different patterns for n = 1 and n = 100 in Fig. 1c and d. The shape of the event fields gets sharper for larger s; only the marginals are max-stable, not their spatial relations.
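The single-event fields of Eq. (9) can be sketched numerically. The snippet below is a minimal illustration, not the paper's simulation code: it assumes the reconstructed spatial rescaling factor ((1 + β) s^(−β))^(−1/d) as written above, uses the standard normal PDF as Y(x) (as in Fig. 1), and measures event sharpness by a hypothetical half-maximum width. For β = 0 an event field is simply a scaled copy of Y, while for β > 0 stronger events become spatially sharper.

```python
import numpy as np

def phi(u):
    """Standard normal PDF, used here as the deterministic shape Y(x)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def event_field(s, y, x, beta, d=1):
    """Local intensity field s_x(x) of one event per the extended
    construction (Eq. 9): s^(1+beta) * Y(((1+beta) s^(-beta))^(-1/d) (x - y))."""
    scale = ((1.0 + beta) * s**(-beta)) ** (-1.0 / d)
    return s**(1.0 + beta) * phi(scale * (x - y))

def half_width(field, x):
    """Width of the region where the field exceeds half its maximum,
    a simple proxy for the spatial extent of the event."""
    above = field >= 0.5 * field.max()
    return np.ptp(x[above])

# For beta = 0 (max-stable case), s only rescales the event field;
# for beta = 0.1, a stronger event (larger s) is spatially sharper.
x = np.linspace(-4.0, 4.0, 801)
w_weak = half_width(event_field(1.0, 0.0, x, beta=0.1), x)
w_strong = half_width(event_field(100.0, 0.0, x, beta=0.1), x)
```

This reproduces only the qualitative behavior described for Fig. 1: with β = 0, dividing the field of the 100-fold magnitude by 100 recovers the weak event exactly, whereas with β = 0.1 the normalized shapes differ.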
Figure 1. Examples of simulated fields of local event intensities and the enveloping field of maxima (bold green line) generated with the standard normal PDF as Y(x) in Eqs. (6), (7), (9), and (10) and the same series of numbers from the pseudo-random generator: (a) max-stable and n = 1, (b) max-stable and n = 100, (c) non-max-stable and n = 1, and (d) non-max-stable and n = 100. The strongest event has a broken red line.

Figure 2. Spatial dependence in relation to the distance, measured by Kendall's τ: (a) max-stable fields of Fig. 1, (b) non-max-stable fields of Fig. 1, and (c) limit cases.

To illustrate the effect on spatial dependence quantitatively, we have generated local maxima Z(x) from Eq. (10) by a Monte Carlo simulation with 100 000 repetitions and computed the corresponding dependence measure Kendall's τ (Kendall, 1938; Mari and Kotz, 2001, Sect. 6.2.6). As depicted in Fig. 2a and b, the functions are the same if β = 0 and differ if β = 0.1; the dependence is decreasing with increasing n if β > 0. In Fig. 2c, the functions are shown for the limit cases of full dependence, with the same value of s_x(x) at each point x, and full independence, with s_x(x) = 0 everywhere except one point.

Beside our extension of Schlather's theorem, we also consider a more recent approach from NatCat research to understand the spatial characteristics. Raschke (2013) described an earthquake event by its area function for the peak ground accelerations. This is a cumulative function and measures the set of points in the geographical space (the area) with an event intensity higher than the argument of the function. The area function is limited here to a region and is normalized as follows (u and l symbolize the region's bounds, the integral in the denominator is the area of the region in R^2, and I_A is an indicator function):

A(z) = ∫_l^u I_A(s_x(x) > z) dx / ∫_l^u dx.  (11)

It is now like a survival function of a random variable (decreasing with the value of the function between 0 and 1), which describes the exceedance probability in contrast to a CDF for the non-exceedance probability (Upton and Cook, 2008). Jung and Schindler (2019) have already applied such aggregating functions to German winter storm events and call them explicitly a survival function. However, not every normalized aggregating decreasing function is based on an actual random variable. Moreover, survival functions are not used in statistics to describe regions of random fields or random functions as far as we know. Nonetheless, we use the area function (Eq. 11) to characterize and research the spatiality of the event field s_x(x) in a defined region. As an example, the area function for the strongest events in Fig. 1 is shown in Fig. 3a. The differences between the variants n = 1 versus n = 100 and β = 0 versus β = 0.1 correspond with the differences between these events in Fig. 1. In Fig. 3b, the limit cases are depicted to illustrate the underlying link between the area function and spatial dependence.

We also use the parameters of a random variable Z with PDF f(x) and CDF F(z), as well as the survival function F̄(z) = 1 − F(z), to characterize our area function. These parameters are the expectation E[Z] (estimated by the sample mean/average), the variance Var[Z], the standard deviation SD[Z] (the square root of the variance), and the coefficient of variation (CV) Cv[Z] (Coles, 2001, Sect. 2.2; Upton and Cook, 2008) with
E[Z] = ∫_{−∞}^{∞} z f(z) dz = ∫_∞^0 z dF̄(z),
Var[Z] = ∫_∞^0 (z − E[Z])^2 dF̄(z), SD[Z] = √Var[Z],
CV = SD[Z]/E[Z].  (12)

According to Eq. (12), any scaling of Z by a factor S > 0 results in a proportional scaling of the expectation and standard deviation in Eq. (12), and the CV remains constant. Correspondingly, the random magnitude s in Eqs. (9) and (10) only scales the field s_x(x) in the max-stable case with β = 0 and influences the expectation of A(z) but not the CV. Thus, the CV is independent from the expectation. This does not apply to the non-max-stable case with β ≠ 0 in Eqs. (9) and (10). These different behaviors are detectable for the examples of Fig. 1b and d in Fig. 3c and d. For the max-stable case, the scale/slope parameter of the linearized regression function does not differ significantly from 0 according to the t test (Fahrmeir et al., 2013, Sect. 3.3). For the non-max-stable case, the regression function is statistically significant, with a p value of 0.00. Linearization is provided by the logarithm of the CV and the expectation/average. For completeness, the full dependence case of Fig. 3b corresponds with a CV of 0.

As per Sect. 2, Schlather's first theorem has parallels to NatCat models, is used already in hazard models, and was extended here to the non-max-stable case regarding the spatial dependence and characteristics. A statistical indication for max-stability is the independence of the spatial dependence measure from the block size (e.g., 1 versus 10 years) and the independence between the CV and the expectation of the area function (Eq. 11). Otherwise, non-max-stability is indicated.

Figure 3. The area function and corresponding characteristics: (a) area function of the biggest event of Fig. 1, (b) area functions for the limit cases (examples), and (c) relation of the CV to the average for the max-stable case of Fig. 1b and (d) for the non-max-stable case of Fig. 1d (for events with average > 1, the distance between the support points is 0.1 for the computation of the average in the region [−4, 4]).

3 The combined return period (CRP)

3.1 The stochastic derivation

Let the point event Y_{x,i} be the local intensity at site x of a hazard event, and let i be a member of the set of all events of a defined unit period such as a year, hazard season, or half season. This local event intensity might be the maximum river discharge of a flood, the peak ground acceleration of an earthquake, or the maximum wind gust of a windstorm event. The entire number of events with Y_{x,i} > y during the unit period is K = Σ_{i=1}^∞ I_A(Y_{x,i} > y). K is (at least approximately) a Poisson-distributed (Upton and Cook, 2008) discrete random variable with an expectation – the expected exceedance frequency – that is the local hazard function in a NatCat model (this is not the hazard function/hazard rate of statistical survival analysis; Upton and Cook, 2008):

Λ_y(y) = E[K].  (13)

This is the bijective frequency function and the local hazard curve. Its reciprocal determines the hazard curve for the RP:

T_y(y) = 1/Λ_y(y) = 1/E[K].  (14)

As Y_x is a point event, its RP T = T_y(Y_x) is also a point event of a point process with a frequency function according to Eq. (14) but now with the argument/threshold variable z since the scale unit is changed:

Λ_T(z) = 1/z.  (15)

Since Eq. (15) is the same as Eq. (8), Schlather's theorem and our extensions directly apply to the RP with T = s_x in Eqs. (7) and (9). For completeness, the marginal maxima have a CDF for n unit periods (a unit Fréchet distribution for n = 1 according to Eq. 3):

G_n(z) = exp(−nΛ_T(z)) = exp(−n/z).  (16)

This is applicable because the probability of non-exceedance for level z of the block maxima is the same probability as per
which no events occur with T > z, which is determined by the Poisson distribution; Eq. (6) also implies this link, and Coles (2001, p. 249, "y_p") has also mentioned this. The same applies to the relation between frequency and maxima of local event intensity.

Schlather's theorem is also based on and implies the concept of pseudo-polar coordinates. According to De Haan (1984), and explained well by Coles (2001, Sect. 8.3.2), two linked max-stable point processes with expected exceedance frequency (Eq. 15) and point events T_1 and T_2 are also represented by pseudo-polar coordinates with radius R and angle V:

R = T_1 + T_2, \quad V = \frac{T_1}{T_1 + T_2} \quad \Longleftrightarrow \quad T_1 = RV, \quad T_2 = R(1 - V) = T_1 \frac{1 - V}{V}. \quad (17)

As we describe in the Supplement, Sect. 1, the expectation of (1 − V)/V is 1, and for the conditional expectation of an unknown RP T_2 with known T_1 the following applies (association is provided):

E[T_2 \mid T_1] = T_1. \quad (18)

The interest in extreme value theory and statistics (Coles, 2001, Sect. 3.8; Beirlant et al., 2004, Sect. 8.2.3; Falk et al., 2011, Sect. 4.2) is focused on the distribution of pseudo-angle V with CDF H(z). As Coles (2001) writes, "the angular spread of points of N [the entire point processes] is determined by H, and is independent of radial distance [R]"; angle and radius occur independently from each other, and H determines the copula between two marginal maxima Z(x) in Theorem 1.

According to Coles (2001, Sect. 3.8), the pseudo-radius R in Eq. (17) is a point event of a Poisson process with frequency Λ(z) = 2/z – the double of Eq. (15). This means the average of two RPs, T_1 and T_2, results in a combined return period (CRP) T_c,

T_c = \frac{T_1 + T_2}{2}, \quad (19)

with exceedance frequency function (Eqs. 8 and 15). We do not have a mathematical proof that Eqs. (18) and (19) also apply to non-max-stable-associated point processes. However, max-stable and non-max-stable cases have the same limits: full dependence (T_1 = T_2) and no dependence/full independence (T_1 = 0 if T_2 > 0 and vice versa; T = 0 represents the lack of a local event). Therefore, Eq. (19) should also apply to the non-max-stable case between these limits. This can be heuristically validated, as we demonstrate by an example in the Supplement, Sect. 3.

More than one RP can be averaged since the averaging of two RPs can be done in serial (and the pseudo-polar coordinates are also applied to more than two marginal processes). Serial averaging (averaging the last result with a further RP) also implies a weighting; the first considered RPs would be less weighted than the last in the final CRP. The general formulation of averaging of RPs with weights w is

T_c = \frac{\sum_{i=1}^{n} T_i w_i}{\sum_{i=1}^{n} w_i}. \quad (20)

The corresponding continuous version within the region's bounds u and l in space R^d is

T_c = \frac{\int_l^u T(x) w(x)\,dx}{\int_l^u w(x)\,dx}. \quad (21)

If w(x) = 1 applies in Eq. (21), then the denominator is the area of the region. Furthermore, the CRP T_c is the expectation of the area function (Eq. 11). This also applies to other weightings if we consider them in the area function, here written for RP T(x),

A(z) = \frac{\int_l^u w(x)\, I_A(T(x) \geq z)\,dx}{\int_l^u w(x)\,dx}, \quad (22)

with empirical version for n measuring stations in the analyzed region:

A(z) = \frac{\sum_{i=1}^{n} w_i\, I_A(T_i \geq z)}{\sum_{i=1}^{n} w_i}. \quad (23)

The weighting, especially the empirical one, can be used in hazard research to compensate for an inhomogeneous geographical distribution of measurement stations or a different focus than the covered geographical area, such as the inhomogeneous distribution of exposed values or facilities in NatCat research. It has the same effect on the area function as a distortion of the geographical space as used by Papalexiou et al. (2021). Weighted or not, CRP and CV are parameters of the area function.

3.2 Testability

Before the CRP is applied in stochastic NatCat modeling, it should be tested statistically to validate the appropriateness. A sample of CRPs can be tested by a comparison of its exceedance frequency function (Eq. 15) and its empirical variant. Therein, the empirical exceedance frequency of the largest CRP in the sample is the reciprocal of the length of the observation period. The second largest CRP is hence associated with twice the exceedance frequency of the largest CRP and so on. It is the same as for the empirical exceedance frequency of earthquakes (e.g., the well-known Gutenberg–Richter relation in seismology; Gutenberg and
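The discrete averaging of Eq. (20) and the empirical area function of Eq. (23) can be sketched in a few lines. This is only an illustration, not code from the paper; the local RPs and weights are invented:

```python
def combined_return_period(T, w):
    """Weighted average of local return periods (Eq. 20)."""
    return sum(t * wi for t, wi in zip(T, w)) / sum(w)

def area_function(T, w, z):
    """Empirical weighted area function A(z) (Eq. 23): the weighted
    share of stations whose local RP reaches at least z."""
    return sum(wi for t, wi in zip(T, w) if t >= z) / sum(w)

# Hypothetical local RPs (years) at four stations, equal (area) weights
T = [5.0, 10.0, 20.0, 40.0]
w = [1.0, 1.0, 1.0, 1.0]
print(combined_return_period(T, w))  # 18.75
print(area_function(T, w, 10.0))     # 0.75
```

With capital weighting, w_i would instead hold the exposure value assigned to station i; the functions are unchanged.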
Richter, 1956). However, not all small events are recorded; the sample is thinned and incomplete. This completeness issue is well known for earthquakes and is less important here if only the distribution (Eq. 16) of maximum CRPs is tested. There are several goodness-of-fit tests (Stephens, 1986, Sect. 4.4) for the case of a known distribution. The Kolmogorov–Smirnov test is a popular variant.

3.3 The scaling property of CRP

The CRP also offers the opportunity of stochastic scaling. The CRP T_c and all n local RPs T_i in Eq. (20) (and T(z) in Eq. 21) are scaled by a factor S > 0:

T_{cs} = S\,T_c = \frac{\sum_{i=1}^{n} S\,T_i w_i}{\sum_{i=1}^{n} w_i}, \quad T_{s,i} = S\,T_i. \quad (24)

This means for the pseudo-polar coordinates in Eq. (17), which apply to the max-stable case, that

R_s = S\,T_1 + S\,T_2 = S\,R, \quad V_s = \frac{T_{s,1}}{T_{s,1} + T_{s,2}} = \frac{S\,T_1}{S\,(T_1 + T_2)} = V. \quad (25)

The pseudo-angle V is not changed, as expected, since pseudo-radius and pseudo-angle are independent in the pseudo-polar coordinates for the max-stable case (Sect. 3.1). This also means that a scaling must be more complex if there is non-max-stability. We cannot offer a general scaling method for this situation; however, it must consider/reproduce the pattern of the relation CV versus CRP (example in Fig. 3d) adequately. Irrespective of this, the corresponding event field of local intensities (e.g., maximum wind gust speed) can be computed for the scaled local RPs via the inverse of the local hazard function: T(z) in Eq. (14) or Λ(z) in Eq. (13).

3.4 Risk estimates by scaling and averaging

The main goal of a NatCat risk analysis is the estimate of a risk curve (Mitchell-Wallace et al., 2017, Sect. 1), the bijective functional of event loss in a region and the corresponding RP, which is called the event loss return period (ELRP) T_E. As mentioned, there are two approaches for such estimates with corresponding pros and cons.

We introduce an alternative method. Under the assumption of max-stability between ELRP T_E and CRP T_C, according to Eq. (18) with T_1 = T_C and T_2 = T_E, the expectation of an unknown ELRP T_E is the CRP T_C of the local event intensities. This means that the CRP is an estimate of the ELRP. We can average the CRP of many events with the same event loss to get a good estimate of the ELRP. However, observations of events with the same loss are not available. Nonetheless, we can exploit the stochastic scaling property of CRP to rescale the local intensity observations of historical events to get the required information. The modeled event loss L_E is the sum of the product of local loss ratio L_R, determined by local event intensity y_{x,i}, and local exposure value E_i over all sites i (Klawa and Ulbrich, 2003; Della-Marta et al., 2010), here

L_E = \sum_{i=1}^{n} L_{R,i}(y_{x,i})\, E_i, \quad (26)

with the local vulnerability function L_{R,i}(y_{x,i}). To get the event loss for the scaled event, the observed y_{x,i} is replaced by

y_{xs,i} = \Lambda_{y,i}^{-1}\!\left(\frac{\Lambda_{y,i}(y_{x,i})}{S}\right) = T_{y,i}^{-1}\!\left(S\,T_{y,i}(y_{x,i})\right), \quad (27)

with the local hazard function (Eqs. 13 and 14) and its inverse function. The scaling factor S in Eq. (27) is the same for all sites/locations i, as it is in Eq. (24) for the CRP. This factor S must be adjusted iteratively until the result of Eq. (26) converges to the desired event loss. The scheme in Fig. 4a includes all elements and relations of the scaling approach. Therein, the numerical determination in the scaling scheme has only one direction, from scaled CRP to the event loss. The idea of CRP averaging is also illustrated by Fig. 4b. The standard error of the averaging is the same as for the estimate of an expectation by the sample mean (Upton and Cook, 2008, keyword central limit theorem).

According to the delta method (Coles, 2001, Sect. 2.6.4), statistical estimates and their standard errors can be approximately transferred into another parameter estimation and corresponding standard error by the determined transfer function and its derivatives. A condition of this linear error transfer is a relatively small standard error of the original estimates. The delta method could be used to compute the reciprocal of the ELRP – the exceedance frequency and corresponding standard error – or the exceedance frequency is computed directly by averaging the reciprocals of scaled CRPs, and the transfer proxy acts implicitly. We also apply the idea of a linear transfer proxy when we average the modeled event loss for the historical events being scaled to the same defined CRP. The unknown sample of ELRPs, which represents the ELRP's distribution for a fixed CRP, is implicitly transferred to a sample of event losses. If the proxies perform well, the difference to the risk estimates via CRP averaging should be small.

There is a further chain of thoughts as an argument for the different variants of averaging. The scaling implies subsets of intensity fields of all possible intensity fields. The links between the fields of a subset are determined by the scaling of their CRPs. Correspondingly, every subset generates a risk curve with CRP (now also an ELRP) versus event loss. We also assume a certain unknown probability per subset that is applied if all these subsets generate the entire risk curve via an integral like the expectation of a random variable (Eq. 12). The corresponding empirical variant (estimator) is the averaging. However, we can average three values: CRP, exceedance frequency, or event loss.
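The scaling loop described above (Eqs. 24, 26 and 27) amounts to a one-dimensional search for S. The following is a minimal sketch, assuming a Gumbel local hazard as in Eq. (30), under which the inverse hazard gives T^{-1}(S·T(y)) = y + σ ln S; the station parameters, exposures, and the cubic vulnerability function are hypothetical stand-ins:

```python
import math

def scaled_intensity(y, sigma, S):
    """Eq. (27) under a Gumbel local hazard (Eq. 30): with
    T(y) = exp((y - mu)/sigma), T^{-1}(S * T(y)) = y + sigma * ln(S)."""
    return y + sigma * math.log(S)

def event_loss(ys, exposures, vuln):
    """Eq. (26): local loss ratio times local exposure, summed over sites."""
    return sum(vuln(y) * e for y, e in zip(ys, exposures))

def scale_to_target_loss(y_obs, sigmas, exposures, vuln, target,
                         s_lo=1e-6, s_hi=1e6, iters=200):
    """Bisect (on a log scale) the common factor S of Eq. (24) until the
    modeled loss of the scaled intensity field matches the target loss."""
    def loss(S):
        ys = [scaled_intensity(y, sg, S) for y, sg in zip(y_obs, sigmas)]
        return event_loss(ys, exposures, vuln)
    for _ in range(iters):
        s_mid = math.sqrt(s_lo * s_hi)
        if loss(s_mid) < target:
            s_lo = s_mid
        else:
            s_hi = s_mid
    return math.sqrt(s_lo * s_hi)

# Hypothetical event: two stations, cubic excess-over-threshold vulnerability
vuln = lambda y: 1e-8 * max(0.0, y - 25.0) ** 3
S = scale_to_target_loss([32.0, 30.0], [3.0, 2.5], [1e9, 5e8], vuln, 8110.0)
print(S > 1.0)  # True: the target is twice the unscaled modeled loss
```

Because the modeled loss grows monotonically in S, simple bisection suffices; the scheme of Fig. 4a likewise runs in one direction only, from scaled CRP to event loss.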
Figure 4. Schemes of the scaling approach: (a) elements and relations, (b) schematic example for estimation of RP of event loss by averaging of CRP.

All mentioned estimators for the risk curve via scaling and averaging over n events are

\hat{T}_E(L_E) = \frac{1}{n} \sum_{i=1}^{n} T_{cs,i}(L_E), \quad
\hat{\Lambda}_E(L_E) = \frac{1}{n} \sum_{i=1}^{n} 1/T_{cs,i}(L_E), \quad
\hat{L}_E(T_E) = \frac{1}{n} \sum_{i=1}^{n} L_{E,i}(T_{cs} = T_E). \quad (28)

The right side of the equations in Eq. (28) implies actual values, which can be and are being replaced by estimates. Corresponding uncertainties must be considered in the final error quantification.

We draw attention to the fact that the explained scaling does not change the CV of Eq. (23); this implies independence between CRP and CV (Sect. 2.4). Therefore, the presented scaling only applies to the max-stable case of local hazard. For the non-max-stable case, the scaling factor S in Eq. (27) must be replaced event-wise by S_i, which reproduces the observed relation between CRP and CV. An example without max-stability was already shown in Fig. 3d.

4 Application to German winter storms

4.1 Overview about data and analysis

We have selected the peril of winter storms (also called extratropical cyclones or winter windstorms) over Germany to demonstrate the opportunities of the CRP because of good data access and since we are familiar with this peril (Raschke et al., 2011; Raschke, 2015a). Our analysis follows the scheme in Fig. 4a, important results are presented in the subsequent sections, and the technical details are explained in Sect. 5. At first, we provide an overview.

We analyzed 57 winter storms over 20 years from autumn 1999 to spring 2019 (Supplement data, Tables 1 and 2) to validate the CRP approach. Different references (Klawa and Ulbrich, 2003; Gesamtverband Deutscher Versicherer, 2019; Deutsche Rück, 2020) have been considered to select the time window per event. In our definition, the winter storm season is from September to April of the subsequent year. It accepts a certain opportunity of contamination of the sample of block maxima by extremes from convective windstorm events and a certain opportunity of incompleteness from extratropical cyclones outside our season definition. The term winter storm is only based on the high frequency of extratropical cyclones during the winter. The seasonal maximum is also the annual maximum of this peril.

The maxima per half season (bisected by the turn of the year) are analyzed to double the sample size and to increase estimation precision. The appropriateness of this sampling is discussed in Sect. 5.1. We considered records of wind stations in Germany of the German meteorological service (DWD, 2020; FX_MN003, a daily maximum of wind peaks (m s−1), usually wind gust speed) that include minimum record completeness of 90 % for analyzed storms, at least 90 % completeness for the entire observation period, and minimum 55 % completeness per half season. Therefore, we only consider 141 of 338 DWD wind stations (Supplement data, Table 3). We think this is a good balance between large sample size and a high level of record completeness.

The intensity field per event is represented by the maximum wind gust for the corresponding time window of the event at each considered wind station. The local RP per event is computed by a hazard model per wind station. This is an implicit part of the estimated extreme value distribution per station, as explained in Sect. 5.1. The resulting CRPs per event and corresponding statistical tests are presented in the following Sect. 4.2. We have considered two weightings per station, capital and area. Both are computed per wind station by assigning the grid cells with capital data of the
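For one fixed event-loss level, the first two estimators of Eq. (28) reduce to arithmetic means over the scaled CRPs and over their reciprocals. A small numeric sketch with invented CRPs shows why the variants differ:

```python
def risk_curve_estimates(Tcs):
    """First two estimators of Eq. (28) at one event-loss level: the mean
    of the scaled CRPs, and the mean of their reciprocals (the estimated
    exceedance frequency)."""
    n = len(Tcs)
    T_hat = sum(Tcs) / n
    lam_hat = sum(1.0 / t for t in Tcs) / n
    return T_hat, lam_hat

# Hypothetical scaled CRPs (years) of three historical events
T_hat, lam_hat = risk_curve_estimates([80.0, 100.0, 125.0])
print(round(T_hat, 2))          # 101.67 (arithmetic mean)
print(round(1.0 / lam_hat, 2))  # 98.36 (harmonic mean, always smaller)
```

Averaging reciprocals yields the harmonic mean of the RPs, which is always at most the arithmetic mean; the third estimator of Eq. (28) averages modeled event losses at a fixed CRP in the same manner.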
Global Assessment Report (GAR data; UNISDR, 2015) via the smallest distance to a wind station. We also use these capital data to spatially distribute our assumed total insured sum of EUR 15.23 trillion for property exposure (residential building, content, commercial, industrial, agriculture, and business interruption) in Germany in 2018. This is based on Waisman's (2015) assumption for property insurance in Germany and is scaled to exposure year 2018 under consideration of inflation in the building industry (Statistisches Bundesamt, 2020) and the increasing building stock according to the German insurance union (GDV; GDV, 2019). It is confirmed by the assumptions of Perils AG (2021); however, their data product is not public. We also used loss data of the GDV (2019) for property insurance when we fitted the vulnerability parameters for the NatCat model. These event loss data of 16 storms during a period of 17 years are already scaled by GDV to exposure year 2018.

Figure 5. Results of the analysis: (a) comparison of area and capital-weighted CRPs, (b) comparison of theoretical and observed exceedance frequency of capital-weighted CRP, and (c) test of distribution of seasonal maxima of CRP.

The spatial characteristics are analyzed in Sect. 4.3 according to Sect. 2.4, focusing on the question of whether there is max-stability or not in the spatial dependence and characteristics. Finally, we present the estimated risk curve for the portfolio of the German insurance market in Sect. 4.4, including a comparison with previous estimates. Details of the vulnerability model are documented in Sect. 5.2. The concrete numerical steps, the applied methods to quantify the standard error of estimates, and the consideration of the results from vendor models are explained in Sect. 5.3–5.5, respectively.

4.2 The CRP of past events and validation

As announced, we have computed the CRP according to Eq. (20) with the wind gust peaks listed in the Supplement data, Table 2, and local hazard models according to Eq. (30). Our local hazard models are discussed in Sect. 5.1, and parameters are presented in the Supplement data, Table 4. An example of a complete CRP calculation is also provided therein (Table 8). We have considered two weightings for the CRP, a simple area weighting and a capital weighting (Supplement data, Table 3). In Fig. 5a, we compare the estimates, which do not differ so much; the approach is robust in the example. The most significant winter storm of the observation period is Kyrill, which occurred in 2007. It has CRPs of 16.97 ± 1.75 and 17.64 ± 1.81 years (area and capital). Both are around the middle of the estimated range of 15 to 20 years (Donat et al., 2011). Further estimates are listed in the Supplement data, Table 1.

In Fig. 5b, the results are validated according to Sect. 3.2. The empirical exceedance frequency matches well with the theoretical one for T_c ≥ 1.65. Small CRPs are affected by the incompleteness of our record list. In the medium range, the differences between the model and empiricism are not statistically significant. In detail, we observe 27 storms with T_c ≥ 1 within 20 years; 20 storms were expected. According to the Poisson distribution, the probability of 27 exceedances or more is 7.8 %. A two-sided test with α = 5 % would reject the model if this exceedance probability were 2.5 % or smaller.

The seasonal and annual maxima of CRP must follow a unit Fréchet distribution (α = 1 in Eq. 3) according to Eq. (16). We plot this and the empirical distribution in Fig. 5c. The Kolmogorov–Smirnov (KS) test (Stephens, 1986, Sect. 4.4) for the fully specified distribution model does not reject our model at the very high significance level of 25 % for the capital-weighted variant. Usually, only the level of 5 % is considered. This result should not be affected seriously by the absence of one (probably the smallest) maximum due to incompleteness issues. In summary, we state that the CRP offers a stable, testable, and robust method to stochastically quantify winter storms over Germany.

4.3 Spatial characteristics and dependence

As discussed in Sect. 2.4, the spatial characteristics are an important aspect from a stochastic perspective. Therefore, we have analyzed the relation between distance and dependence measure. We have applied Kendall's τ (Kendall, 1938; Mari and Kotz, 2001, Sect. 6.2.6) and show the dependence between the maxima of the half of a season and maxima of two seasons for 9870 pairs of stations in Fig. 6a. Since the sample size is relatively small, the spreading is strong; it is caused by estimation error.
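The Poisson argument above (27 storms with T_c ≥ 1 observed where 20 were expected) is easy to reproduce; the sketch below evaluates the Poisson tail directly:

```python
import math

def poisson_sf(k, lam):
    """P(N >= k) for N ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

# 27 storms with Tc >= 1 were observed within 20 years; 20 were expected
p = poisson_sf(27, 20.0)
print(round(p, 3))  # ≈ 0.078, the 7.8 % quoted in Sect. 4.2
```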
Figure 6. Spatial characteristics of winter storms over Germany: (a) estimated Kendall's τ versus distance, (b) differences between Kendall's τ for different block sizes, (c) estimated Kendall's τ versus distance for seasonal maxima in Germany and Switzerland (Raschke et al., 2011), (d) area functions for storm Kyrill with scaling to CRP 100 years, (e) relation CV to CRP of capital-weighted area functions, and (f) approximation of this relation by special stochastic scaling.

Furthermore, the differences between the estimates of Kendall's τ for maxima of one and two hazard seasons are almost perfectly normally distributed (CDF in Fig. 6b) and should be centered at 0 in case of max-stability (Fig. 2b). This does not apply, with a sample mean of 0.051 and a standard deviation of 0.182. The corresponding normally distributed confidence range for the expectation has a standard deviation of 0.002 and a probability of 0.00 that the actual expectation is 0 or smaller.

For completeness, we compare the current estimates of Kendall's τ with those for Switzerland from Raschke et al. (2011) in Fig. 6c. The spatial dependence is higher for Germany. A reason might be differences in the topology.

We have also computed the area functions and show examples in Fig. 6d for winter storm Kyrill. The different weightings result in similar area functions. Figure 6e plots the CRP and CV of all events. The regression analysis reveals the statistical dependence between CRP and CV. For the linearized regression function, the p value is 0.002 (t test; Fahrmeir et al., 2013, Sect. 3.3). Due to two statistical indications of non-max-stability, we develop a local scaling that considers the global scaling factor and the ratio between local RP and CRP for every event with loss information. In this way, we could reproduce the observed pattern (Fig. 6f). Details of this workaround are presented in the Supplement data, Sect. 7. The differences between the scaling variants (max-stable or not) for storm Kyrill do not seem to be strong (Fig. 6d).

4.4 The risk estimates

Before we estimate risk curves according to the approach of Sect. 3.4, we must estimate a vulnerability function (Eq. 31) which determines the local loss ratio L_R in the event loss aggregation (Eq. 26). First, we fit the scaling parameter on the event loss data of the General Association of German Insurers (GDV, 2019) for 16 historical events from 2002 to 2018, as plotted in Fig. 7a. The details of the vulnerability function and its parameter fit are explained in Sect. 5.2. Then, we use the vulnerability function in the three variants of risk curve estimates of Sect. 3.4 – averaging event loss, CRP, or its reciprocal, the exceedance frequency. Details of the numerical procedure are explained in Sect. 5.3, which corresponds with the scheme in Fig. 4a.

In Fig. 7b, the three estimated risk curves according to the three estimators in Eq. (28) are presented for max-stable scaling and differ little from each other, which indicates the robustness of our approach. The empiricism is represented by the historical event losses and their empirical RP (observation period of 17 years of GDV loss data) and capital-weighted CRP. In addition, we present the range of two standard errors of the estimates of loss averaging, which imply the simplest numerical procedure. Details of uncertainty quantification are explained in Sect. 5.4.

The differences between max-stable and non-max-stable scaling in the risk estimates are demonstrated in Fig. 7c. For smaller RP, no significant difference can be stated, in contrast to higher RP. This corresponds with the differences between
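Kendall's τ used in this section is, per station pair, the normalized surplus of concordant over discordant pairs of block maxima. A plain τ-a sketch (ties ignored; the gust values are invented):

```python
def sign(a):
    return (a > 0) - (a < 0)

def kendalls_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return 2.0 * s / (n * (n - 1))

# Invented seasonal gust maxima (m/s) at two nearby stations
print(kendalls_tau([28.1, 31.4, 25.9, 33.0], [27.5, 30.2, 24.8, 31.9]))  # 1.0
```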
the CV in relation to the CRP for max-stable and non-max-stable cases in Fig. 6f. These are also higher for higher CRP.

Figure 7. Estimates for insured losses from winter storms in Germany: (a) reported versus modeled event losses, (b) current risk curves and observations, and (c) influence of max-stable and non-max-stable scaling and comparison to scaled, previous estimates (Donat et al., 2011; Waisman, 2015; European Commission, 2014).

We also compare our results with previous estimates in Fig. 7c. For this purpose, we must scale these to provide comparability as well as possible. The relative risk curve of Donat et al. (2011) is scaled simply by our assumption for the total sum insured (TSI) for the exposure year 2018. The vendor models of Waisman (2015) are scaled by the average of the ratios between modeled and observed event losses from storm Kyrill since a scaling via TSI was not possible (uncertain market share and split between residential, commercial, and industry exposure). The result of the standard model of European Union (EU) regulations (European Commission, 2014), also known as Solvency II requirements, is also based on our TSI assumption, split into the CRESTA zones by the GAR data. The CRESTA zones (https://www.cresta.org/, last access: 12 October 2020) are an international standard in the insurance industry and correspond to the two-digit postcode zones in Germany.

The risk estimate of Donat et al. (2011) is based on a combination of frequency estimation and event loss distribution by the generalized Pareto distribution, which is fitted on a sample of modeled event losses for historical storms. The corresponding risk curve differs very much from other estimates and overestimates the risk of winter storms over Germany. The standard model of the EU only estimates the maximum event loss for an RP of 200 years; the estimated event loss is very high. The vendor models vary but have a similar course as our risk curves. The non-max-stable scaling is in the lower range of the vendor models, whereas the unrealistic max-stable scaling is more in the middle. The concrete names of the vendors can be found in Waisman's (2015) publication. The reader should be aware that the vendors might have updated their winter storm models for Germany in the meantime.

The major result of Sect. 5 is the successful demonstration that the CRP can be applied to estimate reasonable risk curves under controlled stochastic conditions. In addition, we have discovered the strong influence of the underlying dependence model (max-stable or not) and the corresponding spatial characteristics on loss estimates for higher ELRP.

5 Technical details of the application example

5.1 Modeling and estimation of local hazard

As mentioned, the maximum wind gusts of half seasons of winter storms (extratropical cyclones) – block maxima – have been analyzed. Therein, the generalized extreme value distribution (Beirlant et al., 2004, Sect. 5.1) is applied:

G(y) = \begin{cases} \exp\!\left(-\exp\!\left(-\frac{y-\mu}{\sigma}\right)\right), & \text{if } \gamma = 0, \\ \exp\!\left(-\left(1+\gamma\,\frac{y-\mu}{\sigma}\right)^{-1/\gamma}\right), & \text{if } \gamma \neq 0, \text{ with } y > \mu - \sigma/\gamma \text{ if } \gamma > 0, \text{ and } y < \mu - \sigma/\gamma \text{ if } \gamma < 0. \end{cases} \quad (29)

As discussed below, the Gumbel distribution (Gumbel, 1935, 1941), as the special case of Eq. (29) with extreme value index γ = 0, is an appropriate model. The scale parameter is σ, and the location parameter is µ. The local hazard function (Eqs. 13 and 14) can be derived directly from the estimated variant of Eq. (29) according to the link between extreme value distribution and exceedance frequency (Eq. 16) (the accent symbolizes the point estimation):

\hat{T}_y(y) = 1/\hat{\Lambda}_y(y) = \exp\!\left(\frac{y - \hat{\mu}}{\hat{\sigma}_{cor}}\right). \quad (30)

We apply the maximum likelihood (ML) method for the parameter estimation (Clarke, 1973; Coles, 2001, Sect. 2.6.3). The wind records' incompleteness per half season has been considered in the ML estimates by modifying the procedure as explained in the Supplement data, Sect. 5. A Monte Carlo simulation confirms the good performance of our modification. The biased estimate of σ for our sample size n = 40 was also detected, for which we considered σ̂_cor = σ̂/0.98 as the corrected estimation. Landwehr et al. (1979) have already stated such bias. In addition, the exceedance frequency is well estimated by Eq. (30), in contrast to the RP T̂. The latter
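Equations (29) (Gumbel case) and (30) can be checked numerically: the fitted distribution and the local hazard function are linked by G(y) = exp(−Λ(y)) with Λ(y) = 1/T(y) (Eq. 16). A sketch with invented station parameters:

```python
import math

def gumbel_cdf(y, mu, sigma):
    """Eq. (29) with gamma = 0: the Gumbel distribution."""
    return math.exp(-math.exp(-(y - mu) / sigma))

def local_rp(y, mu, sigma):
    """Eq. (30): estimated local return period of a gust speed y."""
    return math.exp((y - mu) / sigma)

mu, sigma = 25.0, 3.0  # invented station parameters (m/s)
y = 35.0
T = local_rp(y, mu, sigma)
# Link of Eq. (16): G(y) = exp(-Lambda(y)) with Lambda(y) = 1/T(y)
print(abs(gumbel_cdf(y, mu, sigma) - math.exp(-1.0 / T)) < 1e-12)  # True
```

The inverse of Eq. (30), y = µ̂ + σ̂_cor ln T, is what the scaling of Sect. 3.3 uses to turn scaled local RPs back into gust speeds.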
is strongly biased. We also corrected this as documented in the Supplement data, Sect. 6. The analyzed half-season maxima, record completeness, and parameter estimates are listed in the Supplement data, Tables 4–6.

We have validated the sampling of block maxima per half season. The opportunity of correlation between the first and second half-season maxima has been tested for significance level α = 5 % and hypothesis H0: uncorrelated samples. Around 6 % fail the test with Fisher's z transformation (Upton and Cook, 2008). This corresponds to the error of the first kind (Type I error, e.g., Lindsey, 1996, Sect. 7.2.3), the falsely rejected correct models. Therefore, we interpret the correlation as statistically insignificant. Similarly, the Kolmogorov–Smirnov homogeneity test rejects 4 % of the sample pairs for the period September to December and January to April (first half season to subsequent second half season) for a significance level of 5 %. There are no significant differences between the samples.

To optimize the intensity measure of the hazard model, we have considered the wind speed with powers 1, 1.5, and 2 as the local event intensity in a first fit of the Gumbel distribution by the maximum likelihood method. According to these, power 1.5 offers the best fit of wind gust data to the Gumbel distribution. Such wind measure variants were already suggested by Cook (1986) and Harris (1996).

We do not apply the generalized extreme value distribution in Eq. (29) with extreme value index γ ≠ 0 but the Gumbel case with γ = 0 for the following reasons. Different stochastic regimes γ < 0 and γ ≥ 0 for different wind stations imply fundamental physical differences: finite and infinite upper bounds of wind speed. Such fundamental differences between different wind stations would need reasonable explanations (especially for very low bounds versus infinite bounds). River discharges at different gauging stations could imply such physical differences since there are variants with laminar and turbulent streams or very different retention/storage capacities of catchment areas (e.g., Salazar et al., 2012). Similar significant physical differences do not exist for wind stations that are placed and operated under consideration of the rules of meteorology (World Meteorological Organization, 2008, Sect. 5.8.3) to provide homogeneous roughness conditions. Besides, we also found several statistical indications for our modeling. The Akaike and Bayesian information criteria (Lindsey, 1996; here over all stations) indicate that the Gumbel distribution is a better model than the variant with a higher degree of freedom. Furthermore, the share of rejected Gumbel distributions of the goodness-of-fit test (Stephens, 1986, Sect. 4.10) is, with 6 %, around the defined significance level of 5 % (the error of the first kind – falsely rejected correct models). We have also estimated γ for each station and got a sample of point estimates. The sample mean is 0.002, very close to γ = 0; this confirms our assumption. Moreover, the sample variance is 0.018, which is around the same as what we get for a large sample of estimates γ̂ for samples of Monte-Carlo-simulated and Gumbel-distributed random variables (n = 40). All statistics validate the Gumbel distribution.

To provide reproducible results, we also present a computational example for the CRP in Table 1 with reference to all needed equations and information. The entire calculation for storm Kyrill is presented in the Supplement data, Table 8.

5.2 Modeling and estimation of vulnerability

To quantify the loss ratio L_R at location (wind station) j and event i in the loss aggregation (Eq. 26), we use the approach of Klawa and Ulbrich (2003) for Germany with a certain modification. The event intensity is the maximum wind gust speed v. v_{98%} is the upper 2 % percentile of the empirical distribution of all local wind records. The relation with vulnerability parameter a_L is

L_{R,i,j} = a_L \max\{0, v_{i,j} - v_{98\%}\}^{3}. \quad (31)

Donat et al. (2011) have used a similar formulation but with an additional location parameter. This is discarded here since the loss ratio must be L_R = 0 for local wind speed v < v_{98%}. This is also a reason why a simple regression analysis (Fahrmeir et al., 2013) is not applied to estimate a_L. We formulate and use the estimator

\hat{a}_L = \frac{1}{k} \sum_{i=1}^{k} \frac{L_{E\,\mathrm{reported},i}}{\sum_{j=1}^{n} E_{j,i} \max\{0, (v_{i,j} - v_{98\%})\}^{3}}, \quad (32)

with k historical storms, corresponding reported event losses L_E, n wind stations, and local exposure value E_{j,i} being assigned to the wind station. E_{j,i} would be fixed for every station j if there were wind records for every storm i at each station j. However, the wind records are incomplete, and the assumed TSI must be split and assigned to the stations a bit differently for some storms. The exposure share of the remaining stations is simply adjusted so that the sum over all stations remains the TSI.

Our suggested estimator (Eq. 32) has the advantage that it is less affected by the issue of incomplete data (smaller events with smaller losses are not listed in the data) than the ratio of sums over all events, and the corresponding standard error can be quantified (as for the estimation of an expectation). The current point estimate is â_L = 9.59 × 10^{−8} ± 5.97 × 10^{−9}.

An example of our vulnerability function (with the average of v_{98%} over the wind stations) is depicted in Fig. 8 and compared with previous estimates for Germany. It is in the range of previous models. Differences might be caused by different geographical resolutions of the corresponding loss and exposure data. A power parameter of 2 in Eq. (31) might also be reasonable since the wind load of building design codes (EU, 2005, Eurocode 1) is proportional to the squared wind speed. The influence of deductibles (Munich Re, 2002) per insured object is not explicitly considered but smoothed in our approach.
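The cubic vulnerability relation (Eq. 31) and the estimator of a_L can be sketched as follows. The per-storm ratio below – reported loss over the exposure-weighted cubic excess, averaged over the storms – is one reading of Eq. (32); all numbers are synthetic:

```python
def loss_ratio(v, v98, aL):
    """Eq. (31): cubic excess-over-threshold loss ratio."""
    return aL * max(0.0, v - v98) ** 3

def estimate_aL(storms):
    """Per-storm ratio of reported loss to the exposure-weighted cubic
    excess, averaged over the storms (one reading of Eq. 32)."""
    ratios = []
    for reported_loss, stations in storms:  # stations: (v, v98, E) tuples
        denom = sum(E * max(0.0, v - v98) ** 3 for v, v98, E in stations)
        ratios.append(reported_loss / denom)
    return sum(ratios) / len(ratios)

# Synthetic check: build an event loss with a known aL and recover it
true_aL = 1.0e-7
stations = [(33.0, 25.0, 2.0e9), (29.0, 24.0, 1.0e9)]
reported = sum(loss_ratio(v, v98, true_aL) * E for v, v98, E in stations)
print(abs(estimate_aL([(reported, stations)]) - true_aL) < 1e-15)  # True
```

Averaging the per-storm ratios (rather than taking a ratio of sums over all events) is what makes the sample-mean standard error of the estimate directly quantifiable.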