posted by Fnord666 on Tuesday May 29 2018, @06:07AM   Printer-friendly
from the flood-insurance-FTW dept.

Common Dreams reports

A Maryland city was devastated [May 27] after 6 inches of heavy rain caused a downtown flash flood. Major damage is reported and many cars have been swept away.

Ellicott City was still recovering from a flash flood two years ago that killed two and forced the historic city to rebuild much of its Main Street. Residents said Sunday's flood seemed even worse than the storm in July 2016, which was called an extremely rare "one-in-1,000-year event" and cost the city tens of millions of dollars in damage.

Additional information at:
The Baltimore Sun
The Washington Post
USA Today


Original Submission

  • (Score: 2) by qzm on Tuesday May 29 2018, @06:20AM (25 children)

    by qzm (3260) on Tuesday May 29 2018, @06:20AM (#685461)

Perhaps a bit less spending on civic improvements like parks, cycleways, and pedestrian precincts, and on the many, many meetings, public consultations, and consultant investigations into those.

    Perhaps a little more spending on basic infrastructure and maintenance, even when it is not 'sexy'.

    These floods are not caused by more water, but by less ability to handle the normal amounts of water.
    As more land is urbanised, the required support engineering is simply being ignored.

    • (Score: 2) by c0lo on Tuesday May 29 2018, @06:30AM (8 children)

      by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @06:30AM (#685463) Journal

      These floods are not caused by more water, but by less ability to handle the normal amounts of water.

      [Citation needed]
No, seriously, when I hear "1 in 1000 years flood", it makes me think of "water in excess of the normal amounts".
Since you claim that is actually "normal amounts of water", I think you should present the evidence for it.

So, how about you present the median/average for the last 50 years and compare it with the latest event?
      Here, a helping hand: the 2018 flood event saw 8 inches of water falling in two hours [wikipedia.org]

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by Runaway1956 on Tuesday May 29 2018, @02:01PM (7 children)

        by Runaway1956 (2926) Subscriber Badge on Tuesday May 29 2018, @02:01PM (#685601) Journal

        Your request for citations is not unreasonable - but I'll pass on it just the same.

Meanwhile, it is a known fact that urban sprawl continues. A ten-acre Walmart parking lot is not known for its permeability - nor are the hundreds of miles of streets, sidewalks, concrete foundations, and driveways in a typical medium-sized neighborhood. There is little place for water to soak in, and many ephemeral streams are just paved over, preventing water from freely running off.

        Your citation of 8 inches of water in two hours? Interesting - but, how widespread was that? Did it extend a hundred miles in every direction, or was that a local event that only involved a hundred square miles? From the paragraph above your direct link,

        Floods

        The town is prone to flooding from the Patapsco River and its tributary the Tiber River. These floods have had a major impact on the history of the town, often destroying important businesses and killing many. Ellicott City has had major devastating floods in 1817, 1837, 1868,[58] 1901, 1917, 1923, 1938, 1942, 1952, 1956, 1972 (Hurricane Agnes), 1975 (Hurricane Eloise), 1989, 2011, 2016, and 2018.

        So, one has to ask, just how far outside the ordinary is this particular flood? Hmmmmmm . . . Only 7.4 inches of rain fell, compared to 8 inches in the article you cite. But, was this truly a "worse flood" than all those others recorded in the town's history? Maybe this kind of thing happens every fifty years on average?

when I hear "1 in 1000 years flood"

it makes ME think, "How in hell do these guys know that? There are no records going back 1000 years; they must be pulling numbers out of their asses!" In most of Europe, China, and parts of the Middle East, where people actually kept written records, they probably have a pretty good idea how often bad floods happen. In the US? Written language was introduced with the arrival of the Europeans, and we haven't deciphered the various records maintained by previous civilizations. Tying knots in yarn is probably perfectly understandable to the people who were raised tying knots - but we don't understand it at all. Mayan and Aztec both had something roughly equivalent to cuneiform writing, but we haven't truly deciphered that either. So - where are the records?

I know for a fact that all of the rivers in America flooded routinely before the Euros started reforming the land to their liking. There were no dams, except beaver dams. Torrential downpours rolled straight down from Bemidji to New Orleans, with nothing to slow them down.

        • (Score: 4, Informative) by c0lo on Tuesday May 29 2018, @02:21PM (6 children)

          by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @02:21PM (#685615) Journal

when I hear "1 in 1000 years flood"

it makes ME think, "How in hell do these guys know that?"

          Their specific meaning [usgs.gov]

          The term “1000-year flood” means that, statistically speaking, a flood of that magnitude (or greater) has a 1 in 1000 chance of occurring in any given year. In terms of probability, the 1000-year flood has a 0.1% chance of happening in any given year.

          ---

There are no records going back 1000 years, they must be pulling numbers out of their asses!

Learn first, be dismissive later (if you still feel the need): 100-year flood/Probability [wikipedia.org]

          A common misunderstanding is that a 100-year flood is likely to occur only once in a 100-year period. In fact, there is approximately a 63.4% chance of one or more 100-year floods occurring in any 100-year period. On the Danube River at Passau, Germany, the actual intervals between 100-year floods during 1501 to 2013 ranged from 37 to 192 years.[5] The probability Pe that one or more floods occurring during any period will exceed a given flood threshold can be expressed, using the binomial distribution, as

Pe = 1 − [1 − (1/T)]^N

          where T is the threshold return period (e.g. 100-yr, 50-yr, 25-yr, and so forth), and N is the number of years in the period. The probability of exceedance Pe is also described as the natural, inherent, or hydrologic risk of failure.[6][7] However, the expected value of the number of 100-year floods occurring in any 100-year period is 1.
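Plugging the quoted numbers into that formula is a quick sanity check; a minimal sketch:

```python
# Probability of at least one T-year event in an N-year window:
# Pe = 1 - (1 - 1/T)**N
def prob_exceedance(T, N):
    return 1 - (1 - 1 / T) ** N

# One or more 100-year floods in any 100-year period: about 63.4%
print(round(prob_exceedance(100, 100), 3))  # -> 0.634

# Yet the *expected number* of 100-year floods in 100 years is N/T = 1
print(100 / 100)  # -> 1.0
```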

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 0, Troll) by khallow on Tuesday May 29 2018, @02:50PM (4 children)

            by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @02:50PM (#685631) Journal
            You didn't answer the question. Runaway wasn't asking about the definition, but about how they know.

            As to the 1 in 1000 year event, what of stalled hurricanes (that is, very slow moving hurricanes)? They can dump a lot more than 6 inches in a very short period of time. And I bet any place in Maryland (even far inland) would get more than one of those things in a millennium.

            I think there's a simpler explanation here. The flash flood wasn't a 1 in 1000 event.
            • (Score: 3, Insightful) by c0lo on Tuesday May 29 2018, @03:00PM (3 children)

              by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @03:00PM (#685641) Journal

              You didn't answer the question. Runaway wasn't asking about the definition, but about how they know.

Weren't you the one who took pride in having maths at least as a hobby?
If so, would you mind explaining to Runaway how you determine that N when you assume a binomial distribution and have a representative data set, even if you don't have the full 1000 years "population"?

              --
              https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
              • (Score: 1) by khallow on Tuesday May 29 2018, @03:18PM (2 children)

                by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @03:18PM (#685652) Journal

                If so, would you mind to explain to Runaway how do you determine that N when you assume a binomial distribution and have a representative data set, even if you don't have the full 1000 years "population"?

                I wouldn't do that since that wasn't what he asked. But I'll also note that weather is not a binomial distribution. It commonly has a long tail.

                • (Score: 1) by khallow on Tuesday May 29 2018, @03:23PM (1 child)

                  by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @03:23PM (#685654) Journal

                  But I'll also note that weather is not a binomial distribution.

                  And by that, I mean that the size of extremes are not determined by normal weather conditions. The binomial condition you described doesn't give you a means to extrapolate extreme weather conditions. It merely describes how often those extremes are expected to occur assuming certain conditions (such as independence of weather from year to year), and the span of time that one looks at.

                  • (Score: 2) by Osamabobama on Tuesday May 29 2018, @06:22PM

                    by Osamabobama (5842) on Tuesday May 29 2018, @06:22PM (#685774)

                    The important thing here is that there is some distribution being used, with the data being fit to it, leading to probability estimates about possible flood levels. Let's assume the model uses the appropriate distribution.

                    Does the model get updated, though? For instance, if a farm upstream were to build a levee around a field in the flood plain, the crops in that field may survive the flood (yay!), but some measure of water would no longer be able to spend a few days flooding that field, instead being diverted downstream to flood someplace else more severely.

                    Or, maybe the models are accurate, and we hear about a 1000-year flood in one location, but nothing about the 999 other locations that get a normal level of flooding. This one seems less likely, especially when the same location gets more than one such flood in a lifetime.

                    It's like a hypothetical statistician and a gambler, both observing a series of coin flips come up heads over and over. The statistician is confident in his knowledge that, eventually, it will even out to about 50-50 heads/tails. But the gambler will quickly start to suspect that the coin isn't fair.
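The gambler's suspicion can be put in numbers; a minimal sketch (the run length of 10 is an arbitrary illustration, not from the thread):

```python
# Under a fair coin, the chance of n heads in a row is (1/2)**n.
# After 10 straight heads that likelihood is under 0.1%, which is why
# the gambler starts doubting the "fair coin" model long before the
# statistician's long-run 50/50 expectation can assert itself.
def run_probability(n, p_heads=0.5):
    return p_heads ** n

print(run_probability(10))  # -> 0.0009765625 (about 1 in 1024)
```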

                    --
                    Appended to the end of comments you post. Max: 120 chars.
          • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @03:11PM

            by Anonymous Coward on Tuesday May 29 2018, @03:11PM (#685648)

            WTF, that's the geometric distribution, not binomial. https://en.m.wikipedia.org/wiki/Geometric_distribution [wikipedia.org]

Besides that, it also assumes that there is no correlation between the reasons behind any two floods and that the probability of a flood is exactly the same every year. If this is the type of back-of-the-napkin, first-pass analysis being used, you should be scared for your town. There is no reason to expect those assumptions to hold, so no reason to be surprised that the model predicts the wrong thing, and so no reason to try attributing it to any specific cause.

And anyway, even if those assumptions were good enough, we need to know how many towns are seeing too few or the expected rate of "hundred-year floods"; keeping track of a bunch and only reporting the places with "excess" floods is cherry-picking.

There are so many things wrong here, it's total cargo-culting. I don't see any hope of coming to a correct conclusion using this process.
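The geometric-distribution point can be illustrated under exactly the independence assumption the comment criticizes; a minimal simulation sketch (the 100-year return period is an arbitrary choice):

```python
import random

# Under the (criticized) assumption of independent years with a fixed
# yearly flood probability p = 1/T, the waiting time until the first
# flood is geometric with mean T -- not a fixed T-year clock.
def years_until_flood(p, rng):
    years = 1
    while rng.random() >= p:
        years += 1
    return years

rng = random.Random(42)
waits = [years_until_flood(1 / 100, rng) for _ in range(100_000)]
print(sum(waits) / len(waits))  # close to 100, with a huge spread
print(min(waits), max(waits))   # some floods arrive in year 1
```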

    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @07:11AM (14 children)

      by Anonymous Coward on Tuesday May 29 2018, @07:11AM (#685470)

      East of Atlanta, there is a rainstorm nearly every day. You can track it back to Atlanta, where the urban heat island effect disturbs the weather.

      So, did something upwind of that Maryland city change in the past 1000 years? I'm betting it did.

      Not that a couple occurrences mean the "1000 year" forecast is wrong of course! The next event might be 4321 years away. These events don't go on a perfect 1000-year cycle like clockwork.

      On the other hand, "1000 year" may have been nonsense from the start.

      • (Score: 3, Informative) by c0lo on Tuesday May 29 2018, @08:01AM (12 children)

        by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @08:01AM (#685483) Journal

        On the other hand, "1000 year" may have been nonsense from the start.

        "One in X years" is conveying the statistical "reality" of "the chance of one such event in any given year is 1/X".

So, "1 in 1000 years" should just tell you "the probability that we derived from our model** of such an event happening in any given year is 0.1%".
This is actually used by the insurance companies - they seem pretty good at making money from such "non-sense", so maybe, just maybe, there's actually some sense in it?

        ---

** yes, of course, the derived chances have the same credibility as the model they used to derive them from.
        Somehow, I don't expect the insurance companies to share this model with the rest of us.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 0, Troll) by khallow on Tuesday May 29 2018, @03:26PM (6 children)

          by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @03:26PM (#685657) Journal

          so maybe, just maybe, there's actually some sense into it?

          Are you going to continue to try to snow us with mathematical non sequiturs or are you going to admit that these estimates can sometimes be in error?

          • (Score: 2) by c0lo on Tuesday May 29 2018, @03:36PM (3 children)

            by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @03:36PM (#685661) Journal

            My post @Tuesday May 29, @08:01AM
            Yours @Tuesday May 29, @03:26PM

            Are you going to continue...

            This thread is done, happy now?

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
            • (Score: 0, Redundant) by khallow on Tuesday May 29 2018, @03:55PM (2 children)

              by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @03:55PM (#685682) Journal
              That depends. Do you understand the gist of my posts? The mathematics of the frequency of 1 in N events doesn't inform you of the size of the 1 in N event. Plus, we have here a confounding factor of the urbanization of the region and the building of large parts of the town on a flood plain.

              I think there's an alternate scenario here. The 1 in 1000 estimate was bullshit, provided as political cover for the leaders of the town in question (and perhaps a number of state-level officials as well). Having this extreme event happen again so soon (and being able to easily visualize much worse flooding events like stalled hurricanes that are likely to happen inside of a thousand years), indicates to me that the original statement was likely in error - and probably deliberately so.
              • (Score: 2) by c0lo on Tuesday May 29 2018, @04:21PM (1 child)

                by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @04:21PM (#685698) Journal

                someone found some details [soylentnews.org], you may want to check.

                --
                https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
                • (Score: 1) by khallow on Wednesday May 30 2018, @01:18AM

                  by khallow (3766) Subscriber Badge on Wednesday May 30 2018, @01:18AM (#686026) Journal
                  Ok, I checked. First, it's not based on actual measurements as suspected. Second, it's on a fine enough scale that there's potential for a lot of 1 in 1000 events, even if their estimates were accurate. Just look at the map and the graph of rainfall. They have potential for several dozen 1 in 1000 events just on that map based on location and time span. They'll have a significant degree of dependence with neighboring regions, but it's not a stretch to generate many mostly independent possibilities from such a scale plus, a region the size of Maryland (not to mention the entire East Coast). For example, get 100 mostly independent observations and you have a 1 in 10 chance of a 1 in 1000 event every year.
          • (Score: 3, Informative) by aristarchus on Tuesday May 29 2018, @09:38PM (1 child)

            by aristarchus (2645) on Tuesday May 29 2018, @09:38PM (#685919) Journal

            It's statistics, khallow! Are you innumerate?

            continue to try to snow us with mathematical non sequiturs

This is NOT about Anthropogenic Global Warming! Just because it was snowing in Maryland in May does not mean that overall average global temperatures are not rising. And just because you have two once-in-a-thousand-year floods back to back does not mean that they were not both once-in-a-thousand-year events. Please, khallow, argue in good faith, and do not go over to the dark side of Runaway.

        • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @03:31PM (3 children)

          by Anonymous Coward on Tuesday May 29 2018, @03:31PM (#685658)

This is actually used by the insurance companies - they seem pretty good at making money from such "non-sense", so maybe, just maybe, there's actually some sense in it?
          [...]
          Somehow, I don't expect the insurance companies to share this model with the rest of us.

          I doubt they are making money by using the "binomial" (actually "geometric") distribution (see my other post above). At a minimum I would expect them to include info about basic stuff like how if an area just flooded it will be more likely to flood again for some period of time since the groundwater has been "recharged", etc.

          • (Score: 2) by c0lo on Tuesday May 29 2018, @03:40PM (2 children)

            by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @03:40PM (#685664) Journal

            I doubt they are making money by using the "binomial" (actually "geometric") distribution

Likely they are using more refined models. The thingy with the binomial distribution on Wiki was the only model I could find publicly.
I'd appreciate it if someone could link to more relevant papers.

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 2) by HiThere on Tuesday May 29 2018, @06:38PM

          by HiThere (866) Subscriber Badge on Tuesday May 29 2018, @06:38PM (#685786) Journal

It's an estimate of the probability based on the data available at the time it was calculated. It's also highly rounded. You don't see reports of "it's a 1 in 997 year flood".

          The numbers aren't updated as frequently as they should be, but even if they were you shouldn't take them that seriously...they aren't intended to be taken that seriously. They are *estimates*.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by MostCynical on Tuesday May 29 2018, @08:52AM

        by MostCynical (2589) on Tuesday May 29 2018, @08:52AM (#685498) Journal

        "Rainstorm" is not the same as a "1 in 1000 year flood event".

        Something that happens every other day
        VS
        Something that is likely to happen, on average, once in a thousand years.

        Insurance companies are very good at this.

        Also, "1-in-a-1000" events *could* happen several times in a week, and *still* be 1-in-a-thousand events (although actuaries *do* review their tables, and if something becomes, or is assessed to be, more likely, your premiums will go up.)

        --
        "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @07:00PM

      by Anonymous Coward on Tuesday May 29 2018, @07:00PM (#685802)

yeah, the "news" showed feet of water rushing down main street. that water didn't just come from some light rain shower. something goddamn overflowed. why the presstitutes don't bother mentioning what all the fat leeches in various bureaucracies have been doing all this time or where these damn yankees built their stupid town is beyond me. if you depend on leeches to protect you from flood waters, then you get flooded and the news will lie about it like it's Gaia mad at your carbon footprint.

  • (Score: -1, Offtopic) by Anonymous Coward on Tuesday May 29 2018, @11:27AM

    by Anonymous Coward on Tuesday May 29 2018, @11:27AM (#685535)

    Our local expert mathematicians are already showing how normal this is.

    Citation: see above.

Move on, sheeple. Too much congregating allows the wolves to circle. Trump will tweet soon. All is well.

  • (Score: 3, Informative) by Spamalope on Tuesday May 29 2018, @02:21PM (2 children)

    by Spamalope (5233) on Tuesday May 29 2018, @02:21PM (#685614) Homepage

I've had 10 inches of rain in an hour at my house in at least three of the past 5 years here in Houston, where it's tougher because it's so flat. It's got to rain at that level for 10+ hours before it's devastating, but that's managed by having large water-containment areas around waterways.

    You'd have to have a waterway that's mostly constrained by high banks and then build in the open area you get where two rivers join to get nailed like they did. Basically build not just in the flood plain but within the normal river flood stage area. Would they actually do that?

    https://www.topoquest.com/map.php?lat=39.26754&lon=-76.79298&datum=nad27&zoom=8&map=auto&coord=d&mode=zoomin&size=m [topoquest.com]

Holy crap, that looks like exactly what they did. I don't know why you'd build on the riverbank just past an area where the river is constrained by high banks - so flood waters have nowhere to go until they reach you - and expect something different. I'm sure upstream paved areas are the immediate cause, but they've built within the flood-stage riverbanks.

    • (Score: 2) by forkazoo on Tuesday May 29 2018, @05:45PM

      by forkazoo (2561) on Tuesday May 29 2018, @05:45PM (#685748)

And to make things extra fun, the current administration is working hard to roll back the regulations that basically ask people not to build in the places where Federal flood insurance is going to pay to rebuild over and over and over...

    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @09:52PM

      by Anonymous Coward on Tuesday May 29 2018, @09:52PM (#685931)

      Ellicott City was founded about 240 years ago in that location precisely because it was on a river for the mills and other water-powered industries. The WaPost has an interesting article [washingtonpost.com] about the worst flood they had, which was a flash flood event in 1868 when they didn't even get any rain.

      The storm over the weekend was one where the storm ran into a stationary cold front right over the area. The storm just sat there dumping rain. The odd thing is that I live only about 10 miles from Ellicott City as the crow flies and all we got was about five minutes of rain out of that whole storm.

  • (Score: 4, Informative) by Anonymous Coward on Tuesday May 29 2018, @04:06PM (2 children)

    by Anonymous Coward on Tuesday May 29 2018, @04:06PM (#685690)

First of all, the "1/1000 years" figure was referring to the rainfall, not the flood:

    The National Weather Service's Hydrometeorological Design Studies Center (HDSC) has completed an exceedance probability analysis for this rainfall event, based on the rainfall estimates and observed data shown above. A link to their analysis can be found here. At time durations of 5 minutes to 3 hours, the observed rainfall at the Ellicott City gauge has a probability of occurrence of less than or equal to 1/1000. This does not mean this extreme rainfall will only occur once in a thousand years. However, it is a rare and unlikely event. Based on statistical analysis, there is a 0.1% chance or less of this rainfall occurring in these time durations and location in any given year. For more details on the calculations, see the full report [noaa.gov].

    https://www.weather.gov/lwx/EllicottCityFlood2016 [weather.gov]

From the full report (linked above) we see this was specifically regarding having "~6 inches of rainfall in 3 hours" at the ELYM2 gauge:

    Figure 1 shows how the maximum observed rainfall amounts compared to corresponding rainfall frequency estimates for AEPs up to 1/1000 (0.1%) for durations from 5 minutes to 6 hours for a rain gauge in Maryland - ELYM2 Ellicott City (39.27333°N, 76.80444°W). The rain gauge is part of the Hydrometeorological Automated Data System (HADS). The AEPs are estimates from NOAA Atlas 14 Volume 2. As can be seen from Figure 1, observed rainfall amounts have probabilities of less or equal to 1/1000 for durations up to 3 hours.

    Also, what are these AEPs:

ANNUAL EXCEEDANCE PROBABILITY (AEP) - The probability associated with exceeding a given amount in any given year once or more than once; the inverse of AEP provides a measure of the average time between years (and not events) in which a particular value is exceeded at least once; the term is associated with analysis of annual maximum series (see also AVERAGE RECURRENCE INTERVAL).

    http://www.nws.noaa.gov/oh/hdsc/glossary.html [noaa.gov]

So it's a "probability associated with rainfall of at least a certain amount within a certain timeframe occurring at least once per year." I don't like that weasel phrase "associated with". I'm having trouble finding an actual example calculation here.
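For what an "actual example calculation" might look like, a common empirical first pass is the Weibull plotting position; a minimal sketch with made-up rainfall values (not real Ellicott City data):

```python
# Hypothetical annual-maximum 3-hour rainfall series (inches).
annual_max = [2.1, 1.8, 3.0, 2.5, 4.2, 1.6, 2.9, 3.4, 2.2, 5.1]

# Empirical AEP via the Weibull plotting position: rank / (n + 1),
# where rank 1 is the largest observation on record.
def empirical_aep(series):
    n = len(series)
    ranked = sorted(series, reverse=True)
    return {x: rank / (n + 1) for rank, x in enumerate(ranked, start=1)}

aeps = empirical_aep(annual_max)
print(aeps[5.1])  # largest value: AEP = 1/11, about 0.09 per year
```

Real products like NOAA Atlas 14 then fit a regional probability distribution rather than stopping at these raw empirical ranks, which is how they extrapolate to 1/1000 probabilities from much shorter records.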

    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @04:44PM (1 child)

      by Anonymous Coward on Tuesday May 29 2018, @04:44PM (#685709)

Ok, well, figuring out where exactly these numbers came from looks like it is not possible in a reasonable time frame (and maybe not possible at all):

      3.3 Approach
      The approach used in this project largely follows the regional frequency analysis using the method of
      L-moments described in Hosking and Wallis (1997). This section provides an overview of the
      approach. Greater detail on the approach is provided in Section 4.2.
      NOAA Atlas 14 introduces a change from past NWS publications by its use of regional frequency
      analysis using L-moments for selecting and parameterizing probability distributions. Both annual
      maximum series and partial duration series were extracted at each observing station from quality
      controlled data sets. Because of the greater reliability of the analysis of annual maximum series, an
      average ratio of partial duration series to annual maximum series precipitation frequency estimates
      (quantiles) was computed and then applied to the annual maximum series quantiles to obtain the final
      equivalent partial duration series quantiles.
      Quality control was performed on the initial observed data sets (see Section 4.3) and it continued
      throughout the process as an inherent result of the performance parameters of intermediate steps.
      To support the regional approach, potential regions were initially determined based on
      climatology. They were then tested statistically for homogeneity. Individual stations in each region
      were also tested statistically for discordancy. Adjustments were made in the definition of regions
      based on underlying climatology in cases where homogeneity and discordancy criteria were not met.
      A variety of probability distributions were examined and the most appropriate distribution for
      each region and duration was selected using several different performance measures. The final
      determination of the appropriate distributions for each region and duration was made based on
      sensitivity tests and a desire for a relatively smooth transition between distributions from region to
      region. Probability distributions selected for annual maximum series were not necessarily the same as
      those selected for partial duration series.
      Quantiles at each station were determined based on the mean of the data series at the station and
      the regionally determined higher order moments of the selected probability distribution. There were a
      number of stations where the regional approach did not provide the most effective choice of
      probability distribution. In these cases the most appropriate probability distribution was chosen and
      parameterized based solely on data at that station.
      [...]
      Hosking and Wallis (1997) describe regional frequency analysis using the method of L-moments.
      This approach, which stems from work in the early 1970s but which only began seeing full
      implementation in the 1990s, is now accepted as the state of the practice. The National Weather
      Service has used Hosking and Wallis, 1997, as its primary reference for the statistical method for this
      Atlas.
      The method of L-moments (or linear combinations of probability weighted moments) provides
      great utility in choosing the most appropriate probability distribution to describe the precipitation
      frequency estimates. The method provides tools for estimating the shape of the distribution and the
      uncertainty associated with the estimates, as well as tools for assessing whether the data are likely to
      belong to a homogeneous region (e.g., climatic regime).
      The regional approach employs data from many stations in a region to estimate frequency
      distribution curves for the underlying population at each station. The approach assumes that the
      frequency distributions of the data from many stations in a homogeneous region are identical apart
      from a site-specific scaling factor. This assumption allows estimation of shape parameters from the
      combination of data from all stations in a homogeneous region rather than from each station
      individually, vastly increasing the amount of information used to produce the estimate, and thereby
      increasing the accuracy. Weighted averages that are proportional to the number of data years at each
      station in the region are used in the analysis.
      The regional frequency analysis using the method of L-moments assists in selecting the
      appropriate probability distribution and the shape of the distribution, but precipitation frequency
      estimates (quantiles) are estimated uniquely at each individual station by using a scaling factor,
      which, in this project, is the mean of the annual maximum series, at each station. The resulting
      quantiles are more reliable than estimates obtained based on single at-site analysis (Hosking and
      Wallis, 1997).

      http://www.nws.noaa.gov/oh/hdsc/PF_documents/Atlas14_Volume2.pdf [noaa.gov]

      It goes on like this about the various interpolations, adjustments, etc. for a long time. It sounds like they even resort to moving stations from one region to another (reshuffling their most accurate info?) to reduce heterogeneity:

      adjustments of regions, such as moving stations from one region to another or subdividing a region, were made to reduce heterogeneity.
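      To make the "site-specific scaling factor" idea concrete, here is a rough Python sketch with made-up numbers (a toy version of the index-flood approach; the station count, means, and Gumbel shape are all invented, and this is not NOAA's actual procedure):

```python
import numpy as np

# Toy illustration of the index-flood idea from the quoted passage:
# stations in a homogeneous region are assumed to share one dimensionless
# frequency curve, and each station's quantiles are that curve scaled by a
# site-specific factor (here, as in the Atlas, the at-site mean).
rng = np.random.default_rng(42)
n_stations, n_years = 5, 40
site_means = np.array([30.0, 45.0, 60.0, 52.0, 38.0])   # hypothetical mm of rain

# Simulate annual maxima: one common (Gumbel-shaped) growth curve, scaled per site.
growth = rng.gumbel(loc=1.0, scale=0.3, size=(n_stations, n_years))
annual_max = growth * site_means[:, None]

# Pool the rescaled (dimensionless) data from every station in the "region"...
at_site_mean = annual_max.mean(axis=1)
pooled = (annual_max / at_site_mean[:, None]).ravel()

# ...estimate the regional growth-curve quantile once from 200 pooled values,
# then rescale it back to each station with its own mean.
growth_q = np.quantile(pooled, 0.99)        # regional "100-year" growth factor
station_q = growth_q * at_site_mean         # per-station 100-year estimate
print(station_q)
```

      The payoff is in the pooling: the tail shape is estimated from 200 station-years instead of 40, while the magnitude still comes from each station's own record.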

      • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @04:54PM

        by Anonymous Coward on Tuesday May 29 2018, @04:54PM (#685712)

        Same AC. Another general takeaway here is that these estimates come from models that are purely statistical. There is no physics going on here at all, which I didn't expect. I really expected them to take into account elements of the local water cycle, etc.

        It looks more like they plugged data into an ML algorithm and had it optimize for homogeneity (i.e., physically closer stations should show more similar patterns than far-away ones), then let this feed back into the data prep until the results looked "realistic" to an ensemble of humans.
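        For reference, the L-moment machinery itself is easy to compute. Here's a Python sketch of the sample L-CV (the scale-free ratio Hosking and Wallis use when screening a region for homogeneity), run on made-up Gumbel data; the four stations and their means are invented:

```python
import numpy as np

# Sample L-CV (t = l2 / l1) from probability-weighted moments.
def l_cv(sample):
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()                                # PWM beta_0
    b1 = np.sum((j - 1) / (n - 1) * x) / n       # PWM beta_1
    l1, l2 = b0, 2 * b1 - b0                     # first two L-moments
    return l2 / l1

# Hypothetical annual-maximum series for four stations with different means
# but the same relative variability (i.e., a homogeneous toy region).
rng = np.random.default_rng(0)
stations = [rng.gumbel(loc=m, scale=0.25 * m, size=50) for m in (30, 45, 60, 50)]
tvals = [l_cv(s) for s in stations]
print([round(t, 3) for t in tvals])
```

        In a homogeneous region these at-site L-CVs should cluster tightly; the Hosking and Wallis H statistic formalizes "tightly" by comparing their spread against simulations. This is only the raw ingredient, not the whole test.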

  • (Score: 2) by istartedi on Tuesday May 29 2018, @08:27PM (5 children)

    by istartedi (123) on Tuesday May 29 2018, @08:27PM (#685851) Journal

    The town appears to have had many floods [wikipedia.org]:

    "Ellicott City has had major devastating floods in 1817, 1837, 1868,[58] 1901, 1917, 1923, 1938, 1942, 1952, 1956, 1972 (Hurricane Agnes), 1975 (Hurricane Eloise), 1989, 2011, 2016, and 2018"

    • (Score: 2, Informative) by Anonymous Coward on Tuesday May 29 2018, @09:15PM (4 children)

      by Anonymous Coward on Tuesday May 29 2018, @09:15PM (#685899)

      From the other post [soylentnews.org]:

      At time durations of 5 minutes to 3 hours, the observed rainfall at the Ellicott City gauge has a probability of occurrence of less than or equal to 1/1000. This does not mean this extreme rainfall will only occur once in a thousand years. However, it is a rare and unlikely event. Based on statistical analysis, there is a 0.1% chance or less of this rainfall occurring in these time durations and location in any given year.

      So you can see that #1 it is about rainfall, and #2 they distinguish between "extreme events happening once in a thousand years" and "the event having a 1/1000 chance of happening in any given year based on their statistical analysis". I think they can't tell us the probability of it happening once, twice, or three times in a thousand years because their numbers come out of a byzantine procedure that nobody fully understands, and those calculations were not part of the original design. However, the simplest analysis would be:

      n = 1000                      # years
      x = 1:5                       # number of occurrences
      y = dbinom(x, n, 1/n)         # P(exactly x events in n years)
      data.frame(Ntimes = x, Prob = round(y, 4))

      Resulting in:

        Ntimes   Prob
      1      1 0.3681
      2      2 0.1840
      3      3 0.0613
      4      4 0.0153
      5      5 0.0030

      So there is a 36% chance it happens exactly once and an 18% chance it happens exactly twice in a thousand years, etc. Summing over all counts of one or more gives 1 - (1 - 1/1000)^1000 ~ 0.63, the probability of it happening at least once in the thousand years.

      In short, this whole "1,000 year flood" concept is going to be very confusing to the uninitiated even in the simplest case, and the real case is literally too complicated for anyone alive to determine.
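      For anyone without R handy, the same numbers fall out of a plain-Python translation of the snippet above:

```python
from math import comb

# Binomial pmf: probability of exactly k successes in n trials at rate p.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 1000, 1 / 1000
for k in range(1, 6):
    print(k, round(binom_pmf(k, n, p), 4))   # 0.3681, 0.1840, 0.0613, ...

# Chance of at least one such event in the thousand years:
print(round(1 - (1 - p) ** n, 4))            # -> 0.6323
```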

      • (Score: 1) by istartedi on Wednesday May 30 2018, @08:01AM (3 children)

        by istartedi (123) on Wednesday May 30 2018, @08:01AM (#686159) Journal

        I get that "1000 year flood" means a 1/1000 chance each year, and your statistical approach makes sense. We only have 200 years of observation, during which 16 floods occurred. Their 1/1000 is for the rain, not the flood itself; but let's set that aside and assume you can't have the flood without rain that meets the criteria. Given 16 floods in 200 years, let's also extrapolate and assume it was flooding like that before the town was built. Perhaps that's a bad assumption too; but we just don't have the data, so it's all I've got. It makes me want to plug N=80 into your program; the result would be non-zero, but very small. Yes, the nature of statistics is that the true odds could be *even lower* than 1/1000 and we could still get a cluster of 16 floods in the last 200 years; but it seems rather disingenuous to call it a "1000 year event" in the media when it happens that often.

        Maybe it's all lies. Maybe it's damned lies. Maybe it's... well, you know.

        • (Score: 0) by Anonymous Coward on Wednesday May 30 2018, @04:15PM (2 children)

          by Anonymous Coward on Wednesday May 30 2018, @04:15PM (#686322)

          Not sure what you mean. If you want to assume the rate of these floods is 80 per 1000 years, the distribution would be this:

          n = 1000                    # years
          x = 0:120                   # possible event counts
          y = dbinom(x, n, 80/n)      # P(exactly x events at a rate of 80 per 1000 years)

          dat = data.frame(Ntimes = x, Prob = round(y, 4))

          plot(dat$Ntimes, dat$Prob)  # plot the distribution of counts

          It'll look like a normal curve peaking at 80.

          • (Score: 2) by istartedi on Wednesday May 30 2018, @05:23PM (1 child)

            by istartedi (123) on Wednesday May 30 2018, @05:23PM (#686360) Journal

            No, what I'm saying is that if we start with the premise that it's a 1/1000 event, it's possible to have one every year, but the probability of that shrinks rapidly as the years pile up. At some point, the original assumption of 1/1000 starts to look suspect.

            For a real-world example, there are cases where people have won multi-million dollar lottery prizes *twice*. This is obviously within the realm of probability. It's even within the realm of probability for them to win 10 multi-million dollar lottery prizes in one year. At some point though, you start realizing that the odds of that are so long, that the fix must be in.

            In the case of this flood it's obviously not "fixed"; but I'm saying that the underlying assumption of 1/1000 has to be flawed somehow when you have 16/200 from actual data. OK, the 1/1000 is for the rain not the flood; but who cares about the rain? It's the flood that impacts people's lives. They're using the 1/1000 to determine what gets rebuilt, when they should be using the 16/200. They're encouraging people to rebuild, when they should be coming up with a relocation plan.

            • (Score: 0) by Anonymous Coward on Wednesday May 30 2018, @06:08PM

              by Anonymous Coward on Wednesday May 30 2018, @06:08PM (#686385)

              I understand now. Here it is for 16 out of 200 if the probability is 1/1000 (this is in R):

              > dbinom(16, 200, 1/1000)
              [1] 1.407112e-25

              Here it is for at least 16:

              > sum(dbinom(16:200, 200, 1/1000))
              [1] 1.422514e-25

              So yeah, you could probably come up with a better model than a binomial with p = 1/1000 with little effort. Perhaps even p = 1/1000 is correct but some other assumption behind the model is wrong. Who knows?
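              One crude way to make "a better model" concrete: treat the 200 years as Bernoulli trials and compare how well p = 1/1000 explains the 16 observed floods versus the empirical rate. (A sketch only; years are surely not i.i.d., and the official 1/1000 figure is for the rainfall, not the floods.)

```python
from math import comb, log

# Binomial pmf: probability of exactly k successes in n trials at rate p.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

years, floods = 200, 16
p_hat = floods / years                       # maximum-likelihood estimate: 0.08

lik_official = binom_pmf(floods, years, 1 / 1000)
lik_empirical = binom_pmf(floods, years, p_hat)
print(lik_official)                          # ~1.4e-25, matching the R output
print(lik_empirical)

# Log-likelihood ratio: how many nats better the empirical rate fits the record.
print(round(log(lik_empirical / lik_official), 1))
```

              The likelihood ratio is astronomical, which is just the 1.4e-25 figure above viewed from the other side: either the 1/1000 rate, the independence assumption, or the rain-vs-flood mapping has to give.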
