Site Specific Results email - June 28, 2001

Subject: Performance of WFO Site-Specific Hydrologic Model
Date: Thu, 28 Jun 2001 12:28:07 -0500
From: "Bob Cox" <Bob.Cox@noaa.gov>

To whom it may concern (and may be interested) ...

Since another quarter has flown by, I thought this might be a good time to
let everyone know where we are with development of the WFO site-specific
hydrologic forecast model. As you know, MBRFC has been working with OHD and
WFO-EAX for several months now to develop, implement, and test the
functionality of the model, with the goal of delivering it to the WFOs in
AWIPS Build 5.1.3. To date, Russ Erb has been primarily responsible for the
software development at OHD and has been able to provide new, improved
versions of the executables to us on a fairly regular basis. He has made
great strides toward creating a tool which should become extremely valuable
to the WFOs.

At MBRFC, we have been busy defining the functionality of the model,
performing the necessary hydrologic development, and testing the model under
a variety of circumstances. We now have the model up and running on our
system for the thirty locations for which we currently provide headwater
guidance in the WFO-EAX HSA. Accomplishing this task required
(re)development of 1-hour unitgraphs for all sites, then defining all
unitgraphs, rating curves, basin boundaries, etc. in the IHFS DB. Now that
we finally have everything functional on our system, we are ready to port
the necessary software and parameters to the EAX system for more extensive
operational testing. Meanwhile, we will begin expanding our development
efforts into our remaining WFO areas.
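
For readers less familiar with the mechanics, the following sketch (in
Python, with entirely hypothetical unitgraph ordinates, runoff values, and
rating table, not the actual parameters defined in the IHFS DB for any EAX
site) illustrates how a 1-hour unitgraph and a rating curve are typically
applied: hourly runoff is convolved with the unitgraph ordinates to produce
a discharge hydrograph, which the rating curve then converts to stage.

    # Minimal sketch of applying a 1-hour unit hydrograph and a rating curve.
    # All ordinates, runoff values, and rating points below are hypothetical.
    import numpy as np

    def unitgraph_routing(runoff_in, uhg_ordinates_cfs):
        """Convolve hourly runoff depths (inches) with 1-hour unitgraph
        ordinates (cfs per inch of runoff) to get hourly discharge (cfs)."""
        return np.convolve(runoff_in, uhg_ordinates_cfs)

    def discharge_to_stage(q_cfs, rating_q, rating_stage):
        """Interpolate stage (ft) from discharge using a tabulated rating curve."""
        return np.interp(q_cfs, rating_q, rating_stage)

    # Hypothetical 1-hour unitgraph peaking at hour 3 (cfs per inch of runoff)
    uhg = np.array([200.0, 900.0, 1600.0, 1100.0, 600.0, 300.0, 100.0])
    # Hypothetical hourly runoff depths (inches) from a short, intense storm
    runoff = np.array([0.1, 0.5, 0.3, 0.0])
    # Hypothetical rating curve: discharge (cfs) vs. stage (ft)
    rating_q = np.array([0.0, 500.0, 2000.0, 5000.0, 10000.0])
    rating_stage = np.array([2.0, 6.0, 12.0, 18.0, 24.0])

    q = unitgraph_routing(runoff, uhg)
    stages = discharge_to_stage(q, rating_q, rating_stage)
    print("Forecast crest: %.1f ft at hour %d" % (stages.max(), stages.argmax()))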

Initial testing of the model performance looks promising. We have tested
the model against existing procedures, such as OFS and headwater tables, and
are reasonably satisfied with the results. Most of this testing, however,
has been for relatively moist soil conditions ... things just haven't dried
out much in the EAX area this spring. We do intend to continue testing on
the dry side of things, hopefully later this summer.

I have also tested the performance of the model for about a dozen actual
rainfall events over the past couple of months, with good results. In
nearly every case, the model has produced a crest forecast within a foot or
less of the observed value, provided that an accurate estimate of the 1-hour
rainfall time series was input. Typically, however, the Stage II estimates
have been significantly lower than the gage reports and had to be adjusted
upwards before an accurate forecast was possible.
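
As an aside for those curious about the adjustment mechanics, the sketch
below shows one simple way a multiplicative bias factor can be derived from
collocated gage reports and Stage II storm totals and then applied to the
hourly radar estimates. The gage and radar values are hypothetical
placeholders, not observations from any of these events.

    # Minimal sketch of deriving a gage/radar bias factor from storm totals
    # and scaling the hourly Stage II estimates upward. Values are hypothetical.
    def mean_field_bias(gage_totals, radar_totals, min_radar=0.05):
        """Ratio of summed gage to summed radar storm totals (inches)."""
        pairs = [(g, r) for g, r in zip(gage_totals, radar_totals) if r >= min_radar]
        gage_sum = sum(g for g, _ in pairs)
        radar_sum = sum(r for _, r in pairs)
        return gage_sum / radar_sum if radar_sum > 0 else 1.0

    # Hypothetical storm totals (inches): gage reports vs. Stage II pixel values
    gage = [3.10, 2.75, 4.20, 1.90]
    radar = [2.20, 2.05, 2.90, 1.40]
    bias = mean_field_bias(gage, radar)             # about 1.4 for these numbers

    hourly_stage2 = [0.10, 0.45, 0.80, 0.30, 0.05]  # hypothetical hourly values
    adjusted = [round(bias * p, 2) for p in hourly_stage2]
    print("bias factor: %.2f" % bias, adjusted)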

I have attached the results of four case studies from rainfall events of
last week. These results are typical of what we have been seeing so far.
We have also produced a slide presentation of our experiences with the model
so far and will update this as necessary. You can find it on our web page
at http://www.crh.noaa.gov/mbrfc/slide_show. (This is essentially the same
slideshow which Larry presented at the WR SOO/DOH conference last month.)

That's all I know. If you have questions or comments, feel free.
Bob


Performance of the WFO Hydrologic Forecast Model
For the Rainfall Events of June 20-21, 2001


Over the past couple of months, MBRFC has been testing the performance of the newly developed WFO hydrologic model against existing procedures such as OFS and flood advisory (headwater) tables and, sporadically at least, during some actual rainfall events. The attached charts depict the performance of the model for four locations in and around Kansas City during the rainfall events of June 20-21, 2001, and are very indicative of the accuracy we have encountered with the model to date.

In the attachments, the hydrographs shown in red are the result of applying the Stage II estimates from the EAX radar, without any further adjustments, to compute the hourly MAPs for each forecast location. A significant under-simulation is evident, producing crest forecasts which, on average, would have been 3.7 feet too low.
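
The MAP computation itself is straightforward; the sketch below shows one possible form, averaging the Stage II grid cells that fall within a basin boundary for a given hour. The grid values and basin mask are hypothetical illustrations, not the actual HRAP bins for these basins.

    # Minimal sketch of an hourly mean areal precipitation (MAP) computation:
    # average the gridded Stage II values over the cells inside the basin.
    # The grid and mask below are hypothetical.
    import numpy as np

    def hourly_map(stage2_grid, basin_mask):
        """Average precipitation (inches) over cells flagged as inside the basin."""
        return float(stage2_grid[basin_mask].mean())

    grid = np.array([[0.10, 0.20, 0.05, 0.00],
                     [0.30, 0.55, 0.40, 0.10],
                     [0.25, 0.60, 0.45, 0.15],
                     [0.05, 0.20, 0.10, 0.00]])
    mask = np.array([[False, True,  True,  False],
                     [True,  True,  True,  False],
                     [True,  True,  True,  True ],
                     [False, True,  False, False]])

    print("hourly MAP: %.2f in" % hourly_map(grid, mask))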

The hydrographs shown in blue in the attachments are the result of adjusting the Stage II estimates by factors ranging from 1.17 to 1.75, based on the average bias demonstrated at locations where storm total precipitation reports were available. This improved the simulations dramatically, resulting in a mean absolute forecast error at crest of 0.6 foot. The event at Easton, Kansas, was noteworthy. This constituted a new flood of record for this location and forced the evacuation of over 200 residents. The model (adjusted) produced a crest forecast within 0.1 foot of the observed crest, nearly 9 feet over flood stage.
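
For reference, the crest-error statistic quoted above is simply the mean absolute difference between the forecast and observed crest stages across the four locations; a minimal sketch follows, using hypothetical stage pairs rather than the actual values from these case studies.

    # Minimal sketch of the mean absolute crest forecast error. The forecast
    # and observed crest stages below are hypothetical placeholders.
    def mean_abs_crest_error(forecast_ft, observed_ft):
        errors = [abs(f - o) for f, o in zip(forecast_ft, observed_ft)]
        return sum(errors) / len(errors)

    forecast = [25.1, 18.4, 12.0, 9.7]   # hypothetical forecast crests (ft)
    observed = [25.0, 17.6, 12.9, 9.2]   # hypothetical observed crests (ft)
    print("mean absolute crest error: %.1f ft" % mean_abs_crest_error(forecast, observed))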

In all four cases, the timing of the crest forecasts was within an hour of that observed. All computations for these events were performed by applying hourly MAPs to the API-MKC runoff model, keyed to the hourly FFH value valid at 12Z on June 19th. Although the model allows for a baseflow adjustment based on a pre-existing river stage, this option was not employed because that adjustment had already been accounted for in the computation of the FFH values.
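
To give a feel for the general form of such a computation, the sketch below illustrates an API-type runoff step: an antecedent precipitation index is carried forward hour by hour and used to set a runoff fraction for each hourly MAP. The recession constant and the runoff relation are hypothetical and do not represent the actual API-MKC procedure or its FFH keying.

    # Minimal sketch of an API-type rainfall-runoff step. The recession
    # constant and runoff relation are hypothetical, not the API-MKC model.
    def api_runoff(hourly_precip_in, api_start, recession=0.98):
        """Return hourly runoff depths (inches) from hourly precipitation (inches)."""
        api = api_start
        runoff = []
        for p in hourly_precip_in:
            # Runoff fraction grows with wetter antecedent conditions (hypothetical curve)
            fraction = min(0.9, 0.1 + 0.2 * api)
            runoff.append(round(p * fraction, 3))
            api = recession * api + p      # decay the index, add the new rainfall
        return runoff

    # Hypothetical hourly MAPs (inches) and a starting index for 12Z conditions
    maps = [0.2, 0.8, 1.1, 0.4, 0.0]
    print(api_runoff(maps, api_start=1.5))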

To summarize, when provided accurate estimates of precipitation, these modeling techniques appear to be performing exceptionally well, and the advantage of the one-hour computation interval is evident. However, without adequate ground truth by which to determine the extent to which the WSR-88D is underestimating precipitation, accurate hydrologic forecasts are not consistently possible. Like every other tool we deal with, the model is only as good as the data that drives it.

Stranger Creek at Easton, Kansas
Fishing River at Mosby, Missouri
Blue River at Stanley, Kansas
Indian Creek at Overland Park, Kansas