Space-Time Scale Sensitivity of the Sacramento Model to Radar-Gage Precipitation Inputs
B.D. Finnerty, M.B. Smith, V. Koren, D.J. Seo, G. Moglen, Journal of Hydrology, 203 (1997), 21-38.
Abstract: Runoff timing and volume biases are investigated when performing hydrologic forecasting at space-time scales different from those at which the model parameters were calibrated. Hydrologic model parameters are inherently tied to the space-time scales at which they were calibrated. The National Weather Service calibrates rainfall-runoff models using 6-hour mean areal precipitation (MAP) inputs derived from gage networks. The space-time scale sensitivity of Sacramento model runoff volume is analyzed using 1-hour, 4 km x 4 km next generation weather radar (NEXRAD) precipitation estimates to derive input MAPs at various scales ranging from 4 km x 4 km up to 256 km x 256 km. Results show that surface runoff, interflow, and supplemental baseflow are the runoff components most sensitive to the space-time scales analyzed. The water balance components of evapotranspiration and total channel inflow are also sensitive.
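Illustrative sketch (not part of the original abstract): the block-averaging below shows the kind of spatial aggregation described above, turning a hypothetical 1-hour, 4 km gridded radar field into coarser mean areal precipitation inputs. The grid size, rainfall values, and helper function are assumptions for illustration only.

    import numpy as np

    def block_average(grid, factor):
        """Average a 2-D precipitation grid over square blocks of factor x factor cells."""
        ny, nx = grid.shape
        assert ny % factor == 0 and nx % factor == 0
        return grid.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

    # Hypothetical 1-hour NEXRAD-like field on a 64 x 64 grid of 4 km cells (mm/h).
    rng = np.random.default_rng(0)
    hourly_4km = rng.gamma(shape=0.5, scale=4.0, size=(64, 64))

    # Derive coarser MAP inputs: 4x4 cells -> 16 km blocks; 64x64 cells -> one 256 km value.
    map_16km = block_average(hourly_4km, 4)
    map_256km = block_average(hourly_4km, 64)
    print(map_16km.shape, float(map_256km[0, 0]))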
Comparing Mean Areal Precipitation Estimates from NEXRAD and Rain Gauge Networks
D. Johnson, M.B. Smith, V. Koren, and B. Finnerty, Journal of Hydrologic Engineering, Vol. 4, No. 2, April 1999, 117-124.
Abstract: Mean areal precipitation values (MAPX)
derived from next generation weather radar (NEXRAD) Stage III data are
compared with mean areal precipitation (MAP) values derived from
a precipitation gauge network. The gauge-derived MAPs are computed
using Thiessen polygon weighting, whereas the radar-based MAPXs utilize
the gridded Stage III radar precipitation products that have been conditioned
with gauge measurements and have been merged with overlapping radar fields.
We compare over 4,000 pairs of MAPX and MAP estimates over a 3-year time
period for each of the eight basins in the southern plains region of the
United States. Over the long term, mean areal estimates derived from
NEXRAD generally are 5-10% below gauge-derived estimates. In the
smallest basin, the long-term MAPX mean was greater than the MAP.
For storm events, a slight tendency for NEXRAD to measure fewer yet more
intense intervals of precipitation is identified. Comparison of hydrologic
simulations using the two forcings indicates that significant differences
in runoff volume can result. This work is aimed at providing insight
into the use of a data product that is becoming increasingly available
for public use. It is also aimed at investigating the use of radar
data in hydrologic models that have been calibrated using gauge-based precipitation
estimates.
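Illustrative sketch (not part of the original abstract): the comparison described above contrasts a Thiessen-weighted gauge MAP with a grid-average radar MAPX. The gauge totals, polygon weights, and grid-cell values below are placeholders, not data from the study.

    import numpy as np

    # Hypothetical gauge totals (mm) and their Thiessen weights (polygon area
    # fractions of the basin); values are illustrative only.
    gauge_precip = np.array([12.0, 8.5, 15.2, 10.1])
    thiessen_weights = np.array([0.30, 0.25, 0.25, 0.20])   # must sum to 1

    map_gauge = float(np.dot(thiessen_weights, gauge_precip))

    # Radar-based MAPX: mean of the Stage III grid cells assumed to fall inside the basin.
    stage3_cells = np.array([11.0, 9.4, 13.8, 10.6, 12.3, 9.9])
    mapx = float(stage3_cells.mean())

    print(f"gauge MAP = {map_gauge:.2f} mm, radar MAPX = {mapx:.2f} mm, "
          f"ratio = {mapx / map_gauge:.2f}")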
Semi-Distributed and Lumped Modeling Approaches: Case Study of NEXRAD Data Application to Large Headwater Basins in the Arkansas River Basin
M.B. Smith, V. Koren, Z. Zhang, D. Wang. 1999 Spring Meeting of the AGU, Boston.
Abstract: In response to the nationwide implementation of the WSR-88D weather radar platforms, the Hydrologic Research Lab of the National Weather Service (NWS) has developed a two-phase plan to address the question: "How can the NWS most effectively utilize the radar precipitation estimates to improve its river forecasts?" Inherent in this question is the suitability of current NWS models as well as the applicability of distributed parameter hydrologic models for NWS streamflow forecast generation.
In Phase I, research has addressed the use of NEXRAD data with existing
NWS hydrologic models, primarily the Sacramento Soil Moisture Accounting
(SAC-SMA) model. Modeling tests in this phase have involved the SAC-SMA
applied to basins in a lumped and semi-distributed format.
Results of continuous simulations with 4 basins over a multi-year period
have shown that the SAC-SMA applied in a lumped mode at an hourly
time step provides satisfactory agreement with observed streamflow records.
In the semi-distributed simulations, each basin was disaggregated into
between 5 and 8 sub-basins in an effort to capture the spatial variability
of precipitation. Calibration of the sub-basin SAC-SMA parameters
was accomplished by uniformly adjusting parameters in all sub-basins.
In such a mode, the modeling scenario was one of distributed inputs,
not distributed model parameters. Surprisingly, the semi-distributed approach
did not lead to significant improvement over the lumped approach.
In some cases, hydrograph timing was improved compared to the lumped simulations.
However, overall goodness-of-fit statistics showed a slight degradation
of simulation accuracy compared to the lumped simulations for several
basins. Examination of the simulations indicates that the method of uniformly
calibrating the sub-basin model parameters is flawed and can lead
to large simulation errors, which may unduly bias the statistics.
In Phase 2, we plan to investigate distributed
modeling approaches. These plans will be presented along with the
findings from Phase 1.
Scale Dependencies of Hydrologic Models to Spatial Variability of Precipitation
V.I. Koren, B.D. Finnerty, J.C. Schaake, M.B. Smith, D.J. Seo, Q.Y. Duan, Journal of Hydrology, 217 (1999), 285-302.
Abstract: This study is focused on analyses of the scale dependency of lumped hydrological models with different formulations of the infiltration process. Three lumped hydrological models of differing complexity were used in the study: the SAC-SMA model, the Oregon State University (OSU) model, and the simple water balance (SWB) model. High-resolution (4 x 4 km) rainfall estimates from the next generation weather radar (NEXRAD) Stage III product in the Arkansas-Red River basin were used in the study. These gridded precipitation estimates are a multi-sensor product which combines the spatial resolution of the radar data with the ground-truth estimates of the gage data. Results were generated from each model using different resolutions of spatial averaging of hourly rainfall. Although all selected models were scale dependent, the level of dependency varied significantly with different formulations of the rainfall-runoff partitioning mechanism. Infiltration-excess type models were the most sensitive; saturation-excess type models were less scale dependent. Probabilistic averaging of the point processes reduces scale dependency; however, its effectiveness varies depending on the scale and the spatial structure of rainfall.
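Illustrative sketch (not part of the original abstract): for an infiltration-excess scheme, runoff computed from spatially averaged rainfall is generally smaller than the spatial average of runoff computed cell by cell, which is the sensitivity described above. The rainfall field, constant infiltration capacity, and simple Hortonian rule below are assumptions, not the models used in the study.

    import numpy as np

    def infiltration_excess(rain, capacity=5.0):
        """Hortonian (infiltration-excess) runoff: rainfall above a constant capacity (mm/h)."""
        return np.maximum(rain - capacity, 0.0)

    rng = np.random.default_rng(1)
    # Hypothetical hourly rainfall on a 64 x 64 grid of 4 km cells (mm/h).
    rain_4km = rng.gamma(shape=0.4, scale=6.0, size=(64, 64))

    # Runoff from the full-resolution field, averaged over the domain.
    runoff_distributed = infiltration_excess(rain_4km).mean()

    # Runoff from the domain-averaged (lumped) rainfall.
    runoff_lumped = infiltration_excess(rain_4km.mean())

    print(f"distributed {runoff_distributed:.2f} mm/h vs lumped {float(runoff_lumped):.2f} mm/h")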
The Potential for Improving Lumped Parameter Models using Remotely Sensed Data
V. Koren, Paper J1.10, 13th Conference on Weather Analysis and Forecasting, August 2-6, 1993, Vienna, Virginia, pp. 397-400.
Lumped parameter models are based upon using averaged input information for an entire drainage basin. Precipitation is the most important input data in flash flood forecasting. Simulation errors in estimating runoff may be significant if the precipitation varies spatially and temporally within the basin. Excess precipitation may be considerably underestimated in these cases. To reduce errors in simulated runoff, the model parameters that control infiltration must be shifted from their 'actual' values, making analyses of their physical reliability difficult.
Breaking a basin into a number of sub-basins is commonly used to take into account variations of input data and basin characteristics. It is often necessary to use a large number of sub-basins and to calibrate parameters for each of them using a limited number of outlets with observed runoff. An additional difficulty is that the number and location of sub-basins may also vary from one individual storm to another.
Remote sources of data, such as radar or satellites, provide estimates of precipitation values with high resolution in space and time. However, it is difficult to use these data in lumped parameter models; the data appear to be superfluous for them. There are at least three factors which can increase the accuracy of hydrograph simulations by lumped parameter models (Koren, 1991): (a) better estimates of mean areal precipitation totals, (b) a reduction of time steps, and (c) use of variability characteristics of precipitation.
It is easy to make use of the first two factors in lumped models. Benefits derived from runoff simulations will depend on the quality of radar calibration. Borovikov (1969) stated that the accuracy of mean areal precipitation estimates for basins of less than 5000 sq. km. made by radars calibrated using only historical data was 35-45% higher than the accuracy levels when mean values were estimated by rain gages, specifically if there was one gage per 1000-20,000 sq. km. For operationally on-line calibrated radars, the advantage was 50-60% (Berjulov, 1975). One can expect that the accuracy of precipitation estimates from radars such as the NWS WSR-88D, when utilized in lumped parameter models, will increase significantly.
This paper will present a lumped parameter model
using distribution functions of precipitation.
Reformulation of the SAC-SMA Model to Account for the Spatial Variability of Rainfall
V. Koren, M.B. Smith, D.J. Seo, B.D. Finnerty (HRL internal publication)
Abstract: Sensitivity analyses using high resolution radar precipitation estimates pointed out that the Sacramento Soil Moisture Accounting model (SAC-SMA) is very sensitive to spatial scale. The fast runoff components, especially surface runoff, may be underestimated significantly if the model is calibrated at one scale and applied at another scale. Rainfall spatial variability is a main factor in this dependency. If high resolution measurements of rainfall, such as radar estimates, are available, probabilistic averaging can be used to account for the spatial variation of rainfall (Koren, 1993).
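Illustrative sketch (not part of the original abstract, and not the paper's formulation): probabilistic averaging replaces "apply the infiltration rule to the basin-mean rainfall" with "integrate the rule over an assumed distribution of point rainfall within the basin." The exponential rainfall distribution, constant capacity, and simple excess-rainfall rule below are assumptions used only to show the idea.

    import numpy as np

    def excess_lumped(mean_rain, capacity):
        """Excess rainfall when the rule is applied directly to the basin-mean value."""
        return max(mean_rain - capacity, 0.0)

    def excess_probabilistic(mean_rain, capacity):
        """Expected excess rainfall assuming point rainfall is exponentially
        distributed within the basin: E[max(P - c, 0)] = m * exp(-c / m)."""
        return mean_rain * np.exp(-capacity / mean_rain)

    m, c = 4.0, 5.0        # hypothetical basin-mean rainfall and infiltration capacity (mm/h)
    print(excess_lumped(m, c))          # 0.0  -> the lumped rule generates no excess
    print(excess_probabilistic(m, c))   # ~1.15 -> averaging over the distribution does

    # Monte Carlo check of the closed form.
    rng = np.random.default_rng(2)
    samples = rng.exponential(scale=m, size=200_000)
    print(np.maximum(samples - c, 0.0).mean())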
Use of Soil Property Data in the Derivation of Conceptual Rainfall-Runoff Model Parameters
V. I. Koren, M. Smith, D. Wang, Z. Zhang. 80th Annual Meeting of the AMS, Long Beach, Ca., January 2000.
Abstract: Parameters for conceptual models such as the Sacramento Soil Moisture Accounting model (SAC-SMA, the NWS operational model) can be derived from observed hydrograph analysis, but are not readily derived from physical basin characteristics. While soil property data are now available for the entire country as high resolution gridded files (e.g., STATSGO), they are used mostly as qualitative information. This significantly restricts the application of these models (e.g., to ungaged basins, semi-distributed versions, etc.).
The SAC-SMA model is based on a two soil layer structure. Each layer consists of tension and free water storages that interact in generating soil moisture states and five runoff components. Most of the 16 parameters of the model have to be calibrated using historical rainfall/runoff data. Initial model parameters are usually estimated based on hydrograph analysis at a river basin outlet. This study is focused on developing a procedure to derive the SAC-SMA model parameters based on soil texture data. To quantify relationships of model parameters with soil properties, the assumption was made that the SAC-SMA tension water storages relate to available soil water, and that free water storages relate to gravitational soil water. Porosity, field capacity, and wilting point derived from the STATSGO dominant soil texture for eleven standard layers were used in estimating available and gravitational water storages. SCS runoff curve numbers and saturated hydraulic conductivity of different soils were also used. Analytical relationships were derived for 11 SAC-SMA model parameters. Preliminary tests on a few basins in different regions suggest that most parameters derived from soil properties agree reasonably well with calibrated parameters for those basins. Accuracy statistics of hydrographs simulated using calibrated and derived parameters were also close. Although simulations with calibrated parameters usually give higher accuracy, the gain is not significant. This indicates that parameters derived from soil data are reasonable, and can be improved by calibration if observed historical data are available.
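Illustrative sketch (not part of the original abstract, and not the paper's 11 analytical relationships): the general idea described above ties tension water capacities to available water (field capacity minus wilting point) and free water capacities to gravitational water (porosity minus field capacity) over assumed soil-zone depths. The zone depths, soil constants, and the simple products below are assumptions for illustration.

    def sacsma_storages_from_soil(porosity, field_capacity, wilting_point,
                                  upper_zone_depth_mm, lower_zone_depth_mm):
        """Illustrative mapping from soil water constants (volume fractions) to
        SAC-SMA-style storage capacities (mm); not the paper's exact relationships."""
        available = field_capacity - wilting_point      # plant-available (tension) water
        gravitational = porosity - field_capacity       # freely draining (free) water
        return {
            "UZTWM": available * upper_zone_depth_mm,        # upper-zone tension water capacity
            "UZFWM": gravitational * upper_zone_depth_mm,    # upper-zone free water capacity
            "LZTWM": available * lower_zone_depth_mm,        # lower-zone tension water capacity
            # total lower-zone free water; SAC-SMA splits this between the
            # primary (LZFPM) and supplemental (LZFSM) storages
            "LZ_FREE_TOTAL": gravitational * lower_zone_depth_mm,
        }

    # Hypothetical silt-loam-like values and assumed zone depths.
    params = sacsma_storages_from_soil(porosity=0.45, field_capacity=0.33,
                                       wilting_point=0.13,
                                       upper_zone_depth_mm=250,
                                       lower_zone_depth_mm=1250)
    print(params)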
Strategy for Utilizing Radar-Based Precipitation Estimates for River Forecasting
S. Lindsey, ASCE International Symposium on Engineering Hydrology, San Francisco, Ca., July 25-30, 1993, 940-945.
Abstract: The rainfall-runoff models that are used by
the National Weather Service (NWS) for river forecasting are generally
applied on a lumped basis to each headwater or local area above a forecast
point. A 6-hour time interval generally is used for computations.
A few basins are sub-divided because of their large size or a significant
range in elevation. The spatial and temporal resolution of rainfall-runoff
computations is controlled by the characteristics of the available data
networks.
The commissioning of NEXRAD (WSR-88D) radars, during
the period from 1991 through 1996, makes available high resolution
precipitation estimates at a 1-hour time interval. The quality of
these estimates creates a large discrepancy between the precipitation information
available and the manner in which precipitation is currently utilized by
the models. Research is under way to determine how best to use the
high resolution precipitation estimates at the NWS River Forecast Centers
(RFCs).
Techniques to incorporate information on the spatial
and temporal distribution of precipitation events within a large basin
are under investigation. In the first phase, a gridded approach will
be taken to derive a unit hydrograph which accounts for the spatial distribution
of rainfall. The second phase focuses on the comparison and testing
of different approaches. Modeling at the grid level as well as disaggregating
current basins into ungaged sub-basins will be examined. The use of
synthetic unit hydrographs derived from topographic and other data in a
Geographic Information System (GIS) to model ungaged sub-basins is being
explored to obtain response functions.
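Illustrative sketch (not part of the original abstract): once a unit hydrograph is obtained, whether from gridded rainfall or from GIS-derived topography, the direct-runoff hydrograph at the outlet is the discrete convolution of excess rainfall with the unit hydrograph ordinates. The ordinates and excess-rainfall series below are made-up placeholders, not values from the study.

    import numpy as np

    # Hypothetical 1-hour unit hydrograph ordinates (m^3/s per mm of excess rainfall)
    # and an excess-rainfall series (mm per hour); both are illustrative only.
    unit_hydrograph = np.array([0.1, 0.4, 0.8, 0.6, 0.3, 0.1])
    excess_rain = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 0.0])

    # Discrete convolution gives the direct-runoff hydrograph at the outlet.
    direct_runoff = np.convolve(excess_rain, unit_hydrograph)
    print(np.round(direct_runoff, 2))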
The impact of these approaches will be evaluated
on basins in the Arkansas and Red River drainages. The necessary
data are being collected for a number of forecast points. The techniques
will be evaluated with the aim of developing preliminary guidelines for
the use of high resolution precipitation estimates for operational river
forecasting.
Distributed Modeling: Phase 1 Results
NWS Technical Report 44 (200+ pages), February 1, 1999 (michael.smith@noaa.gov)
Table of Contents
- Introduction
- The Sensitivity of the Sacramento Model to Precipitation Forcing of Various Spatial and Temporal Scales
- Comparison of Mean Areal Precipitation Estimates from NEXRAD Stage III and Rain Gage Networks
- Numerical Experiments on the Sensitivity of Runoff Values to Level of Basin Disaggregation
- Semi-Distributed vs Lumped Modeling Tests
- Case Study in Upscaling and Downscaling of SAC-SMA Parameters
- Major Conclusions
- Recommendations
- Appendices
Statistical Comparison of Mean Areal Precipitation Estimates from WSR-88D, Operational, and Historical Gage Networks
D. Wang, M.B. Smith, Z. Zhang, S. Reed, V.I. Koren, 15th Annual Conference on Hydrology, 80th Annual Meeting of the AMS, Long Beach, Ca., January 10-14, 2000.
Abstract: The mean areal precipitation (MAP) estimates derived using precipitation data from the River Forecast Center's (RFC) Weather Surveillance Radar 1988-Doppler (WSR-88D), the RFC's operational gage network, and the National Climatic Data Center's (NCDC) cooperative observer gage network are statistically compared over eight basins in the Arkansas-Red River basin. Six-hour radar-based MAP (MAPX), operational MAP (MAPO), and historical MAP (MAPH) estimates for the period from June 1, 1993 to May 31, 1998 are used. The MAPX values are derived from the gridded hourly NEXRAD Stage III precipitation estimates before June 15, 1996 and thereafter from a mixed use of the Stage III and P1 processing algorithms, whereas the MAPO and MAPH values are computed by a calibration preprocessor using a Thiessen polygon weighting method. In terms of long-term averages, the MAPX values are in very good agreement with MAPO and MAPH. The overall average ratios of MAPX to MAPO and MAPH values over the eight basins are 0.985 and 1.011, respectively. However, the MAPX values are strongly dependent on the processing algorithms. Underestimation in the range of 3-6% was found for MAPX values in comparison to MAPO estimates before June 15, 1996, while overestimation was noted for MAPX values after June 15, 1996. When radar and gages predicted the same amount of precipitation, the radar estimates tended to be more intense and less spread out. Effects of the three MAP estimates on SAC-SMA runoff output are also studied. Statistical analysis of the three simulations versus observed runoff reveals that the percent bias of the MAPX simulation is -9.98%, while those of the MAPO and MAPH simulations are -14.72% and -17.73%, respectively.
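Illustrative sketch (not part of the original abstract): the percent-bias statistic quoted above is the total simulation error expressed as a percentage of the observed total. The runoff series below are placeholders, not data from the study.

    import numpy as np

    def percent_bias(simulated, observed):
        """Percent bias: 100 * sum(simulated - observed) / sum(observed).
        Negative values indicate the simulation under-predicts total runoff."""
        simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
        return 100.0 * (simulated - observed).sum() / observed.sum()

    # Placeholder runoff series (m^3/s).
    obs = [10.0, 25.0, 60.0, 30.0, 12.0]
    sim = [9.0, 22.0, 55.0, 27.0, 11.0]
    print(f"percent bias = {percent_bias(sim, obs):.2f}%")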