Redesign Document for Transfer of Data for RFC River Stage Verification

7/9/2001
Revised: 10/24/2001
Introduction
This document addresses
methods to improve the certainty of transfer of observed and forecast
stage data values intended for use in RFC river forecast verification.
The transfer process is invoked periodically to extract observed
and forecast data values for selected locations from the Integrated
Hydrologic Forecast System Data Base (IHDB) and to place them in
the Verification Data Base (VDB).
The current process has
some shortcomings that need to be overcome.
The current transfer
method cannot guarantee that a value intended for use in verification,
after being deposited in the IHDB, will be transferred from the
IHDB to the VDB. Selection of values for transfer is based on a
comparison of the summary quality code associated with each data
value: a value is transferred only if its quality code exceeds the
code of the previously transferred value. There are circumstances
in which "better" data values will be placed in the IHDB
but will not be transferred because the quality code of an already
transferred value cannot be exceeded.
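For illustration, here is a minimal sketch of that comparison rule
in Python; the function and field names are hypothetical, not the
actual transfer code:

    def should_transfer(new_qc: int, transferred_qc: int) -> bool:
        # Current rule: transfer only when the new value's summary
        # quality code strictly exceeds the already-transferred one.
        return new_qc > transferred_qc

    # A corrected value whose quality code merely equals (or falls
    # below) the code of the value already transferred is skipped:
    print(should_transfer(new_qc=110, transferred_qc=110))  # False

The sketch shows how a revised, "better" value can be lost: equality
is not enough, so once a high quality code has been transferred,
later revisions are blocked.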
Also, with the introduction
of SHEF-encoded RVD products issued by WFOs, it has been discovered
that the transfer process does not distinguish between WFO-issued
forecasts for a location stored in the IHDB and RFC-issued forecasts
stored for that same location. This can lead to corruption of
the verification data set.
Redesign Goals
The two primary goals
of this redesign are to assure: 1) that an observed value desired
for verification use always reaches the VDB, and 2) that RFC-issued
forecast values are separated from WFO-issued forecast values by
the transfer process.
Secondary goals of the
redesign are: 1) that the value destined for verification use is
placed as far forward in the AWIPS data processing stream as possible
so that the value reaches all appropriate destinations (e.g. IHDB,
OFS, archive, VDB), 2) that a single, simple interface be used for
the insertion of the revised data, 3) that RVD data can be posted
to the IHDB for use by other applications, and 4) that transfer
performance is not seriously degraded.
Redesign Considerations
The two redesign issues
will be treated separately as: 1) difficulties introduced by the RVD
product, and 2) the certainty that a data value reaches the VDB.
Also, as in most redesigns, there are short-term fixes that satisfy
primary goals and objectives and more robust, long-term solutions
that address all goals and objectives. For the two redesign issues,
we will look at short- and long-term solutions.
1. RVD Issue
At this time, the process
to transfer data from the IHDB to the VDB cannot distinguish
a forecast or observed value provided in an RFC product (RVF)
from one provided in a WFO product (RVD). To ensure that the verification
process analyzes only RFC-generated information, the values provided
within an RVD must not be transferred to the VDB.
a. Short-term solution
The quick fix that satisfies the primary goal is not to store
values provided within an RVD product in the IHDB. This requires
no additional work beyond keeping RVD products from being
input to the shefdecode process. However, if those data values
are needed for other applications and uses (a secondary goal), not
storing them in the IHDB makes their use more complicated. There
is no performance penalty with this approach.
b. Interim solution
A fix satisfying both redesign goals can be accomplished with
the cooperation of the offices issuing the RVD products: they
would encode their values with the SHEF Type-Source (TS) code FE.
This code indicates that the value is the public version of the
forecast, which makes sense because the RVD is the public product.
These values would then be kept out of the VDB by requesting that
only values with a SHEF TS code of FF be transferred. However, any
time an RVD is prepared using the SHEF TS code FF, those values
will make their way to the VDB.
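A minimal sketch of that selection rule, assuming each decoded value
carries its SHEF type-source code (the record layout and location
identifier here are hypothetical):

    # Transfer only RFC forecast values carrying type-source "FF";
    # WFO public-version values ("FE") stay out of the VDB.
    records = [
        {"lid": "PITP1", "value": 12.4, "ts": "FF"},  # RFC forecast
        {"lid": "PITP1", "value": 12.1, "ts": "FE"},  # WFO public version
    ]
    to_transfer = [r for r in records if r["ts"] == "FF"]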
c. Long-term solution
The more robust solution to this problem
is to modify the VDB transfer process to accept values only
from a limited set of products. For instance, a filter can be applied
that transfers values only if the product in which they arrived
was of the form PITRVF (i.e., the CCCNNN portion of the so-called
AFOS PIL). The additional screening required by this approach
carries a performance penalty of unknown degree.
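As a sketch of such a filter (the product-identifier field and its
exact layout are assumptions for illustration):

    # Accept a value for transfer only when it arrived in an RFC
    # river forecast product, i.e. the NNN portion of the CCCNNN
    # AFOS PIL is "RVF".
    def from_rvf_product(product_id: str) -> bool:
        return len(product_id) >= 6 and product_id[3:6] == "RVF"

    print(from_rvf_product("PITRVF"))  # True  -- RFC product, transfer
    print(from_rvf_product("PBZRVD"))  # False -- WFO product, skip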
2. Transfer Issue
At this time, there is
no guarantee that revised observations posted to the IHDB will make
their way to the VDB. The VDB transfer process is controlled by
a criterion exceedance check. The criterion checked is the integer
value associated with the individual bits set in the IHDB Quality
Code, or IHDBQC (see http://hsp.nws.noaa.gov/oh/hrl/ihfs_qc/IHFSqc.pdf).
Since observations reach an RFC from outside sources, the SHEF Data
String Qualifiers (see http://hsp.nws.noaa.gov/oh/hrl/shef/version_1.3/chapter4.htm#CHAPTER4)
associated with a data value, and hence the IHDBQC, are not always
under RFC control. As such, there is no way to know what criterion
must be exceeded. A method is needed that recognizes that the
transfer from the IHDB to the VDB must take place under all conditions.
This issue has two
components. The first is the interface that must exist to allow
a user to enter a revision destined for the VDB; the second
is the strategy that ensures the revision is inserted into the VDB.
a. Interface
An interface, preferably
a GUI, must be provided to the user to set the location(s), dates
and times, and values for the revisions. Whatever the solution
to the transfer problem, the interface for setting the revised
value can appear identical to the user; only the underlying
insertion methods would change based on the solution pursued.
The dates, times, and values would be controlled completely by
the user, but the locations would be restricted to only those
in the VDB location table.
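A minimal sketch of that restriction, assuming the interface has
loaded the set of locations from the VDB location table (the names
here are hypothetical):

    # Only locations present in the VDB location table are offered
    # for revision; dates, times, and values remain user-controlled.
    vdb_locations = {"PITP1", "WLSO2"}  # loaded from the location table

    def accept_revision(lid: str) -> bool:
        return lid in vdb_locations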
b. Insertion
The issue here is
the assurance that a value intended for verification use makes
its way into the VDB.
i. Short-term solution
The primary goal
of assuring that data values consistently reach the VDB can
be met by inserting verification values directly into the VDB,
setting the IHDBQC bits such that they could not be exceeded,
thereby preventing verification values from being overwritten
by the transfer process. The interface for this could simply
be a script with an embedded dbaccess session, postponing the
need for a more sophisticated user interface. The script would
be the only new code needed for this quick fix. There would
be little, if any, performance penalty associated with this
approach.
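The logic of such a script, sketched here in Python for clarity (the
actual quick fix would be a shell script driving dbaccess; the table
names, column names, and the "cannot be exceeded" code are all
hypothetical):

    # Insert a verification value directly into the VDB and mark the
    # matching IHDB record so no later value can exceed its code.
    MAX_QC = 0x7FFFFFFF  # assumed "cannot be exceeded" quality code

    def insert_verification_value(db, lid, obstime, value):
        # "db" stands in for the embedded dbaccess/SQL session.
        db.execute("INSERT INTO vdb_stage (lid, obstime, value) "
                   "VALUES (?, ?, ?)", (lid, obstime, value))
        db.execute("UPDATE ihdb_stage SET quality_code = ? "
                   "WHERE lid = ? AND obstime = ?", (MAX_QC, lid, obstime))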
ii. Long-term solution
A full solution
would require changes to the IHDBQC and also to the code for
the VDB data transfer process, but with potentially significant
performance penalties. It would, however, place the revised
values far closer to the head of the AWIPS data processing stream
so that revisions find their way to all downstream locations.
(1) IHDB Quality Code
The IHDBQC is
a field in various IHDB tables, particularly the PE tables,
indicating the quality of the data based on internal and external
tests of the value. It is proposed that revisions intended
to be placed in the VDB via this method be constructed in
SHEF utilizing the SHEF Data String Qualifier ("DQM"),
indicating that a manual edit has been applied to the data.
At this time, however, there is no bit in the IHDBQC that
indicates a manual edit has been applied. Bit 22 appears
to be a placeholder for this, but it cannot distinguish
the default state from an applied manual edit. It is proposed
that one of the reserved IHDBQC detail bits be used to indicate
that a manual edit has been applied.
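A sketch of how such a detail bit could be set and tested; the bit
position used here is an assumption, not the actual IHDBQC
definition:

    MANUAL_EDIT_BIT = 1 << 20  # assumed reserved detail-bit position

    def set_manual_edit(qc: int) -> int:
        # Mark the quality code to record that a manual edit occurred.
        return qc | MANUAL_EDIT_BIT

    def manual_edit_applied(qc: int) -> bool:
        return (qc & MANUAL_EDIT_BIT) != 0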
(2) VDB Transfer Code
Once that is accomplished,
the VDB transfer process must be changed to always transfer a data
value from the IHDB to the VDB when this bit is set (i.e.,
its value is one) and the posting time of the data value in
the IHDB is after the VDB posting time. Since this approach
requires examining individual bits of an integer value for
every piece of data, significant processing slowdowns are
possible.
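The revised transfer test would then look something like this (the
field names and bit constant continue the hypothetical sketch
above):

    MANUAL_EDIT_BIT = 1 << 20  # same assumed bit position as above

    def must_transfer(qc: int, ihdb_posting_time, vdb_posting_time) -> bool:
        # Always transfer when the manual-edit bit is set and the
        # IHDB value was posted after the value already in the VDB.
        return ((qc & MANUAL_EDIT_BIT) != 0
                and ihdb_posting_time > vdb_posting_time)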
Conclusions
With full functionality
in mind, the recommendation is to pursue the long-term solutions
for both redesign issues. Before a final redesign decision can be
made, the resources necessary to effect the solutions must be
evaluated against the resources available so that schedules can
be determined. If a lack of resources would delay final
resolution, the short-term approaches for either or both issues
could be adopted in the interim to provide at least partial
redesign functionality.