REddyProc typical workflow

Importing the half-hourly data

The workflow starts with importing the half-hourly data. The example reads a text file with data of the year 1998 from the Tharandt site and converts the separate decimal columns year, day, and hour to a POSIX timestamp column. Next, it initializes the sEddyProc class.
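A minimal sketch of this import, assuming the example file Example_DETha98.txt that ships with (or is downloaded by) the package and the current function names fLoadTXTIntoDataframe, fConvertTimeToPosix, and sEddyProc$new:

library(REddyProc)
# load the half-hourly example data of the Tharandt site (DE-Tha), year 1998
fileName <- getExamplePath('Example_DETha98.txt', isTryDownload = TRUE)
EddyData <- fLoadTXTIntoDataframe(fileName)
# convert the decimal columns Year, DoY, and Hour to a POSIX timestamp column
EddyDataWithPosix <- fConvertTimeToPosix(
  EddyData, 'YDH', Year = 'Year', Day = 'DoY', Hour = 'Hour')
# initialize the sEddyProc class with the variables used in later steps
EProc <- sEddyProc$new(
  'DE-Tha', EddyDataWithPosix, c('NEE', 'Rg', 'Tair', 'VPD', 'Ustar'))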

A fingerprint plot of the source half-hourly data already reveals several gaps. A fingerprint plot is a color-coded image of the half-hourly fluxes, with time of day on the x axis and day of the year on the y axis.
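Such a plot can be produced, for example, with the sPlotFingerprintY method (method and argument names assumed from the current API):

# fingerprint plot of the measured NEE for a single year
EProc$sPlotFingerprintY('NEE', Year = 1998)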

For writing plots of data of several years to pdf, see also sEddyProc_sPlotFingerprint.

Estimating the uStar threshold distribution

The second step is the estimation of the distribution of uStar thresholds to identify periods of low friction velocity (uStar), during which NEE is biased low. Discarding periods with low uStar is one of the largest sources of uncertainty in aggregated fluxes. Hence, several quantiles of the distribution of the uncertain uStar threshold are estimated by a bootstrap.

The friction velocity, uStar, needs to be in a column named “Ustar” of the input dataset.
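A sketch of this estimation, assuming the sEstimateUstarScenarios interface; nSample and probs are illustrative values:

# bootstrap the uStar threshold estimation and keep three quantiles as scenarios
EProc$sEstimateUstarScenarios(nSample = 100L, probs = c(0.05, 0.5, 0.95))
# inspect the estimated threshold distribution
EProc$sGetEstimatedUstarThresholdDistribution()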

##   aggregationMode seasonYear  season     uStar        5%       50%       95%
## 1          single         NA    <NA> 0.4162500 0.3807315 0.4488932 0.6600387
## 2            year       1998    <NA> 0.4162500 0.3807315 0.4488932 0.6600387
## 3          season       1998 1998001 0.4162500 0.3807315 0.4488932 0.6600387
## 4          season       1998 1998003 0.4162500 0.3361209 0.4056250 0.5838667
## 5          season       1998 1998006 0.3520000 0.3298500 0.3900000 0.4460104
## 6          season       1998 1998009 0.3369231 0.2461368 0.3868269 0.5114091
## 7          season       1998 1998012 0.1740000 0.2273631 0.4263509 0.6600387

The output reports an annually aggregated uStar estimate of 0.42 for the original data and 0.38, 0.45, and 0.66 for the lower, median, and upper quantiles of the estimated distribution. The threshold can vary between periods of different surface roughness, e.g. before and after harvest. Therefore, there are estimates for different time periods, called seasons. These seasonal estimates are by default aggregated to entire years.

The subsequent post-processing steps will be repeated using the four \(u_*\) threshold scenarios (non-resampled and three quantiles of the bootstrapped distribution). They require specifying a \(u_*\) threshold for each season and a suffix to distinguish the outputs related to different thresholds. By default, the annually aggregated estimates are used for each season within the year.
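The thresholds associated with each season and scenario can be inspected, e.g. with the sGetUstarScenarios method (assumed from the current API):

# show the uStar threshold used in each season for each scenario
EProc$sGetUstarScenarios()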

##    season   uStar       U05       U50       U95
## 1 1998001 0.41625 0.3807315 0.4488932 0.6600387
## 2 1998003 0.41625 0.3807315 0.4488932 0.6600387
## 3 1998006 0.41625 0.3807315 0.4488932 0.6600387
## 4 1998009 0.41625 0.3807315 0.4488932 0.6600387
## 5 1998012 0.41625 0.3807315 0.4488932 0.6600387

Gap-filling the net ecosystem exchange (NEE)

The second post-processing step is filling the gaps in NEE using information from the valid data. Here, we decide to use the same annual \(u_*\) threshold estimate in each season, as obtained above, and to compute the uncertainty also for valid records (FillAll).
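A sketch of this call, assuming the sMDSGapFillUStarScens method, which repeats the gap-filling for each uStar scenario and forwards FillAll to the underlying gap-filling routine:

# gap-fill NEE for each uStar scenario; FillAll = TRUE also estimates
# the uncertainty of valid records
EProc$sMDSGapFillUStarScens('NEE', FillAll = TRUE)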

The screen output (not shown here) already shows that the \(u_*\)-filtering and gap-filling was repeated for each given estimate of the \(u_*\) threshold, i.e. each column in uStarThAnnual, marking 22% to 38% of the data as gaps. For gap-filling without prior \(u_*\)-filtering using sEddyProc_sMDSGapFill, or for applying single or user-specified \(u_*\) thresholds using sEddyProc_sMDSGapFillAfterUstar, see vignette("uStarCases").

For each of the different \(u_*\) threshold estimates, a separate set of output columns of filled NEE and its uncertainty is generated, distinguished by the suffixes given with uStarSuffixes. "_f" denotes the filled value and "_fsd" the estimated standard deviation of its uncertainty.
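The generated columns can be listed from the exported results, for example:

# filled NEE columns and their uncertainty columns for each scenario
grep("NEE_.*_f$", names(EProc$sExportResults()), value = TRUE)
grep("NEE_.*_fsd$", names(EProc$sExportResults()), value = TRUE)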

## [1] "NEE_uStar_f" "NEE_U05_f"   "NEE_U50_f"   "NEE_U95_f"  
## [1] "NEE_uStar_fsd" "NEE_U05_fsd"   "NEE_U50_fsd"   "NEE_U95_fsd"

A fingerprint plot of one of the new variables shows that the gaps have been filled.
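For example, for the median uStar scenario (column name taken from the suffixes above):

# fingerprint plot of gap-filled NEE
EProc$sPlotFingerprintY('NEE_U50_f', Year = 1998)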

Partitioning net flux into GPP and Reco

The third post-processing step is partitioning the net flux (NEE) into its gross components, GPP and Reco. The partitioning needs to carefully distinguish between night-time and day-time. Therefore, it needs a specification of the geographical coordinates and time zone to allow computing sunrise and sunset. Further, the missing values in the meteorological data used need to be filled.
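A sketch of these preparations, assuming the sSetLocationInfo and sMDSGapFill methods; the coordinates and time zone of DE-Tha (about 51.0°N, 13.6°E, UTC+1) are given here only for illustration:

# provide coordinates and time zone for computing sunrise and sunset
EProc$sSetLocationInfo(LatDeg = 51.0, LongDeg = 13.6, TimeZoneHour = 1)
# fill gaps in the meteorological drivers used by the partitioning
EProc$sMDSGapFill('Tair', FillAll = FALSE)
EProc$sMDSGapFill('Rg', FillAll = FALSE)
EProc$sMDSGapFill('VPD', FillAll = FALSE)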

Now we are ready to invoke the partitioning, here by the night-time approach, for each of the several filled NEE columns.
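Assuming the scenario-aware variant of the night-time partitioning:

# night-time based partitioning of NEE into Reco and GPP for each uStar scenario
EProc$sMRFluxPartitionUStarScens()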

The results are stored in columns Reco and GPP_f modified by the respective \(u_*\) threshold suffix.

## [1] "Reco_U95"    "GPP_U95_f"   "Reco_U50"    "GPP_U50_f"   "Reco_U05"   
## [6] "GPP_U05_f"   "Reco_uStar"  "GPP_uStar_f"

Visualizing the results with a fingerprint plot gives a compact overview.
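For example, for the derived GPP of the median uStar scenario:

# fingerprint plot of GPP
EProc$sPlotFingerprintY('GPP_U50_f', Year = 1998)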

For daytime-based flux partitioning see sEddyProc_sGLFluxPartition, which computes columns GPP_DT and Reco_DT.

Estimating the uncertainty of aggregated results

The results of the different \(u_*\) threshold scenarios can be used for estimating the uncertainty due to not knowing the threshold.

First, the mean GPP across the entire year is computed for each \(u_*\) scenario and converted from \({\mu mol\, CO_2\, m^{-2} s^{-1}}\) to \({gC\,m^{-2} yr^{-1}}\).
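A sketch of this computation; the column names GPP_<suffix>_f follow from the steps above, and the conversion uses the molar mass of carbon and the number of seconds per year:

FilledEddyData <- EProc$sExportResults()
uStarSuffixes <- c("uStar", "U05", "U50", "U95")
# mean half-hourly GPP (umol CO2 m-2 s-1) for each scenario
GPPAggCO2 <- sapply(uStarSuffixes, function(suffix)
  mean(FilledEddyData[[paste0("GPP_", suffix, "_f")]], na.rm = TRUE))
# convert umol CO2 m-2 s-1 to gC m-2 yr-1 (12.011 gC per mol CO2)
GPPAgg <- GPPAggCO2 * 1e-6 * 12.011 * 3600 * 24 * 365.25
print(GPPAgg)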

##    uStar      U05      U50      U95 
## 1919.176 1903.914 1958.870 1985.817

The difference between those aggregated values is a first estimate of the uncertainty range in GPP due to the uncertainty of the \(u_*\) threshold.
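One simple measure is the spread of the aggregated values relative to their median, continuing from the GPPAgg vector computed above:

# relative uncertainty of annual GPP due to the uStar threshold
(max(GPPAgg) - min(GPPAgg)) / median(GPPAgg)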

In this run of the example a relative error of about 4.2% is inferred.

For a better but more time-consuming uncertainty estimate, specify a larger sample of \(u_*\) threshold values, repeat the post-processing for each of them, and compute statistics from the larger sample of resulting GPP columns. This can be achieved by specifying a longer sequence of quantiles when calling sEstimateUstarScenarios in place of the command shown above.
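For example, with illustrative parameter values:

# use a finer set of quantiles of the bootstrapped threshold distribution
EProc$sEstimateUstarScenarios(
  nSample = 200L, probs = seq(0.025, 0.975, length.out = 39))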

Storing the results in a csv-file

The results still reside inside the sEddyProc class. We first export them to an R data.frame, append the columns to the original input data, and write this data.frame to a text file in a temporary directory.
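A sketch, assuming fWriteDataframeToFile and the objects created in the previous steps:

# export the results, append them to the input data, and write a text file
FilledEddyData <- EProc$sExportResults()
CombinedData <- cbind(EddyDataWithPosix, FilledEddyData)
fWriteDataframeToFile(CombinedData, 'DE-Tha-Results.txt', Dir = tempdir())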