Contributors: A. Mayer (Deutscher Wetterdienst), A.C. Mikalsen (DWD), H. Konrad (DWD), B. Würzler (DWD), T. Usedly (DWD), J.F. Meirink (KNMI)
Issued by: Koninklijk Nederlands Meteorologisch Instituut (KNMI) / Jan Fokke Meirink
Date: 19/01/2024
Ref: C3S_D312b_Lot1.0.4.8_201903_Updated_KPIs_v1.0 and C3S2_D312a_Lot1.3.7.1_202401_Unified_KPI_Approach_v1.1
Official reference number service contract: 2021/C3S2_312a_Lot1_DWD/SC1
History of modifications
List of datasets covered by this document
Related documents
Acronyms
List of figures
List of tables
Scope of the document
In this document, the general quality assessment approach for C3S_312b_Lot1 and C3S2_312a_Lot1 is outlined. This approach introduces an objective quality measure across the different datasets provided within the project. It was developed under C3S_312b_Lot1 in Copernicus phase 1, and consists of a description of common validation methods, including metrics and terminology, as well as a practical approach for the derivation of Key Performance Indicators (KPIs) and their evaluation for the Interim Climate Data Records (ICDRs). In phase 2, the suitability and the harmonised application of the KPI approach across the different data records were evaluated.
Executive summary
The climate data records (CDRs) and their temporal extensions by interim climate data records (ICDRs) provided within C3S_312b_Lot1 and C3S2_312a_Lot1 need to be quality assured before they are made publicly available. In order to harmonize the quality assurance (QA) of the different data records, a common approach was developed in phase 1. This approach is outlined in this document, along with an introduction to general validation methods and metrics used for QA of the CDRs. The method involves two steps. First, Key Performance Indicators (KPIs) for ICDRs are derived based on deviations of the corresponding CDRs from a reference data record. Second, these KPIs are evaluated for the ICDRs on a regular basis. The method thus assesses whether ICDRs are consistent with their corresponding Thematic Climate Data Records (TCDRs) in terms of deviations from a reference data record. The document also includes a revision of the KPIs for the TCDRs that were provided in the tender for this project [D7].
After application of the new common approach on a quarterly basis in phase 1, the suitability and practical utility of the ICDR KPI method was investigated in phase 2. It was concluded that the method works well in practice: it is straightforward to apply and readily gives indications when anomalies are present in an ICDR. In that case, more in-depth analyses follow to find the cause of the anomalies and to determine whether they are acceptable or whether corrections are necessary (if possible). The focus of the KPI method is on global scales. Hence, there is a risk that regional anomalies remain undetected. In practice, however, regional artifacts will also show up in evaluations at global scales, and more detailed regional KPIs would compromise the ease of application of the method. They would also lead to less comparable results and thereby undermine the strength of having a dataset-independent method that provides quantifiable and comparable measures when the deviation of an ICDR from its TCDR becomes too large. Therefore, it was decided not to implement regional KPIs.
The analysis showed that the KPI method is applied homogeneously to the different ICDRs by the different providers in the project, so that no further harmonization is needed in that respect. In summary, the approach developed in phase 1 will continue to serve as the standard for QA in phase 2.
1. Introduction
An important aspect of the C3S_312b_Lot1 (phase 1) and C3S2_312a_Lot1 (phase 2) projects is quality assurance (QA) of the various data records on Essential Climate Variables (ECVs) provided. This allows users at different levels – scientists, commercial customers, service providers and the general public – to gain confidence in the quality of the CDRs and to judge whether the CDRs are fit for their purpose. The QA system should follow state-of-the-art practices for validation of satellite-based data records and take into account existing guidance on error characterization.
Within the recently finalized QA4ECV project, a generic QA system has been set up. This system allows data providers to systematically organize their QA information. It consists of six quality indicators:
(1) product details,
(2) traceability chains,
(3) quality flags,
(4) validation,
(5) metrological traceability, and
(6) assessment against standards.
In C3S_312b_Lot1, this QA information is spread over several documents. For example, the first quality indicator – product details – is addressed in the Algorithm Theoretical Baseline Documents (ATBDs) and Product User Guide and Specifications (PUGSs).
This document focuses on the fourth quality indicator: validation. Firstly, it describes a general approach for product validation, including the adopted metrics and terminology. Secondly, it proposes a guideline for the establishment of KPIs for the ICDRs, based on the corresponding CDRs, and a concrete method for the evaluation of these KPIs on a frequent (quarterly) basis.
Specific details of the validation methods, including for example the reference datasets used, will depend on the ECV considered. For each data record, the Product Quality Assurance Document (PQAD) describes how the validation is performed, which reference datasets are used, which evaluation metrics are adopted, etc. In the Product Quality Assessment Report (PQAR) the validation results are presented, and it is concluded whether the KPIs are fulfilled.
Validation approaches for different data records have many aspects in common. Section 2 of this document summarises these aspects and gives some further background. In particular, the typically adopted metrics and terminology are outlined.
While for CDRs, the KPIs are based on requirements such as laid down by the Global Climate Observing System (GCOS), in particular GCOS-200 (2016), a different approach was adopted for ICDRs. KPIs were designed to assess the consistency between ICDRs and their corresponding CDRs, since this is regarded as their most relevant property. In Section 3, a method is presented for the establishment of KPIs along this line as well as for the evaluation of the KPIs on a regular basis. This unified method is successfully being applied for the various ICDRs in the project.
2. General validation methodology
Here we give some general guidelines regarding the methodology and terminology of product validation in C3S_312b_Lot1 and C3S2_312a_Lot1. This is heavily based on the review of validation practices for satellite-based Earth observation data by Loew et al. (2017).
According to Loew et al. (2017), validation can be defined as '(1) the process of assessing, by independent means, the quality of the data products derived from the system outputs and (2) confirmation, through the provision of objective evidence, that specified requirements, which are adequate for an intended use, have been fulfilled'. Both aspects are addressed here, and the 'specified requirements' are given by the KPIs. In the validation process five steps, which are highlighted in Section 2.1, ought to be followed. Additional analyses, including checks on the reported uncertainties are described in Section 2.2.
2.1 The validation process
The first step is quality checking. It involves the selection of data to enter the validation process using available quality information. This holds for satellite products, reference data and ancillary data. The second step is spatio-temporal collocation. Various constraints must be considered. In particular, satellite products and reference data should have comparable space/time sampling and comparable space/time resolution. Another, sometimes conflicting, requirement is that sufficient statistics should be gathered. The third step is homogenization. It includes conversions and e.g. the application of averaging kernels needed to make the satellite and reference data comparable. After this step, a set of satellite measurements ( \( x = \{x_{i}\},\ i=1,\dots,n \) ) and a corresponding set of reference data ( \( y = \{y_{i}\} \) ) is available for quantitative comparison. The \( (x_{i}, y_{i}) \) here refer to gridded and aggregated products: the KPIs are in most cases defined for monthly-mean global-mean quantities. Thus, the index \( i \) here refers to time slots (days or months) rather than to spatial locations. Since aggregated products are considered, the second step in the validation process (spatio-temporal collocation) will not be entirely feasible and the resulting uncertainties should be taken into account.
The fourth step is the calculation of metrics quantifying the consistency between satellite products and reference observations. The common metric for systematic differences is the bias \( b \):
\[ b(x,y)= E[x-y]= \frac{1}{n} \sum_{i=1}^n (x_{i} - y_{i}), \quad (1) \]where E is the expectation operator. For error distributions deviating strongly from Gaussianity, the median difference may be considered instead of the bias. To measure statistical spread, the root-mean-square deviation (RMSD) is commonly used:
\[ RMSD(x,y)= \sqrt{E[(x-y)^2]}= \sqrt{\frac{1}{n} \sum_{i=1}^n (x_{i} - y_{i})^2}. \quad (2) \]Often the RMSD is corrected for the bias between the datasets, yielding the bias-corrected or centered RMSD (cRMSD):
\[ cRMSD(x,y)= \sqrt{E[(x-y-b(x,y))^2]}. \quad (3) \]In some cases, the absolute value of deviations rather than their square, as in the RMSD, is taken. The resulting metric is the mean absolute deviation (MAD):
\[ MAD(x,y)= E[|x-y|]=\frac{1}{n} \sum_{i=1}^n |x_{i} - y_{i}|. \quad (4) \]A range of other metrics exists, including the linear (Pearson) correlation coefficient to measure statistical dependency between the datasets. Finally, the temporal stability \( \beta \) is defined as the change in the bias of a data set over time:
\[ \beta= \frac{d}{dt}(x-y). \quad (5) \]It can be estimated by linear regression analysis of the time series of \( x-y \) . Before estimating the stability, it is important to first de-seasonalise the datasets and check for breakpoints.
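The metrics of Eqs. (1)-(5) translate directly into a few lines of code. The sketch below (Python with numpy; function and variable names are ours, not part of any project software) assumes de-seasonalised input for the stability estimate:

```python
import numpy as np

def bias(x, y):
    """Systematic difference, Eq. (1)."""
    return np.mean(x - y)

def rmsd(x, y):
    """Root-mean-square deviation, Eq. (2)."""
    return np.sqrt(np.mean((x - y) ** 2))

def crmsd(x, y):
    """Bias-corrected (centered) RMSD, Eq. (3)."""
    d = x - y
    return np.sqrt(np.mean((d - np.mean(d)) ** 2))

def mad(x, y):
    """Mean absolute deviation, Eq. (4)."""
    return np.mean(np.abs(x - y))

def stability(x, y, t):
    """Temporal stability beta, Eq. (5): linear trend of x - y over time t,
    estimated by linear regression (de-seasonalised input assumed)."""
    slope, _intercept = np.polyfit(t, x - y, 1)
    return slope
```

Note that `crmsd` and `rmsd` coincide when the bias is zero, reflecting the relation RMSD² = b² + cRMSD².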
The final step involves analysis and interpretation. Here the main aim is to confirm that the specified requirements, as given by the KPIs, have been fulfilled with statistical significance. KPIs have been established for accuracy and stability, where accuracy is in most cases expressed through the bias and in some cases through the MAD. A useful relation for the uncertainty of the stability estimate, needed to verify the conformance with the requirements, is given by Weatherhead et al. (1998). This uncertainty depends on the length, variability and autocorrelation of the time series.
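The Weatherhead et al. (1998) relation mentioned above can be illustrated with a short sketch. We use the commonly quoted approximation for the 1-sigma trend uncertainty, \( \sigma_{\beta} \approx \frac{\sigma_{N}}{n^{3/2}} \sqrt{\frac{1+\phi}{1-\phi}} \), where \( \sigma_{N} \) is the standard deviation and \( \phi \) the lag-1 autocorrelation of the de-seasonalised residuals, and \( n \) the length of the time series in years. This is a simplified illustration, not the full derivation, and the function name is ours:

```python
import numpy as np

def trend_uncertainty(residuals, n_years):
    """Approximate 1-sigma uncertainty of a linear trend (per year), following
    the Weatherhead et al. (1998) approximation:
    sigma_N / n_years**1.5 * sqrt((1 + phi) / (1 - phi)),
    with phi the lag-1 autocorrelation of the de-seasonalised residuals."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    phi = np.corrcoef(r[:-1], r[1:])[0, 1]  # lag-1 autocorrelation
    sigma_n = r.std(ddof=1)
    return sigma_n / n_years**1.5 * np.sqrt((1 + phi) / (1 - phi))
```

As the formula shows, the uncertainty shrinks with the 3/2 power of the record length, which is why short ICDR segments do not allow meaningful stability conclusions (see Section 3).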
2.2 Additional analyses
Additional analyses and consistency checks beyond the KPI verification can be included. The spatial variation of deviations from the reference dataset is relevant information for users, even if it may not be captured in the KPIs (if these are based on global means). It can be visualized by maps and quantified by calculating statistics segregated by e.g. latitude band, surface type (land, water, snow, etc.) or time of day (daytime, nighttime, twilight). Calculating time-series anomalies is another possible additional analysis.
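As an illustration of such segregated statistics, the mean difference per latitude band can be computed as follows (a sketch; array names and band edges are illustrative, not taken from any project code):

```python
import numpy as np

def zonal_band_bias(diff, lats, band_edges):
    """Mean satellite-minus-reference difference per latitude band.

    diff: 1-D array of collocated differences,
    lats: matching latitudes in degrees,
    band_edges: band boundaries, e.g. [-90, -60, -30, 0, 30, 60, 90].
    Returns one mean per band (NaN for empty bands)."""
    diff = np.asarray(diff, dtype=float)
    lats = np.asarray(lats, dtype=float)
    out = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sel = (lats >= lo) & (lats < hi)
        out.append(diff[sel].mean() if sel.any() else np.nan)
    return np.array(out)
```

The same pattern applies to segregation by surface type or time of day, with the latitude mask replaced by the corresponding classification mask.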
Consistency checks on the reported product uncertainties are also important and useful (Merchant et al., 2017). Denoting the standard uncertainties in the satellite and reference measurements by \( u_{x_{i}} \) and \( u_{y_{i}} \), respectively, and the uncertainty due to collocation mismatch by \( \sigma \), one can check whether (Immler et al., 2010):
\[ |x_{i} - y_{i}| < k \sqrt{u_{x_{i}}^2 + u_{y_{i}}^2 + \sigma^2}, \quad (6) \]
where k is the so-called coverage factor, with k=1 and 2 corresponding to significance levels of 68% and 95%, respectively. If the fraction of the dataset for which Eq. (6) with k=2 holds is much smaller than 95%, the estimated uncertainties are probably too small. Conversely, if the fraction of the dataset for which Eq. (6) with k=1 holds is much larger than 68%, the uncertainties have probably been estimated too large. In practice, the reference measurement uncertainty \( u_{y_{i}} \) is not always known and \( \sigma \) is also difficult to quantify, so that statements about \( u_{x_{i}} \) may remain inconclusive.
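The Immler et al. (2010) consistency check described above amounts to counting the fraction of collocations for which the inequality holds. A minimal sketch (variable and function names are ours):

```python
import numpy as np

def uncertainty_consistency_fraction(x, y, u_x, u_y, sigma_m, k=2):
    """Fraction of collocations satisfying the Immler et al. (2010) check
    |x - y| < k * sqrt(u_x**2 + u_y**2 + sigma_m**2), with sigma_m the
    collocation-mismatch uncertainty. For k=2 this fraction should be close
    to 95% if the reported uncertainties are realistic; for k=1, close to 68%."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    total = np.sqrt(np.asarray(u_x, float) ** 2
                    + np.asarray(u_y, float) ** 2
                    + sigma_m ** 2)
    return float(np.mean(np.abs(x - y) < k * total))
```

Comparing the k=1 and k=2 fractions against 68% and 95% then gives the over-/under-estimation diagnosis described in the text.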
3. ICDR KPI evaluation approach
According to the project plan, the project team is supposed to frequently report on the compatibility of ICDRs with the KPIs as part of the quarterly status reports. In view of these requirements, the KPIs for the ICDRs should be applicable for routine quality checks on a quarterly basis and should be harmonized throughout the project to the extent possible.
The project proposal [D6] is not very specific on how to compute KPIs: the term ‘accuracy’ remains undefined and the initial performance targets (PTs), in Table 5-8 in [D7], vary strongly, sometimes suggesting an upper bound for the absolute bias (based on GCOS for some records and on validation reports for others), sometimes suggesting a range from negative to positive bias values. Besides this, for the relatively short individual ICDRs, the criteria for stability as described in the proposal will not allow meaningful conclusions.
Here, we present a variable-independent approach to monitor ICDR chains generated for the Copernicus Climate Data Store (CDS). It is a general strategy, which aims to provide a unified, objective evaluation of the different ICDRs. Given that very different opinions on “the best” approach may exist, the aim here is to propose a practical concept and common strategy for all ICDRs delivered to the CDS. The primary aim is to allow shortcomings in the ICDRs to be detected with respect to the CDR performance for which the KPIs were initially designed.
The focus in the proposed evaluation is on consistency of the ICDR with the CDR. Criteria for stability are not included because these will not allow meaningful conclusions for the relatively short individual ICDRs.
3.1 Basic idea
Our idea is to define KPIs to perform a unified, quick, but reliable quality check of the ICDRs. As ICDRs are released every 3 months, performing a comprehensive, in-depth validation with each release is not feasible. Therefore, we rather propose to compare the performance of new ICDRs against a suitable reference dataset with the performance of the (thoroughly validated) CDRs against the same reference dataset.
If the ICDR differs from the reference dataset in the same way the CDR does, we conclude that the ICDR is of the same quality as the CDR (i.e. the test indicates that the key performance of the ICDR is good/positive). But if the ICDR, statistically speaking, differs unusually widely from the reference dataset, we infer that we need to have a deeper look into the ICDR to identify possible errors (i.e. the test indicates that the key performance of the ICDR is bad/negative). To decide whether the ICDR differs, in a statistical sense, unusually widely from the reference dataset, we perform a statistical hypothesis test, a binomial test. To this end, we predefine a reference Performance Target (PT), i.e. a well-chosen range in which most of the TCDR differences (say 95%) are found. Then, we check how many values of the ICDR lie within this range. There are three possible outcomes:
- All ICDR values are within the range of the PT: the key performance of the ICDR is good (KPI = good).
- Some ICDR values are outside the range of the PT, but the binomial test shows that this is plausible: the key performance of the ICDR is still okay (KPI = good).
- Some ICDR values are outside the range of the PT, but the binomial test shows that it is not plausible: the key performance of the ICDR is bad (KPI = bad).
To decide whether the number of ICDR values falling outside the interval is still plausible, we predefine a threshold probability (i.e. a significance level) of e.g. 5%.
In the following, we outline this strategy in more detail.
3.2 KPI accuracy
3.2.1 Choice of a reference dataset
The approach to verify the KPI accuracy is to first identify a suitable reference dataset (subscript r below), against which differences of (near-) global mean values can be computed for the tested dataset (i.e. the CDR/ICDR that is delivered to the CDS; subscript t below). The chosen reference dataset should have as much spatial and temporal overlap with the tested dataset as possible. As most of our data have global coverage, a reference dataset with global coverage would consequently be preferable. Any choice, however, will most probably be a subjective one in the end. If no suitable reference data set is available (in time), we will directly take the actual CDR as reference data set, although this implicitly assumes that the true time series is stable in terms of mean value and variability.
3.2.2 CDR: Basis for the derivation of performance targets
The CDR will be used to derive PTs for the ICDR as follows. We compute spatially averaged (preferentially global mean) values of both the tested dataset, \( x_{j} \) , and the reference dataset, \( y_{j} \) , and get the respective differences \( d_{j} = x_{j}-y_{j} \) , for every available, daily or monthly, time slice \( j=1,...,N \) . In particular, this means that we generally avoid regridding of the datasets. Regridding is only applied if we feel a particular need for it, e.g. due to special characteristics of the data sets. If \( d_{j} \) has a strong seasonal cycle, it is advised to de-seasonalise it in a further step.
If the actual CDR is taken as reference data set, we define its long-term mean as the reference, i.e. \( y_{j}= \frac{1}{N} \sum_{k=1}^N x_{k} \) for all \( j \). Again, if \( d_{j} \) (or equivalently \( x_{j} \) ) has a strong seasonal cycle, de-seasonalisation is recommended.
Subsequently, suitable PTs for the ICDR, i.e. an update for Table 5-8 in [D7], are defined by the 2.5th and 97.5th percentiles, \( P_{2.5} \) and \( P_{97.5} \) , respectively, in the time series \( d_{j} \) . Thus, the new PTs for KPI accuracy are \( T_{lower}=P_{2.5} \) and \( T_{upper}=P_{97.5} \) . The updated Table 5-8 in [D7] will also specify that we expect (only) 95 % of all available ICDR values to be in the \( [T_{lower},T_{upper}] \) interval.
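Deriving the PTs from a CDR difference time series then amounts to taking two percentiles (a sketch; `d` denotes the, possibly de-seasonalised, difference series and the function name is ours):

```python
import numpy as np

def performance_targets(d):
    """Lower/upper performance targets for the ICDR KPI:
    the 2.5th and 97.5th percentiles of the CDR difference time series d."""
    t_lower, t_upper = np.percentile(np.asarray(d, dtype=float), [2.5, 97.5])
    return t_lower, t_upper
```

By construction, 95% of the CDR difference values lie inside the returned interval, which is exactly the expectation placed on the ICDR values.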
3.2.3 ICDR: Testing its performance
The ICDR is an extension of an existing CDR and will be assessed on the basis of the CDR distribution.
As for the TCDR, the differences \( d_{i} = x_{i}-y_{i} \) will be computed for every time slice in the ICDR (denoted by subscript \( i=1,...,n \) to distinguish it from the CDR), where \( y_{i} \) is either the global mean of the reference dataset for that time slice or the long-term global mean of the CDR. The same reference dataset needs to be used, or otherwise a significant deviation of CDR and ICDR biases would be expected simply due to the different references. The KPI check is carried out by verifying that 95 % of the values in the difference time series \( d_{i} \) lie between the upper and lower bounds obtained before, i.e. \( T_{lower} \leq d_{i} \leq T_{upper} \) for 95 % of all \( i \) . This procedure allows monitoring the quality of the ICDR with respect to the CDR to a certain degree.
The proposal [D6] states that “[if] the KPI is negative, then the project team will assess the data product in more detail investigating possible causes for the quality deterioration.” The above procedure no longer involves an indication of good data by KPI > 0 and bad data by KPI < 0. Consequently, we define a new criterion for the need of detailed assessment in the following.
The supposedly stochastic experiment can be characterized by two potential outcomes for the difference value at a given time:
\( T_{lower} \leq d_{i} \leq T_{upper} \quad \text{('successful') with a probability of } p_{0} \)
\( d_{i} < T_{lower} \quad \text{or} \quad d_{i} > T_{upper} \quad \text{with a probability of}\ 1 - p_{0} \)
i.e. the probability \( p_{n,k} \) to draw \( k \) successful (option 1) samples in a total of \( n \) samples \( (d_{1}, \dots, d_{n}) \) is given by the binomial distribution:
\[ p_{n,k} = \tbinom{n}{k} p_{0}^k (1-p_{0})^{n-k}. \quad (7) \]
Given k 'successful' ICDR difference values in a total of n, we test the hypothesis that the probability for one single sample of the ICDR difference value to fall into the 95% interval of the CDR is also 95% or higher (null hypothesis), i.e. \( p_{0} \geq 0.95 \) , by carrying out a one-sided binomial test. The respective alternative hypothesis is that \( p_{0} < 0.95 \) , i.e. the ICDR bias deviates significantly from the CDR bias.
By defining a significance level of 5%, \( \alpha=0.05 \), and computing the cumulative probability \( p_{n,k}^{cum} = \sum_{j=0}^k p_{n,j} \) for 0 to k 'successful' samples in the total of n ICDR difference values, we can decide whether to reject the null hypothesis:
\( p_{n,k}^{cum} < \alpha: p_{0} < 0.95, \quad \text{i.e. reject null hypothesis; the ICDR bias deviates from the CDR bias;} \textbf{ the ICDR needs to be assessed in detail.} \)
\( p_{n,k}^{cum} \geq \alpha: p_{0} \geq 0.95, \quad \text{null hypothesis not rejected; the ICDR bias performs at least as well as the CDR bias;} \textbf{ no further assessment of the ICDR is required at this stage.} \)
In Python, for example, \( p_{n,k}^{cum} \) can be computed with scipy.stats.binom_test(k, n, p0, alternative='less'); Matlab and R offer similar functions.
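Note that in recent SciPy versions the scipy.stats.binom_test function has been removed in favour of scipy.stats.binomtest, which yields the same cumulative probability. A sketch of the KPI decision rule (function and variable names are ours):

```python
from scipy.stats import binomtest

def kpi_check(k, n, p0=0.95, alpha=0.05):
    """One-sided binomial test of H0: p >= p0, given k 'successful' ICDR
    difference values out of n. Returns (p_cum, action_needed): detailed
    assessment is needed when the cumulative probability falls below alpha."""
    p_cum = binomtest(k, n, p0, alternative='less').pvalue
    return p_cum, p_cum < alpha
```

For the example of Section 3.2.4, kpi_check(9, 10) yields a cumulative probability of about 0.40, i.e. no action needed.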
The ICDR should always be treated as a whole, i.e. by cumulating past deliveries, so that the sample of the bias in the ICDR becomes larger over time. Optionally, we also check the latest ICDR addition separately to identify a possible break early.
Note that in some cases, it might be necessary to slightly change or extend the above mentioned method to make it applicable to each individual dataset, e.g. in case of a three-dimensional data set by performing the test for each altitude interval.
3.2.4 Example
Panel A of Figure 3-1 shows the time series of the global mean value of a synthetic tested dataset (blue), the global mean value of the respective reference dataset (red), and the resulting differences (orange). The interval defined by \( T_{lower} = P_{2.5} \) and \( T_{upper} = P_{97.5} \) based on the TCDR is indicated by thin black lines (not to be confused with the thick black lines indicating the periods for the CDR and the ICDR). The values corresponding to this interval are \( T_{lower} = -1.06 \) and \( T_{upper} = 4.95 \) . Panel B shows that k=9 of the n=10 ICDR difference values are successful, i.e. fall inside the 95% interval of the CDR.
Figure 3-1: Illustration of the ICDR evaluation approach. See text for more details.
The program call scipy.stats.binom_test(9, 10, 0.95, alternative='less') returns a cumulative probability for 0-9 successful draws of \( p_{n,k}^{cum}\approx 0.4 \), i.e. above the significance level \( \alpha=0.05 \), so the ICDR bias complies with the CDR bias and no action is required. Note that \( p_{n,k}^{cum} \) is smaller than \( \alpha \), and action required, only for \( k \leq 7 \) in this example with n=10 (see Table 3-1).
Table 3-1: Total number of ICDR-difference values n and corresponding minimum number of cases \( k_{min,95\%} \) that need to lie within the \( [T_{lower},T_{upper}] \) range to pass the binomial test at a significance level of 5%.
n | kmin,95%
3 | 2
6 | 5
9 | 7
10 | 8
12 | 10
15 | 13
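Table 3-1 can be reproduced by searching, for each n, the smallest k that still passes the test (a sketch using scipy; the function name is ours):

```python
from scipy.stats import binom

def k_min(n, p0=0.95, alpha=0.05):
    """Smallest number of 'successful' samples k for which the one-sided
    binomial test is still passed, i.e. P(K <= k | p0) >= alpha."""
    for k in range(n + 1):
        if binom.cdf(k, n, p0) >= alpha:
            return k
    return n  # not reached for alpha <= 1

# reproduce Table 3-1
table = {n: k_min(n) for n in (3, 6, 9, 10, 12, 15)}
```

This yields \( k_{min,95\%} \) = 2, 5, 7, 8, 10 and 13 for n = 3, 6, 9, 10, 12 and 15, in line with Table 3-1.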
3.3 Concluding remarks
We drop the initially foreseen KPI stability for ICDR checks completely since the stability itself, as defined in Eq. (5), is already included in existing validation reports for brokered CDRs, and will be included in non-brokered datasets in the C3S validation documents. The main reason for doing so is that the ICDR periods are expected to be too short for detecting trends in noisy (or rather highly variable) data. The above approach for the KPI accuracy/bias will also enable us to detect a significant trend in the ICDR bias, because eventually the ICDR bias will deviate significantly from the CDR bias if it grows consistently smaller or larger over time. Consequently, we propose to investigate possible trends in the bias once the binomial test for KPI accuracy/bias indicates the need for action.
The KPI accuracy of forthcoming ICDRs will be reported in the Quarterly Reports by providing the percentage of data that fall inside the interval, together with information on whether further validation is required, based on the binomial test: \( p_{n,k}^{cum} \geq \alpha=0.05 \) (no action) or \( p_{n,k}^{cum} < \alpha \) (action).
If an ICDR bias deviates significantly from the CDR bias for comprehensible and documentable reasons, the criterion of 95% of all ICDR difference values falling inside the 95% interval from the CDR or the significance level might have to be relaxed.
The usage of the percentiles of the CDR difference values implies that – in contrast to the concept of 'accuracy' – the optimal situation for the ICDR is not a near-zero bias, but a bias close to the mean/median bias of the CDR. We think that this makes sense, as a zero bias in the ICDR could be problematic if the CDR bias is clearly non-zero.
Any ICDR that is not simply the extension of a CDR would obviously be treated differently than proposed here.
In addition, we are aware of possible data quality variations or missing data within the ICDRs, since the accuracy and completeness of time-averaged satellite data can differ due to instrumental, transmission or system errors. More precisely, for monthly means the number of processed orbits per month determines the data quality and whether data are missing for a whole month or for certain parts of the globe. Note that we will not preview the data set quality and completeness before the KPI evaluation; any further data analysis or deeper gap analysis is only performed in case the performance targets are not met.
4. Overview of ICDR KPIs and exemplary application of the evaluation approach
For steady quality control of the new ICDR chains, the evaluation approach outlined in Chapter 3 is applied. Table 5-1 gives an overview of the ICDR KPI values derived from the corresponding CDRs. Missing entries, relating to ICDRs whose CDRs have not been brokered yet, will be added at a later stage.
The KPIs for the ICDRs are lower and upper bounds for the binomial test, reflecting the 2.5th and 97.5th percentiles of the corresponding CDR distributions, as explained in Chapter 3. The last column of Table 5-1 briefly describes important data-set-specific aspects of the analysis, e.g. the reference data set used. Details on the derivation of the ICDR KPIs are in most cases provided in the PQARs of the respective data records (see related documents D1-D5).
For completeness, the KPIs relevant for the CDRs are listed in Table 5-2. Extensive evaluation of these KPIs is performed in the validation reports of the respective CDRs.
We conclude this section with an example of the application of the ICDR KPI evaluation method. Regular account of the KPI evaluation results is given in the Quarterly Reports.
4.1 Exemplary application to the first delivery of the GPCP ICDR
As a non-synthetic example, we evaluate the KPI accuracy for the first delivery of the GPCP precipitation ICDR.
4.1.1 GPCP monthly v2.3
The 2.5th and 97.5th percentiles of the spatially averaged precipitation differences for the GPCP monthly v2.3 CDR, covering the period Jan 1979 to Dec 2017, are -0.175 mm/d and +0.189 mm/d, respectively. The reference is TMPA 3B43 v7, which is only available between 50°S and 50°N, and from Jan 1998. Consequently, all relevant values are obtained by averaging over this spatial TMPA window from Jan 1998 onwards.
The first delivery of the ICDR comprises Jan 2018 to Dec 2018, i.e. 12 realizations of monthly means. For all of these, the spatially averaged precipitation differences between GPCP and the reference product fall inside the 2.5th and 97.5th percentiles found in the monthly CDR ensemble. Thus \( k = n = 12 \), and we conclude that the ICDR at this stage is of at least the same quality as the CDR and therefore passes the described test.
4.1.2 GPCP daily v1.3
The GPCP daily v1.3 CDR covers the period 1st Oct 1996 to 31st Dec 2017. Our reference here is TMPA 3B42 v7, which is, like its monthly counterpart (see above), available from Jan 1998 between 50°S and 50°N. The respective 2.5th and 97.5th percentiles of the spatially averaged differences are -0.332 mm/d and 0.343 mm/d, respectively.
The first delivery of the ICDR comprises the same period as for the monthly product, 1st Jan 2018 to 31st Dec 2018, i.e. 365 realizations. As the TMPA 3B42 reference data set was still missing for one day (31st Dec 2018), 364 realizations remain. For 361 of these, the spatially averaged precipitation differences between GPCP and the reference product fall inside the 2.5th and 97.5th percentiles found in the daily CDR ensemble. Thus \( k = 361 \) of \( n = 364 \), the binomial test is passed, and we conclude here, too, that the ICDR quality is sufficient. (Only 338 or fewer successful values would have indicated that the ICDR's quality is lower than the CDR's in this context.)
5. Required KPI values for CDR and ICDR
Table 5-1: Required KPI values for ICDRs
Deliverable ID | ECV name | Product name | KPI 1: accuracy | Explanations/Comments | |
Precipitation | |||||
D3.3.1-v1.x D2.4.1 | Precipitation | PRECIP_1DD_GPCP_ICDR | p2.5=-0.525 mm/d p97.5=0.022 mm/d p2.5=-0.332 mm/d, p97.5=0.343 mm/d (constrained to +/-50° latitude) | CDR percentiles when compared to ERA-5 (successor of TMPA product due to unavailability from 2020 on). CDR percentiles when compared to TRMM TMPA 3B42 v7 inside +-50° latitude. | |
Precipitation | PRECIP_2.5DM_GPCP_ICDR | p2.5=-0.338 mm/d p97.5=-0.063 mm/d p2.5=-0.175 mm/d, p97.5=0.189 mm/d (constrained to +/-50° latitude) | CDR percentiles when compared to ERA-5 (successor of TMPA product due to unavailability from 2020 on). CDR percentiles when compared to TRMM TMPA 3B43 v7 inside +-50° latitude. | ||
Surface Radiation Budget | |||||
D3.3.4-v2.x D2.6.1 | Surface Radiation Budget | SIS_MM_AVHRR_CLARA (CLARA-A2.1) | p2.5=5.28 W/m² p97.5=14.66 W/m² | The CDR is compared to the surface measurements from the Baseline Surface Radiation Network (BSRN). The monthly averages are calculated from the high-resolution BSRN data. The collocations are selected using the nearest neighbor technique. Percentiles are calculated for the mean absolute bias. Satellite data with less than 20 observations per month are excluded. | |
Surface Radiation Budget | SIS_DM_AVHRR_CLARA (CLARA-A2.1) | p2.5=8.81 W/m² p97.5=40.95 W/m² | |||
Surface Radiation Budget | SDL_MM_AVHRR_CLARA (CLARA-A2.1) | p2.5=5.11 W/m² p97.5=11.63 W/m² | |||
Surface Radiation Budget | SOL_MM_AVHRR_CLARA (CLARA-A2.1) | p2.5=7.66 W/m² p97.5=25.79 W/m² | |||
D3.3.5-v2.x D2.6.1 | Surface Radiation Budget | SDL_MM_AVHRR (CLARA-A2.1) | None | None. Datasets are verified using the propagation of uncertainties method. The dataset fulfills the KPIs if the input data fulfills the KPIs. | |
Surface Radiation Budget | SOL_MM_AVHRR (CLARA-A2.1) | None | |||
Surface Radiation Budget | SRS_MM_AVHRR (CLARA-A2.1) | None | |||
Surface Radiation Budget | SNS_MM_AVHRR (CLARA-A2.1) | None | |||
Surface Radiation Budget | SNL_MM_AVHRR (CLARA-A2.1) | None | |||
Surface Radiation Budget | SRB_MM_AVHRR (CLARA-A2.1) | None | |||
D2.6.3 D2.6.4-Px | Surface Radiation Budget | SIS_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-7.95 W/m² p97.5=16.08 W/m² | KPIs are calculated based on the defined limits from the comparison between TCDR and reference dataset ERA5. | |
Surface Radiation Budget | SIS_DM_AVHRR_CLARA (CLARA-A3) | None | |||
Surface Radiation Budget | SDL_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-0.44 W/m² p97.5=0.38 W/m² | |||
Surface Radiation Budget | SNS_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-6.77 W/m² p97.5=14.18 W/m² | |||
Surface Radiation Budget | SNL_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-0.51 W/m² p97.5=0.38 W/m² | |||
Surface Radiation Budget | SRB_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-5.82 W/m² p97.5=10.49 W/m² | |||
D3.3.7-v3.x D2.1.1 D2.1.3 | Surface Radiation Budget | SIS_MM_CCI_AATSR_TCDR | p2.5=-1.29 W/m² p97.5=2.12 W/m² | CDR percentiles compared to CERES | |
Surface Radiation Budget | SRS_MM_CCI_AATSR_TCDR | p2.5=-0.45 W/m² p97.5=0.36 W/m² | |||
Surface Radiation Budget | SOL_MM_CCI_AATSR_TCDR | p2.5=-4.04 W/m² p97.5=3.68 W/m² | |||
Surface Radiation Budget | SDL_MM_CCI_AATSR_TCDR | p2.5=-1.95 W/m² p97.5=2.23 W/m² | |||
Water Vapour | |||||
D3.3.9-v1.x D2.5.1 | Water Vapour | GRM-29-L3-H-I1 | 0-4 km: p2.5=-2.00% p97.5=-0.93% 4-8 km: p2.5=-1.04% p97.5=+0.82% 8-12 km: p2.5=-0.66% p97.5=+2.13% | The observed water vapour data are retrieved from Metop RO data, and the reference data consist of co-located ERA-Interim data, averaged in the same way as the observations. We compute relative differences between monthly means of observed data and reference data on a global latitude-height grid. These relative differences are globally averaged (properly area weighted) and vertically averaged (in 0-4 km, 4-8 km, and 8-12 km layers). No attempt is made to subtract an annual cycle from the observation-reference differences. For each vertical layer, we find the 2.5 % and 97.5 % percentiles of the distribution of the 121 months in the CDR difference time series. | 
D3.3.12-v1.x | Upper Tropospheric Humidity | UTH_MW | To 31/08/2019: p2.5=-0.47% p97.5=0.55% Since 01/09/2019: p2.5=0.40% p97.5=1.57% (constrained to +/-60° latitude) | Until 31/08/2019: the values result from the comparison of the three UTH products from Metop-A, Metop-B, and NOAA-18 against ERA-Interim. The time periods considered are 2006-2015 for Metop-A, 2008-2015 for Metop-B, and 2013-2015 for NOAA-18. The values depict the lowest of the three 2.5 % percentiles and the highest of the three 97.5 % percentiles of the differences in daily global means. From 01/09/2019: the values result from the comparison of the two UTH products from Metop-A and Metop-B against ERA-5 for the period 01/01/2018-31/12/2018. The values depict the lowest of the two 2.5 % percentiles and the highest of the two 97.5 % percentiles of the differences in daily global means. In both cases, a cosine-latitude-weighted average was applied for the calculation of the means in each 1° x 1° grid box, and the comparison excludes the polar regions. | 
D3.3.14-v1.x | Water Vapour | TCWV_SSMI/SSMIS_TCDR | Compared to RSS_SSMI: p2.5=-0.52% p97.5=0.00085% Compared to ERA-5: p2.5=-0.6% p97.5=1.7% Compared to AIRS: p2.5=-0.28% p97.5=0.62% | Compared to three different reference datasets: RSS_SSMI, ERA-5, and AIRS. | 
Cloud Properties | |||||
D3.3.16-v2.x D2.6.1 | Cloud Properties | CFC_MM_AVHRR_CLARA (CLARA-A2.1) | p2.5=-0.718%, p97.5=0.576% | The values result from a comparison of the CLARA-A2 monthly CDR with the monthly CDR of MODIS Collection 6.1 (time period considered: May 2003 to December 2015). They depict the 2.5 % and 97.5 % percentiles of the annual-cycle corrected differences in global means. | |
Cloud Properties | CFC_DM_AVHRR_CLARA (CLARA-A2.1) | None | |||
Cloud Properties | CTO_MM_AVHRR_CLARA (CLARA-A2.1) | p2.5=-7.036 hPa, p97.5=4.353 hPa | |||
Cloud Properties | CTO_DM_AVHRR_CLARA (CLARA-A2.1) | None | |||
Cloud Properties | LWP_MM_AVHRR_CLARA (CLARA-A2.1) | p2.5=-0.002094 g/m², p97.5=0.002202 g/m² | |||
Cloud Properties | LWP_DM_AVHRR_CLARA (CLARA-A2.1) | None | |||
Cloud Properties | IWP_MM_AVHRR_CLARA (CLARA-A2.1) | p2.5=-0.003076 g/m², p97.5=0.002114 g/m² | |||
Cloud Properties | IWP_DM_AVHRR_CLARA (CLARA-A2.1) | None | |||
D2.6.3 D2.6.4-Px | Cloud Properties | CFC_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-0.575%, p97.5=0.571% | KPIs are calculated based on the defined limits from the comparison between TCDR and reference dataset MODIS Collection 6.1. | |
Cloud Properties | CFC_DM_AVHRR_CLARA (CLARA-A3) | None | |||
Cloud Properties | CTO_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-8.534 hPa p97.5=7.501 hPa | |||
Cloud Properties | CTO_DM_AVHRR_CLARA (CLARA-A3) | None | |||
Cloud Properties | CPPI_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-0.004792 W/m² p97.5=0.002958 W/m² | |||
Cloud Properties | CPPI_DM_AVHRR_CLARA (CLARA-A3) | None | |||
Cloud Properties | CPPL_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-0.003586 W/m² p97.5=0.004053 W/m² | |||
Cloud Properties | CPPL_DM_AVHRR_CLARA (CLARA-A3) | None | |||
D3.3.18-v3.x D2.1.1 D2.1.3 | Cloud Properties | CA_MM_CCI_AATSR | p2.5=-1% p97.5=3% | MODIS Collection 6.1 (Terra only) is used as reference dataset. | |
Cloud Properties | CTP_MM_CCI_AATSR | p2.5=-7.9 hPa p97.5=6.93 hPa | |||
Cloud Properties | IWP_MM_CCI_AATSR | p2.5=-16.4 g/m² p97.5=11.3 g/m² | |||
Cloud Properties | LWP_MM_CCI_AATSR | p2.5=-4.65 g/m² p97.5=4.48 g/m² | |||
Earth Radiation Budget | |||||
D3.3.20-v2.x D2.2.8 | Earth Radiation Budget | RSF_ERB_CERES (v1) | p2.5=0.4 W/m², p97.5=2.3 W/m² | The values result from a comparison of the CERES monthly CDR with ERA5 (time period considered: February 2002 to October 2018). They depict the 2.5 % and 97.5 % percentiles of the differences in global means. | 
Earth Radiation Budget | OLR_ERB_CERES (v1) | p2.5=-2.4 W/m² p97.5=-1.3 W/m² | |||
D2.2.9 D2.7.0 | Earth Radiation Budget | RSF_ERB_CERES (v2) | p2.5=0.52 W/m² p97.5=2.37 W/m² | The KPIs for the CERES EBAF 4.2 dataset have been calculated based on the comparison of the CERES monthly data with ERA5. | 
Earth Radiation Budget | OLR_ERB_CERES (v2) | p2.5=-2.36 W/m² p97.5=-1.25 W/m² | |||
D2.2.4 D2.2.6 | Earth Radiation Budget | OLR_HIRS | p2.5=-5.35 W/m² p97.5=-3.02 W/m² | ERA-5 is used as the reference dataset. | 
D3.3.23-v1.x D2.7.1 D2.7.2-v2.x | Earth Radiation Budget | TSI_TOA (v2) | p2.5=0.87 W/m² p97.5=1.83 W/m² | No reference observations are available for direct validation; comparison is made with NRL TSI v2. | 
D2.7.4 D2.7.6-P1 D2.7.7-P2 | Earth Radiation Budget | TSI_TOA (v3) | p2.5=0.052 W/m² p97.5=0.736 W/m² | No reference observations are available for direct validation; comparison is made with NRL TSI v2. | 
D3.3.26-v3.x D2.1.1 D2.1.3 | Earth Radiation Budget | OLR_CCI_AATSR | p2.5=-1.15 W/m² p97.5=0.9 W/m² | CDR percentiles when compared to CERES | |
Earth Radiation Budget | RSF_CCI_AATSR | p2.5=-1.36 W/m² p97.5=1.15 W/m² | |||
D2.6.3 D2.6.4-Px | Earth Radiation Budget | OLR_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-2.7 W/m² p97.5=1.55 W/m² | KPIs are calculated based on the defined limits from the comparison between TCDR and reference dataset ERA5. | |
Earth Radiation Budget | OLR_DM_AVHRR_CLARA (CLARA-A3) | None | |||
Earth Radiation Budget | RSF_MM_AVHRR_CLARA (CLARA-A3) | p2.5=-2.39 W/m² p97.5=3.55 W/m² | |||
Earth Radiation Budget | RSF_DM_AVHRR_CLARA (CLARA-A3) | None |
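The percentile-based KPI derivation described in the explanations above (area-weighted global means of the CDR-minus-reference differences, optional annual-cycle correction, then the 2.5 % and 97.5 % percentiles) can be sketched as follows. This is a minimal illustration with hypothetical function and variable names, not project code; it assumes a regular latitude-longitude grid and a record spanning whole years starting in January:

```python
import numpy as np

def kpi_percentile_bounds(diff_monthly, lat, remove_annual_cycle=True):
    """Derive KPI bounds as the 2.5th/97.5th percentiles of the global-mean
    CDR-minus-reference difference time series.

    diff_monthly: array of shape (n_months, n_lat, n_lon) with the monthly
    difference fields; lat: latitude centres in degrees. Cosine-latitude
    weights approximate the grid-cell area. If remove_annual_cycle is True,
    the mean difference of each calendar month is subtracted first, as done
    for the CLARA cloud-property KPIs (the GRM water-vapour KPIs skip this).
    """
    w = np.cos(np.deg2rad(lat))[None, :, None]
    n_lon = diff_monthly.shape[2]
    # Area-weighted global mean of the difference field for each month
    gm = (diff_monthly * w).sum(axis=(1, 2)) / (w.sum() * n_lon)
    if remove_annual_cycle:
        # Climatological mean difference per calendar month (assumes the
        # series starts in January and covers complete years)
        clim = gm.reshape(-1, 12).mean(axis=0)
        gm = gm - np.tile(clim, len(gm) // 12)
    return np.percentile(gm, 2.5), np.percentile(gm, 97.5)
```

An ICDR month would then be flagged when its (annual-cycle-corrected) global-mean difference falls outside the interval returned for the corresponding TCDR.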
Table 5-2: Required KPI values for CDRs
Deliverable ID | ECV name | Product name | KPI 1: accuracy | Explanations/Comments | KPI2: stability | Explanations/Comments | |
Precipitation | |||||||
D3.3.1-v1.0 | Precipitation | PRECIP_1DD_GPCP_CDR | 0.3 mm/d | Numbers are given for global means, taken from CM SAF target and threshold requirements (Product Requirements Document, CDOP-4), achieved when comparing HOAPS and GPCP | 0.02 mm/d/dec | Taken from CM SAF threshold requirement (Product Requirements Document, CDOP-4), based on global averages, achieved when comparing HOAPS and GPCP | |
Precipitation | PRECIP_2.5DM_GPCP_CDR | 0.3 mm/d | 0.02 mm/d/dec | ||||
D3.3.3-v1.0 | Precipitation | PRECIP_1DD_MW_TCDR | 0.3 mm/d | Numbers are given for global means, taken from CM SAF target and threshold requirements (Product Requirements Document, CDOP-4), achieved when comparing HOAPS and GPCP | 0.02 mm/d/dec | Taken from CM SAF threshold requirement (Product Requirements Document, CDOP-4), based on global averages, achieved when comparing HOAPS and GPCP | |
Precipitation | PRECIP_1DM_MW_TCDR | 0.3 mm/d | 0.02 mm/d/dec | ||||
Surface Radiation Budget | |||||||
D3.3.4-v2.0 | Surface Radiation Budget | SIS_MM_AVHRR_CLARA | 10 W/m² | Numbers are given based on CM SAF Product Requirement Document PRD-2. Reference dataset is BSRN surface stations; the parameter in the validation report is the mean absolute bias (MAB). | 1.5 W/m²/dec | Numbers are given for the trend in the bias vs. BSRN surface stations, based on CM SAF requirements | 
Surface Radiation Budget | SIS_DM_AVHRR_CLARA | 20 W/m² | 1.5 W/m²/dec | ||||
Surface Radiation Budget | SDL_MM_AVHRR_CLARA | 10 W/m² | 0.5 W/m²/dec | ||||
Surface Radiation Budget | SOL_MM_AVHRR_CLARA | 15 W/m² | 0.5 W/m²/dec | ||||
D3.3.5-v2.0 | Surface Radiation Budget | SRS_MM_AVHRR | None | None. Datasets are verified using the propagation of uncertainties method | None | None. Datasets are verified using the propagation of uncertainties method | |
Surface Radiation Budget | SNS_MM_AVHRR | None | None | ||||
Surface Radiation Budget | SNL_MM_AVHRR | None | None | ||||
Surface Radiation Budget | SRB_MM_AVHRR | None | None | ||||
D2.6.3 | Surface Radiation Budget | SIS_MM_AVHRR_CLARA (CLARA-A3) | 5 W/m² | Numbers are given based on CM SAF Product Requirement Document PRD-4. Reference dataset is BSRN surface stations; the parameter in the validation report is the mean absolute bias (MAB). | 1 W/m²/dec | Numbers are given for the trend in the bias vs. BSRN surface stations, based on CM SAF requirements | 
Surface Radiation Budget | SIS_DM_AVHRR_CLARA (CLARA-A3) | 15 W/m² | 1 W/m²/dec | ||||
Surface Radiation Budget | SDL_MM_AVHRR_CLARA (CLARA-A3) | 5 W/m² | 1 W/m²/dec | ||||
Surface Radiation Budget | SNS_MM_AVHRR_CLARA (CLARA-A3) | 5 W/m² | 1 W/m²/dec | ||||
Surface Radiation Budget | SNL_MM_AVHRR_CLARA (CLARA-A3) | 5 W/m² | 1 W/m²/dec | ||||
Surface Radiation Budget | SRB_MM_AVHRR_CLARA (CLARA-A3) | 5 W/m² | 1 W/m²/dec | ||||
D3.3.8-v3.0 | Surface Radiation Budget | SIS_MM_CCI_AATSR_TCDR | 1 W/m² | Based on GCOS requirements (https://gcos.wmo.int/en/essential-climate-variables/surface-radiation/ecv-requirements) | 0.2 W/m²/dec | Based on GCOS requirements (https://gcos.wmo.int/en/essential-climate-variables/surface-radiation/ecv-requirements) | |
Surface Radiation Budget | SDL_MM_CCI_AATSR_TCDR | 1 W/m² | 0.2 W/m²/dec | ||||
Surface Radiation Budget | SOL_MM_CCI_AATSR_TCDR | (1 W/m²) | No direct user requirement for SRB in the Cloud_cci project. Estimations are based on previously mentioned variables and surface albedo | (0.2 W/m²/dec) | No direct user requirement for SRB in the Cloud_cci project. Estimations are based on previously mentioned variables and surface albedo | ||
Surface Radiation Budget | SRS_MM_CCI_AATSR_TCDR | None | None | ||||
Surface Radiation Budget | SNS_MM_CCI_AATSR_TCDR | None | None | ||||
Surface Radiation Budget | SNL_MM_CCI_AATSR_TCDR | None | None | ||||
Surface Radiation Budget | SRB_MM_CCI_AATSR_TCDR | None | None | ||||
Water Vapour | |||||||
D3.3.9-v1.0 | Water Vapour | GRM-29-L3-H-R1 | 3% | According to current ROM SAF specifications. | N/A | According to current ROM SAF specifications. | 
D3.3.12-v1.0 | Upper Tropospheric Humidity | UTH_MW | 5% | Set according to the GCOS Implementation Plan (GCOS-200, The Global Observing System for Climate: Implementation Needs (2016)). The validation is performed within +/-60 degrees latitude on a daily basis. | 1 %/dec | Adapted GCOS requirement. Verified for each satellite (mainly for NOAA-18 and Metop-A which span ~10 years) within +/-60 degrees latitude. | |
D3.3.11-v1.0 | Water Vapour | TCWV_GV_CDR | 1.4 kg/m² | Numbers are given for global means, taken from CM SAF target requirements (Table 5-2 Requirement Review 3.4 document HOAPS 5.0, Ohring et al 2005 & ESA DUE GlobVapour - Saunders et al 2010) | 0.2 kg/m²/dec | Taken from CM SAF target requirements defined in RR (Requirement Review 3.4 document HOAPS 5.0 Table 5-2), based on global averages (Ohring et al 2005 & ESA DUE GlobVapour - Saunders et al 2010) | |
D3.3.14-v1.0 | Water Vapour | TCWV_SSMI/SSMIS_CDR | 1 kg/m² | Numbers are given for global means, taken from CM SAF VR HOAPS 4.0 document (Table 6-6) of optimal requirements, achieved when comparing HOAPS 4.0 and REMSS | 0.2 kg/m²/dec | Taken from CM SAF VR HOAPS 4.0 document (Table 6-6) of target requirement, based on global averages, achieved when comparing HOAPS 4.0 and REMSS | |
Cloud Properties | |||||||
D3.3.16-v2.0 | Cloud Properties | CFC_MM_AVHRR_CLARA | 5% | Numbers correspond to CM SAF target requirements (see EUMETSAT CM SAF CDOP-2 Product Requirements Document [SAF/CM/DWD/PRD, version 2.9], tables in Chapter 8 concerning requirements for CM-11011, CM-11031, CM-11051, CM-11061). They refer to global means. | 2%/dec | Numbers correspond to CM SAF target requirements (see EUMETSAT CM SAF CDOP-2 Product Requirements Document [SAF/CM/DWD/PRD, version 2.9], tables in Chapter 8 concerning requirements for CM-11011, CM-11031, CM-11051, CM-11061). They refer to global means. | 
Cloud Properties | CFC_DM_AVHRR_CLARA | 15% | 2%/dec | ||||
Cloud Properties | CTO_MM_AVHRR_CLARA | 50 hPa | 20 hPa/dec | ||||
Cloud Properties | CTO_DM_AVHRR_CLARA | 50 hPa | 20 hPa/dec | ||||
Cloud Properties | LWP_MM_AVHRR_CLARA | 10 g/m² | 3 g/m²/dec | ||||
Cloud Properties | LWP_DM_AVHRR_CLARA | None | 3 g/m²/dec | ||||
Cloud Properties | IWP_MM_AVHRR_CLARA | 20 g/m² | 6 g/m²/dec | ||||
Cloud Properties | IWP_DM_AVHRR_CLARA | None | 6 g/m²/dec | ||||
D2.6.3 | Cloud Properties | CFC_MM_AVHRR_CLARA (CLARA-A3) | 5 % | Numbers are given for global means, taken from CM SAF target and threshold requirements (Product Requirements Document, CDOP-4), achieved when comparing with CloudSat/CALIPSO, PATMOS-x, MODIS, ISCCP and ESA Cloud_cci v3. | 2%/dec | Numbers are given for global means, taken from CM SAF target and threshold requirements (Product Requirements Document, CDOP-4), achieved when comparing with CloudSat/CALIPSO, PATMOS-x, MODIS, ISCCP and ESA Cloud_cci v3. | 
Cloud Properties | CFC_DM_AVHRR_CLARA (CLARA-A3) | 5 % | 2%/dec | ||||
Cloud Properties | CTO_MM_AVHRR_CLARA (CLARA-A3) | CTP: 45 hPa CTH: 800 m | CTP: 15 hPa/dec CTH: 270 m/dec | ||||
Cloud Properties | CTO_DM_AVHRR_CLARA (CLARA-A3) | CTP: 45 hPa CTH: 800 m | CTP: 30 hPa/dec CTH: 270 m/dec | ||||
Cloud Properties | CPPI_MM_AVHRR_CLARA (CLARA-A3) | 20 g/m² | 6 g/m²/dec | ||||
Cloud Properties | CPPI_DM_AVHRR_CLARA (CLARA-A3) | 20 g/m² | 6 g/m²/dec | ||||
Cloud Properties | CPPL_MM_AVHRR_CLARA (CLARA-A3) | 10 g/m² | 3 g/m²/dec | ||||
Cloud Properties | CPPL_DM_AVHRR_CLARA (CLARA-A3) | 10 g/m² | 3 g/m²/dec | ||||
D3.3.17-v3.0 | Cloud Properties | CA_MM_CCI_AATSR | 5% | Based on GCOS requirements, listed in Table 7-2 of the Cloud_cci Validation Report, https://climate.esa.int/media/documents/Cloud_Product-Validation-and-Intercomparison-Report-PVIR_v6.0.pdf | 3% | Based on GCOS requirements, listed in Table 7-2 of the Cloud_cci Validation Report, https://climate.esa.int/media/documents/Cloud_Product-Validation-and-Intercomparison-Report-PVIR_v6.0.pdf | 
Cloud Properties | CTP_MM_CCI_AATSR | None | 15 hPa | ||||
Cloud Properties | CTH_MM_CCI_AATSR | Low: 0.5 km Middle: 0.7 km High: 1.6 km | None | ||||
Cloud Properties | CTT_MM_CCI_AATSR | None | None | ||||
Cloud Properties | CER_MM_CCI_AATSR | 10% | 1 µm | ||||
Cloud Properties | COT_MM_CCI_AATSR | 10% | 2% | ||||
Cloud Properties | LWP_MM_CCI_AATSR | 25% | 5% | ||||
Cloud Properties | IWP_MM_CCI_AATSR | 25% | 5% | ||||
Cloud Properties | CA_DM_CCI_AATSR | None | None | ||||
Cloud Properties | CTP_DM_CCI_AATSR | None | None | ||||
Cloud Properties | CTH_DM_CCI_AATSR | None | None | ||||
Cloud Properties | CTT_DM_CCI_AATSR | None | None | ||||
Cloud Properties | CER_DM_CCI_AATSR | None | None | ||||
Cloud Properties | COT_DM_CCI_AATSR | None | None | ||||
Cloud Properties | CWP_DM_CCI_AATSR | None | None | ||||
Earth Radiation Budget | |||||||
D3.3.20-v1.0 | Earth Radiation Budget | OLR_ERB_CERES | LW up: RMS 2.5 W/m² | Set according to results achieved in CERES_EBAF_Ed4.0 Data Quality Summary (January 12, 2018, available at https://ceres.larc.nasa.gov/documents/DQ_summaries/CERES_EBAF_Ed4.0_DQS.pdf). | LW up: < 0.2 W/m²/dec | Set according to the GCOS Implementation Plan (GCOS-200, The Global Observing System for Climate: Implementation Needs (2016)). | |
Earth Radiation Budget | RSF_ERB_CERES | SW up: RMS 2.5 W/m² | SW up: < 0.3 W/m²/dec | ||||
D3.3.21-v1.0 | Earth Radiation Budget | OLR_HIRS | LW up: RMS 2.5 W/m² | LW up: < 0.2 W/m²/dec | |||
D3.3.23-v1.0 | Earth Radiation Budget | TSI_TOA | 1 W/m² | Based on GCOS requirements | 0.3 W/m²/dec | Based on GCOS requirements | |
D3.3.25-v3.0 | Earth Radiation Budget | OLR_CCI_AATSR | 1 W/m² | Based on GCOS requirements (https://gcos.wmo.int/en/essential-climate-variables/earth-radiation/ecv-requirements) | 0.2 W/m²/dec | Based on GCOS requirements (https://gcos.wmo.int/en/essential-climate-variables/earth-radiation/ecv-requirements) | |
Earth Radiation Budget | RSF_CCI_AATSR | 1 W/m² | 0.3 W/m²/dec | ||||
D2.6.3 | Earth Radiation Budget | OLR_MM_AVHRR_CLARA (CLARA-A3) | 4 W/m² | 0.6 W/m²/dec | |||
Earth Radiation Budget | OLR_DM_AVHRR_CLARA (CLARA-A3) | 8 W/m² | 0.6 W/m²/dec | ||||
Earth Radiation Budget | RSF_MM_AVHRR_CLARA (CLARA-A3) | 4 W/m² | 0.6 W/m²/dec | ||||
Earth Radiation Budget | RSF_DM_AVHRR_CLARA (CLARA-A3) | 8 W/m² | 0.6 W/m²/dec | ||||
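Many entries in Table 5-2 express the stability KPI as a maximum allowed trend of the bias against the reference, in units per decade (e.g. 0.2 W/m²/dec for upward longwave flux). A minimal sketch of how such a check could be implemented, assuming a monthly global-mean bias series as input (function names are illustrative, not taken from the project code; a full assessment would also consider trend uncertainty, cf. Weatherhead et al., 1998):

```python
import numpy as np

def decadal_trend(monthly_bias):
    """Least-squares linear trend of a monthly global-mean bias series,
    returned in units per decade (e.g. W/m²/dec for the ERB products)."""
    t_years = np.arange(len(monthly_bias)) / 12.0
    slope, _ = np.polyfit(t_years, monthly_bias, 1)  # slope per year
    return slope * 10.0  # convert per year -> per decade

def meets_stability_kpi(monthly_bias, requirement):
    """True if the absolute decadal trend stays within the stability KPI."""
    return abs(decadal_trend(monthly_bias)) <= requirement
```

For example, a bias series drifting by 0.01 W/m² per month corresponds to 1.2 W/m²/dec and would fail a 0.2 W/m²/dec requirement.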
References
GCOS-200 (2016): The Global Observing System for Climate: Implementation Needs, Belward, A., and Dowell, M. (eds.), available at https://library.wmo.int/index.php?lvl=notice_display&id=19838#.ZCQwtmnP02w.
Immler, F. J., Dykema, J., Gardiner, T., Whiteman, D. N., Thorne, P. W., and Vömel, H. (2010): Reference Quality upper-air measurements: Guidance for developing GRUAN data products, Atmos. Meas. Tech., 3(5), 1217-1231, https://doi.org/10.5194/amt-3-1217-2010.
Loew, A., Bell, W., Brocca, L., Bulgin, C. E., Burdanowitz, J., Calbet, X., Donner, R. V., Ghent, D., Gruber, A., Kaminski, T., Kinzel, J., Klepp, C., Lambert, J.-C., Schaepman-Strub, G., Schröder, M., and Verhoelst, T. (2017): Validation practices for satellite based earth observation data across communities, Rev. Geophys., https://doi.org/10.1002/2017RG000562.
Merchant, C. J., Paul, F., Popp, T., Ablain, M., Bontemps, S., Defourny, P., Hollmann, R., Lavergne, T., Laeng, A., de Leeuw, G., Mittaz, J., Poulsen, C., Povey, A. C., Reuter, M., Sathyendranath, S., Sandven, S., Sofieva, V. F., and Wagner, W. (2017): Uncertainty information in climate data records from Earth observation, Earth Syst. Sci. Data, 9, 511-527, https://doi.org/10.5194/essd-9-511-2017.
Weatherhead, E. C., Reinsel, G. C., Tiao, G. C., Meng, X.-L., Choi, D., Cheang, W.-K., Keller, T., DeLuisi, J., Wuebbles, D. J., Kerr, J. B., Miller, A. J., Oltmans, S. J., and Frederick, J. E. (1998): Factors affecting the detection of trends: Statistical considerations and applications to environmental data, J. Geophys. Res., 103, 17,149-17,161, https://doi.org/10.1029/98JD00995.