
Contributors: A. Mayer (DWD), A. C. Mikalsen (DWD), B. Würzler (DWD), H. Konrad (DWD), J.F. Meirink (KNMI)

Issued by: DWD/A. Mayer, A. C. Mikalsen, H. Konrad, B. Würzler and KNMI/J. F. Meirink

Date: 28/03/2019

Ref: C3S_D312b_Lot1.0.4.8_201903_Updated_KPIs_v1.0


History of modifications

| Version | Date | Description of modification | Chapters / Sections |
| --- | --- | --- | --- |
| V1 | 28/03/2019 | Initial version | All |

List of datasets covered by this document

| Deliverable ID | Product title | Product type (CDR, ICDR) |
| --- | --- | --- |
| D3.3.1-v1.0 | GPCP precipitation monthly (v2.3) and daily (v1.3) | CDR |
| D3.3.1-v1.x | GPCP precipitation monthly (v2.3) and daily (v1.3) | ICDR |
| D3.3.4-v1.0 | Surface Radiation Budget brokered from CLARA-A2 | CDR |
| D3.3.4-v2.0 | Surface Radiation Budget brokered from CLARA-A2 | CDR |
| D3.3.4-v2.x | Surface Radiation Budget brokered from CLARA-A2 | ICDR |
| D3.3.5-v1.0 | Global Surface Radiation Budget AVHRR | CDR |
| D3.3.5-v2.0 | Global Surface Radiation Budget AVHRR | CDR |
| D3.3.5-v2.x | Global Surface Radiation Budget AVHRR | ICDR |
| D3.3.9-v1.0 | Tropospheric Water Vapour Profiles | CDR |
| D3.3.9-v1.x | Tropospheric Water Vapour Profiles | ICDR |
| D3.3.12-v1.0 | Upper Tropospheric Humidity | CDR |
| D3.3.12-v1.x | Upper Tropospheric Humidity | ICDR |
| D3.3.11-v1.0 | Total Column Water Vapour from MERIS and SSMI/SSMIS | CDR |
| D3.3.14-v1.0 | Total Column Water Vapour from SSMI/SSMIS | CDR |
| D3.3.16-v1.0 | Cloud Properties brokered from CLARA-A2 | CDR |
| D3.3.16-v2.0 | Cloud Properties brokered from CLARA-A2 | CDR |
| D3.3.16-v2.x | Cloud Properties brokered from CLARA-A2 | ICDR |
| D3.3.20-v1.0 | Earth Radiation Budget from CERES | CDR |
| D3.3.20-v2.x | Earth Radiation Budget from CERES | ICDR |
| D3.3.21-v1.0 | Earth Radiation Budget from HIRS | CDR |

Related documents

| Reference ID | Document |
| --- | --- |
| D1 | Annex 2 to the Framework Agreement for C3S_312b_Lot1: Contractors Tender |
| D2 | Annex E to the Contractors Tender |

Acronyms

| Acronym | Definition |
| --- | --- |
| CDS | Climate Data Store |
| cRMSD | centered RMSD |
| ECV | Essential Climate Variable |
| ICDR | Interim Climate Data Record |
| KPI | Key Performance Indicator |
| MAD | Mean Absolute Deviation |
| PQAD | Product Quality Assurance Document |
| PT | Performance Target |
| PUGS | Product User Guide and Specifications |
| QA | Quality Assurance |
| QA4ECV | Quality Assurance for Essential Climate Variables |
| RMSD | Root-Mean-Square Deviation |
| TCDR | Thematic Climate Data Record |

Scope of the document

In this document, the general quality assessment approach for C3S_312b_Lot1 is outlined. This includes a description of common validation methods, including metrics and terminology, as well as a practical approach for the derivation of Key Performance Indicators (KPIs) and their evaluation for the Interim Climate Data Records (ICDRs). This approach introduces an objective quality measure across the different datasets provided within the project.

Executive summary

The data records provided within C3S_312b_Lot1 need to be quality assured before they are made publicly available. In order to harmonize the quality assurance of the different data records, a common approach needs to be agreed upon. In this document, common validation methods and metrics are introduced. Moreover, a concrete method for the derivation and evaluation of KPIs for ICDRs is proposed. This method involves an assessment of whether ICDRs are consistent with their corresponding Thematic Climate Data Records (TCDRs) in terms of deviations from a reference data record. The document also includes a revision of the KPIs for the TCDRs that were provided in the tender for this project [D2].

1. Introduction

An important aspect of the C3S_312b_Lot1 project is quality assurance (QA) of the various data records on Essential Climate Variables (ECVs) that it provides. This allows users at different levels – scientists, commercial customers, service providers and the general public – to gain confidence in the quality of the CDRs and to judge whether the CDRs are fit for their purpose. The QA system should follow state-of-the-art practices for validation of satellite-based data records and take into account existing guidance on error characterization.
Within the recently finalized QA4ECV project, a generic QA system has been set up. This system allows data providers to systematically organize their QA information. It consists of six quality indicators:

(1) product details,
(2) traceability chains,
(3) quality flags,
(4) validation,
(5) metrological traceability, and
(6) assessment against standards.

In C3S_312b_Lot1, this QA information is spread over several documents. For example, the first quality indicator – product details – is addressed in the Algorithm Theoretical Baseline Documents (ATBDs) and Product User Guide and Specifications (PUGSs).

This document focuses on the fourth quality indicator: validation. Firstly, it describes a general approach for product validation, including the adopted metrics and terminology. Secondly, it proposes a guideline for the establishment of KPIs for the ICDRs, based on the corresponding TCDRs, and a concrete method for the evaluation of these KPIs on a frequent (quarterly) basis.

Specific details of the validation methods, including for example the reference datasets used, will depend on the ECV considered. These are not included here but will be summarized in the Product Quality Assurance Documents (PQADs) for the respective ECVs.

2. General validation methodology

Here we give some general guidelines regarding the methodology and terminology of product validation in C3S_312b_Lot1. This is heavily based on the review of validation practices for satellite-based Earth observation data by Loew et al. (2017).

According to Loew et al. (2017), validation can be defined as '(1) the process of assessing, by independent means, the quality of the data products derived from the system outputs and (2) confirmation, through the provision of objective evidence, that specified requirements, which are adequate for an intended use, have been fulfilled'. Both aspects are addressed here, and the 'specified requirements' are given by the KPIs. The validation process should follow five steps, which are described in Section 2.1. Additional analyses, including checks on the reported uncertainties, are described in Section 2.2.

2.1 The validation process

The first step is quality checking. It involves the selection of data to enter the validation process using available quality information. This holds for satellite products, reference data and ancillary data. The second step is spatio-temporal collocation. Various constraints must be considered. In particular, satellite products and reference data should have comparable space/time sampling, and comparable space/time resolution. Another, sometimes conflicting, requirement is that sufficient statistics should be gathered. The third step is homogenization. It includes conversions and e.g. application of averaging kernels needed to make the satellite and reference data comparable. After this step, a set of satellite measurements \( x = \{x_{i}\}, i=1, \dots, n \), and a corresponding set of reference data \( y = \{y_{i}\} \) is available for quantitative comparison. The pairs \( (x_{i}, y_{i}) \) here refer to gridded and aggregated products: the KPIs are in most cases defined for monthly-mean global-mean quantities. Thus, the index \( i \) here refers to time slots (days or months) rather than to spatial locations. Since aggregated products are considered, the second step in the validation process (spatio-temporal collocation) will not be entirely feasible and resulting uncertainties should be taken into account.

The fourth step is the calculation of metrics quantifying the consistency between satellite products and reference observations. The common metric for systematic differences is the bias \( b \):

\[ b(x,y)= E[x-y]= \frac{1}{n} \sum_{i=1}^n (x_{i} - y_{i}), \quad (1) \]

where E is the expectation operator. For error distributions deviating strongly from Gaussianity, the median difference may be considered instead of the bias. To measure statistical spread, the root-mean-square deviation (RMSD) is commonly used:

\[ RMSD(x,y)= \sqrt{E[(x-y)^2]}= \sqrt{\frac{1}{n} \sum_{i=1}^n (x_{i} - y_{i})^2}. \quad (2) \]

Often the RMSD is corrected for the bias between the datasets, yielding the bias-corrected or centered RMSD (cRMSD):

\[ cRMSD(x,y)= \sqrt{E[(x-y-b(x,y))^2]}. \quad (3) \]

In some cases, the absolute value of the deviations is used rather than their square as in the RMSD. The resulting metric is the mean absolute deviation (MAD):

\[ MAD(x,y)= E[|x-y|]=\frac{1}{n} \sum_{i=1}^n |x_{i} - y_{i}|. \quad (4) \]

A range of other metrics exists, including the linear (Pearson) correlation coefficient to measure statistical dependency between the datasets. Finally, the temporal stability  \( \beta \)  is defined as the change in the bias of a data set over time:

\[ \beta= \frac{d}{dt}(x-y). \quad (5) \]

It can be estimated by linear regression analysis of the time series of  \( x-y \) . Before estimating the stability, it is important to first de-seasonalise the datasets and check for breakpoints.
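For illustration, the metrics in Eqs. (1)-(5) can be computed directly from two collocated, aggregated time series. The following Python sketch uses placeholder names (x, y, t) that are not taken from any project code, and assumes the series have already been quality-checked, collocated and, for the stability estimate, de-seasonalised:

    import numpy as np

    def validation_metrics(x, y, t):
        # x, y: collocated satellite and reference time series (1-D numpy arrays)
        # t: time in decimal years, used for the stability estimate
        d = x - y
        bias = d.mean()                             # Eq. (1)
        rmsd = np.sqrt((d ** 2).mean())             # Eq. (2)
        crmsd = np.sqrt(((d - bias) ** 2).mean())   # Eq. (3)
        mad = np.abs(d).mean()                      # Eq. (4)
        beta = np.polyfit(t, d, 1)[0]               # Eq. (5): slope of x - y over time
        return bias, rmsd, crmsd, mad, beta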

The final step involves analysis and interpretation. Here the main aim is to confirm that the specified requirements, as given by the KPIs, have been fulfilled with statistical significance. KPIs have been established for accuracy and stability, where accuracy is in most cases expressed through the bias and in some cases through the MAD. A useful relation for the uncertainty of the stability estimate, needed to verify the conformance with the requirements, is given by Weatherhead et al. (1998). This uncertainty depends on the length, variability and autocorrelation of the time series.

2.2 Additional analyses

Additional analyses and consistency checks beyond the KPI verification can be included. The spatial variation of deviations from the reference dataset is relevant information for users, even if it may not be captured in the KPIs (if these are based on global means). This can be visualized by maps and quantified by calculating statistics segregated by e.g. latitude band, surface type (land, water, snow, etc.) or time of day (daytime, nighttime, twilight). Another possible additional analysis is the calculation of time-series anomalies.


Consistency checks on the reported product uncertainties are also important and useful (Merchant et al., 2017). Denoting the standard uncertainties in the satellite and reference measurements by  \( u_{x_{i}} \) and \( u_{y_{i}} \) , respectively, and the uncertainty due to collocation mismatch by σ, one can check whether (Immler et al., 2010):

\[ |x_{i}-y_{i}|<k \sqrt{u_{x_{i}}^2+u_{y_{i}}^2+\sigma^2}, \quad (6) \]

where k is the so-called coverage factor, with k=1 and 2 corresponding to confidence levels of approximately 68% and 95%, respectively. If the fraction of the dataset for which Eq. (6) with k=2 holds is much smaller than 95%, the estimated uncertainties are probably too small. In contrast, if the fraction of the dataset for which Eq. (6) with k=1 holds is much larger than 68%, the uncertainties have probably been overestimated. In practice, the reference measurement uncertainty \( u_{y_{i}} \) is not always known and σ is also difficult to quantify, so that statements about \( u_{x_{i}} \) may remain inconclusive.
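As an illustration of this check, the fractions entering the comparison with 68% and 95% can be computed with a minimal sketch (variable names are placeholders: x, y are the collocated measurements, u_x, u_y their standard uncertainties and sigma the collocation-mismatch uncertainty):

    import numpy as np

    def uncertainty_consistency(x, y, u_x, u_y, sigma):
        # total uncertainty of the difference, as in Eq. (6)
        u_tot = np.sqrt(u_x ** 2 + u_y ** 2 + sigma ** 2)
        frac_k1 = np.mean(np.abs(x - y) < 1.0 * u_tot)  # compare with ~68 %
        frac_k2 = np.mean(np.abs(x - y) < 2.0 * u_tot)  # compare with ~95 %
        return frac_k1, frac_k2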

3. ICDR KPI evaluation approach

According to the project plan, the project team is required to report regularly on the compliance of the ICDRs with the KPIs as part of the quarterly status reports. In view of these requirements, the KPIs for the ICDRs should be applicable for routine quality checks on a quarterly basis and should be harmonized throughout the project to the extent possible.

The project proposal [D1] is not very specific on how to compute KPIs: the term ‘accuracy’ remains undefined and the initial performance targets (PTs), in Table 5-8 in [D2], vary strongly, sometimes suggesting an upper bound for the absolute bias (based on GCOS for some records and on validation reports for others), sometimes suggesting a range from negative to positive bias values. Besides this, for the relatively short individual ICDRs, the criteria for stability as described in the proposal will not allow meaningful conclusions.

For these reasons, we present here a new approach to monitor ICDR chains newly generated for the Climate Data Store (CDS). It is a general strategy which aims, for the first time in this project, to provide a unified objective evaluation of the different ICDRs. Given that very different opinions on “the best” approach may exist, the aim here is to propose a practical concept and common strategy for all ICDRs delivered to the CDS. The primary aim is to allow shortcomings in the ICDRs to be detected with respect to the TCDR performance for which the KPIs were initially designed.

3.1 Basic idea

Our idea is to define KPIs that allow a unified, quick, but reliable quality check of the ICDRs. As ICDRs are released every 3 months, performing a comprehensive, in-depth validation with each release is not feasible. Therefore, we instead propose to compare the performance of new ICDRs against a suitable reference dataset with the performance of the (thoroughly validated) TCDRs against the same reference dataset.

If the ICDR differs from the reference dataset in the same way the TCDR does, we conclude that the ICDR is of the same quality as the TCDR (i.e. the test indicates that the key performance of the ICDR is good/positive). But if the ICDR, statistically speaking, differs unusually widely from the reference dataset, we infer that we need to have a deeper look into the ICDR to identify possible errors (i.e. the test indicates that the key performance of the ICDR is bad/negative). To decide whether the ICDR differs, in a statistical sense, unusually widely from the reference dataset, we perform a statistical hypothesis test, namely a binomial test. To this end, we predefine a reference Performance Target (PT), i.e. a well-chosen range within which most of the TCDR differences (say 95%) are found. Then, we check how many values of the ICDR lie within this range. There are three possible outcomes:

  1. All ICDR values are within the range of the PT: the key performance of the ICDR is good (KPI = good).
  2. Some ICDR values are outside the range of the PT, but the binomial test shows that this is plausible: the key performance of the ICDR is still okay (KPI = good).
  3. Some ICDR values are outside the range of the PT, and the binomial test shows that this is not plausible: the key performance of the ICDR is bad (KPI = bad).

To decide whether the number of ICDR values falling outside the interval is still plausible, we predefine a threshold probability (i.e. a significance level) of e.g. 5%.
In the following, we outline this strategy in more detail.

3.2 KPI accuracy

3.2.1 Choice of a reference dataset

The approach to verify the KPI accuracy is to first identify a suitable reference dataset (subscript r below), against which differences of (near-) global mean values can be computed for the tested dataset (i.e. the TCDR/ICDR that is delivered to the CDS; subscript t below). The chosen reference dataset should have as much spatial and temporal overlap with the tested dataset as possible. As most of our data have global coverage, a reference dataset with global coverage would consequently be preferable. Any choice, however, will most probably be a subjective one in the end. If no suitable reference data set is available (in time), we will directly take the actual TCDR as reference data set, although this implicitly assumes that the true time series is stable in terms of mean value and variability.

3.2.2 TCDR: Basis for the derivation of performance targets

The TCDR will be used to derive PTs for the ICDR as follows. We compute spatially averaged (preferentially global mean) values of both the tested dataset, \( x_{j} \) , and the reference dataset, \( y_{j} \) , and get the respective differences  \( d_{j} = x_{j}-y_{j} \) , for every available, daily or monthly, time slice  \( j=1,...,N \) . In particular, this means that we generally avoid regridding of the datasets. Regridding is only applied if we feel a particular need for it, e.g. due to special characteristics of the data sets. If  \( d_{j} \) has a strong seasonal cycle, it is advised to de-seasonalise it in a further step.

If the actual TCDR is taken as reference data set, we define its long-term mean as the reference, i.e.  \( y_{j}= \frac{1}{N} \sum_{k=1}^N x_{k} \) . Again, if  \( d_{j} \)  (or equivalently  \( x_{j} \) ) has a strong seasonal cycle, de-seasonalisation is recommended.

Subsequently, suitable PTs for the ICDR, i.e. an update for Table 5-8 in [D2], are defined by the 2.5th and 97.5th percentiles,  \( P_{2.5} \) and  \( P_{97.5} \) , respectively, in the time series  \( d_{j} \) . Thus, the new PTs for KPI accuracy are  \( T_{lower}=P_{2.5} \)  and  \( T_{upper}=P_{97.5} \) . The updated Table 5-8 in [D2] will also specify that we expect (only) 95 % of all available ICDR values to be in the  \( [T_{lower},T_{upper}] \)  interval.
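As an illustration, the derivation of the performance targets from the TCDR difference time series could look as follows; this is a sketch with placeholder names, where the month array (calendar month of each time slice) is only needed if de-seasonalisation is required:

    import numpy as np

    def deseasonalise(d, month):
        # subtract the mean annual cycle (mean over all Januaries, Februaries, ...)
        # d: difference time series d_j; month: integer array (1-12) per time slice
        d = np.array(d, dtype=float)
        for m in range(1, 13):
            sel = (month == m)
            d[sel] -= d[sel].mean()
        return d

    def performance_targets(d):
        # T_lower and T_upper as the 2.5th and 97.5th percentiles of d_j
        return np.percentile(d, [2.5, 97.5])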

3.2.3 ICDR: Testing its performance

The ICDR is an extension of an existing TCDR and will be assessed on the basis of the TCDR distribution.

As for the TCDR, the differences  \( d_{i} = x_{i}-y_{i} \)  will be computed for every time slice in the ICDR (denoted by subscript  \( i=1,...,n \)  to distinguish it from the TCDR), where  \( y_{i} \)  is either the global mean of the reference dataset for that time slice or the long-term global mean of the TCDR. The same reference dataset needs to be used, or otherwise a significant deviation of TCDR and ICDR biases would be expected simply due to the different references. The KPI check is carried out by verifying that 95 % of the values in the difference time series  \( d_{i} \) lie between the upper and lower bounds obtained before, i.e.   \( T_{lower} \leq d_{i} \leq T_{upper} \)  for 95 % of all \( i \) . This procedure allows monitoring the quality of the ICDR with respect to the TCDR to a certain degree.

The proposal [D1] states that “[if] the KPI is negative, then the project team will assess the data product in more detail investigating possible causes for the quality deterioration.” The above procedure does not involve an indication of good data by KPI > 0 and bad data by KPI < 0 anymore. Consequently, we define a new criterion for the need of detailed assessment in the following.

The underlying stochastic experiment can be characterized by two potential outcomes for the difference value at a given time:

  1. \( T_{lower} \leq d_{i} \leq T_{upper} \quad \text{('successful') with a probability of } p_{0} \)

  2. \( T_{lower} > d_{i} \quad \text{or} \quad d_{i} > T_{upper} \quad \text{with a probability of}\ 1 - p_{0} \)

    i.e. the probability  \( p_{n,k} \)  to draw k successful (option 1) samples in a total of n samples ( \( d_{1}, \dots, d_{n} \) ) is given by the binomial distribution:

    \[ p_{n,k} = \tbinom{n}{k} p_{0}^k (1-p_{0})^{n-k}. \quad (7) \]

Given k 'successful' ICDR difference values in a total of n, we test the hypothesis that the probability for one single sample of the ICDR difference value to fall into the 95% interval of the TCDR is also 95% or higher (null hypothesis), i.e. \( p_{0} \geq 0.95 \) , by carrying out a one-sided binomial test. The respective alternative hypothesis is that \( p_{0} < 0.95 \) , i.e. the ICDR bias deviates significantly from the TCDR bias.

By defining a significance level of 5%, α=0.05, and computing the cumulative probability  \( p_{n,k}^{cum} = \sum_{j=0}^k p_{n,j} \)  for 0 to k 'successful' samples in the total of n ICDR difference values, we are able to test the null hypothesis:

  • \( p_{n,k}^{cum} < \alpha: p_{0} < 0.95, \quad \text{i.e. reject null hypothesis; the ICDR bias deviates from the TCDR bias;} \textbf{ the ICDR needs to be assessed in detail.} \)

  • \( p_{n,k}^{cum} \geq \alpha: p_{0} \geq 0.95, \quad \text{i.e. the null hypothesis is not rejected; the ICDR bias performs at least as well as the TCDR bias;} \textbf{ no further assessment of the ICDR is required at this stage.} \)

In Python, for example,  \( p_{n,k}^{cum} \)  can be computed with scipy.stats.binom_test(k, n, p0, alternative='less'). Matlab and R offer similar functions.
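A minimal sketch of the complete check, assuming T_lower and T_upper have been derived as in Section 3.2.2, is given below. It uses scipy.stats.binom.cdf, which yields the same cumulative probability as the binom_test call above (recent SciPy versions provide scipy.stats.binomtest as a replacement for binom_test):

    import numpy as np
    from scipy.stats import binom

    def icdr_kpi_check(d_icdr, t_lower, t_upper, p0=0.95, alpha=0.05):
        # d_icdr: ICDR difference time series d_i (1-D numpy array)
        n = len(d_icdr)
        k = int(np.sum((d_icdr >= t_lower) & (d_icdr <= t_upper)))  # 'successful' samples
        p_cum = binom.cdf(k, n, p0)      # cumulative probability of Eq. (7)
        action_needed = p_cum < alpha    # null hypothesis rejected -> detailed assessment
        return k, p_cum, action_needed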

The ICDR should always be treated as a whole, i.e. by cumulating past deliveries, so that the sample of the bias in the ICDR becomes larger over time. Optionally, we also check the latest ICDR addition separately to identify a possible break early.

Note that in some cases, it might be necessary to slightly change or extend the above mentioned method to make it applicable to each individual dataset, e.g. in case of a three-dimensional data set by performing the test for each altitude interval.

3.2.4 Example

Panel A of Figure 1 shows the time series of the global mean value of a synthetic tested dataset (blue), the global mean value of the respective reference dataset (red), and the resulting differences (orange). The interval defined by \( T_{lower} = P_{2.5} \)  and \( T_{upper} = P_{97.5} \)  based on the TCDR is indicated by thin black lines (not to be confused with the thick black lines indicating the periods for the TCDR and the ICDR). The values corresponding to this interval are \( T_{lower} = -1.06 \)  and \( T_{upper} = 4.95 \) . Panel B shows that k=9 of the n=10 ICDR difference values are successful, i.e. fall inside the 95% interval of the TCDR.

Figure 1 – Illustration of the ICDR evaluation approach. See text for more details.

The program call scipy.stats.binom_test(9, 10, 0.95, alternative='less') returns a cumulative probability for 0-9 successful draws of  \( p_{n,k}^{cum}\approx 0.4 \) , i.e. above the significance level \( \alpha=0.05 \) , so the ICDR bias complies with the TCDR bias and no action is required. Note that \( p_{n,k}^{cum} \)  is smaller than \( \alpha \) , and action is required, only for \( k \leq 7 \)  in this example with n=10 (see Table 1).

Table 1: Total number of ICDR-difference values n and corresponding minimum number of cases  \( k_{min,95\%} \)  that need to lie within the  \( [T_{lower},T_{upper}] \)  range to pass the binomial test at a significance level of 5% (see e.g. http://www.fuellenbach-online.de/fh/pdf/binomialverteilung.pdf, pp. 2-6).


| n | k_min,95% |
| --- | --- |
| 3 | 2 |
| 6 | 5 |
| 9 | 7 |
| 10 | 8 |
| 12 | 10 |
| 15 | 13 |
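The k_min values in Table 1 follow from the same cumulative binomial probability: for a given n, k_min is the smallest k for which the null hypothesis is not rejected. A short sketch using the binom.cdf call introduced above:

    from scipy.stats import binom

    def k_min(n, p0=0.95, alpha=0.05):
        # smallest number of 'successful' samples for which binom.cdf(k, n, p0) >= alpha
        for k in range(n + 1):
            if binom.cdf(k, n, p0) >= alpha:
                return k

    # [k_min(n) for n in (3, 6, 9, 10, 12, 15)] reproduces the values in Table 1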

3.3 Concluding remarks

We drop the initially foreseen KPI stability for ICDR checks completely since the stability itself, as defined in Eq. (5), is already included in existing validation reports for brokered TCDRs, and will be included in non-brokered datasets in the C3S validation documents. The main reason for doing so is that the ICDR periods are expected to be too short for detecting trends in noisy (or rather highly variable) data. The above approach for the KPI accuracy/bias will also enable us to detect a significant trend in the ICDR bias, because eventually the ICDR bias will deviate significantly from the TCDR bias if it grows consistently smaller or larger over time. Consequently, we propose to investigate possible trends in the bias once the binomial test for KPI accuracy/bias indicates the need for action.

The KPI accuracy of forthcoming ICDRs will be reported in the Quarterly Reports by providing the percentage of data that fall inside the interval, together with the information whether further validation is required based on the binomial test:  \( p_{n,k}^{cum} \geq \alpha=0.05 \)  (i.e. no action) or not (i.e. action).

If an ICDR bias deviates significantly from the TCDR bias for comprehensible and documentable reasons, the criterion of 95% of all ICDR difference values falling inside the 95% interval from the TCDR or the significance level might have to be relaxed.

The usage of the percentiles in the TCDR difference values implies that – in contrast to the concept of 'accuracy' – the optimal situation for the ICDR is not a near-zero bias, but a bias close to the mean/median bias of the TCDR. We think that this makes sense as zero bias in the ICDR could be problematic if the TCDR bias is clearly off-zero.

Any ICDR that is not simply the extension of a TCDR would obviously be treated differently than proposed here.

In addition, we are aware of possible data quality variations or missing data within the ICDRs since accuracy and completeness of time-averaged satellite data can differ due to instrumental, transmission or system errors. More precisely, for monthly means the number of processed orbits per month determines the data quality and whether there is missing data for a whole month or for certain parts of the globe. Please note that we will not do any previewing of the data set quality and completeness before the KPI evaluation. Any further data analysis or deeper gap analysis is only performed in the case of a rejection of the performance targets.

4. Application of the KPI evaluation approach to ICDRs

4.1 Updated KPI values for the ICDRs

For steady quality control of our new ICDR chains, we will apply the evaluation approach outlined in Chapter 3 above. To this end, we set the KPI values according to Table 2, which contains entries for all ICDRs whose respective TCDRs have already been brokered. These values correspond to the 2.5th and 97.5th percentiles introduced in Chapter 3 and serve as lower and upper bounds for the binomial test. Entries for ICDRs whose TCDRs have not yet been brokered will be added at a later stage. The last column of Table 2 briefly describes important data set specific aspects of the analysis, e.g. the reference data set used. We will report our test results regularly in the Quarterly Reports.

4.2 Exemplary application to the first delivery of the GPCP ICDR

As a non-synthetic example, we evaluate the KPI accuracy for the first delivery of the GPCP precipitation ICDR.

4.2.1 GPCP monthly v2.3

The 2.5th and 97.5th percentiles of the spatially averaged differences of precipitation for the GPCP monthly v2.3 TCDR, covering the time period Jan 1979 to Dec 2017, are -0.175 mm/d and +0.189 mm/d, respectively (see Table 2). The respective reference is TMPA 3B43 v7, which is only available between 50°S and 50°N, and from Jan 1998. Consequently, all relevant values are obtained by averaging over this spatial TMPA window from Jan 1998.

The first delivery of the ICDR comprises Jan 2018 to Dec 2018, i.e. 12 realizations of monthly means. For all of these, the spatially averaged precipitation differences between GPCP and the reference product fall inside the 2.5th and 97.5th percentiles found in the monthly TCDR ensemble. Thus \( p_{n,k}^{cum}=1 \geq 0.05 \) , and we conclude that the ICDR at this stage is of at least the same quality as the TCDR and therefore passes the described test.

4.2.2 GPCP daily v1.3

The GPCP daily v1.3 TCDR covers the period 1st Oct 1996 to 31st Dec 2017. Our reference here is TMPA 3B42 v7, which is, like its monthly realization (see above), available from Jan 1998 between 50°S and 50°N. The respective 2.5th and 97.5th percentiles of spatially averaged differences are -0.332 mm/d and 0.343 mm/d, respectively (see Table 2).

The first delivery of the ICDR comprises the same period as for the monthly product, 1st Jan 2018 to 31st Dec 2018, i.e. 365 realizations. As the TMPA 3B42 reference data set is still missing for one day (31st Dec 2018), 364 realizations remain. For 361 of these, the spatially averaged precipitation differences between GPCP and the reference product fall inside the 2.5th and 97.5th percentiles found in the daily TCDR ensemble. Thus \( p_{n,k}^{cum} \approx 0.99 \geq 0.05 \) , and we conclude here, too, that the ICDR quality is sufficient. (Only 338 or fewer values inside the interval would have indicated that the ICDR's quality is lower than the TCDR's in this context.)
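Using the check sketched in Section 3.2.3, both GPCP cases reduce to a single evaluation of the cumulative binomial probability; the numbers k and n are those quoted above:

    from scipy.stats import binom

    # GPCP monthly v2.3: all 12 monthly differences lie inside [T_lower, T_upper]
    print(binom.cdf(12, 12, 0.95) >= 0.05)    # True -> no action required

    # GPCP daily v1.3: 361 of 364 daily differences lie inside the interval
    print(binom.cdf(361, 364, 0.95) >= 0.05)  # True -> no action required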

5. Revision and statement of the KPIs for TCDRs

So far, C3S_312b_Lot1 has delivered several brokered TCDRs to the CDS. These data sets were selected, among other criteria, under the condition that their performance is compliant with, or close to, the GCOS requirements. In the following, we report on their data quality based on the two Key Performance Indicators “accuracy” and “stability”, as prescribed in the Contractors Tender [D1]. As we became aware of cases where the targets for the KPIs were not well defined or wrongly cited, we critically reviewed and updated these values. Since all brokered data sets have been produced outside C3S_312b_Lot1 and consequently cannot be altered, we mostly took, where available, the original data set requirements defined in the respective projects in which they were produced as the required KPI values. For brokered data sets from CM SAF, for instance, we provide original CM SAF requirements as required KPI values. For those data sets which will be produced within C3S_312b_Lot1 or which will be brokered at a later stage within C3S_312b_Lot1, the required KPI values will be specified in the course of the project.

5.1 Revision of KPIs for TCDRs

The performance targets for accuracy and stability have now been specified according to Table 3, which contains, for all datasets already brokered to the CDS, properly revised entries of the table given in Annex E to the Contractors Tender [D2].

5.2 Statement on the KPIs for TCDRs

In the following, we give an account of the quality of the TCDRs provided so far with respect to the KPIs listed in Table 3. Since these TCDRs are brokered products, we mostly refer to validation activities carried out in previous projects. For the detailed evaluation, see the respective validation reports.

5.2.1 Cloud properties TCDR brokered from CM SAF CLARA-A2 (D3.3.16-v1.0):

All four cloud products fulfill the CM SAF target requirements for accuracy and stability (which were, for CLARA-A2, adopted as KPI values within C3S_312b_Lot1) with respect to at least one qualified reference data set. CLARA-A2 was compared to synoptic observations (SYNOP data), observations from the CALIOP lidar onboard the CALIPSO satellite, the PATMOS-x (short for 'AVHRR Pathfinder Atmospheres-Extended') data set, observations from the International Satellite Cloud Climatology Project (ISCCP), measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the microwave-based liquid water path climatology from the University of Wisconsin. For more information see the CM SAF validation report. It encompasses in-depth validation of level-2b (instantaneous) and level 3 (monthly mean) data. The level-3 daily data were not tested explicitly, but the mix of level-2 (instantaneous) and level-3 (monthly data) studies provides enough information to ensure their quality.

For further details, see the Validation Report CM SAF Cloud, albedo, radiation data record, AVHRR-based, Edition 2 (CLARA-A2), Cloud Products, Ref: SAF/CM/DWD/VAL/GAC/CLD, available at https://www.cmsaf.eu/SharedDocs/Literatur/document/2016/saf_cm_smhi_val_gac_cld_2_3_pdf.

5.2.2 Precipitation TCDR brokered from GPCP (D3.3.1-v1.0):

The GPCP monthly v2.3 precipitation rates fulfil both the accuracy target (0.3 mm/d) and the stability target (0.034 mm/d/decade) when compared to the TMPA 3B43 v7 monthly product. Spatial averages in this case are computed covering the TRMM/TMPA latitudinal window between 50°S and 50°N.
The GPCP daily v1.3 precipitation rates fulfil the stability target (0.034 mm/d/decade) when compared to the TMPA 3B42 v7 daily product, again in the TRMM/TMPA window (see above). However, the accuracy target of 0.3 mm/d is violated on 7.5% of all available dates. Higher variability can be expected for a dataset at higher temporal and spatial resolution, as is the case for GPCP daily v1.3 compared to GPCP monthly v2.3. Consequently, it can be expected that the same target (0.3 mm/d) will not be equally applicable to both datasets. The targets were initially formulated based on the HOAPS v4.0 validation report, which does not include a daily product either. In this sense, we recommend accepting this violation in 7.5% of all cases as the expected behavior of a high-resolution dataset.

For further details, see the Validation Report CM SAF SSM/I and SSMIS products, HOAPS version 4.0, Ref: SAF/CM/DWD/VAL/HOAPS, available at https://www.cmsaf.eu/SharedDocs/Literatur/document/2017/saf_cm_dwd_val_hoaps4_1_2_pdf.

5.2.3 Surface Radiation Budget TCDR brokered from CM SAF CLARA-A2 (D3.3.4-v1.0):

The predefined requirements for the surface incoming shortwave radiation are given in CM SAF Product Requirement Document (PRD), Annex A. The validation of the surface radiation datasets was conducted against surface measurements from the Baseline Surface Radiation Network (BSRN) [Ohmura et al., 1998].

All products in the brokered CLARA-A2 dataset fulfil the updated GCOS requirements regarding the horizontal and temporal resolution. The GCOS requirements for accuracy and stability are formulated for the global mean and are not relevant for individual reference measurement networks (e.g. BSRN). Besides this, there is also the general problem of comparing area-averaged satellite data to point measurements.

The KPI values for accuracy have been selected as follows: SIS monthly means – 10 W/m², corresponding to the CM SAF target accuracy; SIS daily means – 20 W/m², corresponding to the CM SAF threshold accuracy; SDL monthly means – 10 W/m², corresponding to the CM SAF target accuracy; and SOL monthly means – 10 W/m², corresponding to the CM SAF target accuracy. These targets are met by all datasets and the KPIs are fulfilled. Considering the uncertainty of the surface observations of 5 W/m², less than 5% of the SIS monthly means exceed the target requirements. For the SIS daily means about 20% exceed the threshold accuracy. This number can be explained by the higher temporal resolution and thus fewer random deviations averaging out. For the SDL monthly means less than 14% exceed the target accuracy and for the SOL monthly means less than 32% exceed the target accuracy.

Stability requirements for the Surface Radiation Budget TCDR have been defined as follows: SIS monthly means – 1.5 W/(m² dec), SIS daily means – 1.5 W/(m² dec), SDL and SOL monthly means – 0.5 W/(m² dec). Within the range of uncertainty, the CLARA-A2 dataset agrees with the surface reference data. The achieved stability for the SIS monthly means (after 1994) is less than 1 W/(m² dec); the achieved stability for the SDL and SOL monthly means is also less than 1 W/(m² dec).

For further details, see the Validation Report CM SAF Cloud, Albedo, Radiation dataset, AVHRR-based, Edition 2 (CLARA-A2), Surface Radiation Products, Code: SAF/CM/DWD/VAL/GAC/RAD, available at https://www.cmsaf.eu/SharedDocs/Literatur/document/2016/saf_cm_dwd_val_gac_rad_2_1_pdf.

5.2.4 Water vapour profiles TCDR brokered from ROM SAF (D3.3.9-v1.0):

The monthly-mean gridded humidity profile data have been validated against ERA-Interim reanalysis data as part of the ROM SAF validation procedures. The compliance of the observed data with the ROM SAF target accuracy of 3% is checked within six broad latitude-height regions (low, mid and high latitudes; 0-8 km and 8-12 km). The data comply with the target requirements for the whole time period, except for a single month in a single region (mid-latitudes, 0-8 km, December 2013) where the value is just slightly above the target. This can most likely be explained by highly variable wintertime conditions in the northern hemisphere, leading to larger uncertainties in both the observed and the reference data.

For further details, see the ROM SAF Level 3 Validation Report, Ref: SAF/ROM/DMI/REP/GRD/001, available at http://www.romsaf.org/product_documents/romsaf_vr_grd_rep.pdf.

5.2.5 Water vapour UTH TCDR brokered from CM SAF (D3.3.12-v1.0):

Validation of the MW UTH TCDR was performed against the MW-equivalent UTH calculated from ERA-Interim reanalysis. The UTH from AMSU-B (NOAA-15, NOAA-16, NOAA-17) and MHS (NOAA-18, MetOp-A, MetOp-B) instruments was examined. The data record fulfills the CM SAF target requirements (see CM SAF Products Requirements Document, SAF/CM/DWD/PRD/2.10) and the GCOS requirements in terms of both accuracy and decadal stability within ±60° latitude.

For further details, see the CM SAF Validation Report, Ref: SAF/CM/UKMO/VAL/UTH, DOI: 10.5676/EUM_SAF_CM/UTH/V001, available at https://www.cmsaf.eu/SharedDocs/Literatur/document/2018/saf_cm_ukmo_val_uth_1_3_pdf.

5.2.6 Earth Radiation Budget TCDR brokered from CERES (D3.3.20-v1.0):

The CERES team invests considerable effort in the comprehensive validation of the CERES products. The main findings are discussed during bi-annual science team meetings and are summarised in the so-called "Data Quality Summary (DQS)" that any user of the data is expected to read. For the 4th edition of the Energy Balanced And Filled (EBAF) product, the DQS reports accuracies of 2.5 W/m² for both the reflected shortwave radiation and the emitted longwave radiation. In terms of stability, the record seems to remain stable within 0.3 W/m²/decade for the SW and 0.2 W/m²/decade for the LW.

For further details, see the CERES_EBAF_Ed4.0 Data Quality Summary (January 12, 2018), available at https://ceres.larc.nasa.gov/documents/DQ_summaries/CERES_EBAF_Ed4.0_DQS.pdf.

Table 2: Required KPI values for ICDRs

 

| Deliverable ID | ECV name | Product name | KPI 1: accuracy | ID | Explanations / Comments |
| --- | --- | --- | --- | --- | --- |
| Precipitation | | | | | |
| D3.3.1-v1.x | Precipitation | PRECIP_1DD_GPCP_ICDR | p2.5 = -0.332 mm/d, p97.5 = 0.343 mm/d (constrained to +/-50° latitude) | KPI.Q03.1 | TCDR percentiles when compared to TRMM TMPA 3B42 v7 inside +/-50° latitude. |
| | Precipitation | PRECIP_2.5DM_GPCP_ICDR | p2.5 = -0.175 mm/d, p97.5 = 0.189 mm/d (constrained to +/-50° latitude) | KPI.Q04.1 | TCDR percentiles when compared to TRMM TMPA 3B43 v7 inside +/-50° latitude. |
| Surface Radiation Budget | | | | | |
| D3.3.4-v2.x | Surface Radiation Budget | SIS_MM_AVHRR_CLARA | p2.5 = -17.4 W/m², p97.5 = 35 W/m² | KPI.Q15.1 | The TCDR is compared to the surface measurements from the Baseline Surface Radiation Network (BSRN). The monthly averages are calculated from the high-resolution BSRN data. The collocations are selected using the nearest neighbour technique. Percentiles are calculated for the mean absolute bias. Satellite data with less than 20 observations per month are excluded. |
| | Surface Radiation Budget | SIS_DM_AVHRR_CLARA | p2.5 = -36.8 W/m², p97.5 = 74 W/m² | KPI.Q16.1 | |
| | Surface Radiation Budget | SDL_MM_AVHRR_CLARA | p2.5 = -10.9 W/m², p97.5 = 26.7 W/m² | KPI.Q16a.1 | |
| | Surface Radiation Budget | SOL_MM_AVHRR_CLARA | p2.5 = -22.5 W/m², p97.5 = 49.9 W/m² | KPI.Q16b.1 | |
| D3.3.5-v2.x | Surface Radiation Budget | SDL_MM_AVHRR | None | KPI.Q17.1 | None. Datasets are verified using the propagation of uncertainties method. The dataset fulfills the KPIs if the input data fulfills the KPIs. |
| | Surface Radiation Budget | SOL_MM_AVHRR | None | KPI.Q18.1 | |
| | Surface Radiation Budget | SRS_MM_AVHRR | None | KPI.Q19.1 | |
| | Surface Radiation Budget | SNS_MM_AVHRR | None | KPI.Q20.1 | |
| | Surface Radiation Budget | SNL_MM_AVHRR | None | KPI.Q21.1 | |
| | Surface Radiation Budget | SRB_MM_AVHRR | None | KPI.Q22.1 | |
| Water vapour | | | | | |
| D3.3.9-v1.x | Water Vapour | GRM-29-L3-H-I1 | 0-4 km: p2.5 = -2.00%, p97.5 = -0.93%; 4-8 km: p2.5 = -1.04%, p97.5 = +0.82%; 8-12 km: p2.5 = -0.66%, p97.5 = +2.13% | KPI.Q37.1 | The observed water vapour data are retrieved from Metop RO data, and the reference data consist of co-located ERA-Interim, similarly averaged as the observations. We compute relative differences between monthly means of observed data and reference data on a global latitude-height grid. These relative differences are globally averaged (properly area weighted) and vertically averaged (in 0-4 km, 4-8 km, and 8-12 km layers). No attempt is made to subtract an annual cycle in the observation-reference differences. For each vertical layer, we find the 2.5% and 97.5% percentiles of the distribution of the 121 months in the TCDR difference time series. |
| D3.3.12-v1.x | Upper Tropospheric Humidity | UTH_MW | p2.5 = -0.47%, p97.5 = 0.55% (constrained to +/-60° latitude) | KPI.Q39.1 | The values result from the comparison of the three UTH products from MetOp-A, MetOp-B, and NOAA-18 against ERA-Interim. The time periods considered are 2006-2015 for MetOp-A, 2008-2015 for MetOp-B, and 2013-2015 for NOAA-18. The values depict the (lowest of the three) 2.5% and the (highest of the three) 97.5% percentiles of the differences in daily global means. A cosine latitude weighted average was applied for the calculation of the means in each 1° x 1° grid box. The comparison excludes the polar regions. |
| Cloud properties | | | | | |
| D3.3.16-v2.x | Cloud Properties | CFC_MM_AVHRR_CLARA | p2.5 = -0.60%, p97.5 = 0.48% | KPI.Q59.1 | The values result from a comparison of the CLARA-A2 monthly TCDR with the monthly TCDR of MODIS Collection 6.1 (time period considered: May 2003 to December 2015). They depict the 2.5% and 97.5% percentiles of the annual-cycle corrected differences in global means. Therefore at first, the time series of the differences in the global means between CLARA-A2 and MODIS was calculated. For the calculation of the global means, a cosine latitude weighted average was applied. Afterwards the mean annual cycle, i.e. the temporal mean of all Januaries, Februaries, etc., was subtracted. Finally, the 2.5% and 97.5% percentiles were calculated. Thus, to compare the new ICDR values to the declared percentile values, the mean annual cycle must be subtracted, too. For LWP and IWP the comparison excludes the polar regions (due to only part-time coverage during the year and challenging retrieval conditions over snow/ice). Only the monthly dataset is considered. As the monthly dataset originates from the daily dataset, one can assume that the results for the monthly dataset are also representative for the daily dataset. |
| | Cloud Properties | CFC_DM_AVHRR_CLARA | None | KPI.Q60.1 | |
| | Cloud Properties | CTO_MM_AVHRR_CLARA | p2.5 = -6.32.97 hPa, p97.5 = 3.74 hPa | KPI.Q61.1 | |
| | Cloud Properties | CTO_DM_AVHRR_CLARA | None | KPI.Q62.1 | |
| | Cloud Properties | LWP_MM_AVHRR_CLARA | p2.5 = -1.97 g/m², p97.5 = 2.23 g/m² (constrained to +/-50° latitude) | KPI.Q63.1 | |
| | Cloud Properties | LWP_DM_AVHRR_CLARA | None | KPI.Q64.1 | |
| | Cloud Properties | IWP_MM_AVHRR_CLARA | p2.5 = -1.74 g/m², p97.5 = 1.91 g/m² (constrained to +/-50° latitude) | KPI.Q65.1 | |
| | Cloud Properties | IWP_DM_AVHRR_CLARA | None | KPI.Q66.1 | |
| TOA radiation | | | | | |
| D3.3.20-v2.x | Earth Radiation Budget | RSF_ERB_CERES_FF | p2.5 = 0.4 W/m², p97.5 = 2.3 W/m² | KPI.Q78.1 | The values result from a comparison of the CERES monthly TCDR with ERA 5 (time period considered: February 2002 to October 2018). They depict the 2.5% and 97.5% percentiles of the differences in global means. |
| | Earth Radiation Budget | OLR_ERB_CERES_FF | p2.5 = -2.4 W/m², p97.5 = -1.3 W/m² | KPI.Q79.1 | |


Table 3: Required KPI values for TCDRs


| Deliverable ID | ECV name | Product name | KPI 1: accuracy | ID | Explanations / Comments | KPI 2: stability | ID | Explanations / Comments |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Precipitation | | | | | | | | |
| D3.3.1-v1.0 | Precipitation | PRECIP_1DD_GPCP_TCDR | 0.3 mm/d | KPI.Q01.1 | Numbers are given for global means, taken from CM SAF target and threshold requirements, achieved when comparing HOAPS and GPCP | 0.034 mm/d/dec | KPI.Q01.2 | Taken from CM SAF threshold requirements, based on global averages, achieved when comparing HOAPS and GPCP |
| | Precipitation | PRECIP_2.5DM_GPCP_TCDR | 0.3 mm/d | KPI.Q02.1 | | 0.034 mm/d/dec | KPI.Q02.2 | |
| Surface Radiation Budget | | | | | | | | |
| D3.3.4-v1.0, D3.3.4-v2.0 | Surface Radiation Budget | SIS_MM_AVHRR_CLARA | 10 W/m² | KPI.Q07.1 | Numbers are given based on CM SAF Product Requirement Document PRD-2. Reference dataset is BSRN surface stations, parameter in the validation report - mean absolute bias (MAB) | 1.5 W/m²/dec | KPI.Q07.2 | Numbers are given for trend in the bias vs. BSRN surface stations, based on CM SAF requirements |
| | Surface Radiation Budget | SIS_DM_AVHRR_CLARA | 20 W/m² | KPI.Q08.1 | | 1.5 W/m²/dec | KPI.Q08.2 | |
| | Surface Radiation Budget | SDL_MM_AVHRR_CLARA | 10 W/m² | KPI.Q09.1 | | 0.5 W/m²/dec | KPI.Q09.2 | |
| | Surface Radiation Budget | SOL_MM_AVHRR_CLARA | 15 W/m² | KPI.Q10.1 | | 0.5 W/m²/dec | KPI.Q10.2 | |
| D3.3.5-v1.0, D3.3.5-v2.0 | Surface Radiation Budget | SRS_MM_AVHRR | None | KPI.Q11.1 | None. Datasets are verified using the propagation of uncertainties method | None | KPI.Q11.2 | None. Datasets are verified using the propagation of uncertainties method |
| | Surface Radiation Budget | SNS_MM_AVHRR | None | KPI.Q12.1 | | None | KPI.Q12.2 | |
| | Surface Radiation Budget | SNL_MM_AVHRR | None | KPI.Q13.1 | | None | KPI.Q13.2 | |
| | Surface Radiation Budget | SRB_MM_AVHRR | None | KPI.Q14.1 | | None | KPI.Q14.2 | |
| Water vapour | | | | | | | | |
| D3.3.9-v1.0 | Water Vapour | GRM-29-L3-H-R1 | 3% | KPI.Q36.1 | According to current ROM SAF specs. Numbers are given for monthly means within 5-degree latitude bins. | N/A | KPI.Q36.2 | According to current ROM SAF specs. A stability criterion is planned to be introduced in the future. |
| D3.3.12-v1.0 | Upper Tropospheric Humidity | UTH_MW | 5% | KPI.Q38.1 | Set according to the GCOS Implementation Plan (GCOS-200, The Global Observing System for Climate: Implementation Needs (2016)). The validation is performed within +/-60 degrees latitude on a daily basis. | 1 %/dec | KPI.Q38.2 | Adapted GCOS requirement. Verified for each satellite (mainly for NOAA-18 and MetOp-A which span ~10 years) within +/-60 degrees latitude. |
| D3.3.11-v1.0 | Water Vapour | TCWV_GV_TCDR | 1.4 kg/m² | KPI.Q40.1 | Numbers are given for global means, taken from CM SAF target requirements (Table 5-2, Requirement Review 3.4 document, HOAPS 5.0; Ohring et al. 2005 & ESA DUE GlobVapour - Saunders et al. 2010) | 0.2 kg/m²/dec | KPI.Q40.2 | Taken from CM SAF target requirements defined in RR (Requirement Review 3.4 document, HOAPS 5.0, Table 5-2), based on global averages (Ohring et al. 2005 & ESA DUE GlobVapour - Saunders et al. 2010) |
| D3.3.14-v1.0 | Water Vapour | TCWV_SSMI/SSMIS_TCDR | 1 kg/m² | KPI.Q41.1 | Numbers are given for global means, taken from CM SAF VR HOAPS 4.0 document (Table 6-6) of optimal requirements, achieved when comparing HOAPS 4.0 and REMSS | 0.2 kg/m²/dec | KPI.Q41.2 | Taken from CM SAF VR HOAPS 4.0 document (Table 6-6) of target requirement, based on global averages, achieved when comparing HOAPS 4.0 and REMSS |
| Cloud properties | | | | | | | | |
| D3.3.16-v1.0, D3.3.16-v2.0 | Cloud Properties | CFC_MM_AVHRR_CLARA | 5% | KPI.Q43.1 | Numbers correspond to CM SAF target requirements (see EUMETSAT CM SAF CDOP-2 Product Requirements Document [SAF/CM/DWD/PRD, version 2.9], tables in Chapter 8 concerning requirements for CM-11011, CM-11031, CM-11051, CM-11061). They refer to global means. | 2%/dec | KPI.Q43.2 | Numbers correspond to CM SAF target requirements (see EUMETSAT CM SAF CDOP-2 Product Requirements Document [SAF/CM/DWD/PRD, version 2.9], tables in Chapter 8 concerning requirements for CM-11011, CM-11031, CM-11051, CM-11061). They refer to global means. |
| | Cloud Properties | CFC_DM_AVHRR_CLARA | 15% | KPI.Q44.1 | | 2%/dec | KPI.Q44.2 | |
| | Cloud Properties | CTO_MM_AVHRR_CLARA | 50 hPa | KPI.Q45.1 | | 20 hPa/dec | KPI.Q45.2 | |
| | Cloud Properties | CTO_DM_AVHRR_CLARA | 50 hPa | KPI.Q46.1 | | 20 hPa/dec | KPI.Q46.2 | |
| | Cloud Properties | LWP_MM_AVHRR_CLARA | 10 g/m² | KPI.Q47.1 | | 3 g/m²/dec | KPI.Q47.2 | |
| | Cloud Properties | LWP_DM_AVHRR_CLARA | None | KPI.Q48.1 | | 3 g/m²/dec | KPI.Q48.2 | |
| | Cloud Properties | IWP_MM_AVHRR_CLARA | 20 g/m² | KPI.Q49.1 | | 6 g/m²/dec | KPI.Q49.2 | |
| | Cloud Properties | IWP_DM_AVHRR_CLARA | None | KPI.Q50.1 | | 6 g/m²/dec | KPI.Q50.2 | |
| TOA radiation | | | | | | | | |
| D3.3.20-v1.0, D3.3.20-v2.0 | Earth Radiation Budget | OLR_ERB_CERES | LW up: RMS 2.5 W/m² | KPI.Q75.1 | Set according to results achieved in CERES_EBAF_Ed4.0 Data Quality Summary (January 12, 2018, available at https://ceres.larc.nasa.gov/documents/DQ_summaries/CERES_EBAF_Ed4.0_DQS.pdf). | LW up: < 0.2 W/m²/dec | KPI.Q75.2 | Set according to the GCOS Implementation Plan (GCOS-200, The Global Observing System for Climate: Implementation Needs (2016)). |
| | Earth Radiation Budget | RSF_ERB_CERES | SW up: RMS 2.5 W/m² | KPI.Q76.1 | | SW up: < 0.3 W/m²/dec | KPI.Q76.2 | |
| D3.3.21-v1.0 | Earth Radiation Budget | OLR_HIRS | LW up: RMS 2.5 W/m² | KPI.Q77.1 | | LW up: < 0.2 W/m²/dec | KPI.Q77.2 | |


References

Immler, F. J., J. Dykema, T. Gardiner, D. N. Whiteman, P. W. Thorne, and H. Vömel (2010), Reference Quality upper-air measurements: Guidance for developing GRUAN data products, Atmos. Meas. Tech., 3(5), 1217–1231, doi:10.5194/amt-3-1217-2010.

Loew, A., Bell, W., Brocca, L., Bulgin, C. E., Burdanowitz, J., Calbet, X., Donner, R. V., Ghent, D., Gruber, A., Kaminski, T., Kinzel, J., Klepp, C., Lambert, J.-C., Schaepman-Strub, G., Schröder, M., and Verhoelst, T. (2017): Validation practices for satellite based earth observation data across communities, Rev. Geophys., https://doi.org/10.1002/2017RG000562.

Merchant, C. J., Paul, F., Popp, T., Ablain, M., Bontemps, S., Defourny, P., Hollmann, R., Lavergne, T., Laeng, A., de Leeuw, G., Mittaz, J., Poulsen, C., Povey, A. C., Reuter, M., Sathyendranath, S., Sandven, S., Sofieva, V. F., and Wagner, W. (2017): Uncertainty information in climate data records from Earth observation, Earth Syst. Sci. Data, 9, 511-527, https://doi.org/10.5194/essd-9-511-2017.

Weatherhead, E. C., Reinsel, G. C., Tiao, G. C., Meng, X.-L., Choi, D., Cheang, W.-K., Keller, T., DeLuisi, J., Wuebbles, D. J., Kerr, J. B., Miller, A. J., Oltmans, S. J., and Frederick, J. E. (1998), Factors affecting the detection of trends: Statistical considerations and applications to environmental data, J. Geophys. Res., 103, 17,149–17,161, doi:10.1029/98JD00995.

This document has been produced in the context of the Copernicus Climate Change Service (C3S).

The activities leading to these results have been contracted by the European Centre for Medium-Range Weather Forecasts, operator of C3S on behalf of the European Union (Delegation agreement signed on 11/11/2014). All information in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose.

The users thereof use the information at their sole risk and liability. For the avoidance of all doubt, the European Commission and the European Centre for Medium-Range Weather Forecasts have no liability in respect of this document, which is merely representing the author's view.
