Executive Summary

REDD projects aim to reduce greenhouse gas emissions resulting from deforestation or forest degradation in a specific project area. “REDD” refers to “Reducing emissions from deforestation and forest degradation”. Emission reductions are calculated as the difference between: (a) baseline emissions (i.e., the emissions that would have occurred without the REDD project); and (b) actual emissions (i.e., the emissions that occurred with the REDD project). Verra registers REDD projects that meet its Verified Carbon Standard and issues carbon credits to projects that have been successfully implemented and have achieved emission reductions by reducing deforestation or forest degradation.

On 18 January 2023, the British newspaper the Guardian published an article that included sensational claims about the value of carbon credits issued by Verra for REDD projects. The same claims were repeated in an article in Die Zeit. The Guardian article purported to be based on three scientific studies, one of which was unpublished. Its key findings rested on calculations made by the journalists using results from the three studies. Verra’s Technical Review found that the Guardian article, and two of the three studies on which it was based, are patently unreliable.

Two of the studies were conducted by West et al., and the other by Guizar-Coutiño et al. All three studies relied on creating control areas, whose success depends on identifying relevant points of comparison and selecting geographic areas that are as similar as possible to a corresponding REDD project (Schleicher et al. 2019 and Desbureaux 2021).

The Technical Review found that the West et al. studies contain multiple serious methodological deficiencies that render their conclusions patently unreliable. Specifically, the studies:

  • Constructed their synthetic controls by looking at only a small set of superficial physical characteristics such as initial forest cover, slope, and proximity to state capitals, while excluding the key determinants of deforestation such as forest type, agricultural practices, and in fact any socioeconomic factors whatsoever;
  • Selected geographic areas for their synthetic controls that were not facing any serious threat of deforestation, thereby underestimating the risk of deforestation in the REDD project areas;
  • Selected too few geographic areas in composing their synthetic controls; and
  • Used unsuitable data, including satellite imagery at a resolution of 250 meters x 250 meters (6.25 hectares per pixel), which is too crude for REDD projects and much coarser than Verra’s required minimum mapping unit of 100 meters x 100 meters, and a dataset from Global Forest Watch that scientists widely recognize should not be used to estimate deforestation or for REDD purposes without appropriate adjustments, which the authors failed to make.

The Guardian focused its own “further analysis” on 29 of the 36 REDD projects reviewed in the two West et al. studies in order to reach the sensational conclusion that 94% of the credits from these projects should not have been issued. The journalists’ failure to publish this analysis is a breach of transparency and seriously undermines the credibility of their reporting because, unlike the scientific studies, it cannot be reviewed.

In contrast to its findings on the West et al. studies, Verra found the results of the third study cited in the Guardian, by Guizar-Coutiño et al., to be moderately reliable and a useful contribution to the literature. Although the authors used higher-resolution satellite data (30 meters x 30 meters), Verra found two deficiencies in the methodology. First, the reliance on satellite data means that on-the-ground data were overlooked. Second, the authors’ claim that the characteristics chosen to select their control areas reflect deforestation drivers is somewhat tenuous, given that such drivers are highly location-specific. Verra’s REDD methodologies emphasize location-specific drivers for precisely this reason.

Guizar-Coutiño et al. found that REDD project implementation was associated with reductions in deforestation: deforestation decreased in 34 sites and increased slightly in 6 sites. Collectively, these REDD+ projects reduced deforestation by 47% in the first five years. The reduction was larger in high-deforestation countries and did not appear to be substantially undermined by leakage (i.e., deforestation being displaced to areas outside the REDD project area). The study concluded that, “Our results provide some room for optimism. Despite the many challenges to just and economically sustainable implementation, the initial wave of REDD+ projects were effective at reducing forest loss.”

The Guardian failed to report that the two studies reached largely contradictory results, instead noting that “the data showed broad agreement on the lack of effectiveness of the projects compared with the Verra-approved predictions.” Major inconsistencies found by the Technical Review include the following: of the 12 REDD projects in Brazil considered by both West et al. 2020 and Guizar-Coutiño et al., the former found lower deforestation or degradation in only 33% of the projects, whereas the latter found lower deforestation in 92% and lower degradation in 75%.

Further, the Guardian grossly misrepresented the Guizar-Coutiño et al. findings in order to support the hypothesis drawn from the flawed West et al. 2020 and West et al. 2023 studies. The journalists, after converting Guizar-Coutiño et al.’s findings into emission reductions, compared these figures with the pre-project predictions of the project developers. The Technical Review found this to be a false comparison. For a variety of reasons, project developers’ pre-project predictions often overestimate what projects actually achieve, because they assume that everything works as planned. What the journalists should have done is compare Guizar-Coutiño et al.’s findings with the actual emission reductions delivered by the projects. It is on this basis that Verra issues carbon credits, not on the basis of the predictions of project developers.

1. Introduction

This document sets out Verra’s Technical Review of the following studies:

  • Thales West et al., “Overstated carbon emission reductions from voluntary REDD+ projects in the Brazilian Amazon,” Proceedings of the National Academy of Sciences 117, no. 39 (September 14, 2020): 24188. (West et al. 2020)
  • Thales West et al., “Action needed to make carbon offsets from tropical forest conservation work for climate change mitigation,” unpublished preprint (2023). (West et al. 2023)
  • Alejandro Guizar-Coutiño et al., “A global evaluation of the effectiveness of voluntary REDD+ projects at reducing deforestation and degradation in the moist tropics,” Conservation Biology 36, no. 6 (17 June 2022): e13970. (Guizar-Coutiño et al.)

This document also sets out Verra’s Technical Review of statements made in the following media article about the above materials:

  • Patrick Greenfield, “Revealed: more than 90% of rainforest carbon offsets by biggest provider are worthless, analysis shows,” Guardian (January 18, 2023). (The Guardian)

2. Findings

Verra makes the following findings.

First, the West et al. 2020 and West et al. 2023 studies are patently unreliable because they contain multiple serious methodological deficiencies, as explored in Section 4. Specifically, they:

  • Ignored key factors in deforestation, including all socio-economic factors, when establishing their own baselines;
  • Compared REDD projects to areas that likely were under no serious threat of deforestation;
  • Relied on an insufficient number of comparison areas; and
  • Used datasets widely known to be unsuitable for measuring deforestation, particularly for REDD projects.

Second, the Guizar-Coutiño et al. study is mostly reliable because it contains only minor methodological deficiencies, as explored in Section 5. Specifically, it:

  • Used relevant, albeit simple, characteristics when establishing alternative baselines; and
  • Used datasets that are reasonably suitable for measuring deforestation, though they do not replace on-the-ground analysis.

Third, the Guardian article is patently unreliable because it contains multiple serious methodological deficiencies, as explored in Section 6. Specifically, it:

  • Failed to acknowledge the inconsistencies between the studies;
  • Grossly misrepresented the Guizar-Coutiño et al. findings; and
  • Used an unpublished and largely undisclosed methodology to interpret the studies.

3. Background and Key Concepts

The Technical Review relates to “REDD”, which refers to “Reducing emissions from deforestation and forest degradation”. A “REDD project” is an activity aimed at reducing emissions from deforestation or forest degradation in a specific geographic area.

In the REDD context, “emission reductions” refer to the quantity of emissions that a REDD project reduces over a period of time, equivalent to the difference between: (a) baseline emissions (i.e., the emissions that would have occurred without the REDD project); and (b) actual emissions (i.e., the emissions that occurred with the REDD project). An explainer for calculating emission reductions is in Table 1.

Table 1: Explainer for calculating emission reductions from a REDD project
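
In addition to the explainer in Table 1, the following minimal sketch (in Python, with entirely hypothetical figures) illustrates the same subtraction; it is an illustration only, not Verra’s accounting methodology.

```python
# Hypothetical figures for illustration only; not from any registered project.
baseline_emissions_tco2e = 500_000  # emissions projected to occur without the REDD project
actual_emissions_tco2e = 380_000    # emissions monitored to have occurred with the REDD project

# Emission reductions are the difference between baseline and actual emissions.
emission_reductions_tco2e = baseline_emissions_tco2e - actual_emissions_tco2e
print(emission_reductions_tco2e)  # 120000 -> at most 120,000 credits of 1 tCO2e each, issued ex post
```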

Verra registers REDD projects that meet its Verified Carbon Standard. The process of registration includes the following steps:

  • Project developers and their local partners identify deforestation patterns and risks existing in the project area and its surroundings (the reference region) based on information and their knowledge of local circumstances and socio-economic processes, gained primarily through extensive on-the-ground work;
  • Project developers and their local partners calculate a baseline accordingly;
  • The baseline is validated by independent third-party auditors, known as validation and verification bodies (VVBs). VVBs must themselves be independently accredited to ISO 14065 (General principles and requirements for bodies validating and verifying environmental information) by a member of the International Accreditation Forum (IAF) before being approved by Verra to audit projects against the Verified Carbon Standard;
  • Verra considers a request to register the project in line with the requirements of the Verified Carbon Standard.

Verra issues carbon credits to REDD projects that have been successfully implemented and have reduced deforestation or forest degradation. The process of issuance includes the following steps:

  • Project developers and their local partners monitor whether deforestation occurred, measure its magnitude against the deforestation rate in the baseline, and document how the actions aimed at preventing or reducing deforestation were implemented;
  • Project developers and their local partners calculate emission reductions on this basis;
  • The findings are verified by an independent expert auditor (i.e., a validation and verification body, as described above);
  • Verra considers a request to issue carbon credits in line with the requirements of the Verified Carbon Standard, with each carbon credit representing one tonne of carbon dioxide that is not released into the atmosphere.

Verra issues carbon credits only after the emission reductions have been delivered (ex post crediting). Verra does not issue carbon credits based on the predictions of the project developers about how many emission reductions are likely to be achieved (ex ante crediting).

4. Analysis of West et al. 2020 and West et al. 2023

West et al. 2020 considered 12 REDD projects in one country (Brazil). West et al. 2023 considered 24 REDD projects in six countries (Cambodia, Colombia, Democratic Republic of Congo, Peru, Tanzania, and Zambia). For each REDD project, the authors formulated an alternative baseline by constructing a “synthetic control”. Each of these synthetic controls was a hypothetical composite of other geographic areas with characteristics similar to those of the REDD project in question.

The authors looked at the following characteristics of the other geographic areas when constructing the synthetic controls for the REDD projects:

  • Property size
  • Initial forest cover
  • Slope
  • Soil quality
  • Physical distances from state capitals, towns, federal highways, and local roads
  • Proportion of primary and secondary forests, pastureland, agriculture, and urban areas within 10km buffer zones.

In West et al. 2020, each synthetic control was a composite of between two and nine rural properties in the Brazilian Amazon (per Annexes 1 and 2). In West et al. 2023, each synthetic control was a composite of between one and six circular areas located in the country of the REDD project.

For each REDD project, the authors measured the difference between deforestation in the synthetic control and deforestation in the project’s surroundings (a 10-km-wide buffer), and concluded that many REDD projects do not reduce deforestation.
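
For readers unfamiliar with the technique, the following is a minimal, hypothetical sketch of how a synthetic control can be constructed as a weighted composite of donor areas. It illustrates the general approach only; it is not the authors’ code, and the characteristics, donor areas, and values are invented.

```python
"""Minimal sketch of the synthetic-control idea (illustrative only, not the authors' code).

A synthetic control is a weighted composite of donor areas whose pre-project
characteristics best reproduce those of the project area, with non-negative
weights that sum to one. All numbers below are hypothetical.
"""
import numpy as np
from scipy.optimize import minimize

# Rows = characteristics (e.g., initial forest cover, slope, distance to capital);
# columns = four candidate donor areas. Hypothetical values.
donors = np.array([
    [0.92, 0.75, 0.88, 0.60],    # initial forest cover (fraction)
    [5.0, 12.0, 8.0, 3.0],       # mean slope (degrees)
    [140.0, 90.0, 200.0, 60.0],  # distance to state capital (km)
])
project = np.array([0.90, 6.0, 150.0])  # the same characteristics for the project area

# Standardize each characteristic so that no single variable dominates the fit.
scale = donors.std(axis=1, keepdims=True)
donors_z, project_z = donors / scale, project / scale.ravel()

def loss(w):
    """Squared distance between the project and the weighted donor composite."""
    return float(np.sum((donors_z @ w - project_z) ** 2))

n_donors = donors.shape[1]
result = minimize(
    loss,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
weights = result.x
print("donor weights:", np.round(weights, 3))

# The counterfactual deforestation rate is then the weighted average of the donors'
# observed post-start deforestation rates (again hypothetical).
donor_deforestation = np.array([0.010, 0.004, 0.007, 0.002])  # annual rates
print("synthetic-control deforestation rate:", round(float(donor_deforestation @ weights), 4))
```

Whether such a composite is a credible counterfactual depends entirely on which characteristics enter the fit and which donor areas are allowed, which is the crux of the deficiencies discussed below.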

Verra has identified multiple deficiencies in these two works, as set out below. Independently, each deficiency seriously undermines the credibility of the authors’ conclusions; together, these deficiencies point to a work that is fundamentally flawed to the point of being patently unreliable for assessing the impact of REDD projects.

The use of synthetic controls in conservation work is a novel and useful approach that is becoming more widely accepted, and Verra is currently adopting this approach in some of its newest methodologies (e.g., VM0045, for Improved Forest Management projects). That said, the relevance of a synthetic control depends on how well it is constructed, as noted by, for example, Schleicher et al. 2019 and Desbureaux 2021.

In particular, it is essential for a synthetic control to involve the identification of relevant points of comparison and the choice of geographic areas that are as similar as possible to the project area.

Both West et al. 2020 and West et al. 2023 constructed overly simplistic synthetic controls because they looked only at superficial characteristics. All of their characteristics are simple physical ones, such as distance from a state capital. Their synthetic controls excluded the key factors driving deforestation and land-use change in a given location, which include some physical characteristics but, more pertinently, a wide range of socio-economic characteristics. Examples of the authors’ omissions are set out in Table 2.

Table 2: Consequences of omitting key factors when setting baselines in REDD projects

Verra, in its methodological approach to registering REDD projects and issuing carbon credits, considers not only the simple, superficial physical characteristics identified by the authors, but also the key determinants of deforestation, as set out in Table 2.

The authors acknowledged the limitations of their approach, noting that “the construction of our synthetic controls may not have included all relevant structural determinants of deforestation.” The authors did not, however, elaborate on their rationale for excluding such important factors or the consequences of excluding them.

Verra finds that the authors’ omission of multiple key factors in deforestation weakens the relevance of their synthetic controls and, therefore, the credibility of their alternative baselines.

Verra further finds that this construction of non-credible baselines seriously undermines the authors’ conclusions about the REDD projects in question.

The authors justified their selection of the geographic areas that compose their synthetic controls by verifying that the pre-project deforestation rates in these areas were similar to the pre-project deforestation rate in a 10-km area around the corresponding REDD project. However, under Verra’s rules, a REDD project is required to be 100% forested on its start date, which means that the deforestation rate in the project area itself is zero or near zero and that in its surrounding buffer area is also likely to be low or, at most, modest.

The consequence of the authors’ approach is that they likely selected geographic areas facing zero to low threat of deforestation. In contrast, REDD project areas are selected because they are exposed to a significant threat of deforestation, as revealed and documented through a detailed analysis of deforestation drivers in the region surrounding the project area.

Verra finds that the authors’ selection of geographic areas not facing a serious threat of deforestation in their synthetic controls means that their alternative baselines underestimate the risk of deforestation in REDD projects.

Verra further finds that this underestimation of the risk of deforestation in REDD projects seriously undermines their conclusions about the REDD projects in question.

The authors constructed their synthetic controls on the basis of a very small number of other geographic areas: between two and nine in West et al. 2020, and between one and six in West et al. 2023. As a general statistical rule, the smaller the sample size, the larger the uncertainty; and complex, highly variable phenomena generally require larger sample sizes.
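
The statistical point can be illustrated with a short simulation using invented numbers (not data from either study): the standard error of an estimated mean deforestation rate shrinks roughly in proportion to one over the square root of the number of comparison areas.

```python
# Illustration of the general rule that smaller samples carry larger uncertainty.
# Hypothetical deforestation rates; not data from any study.
import numpy as np

rng = np.random.default_rng(0)
true_mean, true_sd = 0.010, 0.006   # assumed annual deforestation rate and spread

for n in (3, 30, 300):
    rates = rng.normal(true_mean, true_sd, size=n)
    standard_error = rates.std(ddof=1) / np.sqrt(n)
    print(f"n={n:>3}  mean={rates.mean():.4f}  standard error={standard_error:.4f}")
# The standard error shrinks roughly with 1/sqrt(n): a control built from two to
# nine areas leaves far more uncertainty than one built from hundreds.
```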

Verra finds that the authors’ use of small sample sizes contributes to a high level of uncertainty in their comparisons between the synthetic controls and the REDD projects in question.

Verra further finds that this high level of uncertainty seriously undermines the authors’ conclusions about the REDD projects in question.

The authors used satellite imagery to assess the physical characteristics that they considered. In contrast, Verra requires extensive on-the-ground analysis, as noted in section 3 above, which includes a detailed consideration of local circumstances and socio-economic processes. Satellite imagery can be helpful, but overreliance on it, in contrast to the extensive body of information gathered through on-the-ground data collection for REDD projects, can be problematic.

In the case of these two works, the limitations of using satellite imagery are exacerbated by the authors’ choice of which satellite imagery to use.

West et al. 2020 used the MapBiomas land-cover dataset as the basis for their analyses. However, to estimate deforestation, they first resampled the original high-resolution data (30 meters x 30 meters) to a lower resolution (250 meters x 250 meters). As a general rule, higher-resolution data are preferable to lower-resolution data:

  • With higher-resolution 30-meter data, a feature would have to be smaller than 30 meters by 30 meters to go unnoticed; and
  • With lower-resolution 250-meter data, any feature smaller than 250 meters by 250 meters (i.e., 6.25 hectares) would go unnoticed.

The use of lower-resolution data can, of course, cut both ways: it could overcount forest cover by detecting a continuous tract of forest while missing small patches of deforestation within it, or it could undercount forest cover by detecting a vast expanse of agriculture while missing the small fragments of forest that remain. What is clear, however, is that West et al.’s use of 250-meter resolution does not meet the minimum mapping unit that Verra requires for REDD projects (100 meters by 100 meters, or finer), and it likely led to a significant underestimation of deforestation in their synthetic controls, as Figure S1 in West et al. 2020 clearly shows.
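
The effect of coarsening can be illustrated with a simple, purely hypothetical aggregation exercise. This is not the MapBiomas or MODIS processing chain; the dominant-class (majority) rule and all values are assumptions chosen for illustration.

```python
# Hypothetical sketch: small clearings that are visible at 30 m resolution can vanish
# when the map is aggregated to ~250 m pixels under a dominant-class (majority) rule.
import numpy as np

rng = np.random.default_rng(1)
fine = np.ones((240, 240), dtype=float)     # 240 x 240 cells of 30 m, all forest

# Scatter single-cell clearings (~0.09 ha each) over about 10% of the landscape.
fine[rng.random(fine.shape) < 0.10] = 0.0

block = 8                                   # 8 x 30 m = 240 m, roughly a 250 m pixel
coarse_forest_fraction = fine.reshape(240 // block, block, 240 // block, block).mean(axis=(1, 3))
coarse = coarse_forest_fraction >= 0.5      # pixel labelled "forest" if majority forest

print("deforested fraction at 30 m  :", round(1 - fine.mean(), 3))
print("deforested fraction at ~250 m:", round(1 - float(coarse.mean()), 3))
# Every coarse pixel remains majority forest, so the coarse map reports ~0%
# deforestation even though ~10% of the fine-resolution forest was cleared.
```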

West et al. 2023 used the Hansen et al. (2013) tree cover loss data from the Global Forest Watch database, developed by the World Resources Institute, to estimate annual deforestation over the period 2001 to 2020 for the REDD projects and their synthetic controls. The use of these data for this purpose is highly questionable.

First, as the Global Forest Watch portal itself states, its dataset shows tree cover extent and its changes over time, with tree cover defined as “all vegetation taller than 5 meters in height. ‘Tree cover’ is the biophysical presence of trees and may take the form of natural forests or plantations existing over a range of canopy densities”. For this reason, the dataset has key limitations, presented here verbatim from the Global Forest Review webpage:

  • Not all tree cover is a forest. Satellite data are effective for monitoring changes in tree cover, but forests are typically defined as a combination of tree cover and land use. For example, agricultural tree cover, such as an oil palm plantation, is not usually considered to be forest. As such, satellite-based monitoring systems may overestimate forest area unless combined with additional land-use data sets. No land-use data set currently exists at an adequate resolution or updated frequency to enable this analysis at global scale.
  • Not all tree cover loss is deforestation. Defined as permanent conversion of forested land to other land uses, deforestation can only be identified at the moment trees are removed if it is known how the land will be used afterward. In the absence of a global data set on land use, it is not possible to accurately classify tree cover loss as permanent (i.e., deforestation) or temporary (e.g., where it is associated with wildfire, timber harvesting rotations, or shifting cultivation) at the time it occurs. However, new models analyzing spatial and temporal trends in tree cover loss are enabling better insights into the drivers of loss.
  • Tree cover is a one-dimensional measure of a forest. Many qualities of a forest cannot be measured as a function of tree cover and are difficult, if not impossible, to detect from space using existing technologies. Forests that are vastly different in terms of form and function—such as an intact primary forest and a planted forest managed for timber production—are nearly indistinguishable in satellite imagery based on tree cover. Detecting forest degradation through remote sensing is also challenging because degradation often entails small changes occurring beneath the forest canopy.

Second, Hansen et al. (2013) improved their methodology by using finer-resolution data and an improved analytical method, such that:

  • “these changes lead to a different and improved detection of global forest loss. However, the years preceding 2011 have not yet been reprocessed in this manner, and users will notice inconsistencies as a result… The integrated use of version 1.0 2000–2012 data and updated version 1.7 2011–2019 data should be performed with caution”.

Notwithstanding this limitation, the authors used data from 2001 to 2020 apparently without any adjustment to account for these changes, which makes their analysis and results questionable.

Third, in many forest types, the Hansen et al. (2013) data yield inconsistent results, either overestimating or underestimating forest cover change. As the figure below indicates, a stand of open forest existed in February 2013 (top panel), with only part of it remaining by February 2018 (middle panel); however, the forest loss detected by the Global Forest Watch dataset (black squares, bottom panel) is much smaller than the amount of forest that was actually lost. This inability to detect real forest loss in a control area calls into question the usefulness of the exercise. Further, because the accuracy of this dataset has not been assessed at the local level and might vary significantly between localities, deforestation estimates based on it are unreliable.

Figure 1. Area in Tanzania showing a stand of open forest in February 2013 (top panel), forest remaining in the same area as of February 2018 (middle panel), and forest loss (black squares) up to October 2019 (bottom panel). Top and middle panels © Google Earth. Bottom panel © Global Forest Watch dataset.

For these and other reasons, several studies, including some by the World Resources Institute itself (e.g., Harris et al., 2018; Bos et al. 2019; Chen et al. 2020), have explicitly stated that Hansen et al. (2013) data should not be used off-the-shelf to estimate deforestation or for REDD purposes, although they might be used for those purposes provided suitable adjustments are made first. Such elementary but crucial caution was neglected by West et al. 2023, rendering their results doubtful and their conclusions questionable.

The authors briefly acknowledged this shortcoming: “Many remote sensing studies highlight the differences in deforestation rates between [the Global Forest Watch] and the numbers officially recognized by governments. Such differences emerge from different mapping methodologies and definitions of deforestation and forest degradation”. However, the authors ignored the implications of this shortcoming.

Verra finds that the authors’ use of unsuitable data – data that are too crude (West et al. 2020) or whose known inconsistencies were not accounted for (West et al. 2023) – means that their estimates of deforestation in their synthetic controls are highly questionable.

Verra further finds that the questionable nature of their estimates of deforestation in their synthetic controls seriously undermines the authors’ conclusions about the REDD projects in question.

5. Analysis of Guizar-Coutiño et al. (2022)

Guizar-Coutiño et al. considered 40 REDD projects in nine countries (Belize, Brazil, Cambodia, Colombia, Congo, Democratic Republic of Congo, Madagascar, Papua New Guinea, and Peru). All of these projects were located in tropical humid forests. This project set included all 12 projects considered by West et al. 2020, 9 of the 24 projects considered by West et al. 2023, and 19 other projects.

Similar to West et al. 2020 and West et al. 2023, the authors formulated an alternative baseline for each REDD project by looking at other geographic areas. In contrast to those studies, however, Guizar-Coutiño et al. did not construct synthetic controls by selecting comparable compact geographic areas. Instead, they drew on a large number of 30-meter by 30-meter plots of land, each known as a “pixel”, scattered across many sites, initially selecting at least seven candidate pixels for each pixel in the corresponding REDD project area.

The authors selected these pixels on the basis of how similar their characteristics were to those of the pixels within the corresponding REDD project, using criteria such as:

  • Same country;
  • Same biome;
  • Areas that had remained as undisturbed forest from 1990 until the project starting year (to match the requirement in Verra REDD methodologies);
  • Elevation and slope;
  • Distance to the nearest urban center in 2015;
  • Distance to the forest edge.

The performance of the 40 REDD projects was evaluated by comparing the average annual deforestation and degradation rates measured in each project area with those measured in its corresponding control (matched pixels) over the five years following the project’s start date.
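
The matching step can be sketched as a nearest-neighbour search over a few covariates. The code below is a hypothetical illustration under assumed data, not the authors’ actual procedure (which, among other things, also restricts matches to the same country and biome and to pixels that remained undisturbed forest until the project start year).

```python
"""Hypothetical sketch of pixel matching (not the authors' exact procedure).

Each 30 m pixel inside a project is matched to candidate control pixels outside it
that are most similar on a few covariates, and the project's deforestation outcome
is then compared with that of its matched controls. All data are invented.
"""
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Covariates per pixel: elevation (m), slope (deg), distance to urban centre (km),
# distance to forest edge (km). Hypothetical values.
project_pixels = rng.normal([300, 5, 40, 2], [50, 2, 10, 1], size=(500, 4))
candidate_pixels = rng.normal([320, 6, 45, 2.5], [80, 3, 20, 1.5], size=(20_000, 4))

# Standardize covariates so each contributes comparably to the distance metric.
mu, sd = candidate_pixels.mean(axis=0), candidate_pixels.std(axis=0)
nn = NearestNeighbors(n_neighbors=7).fit((candidate_pixels - mu) / sd)
_, idx = nn.kneighbors((project_pixels - mu) / sd)   # at least 7 matches per project pixel

# Compare outcomes: 1 = pixel deforested within five years, 0 = still forest.
# Hypothetical outcomes for illustration only.
project_deforested = rng.random(len(project_pixels)) < 0.02
control_deforested = rng.random(len(candidate_pixels)) < 0.05

print(f"project rate: {project_deforested.mean():.3f}")
print(f"matched-control rate: {control_deforested[idx].mean():.3f}")
```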

According to the authors, these findings indicate that incentivizing forest conservation through voluntary site-based projects can slow tropical deforestation and highlight the importance of prioritizing financing for areas at greater risk of deforestation. They concluded that: “Our results provide some room for optimism. Despite the many challenges to just and economically sustainable implementation, the initial wave of REDD+ projects were effective at reducing forest loss.”

The results of this study contradict those of West et al. 2020 and West et al. 2023. Guizar-Coutiño et al. found that REDD+ project implementation was associated with reductions in deforestation: deforestation decreased in 34 sites and increased slightly in 6 sites. Collectively, these REDD+ projects reduced deforestation by 47% in the first five years. The reduction was larger in high-deforestation countries and did not appear to be substantially undermined by leakage (i.e., deforestation being displaced to areas outside the REDD project area).

It should be noted that this study examined deforestation and degradation rates only. This study did not convert estimates of deforestation and degradation to estimates of emission reductions, and the projects’ deforestation and degradation rates were not compared to the projects’ baselines as registered in Verra’s Registry.

Verra has identified two deficiencies in this work, set out below. These deficiencies somewhat undermine the reliability of the authors’ conclusions. These two deficiencies should, however, be viewed in the context of larger methodological choices made by the authors that enhance the credibility of their conclusions. The net effect is a report that is moderately reliable and a useful contribution to the literature.

The authors’ selection of characteristics for their control pixels, along with their use of data from the Tropical Moist Forest database, ensured that comparisons were always made between tropical moist forests located in the same biome in the same country, under comparable bioclimatic conditions. These simple specifications in the construction of the controls likely made these more comparable to their real-life counterparts, the REDD projects, than did the controls developed in the loose approach of West et al. 2020 and West et al. 2023.

Nevertheless, the authors’ claim that the characteristics chosen (elevation and slope, distance to the nearest urban center in 2015, and distance to forest edge) reflect the sociodemographic and biophysical features associated with deforestation across different countries is somewhat tenuous, given that these processes are highly location-specific.

Verra finds that the authors’ omission of some key factors in deforestation somewhat weakens the relevance of their control areas.

Verra further finds that the somewhat weakened relevance of their control areas slightly undermines the authors’ conclusions about the REDD projects in question.

The authors used satellite imagery to assess the physical characteristics that they considered. In contrast, Verra requires extensive on-the-ground analysis, as noted in section 3 above, which includes a detailed consideration of local circumstances and socio-economic processes. Satellite imagery can be helpful, but overreliance on it, in contrast to the extensive body of information gathered through on-the-ground data collection, can be problematic in REDD projects.

In the case of this work, however, the limitations of using satellite imagery are mitigated by the authors’ choice of which satellite imagery to use.

Unlike West et al. 2020 and 2023, Guizar-Coutiño et al. used 30-meter resolution satellite imagery. They estimated deforestation rates in the projects’ areas and their corresponding controls (matched pixels) during the first five years of implementation using annual (1990–2019) maps (approximately 30-meter resolution) of forest cover and deforestation from the Tropical Moist Forests database, which were derived from Landsat imagery. Absolute differences in deforestation and forest degradation rates during the first five years of implementation between treatments and controls were then calculated. Background deforestation values (i.e., the country-level deforestation rate) were also estimated to determine whether the projects were located in high-threat or low-threat countries.
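
The rate comparison described above can be sketched with a few lines of arithmetic; all numbers below are invented and the calculation is deliberately simplified relative to the study.

```python
# Hypothetical sketch of the rate comparison described above (all numbers invented).
# Annual forest area (ha) inside a project during its first five years:
forest_ha = [10_000, 9_950, 9_905, 9_870, 9_830, 9_800]   # year 0 ... year 5

# Mean annual deforestation rate over the five-year window.
project_rate = (forest_ha[0] - forest_ha[-1]) / forest_ha[0] / (len(forest_ha) - 1)

control_rate = 0.012      # hypothetical rate measured over the matched control pixels
background_rate = 0.009   # hypothetical country-level ("background") rate

print(f"project rate       : {project_rate:.4f} per year")
print(f"absolute difference: {control_rate - project_rate:.4f} per year")
print("located in a high-threat country" if background_rate > 0.01
      else "located in a lower-threat country")
```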

Verra finds that the authors’ use of data – reasonably high-resolution satellite data, though not on-the-ground data – means that their estimates of deforestation in their control areas are somewhat questionable.

Verra further finds that the somewhat questionable nature of their estimates of deforestation in their control areas slightly undermines the authors’ conclusions about the REDD projects in question.

6. Analysis of The Guardian

When writing on West et al. 2020 and West et al. 2023, the Guardian looked at the results of 29 of the 36 different projects mentioned in these two studies, and then conducted “further analysis” on them, concluding that a great majority of the credits from those projects should not have been issued.

Regarding Guizar-Coutiño et al., the Guardian looked at the results of 32 of the 40 projects mentioned in this study, “analysed these results more closely” by comparing deforestation and degradation rates with the project developers’ pre-project predictions of emission reductions, and concluded that REDD project baselines were inflated by approximately 400%. Notably, the journalists compared deforestation and degradation rates against the early-stage predictions of project developers, not the actual number of carbon credits issued by Verra.

Verra has identified multiple deficiencies in the work, set out below. Independently, each deficiency seriously undermines the credibility of the Guardian’s conclusions; together, these deficiencies point to a work that is fundamentally flawed to the point of being patently unreliable for assessing the impact of REDD projects.

The Guardian failed to report that the studies reached largely different results, instead noting that “the data showed broad agreement on the lack of effectiveness of the projects compared with the Verra-approved predictions.” This failure is curious given that West et al. 2020 and Guizar-Coutiño et al. are the two most prominent papers in this field and that they reach largely opposite conclusions.

First, take the 12 REDD projects in Brazil considered by both West et al. 2020 and Guizar-Coutiño et al.: the former found that deforestation or degradation was lower in 4 of 12 projects (33% of projects), whereas the latter found that deforestation was lower in 11 of 12 projects (92%) and degradation was lower in 9 of 12 projects (75%). The Guardian also neglected to note that the results of the two approaches were consistent in fewer than half the cases (see Table 4).
Table 4: Contradictory results between West et al. 2020 and Guizar-Coutiño et al.

Conclusion by study: Did the project stop or slow deforestation and degradation?

VCS Project Name | VCS Project # | West et al. 2020 | Guizar-Coutiño et al. | Consistent Conclusion?
Florestal Santa Maria | 875 | No | Yes | FALSE
Purus | 963 | No | Deforestation – Yes; Degradation – No | FALSE
RMDLT Portel-Para | 977 | Yes | Yes | TRUE
ADPML Portel-Para | 981 | Yes | Yes | TRUE
Russas | 1112 | No | Yes | FALSE
Valparaiso | 1113 | No | Yes | FALSE
Jari/Amapá | 1115 | No | Yes | FALSE
Suruí | 1118 | No | No | TRUE
Maísa | 1329 | Yes | Yes | TRUE
Rio Preto-Jacundá | 1503 | No | Yes | FALSE
Manoa | 1571 | Yes | Yes | TRUE
Agrocortex | 1686 | No | Deforestation – Yes; Degradation – No | FALSE
TOTAL |  | 4 Yes, 8 No | Deforestation: 11 Yes, 1 No; Degradation: 9 Yes, 3 No | 5 TRUE, 7 FALSE

 

Second, take the 10 projects considered by West et al. 2023 and Guizar-Coutiño et al.: the former found that deforestation or degradation was lower in 5 of 10 projects, whereas the latter found that deforestation was lower in 7 of 10 projects and degradation was lower in 9 of 10 projects. Again, the journalist failed to note that the conclusions were consistent in fewer than half the cases (see Table 5).

Table 5: Contradictory results between West et al. 2023 and Guizar-Coutiño et al.

Conclusion by study: Did the project stop or slow deforestation?

VCS Project Name | VCS Project # | West et al. 2023 | Guizar-Coutiño et al. | Consistent Conclusion?
Madre De Dios (Peru) | 844 | No | Yes | FALSE
Alto Mayo (Peru) | 944 | Yes | Deforestation – No; Degradation – Yes | FALSE
Biocorredor Martin Sagrado (Peru) | 958 | Yes | No | FALSE
Tambopata and Bahuaja-Sonene (Peru) | 1067 | Yes | Yes | TRUE
Isangi (DRC) | 1359 | No | Yes | FALSE
Cajambre (Colombia) | 1392 | No | Yes | FALSE
Bajo Calima y Bahía Málaga (Colombia) | 1395 | Yes | Yes | TRUE
Rio Pepe y ACABA (Colombia) | 1396 | Yes | Yes | TRUE
Concosta (Colombia) | 1400 | No | Yes | FALSE
RIU-SM (Colombia) | 1566 | No | Deforestation – No; Degradation – Yes | FALSE
TOTAL |  | 5 Yes, 5 No | Deforestation: 7 Yes, 3 No; Degradation: 9 Yes, 1 No | 3 TRUE, 7 FALSE

 

Verra finds that the journalists failed to present the inconsistency between the studies that they reported on, in favor of portraying all leading studies as having reached one conclusion.

Verra further finds that this failure to present different views in the studies seriously undermines the journalists’ conclusions about the REDD projects in question.

The Guardian grossly misrepresented the Guizar-Coutiño et al. data by claiming that the baselines of the 32 projects were inflated by approximately 400%.

First, Guizar-Coutiño et al. considered deforestation and degradation, not emission reductions, and made no statements at all about the baselines.

Second, the Guardian, after converting Guizar-Coutiño et al.’s findings into emission reductions, compared these figures with the pre-project predictions of the project developers. This is a false comparison. For a variety of reasons, the number of credits eventually issued almost always falls below initial estimates – not because baselines were poorly constructed, but because, among other things, emission-reduction activities are difficult to implement. What the Guardian should have done, and failed to do, is compare Guizar-Coutiño et al.’s findings not with the early-stage predictions of project developers, but with the actual emission reductions achieved by the projects.
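
To see why the choice of comparator matters, consider a purely hypothetical numerical sketch; none of these figures corresponds to any real project or to either study.

```python
# Hypothetical figures only: the point is the choice of denominator, not the numbers.
predicted_reductions = 1_000_000   # developer's ex ante prediction (tCO2e)
issued_credits = 400_000           # credits actually issued ex post by the registry
study_implied = 350_000            # reductions implied by an independent study

# Comparing the study against the ex ante prediction makes the gap look enormous...
print(f"vs prediction: {study_implied / predicted_reductions:.0%} of the claim")
# ...whereas the relevant comparison is against what was actually credited ex post.
print(f"vs issuance:   {study_implied / issued_credits:.0%} of the claim")
```

The particular numbers are invented; the point is that measuring a study’s findings against ex ante predictions rather than ex post issuance inflates the apparent discrepancy.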

Third, it is incorrect to refer to the project developers’ predictions as “Verra’s claims”. As noted above, Verra issues carbon credits on the basis of actual emission reductions that have been achieved, not on the basis of the predictions of project developers.

Taken together, it appears that The Guardian grossly misrepresented the Guizar-Coutiño et al. findings in order to support the hypothesis that they drew from the flawed West et al. 2020 and West et al. 2023 studies.

Verra finds that the journalists grossly misrepresented the Guizar-Coutiño et al. findings by comparing them with project developers’ early-stage predictions, rather than Verra’s issuances of carbon credits.

Verra further finds that this flawed comparison seriously undermines the journalists’ conclusions about the REDD projects in question.

As noted above, The Guardian looked at the results of only 29 of the 36 different REDD projects considered by West et al. 2020 and West et al. 2023, per Table 3. The Guardian offered no explanation for excluding the other projects, either publicly or to Verra, other than a passing reference that further analysis was not possible.

Table 3: Subset of projects considered by The Guardian

 

Also as noted above, The Guardian did “further analysis” on the above 29 projects in order to reach its conclusion that 94% of the credits from those projects should not have been issued. The sum total of their published analysis reads as follows: “The journalists were able to do further analysis on those projects, comparing the estimates made by the offsetting projects with the results obtained by the scientists. The analysis indicated about 94% of the credits the projects produced should not have been approved.”

As for Guizar-Coutiño et al., the Guardian also looked at a subset of REDD projects, in this case 32 out of 40. The Guardian published no explanation for excluding the other 8 projects, other than a passing reference that it was not possible to consider them, and provided Verra only with explanations of a few words each for their exclusion.

In a similar vein to its approach with West et al. 2020 and West et al. 2023, the Guardian “analysed these results more closely” to reach its conclusion that baselines were inflated by approximately 400%. The Guardian published no analysis to explain how it reached this conclusion.

In addition, the journalists’ scientific credentials to interpret the studies are not clear, and certainly have not been disclosed.

Verra finds that the journalists failed to properly explain their exclusion of 7 of the projects (19%) considered by West et al. 2020 and West et al. 2023 and 8 of the projects (20%) considered by Guizar-Coutiño et al. Verra also finds, based on the information before it, that the journalists may have interpreted the results of the studies without having the appropriate scientific credentials to do so.

Verra further finds that the failure to properly explain the exclusion of certain projects, as well as the lack of transparency over the credentials of the people who conducted the “further analysis” of the studies, seriously undermine the journalists’ conclusions about the REDD projects in question.
