Describe the control processes in place to ensure quality of research.
Our data quality processes vary depending on the source of the data and the level of quality assurance applied before the data enters our possession. The climate data has been vetted through rigorous review processes in the scientific community before being made available for use, and we have high confidence in its quality under the standard parameters of our analysis. Our methods for applying this climate data to specific locations are also vetted by a team of expert advisors. We acknowledge that uncertainty exists in both the climate model projections and the assumptions we make about non-linear business impacts. We apply strict statistical validation methods to account for model uncertainties and to ensure a practicable level of directional accuracy in these estimates.
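One simple form such a validation check on ensemble projections can take is measuring directional agreement across ensemble members. This is an illustrative sketch only, not the actual validation method used; the function name and the agreement criterion are assumptions.

```python
def directional_agreement(projections):
    """Fraction of ensemble members whose sign matches the ensemble mean.

    A high value suggests the ensemble agrees on the direction of change,
    supporting a claim of directional accuracy; a low value flags the
    estimate for further review. Illustrative criterion only.
    """
    mean = sum(projections) / len(projections)
    if mean == 0:
        return 0.0  # no clear direction to agree with
    agree = sum(1 for p in projections if (p > 0) == (mean > 0))
    return agree / len(projections)
```

For example, an ensemble of four projected impact deltas where three are positive and one is negative yields an agreement score of 0.75, which a review threshold could then accept or reject.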
Other data sources, such as company asset-level data, come from third-party sources with some level of quality control, but not necessarily at the level of rigor needed for the type of analysis we perform. These sources require us to apply more direct quality control measures. We do this through desktop research, focusing in detail on the largest drivers of results and performing spot checks on the remaining components to identify any systematic errors. When we encounter quality issues in the source data, our preference is to use alternative sources or, failing that, to downgrade the weighting of that data source.
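The spot-check-and-reweight step described above can be sketched roughly as follows. The field names, validity rules, and penalty scheme are hypothetical illustrations, not the production logic.

```python
import random

def spot_check(records, sample_size=5, seed=0):
    """Sample records from a third-party source and flag basic quality issues.

    The checks here (non-negative asset value, non-empty location) are
    placeholder rules standing in for the actual desktop-research criteria.
    """
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    issues = []
    for rec in sample:
        if rec.get("asset_value") is None or rec["asset_value"] < 0:
            issues.append((rec["asset_id"], "invalid asset_value"))
        if not rec.get("location"):
            issues.append((rec["asset_id"], "missing location"))
    return issues

def adjust_weight(base_weight, issues, sample_size, max_penalty=0.5):
    """Downgrade a source's weight in proportion to the issue rate found."""
    issue_rate = min(len(issues) / sample_size, 1.0)
    return base_weight * (1.0 - max_penalty * issue_rate)
```

A source with no flagged issues keeps its full weight, while one whose sample is entirely problematic loses up to half of it under this assumed penalty cap.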
Finally, we employ quality assurance and control procedures throughout our analysis. Quality checks occur before and after data transformations and are performed by individuals with designated QA/QC responsibilities. These procedures enable us to find and correct errors and inconsistencies in the data, resulting in a more robust data set. Some limitations nevertheless remain, and we aim to be as transparent as possible about them when communicating with clients.
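Pre- and post-transformation checks of this kind can be sketched as a pair of validation and reconciliation functions. The required fields, tolerance, and reconciliation rules below are assumed for illustration only.

```python
def pre_check(rows, required_fields=("asset_id", "value")):
    """Before a transformation: return indices of rows missing required fields."""
    return [i for i, row in enumerate(rows)
            if any(row.get(f) is None for f in required_fields)]

def post_check(input_rows, output_rows, tolerance=1e-6):
    """After a transformation: reconcile record counts and value totals."""
    errors = []
    if len(input_rows) != len(output_rows):
        errors.append("row count mismatch")
    in_total = sum(r["value"] for r in input_rows if r.get("value") is not None)
    out_total = sum(r["value"] for r in output_rows if r.get("value") is not None)
    if abs(in_total - out_total) > tolerance:
        errors.append("value total mismatch")
    return errors
```

Running the pre-check before each transformation and the post-check after it catches dropped rows and silently altered totals, which are typical of the errors and inconsistencies such procedures surface.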