Introduction
Climate change is intensifying many kinds of extreme events, including droughts, storms, megafires, and heatwaves. Record-shattering extremes, which exceed previous records by a wide margin, pose particular challenges because they can overwhelm natural and social systems, and their occurrence strains both scientific understanding and communication. The persistence of old temperature records, despite global warming, raises questions about the pace of climate change and the accuracy of climate models. How record-setting relates to climate change depends on the type of extreme. For well-aggregated data (e.g., monthly global-mean temperatures), the underlying trend typically dominates variability, and new records are driven primarily by climate change. For disaggregated data, such as daily temperatures at individual locations, variability can be large relative to the trend, and record-setting rates may show little response to climate change. This paper focuses on local, record-shattering extremes, using a large ensemble approach to analyze how these extremes behave in large samples and how sample size shapes their perception.
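The trend-versus-variability contrast above can be illustrated with a minimal simulation (hypothetical parameters, not values from the paper): the same warming trend produces many records when variability is small relative to it, but close to the no-trend record rate when local variability is large.

```python
import numpy as np

def count_records(series):
    """Count how many values set a new running maximum (a record)."""
    running_max = -np.inf
    records = 0
    for x in series:
        if x > running_max:
            records += 1
            running_max = x
    return records

rng = np.random.default_rng(0)
years = np.arange(100)
trend = 0.02 * years  # hypothetical warming trend, deg C per year

# Aggregated case: variability small relative to the trend -> frequent records
aggregated = trend + rng.normal(0.0, 0.1, size=years.size)
# Local (disaggregated) case: the same trend buried in large variability
local = trend + rng.normal(0.0, 2.0, size=years.size)

print(count_records(aggregated), count_records(local))
```

For a purely random (trend-free) 100-year series, the expected record count is the 100th harmonic number, about 5.2; the aggregated series far exceeds this while the noisy local series stays close to it.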
Literature Review
Existing literature highlights the sensitivity of extreme-event estimates to sample size in both statistical and modeling studies. Maximum likelihood estimators are known to be sample-size sensitive, and prior work has examined how many observations are needed for reliable estimates of extreme-event likelihoods. Studies using large model ensembles have likewise demonstrated this sensitivity. Recent work shows that very large ensembles can generate local, record-shattering heat extremes comparable to those in observations. The June 2021 Pacific Northwest heatwave, a striking statistical outlier, serves as a case study, and existing studies suggest such events will become more frequent in the future. This research adds analysis of how rare such events are at fixed locations and how slowly their frequency increases.
Methodology
This study defines a record-shattering extreme as an event that sets a record by a wide margin and is a demonstrable outlier. The analysis uses two datasets: daily maximum temperature records at SeaTac airport (1948-present) from the Global Historical Climatology Network (GHCN), and hindcasts from a decadal climate forecast system (ACCESS-D) built on the GFDL CM2.1 coupled model. The annual maximum of daily maximum temperature (TXx) serves as the heatwave index. Because the SeaTac observations contain only a single record-shattering event, its likelihood is assessed using a generalized extreme value (GEV) distribution fitted to the data. The GEV fitting considers the impact of excluding the record-breaking event, and the effect of successively removing each observation in turn. The ACCESS-D hindcast comprises 10-year forecasts with daily output, initialized from 1995 to 2020 with 96 ensemble members, providing a very large sample (44,928 years). Bias correction adjusts the model's maximum temperatures to align with SeaTac observations. Return periods are assessed both by direct sampling of the large ensemble and by GEV fitting, with attention to how sample size affects the estimates. The weather patterns associated with the record-breaking day and with the hottest days in the model ensemble are examined using 500 hPa geopotential height fields, and pattern-matching techniques evaluate how repeatable those patterns are. The relationship between the hottest temperature in a sample and the sample size is explored by random sampling from the GEV distribution for observations and by direct sampling from the model ensemble. The impact of non-stationarity and warming is analyzed by examining TXx distributions as a function of calendar year and the maximum TXx value in each year. Finally, a schematic model illustrates the effects of sampling bias and selection bias on the persistence of record-shattering events.
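The GEV-based steps above can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the paper's SeaTac series or fitted parameters; note that SciPy's shape parameter `c` is the negative of the usual climatological convention.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Hypothetical stand-in for a station's annual-maximum temperature (TXx)
# series; the real analysis uses SeaTac GHCN observations.
txx = genextreme.rvs(c=0.1, loc=33.0, scale=1.5, size=74, random_state=rng)

# Fit a GEV distribution to the annual maxima by maximum likelihood.
c, loc, scale = genextreme.fit(txx)

# Return period of a record-shattering value, e.g. an event 5 deg C above
# the sample maximum, from the fitted exceedance probability.
event = txx.max() + 5.0
p_exceed = genextreme.sf(event, c, loc=loc, scale=scale)
return_period = 1.0 / p_exceed if p_exceed > 0 else np.inf

# Sensitivity check: refit with the largest value excluded, mirroring the
# paper's test of how a single outlier shifts the fitted tail.
txx_wo_max = np.delete(txx, np.argmax(txx))
c2, loc2, scale2 = genextreme.fit(txx_wo_max)

print(round(c, 3), round(c2, 3), p_exceed)
```

If the fitted shape implies a bounded upper tail and the event lies beyond the bound, the exceedance probability is zero and the return period is infinite, which is one way such fits can understate the plausibility of record-shattering events.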
Key Findings
The record-shattering heat in the Pacific Northwest in June 2021 was a demonstrable outlier. The hottest days in the model ensemble exhibited large-scale and synoptic patterns similar to the observed event, but were highly sensitive to the precise alignment of weather systems. Return period estimates from the model ensemble carried large uncertainties, strongly influenced by sample size: samples of approximately 5000 years were needed to place reasonable bounds on return periods. The hottest day simulated in the model kept increasing with sample size, even up to almost 45,000 years. Small samples are therefore highly unlikely to capture the most extreme events, given the influence of chance weather patterns. While warming was evident in the ensemble, the extreme hottest days showed no clear upward trend, underscoring the dominance of weather variability in extremely hot days. The persistence of record-breaking heat is attributable to sampling and selection bias: once a record-shattering event occurs, the likelihood of another similar event in a small sample of subsequent years is minuscule. Climate models are at a disadvantage because they do not benefit from the selection bias inherent in observations (searching across many locations and periods), and their sample sizes may not be large enough to find the rarest events. The study concludes that the frequency of record-breaking events increases over time because climate change raises the baseline temperature; it is less about generating new extreme events than about lifting less extreme events above past records.
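The finding that the hottest simulated day keeps rising with sample size can be illustrated with a small Monte Carlo sketch. The Gumbel population below is a hypothetical stand-in, not the ACCESS-D distribution, but it shows the same qualitative behavior: the sample maximum grows slowly and does not saturate even at tens of thousands of years.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the model's TXx population: Gumbel-distributed
# annual maxima (illustrative parameters only).
population = rng.gumbel(loc=33.0, scale=1.5, size=50_000)

# Hottest value found in progressively larger samples, echoing the result
# that the maximum keeps rising out to ~45,000 years of model data.
for n in (100, 1_000, 5_000, 45_000):
    print(n, round(population[:n].max(), 2))
```

Because each larger sample contains the smaller ones, the printed maxima are non-decreasing; the relevant point is how far the 45,000-year maximum sits above the 100-year one, which is why small samples rarely contain record-shattering values.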
Discussion
The findings address the research question by demonstrating the profound influence of weather variability and sampling limitations on record-shattering heat extremes. The study highlights the difficulty of using climate models to represent such events accurately, given the high sensitivity of extremely hot days to precise weather patterns. The results are significant because they explain the persistence of record-shattering heat records even in a warming climate, and they emphasize the need for large ensemble simulations to capture such rare events. The relevance to the field lies in improved model evaluation and in the interpretation of extreme-event occurrences, particularly in attribution studies. The findings underscore that the absence of an extreme event in a model sample does not automatically indicate a model shortcoming; it may instead reflect insufficient sample size or the absence of the selection bias inherent in observational searches.
Conclusion
This study illustrates how record-shattering heat extremes are very sensitive to sampling and chance weather configurations. Large ensembles are necessary for capturing these rare events. The persistence of record-shattering extremes is explained by the interplay of selection bias in observations and insufficient sample sizes in model projections. Future work should focus on extending the analysis to other types of extreme events and explore the influence of regime changes on these findings.
Limitations
The study focuses primarily on a single location and a single model. While the model ensemble size is exceptionally large, it may not fully replicate the complexity of real-world climate dynamics. The extrapolated model data used to evaluate non-stationarity rely on assumptions about future climate trends that may not accurately reflect actual changes.