Application of an interpretable prototypical-part network to subseasonal-to-seasonal climate prediction over North America
Date
2024
Abstract
In recent years, the use of neural networks for weather and climate prediction has greatly increased. To explain the decision-making process of such machine-learning "black-box" models, most research has focused on explainable artificial intelligence (XAI) methods, which attempt to explain a network's decisions after it has been trained. An alternative approach is to build neural network architectures that are inherently interpretable, that is, networks whose entire decision-making process can be understood by a human rather than explained post hoc. Here, we apply such a neural network architecture, named ProtoLNet, in a subseasonal-to-seasonal climate prediction setting. ProtoLNet identifies predictive patterns in the training data that can be used as prototypes to classify the input, while also accounting for the absolute location of each prototype in the input field. In our application, we use data from the Community Earth System Model version 2 (CESM2) pre-industrial long control simulation and train ProtoLNet to identify prototypes in precipitation anomalies over the Indian and North Pacific Oceans to forecast 2-meter temperature anomalies along the western coast of North America on subseasonal-to-seasonal timescales. These identified CESM2 prototypes are then projected onto fifth-generation ECMWF Reanalysis (ERA5) data to predict observed temperature anomalies several weeks ahead. We compare the performance of ProtoLNet trained on CESM2 with that trained on ERA5. We then demonstrate a novel approach for performing transfer learning between CESM2 and ERA5 data, which allows us to identify skillful prototypes in the observations. We show that ProtoLNet's predictions on both datasets are skillful while also being interpretable, sensible, and useful for drawing conclusions about what the model has learned.
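To make the prototype-matching step concrete, below is a minimal NumPy sketch of a location-scaled similarity computation of the kind ProtoLNet performs. The function and variable names (prototype_scores, location_scale) and the log-similarity form (borrowed from the earlier ProtoPNet architecture) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prototype_scores(latent, prototypes, location_scale, eps=1e-4):
    """Score each prototype against a latent field, weighting by location.

    latent:         (H, W, D) latent feature map from a CNN encoder
    prototypes:     (P, D) learned prototype vectors (1x1 spatial extent assumed)
    location_scale: (P, H, W) learned non-negative weights that let the
                    absolute grid location of a match affect its score
    """
    # Squared L2 distance between every latent grid point and every prototype
    diff = latent[None, :, :, :] - prototypes[:, None, None, :]  # (P, H, W, D)
    dist = np.sum(diff ** 2, axis=-1)                            # (P, H, W)

    # ProtoPNet-style log similarity: large when the distance is small
    sim = np.log((dist + 1.0) / (dist + eps))                    # (P, H, W)

    # Location scaling is the modification that makes the *absolute*
    # position of a prototype match matter, as described in the abstract
    scaled = sim * location_scale                                # (P, H, W)

    # One score per prototype: its best location-weighted match anywhere
    return scaled.reshape(len(prototypes), -1).max(axis=1)       # (P,)

# Toy usage on random data: 4 prototypes over a 16 x 32 latent grid
rng = np.random.default_rng(0)
scores = prototype_scores(
    latent=rng.normal(size=(16, 32, 8)),
    prototypes=rng.normal(size=(4, 8)),
    location_scale=rng.uniform(size=(4, 16, 32)),
)
print(scores.shape)  # (4,)
```

In such a design, the per-prototype scores would feed a final linear layer that predicts the temperature-anomaly class, so every prediction can be traced back to which prototype matched, how strongly, and where.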