Browsing by Author "Barnes, Elizabeth, committee member"
Now showing 1 - 18 of 18
Item Open Access Advances in statistical analysis and modeling of extreme values motivated by atmospheric models and data products (Colorado State University. Libraries, 2018) Fix, Miranda J., author; Cooley, Daniel, advisor; Hoeting, Jennifer, committee member; Wilson, Ander, committee member; Barnes, Elizabeth, committee member

This dissertation presents applied and methodological advances in the statistical analysis and modeling of extreme values. We detail three studies motivated by the types of data found in the atmospheric sciences, such as deterministic model output and observational products. The first two investigations represent novel applications and extensions of extremes methodology to climate and atmospheric studies. The third investigation proposes a new model for areal extremes and develops methods for estimation and inference from the proposed model. We first detail a study which leverages two initial condition ensembles of a global climate model to compare future precipitation extremes under two climate change scenarios. We fit non-stationary generalized extreme value (GEV) models to annual maximum daily precipitation output and compare impacts under the RCP8.5 and RCP4.5 scenarios. A methodological contribution of this work is to demonstrate the potential of a "pattern scaling" approach for extremes, in which we produce predictive GEV distributions of annual precipitation maxima under RCP4.5 given only global mean temperatures for this scenario. We compare results from this less computationally intensive method to those obtained from our GEV model fitted directly to the RCP4.5 output and find that pattern scaling produces reasonable projections. The second study examines, for the first time, the capability of an atmospheric chemistry model to reproduce observed meteorological sensitivities of high and extreme surface ozone (O3).
This work develops a novel framework in which we make three types of comparisons between simulated and observational data, comparing (1) tails of the O3 response variable, (2) distributions of meteorological predictor variables, and (3) sensitivities of high and extreme O3 to meteorological predictors. This last comparison is made using quantile regression and a recent tail dependence optimization approach. Across all three study locations, we find substantial differences between simulations and observational data in both meteorology and meteorological sensitivities of high and extreme O3. The final study is motivated by the prevalence of large gridded data products in the atmospheric sciences, and presents methodological advances in the (finite-dimensional) spatial setting. Existing models for spatial extremes, such as max-stable process models, tend to be geostatistical in nature as well as very computationally intensive. Instead, we propose a new model for extremes of areal data, with a common-scale extension, inspired by the simultaneous autoregressive (SAR) model in classical spatial statistics. The proposed model extends recent work on transformed-linear operations applied to regularly varying random vectors, and is unique among extremes models in being directly analogous to a classical linear model. We specify a sufficient condition on the spatial dependence parameter such that our extreme SAR model has desirable properties. We also describe the limiting angular measure, which is discrete, and the corresponding tail pairwise dependence matrix (TPDM) for the model. After examining model properties, we investigate two approaches to estimation and inference for the common-scale extreme SAR model. First, we consider a censored likelihood approach, implemented using Bayesian MCMC with a data augmentation step, but find that this approach is not robust to model misspecification.
As an alternative, we develop a novel estimation method that minimizes the discrepancy between the TPDM for the fitted model and the estimated TPDM, and find that it is able to produce reasonable estimates of extremal dependence even in the case of model misspecification.

Item Open Access Airborne radar quality control and analysis of the rapid intensification of Hurricane Michael (2018) (Colorado State University. Libraries, 2020) DesRosiers, Alexander J., author; Bell, Michael M., advisor; Barnes, Elizabeth, committee member; Chen, Suren, committee member

Improvements made by the National Hurricane Center (NHC) in track forecasts have outpaced advances in intensity forecasting. Rapid intensification (RI), an increase of at least 30 knots in the maximum sustained winds of a tropical cyclone (TC) in a 24-hour period, is poorly understood and poses a considerable hurdle to intensity forecasting. RI depends on internal processes that require detailed inner-core information to understand. Close-range measurements of TCs from aircraft reconnaissance with tail Doppler radar (TDR) allow for retrieval of the kinematic state of the inner core. Fourteen consecutive passes were flown through Hurricane Michael (2018) as it underwent RI on its way to landfall at category 5 intensity. The TDR data collected offered an exceptional opportunity to diagnose mechanisms that contributed to RI. Quality control (QC) is required to remove radar gates originating from non-meteorological sources, which can impair dual-Doppler wind synthesis techniques. Automation of the time-consuming manual QC process was needed to utilize all TDR data collected in Hurricane Michael in a timely manner. The machine learning (ML) random forest technique was employed to create a generalized QC method for TDR data collected in convective environments. The complex decision-making ability of ML offered an advantage over past approaches.
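This kind of gate-level weather/non-weather classification can be sketched with scikit-learn's random forest. The data below are synthetic, and the feature names (reflectivity, spectrum width, velocity texture) are illustrative stand-ins, not the dissertation's actual predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-gate radar predictors (names are illustrative).
n = 2000
reflectivity = rng.normal(20.0, 10.0, n)
spectrum_width = rng.gamma(2.0, 1.5, n)
velocity_texture = rng.gamma(1.5, 2.0, n)
X = np.column_stack([reflectivity, spectrum_width, velocity_texture])

# Label gates "weather" when velocity texture is low and reflectivity is
# moderate, mimicking a manually QC'ed training set.
y = ((velocity_texture < 4.0) & (reflectivity > 5.0)).astype(int)

# Train on half the gates, evaluate on the withheld half.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[: n // 2], y[: n // 2])
accuracy = clf.score(X[n // 2 :], y[n // 2 :])
print(f"held-out accuracy: {accuracy:.3f}")
```

In practice the trained forest would be applied gate by gate to new radar sweeps, with the predicted non-weather gates removed before dual-Doppler synthesis.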
A dataset of radar scans from a tornadic supercell, a bow echo, and mature and developing TCs collected by the Electra Doppler Radar (ELDORA), containing approximately 87.9 million radar gates, was mined for predictors. Previous manual QC performed on the data was used to classify each data point as weather or non-weather. This varied dataset was used to train a model which successfully classified over 99% of the radar gates in the withheld testing data. A dual-Doppler analysis of a tropical depression produced with the ML-based QC was comparable to one produced with manual QC, confirming the utility of this new method. The framework developed was capable of performing QC on the majority of the TDR data from Hurricane Michael. Analyses of the inner core of Hurricane Michael were used to document inner-core changes throughout RI. Angular momentum surfaces moved radially inward and became more vertically aligned over time. The hurricane-force wind field expanded radially outward and increased in depth. Intensification of the storm became predominantly axisymmetric as RI progressed. TDR-derived winds were used to infer upper-level processes that influenced RI at the surface. Tilting of ambient horizontal vorticity, created by the decay of tangential winds aloft, by the axisymmetric updraft created a positive vorticity tendency atop the existing vorticity tower. A vorticity budget helped demonstrate how the axisymmetric vorticity tower built both upward and outward in the sloped eyewall. A retrieval of the radial gradient of density temperature provided evidence for an increasing warm-core temperature perturbation in the eye. Growth of the warm-core temperature perturbation at upper levels, aided by subsidence, helped lower the minimum sea level pressure, which correlated with intensification of the near-surface wind field.

Item Open Access Dynamics of flow in river bends (Colorado State University. Libraries, 2018) Aseperi, Oladapo, author; Venayagamoorthy, Subhas K., advisor; Julien, Pierre Y., committee member; Ramirez, Jorge A., committee member; Barnes, Elizabeth, committee member

Water is indispensable to life, and the means by which it is conveyed is equally important. Natural rivers and manmade channels play a critical role in this respect because they are vital for water supply, navigation, and the transport of sediments, pollutants, and nutrients. Most natural rivers have meandering (curved) geometries which make a direct study of their flow dynamics cumbersome. To reduce this complexity, natural rivers are usually idealized as open channel bends with rigid boundaries in order to gain insights into the flow dynamics. As such, this research examines the dynamics of flow in open channel bends with rigid boundaries using computational fluid dynamics (CFD). The particular CFD code used in this research discretizes the equations of fluid motion (i.e., the Navier-Stokes equations) using a finite volume scheme while tracking the free surface with the volume of fluid method. Turbulence was incorporated into the solution of the equations using large eddy simulation techniques. Although the general aim is to improve current understanding of natural river bend physics, the specific aims of this research are threefold: (1) to study the effects of radius of curvature on the flow physics of an idealized river bend; (2) to study in detail the effect of a variation in curvature length on the flow structure and dynamics of an open channel bend; and (3) to examine in detail the effect of inertial forces on the flow dynamics of an idealized river bend by varying the inflow Froude number. While some of the findings in this research confirm results that have already appeared in the literature, a significant number of results highlight new insights into dynamic events in an open channel bend.
As a concrete example of the effect of curvature on the flow structure, simulation results show that the maximum bed and wall shear stresses are exerted on the inner wall at the entrance to the curve regardless of curvature. However, further into the bend, the maximum shear stress shifts to the outer bend and wall region. Furthermore, the angular distance into the bend at which this occurs is found to depend on the curvature of the channel. Thus, for a mild channel, the maximum shear stress shifts to the outer bend and wall region a short angular distance from the entrance. This distance increases with a decrease in radius of curvature (i.e., as the channel gets tighter), with the maximum shear stress in the tightest channel simulated in this study always occurring on the inner side of the bend for the entire channel length. Another key finding comes from an investigation of the effect of the variation of curvature length on the flow structure and dynamics of open channel bends. It was found that the flow circulation pattern depends on the curvature length. Simulation results showed that shorter channel bends reached fully developed vortical states faster than similar channels with longer lengths. Furthermore, new results from this study provide a clear explanation for the emergence of a three-cell circulation structure in tight channel bends, which occurs as a result of the splitting of the main circulation cell due to the enhanced vorticity in tight bends. Finally, the study on the effects of Froude number on flow structure clearly shows that an increase in the inertia of the fluid does not affect the radial pressure gradient force (a very important force that plays a critical role in shaping the bend channel dynamics) in a mild channel. Remarkably, in the tight channels there appears to be a positive correlation between the magnitude of the fluid inertia (as measured by the velocity) and the radial pressure gradient force.
This finding has important implications for the modeling of river bends, since geometric factors are not sufficient to adequately parameterize the flow structure under certain circumstances in reduced-order models. These and other results not mentioned in this abstract are detailed in this dissertation. The overall aim of this research is to provide better insights into bend channel flow dynamics so as to enable engineers to carry out more accurate river modeling and training works.

Item Open Access Estimating the likelihood of significant climate change with the NCAR 40-member ensemble (Colorado State University. Libraries, 2014) Foust, William Eliott, author; Thompson, David, advisor; Randall, David, committee member; Barnes, Elizabeth, committee member; Cooley, Daniel, committee member

Increasing greenhouse gas concentrations are changing the radiative forcing on the climate system, and this forcing will be the key driver of climate change over the 21st century. One of the most pressing questions associated with climate change is whether certain aspects of the climate system will change significantly. Climate ensembles are often used to estimate the probability of significant climate change, but they struggle to produce accurate estimates because they can require more realizations than are feasible to produce. Additionally, the ensemble mean suggests how the climate will respond to an external forcing, but since it filters out the variability, it cannot determine whether the response is significant. In this study, the NCAR CCSM 40-member ensemble and a lag-1 autoregressive model (AR1 model) are used to estimate the likelihood that climate trends will be significant. The AR1 model yields an analytic solution for what the distribution of trends would be if the NCAR model were run an infinite number of times. This analytical solution is used to assess the significance of future climate trends.
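The AR1-based significance idea can be illustrated with a short Monte Carlo sketch: simulate many AR1 realizations of internal variability, compute the least-squares trend of each, and ask how often internal variability alone produces a trend as large as some forced trend of interest. The parameter values below are illustrative assumptions, not those of the NCAR ensemble:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative AR1 parameters (assumed, not fitted to the NCAR ensemble).
phi, sigma = 0.6, 1.0       # lag-1 autocorrelation, innovation std dev
n_years, n_sims = 50, 5000

t = np.arange(n_years)
t_centered = t - t.mean()

# Simulate many AR1 realizations of internal variability.
x = np.zeros((n_sims, n_years))
for k in range(1, n_years):
    x[:, k] = phi * x[:, k - 1] + sigma * rng.standard_normal(n_sims)

# OLS trend of each realization (units per year).
trends = x @ t_centered / np.sum(t_centered**2)

# Probability that internal variability alone produces a trend larger
# than some hypothetical forced trend of interest.
forced_trend = 0.03
p_exceed = np.mean(np.abs(trends) > forced_trend)
print(f"trend spread: {trends.std():.4f}, P(|trend| > {forced_trend}): {p_exceed:.3f}")
```

A forced trend would then be called significant when it falls in the tail of this null trend distribution; the analytic AR1 solution plays the role of the simulated histogram here.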
The results of this study demonstrate that an AR1 model can aid in making a probabilistic forecast. Additionally, the results give insight into the certainty of trends in the surface temperature field, precipitation field, and atmospheric circulation; the probability of climate trends being significant; and whether the significance of climate trends depends on internal variability or anthropogenic forcing.

Item Open Access Fire and ice: analyzing ice nucleating particle emissions from western U.S. wildfires (Colorado State University. Libraries, 2019) Barry, Kevin Robert, author; Kreidenweis, Sonia, advisor; DeMott, Paul, advisor; Barnes, Elizabeth, committee member; Farmer, Delphine, committee member

Wildfires in the western U.S. can have impacts on health and air quality and are forecast to increase in the future. Some of the particles released from wildfires can affect cloud formation by serving as ice nucleating particles (INPs). INPs are necessary for heterogeneous ice formation in mixed-phase clouds at temperatures warmer than about -38 °C and can have climate implications through radiative impacts on cloud phase and effects on cloud lifetime. Wildfires have been shown to be a potential source of INPs in previous ground-based measurement studies, but almost no data exist at the free-tropospheric level that is relevant for cloud formation. The Western Wildfire Experiment for Cloud Chemistry, Aerosol Absorption, and Nitrogen (WE-CAN) campaign, conducted in summer 2018, utilized the NSF/NCAR C-130 to sample many smoke plumes of various ages in the free troposphere as well as aged smoke in the boundary layer. INP measurements were made with the CSU Continuous Flow Diffusion Chamber (CFDC) and with aerosol filter collections analyzed offline with the CSU Ice Spectrometer (IS).
The results presented in this thesis indicate a contribution of smoke to the INP number concentration budget over the plume-background air, but much variability exists in concentrations and in INP composition among fires. Treatments of the filter suspensions show a dominant organic influence in all plume filters analyzed, while a biological INP population is evident in several cases. For the South Sugarloaf fire, which had a primary fuel of sagebrush shrubland, the highest INP concentrations of the campaign were measured, and the unique INP temperature spectrum suggests lofting of uncombusted plant material. Normalization of INP concentrations measured in WE-CAN confirms that smoke is not an especially efficient source of ice nucleating particles; however, emissions impacts may still occur regionally. The determination of a Normalized Excess Mixing Ratio (NEMR) of INP emissions for the first time will permit modeling of such impacts, and possible in-plume INP production will be explored in future research.

Item Open Access Investigating the enhancement of air pollutant predictions and understanding air quality disparities across racial, ethnic, and economic lines at US public schools (Colorado State University. Libraries, 2022) Cheeseman, Michael J., author; Pierce, Jeffrey R., advisor; Barnes, Elizabeth, committee member; Fischer, Emily, committee member; Ford, Bonne, committee member; Volckens, John, committee member

Ambient air pollution has significant health and economic impacts worldwide. Even in the most developed countries, monitoring networks often lack the spatiotemporal density to resolve air pollution gradients. Though air pollution affects the entire population, it can disproportionately affect disadvantaged and vulnerable communities.
Pollutants such as fine particulate matter (PM2.5), nitrogen oxides (NO and NO2), and ozone, which have a variety of anthropogenic and natural sources, have garnered substantial research attention over the last few decades. Over half the world's population, and over 80% of Americans, live in urban areas, and yet many cities have only one or a few air quality monitors, which limits our ability to capture differences in exposure within cities and estimate the resulting health impacts. Improving sub-city air pollution estimates could improve epidemiological and health-impact studies in cities with heterogeneous distributions of PM2.5, providing a better understanding of communities at risk from urban air pollution. Biomass burning is a source of PM2.5 air pollution that can impact both urban and rural areas, but quantifying the health impacts of PM2.5 from biomass burning can be even more difficult than from urban sources. Monitoring networks generally lack the spatial density needed to capture the heterogeneity of biomass burning smoke, especially near the source fires. Due to the limitations of both urban and rural monitoring networks, several techniques have been developed to supplement and enhance air pollution estimates. For example, satellite aerosol optical depth (AOD) can be used to fill spatial gaps in PM monitoring networks, but AOD can be disconnected from time-resolved surface-level PM in a multitude of ways, including the limited overpass times of most satellites, daytime-only measurements, cloud cover, surface reflectivity, and lack of vertical-profile information. Observations of smoke plume height (PH) may provide constraints on the vertical distribution of smoke and its impact on surface concentrations. Low-cost sensor networks have been rapidly expanding to provide higher-density air pollution monitoring.
Finally, geophysical modeling, statistical techniques such as machine learning and data mining, and combinations of all of the aforementioned datasets have increasingly been used to enhance surface observations. In this dissertation, we explore several of these data sources and techniques for estimating air pollution and determining community exposure concentrations. In the first chapter of this dissertation, we assess PH characteristics from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) and evaluate its correlation with co-located PM2.5 and AOD measurements. PH is generally highest over the western US. The ratio PM2.5:AOD generally decreases with increasing PH:PBLH (planetary boundary layer height), showing that PH has the potential to refine surface PM2.5 estimates for collections of smoke events. Next, to estimate spatiotemporal variability in PM2.5, we use machine learning (random forests; RFs) and concurrent PM2.5 and AOD measurements from the Citizen Enabled Aerosol Measurements for Satellites (CEAMS) low-cost sensor network, as well as PM2.5 measurements from the Environmental Protection Agency's (EPA) reference monitors, during wintertime in Denver, CO, USA. The RFs predicted PM2.5 in a 5-fold cross validation (CV) with relatively high skill (95% confidence interval R2=0.74-0.84 for CEAMS; R2=0.68-0.75 for EPA), though the models were aided by the spatiotemporal autocorrelation of the PM2.5 measurements. We find that the most important predictors of PM2.5 are factors associated with pooling of pollution in wintertime, such as low planetary boundary layer heights (PBLH), stagnant wind conditions, and, to a lesser degree, elevation. In general, spatial predictors are less important than spatiotemporal predictors because temporal variability exceeds spatial variability in our dataset.
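A minimal sketch of this kind of cross-validated random-forest skill assessment, using synthetic stand-ins for the wintertime predictors (PBLH, wind speed, elevation) rather than the actual CEAMS/EPA data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-ins for hourly predictors of PM2.5 (names illustrative).
n = 1000
pblh = rng.uniform(100.0, 2000.0, n)   # planetary boundary layer height (m)
wind = rng.gamma(2.0, 2.0, n)          # wind speed (m/s)
elev = rng.uniform(1500.0, 1800.0, n)  # station elevation (m)
X = np.column_stack([pblh, wind, elev])

# Pollution pools under shallow boundary layers and stagnant winds.
pm25 = 10000.0 / pblh + 10.0 / (wind + 1.0) + rng.normal(0.0, 1.0, n)

# 5-fold cross-validated R^2, analogous to the skill scores quoted above.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(rf, X, pm25, cv=5, scoring="r2")
print(f"5-fold CV R2: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With real hourly data, the folds would ideally be split in time or space to avoid the autocorrelation-driven optimism noted in the abstract.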
Finally, although concurrent AOD is an important predictor in our RF model for hourly PM2.5, it does not improve model performance during our campaign period in Denver. Regardless, we find that low-cost PM2.5 measurements incorporated into an RF model were useful for interpreting meteorological and geographic drivers of PM2.5 over wintertime Denver. We also explore how the RF model performance and interpretation change based on different model configurations and data processing. Finally, we use high-resolution PM2.5 and nitrogen dioxide (NO2) estimates to investigate socioeconomic disparities in air quality at public schools in the contiguous US. We find that Black and African American, Hispanic, and Asian or Pacific Islander students are more likely to attend schools in locations where the ambient concentrations of NO2 and PM2.5 are above the World Health Organization's (WHO) guidelines for annual-average air quality. Specifically, we find that ~95% of students that identified as Asian or Pacific Islander, 94% of students that identified as Hispanic, and 89% of students that identified as Black or African American attended schools in locations where the 2019 ambient concentrations were above the WHO guideline for NO2 (10 μg m⁻³, or ~5.2 ppbv). Conversely, only 83% of students that identified as white and 82% of those that identified as Native American attended schools in 2019 where the ambient NO2 concentrations were above the WHO guideline. Similar disparities are found in annually averaged ambient PM2.5 across racial and ethnic groups, where students that identified as white (95%) and Native American (83%) had the smallest percentages above the WHO guideline (5 μg m⁻³), compared to students that identified with minoritized groups (98-99%). Furthermore, the disparity between white students and other minoritized groups, other than Native Americans, is larger at higher PM2.5 concentrations.
Students that attend schools where a higher percentage of students are eligible for free or reduced-price meals, which we use as a proxy for poverty, are also more likely to attend schools where the ambient air pollutant concentrations exceed WHO guidelines. These disparities also tend to increase in magnitude at higher concentrations of NO2 and PM2.5. We investigate the intersectionality of disparities across racial/ethnic and poverty lines by quantifying the mean difference between the lowest- and highest-poverty schools, and the most and least white schools, in each state, finding that most states have disparities above 1 ppbv of NO2 and 0.5 μg m⁻³ of PM2.5 across both. We also identify distinct regional patterns of disparities, highlighting differences between California, New York, and Florida. Finally, we highlight that disparities exist not only across an urban/non-urban divide but also within urban areas.

Item Open Access Ketones in the troposphere: studies of loss processes, emissions, and production (Colorado State University. Libraries, 2020) Brewer, Jared F., author; Fischer, Emily, advisor; Ravishankara, A. R., advisor; Barnes, Elizabeth, committee member; Jathar, Shantanu, committee member

Ketones play an important role in the atmospheric chemistry of the troposphere because they are oxidized VOCs that are both relatively abundant and sufficiently long-lived to be distributed regionally. Ketone photolysis is a potentially important source of HOx radicals in the upper troposphere; it can also serve as a source of peroxy radicals, which contribute to the formation of peroxyacyl nitrate-type (PAN-type) compounds. My thesis focuses on the atmospheric processes and budgets of the smaller ketones. In this thesis, we discuss a series of four studies aimed at understanding the importance of atmospheric ketones to the production of oxidants and PAN-type compounds.
The four studies covered here involve laboratory measurements, interpretation of atmospheric observations, and modeling calculations. Chapter 2 of this thesis discusses an update to, and global sensitivity analysis of, the global budget of acetone. We test how sensitive a global simulation of acetone is to literature-derived ranges of input factors used to represent (1) direct emissions and secondary natural sources of acetone from the biosphere; (2) loss via photolysis; and (3) dry deposition. We use the Morris method (one-at-a-time variations) to identify and prioritize potential reasons for model-measurement differences for acetone. This study helps identify which specific processes and/or geographic regions deserve further attention via modeling and/or measurements to constrain the global budget of this species. Of the sources tested, acetone is globally most sensitive to direct emissions from the biosphere, with other sources and sinks being important on a seasonal and regional basis. Chapter 3 presents the results of laboratory measurements of the absorption cross sections of MEK and DEK (along with their uncertainties), measured between 200 and 335 nm at temperatures ranging from 242 to 320 K with a spectral resolution of 1 nm. We also report absorption cross sections for PEK at the same resolution and wavelengths at 296 K. We present a simple "two-state" physically based model to understand the temperature variation of the cross sections and to extrapolate the cross sections beyond the temperatures of the measurements. The implementation of these temperature-dependent cross sections is most important in the colder upper troposphere, where this work suggests a ~20% decrease in the MEK photolysis rate relative to the previous understanding. In Chapter 4, we present an analysis of aircraft observations of MEK in the remote marine troposphere from the Atmospheric Tomography (ATom) project.
We show that the observed vertical profiles over clean oceans suggest an oceanic source of MEK. We show that the ocean serves as a source of MEK to the atmosphere during both meteorological winter and summer. MEK in clean marine air over the remote oceans correlates with both acetone and acetaldehyde, whose primary source in ocean water is the photooxidation of organic material. Finally, in Chapter 5, we bring together the information gathered in Chapters 2 and 3 to improve our ability to model MEK globally and, with these and other model improvements, present the first global budget of MEK. We discuss the magnitudes, distribution, and seasonality of the sinks, sources, and atmospheric mixing ratios of MEK as well. We also present a comparison of simulated MEK abundances against a suite of available aircraft observations of MEK from around the globe. Our results suggest that MEK is much less abundant in the atmosphere than acetone, but the fluxes of MEK into the atmosphere are about a tenth of those of acetone. The most important sources of MEK to the atmosphere are the ocean and the oxidation of primarily anthropogenic alkanes, while the most important sinks of MEK are photolysis and oxidation by OH. We draw on all four studies to show how our knowledge of the atmospheric roles of acetone and MEK has improved, and we identify gaps in our knowledge that should be pursued to further improve quantification of the roles of ketones in the troposphere.

Item Open Access Mixing in stably stratified turbulent flows: improved parameterizations of diapycnal mixing in oceanic flows (Colorado State University. Libraries, 2018) Garanaik, Amrapalli, author; Venayagamoorthy, Subhas Karan, advisor; Bienkiewicz, Bogusz, committee member; Barnes, Elizabeth, committee member; Julien, Pierre Y., committee member

Mixing of fluids with different properties across a gravitationally stable density interface, due to background turbulence, is a ubiquitous phenomenon in both natural and engineered flows. Fundamental understanding and quantitative prediction of turbulent mixing in stratified flows is a challenging problem, with a broad range of applications including (but not limited to) prediction of climate, ocean thermohaline circulation, global heat and mass budgets, and pollutant and nutrient transport. Large-scale geophysical flows such as those in the ocean and atmosphere are usually stably stratified, i.e., the density increases in the direction of the gravitational force. The stabilizing nature of the density layers tends to inhibit vertical motion. In such flows, diapycnal mixing, i.e., mixing of fluid across the isopycnal surfaces of constant density, plays a crucial role in the flow dynamics. In numerical models of large-scale flows, turbulent mixing is inherently a small-scale phenomenon that is difficult to resolve and is therefore generally parameterized using known bulk parameters of the flow. In the ocean, the mixing of water masses is typically represented through a turbulent (eddy) diffusivity of mass, Kρ. A widely used formulation for Kρ in oceanic flows is Kρ = Γϵ/N², where ϵ is the rate of dissipation of turbulent kinetic energy, N = √(-(g/ρ)(∂ρ/∂z)) is the buoyancy frequency of the background stratification, ρ is the density, Γ = Rƒ/(1 - Rƒ) is a mixing coefficient, and Rƒ is the mixing efficiency, which is widely (but questionably) assumed to be constant or sometimes parameterized. However, a robust and universal parameterization for the mixing efficiency remains elusive to date despite numerous studies on this topic.
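The eddy diffusivity formulation Kρ = Γϵ/N² quoted above can be evaluated directly; the numbers below are illustrative ocean-interior orders of magnitude (assumed for the sketch, not results from the dissertation):

```python
# Illustrative ocean-interior magnitudes (assumed, not from the dissertation).
eps = 1e-9              # TKE dissipation rate epsilon (m^2/s^3)
N = 5e-3                # buoyancy frequency (1/s)
Rf = 1.0 / 6.0          # mixing efficiency; ~0.17 is a commonly assumed value

# Mixing coefficient Gamma = Rf / (1 - Rf); equals 0.2 for Rf = 1/6.
Gamma = Rf / (1.0 - Rf)

# Turbulent (eddy) diffusivity of mass, K_rho = Gamma * eps / N^2.
K_rho = Gamma * eps / N**2
print(f"Gamma = {Gamma:.2f}, K_rho = {K_rho:.2e} m^2/s")
```

With these values the result is of order 10⁻⁵ m²/s, the magnitude commonly quoted for the ocean interior; the point of the dissertation is precisely that treating Γ as a constant in this formula is questionable.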
This research focuses on improved parameterizations of diapycnal mixing through an integration of theoretical knowledge with observational and high-resolution numerical simulation data. The main objectives are: (1) to provide a better assessment of field microstructure data and methodology for data analysis in order to develop and test appropriate parameterizations of the mixing efficiency; (2) to determine the relevant length and velocity scales for diapycnal mixing; (3) to provide improved parameterizations of diapycnal mixing grounded in physical reasoning and scaling analysis; and (4) to provide a practical field method to identify the dynamic state of turbulence in stably stratified flows from measurable length scales in the ocean. First, an analysis of field microstructure data collected from different locations in the ocean was performed to verify existing parameterizations. A key finding is that the mixing efficiency Rƒ does not scale with the buoyancy Reynolds number Reb, as has been proposed previously by others. Rather, Rƒ depends on the strength of the background stratification. In a strongly stratified thermocline, a constant value for the mixing efficiency is found to be reasonable, while for weakly stratified conditions (e.g., near boundaries) a parameterization is required. A discussion of different methods to estimate the background shear and stratification from field data is provided. Furthermore, present state-of-the-art microstructure instruments measure the small-scale dissipation rate of turbulent kinetic energy ϵ from one-dimensional components by invoking the small-scale isotropy assumption, which is strictly valid only for high Reynolds number flows. A quantitative assessment of the departure from isotropy in stably stratified flows is performed, and a pragmatic method is proposed to estimate the true three-dimensional dissipation (ϵ3D) from the one-dimensional dissipation (ϵ1D) obtained from microstructure profilers in the ocean.
Next, a scaling analysis for strongly stratified flow is presented to show that the true diapycnal length scale Ld and diapycnal velocity scale wd can be estimated from the measurable Ellison length scale LE and a measurable root-mean-square vertical velocity w′, using a turbulent Froude number defined as Fr = ϵ/(Nk), where k is the turbulent kinetic energy. It is shown that the eddy diffusivity Kρ can then be directly inferred from LE and w′. For weakly stratified flow regimes (Fr > O(1)), Kρ ~ w′LE, and for strongly stratified flow regimes (Fr < O(1)), Kρ ~ w′LE × Fr. This finding is confirmed with direct numerical simulation (DNS) data for decaying as well as sheared stratified turbulence. This result indicates that Fr is a relevant non-dimensional parameter for identifying the strength of stratification in stably stratified turbulent flows. DNS with particle tracking is performed to separate isopycnal and diapycnal displacements of fluid particles, an analysis that is not possible from an Eulerian approach or from standard field measurements. The Lagrangian analysis shows that LE is indeed an isopycnal length scale. Furthermore, having established that Fr is the signature parameter describing the state of stratified turbulence, a parameterization of the mixing coefficient Γ (or Rƒ) as a function of the turbulent Froude number Fr is developed using scaling arguments based on the energetics of the flow. The proposed parameterization is then verified using DNS data of decaying, sheared, and forced stratified turbulence. It is shown that for Fr << O(1), Γ ~ Fr⁰; for Fr ~ O(1), Γ ~ Fr⁻¹; and for Fr >> O(1), Γ ~ Fr⁻². Finally, a practically useful method to identify the dynamic state of turbulence in stably stratified flows is developed. Two commonly measurable length scales in the ocean are the Thorpe overturning length scale LT and the dimensionally constructed Ozmidov length scale LO.
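The two-regime scaling for Kρ described above can be sketched as follows; the proportionality constants are set to one, so the values are illustrative rather than calibrated:

```python
def turbulent_froude(epsilon, N, k):
    """Turbulent Froude number Fr = epsilon / (N * k)."""
    return epsilon / (N * k)

def eddy_diffusivity_scaling(w_rms, L_E, Fr):
    """Two-regime scaling for K_rho (proportionality constants set to 1):
    weakly stratified,   Fr >= 1:  K_rho ~ w' * L_E
    strongly stratified, Fr < 1:   K_rho ~ w' * L_E * Fr
    """
    return w_rms * L_E * (1.0 if Fr >= 1.0 else Fr)

# Illustrative numbers only (not observed ocean values):
Fr = turbulent_froude(epsilon=1e-8, N=1e-2, k=1e-4)   # = 0.01, strongly stratified
K_scaled = eddy_diffusivity_scaling(w_rms=1e-3, L_E=1.0, Fr=Fr)
```

The point of the scaling is that both inputs (LE and w′) are measurable in the field, so the strongly stratified regime's extra factor of Fr is the only additional quantity needed.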
From scaling analysis and DNS data of decaying, sheared, and forced stratified turbulence, a new relation between Fr and the length-scale ratio LT/LO is derived. The new scaling is Fr ~ (LT/LO)^(-2) for LT/LO > O(1) and Fr ~ (LT/LO)^(-2/3) for LT/LO < O(1).

Item Open Access Model post-processing for the extremes: improving forecasts of locally extreme rainfall (Colorado State University. Libraries, 2016) Herman, Gregory Reid, author; Schumacher, Russ, advisor; Barnes, Elizabeth, committee member; Cooley, Daniel, committee member

This study investigates the science of forecasting locally extreme precipitation events over the contiguous United States from a fixed-frequency perspective, as opposed to the traditionally applied fixed-quantity forecasting perspective. Frequencies are expressed as return periods, or recurrence intervals; return periods between 1 year and 100 years are analyzed for this study. Many different precipitation accumulation intervals may be considered in this perspective; this research focuses on 6- and 24-hour precipitation accumulations. The research presented herein discusses the beginnings of a comprehensive forecast system to probabilistically predict extreme precipitation events using a vast suite of dynamical numerical weather prediction model guidance. First, a recent climatology of extreme precipitation events is generated using the aforementioned fixed-frequency framework. The climatology created generally conforms with previous extreme precipitation climatologies over the US, with predominantly warm-season events east of the continental divide, especially to the north away from major bodies of water, and primarily cool-season events along the Pacific coast.
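In the fixed-frequency perspective described above, a T-year event corresponds to an annual exceedance probability of 1/T. A minimal empirical sketch of that mapping, using synthetic data (the study itself fits GEV distributions rather than taking raw empirical quantiles):

```python
import numpy as np

def return_period_threshold(annual_maxima, T_years):
    """Empirical fixed-frequency threshold: the (1 - 1/T) quantile of annual maxima.
    A T-year event has an annual exceedance probability of 1/T."""
    p = 1.0 - 1.0 / T_years
    return float(np.quantile(np.asarray(annual_maxima, dtype=float), p))

# Hypothetical 100-year record of annual-maximum daily precipitation (mm):
rng = np.random.default_rng(0)
ann_max = rng.gumbel(loc=60.0, scale=15.0, size=100)
thr_10yr = return_period_threshold(ann_max, T_years=10)   # 1-in-10-year level
thr_50yr = return_period_threshold(ann_max, T_years=50)   # 1-in-50-year level
```

A fitted GEV is preferred in practice because empirical quantiles cannot extrapolate beyond the record length (e.g., a 100-year level from a 20-year record).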
The performance of several operational and quasi-operational models of varying dynamical cores and resolutions is assessed with respect to their extreme precipitation characteristics; different biases are observed in different modeling systems, with one model dramatically overestimating extreme precipitation occurrences across the entire US, while another, coarser model fails to produce the vast majority of the rarest (50-100+ year) events, especially east of the Rockies, where most extreme precipitation events are found to be convective in nature. Some models with a longer available record of data are employed to develop model-specific quantitative precipitation climatologies by parametrically fitting right-skewed distributions to model precipitation data, and these fitted climatologies are then applied to extreme precipitation forecasting. Lastly, guidance from numerous models is examined and used to generate probabilistic forecasts for locally extreme rainfall events. Numerous methods, from the simple to the complex, are explored for generating forecast probabilities; it is found that more sophisticated methods of generating forecast probabilities from an ensemble of models can significantly improve forecast quality in every metric examined when compared with the most traditional probabilistic forecasting approach. The research concludes with the application of the forecast system to a recent extreme rainfall outbreak that impacted several regions of the United States.

Item Open Access Modeling of channel stacking patterns controlled by near wellbore modeling (Colorado State University. Libraries, 2023) Escobar Arenas, Luis Carlos, author; Stright, Lisa, advisor; Ronayne, Michael, committee member; Barnes, Elizabeth, committee member

Reservoir models of deep-water channels rely upon low-resolution but spatially extensive seismic data, high-vertical-resolution but spatially sparse well-log data, and geomodeling methods.
The results cannot predict architecture below seismic resolution or between well logs. Usually, the data and interpretations that provide constraints for modeling workflows do not capture sub-seismic-scale architecture. Therefore, standard modeling methods do not generate models that include details that can impact hydrocarbon flow and recovery. Constraining models to well and seismic data is problematic. Measured sections in the Tres Pasos Fm. (Magallanes Basin, Chile) make it feasible to predict deep-water channel architecture, specifically channel stacking patterns, from 1D information analogous to well data. This research performed near-wellbore modeling to generate multiple scenarios of channel stacking patterns constrained by machine-learning-derived probabilities using (i) conditional Monte Carlo simulation with soft probabilities per channel element within the measured section, choosing the highest probability for each element; (ii) conditional Monte Carlo simulation of channel stacking; (iii) template-based modeling; (iv) forward modeling with Markov transition probabilities without matching to thickness; and (v) conditional Monte Carlo simulation constrained to measured-section thickness. Machine learning workflows generate channel position probabilities (i.e., axis, off-axis, margin) within a measured section given the interpreted tops/bases of channel elements. These probabilities constitute the input for Monte Carlo simulations capturing channel element stacking patterns at the measured section locations. The most likely 2D channel stacking pattern scenarios define channel centerline points, and volumes of the individual channel elements can be generated by connecting them. Surface-based modeling offers a way to depict reservoirs of hydrocarbons, water, or low-enthalpy geothermal systems in which small-scale heterogeneity impacts fluid flow and therefore needs to be captured explicitly by bounding surfaces, improving our forecasts of resource exploitation.
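The Monte Carlo scenario generation described above can be sketched as follows; the element probabilities here are hypothetical stand-ins for the machine-learning-derived values, and approaches (i) and (ii) are shown:

```python
import random

POSITIONS = ("axis", "off-axis", "margin")

def most_likely_scenario(probs_per_element):
    """Approach (i): take the highest-probability position for each channel element."""
    return [max(POSITIONS, key=p.get) for p in probs_per_element]

def sample_scenario(probs_per_element, rng):
    """Approach (ii): Monte Carlo draw of one stacking scenario from the soft probabilities."""
    return [rng.choices(POSITIONS, weights=[p[k] for k in POSITIONS])[0]
            for p in probs_per_element]

# Hypothetical ML-derived probabilities for three channel elements in one section:
probs = [
    {"axis": 0.7, "off-axis": 0.2, "margin": 0.1},
    {"axis": 0.2, "off-axis": 0.5, "margin": 0.3},
    {"axis": 0.1, "off-axis": 0.3, "margin": 0.6},
]
scenario = sample_scenario(probs, random.Random(42))
```

Repeating the sampling call yields an ensemble of stacking scenarios whose spread reflects the uncertainty in the classifier's probabilities.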
Furthermore, predicting heterogeneity controlled by depositional architecture is critical for transport and storage capacity in CO2 reservoirs. The dataset provided and the advent of these flexible and accurate methods to depict the subsurface offer the opportunity to overcome the historical limitations of grid-based models and allow us to assess the multi-scale architecture that controls fluid flow. This research presents the results of modeling deep-water channels, including a 1D identification of architectural positions and a 2D arrangement of channel stacking patterns.

Item Open Access Nationwide decadal source apportionment of PM2.5 with a focus on iron (Colorado State University. Libraries, 2021) Niño, Lance, author; Kreidenweis, Sonia, advisor; Barnes, Elizabeth, committee member; Bond, Tami, committee member

Fine particulate matter (PM2.5) pollution has detrimental effects on human health, visibility, and the environment. One component of PM2.5, aerosol-phase iron, also has a multi-faceted effect on climate. In its largely insoluble iron oxide form, found in dust aerosol, it absorbs shortwave radiation. Emissions from anthropogenic processes, primarily industry and coal combustion, also contain iron, with most of that iron in soluble forms. Soluble iron is an important phytoplankton nutrient, and its atmospheric abundance is thus intertwined with carbon sequestration. To ascertain the various sources of PM2.5, as well as of aerosol-phase iron, across the contiguous United States, we used the Multilinear Engine (ME-2) implementation of positive matrix factorization (PMF) to obtain a 10-factor source apportionment solution from IMPROVE data for 2011-2019. The percentage of anthropogenic iron at various sites during that time span varied from nearly none in the inter-mountain West to over 50% over the eastern half of the US. The percentage of total iron detected that was classified as soluble reached over 20% at coastal sites but was only around 3% of the total iron emitted.
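PMF-style receptor modeling factorizes a samples-by-species concentration matrix into nonnegative source contributions and source profiles. The sketch below uses plain multiplicative-update NMF as a stand-in with hypothetical data; the actual study used the ME-2 implementation of PMF, which additionally weights each observation by its measurement uncertainty:

```python
import numpy as np

def nmf(X, n_factors, n_iter=1000, seed=0):
    """Minimal multiplicative-update NMF: X ≈ G @ F with nonnegative
    contributions G (samples x factors) and profiles F (factors x species).
    Illustrative stand-in only -- real receptor modeling (PMF/ME-2) also
    weights each observation by its measurement uncertainty."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_factors)) + 1e-3
    F = rng.random((n_factors, m)) + 1e-3
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Hypothetical two-source mixture: 6 samples x 4 chemical species.
F_true = np.array([[1.0, 0.5, 0.0, 0.1],
                   [0.0, 0.2, 1.0, 0.4]])
G_true = np.random.default_rng(1).random((6, 2))
X = G_true @ F_true
G, F = nmf(X, n_factors=2)
```

The recovered rows of F play the role of source profiles (e.g., a coal-combustion or dust signature), and the columns of G give each source's contribution per sample.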
Trends in PM2.5 component factors showed a pronounced decrease in PM2.5 from coal combustion and various industrial sources during the time period, but trends were mixed and not significant for other sources. Further research applying source apportionment to nationwide speciated datasets like IMPROVE is needed, and a more comprehensive global PM2.5 observation network would enable source apportionment on a global scale.

Item Open Access On the certainty framework for causal network discovery with application to tropical cyclone rapid intensification (Colorado State University. Libraries, 2022) DeCaria, Michael, author; van Leeuwen, Peter Jan, advisor; Chiu, Christine, committee member; Barnes, Elizabeth, committee member; Ebert-Uphoff, Imme, committee member

Causal network discovery using information-theoretic measures is a powerful tool for studying new physics in the earth sciences. To make this tool even more powerful, the certainty framework introduced by van Leeuwen et al. (2021) adds two features to the existing information-theoretic literature. The first feature is a novel measure of the relative strength of driving processes, created specifically for continuous variables. The second feature consists of three decompositions of the mutual information between a process and its drivers. These decompositions are 1) coupled influences from combinations of drivers, 2) information coming from a single driver coupled with a specific number of other drivers (mlinks), and 3) the total influence of each driver. To represent all the coupled influences, directed acyclic hypergraphs replace the standard directed acyclic graphs (DAGs). The present work furthers the interpretation of the certainty framework. The measurement of relative strength is described thermodynamically. Two-driver coupled influence is interpreted using DAGs, introducing the concept of separability of drivers' effects. Coupled influences are proved to be a type of interaction information.
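The connection to interaction information can be illustrated on a small discrete example. The sketch below computes II(X;Y;Z) = I(X;Y|Z) − I(X;Y) from a joint distribution; it is an illustration of interaction information itself, not of the certainty framework, and sign conventions differ across the literature:

```python
from collections import Counter
from math import log2

def entropy(joint, idx):
    """Shannon entropy (bits) of the marginal over the variable indices in idx."""
    marg = Counter()
    for outcome, p in joint.items():
        marg[tuple(outcome[i] for i in idx)] += p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def interaction_information(joint):
    """II(X;Y;Z) = I(X;Y|Z) - I(X;Y) for a joint distribution over (x, y, z).
    Positive: synergy; negative: redundancy (sign conventions vary)."""
    H = lambda idx: entropy(joint, idx)
    i_xy = H([0]) + H([1]) - H([0, 1])
    i_xy_given_z = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])
    return i_xy_given_z - i_xy

# XOR example: Z = X xor Y with independent fair bits X, Y -> 1 bit of pure synergy,
# the kind of coupled influence no single driver carries on its own.
joint_xor = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
```

For the XOR distribution, I(X;Y) = 0 but I(X;Y|Z) = 1 bit, so the information about Z is carried entirely by the drivers jointly.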
Also, the total influence is proved to be nonnegative, meaning the total influences constitute a nonnegative decomposition of mutual information. Furthermore, a new reference distribution for calculating self-certainty is introduced. Finally, the framework is generalized to variables that are continuous with one discrete mode, for which partial Shannon entropy is introduced. The framework was then applied to the rapid intensification of Hurricane Patricia (2015). The hourly change in maximum tangential windspeed was used as the target. The four drivers were the outflow layer (OL) maximum radial windspeed (uu), the boundary layer (BL) radial windspeed at the radius of maximum wind (RMW) (ul), the equivalent potential temperature at the BL RMW (θe), and the temperature difference between the OL and BL (ΔT). All variables were azimuthally averaged. The drivers explained 45.5% of the certainty. The certainty gain was 35.8% from θe, 24.5% from ΔT, 24.0% from uu, and 15.7% from ul. The total influence of θe came mostly from inseparable effects, while the total influence of uu came mostly from separable effects. Physical mechanisms, both those accepted in the current literature and those suggested by this application, are discussed.

Item Open Access On the observed and simulated responses of the extratropical atmosphere to surface thermal forcing (Colorado State University. Libraries, 2019) Wills, Samantha M., author; Thompson, David W. J., advisor; Alexander, Michael, committee member; Barnes, Elizabeth, committee member; Maloney, Eric, committee member; Venayagamoorthy, Subhas Karan, committee member

The ocean is an integral part of the climate system, and its closely coupled interactions with the atmosphere have wide-ranging impacts on large-scale and local patterns of climate and weather variability from one region of the globe to another.
Improvements in the resolution of satellite observations and numerical models over the past decade have led to a series of advances in understanding the role of the ocean in extratropical air-sea interaction. While the influence of the extratropical ocean can be relatively subtle and difficult to detect, recent studies have provided a growing body of evidence suggesting that the extratropical ocean has a potentially important influence on the atmospheric circulation on a wide variety of timescales. The aim of this thesis is to improve the current understanding of the role of the extratropical ocean in climate by 1) presenting new observational analyses of the relationships between midlatitude SST anomalies and the atmospheric circulation on subseasonal timescales and 2) providing a new, simplified framework for interpreting the atmospheric response to surface thermal forcing across the globe in an idealized global climate model. In the first theme of this thesis, observational analyses of daily-mean data are exploited to re-examine the evidence for midlatitude air-sea interaction over the Kuroshio-Oyashio Extension region, and important comparisons are drawn to a previous companion study over the Gulf Stream Extension region. The results indicate that during the boreal winter season, SST anomalies in both the Gulf Stream and Kuroshio-Oyashio Extension regions are associated with distinct spatial and temporal patterns of atmospheric variability that precede and follow peak amplitude in the SST field on daily-mean timescales. In particular, a very similar pattern of low pressure anomalies that develops over the warm SST anomalies is viewed as the most robust common aspect of the atmospheric "response" over both ocean basins.
The least common aspect of the "response" is characterized by robust high pressure anomalies that develop over the North Atlantic and have a seemingly unique relationship to positive lower-tropospheric temperature anomalies generated over the Gulf Stream Extension region. These results suggest that extratropical SST anomalies on subseasonal timescales are capable of forcing significant changes in the large-scale atmospheric circulation through the transfer of heat from the ocean to the atmosphere. Partially motivated by the results of the observational analyses, the second theme of this thesis presents a simplified model framework to critically assess the one-way influence of the ocean on the atmosphere at different locations across the globe. A series of steady-state and transient numerical experiments is designed to explore the atmospheric response to surface thermal forcing in an idealized "aquaplanet" configuration of the NCAR Community Atmosphere Model, Version 5.3. The results indicate that in each of the extratropical SST perturbation experiments, there is a consistent and robust steady-state atmospheric response (of similar sign and amplitude) to surface thermal forcing. The response is characterized by a hemispheric-scale, equivalent-barotropic pattern of atmospheric circulation anomalies reminiscent of the model's leading mode of internal variability and is seemingly independent of the latitudinal placement of the heat source. This result is explored further, and a possible explanation of the consistent steady-state atmospheric circulation response is discussed.

Item Open Access On the observed relationships between variability in sea surface temperatures and the atmospheric circulation in the Northern Hemisphere (Colorado State University. Libraries, 2015) Wills, Samantha M., author; Thompson, David W.
J., advisor; Barnes, Elizabeth, committee member; Venayagamoorthy, Subhas Karan, committee member

The advent of increasingly high-resolution satellite observations and numerical models has led to a series of advances in our understanding of the role of midlatitude sea surface temperature (SST) in climate variability, especially near western boundary currents (WBCs). For example, recent observational analyses suggest that ocean dynamics play a central role in driving interannual SST variability over the Kuroshio-Oyashio and Gulf Stream Extension regions, and recent numerical experiments suggest that variations in the SST field in the Kuroshio-Oyashio Extension region may have a much more pronounced influence on the atmospheric circulation than previously thought. We assess the observational support for (or against) a robust atmospheric response to midlatitude ocean variability in the Kuroshio-Oyashio and Gulf Stream Extension regions. We apply lead/lag analysis based on daily data to assess relationships between SST anomalies and the atmospheric circulation on transient timescales, building on previous studies that have applied a similar methodology to weekly data. In addition, we employ a novel approach to separate the regressions into an "atmospheric forcing" pattern and an "atmospheric response" pattern through spatial linear decomposition. The analysis reveals two distinct patterns associated with midlatitude atmosphere/ocean interaction in the vicinity of the major Northern Hemisphere WBCs: 1) a pattern that peaks 2-3 weeks before the SST anomalies (the "atmospheric forcing") and 2) a pattern that peaks after the SST anomalies (the "atmospheric response"). The latter pattern is independent of the former and is interpreted as the signature of SST variability in the atmospheric circulation.
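The core of a lead/lag analysis can be sketched with a simple cross-correlation over a range of leads and lags; the data here are synthetic, and the actual study uses lagged spatial regressions of circulation fields onto SST indices rather than a single scalar correlation:

```python
import numpy as np

def lead_lag_correlation(x, y, max_lag):
    """Correlation of x(t) with y(t + lag) for lag in [-max_lag, max_lag].
    Positive lags mean x leads y; negative lags mean y leads x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        out[lag] = float(np.mean(a * b))
    return out

# Synthetic daily indices: y is x shifted by 5 days plus noise, so the
# correlation should peak at lag = +5 (x leading y).
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 5) + 0.1 * rng.standard_normal(2000)
corrs = lead_lag_correlation(x, y, max_lag=10)
```

The asymmetry of such a lagged-correlation curve about lag zero is what distinguishes the "atmospheric forcing" (atmosphere leading) from the "atmospheric response" (SST leading) in analyses of this kind.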
Further analysis is required to determine whether the "atmospheric response" pattern truly reflects the response to the SST anomalies within the WBC regions.

Item Open Access Residential cookstove emissions: measurement and modeling from the lab and field (Colorado State University. Libraries, 2018) Bilsback, Kelsey, author; Volckens, John, advisor; Barnes, Elizabeth, committee member; Jathar, Shantanu, committee member; Marchese, Anthony, committee member; Pierce, Jeffrey, committee member

Emissions from solid-fuel cookstoves, which result from poorly controlled combustion, have been linked to indoor and outdoor air pollution, climate forcing, and human disease. The adverse effects of cookstoves have motivated the commitment of substantial time and resources toward the development of "improved" cookstoves that operate more efficiently and reduce emissions of harmful air pollutants. However, once disseminated to users, improved cookstoves often do not improve air quality to a level that substantially reduces health risks or negative environmental impacts. Several critical knowledge gaps related to the emissions and performance of "improved" cookstoves exist; addressing these gaps is the subject of this dissertation. Widely used laboratory testing protocols overestimate the ability of improved stoves to lower emissions. In this work, we develop and validate a novel laboratory test protocol, the Firepower Sweep Test. We find that the Firepower Sweep Test reproduces the range of PM2.5 and CO emissions observed in the field, including high-emissions events not typically observed under current laboratory protocols. We also find that firepower is modestly correlated with emissions, although this relationship depends on the stove-fuel combination. Our results justify incorporating multiple-firepower testing into laboratory-based protocols, but demonstrate that firepower alone cannot explain the observed variability in cookstove emissions.
Cookstoves emit many pollutants; however, most studies only measure fine particulate matter (PM2.5) and carbon monoxide (CO). In this work, we present an extensive inventory of air pollutants emitted from wood, charcoal, kerosene, and liquefied petroleum gas (LPG) cookstoves. One hundred twenty pollutants, including PM2.5, CO, organic matter, elemental carbon, inorganic ions, carbohydrates, ultrafine particles, volatile organic compounds, carbonyls, and polycyclic aromatic hydrocarbons, are included in this inventory. Our results demonstrate that, while most improved stoves tend to reduce PM2.5 and/or CO emissions, reductions in PM2.5 and/or CO emissions do not always correspond to reductions in other harmful pollutants. These findings highlight the need to characterize the full emissions profile of "improved" cookstove designs before they are disseminated to users. Accurate emissions data are critical inputs for models that aim to quantify the impacts of cookstoves on climate and health. Currently, model inputs are primarily derived from laboratory experiments that do not represent in-home use. In this work, we present a relatively inexpensive technique that uses a temperature measurement made at the combustion chamber outlet to estimate firepower. These firepower estimates have the potential to provide valuable information about the range of firepowers at which cookstoves are operated during real-world use. We also demonstrate that in-use firepower measurements from "improved" cookstoves can be combined with laboratory emissions data from the Firepower Sweep Test to estimate in-use emissions using linear regression models. We find that the model predictions are accurate enough to determine which International Organization for Standardization (ISO) emissions tier a given "improved" stove is likely to fall under.

Item Open Access Skillful long-range forecasts of North American heat waves from Pacific storm propagation (Colorado State University.
Libraries, 2017) Jenney, Andrea, author; Randall, David, advisor; Barnes, Elizabeth, committee member; Anderson, Georgiana Brooke, committee member

Extreme heat poses major threats to public health and the economy. Long-range predictions of heat waves offer little improvement over climatology despite continuing improvements in weather forecast models. Previous studies have hinted at possible relationships between tropical West Pacific convection and subsequent anomalous near-surface air temperature and rainfall over the North American Plains. We show that the later stages of propagation of the Boreal Summer Intraseasonal Oscillation (BSISO) can be used to skillfully hindcast a number of Great Plains heat waves between 1948 and 2016 with a three-month lead time. Possible teleconnection mechanisms are investigated, the most likely being related to a BSISO-induced reduction in Plains spring rainfall and subsequent land-atmosphere feedbacks. Our results are the first to demonstrate that a West Pacific weather event can be used to skillfully forecast US Plains heat waves with a lead time of three months.

Item Open Access The hydrometeorological sustainability of Miscanthus × giganteus as a biofuel crop in the US Midwest (Colorado State University. Libraries, 2016) Roy, Gavin R., author; Kummerow, Christian, advisor; Randall, David, committee member; Barnes, Elizabeth, committee member; Niemann, Jeffrey, committee member; Peters-Lidard, Christa, committee member

Miscanthus × giganteus (M. × giganteus) is a dense, 3-5 m tall, productive perennial grass that has been suggested as a replacement for corn as the principal source of biofuel for the US transportation industry. However, cultivating this water-intensive rhizomatous crop across the US Midwest may not be agronomically realistic if it is unable to survive years of low precipitation or extremely cold wintertime soil temperatures, both of which have previously killed experimental crops.
The goal of this research was to use a third-generation land surface model (LSM) to provide a new assessment of the hypothetical biogeophysical sustainability of a regime of M. × giganteus across the US Midwest, given that, for the first time, a robust and near-complete dataset over a large area of mature M. × giganteus was available for model validation. Modifications to the local hydrology and microclimate would necessarily occur in areas where M. × giganteus is adopted, but a switch to this biofuel crop can only occur where its intense growing-season water usage (up to 600 mm) and wintertime soil temperature requirements (no less than -6°C) are feasibly sustainable without irrigation. The first step was to interpret the observed turbulent and ecosystem flux behavior over an extant area of mature M. × giganteus and replicate this behavior within SiB3 (Simple Biosphere Model, version 3), a third-generation LSM. A new vegetation parameterization was developed in SiB3 using several previous empirical studies of M. × giganteus as a foundation. The simulation results were validated against a new, robust series of turbulent and ecosystem flux data taken over a four-hectare experimental crop of M. × giganteus in Champaign, IL, USA from 2011 to 2013. Wintertime mortality of M. × giganteus was subsequently assessed. It was proposed that areas with higher seasonal snowfall in the US Midwest may be favorable for M. × giganteus sustainability and expansion due to the significant insulating effect of snow cover. Observations of snow cover and of air and soil temperatures from small experimental plots of M. × giganteus in Illinois, Wisconsin, and the lake-effect snowbelt of southern Michigan were analyzed during several anomalously cold winters. While a large insulating effect was observed, shallow soil temperatures still dropped below laboratory mortality thresholds of M. × giganteus during periods of snow cover. Despite this, M.
× giganteus often survived these low temperatures, and it is hypothesized that the rate of soil temperature decrease might play a role in wintertime rhizome survival. The SiB3 domain was then expanded to cover the US Midwest, and areas defined as cropland were replaced with the developed M. × giganteus surface parameterization. A 14-year uncoupled simulation was carried out and compared to an unmodified simulation in order to gauge the first-order hydrometeorological sustainability of a large-scale M. × giganteus regime in this area in terms of simulated productivity, evapotranspiration, soil water content, and cold wintertime soil temperatures. It was found that M. × giganteus was biogeophysically sustainable and productive in a relatively small portion of the domain, in southern Indiana and Ohio, consistent with a small set of previous studies and ultimately in disagreement with the theory that M. × giganteus could reliably replace corn in areas such as Illinois and Iowa as a profitable and sustainable biofuel crop.

Item Open Access Using operational HMS smoke observations to gain insights on North American smoke transport and implications for air quality (Colorado State University. Libraries, 2016) Brey, Steven J., author; Fischer, Emily V., advisor; Barnes, Elizabeth, committee member; Pierce, Jeffrey, committee member; Rocca, Monique, committee member

Wildfires represent a major challenge for air quality managers, as they are large sources of particulate matter (PM) and ozone (O3) precursors, and they are highly dynamic and transient events. Smoke can be transported thousands of kilometers, deteriorating air quality over large regions. Under a warming climate, fire severity and frequency are likely to increase, exacerbating an existing problem. Using the National Environmental Satellite, Data and Information Service (NESDIS) Hazard Mapping System (HMS) smoke data for the U.S.
and Canada for the period 2007 to 2014, I examine a subset of fires confirmed to have produced sufficient smoke to warrant the initiation of a National Weather Service smoke forecast. The locations of these fires, combined with Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model forward trajectories, satellite-detected smoke plume data, and detailed land-cover data, are used to develop a climatology of the land cover, location, and seasonality of the smoke that impacts the atmospheric column above 10 U.S. regions. I examine the relative contribution of local versus long-range transport to the presence of smoke in different regions, as well as the prevalence of smoke generated by agricultural burning versus wildfires. This work also investigates the influence of smoke on O3 abundances over the contiguous U.S. Using co-located observations of particulate matter and the NESDIS HMS smoke data, I identify summertime days between 2005 and 2014 on which Environmental Protection Agency Air Quality System O3 monitors are influenced by smoke. I compare O3 mixing ratio distributions for smoke-free and smoke-impacted days for each monitor, while accounting for temperature. This analysis shows that (i) the mean O3 abundance measured on smoke-impacted days is higher than on smoke-free days at 20% of monitoring locations, and (ii) the magnitude of the difference between smoke-impacted and smoke-free mixing ratios varies by location and is sensitive to the minimum temperature allowed for smoke-free days. For each site, I present the percentage of days on which the maximum daily 8-hr average O3 mixing ratio (MDA8) exceeds 75 ppbv and smoke is present. When our most lenient temperature criteria are applied to smoke-free days, smoke-impacted O3 mixing ratios are most elevated in locations with the highest emissions of nitrogen oxides.
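The smoke-free versus smoke-impacted comparison, including a minimum-temperature screen on smoke-free days, can be sketched as follows; all values are hypothetical and the screen threshold is an assumption for illustration:

```python
import numpy as np

def compare_smoke_ozone(mda8, smoke_flag, temperature, t_min=20.0):
    """Mean MDA8 O3 on smoke-impacted days vs screened smoke-free days.
    Smoke-free days must have temperature >= t_min so the two samples cover
    comparable (warm) meteorology. Returns (mean_smoke, mean_clear)."""
    mda8 = np.asarray(mda8, dtype=float)
    smoke = np.asarray(smoke_flag, dtype=bool)
    warm = np.asarray(temperature, dtype=float) >= t_min
    return float(mda8[smoke].mean()), float(mda8[~smoke & warm].mean())

# Hypothetical record for one monitor (O3 in ppbv; daily max temperature in C):
mda8 = [55, 80, 62, 71, 48, 90, 66]
smoke = [False, True, False, True, False, True, False]
temp = [18, 30, 25, 29, 15, 31, 27]
mean_smoke, mean_clear = compare_smoke_ozone(mda8, smoke, temp)
```

Raising `t_min` shrinks and warms the smoke-free sample, which is exactly the sensitivity to the minimum-temperature criterion described in the analysis.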
The Northeast corridor, Dallas, Houston, Atlanta, Birmingham, and Kansas City stand out as having smoke present on 10-20% of the days when MDA8 O3 mixing ratios exceed 75 ppbv. Most U.S. cities maintain a similar proportion of smoke-impacted exceedance days when evaluated against the new MDA8 standard of 70 ppbv.