Browsing by Author "Carter, Ellison, committee member"
Now showing 1 - 15 of 15
Item Open Access
A direct-reading particle sizer (DRPS) with elemental composition analysis (Colorado State University. Libraries, 2023)
Sipich, James Robert, author; Yalin, Azer P., advisor; Volckens, John, committee member; L'Orange, Christian, committee member; Carter, Ellison, committee member

There is a lack of aerosol measurement technology capable of quantifying, in real time, the size, concentration, and composition of large inhalable particles with aerodynamic diameters larger than 20 µm. Aerosols of this size penetrate the upper respiratory system upon inhalation and present surface contamination hazards upon settling. Information on the composition of airborne particles is necessary to identify and control risks from exposure to potentially toxic materials, especially in the workplace. The objective of this work was to validate the performance of a prototype Direct-Reading Particle Sizer (DRPS) that counts and sizes particles via time-of-flight light scattering and determines single-particle elemental composition via laser-induced breakdown spectroscopy (LIBS). Counting, sizing, and spectral measurement efficiency were evaluated using test aerosols of multiple materials with diameters between 25 and 125 µm. Particle sizing results showed good agreement with optical microscopy images: the relationship between the median aerodynamic diameters measured by DRPS time-of-flight and by optical microscopy was linear (Deming regression slope of 0.998) and strongly correlated (r² > 0.999). Across all eight test aerosol types, the mean absolute difference between the median aerodynamic diameters measured by time-of-flight and by microscopy was 0.9 µm, with a mean difference in interquartile range of 1.9 µm. The prototype sensor uses an optical triggering system and a pulsed Nd:YAG laser to generate a microplasma and ablate falling particles.
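The Deming regression comparison described above, relating time-of-flight diameters to microscopy diameters, can be sketched in a few lines. The data, function name, and error-variance ratio below are illustrative assumptions, not values from the thesis.

```python
# Illustrative Deming regression slope (error-variance ratio delta = 1), the
# kind of method comparison used above. The diameters are made-up numbers.
import math

def deming_slope(x, y, delta=1.0):
    """Deming regression slope assuming error-variance ratio `delta`."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
    return (syy - delta * sxx
            + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)

# Hypothetical median aerodynamic diameters (um): microscopy vs. time-of-flight.
microscopy = [25.0, 38.0, 50.0, 63.0, 75.0, 88.0, 100.0, 125.0]
tof = [24.6, 38.5, 49.2, 63.8, 74.1, 88.9, 99.0, 126.0]
slope = deming_slope(microscopy, tof)
```

Unlike ordinary least squares, Deming regression accounts for measurement error in both variables, which is why it suits comparisons between two imperfect instruments.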
Particle composition is determined from the collected emission spectra using a real-time material classification algorithm. The accuracy of the composition determinations was validated with a set of 1,480 experimental spectra from four different aerosol test materials. We studied the effects of varying detection thresholds and found operating conditions with good agreement to truth values (F1 score ≥ 0.9). Details of the analysis method, including subtraction of the spectral contribution from the air plasma, are discussed. The time-of-flight aerodynamic diameter measurement and LIBS elemental analysis capabilities demonstrated by the DRPS provide a system capable of counting, sizing, and identifying the composition of large inhalable particles.

Item Open Access
Air pollutant source estimation from sensor networks (Colorado State University. Libraries, 2024)
Thakur, Tanmay, author; Lear, Kevin, advisor; Pezeshki, Ali, committee member; Carter, Ellison, committee member

A computationally efficient model for estimating unknown source parameters using the Gaussian plume model, linear least squares optimization, and gradient descent is presented in this work. This thesis discusses results for simulations of a two-dimensional field using advection-diffusion equations, underlining the benefits of plume solutions compared to other methods. The Gaussian plume spread of pollutant concentrations was modeled in Matlab to estimate the pollutant concentration at various wireless sensor locations. To set up the simulations, we created a field in Matlab with several pollutant-measuring sensors and one or two pollutant-emitting sources. The forward model estimated the concentrations measured at the sensors as the sources emit pollutants, with the pollutants programmed to spread according to the Gaussian plume equations.
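A forward Gaussian plume model of the kind described above can be sketched briefly. The dispersion coefficients here use a simple power law in downwind distance, which is an assumption for illustration, not the thesis's exact parameterization, and all numbers are invented.

```python
# Minimal Gaussian plume forward model sketch: ground-level concentration
# downwind of a point source, with ground reflection. Illustrative only.
import math

def plume_concentration(q, u, x, y, z=0.0, h=2.0):
    """Concentration (g/m^3) at downwind/crosswind position (x, y) and height z.
    q: emission rate (g/s), u: wind speed (m/s), h: effective source height (m)."""
    if x <= 0:
        return 0.0  # a receptor upwind of the source sees no plume in this model
    sigma_y = 0.08 * x ** 0.9   # assumed horizontal dispersion power law
    sigma_z = 0.06 * x ** 0.9   # assumed vertical dispersion power law
    cross = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vert = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
            + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * cross * vert

# Concentration falls off with crosswind offset at a fixed downwind distance.
c_center = plume_concentration(q=5.0, u=3.0, x=200.0, y=0.0)
c_offset = plume_concentration(q=5.0, u=3.0, x=200.0, y=50.0)
```

Because the predicted concentration is linear in the emission rate q, a forward model of this shape is what makes the least-squares rate estimation described below tractable.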
The initial work estimated the concentration of the pollutants with varying sensor noise, wind speed, and wind angle. The noise affects the sensors' readings, whereas the wind speed and wind angle affect the plume shape. The forward results are then applied to the inverse problem of determining the possible sources and pollutant emission rates in the presence of additive white Gaussian noise (AWGN). A cost over a vector of possible sources within a region of interest is minimized using L2 minimization and gradient descent. Initially, the input to the inverse model is a random guess for the source location coordinates. Initial values for the source emission rates are then calculated with the linear least squares method, since the sensor readings are proportional to the source emission rates. The accuracy of the model is assessed by comparing the predicted source locations with the true locations. The cost function reaches its minimum when the predicted sensor concentrations are close to the true concentration values, and the model continues minimizing until the cost remains nearly constant. The inverse model was first developed for a single source and later extended to two sources. Different configurations of the number of sources and the sensor locations were considered to evaluate accuracy. After verifying the inverse algorithm with synthetic data, we used it to estimate a pollution source from real air pollution data collected by PurpleAir sensors. For this problem, we extracted data from PurpleAir.com for four sensors around the 2018 Woolsey forest fire area in California and used it as input to the inverse model. The predictions placed the source close to the true high-intensity forest fire in that area.
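The two-step inversion described above, linear least squares for the emission rate (readings are proportional to it) plus gradient descent on the source coordinates, can be sketched as follows. The plume kernel, step sizes, and all numbers are illustrative stand-ins, not the thesis's exact model or data.

```python
# Sketch of source inversion: least-squares emission rate at a candidate
# location, then finite-difference gradient descent on the location itself.
import math

def kernel(sensor, source, u=3.0):
    """Per-unit-emission-rate plume response of a sensor to a ground source.
    Wind is assumed to blow along +x for simplicity (an assumption)."""
    dx, dy = sensor[0] - source[0], sensor[1] - source[1]
    if dx <= 0:
        return 0.0
    sy, sz = 0.08 * dx ** 0.9, 0.06 * dx ** 0.9   # assumed dispersion laws
    return math.exp(-dy ** 2 / (2 * sy ** 2)) / (math.pi * u * sy * sz)

def best_rate(sensors, readings, source):
    """Least-squares emission rate for a fixed candidate source location."""
    a = [kernel(s, source) for s in sensors]
    denom = sum(ai * ai for ai in a)
    return sum(ai * bi for ai, bi in zip(a, readings)) / denom if denom else 0.0

def cost(sensors, readings, source):
    """Sum of squared residuals between predicted and observed readings."""
    q = best_rate(sensors, readings, source)
    return sum((q * kernel(s, source) - b) ** 2 for s, b in zip(sensors, readings))

def descend(sensors, readings, guess, step=1.0, iters=400, h=0.5):
    """Normalized-step gradient descent on source coordinates; returns the
    best (lowest-cost) location visited."""
    best, best_cost = guess, cost(sensors, readings, guess)
    x, y = guess
    for _ in range(iters):
        gx = (cost(sensors, readings, (x + h, y))
              - cost(sensors, readings, (x - h, y))) / (2 * h)
        gy = (cost(sensors, readings, (x, y + h))
              - cost(sensors, readings, (x, y - h))) / (2 * h)
        norm = math.hypot(gx, gy) or 1.0
        x, y = x - step * gx / norm, y - step * gy / norm
        c = cost(sensors, readings, (x, y))
        if c < best_cost:
            best, best_cost = (x, y), c
    return best

# Synthetic truth: source at (0, 10) emitting 5 g/s; four downwind sensors.
sensors = [(100.0, 0.0), (150.0, 30.0), (200.0, -20.0), (250.0, 10.0)]
true_source, true_q = (0.0, 10.0), 5.0
readings = [true_q * kernel(s, true_source) for s in sensors]
est = descend(sensors, readings, guess=(20.0, -20.0))
```

Solving for the rate in closed form at each candidate location keeps the search low-dimensional: gradient descent only has to explore the two location coordinates.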
Later, we applied a neural network method to estimate the source parameters and compared its estimates with the results from the physics-based inverse model on the synthetic data. The neural network model uses sequential neural network techniques for training, testing, and predicting the source parameters. The model was trained on sensor concentration readings, source locations, wind speeds, wind angles, and the corresponding source emission rates, and was tested by comparing its predictions on a held-out test set with the true source locations and emission rates. The training and testing data were subjected to feature engineering to improve the model's accuracy, and different configurations of activation function, batch size, and number of epochs were explored. The neural network model obtained an accuracy above 90% in predicting the source emission rates and locations; this accuracy varied with the configuration, such as single source versus multiple sources, number of sensors, noise level, wind speed, and wind angle. In the presence of sensor noise, the neural network model was more accurate than the physical inverse model in predicting the source location, based on a comparison of R² scores for fitting the predicted source locations to the true locations. Further work on this model's accuracy will support the development of a real-time air quality wireless sensor network application with automatic pollutant source detection.

Item Open Access
Characterizing mold VOCs in residential structures impacted by flood (Colorado State University. Libraries, 2024)
Murphy, Molly, author; Schaeffer, Joshua, advisor; Magzamen, Sheryl, committee member; Carter, Ellison, committee member

Mold growth is a health concern for people re-entering their homes after a flooding event. Mold exposure can be hazardous, especially for people with asthma.
Mold produces volatile organic compounds (VOCs) as it grows, and those VOCs can be used to detect the presence of mold. While VOC profiles of mold have been constructed in laboratory settings, there has been little work with samples taken directly from the field. VOC samples were collected from the homes of 55 Houston residents; 33 of the homes had been flooded and 22 had not. The VOCs were analyzed by GC-MS and identified using a NIST library of mass spectra. The VOCs identified in flooded homes, and their concentrations, differed from those in non-flooded homes, and some of those VOCs have previously been associated with mold growth. However, the origin of those VOCs is still not clear. Further work should associate the VOCs found with the maximum water levels in the flooded homes and with the health data collected from the participants.

Item Open Access
Comparison of indoor air quality between building type in campus buildings (Colorado State University. Libraries, 2018)
Erlandson, Grant, author; Schaeffer, Joshua, advisor; Carter, Ellison, committee member; Magzamen, Sheryl, committee member; Reynolds, Stephen, committee member

The average American spends an estimated 90% of their time indoors on any given day, and rapid urbanization is leading to ever-increasing time spent in the built environment. Human exposure to the surrounding environment accounts for an estimated 90% of all disease, and the air we breathe represents a major component of that exposure, one that becomes increasingly relevant as more time is spent indoors. Many studies have set out to characterize and improve indoor air quality in settings from the workplace to schools; however, few have investigated higher education and its shift toward green, sustainable buildings.
The objective of this research was to evaluate the effects of building type and occupancy on indoor air quality in higher education buildings. We measured particulate matter, formaldehyde, carbon dioxide, and nitrogen oxides in LEED-certified, retrofitted, and conventional buildings on a college campus. For each building type, we conducted multi-zonal, 48-hour measurements during periods when the buildings were occupied and unoccupied. Statistically significant differences in two size fractions of particulate matter were observed between building types, and carbon dioxide and particulate matter concentrations were significantly higher during occupied sampling than during unoccupied sampling. Results from this study suggest that occupancy status has a larger impact on indoor air quality in campus buildings than building type.

Item Open Access
Development of a low-firepower continuous feed biomass combustor (Colorado State University. Libraries, 2020)
Rayno, Mars, author; Mizia, John, advisor; Windom, Bret, advisor; Carter, Ellison, committee member

Approximately 25% of the world's population lacks basic sanitation amenities, a lack that leads directly to the spread of contagious diseases and parasites. One method that can help mitigate these consequences is the thermal treatment of human feces in a combustion system. Colorado State University's Advanced Biomass Combustion Lab has been working on thermal treatment systems as part of the Bill and Melinda Gates Foundation Reinvent the Toilet Challenge for over seven years. The goal is to develop stand-alone treatment technologies that can process waste for less than 5 cents per person per day. Thermal processing is an attractive solution because it not only destroys pathogens but also significantly reduces the amount of mass that must be disposed of. Until recently, the focus has been on larger (2 kW) fecal gasifiers.
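Combustor firepower sets how long a continuously fed unit can run on the waste of a given number of users. A back-of-the-envelope sketch of that trade-off follows; the per-user fuel energy value is back-calculated from the 500 W sizing figures quoted in this listing and is an illustrative assumption, not a measured quantity.

```python
# Firepower vs. runtime arithmetic sketch. energy_per_user_j (7.2e5 J per user
# per day) is back-calculated from a 500 W unit serving 30 users for 12 hours;
# it is an assumed placeholder, not a thesis measurement.
def runtime_hours(n_users, firepower_w, energy_per_user_j=7.2e5):
    """Hours of continuous operation fueled by one day's waste from n_users."""
    return n_users * energy_per_user_j / firepower_w / 3600.0
```

Under this assumption, doubling the user count at fixed firepower doubles the continuous runtime, which is the motivation for a suite of scaled combustors rather than one large unit.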
This scale of combustor was designed to incinerate the solid waste of approximately 28 users per hour. The large number of users required for operation meant that either fuel would need to be stored before use or the combustor would be subject to frequent startups and shutdowns. During steady-state operation the gasifier emits low quantities of harmful pollutants, but during startup and shutdown the emissions are considerably higher, so there is a need to mitigate those transient events or reduce their frequency. One way to address this problem is to develop a suite of scaled combustors: a 500 W combustor, for example, would be able to run continuously for 12 hours with 30 users, or 24 hours with 60 users. This project investigated a scaled version of the 2 kW fecal combustor developed under the BMGF RTTC. Emission factors for this scaled device were generated for various firepowers, air-fuel ratios, and primary-to-secondary air ratios.

Item Open Access
Experimental evaluation of stack testing methods for accurate VOC measurement (Colorado State University. Libraries, 2019)
King, Brenna Allison, author; Olsen, Daniel, advisor; Quinn, Jason, committee member; Carter, Ellison, committee member

There are more than 1,400 natural gas compressor stations in the United States that use large-bore, two-stroke natural gas engines to transport natural gas through pipelines across the country. Because of the long operating lives of these engines, it is important that emissions be monitored and technology improved to ensure the engines meet current emissions standards. One emission class currently regulated by the Environmental Protection Agency (EPA) is volatile organic compounds (VOCs). VOCs are defined as non-methane, non-ethane hydrocarbons and have negative environmental effects, particularly through the formation of ozone and the fine particulates that create smog.
The combination of a gas chromatograph (GC) and a flame ionization detector (FID) can be used to measure methane, ethane, and VOCs. The use of a GC/FID to quantify hydrocarbon concentrations complies with EPA Method 18/25A, and in some cases this approach is mandated by regulatory bodies. A Fourier transform infrared spectrometer (FTIR) can also be used to measure VOCs in engine exhaust gas, following EPA Method 320; however, there is concern that Method 320 is not as accurate as Method 18/25A. The main objective of this research is to provide data and analysis with both measurement methods across different engine types, operating conditions, and fuel qualities to determine whether Method 320 is acceptable for VOC quantification. Exhaust gas was sampled from engines of different types and configurations: a GMV-4 lean-burn engine tested with open chamber spark ignition, pre-combustion chamber ignition, and high-pressure fuel injection with electronic fuel valves, and a Caterpillar 3304 rich-burn engine tested with a three-way catalyst. For the GMV-4 configurations, an ignition timing sweep was performed, retarding and advancing ignition timing from the nominal 18° aTDC. In addition, ethane and higher hydrocarbons were added separately to the natural gas fuel supply to determine the effects of fuel variability on emissions and engine performance. For the Caterpillar 3304 configuration, only an ignition timing sweep was performed. It was concluded that the HP 5890 Series II GC utilizing EPA Method 18/25A is the most accurate method for VOC quantification. Both the Gasmet and MKS FTIRs (EPA Method 320) overestimated total VOC concentration relative to the HP GC, by approximately 18 percent and 12 percent, respectively; however, in most cases the differences were within uncertainty bounds.
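One reason uncertainty bounds matter so much in this comparison: estimating VOC as a difference of larger measured quantities inflates relative uncertainty, because the absolute uncertainties add in quadrature while the remainder is small. A minimal propagation sketch with invented numbers:

```python
# Uncertainty propagation for a subtraction-based VOC estimate.
# All concentrations and uncertainties below are illustrative, not measured.
import math

def voc_by_difference(thc, ch4, c2h6):
    """Return (value, 1-sigma uncertainty) of VOC = THC - CH4 - C2H6.
    Each input is a (value, 1-sigma uncertainty) pair in ppm."""
    value = thc[0] - ch4[0] - c2h6[0]
    sigma = math.sqrt(thc[1] ** 2 + ch4[1] ** 2 + c2h6[1] ** 2)
    return value, sigma

# Hypothetical: 1500 +/- 30 ppm THC, 1350 +/- 27 ppm CH4, 100 +/- 5 ppm C2H6.
voc, sigma = voc_by_difference((1500.0, 30.0), (1350.0, 27.0), (100.0, 5.0))
relative = sigma / voc   # ~2% input uncertainties become ~80% on the remainder
```

This is the generic pitfall of differencing two large, similar numbers; a direct speciated measurement avoids it.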
A process commonly used for VOC quantification, which subtracts the methane and ethane measurements of the MKS FTIR (EPA Method 320) from the THC measurement of the Siemens 5-gas analyzer, is not accurate: it creates uncertainties of up to 193 percent and overestimates total VOC concentration by nearly 100 percent relative to the HP GC.

Item Open Access
Experimental investigation of automotive refueling system flow and emissions dynamics to support CFD development (Colorado State University. Libraries, 2019)
Stoker, T. McKay, author; Windom, Bret C., advisor; Jathar, Shantanu, committee member; Carter, Ellison, committee member

Government regulations restrict evaporative emissions during refueling to 0.20 grams per gallon of dispensed fuel, which requires virtually all of the vapors generated and displaced while refueling to be stored onboard the vehicle. The refueling phenomena of spit-back and early click-off are also important considerations in designing refueling systems. Spit-back is fuel bursting past the nozzle and into the environment, and early click-off is the pump shutoff mechanism triggering before the tank is full. Both are detrimental to customer satisfaction, and spit-back leads to failing government regulations. A new refueling system design is required for each vehicle as packaging requirements change, and each new design or redesign must be prototyped and tested to ensure government regulations and customer satisfaction criteria are met. Designs often need multiple iterations, costing money and time in prototype-based validation procedures. To conserve resources, it is desirable to create a computational fluid dynamics (CFD) tool to assist in design validation. To aid in creating such a model, controlled experiments were performed to inform and validate simulations; the simulations and experiments were performed on the same in-production refueling system.
Test data provided characterization of non-trivial boundary conditions, and refueling experiments gave points of comparison for CFD results, especially the tank pressure. Finally, collection of emissions data during refueling experiments provided insight into the travel of gasoline vapor in the refueling system. All of the information gathered provides a greater understanding of the refueling process and will aid the continued development of CFD models for refueling.

Item Open Access
Filtration efficiency and breathability of fabric masks and their dependence on fabric characteristics (Colorado State University. Libraries, 2022)
Fontenot, Jacob, author; Volckens, John, advisor; Carter, Ellison, committee member; Jathar, Shantanu, committee member

Throughout the COVID-19 pandemic, the demand for face coverings offering two-way protection increased significantly, resulting in widespread use of masks made from common fabrics (e.g., wool, cotton, and synthetic materials). However, the effectiveness of these fabric masks, which vary in material and design, is not well understood. This work investigates the performance of fabric masks, namely filtration efficiency and breathability, and its dependence on fabric characteristics. Filtration efficiency (FE) and flow resistance, a measure of mask breathability, were evaluated for 50 fabric masks, followed by individual layer testing (n = 70 total layers). The characteristics of the fabric layers, namely yarn diameter, fiber diameter, thread count, air permeability, porosity, cloth cover factor, infrared (IR) attenuation, and fabric thickness, were quantified in a laboratory setting. Fabric mask FEs were relatively low (< 50%) for submicron particles but increased with particle diameter. Approximately half of the masks achieved an FE meeting the Level 1 barrier standard specified in ASTM F3502-21.
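Multi-layer masks are often modeled as filters in series: layer penetrations multiply, while per-layer pressure drops add. A sketch of that treatment, using made-up layer values rather than the measured data:

```python
# Series-filter model of a multi-layer fabric mask. Layer FEs and pressure
# drops below are invented for illustration.
def stack_fe(layer_fes):
    """Combined filtration efficiency of layers in series (fractions, 0-1)."""
    penetration = 1.0
    for fe in layer_fes:
        penetration *= (1.0 - fe)   # each layer passes this fraction through
    return 1.0 - penetration

def stack_resistance(layer_dps):
    """Combined flow resistance: per-layer pressure drops simply add (Pa)."""
    return sum(layer_dps)

combined_fe = stack_fe([0.30, 0.30])          # two 30% layers combine to 51%
combined_dp = stack_resistance([50.0, 70.0])  # hypothetical pressure drops
```

The model also shows the design tension: adding layers raises FE with diminishing returns, while breathability worsens linearly.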
The FE and flow resistance of the component fabric layers were found to accurately predict the FE and flow resistance of the entire mask; therefore, fabric masks can generally be treated as filters in series. FE exhibited the strongest relationships with cloth cover factor, IR attenuation, air permeability, and the number of fabric layers; in contrast, little to no relationship was found between FE and yarn diameter, fiber diameter, thread count, porosity, fabric thickness, or fabric material (e.g., natural vs. synthetic). Results of this work should help inform the design of more effective fabric masks, which could prove especially useful for airborne infectious disease response efforts in resource-limited environments (i.e., where N95 technologies are not available) around the planet.

Item Open Access
Methods to detect and analyse volatile organic carbons using low cost real-time sensors (Colorado State University. Libraries, 2019)
Gupta, Vatsal, author; Carlson, Kenneth, advisor; Carter, Ellison, committee member; Ham, Jay, committee member

VOCs are ubiquitous and can be found not only as vapors in the air but also as soil gas and dissolved in groundwater. Vapor intrusion occurs when volatile organic compounds from contaminated soil or groundwater migrate upward toward the ground surface and into overlying buildings through gaps and cracks in the ground. This thesis details several statistical analysis techniques and applies them to data obtained from active, real-time soil gas and groundwater quality monitoring sensors placed around an abandoned oil and gas well in Longmont, Colorado, to determine whether VOCs were still being released from the site. The main goal of this study was to develop a more precise setup for real-time VOC release monitoring, to help regulate fracking sites more efficiently, and to analyze the collected data faster and more accurately.
Another goal of this study was to bridge the gap between laboratory sampling and real-time on-site testing. From the results, we were able to track the movement of the contaminant plume using real-time sensing and to identify most of the constituents of the contaminants from in-situ data according to EPA Method 18.

Item Open Access
Modeling energy systems using large data sets (Colorado State University. Libraries, 2024)
Duggan, Gerald P., author; Young, Peter, advisor; Zimmerle, Daniel, advisor; Bradley, Thomas, committee member; Carter, Ellison, committee member

Modeling and simulation are playing an increasingly important role in the sciences, and science is having a broader impact on policy definition at local, national, and global scales. It is therefore important that simulations which influence policy produce high-quality results. The veracity of these models depends on many factors, including the quality of the input data, the verification process for the simulations, and how result data are transformed into conclusions. Input data often come from multiple sources, and it is difficult to create a single, verified data set. This dissertation describes the challenges in creating a research-quality, verified, and aggregated data set, offers solutions to those challenges, and illustrates the process using three case studies of published modeling and simulation results from different application domains.

Item Open Access
Plate frame and bar plate evaporator model validation and volume minimization (Colorado State University. Libraries, 2019)
Simon, John Robert, III, author; Bandhauer, Todd M., advisor; Quinn, Jason, committee member; Carter, Ellison, committee member

Vapor compression chillers are the primary cooling technology for large building applications. Chillers have a large up-front capital cost, with the heat exchangers accounting for the majority of that cost.
Heat exchanger cost is a function of size, and therefore a reduction in heat exchanger size translates to a reduction in chiller capital cost. Few investigations focus on reducing heat exchanger size for vapor compression systems; this investigation therefore aims to decrease the size of chillers by predicting the minimum evaporator volume for a fixed performance. Only the evaporator was minimized, on the assumption that a similar process could be performed for the condenser in a future study. The study focused on a simple vapor compression cycle and implemented high-fidelity heat exchanger models for two compact heat exchanger types: brazed bar plate and gasketed plate and frame. These models accounted for variable fluid properties, phase change, and the complex geometries within the evaporator core. The models were developed based on liquid-coupled evaporators in an experimental vapor compression system and validated using collected data. The bar plate model was validated for sizing and pressure drop to mean absolute errors of 14.2% and 14.0%, respectively. The plate frame model was validated for sizing to a mean absolute error of 7.9%; however, due to measurement uncertainty, its pressure drop predictions could not be validated. The heat exchanger models were integrated into a simple vapor compression cycle model to determine the minimum required evaporator volume. Both heat exchanger types, in parallel and counterflow arrangements, were minimized by varying the ratio between core length and number of channels. For both heat exchanger types, the parallel flow arrangement resulted in a smaller volume than the counterflow arrangement, and the bar plate heat exchanger achieved an optimum volume 91% smaller than its plate frame counterpart.

Item Open Access
Prediction based scaling in a distributed stream processing cluster (Colorado State University.
Libraries, 2020)
Khurana, Kartik, author; Pallickara, Sangmi Lee, advisor; Pallickara, Shrideep, committee member; Carter, Ellison, committee member

The proliferation of IoT sensors and applications has enabled us to monitor and analyze scientific and social phenomena through continuously arriving, voluminous data. To provide real-time processing capabilities over streaming data, distributed stream processing engines (DSPEs) such as Apache Storm and Apache Flink have been widely deployed. These frameworks support computations over large-scale, high-frequency streaming data; however, their current on-demand auto-scaling features can result in inefficient resource utilization, which is closely tied to cost effectiveness in popular cloud-based computing environments. We propose ARSTREAM, an auto-scaling computing environment that manages fluctuating throughputs for data from sensor networks while ensuring efficient resource utilization. We have built an artificial neural network model for predicting data processing queue sizes that captures the non-linear relationships between data arrival rates, resource utilization, and the size of the data processing queue. If a bottleneck is predicted, ARSTREAM scales out the current cluster automatically for current jobs without halting them at the user level. In addition, ARSTREAM incorporates threshold-based re-balancing to minimize data loss during extreme peak traffic that could not be predicted by our model. Our empirical benchmarks show that ARSTREAM forecasts data processing queue sizes with an RMSE of 0.0429 when tested on real-time data.

Item Open Access
Sensing via signal analysis, analytics, and cyberbiometric patterns (Colorado State University.
Libraries, 2022)
Anderson, Wesley, author; Simske, Steve, advisor; Lear, Kevin, committee member; Volckens, John, committee member; Carter, Ellison, committee member

Internet-connected, or Internet of Things (IoT), sensor technologies have been increasingly incorporated into everyday technology and processes. Their functions are situationally dependent and include vital recordings such as electrocardiograms, gait analysis and step counting, fall detection, and environmental analysis. Environmental sensors, for instance, which exist across various technologies, are used to monitor numerous domains, including but not limited to pollution, water quality, and the presence of biota. Past research into IoT sensors has varied with the technology. Previous environmental gas sensor IoT research, for example, has focused on (i) the development of these sensors for increased sensitivity and longer lifetimes, (ii) the integration of these sensors into sensor arrays to combat cross-sensitivity and background interferences, and (iii) sensor network development, including communication between widely dispersed sensors in a large-scale environment. IoT inertial measurement units (IMUs), such as accelerometers and gyroscopes, have been researched for gait analysis, movement detection, and gesture recognition, often in relation to human-computer interfaces (HCI). Methods of IoT device feature-based pattern recognition for machine learning (ML) and artificial intelligence (AI) are frequently investigated as well, including primitive classification methods and deep learning techniques. This body of research gives insight into each of these topics individually, e.g., using a specific sensor technology to detect carbon monoxide in an indoor environment, or using accelerometer readings for gesture recognition. Less research has been performed on the systems aspects of the IoT sensors themselves.
However, an important part of attaining overall situational awareness is authenticating the surroundings, which in the case of IoT means the individual sensors, the humans interacting with the sensors, and other elements of the surroundings. There is a clear opportunity for systematic evaluation of the identity and performance of an IoT sensor or sensor array within a system intended for "full situational awareness". This awareness may include (i) non-invasive diagnostics (what is occurring inside the body), (ii) exposure analysis (what has entered the body through respiratory and eating/drinking pathways), and (iii) potential risk of exposure (what the body is exposed to environmentally). Simultaneously, the system can harbor security measures through the same situational assessment in the form of multiple levels of biometrics. Through the interconnective abilities of IoT sensors, it is possible to integrate these capabilities into one portable, hand-held system. The system exists within a "magic wand" used to collect the various data needed to assess the environment of the user, both inside and outside of their body. The device can also be used to authenticate the user, as well as the system components, to discover potential deception within the system. This research introduces levels of biometrics for various scenarios through the investigation of challenge-based biometrics, that is, biometrics based upon how the sensor, user, or subject of study responds to a challenge. These are applied to multiple facets of "situational awareness" for living beings, non-human beings, and non-living items or objects (which we have termed "abiometrics"). Gesture recognition for intent of sensing was investigated first, as a means of deliberately activating sensors and sensor arrays for situational awareness while providing a level of user authentication through biometrics.
Equine gait analysis was examined next: the level of injury in the lame limbs of a horse was quantitatively measured and classified using data from IoT sensors. Finally, a method of evaluating the identity and health of a sensor or sensor array was examined through different challenges to their environments.

Item Open Access
Spatial patterns and particle size distributions of atmospheric amines in northern Colorado (Colorado State University. Libraries, 2020)
Bangs, Evelyn J., author; Collett, Jeffrey L., Jr., advisor; Kreidenweis, Sonia, committee member; Carter, Ellison, committee member; Benedict, Katherine B., committee member

Emissions of reactive nitrogen along the Front Range in northern Colorado have implications for sensitive and protected environments such as Rocky Mountain National Park (RMNP). Nitrogen-containing pollutants exert a variety of adverse effects on the environment, including visibility impairment and excessive nitrogen input to sensitive alpine ecosystems. Northern Colorado hosts many urban, agricultural, and oil and natural gas production activities that emit various forms of reactive nitrogen to the atmosphere. Model simulations and past measurements demonstrate that these emissions can be transported long distances in gaseous and particulate forms. RMNP is particularly exposed to increased concentrations of reactive nitrogen pollutants during periods of easterly, upslope flow, when emissions along the Front Range and from sources even farther away (e.g., the western United States coast) are transported into the mountains. A detailed understanding of the composition of transported reactive nitrogen pollution is needed to predict environmental impacts within RMNP.
While emissions of ammonia and nitrogen oxides have received significant attention in previous studies, relatively little is known about organic nitrogen pollution, despite its ability to contribute to excess N deposition and to the formation of particulate matter (PM). Amines are organic analogs of ammonia in which one or more hydrogen atoms are replaced by organic functional groups. The animal agriculture industry is known to be a major source of some amines, while the beer and wine, sugar beet, leather manufacturing, and chemical manufacturing industries are also potentially important sources. Many of these industries are located along Colorado's Front Range, providing a good opportunity to study atmospheric amine chemistry. While the chemical lifetime of many gas-phase amines is relatively short (hours), they are strong bases that can compete with ammonia to form longer-lived particles that are transported over substantial distances. The work carried out in this study focused on assessing a spatial gradient of particulate amines between RMNP, Fort Collins, and Greeley. Greater concentrations of many amines were typically observed near source emissions in Greeley and/or Fort Collins, but significant concentrations of amines such as dimethylamine were also observed in the more remote environment at RMNP. To better understand amines, their chemistry, and their contribution to PM, size distributions of 16 different amines were analyzed from measurements with a Micro-Orifice Uniform Deposit Impactor (MOUDI). Of the 16 analyzed amines, nine were found above detection limits in Fort Collins during summer and five during winter. Particle size distributions of several organic acids and inorganic acid anions were also assessed to understand contributions from potential anion species involved in salt formation with amine cations.
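A cascade impactor such as the MOUDI assigns particle mass to discrete aerodynamic-size stages, and such data are conventionally summarized by normalizing each stage's mass by the logarithmic width of its size interval (dM/dlogDp). A minimal sketch of that normalization follows; the stage cut-points and masses are invented placeholders for illustration, not data from this study:

```python
import numpy as np

# Hypothetical impactor stage cut-points (aerodynamic diameter, um) and
# the amine mass collected on each stage (ng) -- placeholder values only.
cut_points = np.array([0.18, 0.32, 0.56, 1.0, 1.8, 3.2, 5.6, 10.0])
stage_mass = np.array([2.1, 4.8, 7.5, 5.2, 1.9, 1.1, 2.6])  # one per interval

def dm_dlogdp(cuts, mass):
    """Normalize stage mass by the log10 width of each size interval,
    giving the standard dM/dlogDp size-distribution representation."""
    dlogdp = np.diff(np.log10(cuts))
    return mass / dlogdp

dist = dm_dlogdp(cut_points, stage_mass)
# Geometric midpoints of the stages, used when plotting the distribution.
midpoints = np.sqrt(cut_points[:-1] * cut_points[1:])
```

Plotting dM/dlogDp against the geometric midpoints makes fine and coarse modes directly comparable even though the stages span unequal diameter ranges.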
Organic acid particle size distributions, particularly that of oxalate, overlap with the fine-mode size distributions of both ammonium and amine cations. The size distribution measurements also reveal important reactions between gaseous nitric acid and coarse soil particles that generate coarse-mode nitrate particles. Continued measurements of the size distributions and spatial gradients of amines and other species at more locations would help improve understanding of amine PM chemistry. This understanding would enable changes to better protect the health of living beings and sensitive ecosystems like those found in Rocky Mountain National Park.

Item Open Access Two-stroke lean burn natural gas engine oxidation catalyst degradation and regeneration via washing (Colorado State University. Libraries, 2018) Hackleman, Bryan, author; Olsen, Daniel, advisor; Bandhauer, Todd, committee member; Carter, Ellison, committee member

Lean-burn two-stroke engines are used extensively in stationary applications, including power generation, cogeneration, and gas compression. Natural gas is abundant, relatively inexpensive, and its combustion produces less CO2, particulate matter, and SOx than gasoline or diesel. However, the natural gas industry continues to be affected by increasingly strict emissions limits. One approach to complying with these limits is outfitting engines with an oxidation catalyst. Oxidation catalysts are proven to reduce hydrocarbon and carbon monoxide emissions, but surface poisoning due to lube oil carryover diminishes their performance. Zinc, phosphorus, and sulfur found in oil additives poison the catalyst surface and readily leach into an acidic environment. Two commercial catalyst modules were aged at a field site on a slipstream of a GMVH-12 engine until they no longer met the National Emission Standards for Hazardous Air Pollutants (NESHAP) formaldehyde limit.
The oxidation catalyst modules underwent a washing process consisting of immersion in caustic soda, neutral water, and acetic acid baths. The surface chemistry of samples was analyzed by scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDS) and by X-ray photoelectron spectroscopy (XPS). Catalytic performance testing was carried out on a slipstream of a laboratory Cummins QSK-19G engine using a five-gas analyzer and Fourier transform infrared (FTIR) spectroscopy. The washing process removed the majority of surface poisons and improved catalytic performance. The modules were then aged again until non-compliance with the emissions limits occurred. The modules were periodically tested for poison accumulation and catalytic performance to determine the rate of degradation after washing. These results were compared with those of a new catalyst to estimate the increase in lifespan from washing. The results of the experiments reported here should encourage the use of washing as a low-cost partial regeneration procedure for oxidation catalysts.
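The lifespan comparison described above amounts to extrapolating each module's measured degradation rate to the point of non-compliance. A minimal sketch of that arithmetic, assuming (for illustration only) a linear decay in conversion efficiency; the initial conversions, degradation rates, and compliance floor below are invented placeholders, not results from this work:

```python
def time_to_noncompliance(initial_conversion, degradation_rate, compliance_floor):
    """Hours of operation until conversion efficiency (%) decays linearly
    from its initial value to the minimum required for compliance.
    All inputs here are hypothetical, for illustration only."""
    if initial_conversion <= compliance_floor:
        return 0.0
    return (initial_conversion - compliance_floor) / degradation_rate

# Placeholder comparison: a washed module may recover less initial activity
# than a new one and may also degrade at a different rate.
new_life = time_to_noncompliance(93.0, 0.004, 76.0)     # hours for a new module
washed_life = time_to_noncompliance(88.0, 0.005, 76.0)  # hours for a washed module
extension_fraction = washed_life / new_life             # lifespan gained per wash
```

Under such a model, periodic post-wash performance measurements supply the degradation rate, and the ratio of washed to new lifetime quantifies the value of washing as a regeneration strategy.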